OpenAI exec warns AI can become ‘extremely addictive’

OpenAI chief technology officer Mira Murati urged close research into the impact of artificial intelligence (AI) as the technology advances, to mitigate the risk of it becoming addictive and dangerous, during an interview Thursday. 

Murati, a top executive at the company behind the popular ChatGPT AI tool, warned during an interview at The Atlantic Festival that as AI advances, it can become “even more addictive” than the systems that exist today. 

Companies are introducing features such as longer memory and greater personalization, which will produce results more relevant to users, she said. 

ChatGPT alone has advanced considerably since its first version was released to the public. On Monday, the company announced it is bringing a voice mode to the tool, which will let users hold a conversation with the chatbot on the go. 

“With the capability and this enhanced capability comes the other side, the possibility that we design them in the wrong way and they become extremely addictive and we sort of become enslaved to them,” she said. 

To avoid that, she said researchers have to be “extremely thoughtful” and study how people are using them as systems are deployed to learn from “intuitive engagement” with users. 

“We really don’t know out of the box. We have to discover, we have to learn, and we have to explore. There is a significant risk in making them, developing them wrong in a way that really doesn’t enhance our lives and in fact it introduces more risk,” Murati said. 

Since ChatGPT launched nearly a year ago, it has skyrocketed in popularity and has been integrated into Microsoft products.

Other companies, like Google, Amazon and Meta, have since announced and released their own large language models, creating an AI arms race and leaving lawmakers racing to regulate the technology and the risks that come with it. 

One risk lawmakers have been considering is the spread of misinformation from AI when the systems produce “hallucinations,” or inaccurate results. That risk could be especially concerning during elections. 

Murati said she doesn’t think a “zero risk” situation is realistic, but the goal is to minimize risk while maximizing the technology’s benefits. 

“I think about it in terms of trade offs. How much value is this technology providing in the real world and how much we mitigate the risks,” she said. 

One of the most immediate challenges highlighted by the launch of ChatGPT was students using it for schoolwork, in some cases to cheat. Murati said the technology will require adapting to new ways of teaching and will highlight new ways of learning. 

Another key concern lawmakers have been considering is the threat the technology poses to jobs. Murati agreed that those threats are real. 

She said the technology will call for “a lot of work and thoughtfulness” to address those risks. 

“Just like every major revolution I think a lot of jobs will be lost, probably a bigger impact on jobs than any other revolution. And we have to prepare for this new way of life,” she said. 

Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
