OpenAI sees continued attempts to use AI models for election interference  

OpenAI has seen continued attempts by cybercriminals to use its artificial intelligence (AI) models to create fake content aimed at interfering with this year’s elections, the ChatGPT maker said in a new report.  

According to OpenAI’s report, released Wednesday, the AI developer discovered and disrupted more than 20 operations this year that tried to influence the election with the company’s technology, including its popular tool, ChatGPT. 

These deceptive networks attempted to use OpenAI’s models to generate a variety of fake content, some of which was intended to be shared by fake personas on social media, the report stated.  

OpenAI’s models were also used to generate articles for websites, analyze and reply to social media posts, or debug malware, the company said.  

This activity was detected in part by OpenAI’s own AI tools, which often caught it in a matter of minutes, according to the report.  

While threat actors may be “experimenting” with OpenAI models, the company emphasized that their reach has been limited.  

“Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences,” the report said.  

The report laid out several examples of the misuse it observed in recent months.  

In early July, for example, the company said it banned several ChatGPT accounts from Rwanda after discovering they were being used to generate comments about the country’s elections.  

And in August, OpenAI disrupted a “covert Iranian influence operation” which produced social media comments and long-form articles about the U.S. election, conflict in the Middle East, Venezuelan politics and Scottish independence.  

Most of these posts received little engagement, and there were no indications that they were shared widely across social media sites, the report noted.  

Fears about how the elections could be compromised have ramped up amid a flurry of recent reports about foreign adversaries’ attempts to meddle with the U.S. presidential election this November. 

Last month, federal intelligence officials warned that foreign adversaries are using AI to enhance ongoing disinformation efforts. Countries involved in this misuse include Russia, Iran and China, the officials said.  

Microsoft also released a report that found Russian influence operations were behind a viral video falsely accusing Vice President Harris of a hit-and-run, while the Justice Department seized more than 30 web domains used by Russia for covert campaigns. 

Former President Trump’s campaign, meanwhile, was hacked in June by Iran, which sought to share the stolen information with President Biden’s campaign, according to the FBI.  
