AI tracker charts growth in deepfakes ahead of election

The German Marshall Fund has launched a tool to track AI-generated deepfakes targeting elections as they circulate in the U.S. and across the globe.

The tool arrives in a historic year for elections. More than half the globe has held or will hold elections in 2024, a milestone that coincides with a surge in AI-created audio, video and images, much of it pushing false narratives and information about candidates.

The U.S. has already seen its fair share of AI-generated content targeting the election, from fake audio purporting to be President Biden urging New Hampshire voters to skip the primary to former President Trump posting fake images of Taylor Swift that falsely suggested the singer endorsed him.

Dubbed “Spitting Images,” the project charts only deepfakes that have gained significant traction or been debunked by journalists.

Lindsay Gorman, the project’s lead, hopes the tool will provide a fact check for voters and also surface trends that can help policymakers weigh how to regulate the use of artificial intelligence in elections.

“We wanted to understand how is [AI] actually being deployed in the real world over this historic election year. And for policymakers that are thinking through potential legislation or potential guardrails on artificial intelligence, particularly around political AI, should they have transparency requirements when it comes to politicians and elections? Where should they be focusing their efforts?” Gorman said.

The tracker, drawing on data assembled over the last year, has charted 133 deepfakes across more than 30 countries.

Gorman said a few trends have clearly emerged, including a reliance on audio deepfakes, which accounted for almost 70 percent of tracked cases.

“The fact is that the current state of the technology is just not that convincing when it comes to images and videos, but it is when it comes to audio. It’s very difficult to tell when something’s been AI-generated,” she said.

AI-generated audio has already played a role in a major election in Slovakia, where fake audio purported to capture one of the candidates, Michal Simecka, discussing how to rig the election and planning to raise the price of beer if elected.

The fake audio emerged during the country’s 48-hour moratorium on campaigning, making it difficult to debunk.

It’s difficult to know just how much of an impact the fake audio had, though Simecka ultimately lost the election.

In some cases, though, the believability of the AI-generated content may not matter.

Elon Musk, the owner of the social media platform X, shared an image of Vice President Harris wearing a communist uniform. Though the image is clearly fake, Musk posted it in response to a post from Harris calling Trump a dictator, writing, “Can you believe she wears that outfit?”

In another post, Musk amplified AI-generated audio of Harris purporting to be a campaign ad in which she thanks Biden for putting his “senility” on display during the debate and proclaims herself the “ultimate diversity hire.”

Gorman said that while there must be room for satire and free speech, even content that is obviously fake plays a role in shaping discourse.

“They paint those ideas in the voter’s mind, even if they know that, of course, Kamala Harris is not going to get up there and say she was a DEI hire,” she said.

“You can almost wear people down with the same messages over and over again. I think ads like this are one way to keep getting those messages across and sort of plant a seed of doubt, even if you know it’s not authentic.”

Trump has also accused real images of being manipulated by AI, suggesting the crowd at an early Harris rally was not as large as it appeared. Journalists and other attendees pushed back against the false claim by sharing their own photos showing robust crowds.

Trump’s own sharing of the AI-generated Swift photo also appeared to backfire, with the singer citing disinformation about how she planned to vote when she endorsed Harris last month.

The U.S. intelligence community has already issued warnings about the role of AI in elections, particularly as it is being used and distributed by foreign adversaries.

An official with the Office of the Director of National Intelligence (ODNI) said last month that Russia is the top creator of such content, most of it aimed at pushing divisive issues or creating false narratives about U.S. political figures.

Iran, meanwhile, has used the tools primarily to create fake news websites, including translating content into Spanish to spread disinformation.

The official noted that not all false content shared in recent weeks has been created using AI, pointing to a fake video of a woman claiming to have been hit by Harris in a hit-and-run as an example of a “staged video.”
