Rapid spread of election disinformation stokes alarm

Experts and political figures are sounding the alarm on the spread of election disinformation on social media, putting leading platforms under intense scrutiny in the final days of the presidential race.

From investigations into leading social media companies to prominent figures voicing concerns about false election claims, the past week saw increased discussion of the topic as some brace for postelection disinformation.

The persistent falsehoods about the 2020 election have made voters and election watchers more attuned to the potential for disinformation, though experts said recent advances in technology are making it harder for users to discern fake content.

“We are seeing new formats, new modalities of manipulation of some sort including … this use of generative AI [artificial intelligence], the use of these mock news websites to preach more fringe stories and, most importantly perhaps, the fact that now these campaigns span the entire media ecosystem online,” said Emilio Ferrara, professor of computer science and communication at the University of Southern California.

“And they are not just limited to perhaps one mainstream platform like we [saw] in 2020 or even in 2016,” said Ferrara, who co-authored a study that discovered a multiplatform network amplifying “conservative narratives” and former President Trump’s 2024 campaign.

False content has emerged online throughout this election cycle, often in the form of AI-generated deepfakes. The images have sparked a flurry of warnings from lawmakers and strategists about attempts to influence the race’s outcome or sow chaos and distrust in the electoral process.

Just last week, a video falsely depicting individuals claiming to be from Haiti and voting illegally in multiple Georgia counties circulated across social media, prompting Georgia Secretary of State Brad Raffensperger (R) to ask X and other social platforms to remove the content.

Intelligence agencies later determined Russian influence actors were behind the video.

Thom Shanker, director of the Project for Media and National Security at George Washington University, noted the fake content used in earlier cycles was “sort of clumsy and obvious,” unlike newer, AI-generated content.

“Unless you really are applying attention and concentration and media literacy, a casual viewer would say, ‘Well, that certainly looks real to me,’” he said, adding, “And of course, they are spreading at internet speeds.”

Over the weekend, the FBI said it is “aware” of two fake videos claiming to be from the agency about the election. Attempting to deceive the public “undermines our democratic process and aims to erode trust in the electoral system,” the agency said.

News outlets are also trying to debunk fake content before it reaches large audiences.

A video recently circulated showing a fake CBS News banner claiming the FBI warned citizens “to vote with caution due to high terrorist threat level.” CBS said the screenshot “was manipulated with a fabricated banner that never aired on any CBS News platform.”

Another screenshot showing a CNN “race alert” with Vice President Harris ahead of Trump in Texas reportedly garnered millions of views over the weekend before the network confirmed the image was “completely fabricated and manipulated.”

In one since-deleted post of the fake CNN screenshot, a user wrote, “Hey Texas, looks like they are stealing your election.”

False content like this can go unchecked for longer periods of time because it is often posted into an “echo chamber,” where algorithms show it only to users with similar interests, said Sandra Matz, a professor at Columbia Business School.

“It’s not necessarily that there’s more misinformation, it’s also that it’s hidden,” Matz said, warning it is not possible for experts to “easily access the full range of content that is shown to different people.”

Social media companies have faced even more scrutiny after four news outlets last week released separate investigations into X, YouTube and Meta, the parent company of Facebook and Instagram. All of the probes found that those major companies failed to stop some election misinformation before it went live.

Since Musk purchased X, he and the company have faced repeated criticism for scaling back content moderation features and reinstating several conspiracy theorists’ accounts.

Concerns over disinformation on the platform increased earlier this year when the billionaire became a vocal surrogate for Trump and ramped up his sharing of false or misleading claims.

The Center for Countering Digital Hate (CCDH), an organization tracking online hate speech and misinformation, released a report Monday finding that Musk’s political posts have garnered 17.1 billion views since he endorsed Trump, more than twice as many views as the U.S. “political campaigning ads” recorded by X over the same period.

Musk’s X Corp. filed a lawsuit against the CCDH last year.

“It used to be that Twitter at least TRIED to police disinformation. Now its owner TRAFFICS in it, all as he invests hundreds of millions of dollars to elect Trump—and make himself a power-wielding oligarch,” Democratic strategist David Axelrod wrote Monday in a post on X.

Former Rep. Liz Cheney (R-Wyo.), one of the most vocal GOP critics of Trump, predicted last week that X will be a “major channel” for those claiming the election was stolen and called the platform a “cesspool” under Musk’s leadership.

An X spokesperson sent The Hill a list of actions the platform is taking to prevent false or fake claims from spreading, including its “Community Notes” feature, which is intended to fact-check false or misleading posts.

ProPublica published a report Thursday finding eight “deceptive advertising networks” placed more than 160,000 election and social issue ads across more than 340 Facebook pages. Meta removed some of the ads after initially approving them but did not catch some with similar or identical content, the report stated.

Forbes also reported that Facebook allowed hundreds of ads falsely claiming the election may be rigged or postponed to run on its platform.

“We welcome investigation into this scam activity, which includes deceptive ads,” Meta spokesperson Ryan Daniels told The Hill. “This is a highly adversarial space. We continuously update our enforcement systems to respond to evolving scammer behavior and review and remove any ads that violate our policies.”

Facebook has faced intense scrutiny in recent election cycles over its handling of political misinformation. In response, Meta has invested millions in its election fact-checking and media literacy initiatives and prohibits ads that discourage users from voting, question the election’s legitimacy or feature premature victory claims.

Daniels said Meta has about 40,000 people globally working on safety and security, more than the company had in 2020.

Meta has “grown our fact checking program to more than 100 independent partners, and taken down over 200 covert coordinated influence operations,” Daniels said. “Our integrity efforts continue to lead the industry, and with each election we incorporate the lessons we’ve learned to help stay ahead of emerging threats.”

A separate report published last week by The New York Times and the progressive watchdog Media Matters for America claimed that YouTube in June 2023 “decided to stop fighting” the false claim that President Biden stole the 2020 election.

Since then, the report said, the platform has allowed more than 280 videos containing election misinformation from an estimated 30 conservative channels.

“The ability to openly debate political ideas, even those that are controversial, is an important value—especially in the midst of election season,” a YouTube spokesperson said in response to the report. “And when it comes to what content can monetize, we strike a balance between allowing creators to express differing perspectives and upholding the higher bar that we and our advertisers have for where ads run.”

YouTube said the platform has a multilayered approach to connect users with authoritative news and information while ensuring a variety of viewpoints are represented.

This includes policies against certain election misinformation, defined by YouTube as content “that can cause real-world harm, like certain types of technically manipulated content, and content interfering with democratic processes.”

Sacha Haworth, the executive director of the Tech Oversight Project, a nonprofit advocating for reining in tech giants’ market power, said she was not surprised to see the flurry of reports.

“We as a public, as lawmakers, as policy makers, must understand that this has to be the last time we allow them to do this to our elections,” Haworth said. “They are never going to self-regulate.”
