Although many tech companies and start-ups have touted AI-powered automated fact-checking services as a way to stem the rising tide of online misinformation, a new study led by researchers at Indiana University has found that AI fact-checking can, in some cases, actually increase belief in false headlines whose veracity the AI was unsure about, and decrease belief in true headlines it mislabeled as false.
The work also found that participants given the option to view headlines fact-checked by an AI powered by a large language model were significantly more likely to share both true and false news, but more likely to believe only the false headlines, not the true ones.
The study, “Fact-checking information from large language models can decrease headline discernment,” was published Dec. 4 in the Proceedings of the National Academy of Sciences. The first author is Matthew DeVerna, a Ph.D. student at the Indiana University Luddy School of Informatics, Computing and Engineering in Bloomington. The senior author is Filippo Menczer, IU Luddy Distinguished Professor and director of IU’s Observatory on Social Media.
“There is a lot of excitement about leveraging AI to scale up applications like fact-checking, as human fact-checkers cannot keep up with the volume of false or misleading claims spreading on social media, including content generated by AI,” DeVerna said. “However, our study highlights that when people interact with AI, unintended consequences can arise, highlighting how important it is to carefully consider how these tools are deployed.”
In the study, the IU scientists investigated the effect of fact-checking information generated by a popular large language model on belief in, and willingness to share, political news headlines in a preregistered, randomized controlled experiment.
Although the model accurately identified 90% of false headlines, the researchers found that this did not significantly improve participants’ ability to distinguish between true and false headlines, on average.
In contrast, the researchers found that human-generated fact checks did enhance users’ discernment of true headlines.
“Our findings highlight an important source of potential harm stemming from AI applications and underscore the critical need for policies to prevent or mitigate such unintended consequences,” said Menczer. “More research is needed to improve the accuracy of AI fact-checking as well as understand the interactions between humans and AI better.”
Additional contributors to the paper were Kai-Cheng Yang of Northeastern University and Harry Yaojun Yan of the Stanford Social Media Lab.
More information:
Matthew R. DeVerna et al., Fact-checking information from large language models can decrease headline discernment, Proceedings of the National Academy of Sciences (2024). DOI: 10.1073/pnas.2322823121