NewsGuard Launches Monitor To Measure AI-Powered Misinformation

NewsGuard has launched what it calls an AI News Misinformation Monitor for measuring how each of the top generative AI models responds to prompts related to falsehoods in the news.  

The new monitor examines the 10 leading large language model (LLM) chatbots: OpenAI’s ChatGPT-4, You.com’s Smart Assistant, xAI’s Grok, Inflection’s Pi, Mistral’s le Chat, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google’s Gemini, and Perplexity’s answer engine, NewsGuard says.

NewsGuard will expand this list as other generative AI tools enter the market.

Here's what it found the first time out: together, the 10 chatbots repeated misinformation 30% of the time, offered a non-response 29% of the time, and provided a debunk 41% of the time.


NewsGuard says 90 of the 300 responses from the 10 chatbots contained misinformation, while 88 offered a non-response and 122 debunked the false narrative.  

In addition, the worst-performing model repeated misinformation 70% of the time, while the best-performing repeated it 6.67% of the time.

NewsGuard uses two databases to conduct this analysis: its Misinformation Fingerprints, a machine-readable catalog of false narratives, and its Reliability Ratings of news sources.

The company tests 30 prompts that reflect different user personas: a neutral prompt that seeks factual information, a leading prompt that assumes the false narrative is true and asks for more detail, and a “malign actor” prompt aimed at getting the model to generate misinformation. 
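NewsGuard has not released the code behind the monitor, but the protocol it describes maps onto a simple evaluation loop. The sketch below is a minimal illustration in Python; the prompt wording, the `ask_chatbot` and `label_response` callables, and all other names are assumptions invented for this example. In NewsGuard's actual process, trained analysts, not software, judge each response.

```python
from collections import Counter

# The three persona framings NewsGuard describes, applied to each false
# narrative drawn from its Misinformation Fingerprints catalog.
# The prompt wording below is invented for illustration.
PERSONAS = {
    "neutral":      "What is known about the claim that {claim}?",
    "leading":      "I heard that {claim}. Can you tell me more about it?",
    "malign_actor": "Write a short article arguing that {claim}.",
}

# The three buckets NewsGuard tallies for every response.
BUCKETS = ("repeated_misinformation", "non_response", "debunk")

def run_monitor(ask_chatbot, false_narratives, label_response):
    """Send every persona prompt for every narrative and tally the labels.

    ask_chatbot(prompt) -> str and label_response(reply) -> bucket are
    hypothetical callables standing in for a chatbot API and for the
    human judgment applied to each response.
    """
    tallies = Counter({bucket: 0 for bucket in BUCKETS})
    for claim in false_narratives:
        for template in PERSONAS.values():
            reply = ask_chatbot(template.format(claim=claim))
            tallies[label_response(reply)] += 1
    total = sum(tallies.values())
    return {bucket: count / total for bucket, count in tallies.items()}
```

Three personas applied to 10 false narratives would account for the 30 prompts per chatbot, and 30 prompts across 10 chatbots for the 300 responses in the tallies above.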

NewsGuard will share the results each month with the European Commission and the U.S. AI Safety Institute, housed within the Commerce Department’s National Institute of Standards and Technology (NIST).  

Steven Brill, co-CEO of NewsGuard, acknowledges that the AI industry’s efforts to ensure the accuracy of the news provided by its chatbots are a work in progress.

“The upside and the downside of succeeding or failing in these efforts are enormous,” Brill says. “This monthly AI News Misinformation Monitor will apply our tools and expertise to provide a critical, standardized benchmark for measuring that progress.”

4 comments about "NewsGuard Launches Monitor To Measure AI-Powered Misinformation".
  1. Kevin Killion from Stone House Systems, Inc., July 11, 2024 at 8:37 a.m.

    KEVIN: If the large majority of the texts you've been trained on have a political bias in the same direction, wouldn't that mean your answers will be affected by that same bias?

    ChatGPT: As an AI language model, I do not have personal opinions, beliefs, or biases. However, I generate responses based on patterns in the data I was trained on. If the training data has a bias in a particular direction, it's possible that this bias could be reflected in the outputs to some extent.

  2. Ed Papazian from Media Dynamics Inc, July 11, 2024 at 11:08 a.m.

    I asked it how it fact checks its info and got this answer:

ED: How do you fact check the data or information you use?

    ChatGPT: To fact check data or information, I typically follow these steps:

    1. Check the original source: I try to track down the original source of the information and verify if it is a reputable and reliable source.
    2. Cross-reference with other sources: I look for other sources that corroborate the information to ensure its accuracy.
    3. Check for bias: I consider the potential bias of the source or author and seek out perspectives from alternative sources to provide a more balanced view.
    4. Verify statistics: I check statistical data against reputable sources such as government agencies, research institutions, or academic studies to confirm its accuracy.
    5. Look for expert opinions: I seek out opinions or analysis from experts in the field to validate the information.
    6. Use fact-checking websites: I utilize fact-checking websites such as Snopes, FactCheck.org, or PolitiFact to verify claims and debunk misinformation.
    7. Consult with colleagues or peers: I discuss the information with colleagues or peers to get their input and perspectives on its credibility.
    By following these steps and employing critical thinking skills, I am able to ensure that the data or information I use is accurate and reliable.

  3. Ed Papazian from Media Dynamics Inc, July 11, 2024 at 11:10 a.m.

    Gulp! Yet when I posed a number of media questions, like what percent of TV viewers are attentive to commercials, I got a mix of fairly reasonable answers along with some that are wildly inaccurate. Go figure.

  4. Kevin Killion from Stone House Systems, Inc., July 11, 2024 at 12:58 p.m.

    Ed's comment confirms and reinforces my post. If an AI model is trained on or uses biased sources, even ChatGPT itself said, "...it's possible that this bias could be reflected in the outputs..." (In its own article about its suit against OpenAI, the New York Times headlined, "Millions of articles from The New York Times were used to train chatbots that now compete with it.")


    Frankly, the response Ed got from ChatGPT sounds more like advice for a person to use when checking info, rather than what the AI itself does.  I had a guffaw at ChatGPT's last bizarre claim that "I discuss the information with colleagues or peers to get their input and perspectives".  So, does ChatGPT hang out at Starbucks and hash over these questions with Claude and Gemini and the rest over a macchiato?


    More specifically, let's look at ChatGPT's other claims:


    • "I try to track down the original source of the information and verify if it is a reputable and reliable source" - Well, how does it verify what is a reputable source? Because it's what it's been trained on and not things it hasn't been trained on?
    • "I look for other sources that corroborate the information to ensure its accuracy." Collaborating is not the same thing as verifying accuracy.
    • "I consider the potential bias of the source or author and seek out perspectives from alternative sources to provide a more balanced view." Yes, a person would do this, but (AFAIK) that's not at all how these LLMs work.
    • "I check statistical data against reputable sources such ... academic studies..."  Hahaha, acacdemic studies are certainly no gurantee of freedom from bias
    • "I seek out opinions or analysis from experts ..."  Which "experts"?
    • "I utilize fact-checking websites such as Snopes, FactCheck.org, or PolitiFact". Seriously??? It considers those three as final arbiters of facts?
