
Are AI models doomed to always hallucinate?

Image Credits: Ole_CNX / Getty Images

Large language models (LLMs) like OpenAI’s ChatGPT all suffer from the same problem: they make stuff up.

The mistakes range from strange and innocuous — like claiming that the Golden Gate Bridge was transported across Egypt in 2016 — to highly problematic, even dangerous.

A mayor in Australia recently threatened to sue OpenAI because ChatGPT mistakenly claimed he pleaded guilty in a major bribery scandal. Researchers have found that LLM hallucinations can be exploited to distribute malicious code packages to unsuspecting software developers. And LLMs frequently give bad mental health and medical advice, such as the claim that wine consumption can “prevent cancer.”

This tendency to invent “facts” is a phenomenon known as hallucination, and it happens because of the way today’s LLMs — and all generative AI models, for that matter — are developed and trained.

Training models

Generative AI models have no real intelligence — they’re statistical systems that predict words, images, speech, music or other data. Fed an enormous number of examples, usually sourced from the public web, AI models learn how likely data is to occur based on patterns, including the context of any surrounding data.

For example, given a typical email ending in the fragment “Looking forward…”, an LLM might complete it with “… to hearing back” — following the pattern of the countless emails it’s been trained on. It doesn’t mean the LLM is looking forward to anything.
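To make that concrete, here is a minimal sketch of the idea in Python, assuming a tiny, hand-written probability table in place of a real trained model (which would learn these numbers from web-scale text):

```python
# A minimal sketch of next-token prediction. The probability table below is
# invented for illustration; a real LLM learns it from enormous training data.
next_token_probs = {
    ("looking", "forward"): {"to": 0.95, "the": 0.03, "a": 0.02},
    ("forward", "to"): {"hearing": 0.60, "seeing": 0.25, "meeting": 0.15},
}

def continue_text(tokens, steps=2):
    """Greedily append the most probable next token at each step."""
    tokens = list(tokens)
    for _ in range(steps):
        context = tuple(tokens[-2:])           # condition on the last two words
        probs = next_token_probs.get(context)
        if probs is None:                      # unseen context: nothing to predict
            break
        tokens.append(max(probs, key=probs.get))
    return " ".join(tokens)

print(continue_text(["looking", "forward"]))   # -> "looking forward to hearing"
```

The model completes the email fragment because that continuation was the most statistically likely, not because it understands or intends anything.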

“The current framework of training LLMs involves concealing, or ‘masking,’ previous words for context” and having the model predict which word should follow this context, Sebastian Berns, a Ph.D. researcher at Queen Mary University of London, told TechCrunch in an email interview. “This is conceptually similar to using predictive text in iOS and continually pressing one of the suggested next words.”
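The training side of that picture can be sketched the same way. The toy below “trains” by counting which word follows which in a tiny corpus; the counting stands in for the gradient updates a real LLM would make, and prediction then simply replays the statistics:

```python
# A toy version of the training objective Berns describes: hide the next word,
# predict it from the preceding context, and tally the result. Counting here
# stands in for gradient descent over billions of parameters.
from collections import Counter, defaultdict

corpus = "looking forward to hearing back looking forward to seeing you".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):      # (context, masked word) pairs
    counts[prev][nxt] += 1

def predict(prev):
    """Return the continuation seen most often after `prev` during training."""
    return counts[prev].most_common(1)[0][0]

print(predict("forward"))                      # -> "to", the statistically likely word
```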

This probability-based approach works remarkably well at scale — for the most part. But while the most probable words are likely to yield text that makes sense, that outcome is far from certain.


LLMs can generate something that’s grammatically correct but nonsensical, for instance — like the claim about the Golden Gate Bridge. Or they can spout mistruths, propagating inaccuracies in their training data. Or they can conflate different sources of information, including fictional sources, even if those sources clearly contradict each other.

It’s not malicious on the LLMs’ part. They don’t have malice, and the concepts of true and false are meaningless to them. They’ve simply learned to associate certain words or phrases with certain concepts, even if those associations aren’t accurate.

“‘Hallucinations’ are connected to the inability of an LLM to estimate the uncertainty of its own prediction,” Berns said. “An LLM is typically trained to always produce an output, even when the input is very different from the training data. A standard LLM does not have any way of knowing if it’s capable of reliably answering a query or making a prediction.”
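One way to picture the missing capability Berns describes is the entropy of the model’s output distribution, which a real LLM exposes through its per-token probabilities. The distributions below are hypothetical: a peaked one signals confidence, a flat one signals that the model is effectively guessing.

```python
# A sketch of uncertainty estimation from the model's own output distribution.
# Both distributions here are invented for illustration.
import math

def entropy(probs):
    """Shannon entropy in bits; higher means the model is less sure."""
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

confident = {"Paris": 0.97, "Lyon": 0.02, "Nice": 0.01}
guessing = {"Smith": 0.22, "Jones": 0.20, "Lee": 0.20, "Khan": 0.19, "Wu": 0.19}

print(f"{entropy(confident):.2f} bits")        # ~0.22: safe to answer
print(f"{entropy(guessing):.2f} bits")         # ~2.32: a candidate for "I don't know"
```

A standard LLM computes nothing like this check before answering; it produces its most probable output either way.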

Solving hallucination

The question is, can hallucination be solved? It depends on what you mean by “solved.”

Vu Ha, an applied researcher and engineer at the Allen Institute for Artificial Intelligence, asserts that LLMs “do and will always hallucinate.” But he also believes there are concrete ways to reduce — albeit not eliminate — hallucinations, depending on how an LLM is trained and deployed. 

“Consider a question answering system,” Ha said via email. “It’s possible to engineer it to have high accuracy by curating a high-quality knowledge base of questions and answers, and connecting this knowledge base with an LLM to provide accurate answers via a retrieval-like process.”
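A rough sketch of the retrieval-like process Ha describes, assuming a hand-curated knowledge base, a trivial keyword-overlap retriever in place of real embedding search, and placeholder entries (this is not Bing’s or Bard’s actual setup):

```python
# A sketch of retrieval-grounded question answering over a curated
# knowledge base. Entries and the final LLM step are placeholders.
knowledge_base = {
    "Who are the authors of the Toolformer paper?":
        "The eight Meta AI researchers credited on the paper.",
    "What is the Golden Gate Bridge?":
        "A suspension bridge spanning the Golden Gate strait in California.",
}

def retrieve(query):
    """Return the curated answer whose question shares the most words with the query."""
    q_words = set(query.lower().split())
    best = max(knowledge_base, key=lambda q: len(q_words & set(q.lower().split())))
    return knowledge_base[best]

def answer(query):
    context = retrieve(query)
    # A real system would now prompt the LLM to answer *only* from `context`,
    # which is what grounds the response in curated facts.
    return context

print(answer("Who wrote the Toolformer paper?"))
```

Grounding the model in retrieved text is what keeps it from inventing authors or affiliations, which is exactly the failure Ha observed in his comparison below.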

Ha illustrated the difference between an LLM drawing on a “high-quality” knowledge base and one relying on less careful data curation. He ran the question “Who are the authors of the Toolformer paper?” (Toolformer is an AI model trained by Meta) through Microsoft’s LLM-powered Bing Chat and Google’s Bard. Bing Chat correctly listed all eight Meta co-authors, while Bard misattributed the paper to researchers at Google and Hugging Face.

“Any deployed LLM-based system will hallucinate. The real question is if the benefits outweigh the negative outcome caused by hallucination,” Ha said. In other words, if there’s no obvious harm done by a model — the model gets a date or name wrong once in a while, say — but it’s otherwise helpful, then it might be worth the trade-off. “It’s a question of maximizing expected utility of the AI,” he added.


Berns pointed to another technique that has been used with some success to reduce hallucinations in LLMs: reinforcement learning from human feedback (RLHF). Introduced by OpenAI in 2017, RLHF involves training an LLM, then gathering additional information to train a “reward” model and fine-tuning the LLM with the reward model via reinforcement learning.

In RLHF, a set of prompts from a predefined dataset is passed through an LLM to generate new text. Human annotators then rank the LLM’s outputs by their overall “helpfulness” — data that’s used to train the reward model. The reward model, which at this point can take in any text and assign it a score reflecting how favorably humans would perceive it, is then used to fine-tune the LLM’s generated responses.
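The reward-modeling step can be sketched as a pairwise ranking update: whenever the model scores a human-rejected output above a human-preferred one, its weights shift toward the preferred output. Everything below (the featurizer, the data, the update rule) is a toy stand-in for the neural reward models used in practice.

```python
# A toy sketch of reward modeling via a perceptron-style pairwise ranking
# update. Real reward models are neural networks trained on many rankings.
def features(text):
    # Stand-in featurizer; real reward models score the LLM's hidden states.
    return [len(text.split()), float("mars" in text.lower())]

weights = [0.0, 0.0]

def reward(text):
    return sum(w * f for w, f in zip(weights, features(text)))

def update(preferred, rejected, lr=0.1):
    """If the pair is ranked wrong, shift weights toward the preferred output."""
    if reward(preferred) <= reward(rejected):
        for i, (fp, fr) in enumerate(zip(features(preferred), features(rejected))):
            weights[i] += lr * (fp - fr)

# One annotator comparison: the factual answer was preferred over the confabulation.
good = "The capital of France is Paris."
bad = "The capital of France is Paris, which was briefly relocated to Mars."
update(good, bad)
print(reward(good) > reward(bad))              # -> True: the ranking is now learned
```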

OpenAI leveraged RLHF to train several of its models, including GPT-4. But even RLHF isn’t perfect, Berns warned.

“I believe the space of possibilities is too large to fully ‘align’ LLMs with RLHF,” Berns said. “Something often done in the RLHF setting is training a model to produce an ‘I don’t know’ answer [to a tricky question], primarily relying on human domain knowledge and hoping the model generalizes it to its own domain knowledge. Often it does, but it can be a bit finicky.”
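One crude approximation of that “I don’t know” behavior is simple abstention: refuse to answer whenever the model’s confidence in its top answer falls below a threshold. The probability hook below is hypothetical, not a real LLM API.

```python
# A crude approximation of trained "I don't know" behavior: abstain when the
# model's confidence in its top answer is low. `top_prob` is a hypothetical
# hook into the model's output probabilities.
ABSTAIN_THRESHOLD = 0.6

def answer_or_abstain(top_answer, top_prob):
    return top_answer if top_prob >= ABSTAIN_THRESHOLD else "I don't know."

print(answer_or_abstain("Paris", 0.97))        # -> "Paris"
print(answer_or_abstain("A. Smith", 0.31))     # -> "I don't know."
```

As Berns notes, the hard part isn’t the mechanism but getting the model to generalize the abstention behavior beyond the examples humans labeled.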

Alternative philosophies

Assuming hallucination isn’t solvable, at least not with today’s LLMs, is that a bad thing? Berns doesn’t think so, actually. Hallucinating models could fuel creativity by acting as a “co-creative partner,” he posits — giving outputs that might not be wholly factual but that contain some useful threads to tug on nonetheless. Creative uses of hallucination can produce outcomes or combinations of ideas that might not occur to most people.

“‘Hallucinations’ are a problem if generated statements are factually incorrect or violate any general human, social or specific cultural values — in scenarios where a person relies on the LLM to be an expert,” he said. “But in creative or artistic tasks, the ability to come up with unexpected outputs can be valuable. A human recipient might be surprised by a response to a query and therefore be pushed into a certain direction of thoughts which might lead to the novel connection of ideas.”

Ha argued that the LLMs of today are being held to an unreasonable standard — humans “hallucinate” too, after all, when we misremember or otherwise misrepresent the truth. But with LLMs, he believes we experience a cognitive dissonance because the models produce outputs that look good on the surface but contain errors upon further inspection.

“Simply put, LLMs, just like any AI techniques, are imperfect and thus make mistakes,” he said. “Traditionally, we’re OK with AI systems making mistakes since we expect and accept imperfections. But it’s more nuanced when LLMs make mistakes.”

Indeed, the answer may well not lie in how generative AI models work at the technical level. Insofar as there’s a “solution” to hallucination today, treating models’ predictions with a skeptical eye seems to be the best approach.
