
Are AI models doomed to always hallucinate?


Large language models (LLMs) like OpenAI’s ChatGPT all suffer from the same problem: they make stuff up.

The mistakes range from strange and innocuous — like claiming that the Golden Gate Bridge was transported across Egypt in 2016 — to highly problematic, even dangerous.

A mayor in Australia recently threatened to sue OpenAI because ChatGPT mistakenly claimed he pleaded guilty in a major bribery scandal. Researchers have found that LLM hallucinations can be exploited to distribute malicious code packages to unsuspecting software developers. And LLMs frequently give bad mental health and medical advice, like that wine consumption can “prevent cancer.”

This tendency to invent “facts” is a phenomenon known as hallucination, and it happens because of the way today’s LLMs — and all generative AI models, for that matter — are developed and trained.

Training models

Generative AI models have no real intelligence — they’re statistical systems that predict words, images, speech, music or other data. Fed an enormous number of examples, usually sourced from the public web, AI models learn how likely data is to occur based on patterns, including the context of any surrounding data.

For example, given a typical email ending in the fragment “Looking forward…”, an LLM might complete it with “… to hearing back” — following the pattern of the countless emails it’s been trained on. It doesn’t mean the LLM is looking forward to anything.

“The current framework of training LLMs involves concealing, or ‘masking,’ previous words for context” and having the model predict which word should follow this context, Sebastian Berns, a Ph.D. researcher at Queen Mary University of London, told TechCrunch in an email interview. “This is conceptually similar to using predictive text in iOS and continually pressing one of the suggested next words.”

This probability-based approach works remarkably well at scale — for the most part. But while the range of words and their probabilities are likely to result in text that makes sense, it’s far from certain.
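The “predict the most likely continuation” idea can be sketched with a toy bigram counter. This is a deliberately simplified, hypothetical stand-in for a real LLM, which uses neural networks over far larger contexts, but the statistical principle is the same: pick the continuation seen most often in training data.

```python
from collections import Counter, defaultdict

# Tiny illustrative "training corpus" of email endings.
corpus = (
    "looking forward to hearing back . "
    "looking forward to seeing you . "
    "looking forward to hearing from you ."
).split()

# Count which word follows which.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the statistically most frequent follower of `word`."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("to"))  # "hearing" (seen twice) beats "seeing" (seen once)
```

The model “prefers” hearing only because it occurred more often, not because it understands anticipation, which is exactly why such systems can fluently produce false statements.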


LLMs can generate something that’s grammatically correct but nonsensical, for instance — like the claim about the Golden Gate Bridge. Or they can spout mistruths, propagating inaccuracies in their training data. Or they can conflate different sources of information, including fictional sources, even if those sources clearly contradict each other.

It’s not malicious on the LLMs’ part. They don’t have malice, and the concepts of true and false are meaningless to them. They’ve simply learned to associate certain words or phrases with certain concepts, even if those associations aren’t accurate.

“‘Hallucinations’ are connected to the inability of an LLM to estimate the uncertainty of its own prediction,” Berns said. “An LLM is typically trained to always produce an output, even when the input is very different from the training data. A standard LLM does not have any way of knowing if it’s capable of reliably answering a query or making a prediction.”
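One way to picture the missing capability Berns describes is a thresholding rule: answer only when one candidate clearly dominates the predicted probabilities, otherwise abstain. This is a hypothetical sketch, not a mechanism standard LLMs actually include.

```python
def answer_or_abstain(word_probs, threshold=0.6):
    """Return the top candidate only when the model is confident enough;
    otherwise abstain with "I don't know" — the behaviour a standard
    LLM lacks by default."""
    best_word, best_p = max(word_probs.items(), key=lambda kv: kv[1])
    return best_word if best_p >= threshold else "I don't know"

# Confident: one candidate dominates the probability mass.
print(answer_or_abstain({"hearing": 0.8, "seeing": 0.2}))
# Uncertain: mass is spread out, so the system should decline to answer.
print(answer_or_abstain({"hearing": 0.4, "seeing": 0.35, "going": 0.25}))
```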

Solving hallucination

The question is, can hallucination be solved? It depends on what you mean by “solved.”

Vu Ha, an applied researcher and engineer at the Allen Institute for Artificial Intelligence, asserts that LLMs “do and will always hallucinate.” But he also believes there are concrete ways to reduce — albeit not eliminate — hallucinations, depending on how an LLM is trained and deployed. 

“Consider a question answering system,” Ha said via email. “It’s possible to engineer it to have high accuracy by curating a high-quality knowledge base of questions and answers, and connecting this knowledge base with an LLM to provide accurate answers via a retrieval-like process.”
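A minimal sketch of the retrieval-like process Ha describes, assuming a hypothetical hand-curated question/answer base and a crude word-overlap matcher in place of a real retriever or LLM:

```python
def retrieve_answer(question, knowledge_base, min_overlap=2):
    """Return the curated answer whose stored question shares the most
    words with the query, or abstain when nothing matches well enough."""
    q_words = set(question.lower().replace("?", "").split())
    best_answer, best_score = None, 0
    for kb_question, kb_answer in knowledge_base.items():
        score = len(q_words & set(kb_question.lower().split()))
        if score > best_score:
            best_answer, best_score = kb_answer, score
    return best_answer if best_score >= min_overlap else "I don't know"

# Hypothetical curated knowledge base.
kb = {
    "who wrote the toolformer paper": "Researchers at Meta AI",
    "when was chatgpt released": "November 2022",
}
print(retrieve_answer("Who are the authors of the Toolformer paper?", kb))
```

Grounding answers in a curated store like this reduces hallucination because the system returns vetted text rather than freely generated text, at the cost of only covering questions the base anticipates.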

Ha illustrated the difference between an LLM with a “high-quality” knowledge base to draw on versus one with less careful data curation. He ran the question “Who are the authors of the Toolformer paper?” (Toolformer is an AI model trained by Meta) through Microsoft’s LLM-powered Bing Chat and Google’s Bard. Bing Chat correctly listed all eight Meta co-authors, while Bard misattributed the paper to researchers at Google and Hugging Face.

“Any deployed LLM-based system will hallucinate. The real question is if the benefits outweigh the negative outcome caused by hallucination,” Ha said. In other words, if there’s no obvious harm done by a model — the model gets a date or name wrong once in a while, say — but it’s otherwise helpful, then it might be worth the trade-off. “It’s a question of maximizing expected utility of the AI,” he added.


Berns pointed out another technique that had been used with some success to reduce hallucinations in LLMs: reinforcement learning from human feedback (RLHF). Introduced by OpenAI in 2017, RLHF involves training an LLM, then gathering additional information to train a “reward” model and fine-tuning the LLM with the reward model via reinforcement learning.

In RLHF, a set of prompts from a predefined dataset is passed through an LLM to generate new text. Human annotators then rank the LLM’s outputs by their overall “helpfulness” — data that’s used to train the reward model. The reward model, which at this point can take in any text and assign it a score reflecting how well humans would perceive it, is then used to fine-tune the LLM’s generated responses.
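The reward-scoring step can be caricatured as follows. In real RLHF the reward model is learned from human rankings and used to fine-tune the LLM’s weights; here, as a hypothetical simplification, a hand-written scoring function simply picks the best of several candidate outputs:

```python
# Toy stand-in for an RLHF reward model: score candidate outputs by a
# hand-written "helpfulness" heuristic (a real reward model is learned
# from human rankings), then keep the highest-scoring candidate.
def reward(text):
    score = 0
    if "i don't know" in text.lower():
        score += 1  # reward honesty about uncertainty
    score -= text.lower().count("definitely")  # penalise overconfidence
    return score

candidates = [
    "The Golden Gate Bridge was definitely moved to Egypt in 2016.",
    "I don't know; I can't verify that claim.",
]
best = max(candidates, key=reward)
print(best)  # the hedged answer scores higher than the confident falsehood
```

The point of the simplification: RLHF shifts the model toward outputs humans rate highly, which can include rewarding “I don’t know” — but only for the kinds of questions the human feedback happened to cover, hence Berns’ caveat about generalization.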

OpenAI leveraged RLHF to train several of its models, including GPT-4. But even RLHF isn’t perfect, Berns warned.

“I believe the space of possibilities is too large to fully ‘align’ LLMs with RLHF,” Berns said. “Something often done in the RLHF setting is training a model to produce an ‘I don’t know’ answer [to a tricky question], primarily relying on human domain knowledge and hoping the model generalizes it to its own domain knowledge. Often it does, but it can be a bit finicky.”

Alternative philosophies

Assuming hallucination isn’t solvable, at least not with today’s LLMs, is that a bad thing? Berns doesn’t think so, actually. Hallucinating models could fuel creativity by acting as a “co-creative partner,” he posits — giving outputs that might not be wholly factual but that contain some useful threads to tug on nonetheless. Creative uses of hallucination can produce outcomes or combinations of ideas that might not occur to most people.

“‘Hallucinations’ are a problem if generated statements are factually incorrect or violate any general human, social or specific cultural values — in scenarios where a person relies on the LLM to be an expert,” he said. “But in creative or artistic tasks, the ability to come up with unexpected outputs can be valuable. A human recipient might be surprised by a response to a query and therefore be pushed into a certain direction of thoughts which might lead to the novel connection of ideas.”

Ha argued that the LLMs of today are being held to an unreasonable standard — humans “hallucinate” too, after all, when we misremember or otherwise misrepresent the truth. But with LLMs, he believes we experience a cognitive dissonance because the models produce outputs that look good on the surface but contain errors upon further inspection.

“Simply put, LLMs, just like any AI techniques, are imperfect and thus make mistakes,” he said. “Traditionally, we’re OK with AI systems making mistakes since we expect and accept imperfections. But it’s more nuanced when LLMs make mistakes.”

Indeed, the answer may well not lie in how generative AI models work at the technical level. Insofar as there’s a “solution” to hallucination today, treating models’ predictions with a skeptical eye seems to be the best approach.
