The Great Pretender

AI doesn’t know the answer, and it hasn’t learned how to care

[Image: Hands holding a mask of anonymity; polygonal design of interconnected elements. Image Credits: Ilya Lukichev / Getty Images]

There is a good reason not to trust what today’s AI constructs tell you, and it has nothing to do with the fundamental nature of intelligence or humanity, with Wittgensteinian concepts of language representation, or even disinfo in the dataset. All that matters is that these systems do not distinguish between something that is correct and something that looks correct. Once you understand that the AI considers these things more or less interchangeable, everything makes a lot more sense.

Now, I don’t mean to short-circuit any of the fascinating and wide-ranging discussions about this that are happening continually across every form of media and conversation. We have everyone from philosophers and linguists to engineers and hackers to bartenders and firefighters questioning and debating what “intelligence” and “language” truly are, and whether something like ChatGPT possesses them.

This is amazing! And I’ve learned a lot already as some of the smartest people in this space enjoy their moment in the sun, while from the mouths of comparative babes come fresh new perspectives.

But at the same time, it’s a lot to sort through over a beer or coffee when someone asks “what about all this GPT stuff, kind of scary how smart AI is getting, right?” Where do you start — with Aristotle, the Mechanical Turk, the perceptron or “Attention Is All You Need”?

During one of these chats I hit on a simple approach that I’ve found helps people get why these systems can be both really cool and also totally untrustworthy, while subtracting not at all from their usefulness in some domains and the amazing conversations being had around them. I thought I’d share it in case you find the perspective useful when talking about this with other curious, skeptical people who nevertheless don’t want to hear about vectors or matrices.

There are only three things to understand, which lead to a natural conclusion:

  1. These models are created by having them observe the relationships between words, sentences and so on in an enormous dataset of text, then build their own internal statistical map of how all these millions and millions of words and concepts are associated and correlated. No one has told them this is a noun, this is a verb, this is a recipe, this is a rhetorical device; these are things that show up naturally in patterns of usage. (There’s a toy sketch of this right after the list.)
  2. These models are not specifically taught how to answer questions, in contrast to the familiar assistant software that companies like Google and Apple have been calling AI for the last decade. Those are basically Mad Libs with the blanks leading to APIs: every question is either accounted for or produces a generic response. To a large language model, a question is just a series of words like any other.
  3. These models express a kind of “confidence” in their responses. In a simple example of a cat recognition AI, that confidence would run from 0, meaning completely sure that’s not a cat, to 100, meaning absolutely sure that’s a cat. You can tell it to say “yes, it’s a cat” if it’s at a confidence of 85, or 90, whatever threshold produces your preferred behavior.
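
To make point 1 concrete, here is a deliberately toy sketch in Python. The corpus and names are invented for illustration, and real models use neural networks trained on billions of words, but the principle that structure emerges from counts alone, with nothing labeled, is the same:

```python
# Toy sketch of point 1: a "statistical map" built purely from
# co-occurrence counts. Nothing here is labeled a noun, a verb or a
# recipe; whatever structure exists falls out of patterns of usage.
from collections import Counter, defaultdict

# Invented miniature corpus, standing in for billions of words.
corpus = (
    "the cat sat on the mat . "
    "the cat chased the mouse . "
    "the dog chased the cat ."
).split()

# Map each word to a tally of the words observed to follow it.
following = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    following[prev][word] += 1

# The map now encodes associations no one ever spelled out:
print(following["the"].most_common(3))  # [('cat', 3), ('mat', 1), ('mouse', 1)]
print(following["cat"].most_common(3))  # every word that ever followed "cat"
```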

So given what we know about how the model works, here’s the crucial question: What is it confident about? It doesn’t know what a cat or a question is, only statistical relationships found between data nodes in a training set. A minor tweak would have the cat detector equally confident the picture showed a cow, or the sky, or a still life painting. The model can’t be confident in its own “knowledge” because it has no way of actually evaluating the content of the data it has been trained on.
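
As a sketch of what that looks like in code (the threshold, names and scores below are invented for illustration; in a real classifier the score would come from the model’s output layer):

```python
# Toy sketch of the cat detector's confidence threshold. The score
# measures how strongly the input resembles the "cat" patterns in the
# training data; nothing ever checks the verdict against ground truth.
CONFIDENCE_THRESHOLD = 85  # 85, 90, whatever produces the behavior you want

def verdict(cat_score: float) -> str:
    # cat_score: the model's 0-100 confidence that the input looks like
    # the cats it was trained on (a made-up number for this sketch).
    if cat_score >= CONFIDENCE_THRESHOLD:
        return "yes, it's a cat"
    return "no, that's not a cat"

print(verdict(92.0))  # confident the input *resembles* a cat
print(verdict(40.0))  # confident it doesn't; reality is never consulted
```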

The AI is expressing how sure it is that its answer appears correct to the user.

This is true of the cat detector, and it is true of GPT-4 — the difference is a matter of the length and complexity of the output. The AI cannot distinguish between a right and a wrong answer — it can only make a prediction of how likely a series of words is to be accepted as correct. That is why it must be considered the world’s most comprehensively informed bullshitter rather than an authority on any subject. It doesn’t even know it’s bullshitting you — it has been trained to produce a response that statistically resembles a correct answer, and it will say anything to improve that resemblance.

The AI doesn’t know the answer to any question, because it doesn’t understand the question. It doesn’t know what questions are. It doesn’t “know” anything! The answer follows the question because, extrapolating from its statistical analysis, that series of words is the most likely to follow the previous series of words. Whether those words refer to real places, people, events, etc. is not material — only that they are like real ones.
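
Returning to the toy bigram sketch from earlier (rebuilt here so it runs on its own; again, everything is invented for illustration, and a real model conditions on far more than the last word), “answering” is nothing more than repeatedly appending the likeliest next word:

```python
# Self-contained toy sketch: an "answer" is just a chain of likeliest
# next words. The loop below consults likelihood at every step and
# reality at none of them.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the cat chased the mouse . "
    "the dog chased the cat ."
).split()

following = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    following[prev][word] += 1

def continue_from(word: str, length: int = 6) -> str:
    out = [word]
    for _ in range(length):
        candidates = following[out[-1]].most_common(1)
        if not candidates:  # nothing ever followed this word in training
            break
        out.append(candidates[0][0])  # take the likeliest; ask no questions
    return " ".join(out)

# Fluent-looking and grammatical-ish, but never once checked against the world.
print(continue_from("the"))
```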

It’s the same reason AI can produce a Monet-like painting that isn’t a Monet — all that matters is it has all the characteristics that cause people to identify a piece of artwork as his. Today’s AI approximates factual responses the way it would approximate “Water Lilies.”

Now, I hasten to add that this isn’t an original or groundbreaking concept — it’s basically another way to explain the stochastic parrot, or the undersea octopus. Those problems were identified very early by very smart people and represent a great reason to read commentary on tech matters widely.

But in the context of today’s chatbot systems, I’ve just found that people intuitively get this approach: The models don’t understand facts or concepts, only relationships between words, and their responses are an “artist’s impression” of an answer. Their goal, when you get down to it, is to fill in the blank convincingly, not correctly. This is the reason why their responses fundamentally cannot be trusted.

Of course sometimes, even a lot of the time, its answer is correct! And that isn’t an accident: For many questions, the answer that looks the most correct is the correct answer. That is what makes these models so powerful — and dangerous. There is so, so much you can extract from a systematic study of millions of words and documents. And unlike recreating “Water Lilies” exactly, there’s a flexibility to language that lets an approximation of a factual response also be factual — but also make a totally or partially invented response appear equally or more so. The only thing the AI cares about is that the answer scans right.

This leaves the door open to discussions around whether this is truly knowledge, what if anything the models “understand,” if they have achieved some form of intelligence, what intelligence even is and so on. Bring on the Wittgenstein!

Furthermore, it also leaves open the possibility of using these tools in situations where truth isn’t really a concern. If you want to generate five variants of an opening paragraph to get around writer’s block, an AI might be indispensable. If you want to make up a story about two endangered animals, or write a sonnet about Pokémon, go for it. As long as it is not crucial that the response reflects reality, a large language model is a willing and able partner — and not coincidentally, that’s where people seem to be having the most fun with it.

Where and when AI gets it wrong is very, very difficult to predict because the models are too large and opaque. Imagine a card catalog the size of a continent, organized and updated over a period of a hundred years by robots, from first principles that they came up with on the fly. You think you can just walk in and understand the system? It gives a right answer to a difficult question and a wrong answer to an easy one. Why? Right now that is one question that neither AI nor its creators can answer.

This may well change in the future, perhaps even the near future. Everything is moving so quickly and unpredictably that nothing is certain. But for the present this is a useful mental model to keep in mind: The AI wants you to believe it and will say anything to improve its chances.
