Scissors (Getty Images)

Eat rocks and run with scissors — Google’s AI Overviews are wild

From getting basic US history wrong to surfacing racist conspiracy theories, the results are not encouraging.

5/24/24 8:40AM

Earlier this month Google began rolling out its AI Overview feature to the masses — and it’s going poorly.

Google, in some instances, has been using generative AI to answer questions at the top of people’s searches, rather than surfacing relevant links there and showing tidbits of that information like it used to. The responses are direct and in plain language, offering an air of authority. The problem is that when you “let Google do the Googling for you,” the results can be at best hilarious and at worst outright dangerous.

A Google spokesperson told me these errors come from “generally very uncommon queries, and aren’t representative of most people’s experiences.” But that doesn’t acknowledge just how widely and wildly Google Search is used. “We conducted extensive testing before launching this new experience, and will use these isolated examples as we continue to refine our systems overall,” the spokesperson said.

Naturally, people have been having a field day seeing just how bad the AI’s responses can be. Here are some fun and scary examples of Google’s AI Overview gone wrong that I’ve been able to confirm are real:

Apparently people should “eat at least one small rock a day” (it told me ingesting “pea gravel slowly” was fine), which suggests it’s pulling answers from the satirical publication The Onion. It also said that the CIA uses black highlighters, which would have come from this Onion story, though I wasn’t able to replicate that. Google didn’t respond to a question about whether it trained its AI on The Onion.

Here’s AI Overview telling me running with scissors is just fine!

“Should you run with scissors”

It said President Barack Obama is Muslim, a known conspiracy theory. Google told me it has since taken this result down because it violates the company’s policies.

It suggested many US presidents have been non-white. This bears some similarity to Google’s ill-fated “woke” image generator that showed Black founding fathers and Nazis. Google subsequently paused the feature.

It suggested adding glue to get the cheese to stick to pizza, a result apparently pulled directly from an 11-year-old Reddit post. Google pays Reddit $60 million a year to use its content.

It said there’s no country in Africa that starts with a “K.” Sorry Kenya!

It is bad at spelling fruit.

“fruits that end in um”

Google’s AI Overview also says that Google violates antitrust law. The “yes” here goes on to say, “yes, the U.S. Justice Department and 11 states are suing Google for antitrust violations.” That’s partly true, but it leaves out a second, near-identical lawsuit involving 35 states.

Google has shut off AI Overview for many of these queries after they went viral.

“Our systems aim to automatically prevent policy-violating content from appearing in AI Overviews,” the Google spokesperson said. “If policy-violating content does appear, we will take action as appropriate.”

For now it seems like a game of Whac-A-Mole. Google didn’t respond to a question about whether it would keep the AI Overview feature up and running.

More Tech


AI-generated police reports: What could go wrong?

Police body cam manufacturer Axon doesn’t want to get left behind in the generative AI scene. Politico reports that at least seven police departments across the US are using the company’s Draft One software, which uses AI to generate an arrest report based on transcripts from body cam footage.

According to Politico, once an officer signs an acknowledgement that the report was generated using AI, there’s “no way” to discern which reports were AI-generated; it would be “impossible to find them.” This, of course, creates huge issues for the use of these reports as evidence in the courts.

