Bribery. Embezzlement. Terrorism.
What if an AI chatbot accused you of doing something terrible? When bots make mistakes, the false claims can ruin lives, and the legal questions around these issues remain murky.
That's according to several people suing the biggest AI companies. But chatbot makers hope to avoid liability, and a string of legal threats has revealed how easy it might be for companies to wriggle out of responsibility for allegedly defamatory chatbot responses.
Earlier this year, an Australian regional mayor, Brian Hood, made headlines by becoming the first person to accuse ChatGPT's maker, OpenAI, of defamation. Few seemed to notice when Hood resolved his would-be landmark AI defamation case out of court this spring, but the quiet conclusion to this much-covered legal threat offered a glimpse of what could become a go-to strategy for AI companies seeking to avoid defamation lawsuits.
It was mid-March when Hood first discovered that OpenAI's ChatGPT was responding to user prompts with the false claim that Hood had gone to prison for bribery. Hood was alarmed. He had built his political career as a whistleblower exposing corporate misconduct, yet ChatGPT had seemingly garbled the facts, casting Hood as the criminal. He worried that the longer ChatGPT was allowed to repeat the false claims, the greater the risk that the chatbot would ruin his reputation with voters.
Hood asked his lawyer to give OpenAI an ultimatum: Remove the confabulations from ChatGPT within 28 days or face a lawsuit that could become the first to prove that ChatGPT's mistakes—often called "hallucinations" in the AI field—are capable of causing significant harms.
We now know that OpenAI chose the first option. By the end of April, the company had filtered the false statements about Hood from ChatGPT. Hood's lawyers told Ars that Hood was satisfied, dropping his legal challenge and considering the matter settled.