OpenAI has revealed that operations linked to Russia, China, Iran and Israel have been using its artificial intelligence tools to create and spread disinformation, as the technology becomes a powerful weapon in information warfare in an election-heavy year.
The San Francisco-based maker of the ChatGPT chatbot said in a report on Thursday that five covert influence operations had used its AI models to generate text and images at a high volume, with fewer language errors than previously, as well as to generate comments or replies to their own posts. OpenAI’s policies prohibit the use of its models to deceive or mislead others.
The content focused on issues “including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments,” OpenAI said in the report.
The networks also used AI to enhance their own productivity, applying it to tasks such as debugging code or doing research into public social media activity, it said.
Social media platforms, including Meta and Google’s YouTube, have sought to clamp down on the proliferation of disinformation campaigns in the wake of Donald Trump’s 2016 win in the US presidential election, when investigators found evidence that a Russian troll farm had sought to manipulate the vote.
Pressure is mounting on fast-growing AI companies such as OpenAI, as rapid advances in their technology mean it is cheaper and easier than ever for perpetrators of disinformation to create realistic deepfakes, manipulate media and spread that content in an automated fashion.
As about 2 billion people head to the polls this year, policymakers have urged the companies to introduce and enforce appropriate guardrails.