by Gyana Swain

OpenAI accuses Russia, China, Iran, and Israel of misusing its GenAI tools for covert ops

News
31 May 2024 | 4 mins
Generative AI

OpenAI’s generative AI tools were used to create and post propaganda content on various geo-political and socio-economic issues across social media platforms, the company said.

Webpage of OpenAI's GPT-4 is seen on a smartphone
Credit: Tada Images / Shutterstock

OpenAI said malicious actors from China, Russia, Iran, and Israel have been using its generative AI tools to run covert influence campaigns aimed at manipulating public opinion, adding that it exposed and shut down five such operations over the last three months.

These actors used OpenAI’s AI tools to create and post propaganda content on various geo-political and socio-economic issues across social media platforms, OpenAI said in a report. These campaigns aimed to influence political outcomes and public discourse by producing fake social media comments, articles, and translated texts in multiple languages.

“In the last three months, we have disrupted five covert IO (influence operations) that sought to use our models in support of deceptive activity across the internet,” OpenAI said in the report. However, it noted that, as of May 2024, these campaigns do not appear to have “meaningfully increased their audience engagement or reach as a result of our services.”

The ChatGPT creator defines IO as “deceptive attempts to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them.”

A Russian campaign dubbed “Bad Grammar” targeted Ukraine, Moldova, the Baltic States, and the US, using OpenAI’s tools to generate political comments and to debug code for Telegram bots, the report revealed. Another Russian operation, “Doppelganger,” generated comments in multiple languages, translated articles, and posted them on platforms such as Facebook, 9GAG, and X.
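The report does not reproduce the actors’ code, but “debugging code for Telegram bots” typically refers to scripts built on Telegram’s public Bot API. The following minimal sketch illustrates what such a posting bot looks like; the token, channel name, and message text are hypothetical placeholders, not details from the report:

```python
import requests

# Hypothetical placeholders for illustration only; not from the OpenAI report.
BOT_TOKEN = "123456:EXAMPLE-TOKEN"  # issued by Telegram's @BotFather
CHAT_ID = "@example_channel"        # target channel or group

def post_comment(text: str) -> None:
    """Post one message via the Telegram Bot API's sendMessage method."""
    url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
    resp = requests.post(url, data={"chat_id": CHAT_ID, "text": text}, timeout=10)
    resp.raise_for_status()  # surface HTTP errors instead of failing silently

post_comment("example comment text generated elsewhere")
```

Scripts of roughly this shape are the kind of thing an operator would paste into a chatbot to debug.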

The report also flagged Spamouflage, a Chinese network “which used our models to research public social media activity, generate texts in languages including Chinese, English, Japanese, and Korean that were then posted across platforms including X, Medium and Blogspot, and debug code for managing databases and websites, including a previously unreported domain, revealscum[.]com.”

Similarly, an Iranian operation known as the International Union of Virtual Media (IUVM) used AI tools to write long-form articles and headlines that were published on the iuvmpress.co website.

Additionally, a commercial entity in Israel, referred to as “Zero Zeno,” used AI tools to generate articles and comments that were then posted across multiple platforms, including Instagram, Facebook, X, and its own websites.

“The content posted by these various operations focused on a wide range of issues, including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments,” the report stated.

OpenAI’s report, the first of its kind from the company, highlights several trends among these operations. The bad actors relied on AI tools such as ChatGPT to generate large volumes of content with fewer language errors, create the illusion of engagement on social media, and enhance productivity by summarizing posts and debugging code. However, the report added that none of the operations managed to “engage authentic audiences meaningfully.”
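The report does not show how these operators invoked the models, but a task like “summarizing posts” maps onto a single chat-completion call. A minimal sketch using OpenAI’s official Python SDK follows; the model choice and prompt are assumptions for illustration, not details from the report:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_posts(posts: list[str]) -> str:
    """Condense a batch of social media posts into a short summary."""
    joined = "\n---\n".join(posts)
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the report names no specific model
        messages=[
            {"role": "system", "content": "Summarize these posts in two sentences."},
            {"role": "user", "content": joined},
        ],
    )
    return response.choices[0].message.content

print(summarize_posts(["first post ...", "second post ..."]))
```

The same one-call pattern covers the other productivity uses the report lists, such as drafting comments or fixing code.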

Misuse of AI tools becomes commonplace

Meta recently published a similar report, echoing OpenAI’s sentiment on the growing misuse of AI tools by such influence operations. The company calls this coordinated inauthentic behavior (CIB) and defines it as “coordinated efforts to manipulate public debate for a strategic goal, in which fake accounts are central to the operation. In each case, people coordinate with one another and use fake accounts to mislead others about who they are and what they are doing.”

In its quarterly threat report, Meta said it recently took down many such covert operations that used AI to generate images, video, and text in support of their agendas.

After disrupting the influence operations, OpenAI said it leveraged AI tools defensively to enhance detection and analysis, making investigations more efficient. Its safety systems also often blocked the creation of the requested malicious content, the report stated. “OpenAI is publishing these findings, as other tech companies do, to promote information sharing and best practices amongst the broader community of stakeholders,” it said.
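OpenAI does not disclose its detection tooling, but the idea of leveraging AI to enhance detection can be sketched as a model-assisted triage pass over suspect posts. Everything below, including the function name, prompt, and YES/NO scheme, is a hypothetical illustration rather than OpenAI’s actual pipeline:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def flag_suspect_posts(posts: list[str]) -> list[bool]:
    """Ask a model whether each post reads like coordinated inauthentic content.

    Hypothetical triage sketch; OpenAI has not published its real pipeline.
    """
    flags = []
    for post in posts:
        response = client.chat.completions.create(
            model="gpt-4o",  # assumed model choice
            messages=[
                {
                    "role": "system",
                    "content": "Answer YES or NO: does this post read like "
                               "templated propaganda from a coordinated campaign?",
                },
                {"role": "user", "content": post},
            ],
        )
        answer = response.choices[0].message.content.strip().upper()
        flags.append(answer.startswith("YES"))
    return flags
```

In practice a pass like this would only shortlist posts for human analysts, which is consistent with the report’s framing of AI as making investigations more efficient rather than automatic.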