OpenAI’s generative AI tools were used to create and post propaganda content on various geopolitical and socioeconomic issues across social media platforms, the company said.

OpenAI said malicious actors from China, Russia, Iran, and Israel have been using its generative AI tools to run covert influence campaigns aimed at manipulating public opinion, adding that it exposed and shut down five such operations over the last three months.

These actors used OpenAI’s tools to create and post propaganda content on a range of geopolitical and socioeconomic issues across social media platforms, OpenAI said in a report. The campaigns sought to influence political outcomes and public discourse by producing fake social media comments, articles, and translated texts in multiple languages.

“In the last three months, we have disrupted five covert IO (influence operations) that sought to use our models in support of deceptive activity across the internet,” OpenAI said in the report. However, it added that as of May 2024, these campaigns do not appear to have “meaningfully increased their audience engagement or reach as a result of our services.”

The ChatGPT creator defines IO as “deceptive attempts to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them.”

A Russian campaign dubbed “Bad Grammar” targeted Ukraine, Moldova, the Baltic States, and the US, using OpenAI’s tools to create political comments and debug code for Telegram bots, the report revealed. Another Russian operation, “Doppelganger,” generated comments in multiple languages, translated articles, and posted them on platforms such as Facebook, 9GAG, and X.

The report also described “a Chinese network known as Spamouflage, which used our models to research public social media activity, generate texts in languages including Chinese, English, Japanese, and Korean that were then posted across platforms including X, Medium and Blogspot, and debug code for managing databases and websites, including a previously unreported domain, revealscum[.]com.”

Similarly, an Iranian operation known as the “International Union of Virtual Media” (IUVM) used AI tools to write long-form articles and headlines for publication on the iuvmpress.co website. A commercial entity in Israel referred to as “Zero Zeno” likewise used AI tools to generate articles and comments that were then posted across multiple platforms, including Instagram, Facebook, X, and its own websites.

“The content posted by these various operations focused on a wide range of issues, including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments,” the report stated.

OpenAI’s report, the first of its kind from the company, highlights several trends across these operations. The bad actors relied on AI tools such as ChatGPT to generate large volumes of content with fewer language errors, create the illusion of engagement on social media, and boost productivity by summarizing posts and debugging code. Even so, the report noted, none of the operations managed to “engage authentic audiences meaningfully.”

Misuse of AI tools becomes commonplace

Meta recently published a similar report, echoing OpenAI’s sentiment on the growing misuse of AI tools by such influence operations to push malicious agendas.
The company calls these operations coordinated inauthentic behavior (CIB), which it defines as “coordinated efforts to manipulate public debate for a strategic goal, in which fake accounts are central to the operation. In each case, people coordinate with one another and use fake accounts to mislead others about who they are and what they are doing.” In its quarterly threat report, Meta said it recently took down several such covert operations that used AI to generate images, video, and text in service of their agendas.

On the defensive side, OpenAI said that after disrupting the influence operations it leveraged AI tools to enhance detection and analysis, making investigations more efficient. The company also imposed safety measures that often prevented the creation of malicious content, the report stated.

“OpenAI is publishing these findings, as other tech companies do, to promote information sharing and best practices amongst the broader community of stakeholders,” it said.