OpenAI disrupts five covert influence operations

In the last three months, OpenAI has disrupted five covert influence operations (IO) that attempted to exploit the company’s models for deceptive activity online. As of May 2024, none of these campaigns appears to have meaningfully increased its audience engagement or reach by using OpenAI’s services.

OpenAI claims its commitment to designing AI models with safety in mind has often thwarted the threat actors’ attempts to generate the content they wanted. Additionally, the company says AI tools have made its own investigations more efficient.

Detailed threat reporting by distribution platforms and the open-source community has significantly contributed to combating IO. OpenAI is sharing these findings to promote information sharing and best practices among the broader community of stakeholders.

Disrupting covert IO

In the past three months, OpenAI disrupted five operations that used its models for tasks such as generating short comments, creating fake social media profiles, conducting open-source research, debugging simple code, and translating texts.

Specific operations disrupted include:

Bad Grammar: A previously unreported operation from Russia targeting Ukraine, Moldova, the Baltic States, and the US. This group used OpenAI’s models to debug code for running a Telegram bot and to create political comments in Russian and English, posted on Telegram.

Doppelganger: Another Russian operation generating comments in multiple languages on platforms like X and 9GAG, translating and editing articles, generating headlines, and converting news articles into Facebook posts.

Spamouflage: A Chinese network using OpenAI’s models for public social media activity research, generating texts in several languages, and debugging code for managing databases and websites.

International Union of Virtual Media (IUVM): An Iranian operation generating and translating long-form articles, headlines, and website tags, published on a linked website.

Zero Zeno: A commercial company in Israel, with operations generating articles and comments posted across multiple platforms, including Instagram, Facebook, X, and affiliated websites.

The content posted by these operations focused on various issues, including Russia’s invasion of Ukraine, the Gaza conflict, Indian elections, European and US politics, and criticisms of the Chinese government.

Despite these efforts, none of these operations showed a significant increase in audience engagement attributable to OpenAI’s models. On Brookings’ Breakout Scale – which assesses the impact of covert IO – none of the five operations scored higher than a 2, indicating activity on multiple platforms but no breakout into authentic communities.

Attacker trends

Investigations into these influence operations revealed several trends:

Content generation: Threat actors used OpenAI’s services to generate large volumes of text with fewer language errors than human operators could achieve alone.

Mixing old and new: AI was used alongside traditional formats, such as manually written texts or copied memes.

Faking engagement: Some networks generated replies to their own posts to create the appearance of engagement, although none managed to attract authentic engagement.

Productivity gains: Threat actors used AI to enhance productivity, summarising social media posts and debugging code.

Defensive trends

OpenAI’s investigations benefited from industry sharing and open-source research. Defensive measures include:

Defensive design: OpenAI’s safety systems imposed friction on threat actors, often preventing them from generating the desired content.

AI-enhanced investigation: AI-powered tools improved the efficiency of detection and analysis, reducing investigation times from weeks or months to days.

Distribution matters: IO content, like traditional content, must be distributed effectively to reach an audience. Despite their efforts, none of the disrupted operations managed substantial engagement.

Importance of industry sharing: Sharing threat indicators with industry peers increased the impact of OpenAI’s disruptions. The company benefited from years of open-source analysis by the wider research community.

The human element: Despite using AI, threat actors were prone to human error, such as publishing refusal messages from OpenAI’s models on their social media and websites.

OpenAI says it remains dedicated to developing safe and responsible AI. This involves designing models with safety in mind and proactively intervening against malicious use.

OpenAI admits that detecting and disrupting multi-platform abuses such as covert influence operations is challenging, but says it remains committed to mitigating the dangers.
