AI-Powered Propaganda: How OpenAI Dismantled Iran's Attempt to Sway U.S. Voters

OpenAI recently took action against a covert Iranian influence operation, codenamed Storm-2035, which had been using ChatGPT to produce and distribute content aimed at influencing the 2024 U.S. presidential election. The operation involved a cluster of ChatGPT accounts generating a variety of politically charged content, including long-form articles and social media posts, targeting both progressive and conservative audiences.

The content covered topics such as the U.S. presidential race, the conflict in Gaza, and Israel's participation in the Olympic Games. These materials were distributed through websites designed to mimic legitimate news outlets and through social media accounts posing as everyday users. Despite these efforts, OpenAI reported that the operation had minimal impact, with most of the content failing to gain significant traction or audience engagement.

OpenAI’s response, which included banning the accounts involved and sharing threat intelligence with relevant stakeholders, underscores growing concern about the misuse of AI in disinformation campaigns. The incident illustrates how AI tools can be exploited to manipulate public opinion, even when such attempts fail to achieve widespread influence.

The discovery and disruption of Storm-2035 mark a significant moment in the ongoing effort to counter foreign interference in democratic processes, particularly as AI technologies become more sophisticated and accessible.