Campaigning for the US Presidential Elections is in full force, and efforts to influence the outcome are underway as well. Sam Altman-led OpenAI on Friday announced that it has taken down several accounts belonging to an Iranian group that used its AI chatbot ChatGPT to generate content aimed at influencing the US Presidential Elections. The operation, identified as Storm-2035, used the chatbot to generate content on topics such as commentary on candidates from both sides of the US elections, the conflict in Gaza, and Israel's presence at the Olympic Games.


These Iranian groups were allegedly sharing this content via social media accounts and websites.




Research Results


The Microsoft-backed AI company's investigation revealed that ChatGPT was used to create long-form articles and shorter social media posts. However, OpenAI reported that these efforts did not generate significant audience engagement: most of the identified social media content received few or no likes, shares, or comments, and there was no evidence that the web articles were widely shared on social media.


The accounts responsible for this activity have been banned from using OpenAI's services, and the company stated that it will continue to monitor for any further policy violations. Earlier in August, a Microsoft threat intelligence report highlighted that the Iranian network Storm-2035, which operates four websites disguised as news outlets, has been actively engaging US voter groups on opposite sides of the political spectrum.


The report noted that this engagement involved "polarizing messaging on issues such as the U.S. presidential candidates, LGBTQ rights, and the Israel-Hamas conflict." As the November 5 presidential election approaches, Democratic candidate Kamala Harris and Republican candidate Donald Trump are in a close race. The AI company also revealed in May that it had disrupted five covert influence operations that attempted to use its models for "deceptive activities" across the internet.