Meta has faced heavy criticism over the past few years amid accusations that it allowed AI-generated misinformation to spread on its social media platforms during major elections and sway voters. The company has now found that AI-generated content accounted for less than one per cent of the misinformation fact-checked during elections held in over 40 countries this year, including India.


The finding comes from the social media giant's analysis of content shared on its platforms during elections in the US, Bangladesh, Indonesia, India, Pakistan, France, the UK, South Africa, Mexico, and Brazil, as well as the EU Parliament elections.




Nick Clegg, Meta's president of global affairs, wrote in a blog post: “While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content.”


Meta's statements indicate that earlier concerns about AI fuelling propaganda and disinformation did not materialise on its platforms, including Facebook, WhatsApp, Instagram, and Threads. The company also claimed success in preventing foreign interference in elections, saying it dismantled more than 20 new "covert influence operations."


Meta said, “We also closely monitored the potential use of generative AI by covert influence campaigns – what we call Coordinated Inauthentic Behavior (CIB) networks – and found they made only incremental productivity and content-generation gains using generative AI.”


The company also reported that its AI image generation tool, Imagine, rejected more than 590,000 user requests to create election-related deepfakes, including AI-generated images of President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden.


Meta Admits Excessive Content Moderation During Pandemic


Recently, Clegg also admitted that the company regrets its heavy-handed approach to content moderation during the COVID-19 pandemic. The Verge quoted Clegg as saying, “No one during the pandemic knew how the pandemic was going to unfold, so this really is wisdom in hindsight. But with that hindsight, we feel that we overdid it a bit. We’re acutely aware because users quite rightly raised their voice and complained that we sometimes over-enforce and we make mistakes and we remove or restrict innocuous or innocent content.”


He also admitted that Meta’s moderation error rates were “still too high which gets in the way of the free expression that we set out to enable.” He added, “Too often, harmless content gets taken down, or restricted, and too many people get penalized unfairly.”