Meta Implements Stricter Guidelines For Altered Media, Including Deepfakes
Meta's new strategy involves keeping manipulated content on its platforms but adding information about its creation process for viewer transparency.
Meta, the parent company of Facebook, unveiled significant revisions to its guidelines on digitally produced and altered media on Friday, ahead of US elections that will test its ability to manage deceptive content created with emerging artificial intelligence technologies. According to Monika Bickert, Vice President of Content Policy at Meta, the social media giant will begin applying “Made with AI” labels in May.
These labels will be affixed to AI-generated videos, images, and audio shared across Meta's platforms. This initiative marks an expansion of their existing policy, which had previously only addressed a limited subset of manipulated videos.
Additionally, Bickert announced that Meta will introduce distinct and more conspicuous labels for digitally altered media that present a substantial risk of misleading the public on critical issues. These labels will be applied irrespective of whether the content was produced using AI or traditional editing tools.
This new strategy signifies a shift in Meta's approach toward manipulated content. Rather than solely focusing on removing specific posts, the company will now maintain such content on its platforms while ensuring that viewers are informed about its creation process.
Previously, Meta revealed plans to detect images generated using external generative AI tools through embedded invisible markers within the files, although no specific rollout date was provided at that time.
According to a spokesperson speaking to Reuters, Meta's revised labelling strategy will extend to the content shared on its Facebook, Instagram, and Threads platforms. However, different guidelines apply to its other services, such as WhatsApp and Quest virtual reality devices.
The spokesperson mentioned that Meta will promptly start implementing the more noticeable “high-risk” labels.
These modifications arrive several months ahead of the US presidential election in November, a period when tech experts caution about the potential impact of new generative AI technologies. Political campaigns have already begun utilising AI tools in countries like Indonesia, testing the boundaries of guidelines set by Meta and the leading generative AI provider, OpenAI.
Meta's Oversight Board Criticises Existing Rules On Deepfakes
In February, Meta's oversight board criticised the company's existing rules on manipulated media, describing them as “incoherent.” The critique followed its review of a Facebook video from last year featuring altered footage of US President Joe Biden, edited to create a false impression of inappropriate behaviour.
Under Meta's current policy, manipulated videos are allowed to remain on the platform if they are not generated by AI or do not fabricate words attributed to individuals. The oversight board suggested expanding this policy to cover non-AI content, which can be equally misleading, as well as audio-only content and videos depicting fabricated actions.