Meta Platforms is under scrutiny as its independent Oversight Board examines how the company handled two AI-generated sexually explicit images of female celebrities posted on its social media platforms, Facebook and Instagram. The images, which have raised serious concerns about the misuse of AI to create pornographic content, will serve as test cases for the board to evaluate the effectiveness of Meta's policies and enforcement practices for such material.


The Oversight Board, although funded by Meta, operates autonomously and has taken a proactive stance on these issues, aiming to address the broader implications for digital consent and privacy. In its public communications, the board has refrained from naming the celebrities involved in order to avoid further harm.


Rise Of Deepfakes


The review comes amid rapid advances in AI technology that make it possible to create highly realistic fake images and videos. These advances have fuelled a worrying trend in which most victims of so-called "deepfakes" are women and girls. The phenomenon drew additional public attention after an incident on the social media platform X, owned by Elon Musk, in which searches for US pop star Taylor Swift were temporarily blocked because of a flood of fake explicit images of her.


In one of the cases under review, an AI-generated image posted on Instagram depicted a nude woman closely resembling a well-known public figure from India. The image came from an account dedicated exclusively to AI-generated depictions of Indian women. The other image appeared in a Facebook group for AI creations and showed a nude woman resembling an American public figure being inappropriately touched by a man.


Meta's Response


Meta's initial handling of the two cases differed. The image of the American celebrity was removed for violating the platform's bullying and harassment policy, which prohibits sexually derogatory images. The image of the Indian public figure, however, was initially left online and was taken down only after the Oversight Board selected the case for review.


In response to the incidents, Meta has said it will enforce the board's decisions and may revise its policies based on the board's recommendations. The episode highlights the challenges and ethical questions tech companies face in regulating AI-generated content, as well as the potential need for legislation to better control the creation and distribution of harmful "deepfakes."