San Francisco has launched a significant lawsuit against several prominent deepfake websites that use artificial intelligence to create non-consensual, manipulated nude images of women and girls. As reported by the New York Post, the city attorney’s office has filed suit against 16 of the most-trafficked "AI undressing" sites, which collectively garnered over 200 million visits in the first six months of 2024.


These websites enable users to upload images of fully clothed individuals, which are then altered by AI to produce fake nude photos. The suit highlights the harmful nature of these sites, citing a disturbing promotional message from one platform that trivialised the exploitation by suggesting users could bypass dating to obtain nude images directly.


Violation of Multiple Laws


The lawsuit accuses these sites of violating multiple federal and state laws, including those against revenge porn, deepfake pornography, and child pornography. Additionally, it claims that these operations are in breach of California’s unfair competition law, arguing that the significant harm caused to individuals far outweighs any perceived benefits.


'Darkest Corners Of The Internet'


As reported by the Post, San Francisco City Attorney David Chiu expressed deep concern over the exploitation facilitated by these platforms, emphasising the emotional and psychological toll on victims.


“This investigation has taken us to the darkest corners of the internet, and I am absolutely horrified for the women and girls who have had to endure this exploitation,” Chiu stated.


Preventing Future Incidents


The city is not only seeking civil penalties but is also pushing for the removal of these websites and the implementation of measures to prevent future incidents. The lawsuit also referenced a troubling case earlier this year in which five students from a California middle school were expelled after creating and distributing AI-generated nude images of their classmates.


One victim, whose image was manipulated without her consent, described the lasting impact of the exploitation, saying she feels powerless and perpetually fearful that the images could resurface at any moment. Another described the ongoing fear and despair her family endures as a result of the deepfake images.


This lawsuit marks a significant legal challenge against the rising tide of AI-generated deepfake content, an issue that has gained increasing visibility as the technology becomes more widespread. While AI holds the potential for numerous positive applications, its misuse in creating deepfakes has raised serious ethical and legal concerns, particularly regarding privacy and the spread of misinformation.