X, formerly known as Twitter, has taken swift action by restricting searches for Taylor Swift after explicit AI-generated images of the singer surfaced on the platform. In a statement provided to the BBC, Joe Benarroch, X's head of business operations, described the move as a "temporary action" aimed at prioritising user safety. When attempting to search for Taylor Swift on the platform, users now encounter a message that states, "Something went wrong. Try reloading."


Fake graphic images depicting the singer gained significant traction earlier this week, with some of the content going viral and garnering millions of views. The spread prompted concern from both US officials and Taylor Swift's fans.


Swift's loyal fan base responded by flagging posts and accounts sharing the fabricated images. They also flooded the platform with authentic images and videos of the singer, using the hashtag "protect Taylor Swift".


In response to the incident, X said in a statement on Friday that posting non-consensual nudity is strictly prohibited on the platform. The statement emphasised a "zero-tolerance policy" toward such content and said the company was actively removing the identified images and taking appropriate action against the accounts responsible.


It remains unclear when X initiated the blocking of searches for Taylor Swift on its platform, and whether similar measures have been implemented for other public figures or terms in the past.


In correspondence with the BBC, Benarroch clarified that the action was taken "with an abundance of caution as we prioritise safety on this issue."


The matter drew attention from the White House, with officials expressing alarm over the spread of AI-generated photos. White House press secretary Karine Jean-Pierre highlighted the disproportionate impact on women and girls, advocating for legislation to address the misuse of AI technology on social media platforms. She emphasised the role of platforms in enforcing rules to prevent the dissemination of misinformation and non-consensual, intimate imagery.


In the realm of US politics, calls for new laws to criminalise the creation of deepfake images have gained momentum. Deepfakes, which use artificial intelligence to fabricate or manipulate images and video, have surged, with a reported 550 per cent increase in doctored images since 2019.


There are currently no US federal laws specifically targeting the creation or sharing of deepfake images, although some states have taken measures to address the issue. The UK, for instance, criminalised the sharing of deepfake pornography under the Online Safety Act in 2023.


In a related context, Indian Prime Minister Narendra Modi raised concerns about the misuse of AI to create deceptive deepfake content during a speech in November last year. He urged the media to educate the public about the potential crisis posed by deepfakes. In the same address, he reiterated his commitment to transforming India into a developed nation, highlighted the 'vocal for local' initiative, and expressed confidence in the country's continued progress, citing its achievements during the COVID-19 pandemic.