X Removes Ban On Taylor Swift Searches After Her Fake Explicit Images Went Viral
Last week, when users tried to search for Taylor Swift's name on the social media platform, formerly known as Twitter, they got an error message.
Elon Musk-owned social media company X has lifted its ban on searches for Taylor Swift, which had been imposed after fake sexually explicit images of the pop singer circulated on the platform last week, according to a report by news agency Reuters. Search functionality has now been reactivated on X, formerly Twitter. Joe Benarroch, head of business operations at X, said in a statement that the platform will remain vigilant against any attempt to spread such content and will promptly remove it if detected.
On Sunday afternoon, when users tried to search for Taylor Swift's name on the social media platform, formerly known as Twitter, they got an error message that read: "Something went wrong. Try reloading." Fake graphic images depicting the singer gained significant traction last week, with some of the content going viral and garnering millions of views, prompting concern from both US officials and Taylor Swift's fans.
Swift's loyal fan base responded by flagging posts and accounts sharing the fabricated images and flooding the platform with authentic images and videos of the singer under the hashtag "protect Taylor Swift."
X had described the search block as a temporary measure taken with an "abundance of caution." Notably, one image of Swift, who was named Time magazine's "Person of the Year" in 2023, garnered 47 million views on X before the account that posted it was suspended, The New York Times reported.
The incident also drew attention from the White House, with officials expressing alarm over the spread of AI-generated photos. White House press secretary Karine Jean-Pierre highlighted the disproportionate impact on women and girls and advocated for legislation to address the misuse of AI technology on social media platforms. She emphasised the role of platforms in enforcing their own rules to prevent the spread of misinformation and non-consensual intimate imagery.
In US politics, calls for new laws to criminalise the creation of deepfake images have gained momentum. Deepfakes, which use artificial intelligence (AI) to manipulate images and videos, have seen a significant uptick, with a reported 550 per cent increase in doctored images since 2019.