The Internet Watch Foundation (IWF), a safety watchdog, has warned that child sexual abuse imagery created with AI tools is a problem growing by the day, and has said it has now reached a "tipping point." According to the IWF, the volume of AI-generated illegal content discovered online in the past six months has already surpassed the total reported for the entire previous year.
The organisation noted that the vast majority of this content was accessible on open parts of the internet, rather than hidden within the dark web, which requires special browsers to navigate. Derek Ray-Hill, the IWF's interim chief executive, remarked that the advanced nature of these images suggests that the AI tools involved were likely trained using real images and videos of actual victims. The Guardian quoted Ray-Hill as saying, "Recent months show that this problem is not going away and is in fact getting worse."
According to a report by the Guardian, the matter has become so serious that, as per an IWF analyst, authorities now struggle to determine whether an image was generated by AI or shows a real child in need of help.
Deepfake Is A Real Threat
The IWF reported handling 74 instances of AI-generated child sexual abuse material (CSAM) in the six months leading up to September, a figure that already surpasses the 70 cases recorded over the previous 12 months. Each report can refer to a webpage hosting multiple illegal images.
The IWF's findings included not only AI-generated images involving real-life victims but also "deepfake" videos where adult content was manipulated to resemble CSAM. Previous reports revealed that AI technology has been used to create de-aged images of celebrities, portraying them as children in abusive scenarios. Another disturbing trend involved using AI tools to “nudify” photos of fully clothed children found online.
More than half of this flagged content was hosted on servers in Russia and the US, with significant amounts also traced to Japan and the Netherlands. Webpages containing this harmful material are added to an IWF-managed list, shared with the tech industry to block and prevent access.
The IWF reported that 80 per cent of the illegal AI-generated images flagged by the public were found on openly accessible platforms such as forums or AI image galleries. In response to increasing concerns around online exploitation, Instagram has introduced new tools to combat sextortion, a scam where individuals, often posing as young women, trick users into sharing intimate images, only to later blackmail them.
Instagram's Contribution To Curb This Issue
Instagram has introduced a new protective feature that automatically blurs any nude images sent via direct messages (DMs), helping users exercise caution before viewing. Along with this, users are reminded of the option to block or report senders, adding an extra layer of security against potential exploitation. The aim is to enhance user control over sensitive content.
Starting this week, the feature will be automatically enabled for teenage accounts worldwide and will also work with encrypted messages. However, images flagged by Instagram's "on-device detection" system won't be reported to the platform or authorities unless a user chooses to report them. For adults, the feature will be available on an opt-in basis.
Additionally, Instagram will hide follower and following lists from suspected sextortion scammers who may use those connections as part of their threats to spread private images.