Last year, the U.K. charity Internet Watch Foundation (IWF) declared it the “most extreme year on record,” having uncovered a disturbing 275,652 instances of child sexual abuse imagery online. Shockingly, a significant number of these cases involved predators coercing victims into creating explicit material. The IWF urged technology companies and online platforms to act swiftly, as regulation has been slow to address the issue and the use of artificial intelligence (AI) has escalated.
According to the IWF, these statistics were obtained through “proactive searching” and an analysis of nearly 400,000 reports from over 50 reporting portals worldwide. These figures reflect an 8% increase from the previous year’s findings.
The United States topped the list with 14.8% of the websites, hosting 41,502 URLs. This represents a significant rise from the previous year, when its share was roughly one-third of that figure.
The IWF’s analysis also uncovered 2,401 self-generated videos featuring children aged 3 to 6, most of them girls. This highlights the alarming fact that abusers, now described as “opportunists,” are sexually abusing not only teenagers but also very young children.
The IWF firmly believes that “tech companies and online platforms” should prioritize strengthening security measures for children online rather than relying on governments to take regulatory action; waiting for legislation such as the U.K.’s Online Safety Act would only delay much-needed protection.

The IWF also reported a 22% increase in extreme content compared with 2022, a concerning trend. The number of sextortion cases, in which perpetrators use pictures, information, or videos to blackmail their victims, has also been rising: only six cases were recorded in 2021, but last year the number climbed to 176.
AI poses a serious threat to children online, according to the IWF. In 2023, the organization processed 51 webpages containing AI-generated images of child sexual abuse, 38 of which appeared so realistic that they were classified as “real” in its statistics. A further 228 URLs featured AI-generated content. Although this represents a small share of the material the IWF examined, the charity is alarmed by the potential for rapid growth. Particularly concerning is the emergence of manuals on using AI to produce and distribute child sexual abuse material, which may fall outside existing legal frameworks. The IWF uncovered one such text manual on the dark web with instructions on how to use AI for these purposes. The IWF commented, “We have seen such behavior before, but the fact that this is the first evidence of criminals acting in concert to advise and encourage each other to use AI for these purposes is particularly disturbing.”
This article was originally published in Forbes.