AI fuels surge in child sexual abuse imagery as law enforcement struggles to keep pace

Key Points
- Reports of AI‑generated child sexual abuse material have more than doubled in two years.
- The Internet Watch Foundation logged 8,029 AI‑created CSAM images and videos in 2025.
- NCMEC received 1.5 million AI‑linked CSAM reports in 2025, up from 67,000 the year before.
- Minnesota case: a school employee allegedly used AI to digitally undress children in photos, yielding nearly 800 abuse images.
- Law enforcement must now determine whether depicted children are real, altered or fictional.
- Offenders also use AI for manipulated photos and abusive chatbot conversations.
- Automated moderation systems are overwhelmed by the surge, slowing response times.

Generative artificial intelligence is amplifying the production of child sexual abuse material (CSAM), prompting a sharp rise in reports to watchdogs and law‑enforcement agencies. A Reuters investigation found that actionable reports of AI‑generated CSAM more than doubled over two years, while the Internet Watch Foundation logged 8,029 such images and videos in 2025 alone. The National Center for Missing & Exploited Children received 1.5 million AI‑linked reports that year, up from 67,000 the previous year. A Minnesota case involving a school employee illustrated how everyday photos can be weaponized, leaving investigators to untangle whether a child in an image is real, altered or entirely fabricated.
The technology is reshaping the dark underworld of CSAM. Recent investigations reveal a steep climb in AI‑crafted images and videos, overwhelming platforms, regulators and child‑safety groups that are already stretched thin.
According to Reuters, actionable reports of AI‑generated CSAM more than doubled over the past two years. The Internet Watch Foundation (IWF) identified 8,029 AI‑created images and videos of child sexual abuse in 2025 alone. Those numbers translate into a torrent of content that is harder to detect, verify and remove.
In the United States, the National Center for Missing & Exploited Children (NCMEC) received a staggering 1.5 million AI‑linked CSAM reports in 2025, dwarfing the 67,000 logged a year earlier and the 4,700 reported in 2023. The surge reflects not only more content but also the growing sophistication of AI tools that can produce realistic, often indistinguishable depictions of abuse.
Law‑enforcement officers now face an added layer of complexity. Determining whether a child depicted in an image is a real victim, a digitally altered subject or a wholly fabricated figure consumes valuable time. Each misstep delays action that could protect a child in immediate danger.
A high‑profile Minnesota case underscores the new threat. William Michael Haslach, a school lunch monitor and traffic guard, allegedly used AI applications to strip clothing from photos he had taken of children at work. Federal investigators uncovered more than 90 victims and nearly 800 AI‑generated abuse images on his devices. The case shows how ordinary photographs, whether taken in person or harvested from social media, can become raw material for illicit AI manipulation.
Beyond newly generated imagery, offenders are experimenting with other formats. Reports cite AI‑manipulated photos of real children, as well as chatbot conversations in which perpetrators seek grooming advice or role‑play abusive scenarios. These textual and visual hybrids further confound automated moderation systems, which must now sift through a flood of false positives and low‑quality tips.
Automated moderation tools, once considered a frontline defense, are being swamped. The sheer volume of AI‑generated content produces “junk tips” that overload task forces already coping with limited resources. Every erroneous flag or missed piece of evidence represents a lost opportunity to intervene in an ongoing abuse situation.
Experts warn that the problem will only intensify as AI models become more accessible and user‑friendly. Without decisive policy action, technical safeguards and coordinated law‑enforcement responses, the internet may see an even larger influx of synthetic abuse material, further eroding the protective net for children.