OpenAI Reports Surge in Child Exploitation Alerts Amid Growing AI Scrutiny

OpenAI’s child exploitation reports increased sharply this year
Ars Technica

Key Points

  • OpenAI submitted roughly 75,000 CyberTipline reports in the first half of 2025, up from under 1,000 a year earlier.
  • Reports cover CSAM found in user uploads, requests made to the models, and generated content across ChatGPT and API services.
  • NCMEC identified a 1,325% increase in generative‑AI‑related CSAM reports between 2023 and 2024.
  • 44 state attorneys general sent a joint letter warning AI firms to protect children from predatory AI products.
  • OpenAI and other AI companies face lawsuits alleging their chatbots contributed to child deaths.
  • The U.S. Senate Judiciary Committee held a hearing on AI chatbot harms, and the FTC launched a market study on AI companion bots.
  • OpenAI’s video‑generation tool Sora launched after the reporting period, so its impact is not yet reflected in the data.

OpenAI disclosed a dramatic rise in its reports to the National Center for Missing & Exploited Children’s CyberTipline, sending roughly 75,000 reports in the first half of 2025 compared with under 1,000 in the same period a year earlier. The increase mirrors a broader jump in generative‑AI‑related child‑exploitation reports identified by NCMEC. OpenAI attributes the growth to its expanding product suite, which spans the consumer ChatGPT app and API access; its video‑generation tool Sora launched after the reporting window and is not yet counted. The escalation has prompted heightened regulatory attention, including a joint letter from 44 state attorneys general, a Senate Judiciary Committee hearing, and an FTC market study focused on protecting children from AI‑driven harms.

Sharp Increase in CyberTipline Reports

OpenAI revealed that in the first half of 2025 it submitted 75,027 reports to the National Center for Missing & Exploited Children’s (NCMEC) CyberTipline, referencing 74,559 distinct pieces of content, nearly one piece per report. By contrast, in the same six‑month window of 2024 the company filed just 947 reports covering 3,252 pieces of content. The data underscores a massive uptick in the volume of child‑exploitation material the company is detecting and forwarding to authorities.

Scope of Reporting and Product Landscape

OpenAI’s policy mandates reporting every instance of child sexual abuse material (CSAM) it encounters, whether the material is uploaded directly by a user or generated through requests to its models. The company’s reporting covers its consumer‑facing ChatGPT app, where users can upload files (including images) and receive generated text or images, as well as its broader API offerings that let developers embed the models in external services. OpenAI’s video‑generation product Sora was released after the reporting period and therefore does not appear in the current NCMEC figures.

Generative AI Driving Broader Trend

The surge in OpenAI’s reports aligns with a larger pattern observed by NCMEC. The center’s analysis indicates that reports involving generative‑AI technologies rose by roughly 1,325 percent between 2023 and 2024. While other major AI labs such as Google have published their own CyberTipline statistics, they have not broken down what portion of those reports is directly tied to generative AI outputs.

Regulatory and Legislative Response

The heightened reporting activity has drawn intensified scrutiny from policymakers. A coalition of 44 state attorneys general issued a joint letter to several AI firms—including OpenAI, Meta, Character.AI, and Google—warning that they would exert all available authority to protect children from “predatory artificial intelligence products.” OpenAI and Character.AI have also faced lawsuits alleging that their chatbots played a role in the deaths of minors.

In the United States Senate, the Committee on the Judiciary convened a hearing to examine the harms associated with AI chatbots. Simultaneously, the Federal Trade Commission launched a market study of AI companion bots, specifically probing how companies are mitigating negative impacts on children. These actions signal a coordinated effort across federal and state levels to address the emerging risks posed by generative AI.

Implications and Next Steps

The data suggests that as AI capabilities expand, the potential for misuse—including the creation and distribution of CSAM—also grows. OpenAI’s commitment to reporting every encounter with such material reflects an industry‑wide acknowledgment of responsibility. Ongoing regulatory focus, combined with the pending availability of new tools like Sora, will likely shape how AI providers balance innovation with child‑safety safeguards in the months ahead.

#OpenAI #NCMEC #CyberTipline #CSAM #GenerativeAI #ChildSafety #StateAttorneysGeneral #SenateJudiciaryCommittee #FTCMarketStudy #ChatGPT #Sora #AIRegulation