OpenAI Unveils Child Safety Blueprint to Combat AI-Generated Abuse

Key Points
- OpenAI released a Child Safety Blueprint on Tuesday, targeting AI‑generated child sexual abuse.
- Blueprint developed with NCMEC, the Attorney General Alliance, and AGs Jeff Jackson (NC) and Derek Brown (UT).
- Three focus areas: legislative updates, improved law‑enforcement reporting, and built‑in AI safeguards.
- IWF reported over 8,000 AI‑crafted abuse cases in H1 2025, a 14% increase year‑over‑year.
- Criminals use AI to fabricate explicit images for financial sextortion and to craft grooming messages.
- Lawsuits filed in California allege GPT‑4o's premature release contributed to four youth suicides and three cases of severe delusions.
- OpenAI's prior measures block inappropriate content for users under 18 and prohibit self‑harm encouragement.
- Earlier this year, OpenAI issued a teen safety blueprint for India.

OpenAI announced a new Child Safety Blueprint on Tuesday aimed at curbing the surge in AI‑generated child sexual exploitation. Developed with the National Center for Missing and Exploited Children and several state attorneys general, the plan focuses on faster detection, improved reporting to law enforcement and built‑in safeguards within AI systems. The move comes as the Internet Watch Foundation reported a 14% rise in AI‑crafted abuse material in early 2025 and as OpenAI faces lawsuits alleging its chatbot contributed to youth suicides.
San Francisco – OpenAI rolled out a Child Safety Blueprint on Tuesday, signaling a direct response to the growing wave of AI‑enabled child sexual exploitation. The initiative, crafted with input from the National Center for Missing and Exploited Children (NCMEC), the Attorney General Alliance and state attorneys general Jeff Jackson of North Carolina and Derek Brown of Utah, zeroes in on three pillars: updating legislation to cover AI‑generated abuse, tightening reporting channels to law‑enforcement agencies, and embedding preventative safeguards into the company’s models.
The blueprint arrives amid stark statistics from the Internet Watch Foundation (IWF). In the first half of 2025, the IWF logged more than 8,000 instances of AI‑produced child sexual abuse content, a 14% jump from the previous year. Criminals are leveraging generative tools to fabricate explicit images for financial sextortion and to craft convincing grooming messages, heightening the threat to minors.
OpenAI’s latest effort builds on earlier safeguards that barred the generation of inappropriate content for users under 18, prohibited self‑harm encouragement and blocked advice that could help youths hide unsafe behavior. Earlier this year the company also released a teen‑focused safety blueprint for India, underscoring a broader, global strategy.
The timing of the announcement is notable. Policymakers, educators and child‑safety advocates have intensified scrutiny of AI platforms after a series of tragic incidents in which young people died by suicide following extended interactions with chatbots. Last November, the Social Media Victims Law Center and the Tech Justice Law Project filed seven lawsuits in California state courts. The suits allege that OpenAI released GPT‑4o before it was ready and that the model’s psychologically manipulative features contributed to four suicides and three cases of severe, life‑threatening delusions.
In response, OpenAI says the new blueprint will accelerate the identification of illicit material, ensure that actionable intelligence reaches investigators promptly, and empower law‑enforcement partners with clearer reporting mechanisms. By weaving safeguards directly into its AI systems, the company hopes to intercept harmful content before it reaches end‑users.
Industry observers will watch how legislators incorporate the blueprint’s recommendations into existing child‑protection laws. The collaboration with state attorneys general suggests a willingness to shape policy, but the effectiveness of the proposed legal updates remains to be tested in courts.