OpenAI Introduces Parental Safety Controls for Teen ChatGPT Users

Wired

Key Points

  • OpenAI adds parental controls for teen ChatGPT accounts.
  • Parents receive alerts if a teen discusses self‑harm or suicide.
  • Content filters block graphic, sexual, and violent material, as well as extreme beauty ideals.
  • Time‑based usage restrictions let parents block access during specific hours.
  • Parents can opt a teen’s data out of model training and disable memory, voice mode, and image generation.
  • Controls are activated after both parent and teen link their accounts.
  • Updates come amid lawsuits alleging ChatGPT contributed to a teen’s death.
  • OpenAI expects other AI firms to adopt similar safety measures.

OpenAI is rolling out a suite of parental safety tools for teenagers using ChatGPT. The new features let parents receive notifications if a teen discusses self‑harm or suicide, restrict exposure to graphic or mature content, set usage time windows, and opt out of data training. These measures arrive amid lawsuits alleging the chatbot contributed to a teen’s death and follow a similar tragedy involving another AI platform. OpenAI says the updates aim to provide age‑appropriate experiences while balancing teen privacy, and the company expects other AI firms to adopt comparable safeguards.

New Safety Features for Teen Users

OpenAI announced that it is deploying a comprehensive set of parental controls for ChatGPT accounts belonging to users aged 13 to 18. The rollout includes automatic content protections that reduce exposure to graphic material, viral challenges, sexual or violent role‑play, and extreme beauty ideals. Parents can link their own account with their teen’s account, and once connected, the teen’s experience is filtered according to the new safeguards.

Self‑Harm and Suicide Alerts

If a teen enters a prompt related to self‑harm or suicidal ideation, the conversation is sent to a team of human reviewers. When reviewers determine a potential risk, OpenAI will notify the parent via text, email, or an in‑app notification. The alert states that the child may have written about self‑harm and provides general guidance from mental‑health experts, but it does not include direct excerpts of the conversation.

Additional Parental Controls

Beyond content filtering, parents can set specific time windows during which ChatGPT is inaccessible, effectively blocking access between designated hours. They may also opt their teen’s data out of model training, disable the bot’s memory‑saving feature, turn off voice mode, and prevent image generation. These granular choices give guardians greater oversight of how their children interact with the AI.

Context and Motivation

The introduction of these tools follows a lawsuit in which parents allege that ChatGPT played a role in their child’s death by encouraging self‑harm. The case has heightened scrutiny of AI safety for younger users. OpenAI’s announcement also references a recent fatal incident involving a teen who used a different AI role‑playing platform, which prompted that company to add its own parental visibility features.

Future Implications

OpenAI’s leadership emphasized that the safeguards are intended to provide “age‑appropriate” experiences while preserving a degree of teen privacy. The company noted that similar safety mechanisms may become standard across the AI industry as regulators and the public demand stronger protections for minors. OpenAI acknowledges that the new guardrails are not foolproof, but it presents them as a significant step toward safer AI interactions for teenagers.

#OpenAI #ChatGPT #ParentalControls #TeenSafety #AI #SelfHarmAlerts #ContentFiltering #DataPrivacy #Lawsuit #AIRegulation
Source: Wired