OpenAI launches Trusted Contact feature to alert designated adults when ChatGPT users show self‑harm risk

Key Points
- OpenAI introduces Trusted Contact, letting users name a trusted adult for safety alerts.
- AI flags self‑harm language; users are warned before any contact is notified.
- Human reviewers assess flagged cases; alerts are sent without chat transcripts.
- Feature built with input from mental‑health experts and a network of 260+ doctors.
- Users can add, change or remove a Trusted Contact at any time.
- Some see the tool as a safety net; others worry about AI monitoring and stigma.

OpenAI has begun rolling out a new Trusted Contact tool for ChatGPT that lets users name a trusted adult who can be notified if the AI detects signs of self‑harm. The system flags at‑risk conversations, warns the user, and then passes the case to a human review team before any alert is sent. Notifications are delivered by email, text or in‑app message without sharing chat transcripts. Developed with input from mental‑health experts and a network of more than 260 doctors, the feature adds to OpenAI’s existing safety controls and raises questions about AI‑driven monitoring.
OpenAI is expanding its safety toolkit for ChatGPT with a feature called Trusted Contact, now in limited rollout. Users can tap their profile, select a trusted adult, and wait for that person to accept the role. Once active, the system monitors conversations for language that suggests a serious risk of self‑harm. If the AI flags such content, the user receives a warning that the designated contact may be alerted.
A specially trained human review team then evaluates the situation. Only when reviewers deem the risk genuine does the Trusted Contact receive a notification via email, text message or an in‑app alert, urging them to check in with the user. OpenAI says the alerts do not include chat transcripts or detailed conversation history, preserving user privacy. Users retain full control: they can remove or replace their Trusted Contact at any time.
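To make the escalation path concrete, the sequence OpenAI describes can be sketched in a few lines of Python. This is purely illustrative: the class names, the toy keyword classifier and the notification helpers are hypothetical stand-ins, since OpenAI has not published implementation details. What the sketch captures is the ordering of the safeguards: flag, warn the user, hand off to a human reviewer, and only then send a transcript-free alert.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class ReviewDecision(Enum):
    GENUINE_RISK = auto()
    NO_ACTION = auto()


@dataclass
class TrustedContact:
    name: str
    channel: str   # "email", "sms", or "in_app"
    address: str


def classify_risk(message: str) -> bool:
    """Stand-in for the model-side classifier that flags language
    suggesting a serious risk of self-harm."""
    return "hurt myself" in message.lower()  # toy heuristic, not the real model


def warn_user(text: str) -> None:
    print(f"[warning shown to user] {text}")


def human_review(message: str) -> ReviewDecision:
    """Stand-in for the specially trained human review team."""
    return ReviewDecision.GENUINE_RISK  # a reviewer, not code, decides this


def notify(contact: TrustedContact, text: str) -> None:
    # The alert carries no chat transcript or conversation history.
    print(f"[{contact.channel} to {contact.name}] {text}")


def handle_message(message: str, contact: Optional[TrustedContact]) -> None:
    if not classify_risk(message):
        return  # ordinary conversation: no flag, no review, no alert

    # 1. The user is warned before anyone else is involved.
    warn_user("Your Trusted Contact may be alerted about this conversation.")

    # 2. A human reviewer evaluates the flagged case.
    if human_review(message) is ReviewDecision.GENUINE_RISK and contact:
        # 3. Only a confirmed risk triggers a transcript-free notification.
        notify(contact, "Someone who named you as their Trusted Contact "
                        "may need support. Please check in with them.")


if __name__ == "__main__":
    contact = TrustedContact("Jordan", "sms", "+15550100")
    handle_message("I want to hurt myself", contact)
```

The notable design choice the sketch highlights is that the automated classifier alone never triggers an alert: a human decision sits between the flag and the notification, and the notification itself carries no conversation content.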
The feature was built with guidance from mental‑health professionals, suicide‑prevention specialists and a global network of more than 260 doctors spanning 60 countries. OpenAI positions Trusted Contact as an extension of its existing parental controls and safety guardrails, acknowledging that ChatGPT now functions for many as more than a productivity tool, acting as a confidant, a life coach, or even a therapist.
OpenAI CEO Sam Altman has previously remarked that younger users treat ChatGPT like an operating system for life decisions, consulting the AI on everything from career moves to personal relationships. That reliance underpins the company’s push to embed emotional‑support infrastructure directly into the product.
Reactions to the rollout are mixed. Some users view the ability to enlist a trusted adult as reassuring, especially for vulnerable individuals who might otherwise suffer in silence. Others find the notion of AI‑driven monitoring unsettling. In a recent interview, Amy Sutton of Freedom Counselling warned that AI surveillance could exacerbate mental‑health stigma, prompting people to hide signs of distress and potentially deepening the problem.
OpenAI’s approach reflects a broader industry trend: as AI systems become more embedded in daily life, companies are grappling with the balance between user safety and privacy. Trusted Contact illustrates one attempt to provide a safety net while limiting data exposure, but it also raises questions about how comfortable users are with automated alerts and human review of their private conversations.
For now, the feature remains limited to users who opt in and designate a contact. OpenAI has not disclosed a timeline for a wider release, but the company says it will continue to refine the system based on feedback from mental‑health experts and real‑world use.