OpenAI adds Trusted Contact feature to flag ChatGPT users in crisis

Key Points
- OpenAI introduces Trusted Contact, an opt‑in safety feature for adult ChatGPT users.
- Users can designate a friend, family member, or caregiver to receive alerts if self‑harm language is detected.
- Notifications are limited; no chat transcripts are shared with the contact.
- A small team of trained reviewers assesses flagged conversations before any outreach.
- Both the user and the Trusted Contact can remove the link at any time; users can also edit or replace their contact.
- Feature expands on a September emergency‑contact option launched after a teen suicide case.
- Similar safety tools have appeared on platforms like Instagram, highlighting industry‑wide focus on mental‑health safeguards.

OpenAI has rolled out an optional safety feature for adult ChatGPT users called Trusted Contact. The tool lets a user designate a trusted adult, such as a friend, relative or caregiver, to receive a discreet alert if the system detects language suggesting self-harm or suicidal ideation. Notifications contain no transcript details, both the user and the contact can revoke the link at any time, and OpenAI says a small team of trained reviewers will assess flagged conversations before any outreach occurs. The feature is designed to complement the chatbot's built-in helpline referrals by giving users a direct line to someone they already know.
Enabling Trusted Contact is straightforward. Users open their ChatGPT account settings, enter the contact's name and email address or phone number, and send an invitation. The invited person has seven days to accept; otherwise the request expires. Both parties retain full control: the user can edit or delete the contact at any time, and the contact can remove themselves from the arrangement without penalty.
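OpenAI has not published an API or schema for the feature, so any implementation details are guesswork. Purely as an illustration of the invitation lifecycle described above, here is a minimal Python sketch; every name in it is hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

INVITE_TTL = timedelta(days=7)  # invitations lapse after seven days

@dataclass
class TrustedContactInvite:
    """Hypothetical model of a Trusted Contact link.

    Illustrative only: OpenAI has not published a schema or API
    for this feature.
    """
    user_id: str
    contact_name: str
    contact_address: str  # email address or phone number
    sent_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    accepted: bool = False
    revoked: bool = False  # either party may sever the link at any time

    def is_active(self, now: datetime | None = None) -> bool:
        """Accepted and unrevoked links are active; a pending invitation
        expires once the seven-day acceptance window has passed."""
        now = now or datetime.now(timezone.utc)
        if self.revoked:
            return False
        if self.accepted:
            return True
        return now - self.sent_at < INVITE_TTL
```

The point of the sketch is simply that expiry and revocation are independent checks: a pending link can lapse without either party acting, and either party can sever it even after acceptance.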
When OpenAI’s automated systems flag a conversation as potentially dangerous, the chatbot first encourages the user to reach out to their designated Trusted Contact. If the user does not respond, a small team of specially trained staff reviews the exchange. After a brief assessment, the team may send a concise email, text or in‑app notification to the contact, warning them of a possible safety issue. Importantly, the notification does not include any chat transcripts or personal details beyond the fact that a concern was raised.
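The reporting describes a clear escalation order: nudge the user first, involve human reviewers only if the user does not respond, and keep any outbound alert free of chat content. A compact sketch of that decision logic, again with invented names and in no way OpenAI's actual code, might look like this:

```python
from enum import Enum, auto

class Outcome(Enum):
    NUDGED_USER = auto()       # user was pointed to their Trusted Contact
    NO_ACTION = auto()         # reviewers found no credible risk
    NOTIFIED_CONTACT = auto()  # a limited alert went out

def handle_flagged_conversation(user_responded: bool,
                                reviewer_confirms_risk: bool) -> tuple[Outcome, dict]:
    """Illustrative escalation logic for a flagged conversation,
    mirroring the sequence reported above. Not OpenAI's implementation."""
    if user_responded:
        # Step 1: the chatbot's nudge worked; no further escalation.
        return Outcome.NUDGED_USER, {}
    if not reviewer_confirms_risk:
        # Step 2: trained reviewers assessed the exchange and stood down.
        return Outcome.NO_ACTION, {}
    # Step 3: the alert payload is deliberately minimal; it carries no
    # transcript text or personal details beyond the fact of the concern.
    alert = {"type": "safety_concern_raised", "transcript_included": False}
    return Outcome.NOTIFIED_CONTACT, alert
```

Note that in this sketch the transcript exclusion is enforced by what the alert payload contains rather than by filtering it afterward, which matches OpenAI's claim that no chat content is shared with the contact.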
The Trusted Contact feature builds on an emergency‑contact option introduced in September, which followed a tragic case in which a 16‑year‑old who confided in ChatGPT took his own life. Meta has rolled out a comparable system for Instagram, alerting parents when minors repeatedly search for self‑harm content. OpenAI’s latest move signals a broader industry push to embed mental‑health safeguards directly into AI products.
OpenAI framed the addition as an “expert‑validated” approach, noting that connecting a person in crisis with someone they trust can make a meaningful difference. While the company highlighted the limited nature of the alerts, privacy advocates have raised questions about how the review team determines the seriousness of a flagged conversation and what data is retained. OpenAI maintains that the feature does not share chat content with the contact and that any review is conducted by a small, trained team.
Experts say the Trusted Contact option could fill a gap between anonymous AI assistance and professional help. By giving users a way to involve a personal support network, the feature may reduce reliance on generic crisis lines and encourage earlier intervention. As AI assistants become more ubiquitous, tools like Trusted Contact could become a standard part of responsible AI deployment.