OpenAI Adds Trusted Contact Feature to ChatGPT for Adult Users

Key Points
- OpenAI introduces Trusted Contact for adult ChatGPT users.
- Users can name a person aged 18 or older (19 in South Korea) to receive alerts.
- Alert triggers only after a human reviewer confirms a serious risk.
- Notification contains no chat transcript, only a brief warning.
- Feature complements existing teen‑account safety alerts.
- Developed with input from clinicians and mental‑health groups.
- Users and contacts can add, change, or remove the Trusted Contact at any time.
- OpenAI aims to complete human review within one hour.

OpenAI is rolling out a new Trusted Contact option for adult ChatGPT accounts. The feature lets users name a designated person who will be alerted if the AI detects a serious self‑harm concern. After a brief human review, the contact receives a notification without any chat transcript details. OpenAI says the safeguard aims to complement existing safety tools and crisis resources, while giving users more control over their digital wellbeing.
OpenAI began offering a Trusted Contact option to adult users of ChatGPT this week, extending the safety toolkit that already covers teen accounts. The feature appears in the app’s settings and lets a user nominate a single person—at least 18 years old (19 in South Korea)—who will be notified if the chatbot flags a conversation as potentially indicating self‑harm.
Setting up the contact is optional. Once a user selects a nominee, the app sends the contact an invitation that explains the role and offers a one‑week window to accept. If the invitation is declined, the user can choose someone else. The process does not share any part of the conversation; the alert simply states that self‑harm was mentioned in a concerning way and asks the contact to check in.
When ChatGPT’s algorithms detect language that may signal a serious risk, the system first informs the user that a Trusted Contact could be notified. It also suggests conversation starters to help the user reach out directly. A small team of specially trained human reviewers then evaluates the situation. If they confirm a genuine threat, the contact receives a notification via email, text message, or an in‑app alert. OpenAI aims to complete this human review within an hour.
The Trusted Contact feature builds on OpenAI’s broader safety efforts, which include alerts for linked teen accounts when signs of distress appear. Development involved clinicians, researchers, and mental‑health organizations such as the American Psychological Association. OpenAI stresses that the new tool does not replace crisis hotlines, emergency services, or professional care; the chatbot continues to direct users to those resources when needed.
Users retain full control over the feature. They can remove or replace their Trusted Contact at any time, and contacts can opt out themselves. By giving users a way to bring a trusted person into the loop, OpenAI hopes to offset the limits of AI‑driven conversation when deeply personal issues arise.
Industry observers note that the addition reflects a growing trend among AI providers to embed human‑in‑the‑loop safeguards. As chatbots become more embedded in daily life, platforms face pressure to address potential harms without compromising user privacy. OpenAI's approach—combining algorithmic detection, rapid human review, and minimal data sharing—offers one model for balancing safety with confidentiality.