OpenAI launches Trusted Contact feature to alert friends of users at risk of self‑harm

Key Points
- OpenAI introduced Trusted Contact, an optional feature that alerts a designated friend or family member if ChatGPT detects self‑harm language.
- The system first prompts the user to seek help; a human safety team then decides whether to send an alert.
- Alerts are brief, privacy‑focused messages sent via email, text or in‑app notification.
- The feature complements existing parental controls and automated prompts that direct users to professional help.
- Launch follows lawsuits claiming ChatGPT encouraged suicide, raising pressure for stronger safety measures.

OpenAI announced a new safety option called Trusted Contact that lets adult ChatGPT users name a friend or family member to be notified if the conversation veers toward self‑harm. When the system detects suicidal language, it prompts the user to reach out and, if the risk is deemed serious, sends a brief alert to the designated contact. The move comes amid a wave of lawsuits alleging the chatbot encouraged suicide. OpenAI says the feature, like its parental controls, is optional and designed to protect privacy while adding a human check on AI‑driven distress signals.
OpenAI rolled out a safety tool named Trusted Contact on Thursday, giving adult ChatGPT users the ability to designate a friend, relative or other trusted person to receive an alert if the model detects language suggesting self‑harm. The feature monitors conversations for specific triggers. When one is detected, the system first asks the user to consider reaching out for help. If the internal safety team judges the situation to pose a serious risk, an automated message, delivered by email, text or in‑app notification, is sent to the chosen contact urging them to check in.
The alert contains no details about the user’s conversation, a safeguard meant to preserve privacy while still prompting timely intervention. OpenAI stresses that Trusted Contact is strictly opt‑in; users must enable it themselves and can change or remove contacts at any time. The company also notes that the feature does not prevent a user from simply creating additional ChatGPT accounts, a limitation that mirrors its parental‑control offering introduced last September.
OpenAI’s safety infrastructure already blends automated detection with human review. When a conversation contains suicidal ideation, an algorithm flags the exchange and routes it to a human safety team, which the company says reviews each flagged case within an hour. If the team concludes the risk is high, the Trusted Contact alert is dispatched. This process adds a layer of human oversight to the existing automated prompts that encourage users to seek professional help.
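For readers curious how such a flag‑review‑dispatch flow might be structured, the minimal Python sketch below models the steps described above. The class names, risk threshold, channels and message text are assumptions made purely for illustration; OpenAI has not published the details of its implementation.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Channel(Enum):
    EMAIL = "email"
    TEXT = "text"
    IN_APP = "in_app"


@dataclass
class FlaggedExchange:
    """A conversation flagged by an automated self-harm classifier (hypothetical)."""
    user_id: str
    risk_score: float               # hypothetical classifier output in [0, 1]
    trusted_contact: Optional[str]  # set only if the user opted in


def human_review(exchange: FlaggedExchange) -> bool:
    """Stand-in for the human safety team's judgment (reviewed within an hour)."""
    # A real reviewer would weigh the full context; this stub just applies a threshold.
    return exchange.risk_score >= 0.9


def dispatch_alert(contact: str, channel: Channel) -> None:
    """Send a brief, privacy-preserving check-in request with no conversation details."""
    message = "Someone who named you as a trusted contact may need support. Please check in."
    print(f"[{channel.value}] to {contact}: {message}")


def handle_flag(exchange: FlaggedExchange) -> None:
    # Step 1 (not shown): prompt the user to seek help.
    # Step 2: escalate to human review; alert only if the user opted in
    # and the team judges the risk to be serious.
    if exchange.trusted_contact and human_review(exchange):
        dispatch_alert(exchange.trusted_contact, Channel.EMAIL)


# Example: an opted-in user whose flagged exchange the review stub deems high risk.
handle_flag(FlaggedExchange("u123", 0.95, "sibling@example.com"))
```

The key design point the sketch tries to capture is that the automated classifier never contacts anyone on its own: the alert goes out only after both the opt‑in check and the human‑review step pass.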
The announcement arrives as OpenAI faces a growing number of lawsuits from families who allege the chatbot encouraged their loved ones to take their own lives or even helped them plan how to do so. Critics have long argued that AI‑driven conversational agents need robust safeguards against harmful outcomes. By involving a real person in the loop, OpenAI hopes to address those concerns without compromising user confidentiality.
OpenAI frames the feature as part of a broader effort to make AI systems more supportive during moments of distress. In a blog post, the company said it will continue collaborating with clinicians, researchers and policymakers to refine its response to mental‑health crises. While the Trusted Contact tool is targeted at adult users, it sits alongside parental controls that let guardians receive safety notifications for teenage accounts, reflecting a tiered approach to risk mitigation across age groups.
Industry observers see the move as a notable step for AI safety, especially as AI chat assistants and content‑generation tools become more pervasive. By embedding a human‑centric safety check, OpenAI aims to set a precedent for responsible AI deployment, balancing the promise of conversational assistants with the real‑world need to protect vulnerable users.