OpenAI Seeks Memorial Attendee List in Teen Suicide Lawsuit

TechCrunch

Key Points

  • OpenAI asked the Raine family for a full list of memorial attendees and related media.
  • The lawsuit alleges OpenAI rushed the May 2024 release of GPT‑4o, cutting safety testing.
  • In February 2025, OpenAI allegedly removed suicide prevention from its disallowed content list.
  • Adam Raine’s daily ChatGPT interactions reportedly surged to about 300 per day in April.
  • OpenAI stresses teen wellbeing and points to new safety routing to GPT‑5 and parental‑control alerts.
  • Family lawyers called the memorial‑information request intentional harassment.
  • The legal dispute continues as both parties prepare for further court actions.

In a recent development in the wrongful‑death suit filed by the Raine family, OpenAI has requested a complete list of attendees at the memorial for their son, Adam Raine, who died by suicide after extensive conversations with ChatGPT. The request, reported in documents obtained by the Financial Times, appears to be part of the company's evidence‑gathering as the lawsuit alleges that OpenAI rushed the release of GPT‑4o, weakened suicide‑prevention safeguards, and allowed risky conversations to surge. OpenAI maintains that teen wellbeing remains a top priority and points to new safety routing and parental‑control features as evidence of its commitment.

Background of the Lawsuit

The Raine family filed a wrongful‑death lawsuit against OpenAI after their 16‑year‑old son, Adam Raine, died by suicide following prolonged interactions with the chatbot. The suit claims that OpenAI’s handling of safety measures contributed to the tragedy.

Allegations Regarding Model Release and Safety Testing

The complaint asserts that OpenAI accelerated the May 2024 release of GPT‑4o, cutting safety testing due to competitive pressure. It also alleges that in February 2025 the company removed suicide prevention from its “disallowed content” list, reducing it to a general advisory about “risky situations.”

Increase in Risky Interactions

According to the lawsuit, after the policy change Adam’s daily chats with ChatGPT rose dramatically, reaching about 300 per day in April, the month of his death, with a notable share containing self‑harm content.

OpenAI’s Request for Memorial Information

In documents obtained by the Financial Times, OpenAI asked the Raine family for a full list of attendees at Adam’s memorial, as well as any related videos, photographs, or eulogies. Lawyers for the family described the request as “intentional harassment.”

OpenAI’s Public Response

OpenAI responded by emphasizing that teen wellbeing is a top priority, citing existing safeguards such as directing users to crisis hotlines, routing sensitive conversations to newer models, prompting breaks during long sessions, and ongoing enhancements. The company highlighted a new safety‑routing system that directs emotionally sensitive chats to GPT‑5, which it says lacks the “sycophantic tendencies” of GPT‑4o, and parental‑control tools that can alert parents to potential self‑harm situations.

Current Status and Outlook

The lawsuit remains active, with the recent request for memorial details adding a new dimension to the legal battle. Both sides continue to prepare for further proceedings, while OpenAI maintains that it is strengthening protections for minors using its AI products.
