Musk Criticizes OpenAI’s Safety Record in Deposition, Claims Grok Not Linked to Suicides

Key Points
- Elon Musk’s deposition attacks OpenAI’s safety record and contrasts it with xAI’s approach.
- Musk states that no suicides have been linked to his Grok model, while suggesting that OpenAI's ChatGPT may be implicated in some.
- He said he signed the March 2023 AI safety letter to urge caution, not to advantage his own AI venture.
- Musk corrected his prior donation figure to OpenAI, noting it is closer to $44.8 million.
- The lawsuit alleges OpenAI’s shift to a for‑profit model violated its original nonprofit charter.
- xAI’s Grok faced investigations after non‑consensual nude images, some allegedly depicting minors, were generated.
- Musk expressed ongoing concerns about artificial general intelligence and Google’s AI dominance.

In a newly released deposition related to Elon Musk’s lawsuit against OpenAI, the billionaire accused the lab of neglecting safety, contrasting it with his own xAI venture. Musk asserted that no suicides have been linked to his company’s Grok model, while suggesting that OpenAI’s ChatGPT may be implicated. He reiterated his support for the March 2023 AI safety letter and explained his motivation for signing it. The testimony also touched on Musk’s past donation figures, concerns about AI monopolies, and the broader legal battle over OpenAI’s shift from nonprofit to for‑profit status.
Deposition Highlights Musk’s Attack on OpenAI
Elon Musk’s recent deposition, filed in his case against OpenAI, contains a forceful critique of the lab’s safety practices. Musk claimed that his own company, xAI, places safety ahead of speed and revenue, and he emphasized that “nobody has committed suicide because of Grok.” By contrast, he suggested that OpenAI’s ChatGPT has been linked to suicides, framing the issue as evidence of OpenAI’s lax safety oversight.
Musk also revisited his involvement with the March 2023 AI safety letter, which was signed by over 1,100 individuals. He said he signed the letter because it “seemed like a good idea” and because he wanted AI safety to be prioritized, not because he was already competing with OpenAI through his own AI venture.
Legal Context and Underlying Issues
The deposition is part of a broader lawsuit alleging that OpenAI’s transformation from a nonprofit research lab into a for‑profit entity violated its original founding agreements. Musk argues that the shift places commercial pressures above safety considerations, creating an “out‑of‑control race” to develop ever more powerful AI systems.
During the testimony, Musk corrected a prior statement about his financial contribution to OpenAI, noting that the actual figure is closer to $44.8 million rather than the previously cited $100 million.
Safety Concerns Around xAI’s Grok
Although Musk defended Grok, the model has attracted scrutiny after non‑consensual nude images—some allegedly depicting minors—were generated and circulated on the social network X. This prompted investigations by the California Attorney General’s office, the European Union, and authorities in other jurisdictions.
Broader AI Safety Concerns
Musk reiterated his long‑standing worries about the risks of artificial general intelligence (AGI). He also cited concerns about Google’s growing dominance in AI, recalling conversations with Google co‑founder Larry Page that he found “alarming” because of a perceived lack of focus on safety.
The deposition, filed ahead of an expected jury trial, underscores the high‑stakes legal and ethical battles surrounding AI development, safety, and corporate governance.