OpenAI backs Kids Online Safety Act amid mounting legal challenges

Key Points
- OpenAI joins Apple, Microsoft, Snap and X in endorsing the Kids Online Safety Act (KOSA).
- KOSA requires platforms to let minors opt out of addictive features and algorithmic recommendations.
- The bill imposes a duty of care to curb content encouraging eating disorders, suicide or sexual exploitation.
- NetChoice and the Electronic Frontier Foundation oppose KOSA, citing concerns over censorship.
- OpenAI faces lawsuits alleging its chatbot contributed to a teen suicide and a drug overdose.
- Chief Global Affairs Officer Chris Lehane says KOSA complements OpenAI's existing safety work.
- The legislation cleared the Senate in 2024 and now heads to the House for a vote.

OpenAI announced its endorsement of the Kids Online Safety Act (KOSA), joining Apple, Microsoft, Snap and X in supporting the bill that would tighten protections for minors on digital platforms. The company framed the move as part of a broader push for AI‑specific safety rules, citing the tech industry's past failures to shield teens from harmful content. KOSA, which cleared the Senate in 2024, mandates opt‑out options for addictive features and a duty of care to curb content that encourages eating disorders, suicide or sexual exploitation. OpenAI’s stance comes as it faces lawsuits alleging its own chatbot contributed to a teen’s suicide and another’s overdose.
OpenAI, the creator of ChatGPT, threw its weight behind the Kids Online Safety Act (KOSA) on Tuesday, aligning with tech giants Apple, Microsoft, Snap and X. The company said its endorsement reflects a "broader commitment to create AI‑specific rules" for protecting children online.
KOSA, first introduced in 2022, cleared the Senate in 2024 and is gaining legislative traction. The bill would compel social‑media apps and other online services to let minors opt out of "addictive" features and algorithmic recommendations. It also imposes a "duty of care" requiring platforms to mitigate harmful content that promotes eating disorders, suicide or sexual exploitation.
OpenAI’s chief global affairs officer, Chris Lehane, warned that the tech industry cannot repeat the errors of early social‑media platforms, which delayed safeguards until they were already woven into young people’s lives. "We can’t repeat the mistakes made during the rise of social media, when stronger safeguards for teens weren't put in place until the platforms were already deeply embedded in young people's lives," Lehane said in a statement.
Apple, Microsoft, Snap and X, which have likewise endorsed KOSA, cite concerns that unchecked algorithmic feeds expose children to harmful material. The bill faces opposition, however, from NetChoice, a trade group whose members include Meta, which argues that KOSA could enable censorship without delivering real safety benefits. Digital‑rights advocates such as the Electronic Frontier Foundation have also criticized the legislation, contending that it may overreach and stifle free expression.
OpenAI’s endorsement arrives at a fraught moment for the company, which is defending against a series of lawsuits alleging that its chatbot contributed to tragic outcomes. One suit, filed by the family of a teenager who died by suicide, claims the teen discussed his plans with ChatGPT, which failed to intervene. Another case alleges a teen overdosed on drugs after receiving inaccurate medical advice from the same system. Both lawsuits underscore the heightened scrutiny around AI safety and the pressure on developers to embed robust safeguards.
In response, OpenAI has pledged to expand its safety measures, emphasizing that KOSA complements its existing efforts. The company has already rolled out content filters, age‑appropriate usage guidelines and ongoing monitoring of harmful interactions. By backing the bill, OpenAI hopes to shape the regulatory framework that will govern AI tools used by minors.
Lawmakers on both sides of the aisle have expressed interest in the bill’s provisions. Proponents argue that a clear legal standard will force platforms to prioritize child safety, while opponents caution that mandatory opt‑out mechanisms could limit innovation and user choice. The next step for KOSA is a vote in the House, where its fate will likely hinge on negotiations over the scope of the duty‑of‑care requirements.
Regardless of the outcome, OpenAI’s public support signals a shift toward proactive engagement with policymakers. As AI systems become increasingly integrated into everyday life, the company appears resolved to influence the rules that will shape their impact on the next generation.