OpenAI Backs Illinois Bill to Shield AI Labs from Liability for Mass Harm

Wired AI

Key Points

  • OpenAI testified in support of Illinois SB 3444, which would limit liability for AI labs in cases of mass casualties or $1 billion‑plus property loss.
  • The bill defines a "frontier model" as any AI system trained with over $100 million in compute costs, covering major players like OpenAI, Google, xAI, Anthropic and Meta.
  • To qualify for protection, labs must not act intentionally or recklessly and must publish safety, security and transparency reports on their websites.
  • OpenAI spokesperson Jamie Radice said the measure avoids a patchwork of state regulations and promotes consistent national standards.
  • Critics, including policy director Scott Wisor, argue the bill could reduce accountability and note strong public opposition in Illinois.
  • Illinois has previously passed AI‑related laws, such as restrictions on AI use in mental‑health services and the Biometric Information Privacy Act.
  • Federal AI liability legislation remains absent, leaving states to craft varied approaches that could impact industry innovation.

OpenAI testified in favor of Illinois Senate Bill 3444, which would protect developers of frontier AI models from civil liability for "critical harms" such as mass casualties or billion‑dollar property damage, provided they publish safety reports and avoid reckless conduct. The legislation defines a frontier model as one trained with over $100 million in compute costs and aims to create uniform standards while limiting state‑by‑state regulatory patches. Critics warn the bill could reduce accountability, but OpenAI argues it balances safety with innovation.

Chicago – OpenAI stepped onto the legislative stage on April 9, testifying for Illinois Senate Bill 3444, a proposal that would bar AI labs from being sued for "critical harms" caused by their most advanced models. The bill targets incidents that result in 100 or more deaths or injuries, or at least $1 billion in property damage, provided the lab did not act intentionally or recklessly and had posted safety, security and transparency reports online.

The legislation defines a "frontier model" as any AI system trained with more than $100 million in computational costs. That definition captures the industry’s biggest players – OpenAI, Google, xAI, Anthropic and Meta – whose models routinely exceed that threshold.

"We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois," OpenAI spokesperson Jamie Radice said in an emailed statement. "They also help avoid a patchwork of state‑by‑state rules and move toward clearer, more consistent national standards."

OpenAI’s global affairs representative Caitlin Niedermeyer, who testified before the Senate committee, echoed the company's stance on federal harmonization. She warned that a fragmented landscape of state regulations could create friction without improving safety, and urged Congress to adopt a unified framework that would "reinforce a path toward harmonization with federal systems."

Critics remain skeptical. Scott Wisor, policy director for the Secure AI project, told WIRED the bill faces slim odds of passage, noting that a recent Illinois poll showed 90 percent of residents oppose exempting AI firms from liability. He added that the state has already pursued stricter AI rules, such as limiting the use of AI in mental‑health services and enforcing the Biometric Information Privacy Act.

SB 3444 lists several scenarios that qualify as critical harms, including the creation of chemical, biological, radiological or nuclear weapons by a bad actor using AI, and autonomous AI conduct that would be criminal if performed by a human. Under the bill, a lab would escape liability if it had not intentionally or recklessly caused the outcome and had complied with reporting requirements.

Federal lawmakers have yet to pass any AI‑specific liability framework, leaving states to experiment with their own approaches. Illinois joins California and New York, which have enacted bills requiring AI developers to submit safety and transparency reports. The lack of a national standard leaves companies navigating a patchwork of regulations that could hinder innovation.

OpenAI’s endorsement marks a shift from its previous defensive posture, where the company opposed measures that could expose it to lawsuits over its technology. The firm now appears to favor legislation that it believes will protect both public safety and the competitive edge of U.S. AI research.

Family members of children who died by suicide after allegedly forming unhealthy relationships with ChatGPT have filed lawsuits against OpenAI, underscoring the individual-level harms that also draw scrutiny. While SB 3444 concentrates on large-scale events, the broader debate continues over how to address both individual and societal risks posed by increasingly powerful AI models.

As the industry watches Illinois’ effort, the outcome could set a precedent for how the United States balances accountability with rapid AI advancement.

