Anthropic Opposes Illinois AI Liability Bill Backed by OpenAI

Wired AI

Key Points

  • Anthropic publicly opposes Illinois Senate Bill 3444, which would shield AI labs from liability for large‑scale harms.
  • OpenAI supports the bill, arguing it balances safety with continued access to AI technology.
  • The bill would exempt developers who publish safety frameworks from lawsuits even if their models are misused.
  • Anthropic is lobbying Senator Bill Cunningham for major revisions or the bill’s defeat.
  • Governor JB Pritzker’s office will monitor AI legislation but cautions against granting full immunity.
  • Anthropic backs a different state bill, SB 3261, demanding audited safety and child‑protection plans from AI firms.
  • Legal experts warn SB 3444 could undermine existing common‑law liability that motivates risk mitigation.

Anthropic has formally rejected Illinois Senate Bill 3444, a proposal that would shield AI developers from liability for large‑scale harms such as mass casualties or billion‑dollar property losses. The bill, championed by state Senator Bill Cunningham and supported by OpenAI, would exempt labs that publish safety frameworks from responsibility if their models are misused. Anthropic’s U.S. state‑government liaison, Cesar Fernandez, called the measure a “get‑out‑of‑jail‑free card,” urging lawmakers instead to pair transparency with real accountability. Illinois officials, including Governor JB Pritzker’s office, have signaled they will monitor the legislation but are wary of granting blanket immunity.

Anthropic entered the Illinois Capitol with a clear message: Senate Bill 3444, the state’s proposed AI liability shield, must be reworked or abandoned. The bill, which would protect AI labs from civil responsibility for catastrophic outcomes—such as a bioweapon that kills hundreds or damage exceeding $1 billion—has drawn backing from OpenAI, the maker of ChatGPT. In a statement to WIRED, Anthropic’s head of U.S. state and local government relations, Cesar Fernandez, said the legislation would give developers a “get‑out‑of‑jail‑free card” and fail to ensure public safety.

According to sources familiar with the lobbying effort, Anthropic has been meeting with Senator Bill Cunningham, the bill’s sponsor, and other Illinois lawmakers to push for substantial revisions. The company argues that any transparency law must also embed accountability mechanisms that compel developers to mitigate the most serious harms their frontier AI systems could cause. "We know that Senator Cunningham cares deeply about AI safety, and we look forward to working with him on changes that would instead pair transparency with real accountability," Fernandez added.

OpenAI, by contrast, defends SB 3444 as a balanced approach that reduces the risk of severe damage while keeping the technology accessible to businesses and consumers across the state. In a statement, OpenAI spokesperson Liz Bourgeois said the company has been collaborating with states such as New York and California to create a “harmonized” regulatory framework. "In the absence of federal action, we will continue to work with states—including Illinois—to work toward a consistent safety framework," she said, adding that the state bills could inform a national standard.

The disagreement hinges on who should bear liability when an AI system is weaponized or otherwise misused. Under the current draft, a lab that publishes a safety framework on its website would be insulated from lawsuits even if a bad actor repurposes its model for lethal outcomes. Critics, including Thomas Woodside of the Secure AI Project, warn that the bill would erode existing common‑law liability, which already incentivizes companies to address foreseeable risks.

Anthropic has also testified in favor of a separate Illinois proposal, Senate Bill 3261, which would require frontier AI developers to create publicly vetted safety and child‑protection plans subject to third‑party audits. If enacted, SB 3261 could become one of the nation’s toughest AI safety statutes.

The clash between the two leading AI labs reflects a broader strategic battle over how emerging technologies are governed. Anthropic, founded by former OpenAI staff, has positioned itself as a vocal advocate for robust safeguards, a stance that has drawn political criticism. The Trump administration’s AI and crypto czar, David Sacks, once dismissed Anthropic’s regulatory push as “fear‑mongering.”

Illinois Governor JB Pritzker’s office has issued a statement noting that the governor’s team will monitor the flood of AI‑related bills moving through the General Assembly. While the governor has not endorsed a full liability shield, his office emphasized that big‑tech firms should not be allowed to evade responsibilities that protect the public interest.

Although experts agree the likelihood of SB 3444 becoming law is low, the debate has already exposed a rift between two of the nation’s most influential AI developers. As both companies ramp up lobbying efforts nationwide, the outcome of Illinois’ legislative battles could set a precedent for how states address the complex question of AI accountability.

Tags: Artificial Intelligence, AI regulation, AI liability, Anthropic, OpenAI, Illinois, SB 3444, AI safety, Tech policy, State legislation