California Senator Scott Wiener Pushes AI Safety Bill SB 53 Amid Industry Debate

Scott Wiener on his fight to make Big Tech disclose AI’s dangers
TechCrunch

Key Points

  • Senator Scott Wiener introduces SB 53 to require safety reports from AI firms with more than $500 million in annual revenue.
  • Bill targets catastrophic AI risks: human deaths, large‑scale cyber‑attacks, chemical weapons creation.
  • Anthropic publicly backs SB 53; Meta and former White House adviser also express support.
  • OpenAI and some industry groups argue for federal‑only regulation and raise constitutional concerns.
  • SB 53 creates protected employee reporting channels and proposes a state‑run cloud cluster, CalCompute.
  • Governor Gavin Newsom must decide whether to sign or veto the bill after previously vetoing SB 1047.

California Senator Scott Wiener is championing a new AI safety bill, SB 53, after his earlier proposal, SB 1047, was vetoed by Governor Gavin Newsom. The legislation would require major AI firms with more than $500 million in annual revenue to publish safety reports on their most advanced models and establish protected channels for employee concerns. Anthropic has endorsed the bill, while OpenAI and some industry groups argue that federal standards should apply instead. The bill also proposes a state‑run cloud computing cluster, CalCompute, to support AI research beyond Big Tech. Wiener argues that state action is essential because he doubts the federal government will make progress on AI safety.

Background and Legislative History

California State Senator Scott Wiener has long pursued AI safety regulation. In 2024 he introduced SB 1047, a bill that would have made tech companies liable for harms caused by AI systems. Governor Gavin Newsom vetoed SB 1047, citing concerns that the legislation could stifle the state’s AI boom.

New Proposal: SB 53

Wiener’s latest effort, SB 53, is now awaiting the governor’s decision. The bill narrows its focus to the largest AI developers—those with more than $500 million in annual revenue—and requires them to publish safety reports for their most capable models. The reports must address the most catastrophic risks, including the potential for AI to cause human deaths, enable large‑scale cyber‑attacks, or facilitate the creation of chemical weapons.

In addition to reporting requirements, SB 53 creates protected channels for AI‑lab employees to report safety concerns directly to government officials. It also establishes a state‑operated cloud computing cluster, named CalCompute, intended to provide AI research resources that are independent of the big‑tech infrastructure.

Industry Reactions

Anthropic has publicly endorsed SB 53, describing the legislation as a constructive step toward responsible AI development. A Meta spokesperson expressed support for AI regulation that balances safety guardrails with innovation, calling SB 53 “a step in that direction.” Former White House AI policy adviser Dean Ball praised the bill as “a victory for reasonable voices.”

OpenAI, however, argued that AI labs should only be subject to federal standards, suggesting that state‑level requirements could create a patchwork of regulations. Andreessen Horowitz raised concerns that certain state AI bills might run afoul of the Constitution’s dormant Commerce Clause, which limits states from unduly restricting interstate commerce.

Policy Context and Political Dynamics

Wiener has expressed skepticism about the federal government’s ability to enact meaningful AI safety measures, stating that states need to act when federal action stalls. He also criticized recent federal efforts that aim to block state AI laws, describing them as favoring industry interests.

The political backdrop includes debates over whether AI regulation should be handled at the federal level or left to individual states like California. While some industry leaders advocate for a uniform federal approach, others see state initiatives like SB 53 as essential to ensuring transparency and accountability in AI development.

Potential Impact

If signed, SB 53 would be among the first state laws to impose structured safety reporting on AI giants such as OpenAI, Anthropic, xAI, and Google. By focusing on the most severe risks and providing mechanisms for employee whistleblowing, the bill aims to improve public safety without unduly hampering innovation. Its passage could also set a precedent for other states considering similar AI oversight measures.
