California’s SB 53 AI Safety Bill Targets Big AI Companies

Key Points
- SB 53 focuses on AI developers with annual revenue over $500 million.
- Requires safety report publication and incident reporting to the state.
- Creates a protected channel for employee safety concerns.
- Exempts most smaller startups, limiting extensive reporting to large firms.
- Backed by AI company Anthropic and praised as a meaningful check on big AI labs.
- Reflects California’s push for AI safety amid a less regulatory federal stance.
- Featured on TechCrunch’s Equity podcast, highlighting its industry relevance.

California’s Senate has approved SB 53, an AI safety bill that will be sent to Governor Gavin Newsom for signature. The legislation focuses on AI developers earning more than $500 million annually, requiring them to publish safety reports and report incidents to the state. It also creates a protected channel for employee concerns. Supporters cite the bill as a meaningful check on large AI firms such as OpenAI and Google DeepMind, while noting that smaller startups are largely exempt. The bill has earned backing from AI company Anthropic and reflects a state‑level push amid a federal environment that is less inclined toward regulation.

Background
California’s state Senate recently gave final approval to SB 53, an AI safety bill that now moves to Governor Gavin Newsom for signature or veto. The legislation follows a broader effort by Senator Scott Wiener that was vetoed last year; SB 53 narrows its focus to larger AI developers, specifically those generating more than $500 million in annual revenue.
Key Provisions
SB 53 requires qualifying AI companies to publish safety reports for their models and to report any incidents to the state government. It also establishes a confidential channel for employees to raise safety concerns without fear of retaliation, addressing the challenge of non‑disclosure agreements that often limit internal reporting.
The bill deliberately excludes most smaller startups, limiting extensive reporting requirements to major players such as OpenAI and Google DeepMind. Smaller firms must share limited safety information but are not subject to the full reporting regime.
Industry Reaction
The bill has received endorsement from AI company Anthropic, which sees the targeted approach as a balanced way to ensure safety without stifling innovation among smaller companies. Proponents argue that the legislation provides one of the few practical checks on the growing power of large AI firms.
Critics note that while the carve‑outs aim to protect the startup ecosystem, the reporting obligations could still pose compliance challenges for the biggest AI labs.
Political Context
SB 53 emerges amid a broader national conversation about AI regulation. While the federal administration has signaled a hands‑off approach, including proposed language in funding bills that could limit state‑level AI regulation, no such measures have been enacted. The California bill therefore represents a state‑level effort to address AI safety concerns amid differing federal and state regulatory philosophies.
TechCrunch highlighted the bill on its flagship podcast, Equity, discussing its potential impact with hosts Max Zeff and Kirsten Korosec. The conversation underscored the significance of California as a hub for AI activity and the importance of targeted regulation that balances safety with innovation.