On Monday, Anthropic announced an official endorsement of SB 53, a California bill from state senator Scott Wiener that would impose first-in-the-nation transparency requirements on the world's largest AI model developers. Anthropic's endorsement marks a rare and major win for SB 53, at a time when major tech groups like the Consumer Technology Association (CTA) and Chamber for Progress are lobbying against the bill.
"While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won't wait for consensus in Washington," said Anthropic in a blog post. "The question isn't whether we need AI governance; it's whether we'll develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former."
If passed, SB 53 would require frontier AI model developers like OpenAI, Anthropic, Google, and xAI to develop safety frameworks, as well as release public safety and security reports before deploying powerful AI models. The bill would also establish whistleblower protections for employees who come forward with safety concerns.
Senator Wiener's bill specifically focuses on limiting AI models from contributing to "catastrophic risks," which the bill defines as the death of at least 50 people or more than a billion dollars in damages. SB 53 targets the extreme end of AI risk (limiting AI models from being used to provide expert-level assistance in the creation of biological weapons, or from being used in cyberattacks) rather than more near-term concerns like AI deepfakes or sycophancy.
California's Senate approved a prior version of SB 53 but still needs to hold a final vote on the bill before it can advance to the governor's desk. Governor Gavin Newsom has stayed silent on the bill so far, though he vetoed Senator Wiener's last AI safety bill, SB 1047.
Bills regulating frontier AI model developers have faced significant pushback from both Silicon Valley and the Trump administration, which both argue that such efforts could limit America's innovation in the race against China. Investors like Andreessen Horowitz and Y Combinator led some of the pushback against SB 1047, and in recent months, the Trump administration has repeatedly threatened to block states from passing AI regulation altogether.
One of the most common arguments against AI safety bills is that states should leave the matter to the federal government. Andreessen Horowitz's head of AI policy, Matt Perault, and chief legal officer, Jai Ramaswamy, published a blog post last week arguing that many of today's state AI bills risk violating the Constitution's Commerce Clause, which limits state governments from passing laws that reach beyond their borders and impair interstate commerce.
However, Anthropic co-founder Jack Clark argues in a post on X that the tech industry will build powerful AI systems in the coming years and can't wait for the federal government to act.
"We have long said we would prefer a federal standard," said Clark. "But in the absence of that, this creates a solid blueprint for AI governance that cannot be ignored."
OpenAI's chief global affairs officer, Chris Lehane, sent a letter to Governor Newsom in August arguing that he shouldn't pass any AI regulation that would push startups out of California, though the letter didn't mention SB 53 by name.
OpenAI's former head of policy research, Miles Brundage, said in a post on X that Lehane's letter was "filled with misleading garbage about SB 53 and AI policy generally." Notably, SB 53 aims to regulate only the world's largest AI companies, specifically those that generated gross revenue of more than $500 million.
Despite the criticism, policy experts say SB 53 is a more modest approach than previous AI safety bills. Dean Ball, a senior fellow at the Foundation for American Innovation and former White House AI policy adviser, said in an August blog post that he believes SB 53 now has a good chance of becoming law. Ball, who criticized SB 1047, said SB 53's drafters have "shown respect for technical reality," as well as a "measure of legislative restraint."
Senator Wiener previously said that SB 53 was heavily influenced by an expert policy panel Governor Newsom convened, co-led by leading Stanford researcher and World Labs co-founder Fei-Fei Li, to advise California on how to regulate AI.
Most AI labs already have some version of the internal safety policy that SB 53 requires. OpenAI, Google DeepMind, and Anthropic regularly publish safety reports for their models. However, these companies aren't bound by anyone but themselves, so they sometimes fall behind their self-imposed safety commitments. SB 53 aims to enshrine these requirements in state law, with financial penalties if an AI lab fails to comply.
Earlier in September, California lawmakers amended SB 53 to remove a section of the bill that would have required AI model developers to be audited by third parties. Tech companies have previously fought these kinds of third-party audits in other AI policy battles, arguing that they're overly burdensome.