The author of California’s SB 1047, the nation’s most controversial AI safety bill of 2024, is back with a new AI bill that could shake up Silicon Valley.
California state Senator Scott Wiener introduced a new bill on Friday that would protect employees at leading AI labs, allowing them to speak out if they believe their company’s AI systems could pose a “critical risk” to society. The new bill, SB 53, would also create a public cloud computing cluster, called CalCompute, to give researchers and startups the computing resources needed to develop AI that benefits the public.
Wiener’s previous AI bill, California’s SB 1047, sparked a heated debate across the country over how to handle large AI systems that could cause disasters. SB 1047 aimed to prevent the possibility of very large AI models creating catastrophic events, such as causing loss of life or cyberattacks costing more than $500 million in damages. However, Governor Gavin Newsom ultimately vetoed the bill in September, saying SB 1047 was not the best approach.
But the debate over SB 1047 quickly turned ugly. Some Silicon Valley leaders said SB 1047 would hurt America’s competitive edge in the global AI race, and claimed the bill was inspired by unrealistic fears that AI systems could lead to science fiction-like doomsday scenarios. Meanwhile, Senator Wiener alleged that some venture capitalists engaged in a “propaganda campaign” against his bill, pointing in part to Y Combinator’s claim that SB 1047 would send startup founders to jail, a claim experts argued was misleading.
SB 53 essentially takes the least controversial parts of SB 1047 – such as whistleblower protections and the establishment of a CalCompute cluster – and repackages them into a new AI bill.
Notably, Wiener isn’t shying away from existential AI risk in SB 53. The new bill specifically protects whistleblowers who believe their employers are creating AI systems that pose a “critical risk.” The bill defines critical risk as a “foreseeable or material risk that a developer’s development, storage, or deployment of a foundation model, as defined, will result in the death of, or serious injury to, more than 100 people, or more than $1 billion in damage to rights in money or property.”
SB 53 limits developers of frontier AI models – likely including OpenAI, Anthropic, and xAI, among others – from retaliating against employees who disclose concerning information to California’s Attorney General, federal authorities, or other employees. Under the bill, these developers would be required to report back to whistleblowers on certain internal processes the whistleblowers find concerning.
As for CalCompute, SB 53 would establish a group to build out a public cloud computing cluster. The group would consist of University of California representatives, as well as other public and private researchers. It would make recommendations for how to build CalCompute, how large the cluster should be, and which users and organizations should have access to it.
Of course, it’s very early in the legislative process for SB 53. The bill needs to be reviewed and passed by California’s legislative bodies before it reaches Governor Newsom’s desk. State lawmakers will surely be waiting for Silicon Valley’s reaction to SB 53.
Still, 2025 may be a tougher year to pass AI safety bills than 2024 was. California passed 18 AI-related bills in 2024, but now it seems as if the AI doom movement has lost ground.
Vice President J.D. Vance signaled at the Paris AI Action Summit that America isn’t interested in AI safety, but rather prioritizes AI innovation. While the CalCompute cluster established by SB 53 could certainly be seen as advancing AI progress, it’s unclear how legislative efforts around existential AI risk will fare in 2025.