California Governor Gavin Newsom has vetoed Senate Bill 1047 (SB1047), also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, a proposed AI safety bill that aimed to regulate the development and deployment of advanced AI models. The decision has sparked mixed reactions from lawmakers, tech industry leaders and advocacy groups.
The bill, introduced by Democratic State Senator Scott Wiener, aimed to require safety testing for advanced AI models to prevent "catastrophic harm" before public release and hold developers liable for any damage caused by their systems. It targeted models costing over $100 million to develop or requiring significant computing power and proposed creating a state entity to oversee the development of "Frontier Models" with capabilities surpassing existing AI systems.
Governor Newsom vetoed the bill, arguing that it applied uniform standards to all AI systems without considering the different environments in which they are used or their associated levels of risk. In a letter to the state Senate, Newsom emphasized the need for an empirical, science-based approach to regulating AI, noting that he has requested leading experts on generative AI to help the state develop effective safety measures.
The tech industry largely welcomed the veto. Chamber of Progress, a tech coalition, praised the decision, saying that California's tech economy thrives on competition and openness. Major AI developers like Google, Meta and OpenAI opposed the bill, arguing that it could hinder innovation and weaken both the state's and the U.S.'s global competitiveness in AI development.
Supporters of the bill, including Senator Wiener, expressed disappointment over the veto, warning that it leaves powerful AI developers unregulated and "makes California less safe." Wiener criticized the AI industry's voluntary commitments to safety as often unenforceable and ineffective.
Proponents of the bill, including AI safety advocates and Tesla CEO Elon Musk, had backed it as a step toward responsible AI development. However, some AI experts sided with Newsom's view, advocating for a balanced, evidence-based approach to regulation. Fei-Fei Li, co-director of Stanford's Institute for Human-Centered Artificial Intelligence, agreed with the governor's call for careful regulation that both mitigates risks and supports innovation.
(With input from agencies)