California Gov. Gavin Newsom (D) on Sunday vetoed a landmark artificial intelligence (AI) bill that would have created new safety rules for the emerging tech, handing much of Silicon Valley a major win.
Newsom’s veto caps off weeks of skepticism over how he would act on the controversial legislation, known as California Senate Bill 1047, or the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
In a veto message published Sunday, the governor said the bill’s focus on the “most expensive and large-scale models” “could give the public a false sense of security about controlling” AI.
“Smaller, specialized models may emerge equally or even more dangerous than the models targeted by SB 1047 – at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good,” he wrote.
The legislation was sent to his desk late last month after it passed the state Legislature, and his veto came just a day before the Monday deadline.
The bill, known as SB 1047, would have required powerful AI models to undergo safety testing before being released to the public. This might have included, for example, testing whether the models could be manipulated to hack into the state’s electric grid.
It also sought to hold developers liable for severe harm caused by their models, but would have applied only to AI systems that cost more than $100 million to train, a threshold no current model has reached.
“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom wrote. “Instead, the bill applies stringent standards to even the most basic functions – so long as a large system deploys it.”
Newsom has often indicated skepticism about reining in AI technology, which stands to bring large amounts of money to the Golden State. California is home to 32 of the world’s “50 leading AI companies,” according to Newsom’s office, and has become a major hub for AI-related legislation as a result.
The governor stressed that his veto does not mean he disagrees with the bill author’s argument that there is an urgent need to act on the advancing technology to prevent major catastrophe.
“California will not abandon its responsibility,” he said, adding, “Proactive guardrails should be implemented and severe consequences for bad actors must be clear and enforceable.”
Any solution, Newsom argued, should be informed by an “empirical trajectory analysis” of AI systems and their capabilities.
The bill received mixed opinions from AI startups, major technology firms, researchers and even some lawmakers who were divided over whether it would throttle the development of the technology or establish much-needed guardrails.
Those on both sides of the argument piled pressure on Newsom over the past few months.
Several of the country’s leading tech firms, including OpenAI, Google and Meta – the parent company of Facebook and Instagram – expressed concerns the legislation would have targeted developers rather than the abusers of AI and argued safety regulations for the technology should be decided on a federal level.
Meanwhile, Anthropic, a leading AI startup, said last month the benefits of the bill likely would have outweighed the risks.
Last week, more than 120 Hollywood figures wrote an open letter urging Newsom to sign the legislation, writing that the “most powerful AI models may soon pose severe risks.” Earlier this month, over 100 current or former employees of leading AI companies – including OpenAI, Anthropic, Google’s DeepMind and Meta – also wrote to Newsom, warning of these same risks.
Congressional lawmakers joined the debate too, with former Speaker Nancy Pelosi (D-Calif.) and some other California politicians coming out against the bill. Pelosi last month said “many” in Congress viewed the legislation as “well-intentioned but ill informed.”
Newsom pushed back on the argument that California should not have a role in a bill with nationwide implications.
“A California-only approach may well be warranted – especially absent federal action by Congress – but it must be based on empirical evidence and science,” he said, pointing to the federal and state-based risk analyses currently being done on AI.
The governor signed a series of other bills earlier this month aimed at preventing abuses of AI and placing guardrails on the emerging tech.
Three of those bills are aimed at preventing the misuse of sexually explicit deepfakes: AI-generated images, audio and video that digitally alter a person’s likeness and voice. He signed two other bills aimed at protecting actors and performers from having their names, images and likenesses copied by artificial intelligence without authorization.