Newsom throws AI regulation fight into uncertainty with veto 

California Gov. Gavin Newsom's (D) decision to veto a sweeping artificial intelligence (AI) bill has renewed the debate over the future of AI regulation, leaving sectors of the tech industry divided over the best path forward.

While Newsom's veto of California Senate Bill 1047 may have put the contentious measure to rest – at least for now – it has state legislators, AI experts and tech advocacy groups at odds over what comes next for the emerging tech.

Some tech advocacy groups quickly voiced their disappointment with the veto of SB 1047 – formally the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act – arguing California missed the chance to lead with first-of-its-kind regulations on some of the country's largest AI developers.

“This [SB 1047] was the first of its kind legislation that went and put real safeguards in place for some of the biggest and scariest unknown potential uses of AI – which particularly given the rapid advancement of the technology – is really important for us to have those guardrails in place moving forward,” Kaili Lambe, the policy and advocacy director for Accountable Tech, told The Hill.  

SB 1047 would have required powerful AI models to undergo safety testing before being released to the public. Testing would have examined whether these systems could be manipulated by malicious actors for harm, such as hacking into the state’s electric grid. 

It also sought to hold AI developers liable for severe harm caused by their models, but it would have applied only to AI systems that cost more than $100 million to train, a threshold no current model has reached.

Lambe said the bill's failure concerns her, given that regulation and legislation often move slowly while the technology moves "fast."

Landon Klein, the director of U.S. policy for the Future of Life Institute (FLI), agreed, saying regulation urgently needs to keep pace with the technology's rapid development. FLI is a nonprofit organization focused on existential risks to society.

“One year is a lifetime in terms of the generations of these systems and there’s considerable risk over the course of that year,” he said. “And we also run the risk of sort of this broader integration of the technology across society that makes it more difficult to regulate in the future.” 

In his veto message Sunday, Newsom said the bill was “well-intentioned,” but “does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data.”  

“Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it,” he said, advocating for an “empirical trajectory analysis” of AI systems and their capabilities before a solution can be found. 

The veto appears to kick the can down the road on the issue, though Newsom said Sunday the state is planning to partner with leading industry experts, including Dr. Fei-Fei Li, who is known as the "godmother of AI," to develop guardrails based on an "empirical, science-based trajectory analysis."

Klein suggested this initiative could be a "little too late."

“By the time the process is completed, at least one if not several new generations of increasingly powerful systems are going to come out, all of these under the same sort of profit incentive structure that has driven a lot of the largest companies to cut corners on safety,” he said.  

While disappointed, Lambe suggested it is not the end of the road for regulation.  

“We’re really going to hold Gov. Newsom to his word that he’s going to continue to try to find solutions here,” she said, adding, “I do think that what you should see in this next legislative session [are] numerous bills in multiple states that put forward other AI safety frameworks, which I think will hopefully put pressure on the federal government to also take action.”  

Sneha Revanur, the founder of Encode Justice – a global youth advocacy group that co-sponsored SB 1047 – also stressed that advocates will keep pushing for regulation.

“We’ll be back soon, whether that’s California or another stage, or federally or internationally, we will be back soon,” she told The Hill.  

The bill captured national attention in recent weeks, and coupled with Newsom's veto, Revanur and others believe it is raising awareness of the push for regulation.

“People are seeing the global stakes of this political battle. And I think that that, in and of itself, is a remarkable achievement, and it means that the AI safety movement is really picking up steam and getting places. And I mean that is just us sort of building the foundations for our next victory,” she said.  

California State Sen. Scott Wiener (D), the author of SB 1047, called the veto a “setback” but said the fight for the bill “dramatically advanced the issue of AI safety on the international stage.”  

Meanwhile, some AI and software experts cautioned against the push for regulation and applauded Newsom's move to veto the bill.

Some told The Hill more evidence is needed before lawmakers start placing guardrails on the tech, including further research into the specific risks of AI development and the most effective responses once those risks are identified.

"This is the big question that industry and academia is wrestling with," said Daniel Castro, the vice president at the Information Technology and Innovation Foundation.

“How do we test these systems? How do we evaluate them? To create a law requiring this at this point was definitely…putting the cart before the horse,” he said.

The latest debate over SB 1047 is just one of many conversations about how to regulate AI. Newsom signed a host of other AI regulatory bills this month aimed at preventing abuses of the technology, including sexually explicit deepfakes. At the federal level, more than 100 AI bills have been introduced in Congress, which will have less than two months to pass legislation when it returns from recess in November.

One way to begin the regulatory process is to encourage open-source AI, in which developers make public both their models and the data used to train them, suggested Jason Corso, a professor of robotics, electrical engineering and computer science at the University of Michigan.

"I think it requires a willingness to share not only resulting models, but also the data that went into them, so we can better understand those relationships," said Corso, who is also the co-founder and CEO of computer vision startup Voxel51.

"It requires a need for even better tooling around analyzing models and analyzing data sets, and I hope to see a community-driven safety mechanism in place without the need for a government mechanism in place. But I suspect there would be further legislation in the future if the community is unable to do that."

"This is an ambitious piece of legislation, and it wouldn't have worked," echoed Matt Calkins, the co-founder and CEO of cloud computing and software company Appian. "And in order to get effective legislation, we first need to understand AI across our society, need to understand what this technology is, what you can expect of it, what its limits are, and so we just need more information. We need transparency. I would start there."
