California advances landmark legislation to regulate large AI models

A California bill that would establish first-in-the-nation safety measures for the largest artificial intelligence systems cleared an important vote Wednesday. The proposal, aimed at reducing potential risks from AI, would require companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state’s electric grid or help build chemical weapons – scenarios experts say could become possible as the industry continues its rapid advance.

The measure narrowly passed the state assembly Wednesday and now faces a final vote in the state senate, where it has already passed once, before heading to the governor’s desk. Governor Gavin Newsom has until the end of September to sign the bill into law, veto it or allow it to become law without his signature. He has not indicated his position on the measure, declining to weigh in earlier this summer, though he has warned against overregulating AI.

Supporters said it would set some of the first much-needed safety ground rules for large-scale AI models in the United States. The bill targets systems that cost more than $100m to train. No current AI models have hit that threshold.

The proposal, authored by Democratic senator Scott Wiener, faced fierce opposition from venture capital firms and tech companies, including OpenAI, Google and Meta, the parent company of Facebook and Instagram. They say safety regulations should be established by the federal government and that the California legislation takes aim at developers instead of targeting those who use and exploit the AI systems for harm.

Wiener said his legislation took a “light touch” approach.

“Innovation and safety can go hand in hand – and California is leading the way,” he said in a statement after the vote.

Wiener’s proposal is among dozens of AI bills California lawmakers proposed this year to build public trust, fight algorithmic discrimination and outlaw deepfakes that involve elections or pornography. With AI increasingly affecting the daily lives of Americans, state legislators have tried to rein in the technology and its potential risks without stifling the booming homegrown industry.

California, home to 35 of the world’s top 50 AI companies, has been an early adopter of AI technologies and could soon deploy generative AI tools to address highway congestion and road safety, among other uses.

Elon Musk, owner of X, formerly Twitter, and founder of xAI, threw his support behind the proposal this week, though he said it was a “tough call”. X operates its own chatbot and image generator, Grok, which has fewer safeguards in place than other prominent AI models.

“For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public,” Musk tweeted.

A group of California House members also opposed the bill, with former House speaker Nancy Pelosi calling it “well-intentioned but ill informed”.

Chamber of Progress, a left-leaning Silicon Valley-funded industry group, said the bill is “based on science fiction fantasies of what AI could look like”.

“This bill has more in common with Blade Runner or The Terminator than the real world,” senior tech policy director Todd O’Boyle said in a statement after the Wednesday vote. “We shouldn’t hamstring California’s leading economic sector over a theoretical scenario.”

The legislation is also supported by Anthropic, an AI startup backed by Amazon and Google, after Wiener adjusted the bill earlier this month to incorporate some of the company’s suggestions. The current version removed a penalty-of-perjury provision, limited the state attorney general’s power to sue violators and narrowed the responsibilities of a new AI regulatory agency.

Anthropic said in a letter to Newsom that the bill is crucial to prevent catastrophic misuse of powerful AI systems and that “its benefits likely outweigh its costs”.

Wiener also slammed critics earlier this week for dismissing potential catastrophic risks from powerful AI models as unrealistic: “If they really think the risks are fake, then the bill should present no issue whatsoever.”
