FILE -- California Gov. Gavin Newsom vetoed SB 1047, a hotly contested measure that would have been the nation's strictest AI safety law. (AP Photo/Rich Pedroncelli, File)


Gov. Gavin Newsom of California on Sunday vetoed a bill that would have enacted the nation’s most far-reaching regulations on the booming artificial intelligence industry.

California legislators overwhelmingly passed the bill, called SB 1047, which was seen as a potential blueprint for national AI legislation.

The measure would have made tech companies legally liable for harms caused by AI models. In addition, the bill mandated that tech companies enable a “kill switch” for AI technology in the event the systems were misused or went rogue.

Newsom described the bill as “well-intentioned,” but said its “stringent” requirements could burden the state’s leading artificial intelligence companies as Silicon Valley competes in the global AI race.

In his veto message, Newsom said the bill focused too much on the biggest and most powerful AI models, saying smaller upstarts could prove to be just as disruptive.

"Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 — at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good," Newsom wrote.

California State Sen. Scott Wiener, a co-author of the bill, criticized Newsom's move, calling the veto a setback for artificial intelligence accountability.

"This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers, particularly given Congress's continuing paralysis around regulating the tech industry in any meaningful way," Wiener wrote on X.

The vetoed bill would have required the industry to conduct safety tests on the most powerful AI models. Without such requirements, Wiener wrote on Sunday, the industry is left to police itself.

"While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that the voluntary commitments from industry are not enforceable and rarely work out well for the public."

Many powerful players in Silicon Valley, including venture capital firm Andreessen Horowitz, OpenAI and trade groups representing Google and Meta, lobbied against the bill, arguing it would slow the development of AI and stifle growth for early-stage companies.

“SB 1047 would threaten that growth, slow the pace of innovation, and lead California’s world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere,” OpenAI’s Chief Strategy Officer Jason Kwon wrote in a letter sent last month to Wiener.

Other tech leaders, however, backed the bill, including Elon Musk and pioneering AI scientists like Geoffrey Hinton and Yoshua Bengio, who signed a letter urging Newsom to sign it.

“We believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure. It is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause severe harms, and for these companies to implement reasonable safeguards against such risks,” wrote Hinton and dozens of former and current employees of leading AI companies.

Other states, like Colorado and Utah, have enacted laws more narrowly tailored to address how AI could perpetuate bias in employment and health-care decisions, as well as other AI-related consumer protection concerns.

Newsom has recently signed more than a dozen other AI bills into law, including one to crack down on the spread of deepfakes during elections. Another protects actors against their likenesses being replicated by AI without their consent.

As billions of dollars pour into the development of AI, and as it permeates more corners of everyday life, lawmakers in Washington still have not advanced a single piece of federal legislation to protect people from its potential harms, nor to provide oversight of its rapid development.