The fate of California’s AI bill will be decided this week

One of the most consequential AI bills in the world is sitting on California Governor Gavin Newsom’s desk, and its fate could be decided by his signature or veto stamp any day now. Newsom has until September 30 to decide.
The bill, called the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, requires developers of the largest AI models to implement safety and security measures and reporting processes, and to submit to third-party compliance audits, beginning in 2026. The overall purpose of the bill is to prevent the creation and deployment of powerful AI systems that could cause catastrophic harm. In one oft-cited example, such a system might create, or enable the creation of, a bioweapon.
Opponents of the bill in the tech industry complain that its safety requirements place an unnecessary burden on model developers and will shift focus from building AI to safety compliance. Open-source model developers feel particularly threatened by the bill, since it would require them to ensure that others cannot later modify their models to cause harm.
The governor’s office stayed silent on the bill for months as SB 1047 made its way through the California legislature. When Fast Company asked the office last month about the status of Newsom’s discussions with tech-industry stakeholders on the issue, it declined to comment. The tech industry has powerful lobbyists, at least one of them with personal ties to Newsom, working to kill the bill.
A big sign of Newsom’s thinking on the bill came last week during an interview with Salesforce CEO Marc Benioff at the company’s Dreamforce conference.
“We’ve been working over the last few years to come up with sensible regulations that support risk-taking, but not recklessness,” Newsom said during the interview.
“That supports a very diverse and healthy ecosystem here in California and puts us at a competitive advantage, but at the same time puts rules of the road in place.”
He continued: “That’s a challenge now in this space, especially with SB 1047, because of the kind of outsized impact that legislation can have, and the negative impact it can have especially on the open source community,” Newsom said. “So I’m looking at that by thinking about what are the potential risks of AI and what are the perceived risks. [We] can’t solve everything. What can we solve?”
Some commentators on X noted that Newsom’s talk of the bill’s negative impact on the open source community may be a sign that he is leaning toward a veto.
But politicians’ words shouldn’t always be taken at face value in these situations. Newsom may have been floating a potential decision on SB 1047 to gauge the reactions of various stakeholder groups, one insider tells me. If Newsom had already decided against the bill, he could simply have vetoed it.
Newsom’s office has been under pressure from many quarters to block SB 1047. Industry groups including the VC firm a16z have strongly opposed it, as has the high-profile incubator Y Combinator. A chorus of politicians has written letters to Newsom opposing the bill, including U.S. representatives Ro Khanna (D-Santa Clara), Anna Eshoo (D-Palo Alto), and Zoe Lofgren (D-San Jose), as well as Democratic Party heavyweight Nancy Pelosi.
However, supporters of SB 1047 point out that the labor union SAG-AFTRA and the National Organization for Women (NOW) have come out in support, The Verge’s Garrison Lovely reports. Actors Mark Ruffalo and Joseph Gordon-Levitt have posted video open letters urging Newsom to sign the bill.
Although SB 1047 was the subject of heated debate in Silicon Valley, the bill passed the California legislature in Sacramento with relative ease. Some California lawmakers are still sore about the state’s failure to regulate social media platforms like Facebook and Instagram, and they don’t want to miss the boat on AI regulation.
And SB 1047 could shape AI regulation beyond California. For starters, the law would apply not only to AI companies located in California, but to any AI company whose models power services (such as chatbots) delivered to Californians.
The bill could also serve as a template for other states that want to build a regulatory framework around developers of large-scale AI models. California tends to take the lead on new technology legislation, as the federal government has proven too gridlocked to do so.
Public opinion polls have consistently shown strong support for regulating large-scale AI models. This may reflect a common concern about the development of software that is smarter than humans. It may also reflect a lack of confidence that profit-driven technology companies can be counted on to ensure the safety of current and future AI systems.
The main criticism from the tech world is that SB 1047 asks AI companies to anticipate and prepare for harms that might be caused by future AI systems, which they argue is an impossible burden.
AI pioneer and Turing Award winner Yoshua Bengio responded to the industry’s opposition this way in an X post last week: “But (1) AI is rapidly improving in skills that increase the likelihood of these accidents, and (2) we shouldn’t wait for a major disaster before protecting the public.”