California Governor Gavin Newsom signs pioneering AI safety law

On Monday, Gov. Gavin Newsom signed new legislation establishing safety requirements for advanced artificial intelligence models, aiming to block their use in potentially catastrophic scenarios such as creating bioweapons or disrupting banking systems.
Speaking during a recent discussion with former President Bill Clinton, the governor highlighted California's role as a trailblazer in AI oversight, contrasting it with what he called a lack of action at the federal level. Newsom emphasized that the law introduces some of the first state-level safeguards for large-scale AI models while protecting California's position as a hub for the industry, home to leading developers.
“California has shown that we can regulate to keep communities safe while still fostering innovation in AI,” Newsom said in a statement. “This law achieves that balance.”
Under the measure, AI companies must create and publicly share safety protocols ensuring their most advanced models cannot be exploited to cause large-scale harm. The rules apply to "frontier" systems with extraordinary computing power, identified by thresholds based on the number of calculations the models perform. Lawmakers acknowledged those metrics are only an initial way to separate today's top models from the even more powerful systems on the horizon. Companies such as OpenAI, Google, Meta, and Anthropic, all headquartered in California, will fall under the new rules.
The law defines catastrophic risk as at least $1 billion in damages or more than 50 deaths or injuries. It seeks to prevent AI misuse in scenarios such as hacking a power grid or other mass-disruption events. Companies will also have to report major safety incidents to the state within 15 days. Additional provisions include whistleblower protections, the creation of a public cloud for research, and penalties of up to $1 million per violation.
While some tech firms pushed back, arguing regulation should come from Washington to avoid conflicting state rules, others were more supportive. Anthropic called the measures “practical safeguards” that formalize safety steps already being followed.
“Federal standards are still necessary to prevent a patchwork system, but California has built a strong framework that balances innovation with public protection,” said Anthropic co-founder Jack Clark.
The bill follows Newsom's veto last year of a broader proposal that industry leaders warned could stifle innovation. Instead, he convened a panel of experts, including AI pioneer Fei-Fei Li, to draft recommendations on oversight. Those suggestions, along with industry feedback, shaped the final law. The legislation also eases requirements for startups to avoid slowing their growth, according to state Sen. Scott Wiener of San Francisco, the bill's sponsor.
“With this measure, California once again establishes itself as a global leader in both technological progress and safety,” Wiener said.
Newsom’s action comes against the backdrop of federal debate. President Donald Trump announced in July that his administration would seek to roll back what it views as excessive regulations to accelerate U.S. dominance in AI. Earlier this year, congressional Republicans tried unsuccessfully to prohibit states and cities from regulating AI for the next decade.
In the absence of stronger national standards, states have been moving to fill the gap, introducing rules on everything from election-related deepfakes to AI-driven mental health tools. California lawmakers this year passed additional bills addressing child safety with AI chatbots and workplace applications of the technology.
The state has also been quick to apply AI itself, deploying generative models to detect wildfires, reduce traffic congestion, and improve road safety, among other initiatives.