

Earlier this week, California Governor Gavin Newsom signed a new law designed to ensure the safe development and deployment of frontier AI models.
“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive,” Newsom said. “This legislation strikes that balance. AI is the new frontier in innovation, and California is not only here for it – but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation that builds public trust as this emerging technology rapidly evolves.”
The law, SB 53, establishes requirements for companies developing frontier AI models, spanning five categories: transparency, innovation, safety, accountability, and responsiveness.
To ensure transparency, SB 53 requires frontier model developers to publish a framework on their websites describing how they are incorporating national and international standards and industry best practices.
To support innovation, the law establishes a new consortium, CalCompute, to advance research into AI that is safe, ethical, equitable, and sustainable.
The law also creates a new mechanism for AI companies and the public to report critical safety incidents to California’s Office of Emergency Services.
It also establishes protections for whistleblowers who disclose significant health and safety risks posed by frontier models, and creates a civil penalty for noncompliance.
Finally, SB 53 directs the California Department of Technology to recommend annual updates to the law based on input from multiple stakeholders, technological developments, and international standards.
“With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails to understand and reduce risk. With this law, California is stepping up, once again, as a global leader on both technology innovation and safety,” said Senator Scott Wiener, who authored SB 53.
Mayank Kumar, founding AI engineer at the AI security company DeepTempo, believes the law is a welcome step forward for the responsible use of AI. He noted that standards bodies such as NIST and ISO have already released AI safety frameworks, and that this new law will accelerate the enforcement process.
“This law rightfully treats AI as a critical emerging technology that, for the sake of public safety, must be regulated. Its focus on mandatory incident reporting is particularly crucial, establishing a framework similar to cybersecurity protocols where transparency is key to managing systemic risks and building a culture of accountability,” he said.