AI at the Frontier: What California’s SB-53 Means for Large AI Model Developers

By Linn Foster Freedman & Roma Patel on October 2, 2025

On September 29, 2025, Governor Gavin Newsom signed SB 53, the Transparency in Frontier Artificial Intelligence Act (“the Act”), into law, establishing a regulatory framework for developers of advanced artificial intelligence (AI) systems. The law imposes new transparency, reporting, and risk management requirements on entities developing high-capacity AI models. It is the first of its kind in the United States. Although several states, including California, Colorado, Texas, and Utah, have passed consumer AI laws, SB 53 is focused on the safety of the development and use of large AI platforms. According to Newsom in his signing message to the California State Senate, the Act “will establish state-level oversight of the use, assessment, and governance of advanced artificial intelligence (AI) systems…[to] strengthen California’s ability to monitor, evaluate, and respond to critical safety incidents associated with these advanced systems, empowering the state to act quickly to protect public safety, cybersecurity, and national security.”

Newsom highlighted that “California is the birthplace of modern technology and innovation” and “is home to many of the world’s top AI researchers and developers.” This allows for “a unique opportunity to provide a blueprint for well-balanced AI policies beyond our borders, especially in the absence of a comprehensive federal AI policy framework and national AI safety standards.”

Although the Biden administration issued an Executive Order in October 2023 designed to start the discussion and development of guardrails around the use of AI in the United States, President Trump gutted the AI EO on his first day in office in January 2025 without providing any meaningful replacement. Since then, there has been nothing from the White House except encouragement for AI developers to move fast and furiously. As a result, states are recognizing the risk of AI for consumers, cybersecurity, and national intelligence and, as usual, California is leading the way in addressing these risks.

Newsom noted in his message to the California State Senate that, in the event “the federal government or Congress adopt national AI standards that maintain or exceed the protections in this bill, subsequent action will be necessary to provide alignment between policy frameworks—ensuring businesses are not subject to duplicative or conflicting requirements across jurisdictions.” A summary of the substance of the bill is outlined below.

Who is Covered?

The Act is meant to cover only certain powerful artificial intelligence models. The Act defines AI models generally as computer systems that can make decisions or generate responses based on the information they receive. Such systems can operate with varying levels of independence and are designed to affect real-world or digital environments, such as controlling devices, answering questions, or creating content. The Act defines several specific types of AI models and AI developers:

  • A foundation model is a general-purpose AI model trained on broad datasets and adaptable to a wide range of tasks.
  • A frontier model is a foundation model trained using more than 10²⁶ integer or floating-point operations, i.e., 100 septillion computational steps, a threshold that only very large and complex AI models reach.
  • A frontier developer is an entity that initiates or conducts training of a frontier model.
  • A large frontier developer is a frontier developer that, together with its affiliates, has annual gross revenues exceeding $500 million.

The Act applies to frontier developers. The law is designed to target developers with significant resources and influence over high-capacity AI systems and is not meant to cover smaller or less computationally intensive projects.
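
To make these thresholds concrete, the short sketch below illustrates how a developer might be classified under the Act's definitions. It is a hypothetical illustration only: the constant names, data structure, and classification logic are invented for this post and do not come from the statute or from any regulator's tooling.

```python
# Hypothetical illustration of the Act's classification thresholds.
# All names here are invented for this sketch; only the numeric
# thresholds (10**26 training operations, $500 million in revenue)
# come from the summary above.

from dataclasses import dataclass

FRONTIER_COMPUTE_THRESHOLD = 10**26        # integer or floating-point training operations
LARGE_DEVELOPER_REVENUE_USD = 500_000_000  # annual gross revenue, including affiliates


@dataclass
class Developer:
    training_operations: float   # compute used to train the model in question
    annual_gross_revenue: float  # developer plus affiliates, in USD


def classify(dev: Developer) -> str:
    """Rough mapping of a developer onto the Act's defined categories."""
    if dev.training_operations <= FRONTIER_COMPUTE_THRESHOLD:
        return "not a frontier developer (model below the compute threshold)"
    if dev.annual_gross_revenue > LARGE_DEVELOPER_REVENUE_USD:
        return "large frontier developer (framework, transparency, and reporting duties)"
    return "frontier developer (transparency and reporting duties)"


# Example: a lab that trained a model with 3e26 operations and has $2 billion in revenue.
print(classify(Developer(training_operations=3e26, annual_gross_revenue=2_000_000_000)))
```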

Key Compliance Requirements

  1. Frontier AI Framework – Large frontier developers must publish and maintain a documented framework outlining how they assess and mitigate catastrophic risks associated with their models. The framework may include risk thresholds and mitigation strategies, cybersecurity practices, and internal governance and third-party evaluations. A catastrophic risk is defined as a foreseeable and material risk that a frontier model could contribute to the death or serious injury of at least 50 people, or cause over $1 billion in property damage, through misuse or malfunction.
  2. Transparency Reporting – Prior to deploying a new or substantially modified frontier model, developers must publish on their websites a report detailing the model's capabilities and intended uses, risk assessments and mitigation strategies, and the involvement of third-party evaluators. For example, a developer releasing a model capable of generating executable code or scientific analysis must disclose its intended use cases and any safeguards against misuse.
  3. Incident and Risk Reporting – Critical safety incidents must be reported to the Office of Emergency Services (OES) within 15 days. If imminent harm is identified, an appropriate authority, such as a law enforcement or public safety agency, must be notified within 24 hours. For instance, if a model autonomously initiates a cyberattack, the developer must notify an appropriate authority within 24 hours. Developers are also encouraged, but not required, to report critical safety incidents involving foundation models that are not frontier models. (A simplified sketch of these reporting clocks appears after this list.)
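
As a rough illustration of the reporting clocks summarized above, the snippet below computes the two notification deadlines triggered by a critical safety incident: 15 days to report to OES and, where imminent harm is identified, 24 hours to notify an appropriate authority. This is a simplified sketch, not legal guidance; the function and field names are invented for illustration.

```python
# Hypothetical sketch of the incident-reporting clocks described above.
# Only the 15-day and 24-hour windows come from the summary in this post;
# everything else is invented for illustration.

from datetime import datetime, timedelta

OES_REPORT_WINDOW = timedelta(days=15)      # report to the Office of Emergency Services
IMMINENT_HARM_WINDOW = timedelta(hours=24)  # notify an appropriate authority


def reporting_deadlines(discovered_at: datetime, imminent_harm: bool) -> dict:
    """Return the notification deadlines triggered by a critical safety incident."""
    deadlines = {"oes_report_due": discovered_at + OES_REPORT_WINDOW}
    if imminent_harm:
        deadlines["authority_notice_due"] = discovered_at + IMMINENT_HARM_WINDOW
    return deadlines


# Example: an incident discovered on October 6, 2025 that poses imminent harm.
print(reporting_deadlines(datetime(2025, 10, 6, 9, 0), imminent_harm=True))
```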

Whistleblower Protections

The law prohibits retaliation against employees who report safety concerns or violations. Large developers must notify employees of their whistleblower rights, implement anonymous reporting mechanisms, and provide regular updates to whistleblowers.

Enforcement and Penalties

Noncompliance may result in civil penalties of up to $1 million per violation, enforceable by the California Attorney General. This is a high ceiling for penalties and is likely to incentivize proactive compliance and documentation. Penalties may be imposed for failure to publish required documents, false statements about risk, or noncompliance with the developer’s own framework.

CalCompute Initiative

The Act also establishes a consortium to develop CalCompute, a public cloud computing cluster intended to support safe and equitable AI research. A report outlining its framework is due to the California Legislature by January 1, 2027. CalCompute could become a strategic resource for academic and nonprofit developers who seek access to high-performance computing but lack the necessary commercial infrastructure.

Takeaways

The Act introduces a structured compliance regime for high-capacity AI systems. Organizations subject to the Act should begin reviewing their AI development practices, internal governance structures, and incident response protocols.

Linn Foster Freedman

Linn Freedman practices in data privacy and security law, cybersecurity, and complex litigation. She is a member of the Business Litigation Group and the Financial Services Cyber-Compliance Team, and chairs the firm’s Data Privacy and Security Team. Linn focuses her practice on compliance with all state and federal privacy and security laws and regulations. She counsels a range of public and private clients from industries such as construction, education, health care, insurance, manufacturing, real estate, utilities and critical infrastructure, marine and charitable organizations, on state and federal data privacy and security investigations, as well as emergency data breach response and mitigation. Linn is an Adjunct Professor of the Practice of Cybersecurity at Brown University and an Adjunct Professor of Law at Roger Williams University School of Law. Prior to joining the firm, Linn served as assistant attorney general and deputy chief of the Civil Division of the Attorney General’s Office for the State of Rhode Island. She earned her J.D. from Loyola University School of Law and her B.A., with honors, in American Studies from Newcomb College of Tulane University. She is admitted to practice law in Massachusetts and Rhode Island. Read her full rc.com bio here.

Roma Patel

Roma Patel focuses her practice on a broad range of data privacy and cybersecurity matters. She handles comprehensive responses to cybersecurity incidents, including business email compromises, network intrusions, inadvertent disclosures and ransomware attacks. In response to privacy and cybersecurity incidents, Roma guides clients through initial response, forensic investigation, and regulatory obligations in a manner that balances legal risks and business or organizational needs. Read her full rc.com bio here.

  • Posted in:
    Intellectual Property
  • Blog:
    Data Privacy + Cybersecurity Insider
  • Organization:
    Robinson & Cole LLP
