On September 29, California Governor Gavin Newsom (D) signed into law SB 53, the Transparency in Frontier Artificial Intelligence Act (“TFAIA”), establishing public safety regulations for developers of “frontier models,” or large foundation AI models trained using massive amounts of computing power.  TFAIA is the first frontier model safety legislation in the country to become law.  In his signing statement, Governor Newsom stated that TFAIA will “provide a blueprint for well-balanced AI policies beyond [California’s] borders – especially in the absence of a comprehensive federal AI policy framework and national AI safety standards.”  TFAIA largely adopts the recommendations of the Joint California Policy Working Group on AI Frontier Models, which released its final report on frontier AI policy in June.

Frontier Developers.  Effective January 1, 2026, TFAIA will apply to “frontier developers” who have trained, or initiated the training of, a foundation model using a quantity of computing power greater than 10²⁶ FLOPS (a “frontier model”), with additional requirements for frontier developers with annual gross revenues exceeding $500 million (“large frontier developers”).  Notably, starting on January 1, 2027, TFAIA will require the California Department of Technology to annually provide recommendations to the Legislature on “whether and how to update” TFAIA’s definitions of “frontier model,” “frontier developer,” and “large frontier developer” to “ensure that they accurately reflect technological developments, scientific literature, and widely accepted national and international standards.”  Below we describe key obligations and restrictions imposed by TFAIA on such developers.
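
To make the coverage thresholds concrete, the minimal sketch below expresses them as a simple classification.  The function and variable names are our own illustration; only the 10²⁶ FLOPS and $500 million figures come from the statute as described above.

```python
# Illustrative sketch of TFAIA's coverage thresholds; names are hypothetical.
TRAINING_COMPUTE_THRESHOLD_FLOPS = 1e26    # "frontier model" trigger
LARGE_DEVELOPER_REVENUE_USD = 500_000_000  # "large frontier developer" trigger

def classify_developer(training_compute_flops: float,
                       annual_gross_revenue_usd: float) -> str:
    """Classify a developer under TFAIA's definitions (simplified)."""
    if training_compute_flops <= TRAINING_COMPUTE_THRESHOLD_FLOPS:
        return "not covered"                  # below the compute threshold
    if annual_gross_revenue_usd > LARGE_DEVELOPER_REVENUE_USD:
        return "large frontier developer"     # revenue exceeds $500 million
    return "frontier developer"
```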

Frontier AI Frameworks.  TFAIA will require a large frontier developer to create, implement, and publish a “frontier AI framework,” which is defined as “documented technical and organizational protocols to manage, assess, and mitigate catastrophic risks.”  Such frameworks must explain the developer’s approaches to the following (an illustrative sketch follows the list):

  • Integration of Standards:  Incorporating “national standards, international standards, and industry-consensus best practices.”
  • Risk Thresholds and Mitigation:  Defining and assessing “thresholds used … to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk” and applying “mitigations to address the potential for catastrophic risks” based on those assessments.
  • Pre-Deployment Assessments:  Reviewing assessments and the adequacy of mitigations before deploying a frontier model externally or for “extensive[] internal[]” use, and using third parties to assess catastrophic risks and mitigations.
  • Framework Maintenance:  Revisiting and updating the developer’s frontier AI framework, including criteria for triggering such updates, and defining when models are “substantially modified enough to require” publishing transparency reports required by TFAIA (described further below).
  • Security and Incident Response:  Implementing “cybersecurity practices to secure unreleased model weights” and processes for “identifying and responding to critical safety incidents.”
  • Internal Use Risk Management:  Assessing and managing “catastrophic risk resulting from the internal use” of the developer’s frontier model, including risks resulting from the model “circumventing oversight mechanisms.”
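
As an illustrative sketch only, the six required elements above could be tracked as a simple checklist structure.  The field names below are our own shorthand; TFAIA prescribes the content of a framework, not any particular format.

```python
from dataclasses import dataclass, fields

# Hypothetical checklist of the six framework elements described above;
# field names are our shorthand, not statutory terms.
@dataclass
class FrontierAIFramework:
    integration_of_standards: str        # national/international standards, best practices
    risk_thresholds_and_mitigation: str  # catastrophic-risk thresholds and mitigations
    pre_deployment_assessments: str      # assessment review, third-party evaluators
    framework_maintenance: str           # update criteria, "substantially modified" triggers
    security_and_incident_response: str  # securing unreleased weights, incident processes
    internal_use_risk_management: str    # internal-use risks, oversight circumvention

def missing_elements(framework: FrontierAIFramework) -> list[str]:
    """Return any framework sections left empty (illustrative completeness check)."""
    return [f.name for f in fields(framework) if not getattr(framework, f.name).strip()]
```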

Large frontier developers must review and update their frontier AI frameworks at least annually and must publish any “material modification,” together with a justification for it, within 30 days of making the modification.

Transparency Reports.  Before or when deploying a new or a substantially modified frontier model, frontier developers and large frontier developers will be required to publish “transparency reports” on their websites or as part of larger documents such as “system cards” or “model cards.”  Frontier developer transparency reports must include the developer’s website, a “mechanism that enables a natural person to communicate” with the developer, the frontier model’s release date, supported languages, output modalities, and intended uses, and any “generally applicable restrictions or conditions on uses” of the frontier model.

In addition to these requirements, large frontier developers’ transparency reports must also summarize catastrophic risk assessments conducted pursuant to the large frontier developer’s frontier AI framework, the results of those assessments, any involvement by “third-party evaluators” in assessing catastrophic risk, and any “other steps taken to fulfill the requirements” of the large frontier developer’s frontier AI framework with respect to the frontier model.
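
Pictured as a structured record, a transparency report might carry fields like those below.  This is a hedged illustration with our own field names and placeholder values, not a schema prescribed by TFAIA.

```python
# Hypothetical shape of a transparency report; field names and values are
# illustrative placeholders, not a prescribed schema.
base_report = {
    "developer_website": "https://example.com",  # hypothetical URL
    "contact_mechanism": "safety@example.com",   # lets a natural person reach the developer
    "release_date": "2026-01-01",
    "supported_languages": ["en", "es"],
    "output_modalities": ["text"],
    "intended_uses": ["general-purpose assistant"],
    "use_restrictions": ["generally applicable restrictions or conditions on use"],
}

# Additional items required only of large frontier developers.
large_developer_additions = {
    "catastrophic_risk_assessment_summaries": [],  # per the frontier AI framework
    "assessment_results": [],
    "third_party_evaluator_involvement": [],
    "other_framework_compliance_steps": [],
}
```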

Frontier developers may make redactions to transparency reports in order to protect “trade secrets, the frontier developer’s cybersecurity, public safety, or the national security of the United States or to comply with any federal or state law.”

Critical Safety Incident Reporting.  TFAIA will require frontier developers to report “critical safety incidents,” which are defined as any “unauthorized access to, modification of, or exfiltration of” model weights causing death or injury, “harm resulting from the materialization of a catastrophic risk,” “loss of control … causing death or bodily injury,” or a model “us[ing] deceptive techniques” to subvert its controls “in a manner that demonstrates materially increased catastrophic risk.”

Frontier developers are required to report such incidents within 15 days to the California Office of Emergency Services (“OES”) or, if a critical safety incident “poses an imminent risk of death or serious physical injury,” within 24 hours to an appropriate authority, including “any law enforcement agency or public safety agency with jurisdiction.”  Critical safety incident reports must be provided through a mechanism established by OES, and must include the date of the incident, reasons why the incident qualifies as a critical safety incident, a short and plain statement describing the incident, and whether the incident was “associated with internal use of a frontier model.” 
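
The two reporting clocks can be summarized in a few lines.  The sketch below is our simplification of the statutory timing rules, and the function name is hypothetical.

```python
from datetime import datetime, timedelta

def report_deadline(incident_time: datetime, imminent_risk: bool) -> tuple[datetime, str]:
    """Simplified sketch of TFAIA's two reporting clocks (illustrative only)."""
    if imminent_risk:
        # Imminent risk of death or serious physical injury: 24 hours,
        # to an appropriate authority such as law enforcement.
        return incident_time + timedelta(hours=24), "appropriate authority"
    # Otherwise: 15 days, to the California Office of Emergency Services.
    return incident_time + timedelta(days=15), "California Office of Emergency Services"
```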

Catastrophic Risk Assessment Reporting.  TFAIA also will require large frontier developers to report to OES “a summary of any assessment of catastrophic risk” resulting from the large frontier developer’s “internal use” of any of its frontier models (while “internal use” is undefined, TFAIA may be referring to updates or modifications to a frontier model).  Large frontier developers must provide a summary of any such assessment to OES every three months or “pursuant to another reasonable schedule” specified by the developer and shared with OES.  However, TFAIA does not expressly require large frontier developers to conduct assessments of catastrophic risk or prohibit the deployment of frontier models that may present catastrophic risks.

TFAIA defines “catastrophic risks” as foreseeable and material risks that a frontier developer’s development, storage, use, or deployment of a frontier model will materially contribute to death or serious injury to more than 50 people or more than $1 billion in property damage by: (1) providing expert-level assistance in creating or releasing a chemical, biological, radiological, or nuclear weapon, (2) engaging in a cyberattack, or conduct that would constitute murder, assault, extortion, or theft if committed by a human, without human oversight, or (3) evading the control of its developer or user.
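
Reduced to a predicate, the definition combines a severity threshold with at least one of three pathways.  The sketch below is our paraphrase and omits the statute’s foreseeability and material-contribution qualifiers.

```python
# Our paraphrase of the "catastrophic risk" severity-plus-pathway structure;
# simplified and not a substitute for the statutory text.
def meets_catastrophic_risk_thresholds(deaths_or_serious_injuries: int,
                                       property_damage_usd: float,
                                       cbrn_expert_assistance: bool,
                                       autonomous_cyberattack_or_crime: bool,
                                       evades_control: bool) -> bool:
    severe = (deaths_or_serious_injuries > 50
              or property_damage_usd > 1_000_000_000)
    pathway = (cbrn_expert_assistance
               or autonomous_cyberattack_or_crime
               or evades_control)
    return severe and pathway
```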

Whistleblower Protections.  TFAIA will prohibit frontier developers from making or enforcing “a rule, regulation, policy, or contract” that prevents any employee responsible for managing critical safety risks (a “covered employee”) from disclosing, or retaliates against a covered employee for disclosing, information to authorities or supervisors if the employee has “reasonable cause to believe” the information shows that:  (1) the frontier developer’s activities “pose a specific and substantial danger to the public health or safety resulting from a catastrophic risk,” or (2) the frontier developer has violated TFAIA.  Frontier developers will also be required, among other things, to provide covered employees with clear notice of their rights and responsibilities under TFAIA.  Additionally, large frontier developers will be required to provide a “reasonable internal process” through which covered employees may anonymously disclose the types of information described above.

Enforcement.  A large frontier developer that violates TFAIA’s disclosure and reporting requirements, or that “fails to comply with its own frontier AI framework,” will be subject to civil penalties of up to $1 million per violation, enforced by the California Attorney General.  TFAIA does not expressly establish penalties for violations of disclosure and reporting requirements by frontier developers who are not large frontier developers.  Covered employees may bring civil actions for violations of TFAIA’s whistleblower protections described above and may seek injunctive relief and attorney’s fees.

TFAIA provides a safe harbor from its disclosure and reporting requirements for frontier developers who comply with certain federal requirements intended to assess, detect, or mitigate catastrophic risks associated with frontier models.  Specifically, frontier developers will be “deemed in compliance” with TFAIA’s disclosure and reporting requirements to the extent that the developer complies with federal requirements or standards that OES designates as “substantially equivalent to, or stricter than,” TFAIA’s requirements.  If a frontier developer declares their intent to comply with designated federal requirements, however, failure to comply with those requirements “shall constitute a violation” of TFAIA.  In a potential nod to recent efforts in Congress to enjoin the enforcement of state AI laws, and echoing calls for a national AI regulatory framework from lawmakers in other states, Governor Newsom’s signing statement highlighted the safe harbor as a “compliance pathway” that will “provide alignment” with any future “national AI standards that maintain or exceed the protections in this bill.” 

Frontier AI Model Safety Legislation: TFAIA vs. RAISE Act.  The signing of TFAIA comes exactly one year after Governor Newsom vetoed the Safe & Secure Innovation for Frontier AI Models Act (SB 1047), a 2024 frontier model safety bill that would have imposed broader developer requirements, including third-party safety audits and “full shutdown” safeguards. 

TFAIA’s signing also follows the New York legislature’s passage of the Responsible AI Safety & Education (“RAISE”) Act, a frontier model public safety bill, in June.  Unlike TFAIA, the RAISE Act – which was passed by the legislature but has yet to be signed by New York Governor Kathy Hochul (D) – defines “frontier model” as an AI model that costs over $100 million in compute costs to train, in addition to being trained using more than 10²⁶ FLOPS.  Additionally, the RAISE Act lacks whistleblower protections and, in contrast to TFAIA’s focus on reporting and disclosure requirements, would require frontier model developers to implement “appropriate safeguards” before deploying a frontier model and would prohibit developers from deploying frontier models that create an unreasonable risk of “critical harm.”
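
The definitional difference reduces to a pair of predicates: TFAIA keys coverage to training compute alone, while the RAISE Act adds a training-cost condition.  The comparison below is our illustration of that difference only.

```python
COMPUTE_THRESHOLD_FLOPS = 1e26           # shared compute trigger
RAISE_COST_THRESHOLD_USD = 100_000_000   # RAISE Act's additional cost trigger

def is_frontier_model_tfaia(training_flops: float) -> bool:
    # TFAIA: training compute alone triggers coverage.
    return training_flops > COMPUTE_THRESHOLD_FLOPS

def is_frontier_model_raise(training_flops: float, compute_cost_usd: float) -> bool:
    # RAISE Act: compute threshold plus over $100 million in compute costs.
    return (training_flops > COMPUTE_THRESHOLD_FLOPS
            and compute_cost_usd > RAISE_COST_THRESHOLD_USD)
```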

Matthew Shapanka

Matthew Shapanka draws on more than 15 years of experience – including on Capitol Hill, at Covington, and in state government – to advise and counsel clients across a range of industries on significant legislative, regulatory, and enforcement matters. He develops and executes complex, multifaceted public policy initiatives for clients seeking actions by Congress, state legislatures, and federal and state government agencies, many with significant legal and political opportunities and risks.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters within the Committee’s jurisdiction, including federal election law and campaign finance, and oversight of the Federal Election Commission, legislative branch agencies, security and maintenance of the U.S. Capitol Complex, and Senate rules and regulations.

Most significantly, Matt led the Rules Committee staff work on the Electoral Count Reform and Presidential Transition Improvement Act – landmark bipartisan legislation to update the antiquated process of certifying and counting electoral votes in presidential elections that President Biden signed into law in 2022.

As Chief Counsel, Matt was a lead attorney on the joint bipartisan investigation (with the Homeland Security and Governmental Affairs Committee) into the security planning and response to the January 6, 2021 attack on the Capitol. In that role, he oversaw the collection and review of documents, led interviews and depositions of key government officials, advised the Chairwoman and Committee members on two high-profile joint hearings, and drafted substantial portions of the Committees’ staff report on the attack. He also led oversight of the Capitol Police, Architect of the Capitol, Senate Sergeant at Arms, and executive branch agencies involved in implementing the Committees’ recommendations, including additional legislation and hearings.

Both in Congress and at the firm, Matt has prepared many corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at legislative, oversight, or nomination hearings before congressional committees, as well as witnesses appearing at congressional depositions and transcribed interviews. He is also an experienced legislative drafter who has composed dozens of bills introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas.

In addition to his policy work, Matt advises and represents clients on the full range of political law compliance and enforcement matters involving federal election, campaign finance, lobbying, and government ethics laws, the Securities and Exchange Commission’s “Pay-to-Play” rule, as well as the election and political laws of states and municipalities across the country.

Before law school, Matt worked as a research analyst in the Massachusetts Recovery & Reinvestment Office, where he worked on all aspects of state-level policy, communications, and compliance for federal stimulus funding awarded to Massachusetts under the American Recovery & Reinvestment Act of 2009. He has also worked for federal, state, and local political candidates in Massachusetts and New Hampshire.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients for complying with federal, state, and global privacy and competition frameworks and AI regulations. He also assists clients in investigating compliance issues, preparing for federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.