On April 2, the California Senate Judiciary Committee held a hearing on the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) and favorably reported the bill in a 9-0 vote (with 2 members not voting).  The vote marks a major step toward comprehensive artificial intelligence (AI) regulation in a state that is home to both Silicon Valley and the nation’s first comprehensive privacy law.

This legislation would require developers of large AI models to implement certain safeguards before training and deploying those models, and to report safety incidents involving AI technologies.  The bill would give the California Attorney General civil enforcement authority over violations and establish a new “Frontier Model Division” within the Department of Technology to aid enforcement. 

At the hearing, witnesses—including Encode Justice, the Center for AI Safety, and Economic Security California—and legislators praised the bill’s goal of regulating large AI models while also expressing concerns about the feasibility of enforcement and potential effects on AI innovation.  The Chamber of Progress and California Chamber of Commerce (CalChamber) testified in opposition to the bill.  A coalition of advocacy and industry groups, led by CalChamber, has also signed a letter opposing the bill.

Covered Models.  Mirroring the White House’s 2023 Executive Order, SB 1047 would regulate developers of “covered models” trained on computers with processing power above certain thresholds, while also covering models of “similar or greater performance.”  Developers would also be prohibited from training or deploying a covered model that presents an unreasonable risk of “critical harm,” such as the creation or use of weapons of mass destruction, cybersecurity attacks causing catastrophic damages (greater than $500 million), activities undertaken by AI that cause mass casualties or catastrophic damages (greater than $500 million) and that would be criminal conduct if committed by humans, or other severe threats to public safety.

AI Developer Pre-Training Requirements.  SB 1047 would establish a set of requirements for developers of covered models that apply before a covered model is trained, including:  

  • Positive Safety Determinations.  Developers would be required to assess whether a model will have lower performance than covered models and will lack “hazardous capabilities.”  Models with such determinations are exempt from the bill’s requirements.
  • Protections & Safeguards.  Developers would be required to implement cybersecurity protections against misuse, ensure models can be fully shut down, and follow industry best practices and NIST and Frontier Model Division guidance.
  • Safety & Security Protocols.  Developers would be required to implement, for each covered model, a “safety and security protocol” with assurances of safeguards, the requirements that apply to the developer, and procedures to test the model’s safety.

AI Developer Pre-Deployment Requirements.  After training a covered model, SB 1047 would require developers to perform “capability testing” to assess whether a positive safety determination is warranted.  If not, developers would be required to implement safeguards that prevent harmful uses and ensure a model’s actions and “resulting critical harms can be accurately and reliably attributed” to the model and responsible users.

AI Developer Ongoing Requirements.  SB 1047 would also establish ongoing obligations for developers, including annual reviews of safety and security protocols, annual certifications of compliance to the Frontier Model Division, periodic reviews of procedures, policies, and safeguards, and reporting of “AI safety incidents” within 72 hours of learning of the incident.

Whistleblower Protections.  SB 1047 would prohibit developers from preventing employees from disclosing information to the California Attorney General indicating a developer’s noncompliance, or from retaliating against employees who do so. 

SB 1047 has a long way to go before becoming law.  Should it be enacted, however, it could—like California’s comprehensive privacy legislation before it—become the de facto standard for AI regulation in the United States, filling the void created in the absence of comprehensive federal AI legislation. 

We are closely monitoring these and related state AI developments as they unfold.  A summary of key themes in recent state AI bills is available here, along with our overview of recent state synthetic media and generative AI legislation here.  We will continue to provide updates on meaningful developments related to artificial intelligence and technology across our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs.

Holly Fechner

Holly Fechner has two decades of legal, legislative and public policy experience in the public and private sectors.  Ms. Fechner has a broad-based practice handling legislative and regulatory matters for clients in areas including healthcare, tax, intellectual property, education, and employee benefits.  Drawing on her extensive congressional and private sector experience, Ms. Fechner offers clients comprehensive advocacy services, including strategic advice, substantive legal and regulatory expertise, and policy and message development.  She has a proven track record of helping clients fulfill their government affairs goals.

Matthew Shapanka

Matthew Shapanka draws on more than 15 years of experience – including on Capitol Hill, at Covington, and in state government – to advise and counsel clients across a range of industries on significant legislative, regulatory, and enforcement matters. He develops and executes complex, multifaceted public policy initiatives for clients seeking actions by Congress, state legislatures, and federal and state government agencies, many with significant legal and political opportunities and risks.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters within the Committee’s jurisdiction, including federal election law and campaign finance, and oversight of the Federal Election Commission, legislative branch agencies, security and maintenance of the U.S. Capitol Complex, and Senate rules and regulations.

Most significantly, Matt led the Rules Committee staff work on the Electoral Count Reform and Presidential Transition Improvement Act – landmark bipartisan legislation to update the antiquated process of certifying and counting electoral votes in presidential elections that President Biden signed into law in 2022.

As Chief Counsel, Matt was a lead attorney on the joint bipartisan investigation (with the Homeland Security and Governmental Affairs Committee) into the security planning and response to the January 6, 2021 attack on the Capitol. In that role, he oversaw the collection and review of documents, led interviews and depositions of key government officials, advised the Chairwoman and Committee members on two high-profile joint hearings, and drafted substantial portions of the Committees’ staff report on the attack. He also led oversight of the Capitol Police, Architect of the Capitol, Senate Sergeant at Arms, and executive branch agencies involved in implementing the Committees’ recommendations, including additional legislation and hearings.

Both in Congress and at the firm, Matt has prepared many corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at legislative, oversight, or nomination hearings before congressional committees, as well as witnesses appearing at congressional depositions and transcribed interviews. He is also an experienced legislative drafter who has composed dozens of bills introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas.

In addition to his policy work, Matt advises and represents clients on the full range of political law compliance and enforcement matters involving federal election, campaign finance, lobbying, and government ethics laws, the Securities and Exchange Commission’s “Pay-to-Play” rule, as well as the election and political laws of states and municipalities across the country.

Before law school, Matt worked as a research analyst in the Massachusetts Recovery & Reinvestment Office, where he worked on all aspects of state-level policy, communications, and compliance for federal stimulus funding awarded to Massachusetts under the American Recovery & Reinvestment Act of 2009. He has also worked for federal, state, and local political candidates in Massachusetts and New Hampshire.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients for complying with federal, state, and global privacy and competition frameworks and AI regulations. He also assists clients in investigating compliance issues, preparing for federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.