On June 17, the Joint California Policy Working Group on AI Frontier Models (“Working Group”) issued its final report on frontier AI policy, following public feedback on the draft version of the report released in March.  The report describes “frontier models” as the “most capable” subset of foundation models, a class of general-purpose technologies that require significant amounts of data and compute to yield capabilities that can power a variety of downstream AI applications.

After analyzing numerous case studies and describing the potential benefits of effective frontier model regulation, the Working Group outlines key principles to inform the development of “evidence-based” legislation that appropriately balances safety and innovation.

  • Transparency Requirements.  The report states that frontier AI policy should prioritize “public-facing transparency requirements to best advance accountability” and promote public trust in AI technology.  The report identifies several “key areas” for frontier model developer transparency requirements, including risks and risk mitigation, cybersecurity practices, pre-deployment assessments of capabilities and risks, downstream impacts, and disclosures regarding how training data is obtained. 
  • Third-Party Risk Assessments.  The report states that third-party risk assessments are “essential” for building a “more complete evidence base on the risks of foundation models” and, when coupled with transparency requirements, can create a “race to the top” in AI safety practices.  To implement third-party risk assessments, the report recommends that policymakers provide “safe harbors” for third-party AI evaluators “analogous to those afforded to third-party cybersecurity testers.”
  • Whistleblower Protections.  The report finds that legal protections against retaliation for employees who report wrongdoing can “play a critical role in surfacing misconduct, identifying systemic risks, and fostering accountability in AI development and deployment,” while noting tradeoffs involved in extending whistleblower protections to contractors or other third parties.  The report suggests that policymakers consider whistleblower protections that “cover a broader range of activities” beyond only legal violations, which “may draw upon notions of ‘good faith’ reporting” in cybersecurity or other domains.
  • Adverse Event Reporting.  Drawing on examples of post-deployment monitoring in other contexts, such as government reporting requirements related to medical device and equipment malfunctions, the report describes adverse event reporting as another “critical first step” for “targeted AI regulation.”  The report recommends mandatory adverse event reporting systems that share reports with “relevant agencies with domain-specific regulatory authority and expertise” and focus on a “tightly defined” and periodically updated set of harms.  The report further recommends that policymakers combine mandatory reporting with “voluntary reporting for downstream users.”  The report outlines certain benefits of adverse event reporting, including identifying emerging and unanticipated harms, encouraging proactive measures to mitigate risks, improving coordination between the government and the private sector, and reducing the costs of enforcement.  The report also notes likely challenges for adverse event reporting regimes, such as the difficulty of clearly defining “adverse events” or ensuring sufficient government resources for monitoring reports.
  • Scoping.  According to the report, “[w]ell-designed regulation is proportionate.”  The report “cautions against” frontier model regulations that use “thresholds based on developer-level properties” (e.g., employee headcount), which “may inadvertently ignore key players,” while noting that “training compute thresholds” may be the “most attractive option” for policymakers.

The findings of the Working Group, which was convened by Governor Gavin Newsom (D) in September 2024 following his veto of California’s proposed Safe & Secure Innovation for Frontier AI Models Act (SB 1047), could inform lawmakers as they move forward with foundation model legislation in the 2025 legislative session.  On May 28, for example, the California Senate passed SB 53, a foundation model whistleblower bill introduced by State Senator (and SB 1047 co-sponsor) Scott Wiener.  Additionally, on June 12, the New York legislature passed the Responsible AI Safety & Education (“RAISE”) Act (S 6953), a frontier model public safety bill that we previously covered here.

*              *              *

We will continue to provide updates on meaningful developments related to artificial intelligence and technology across our Inside Global Tech, Global Policy Watch, and Inside Privacy blogs.

Jennifer Johnson

Jennifer Johnson is co-chair of the firm’s Communications & Media Practice Group.  She represents and advises broadcast licensees, trade associations, and other media entities on a wide range of issues, including:  regulatory and policy advocacy; network affiliation and other programming agreements; media joint ventures, mergers and acquisitions; carriage negotiations with cable, satellite and telco companies; media ownership and attribution; and other strategic, regulatory and transactional matters.

Ms. Johnson assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission and Congress and through transactions and other business arrangements.  Her broadcast clients draw particular benefit from her deep experience and knowledge with respect to network/affiliate issues, retransmission consent arrangements, and other policy and business issues facing the industry.  Ms. Johnson also assists investment clients in structuring, evaluating and pursuing potential media investments.  She has been recognized by Best Lawyers, Chambers USA, Legal 500 USA, Washington DC Super Lawyers, and the Washingtonian as a leading lawyer in her field.

Matthew Shapanka

Matthew Shapanka draws on more than 15 years of experience – including on Capitol Hill, at Covington, and in state government – to advise and counsel clients across a range of industries on significant legislative, regulatory, and enforcement matters. He develops and executes complex, multifaceted public policy initiatives for clients seeking actions by Congress, state legislatures, and federal and state government agencies, many with significant legal and political opportunities and risks.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters within the Committee’s jurisdiction, including federal election law and campaign finance, and oversight of the Federal Election Commission, legislative branch agencies, security and maintenance of the U.S. Capitol Complex, and Senate rules and regulations.

Most significantly, Matt led the Rules Committee staff work on the Electoral Count Reform and Presidential Transition Improvement Act – landmark bipartisan legislation to update the antiquated process of certifying and counting electoral votes in presidential elections that President Biden signed into law in 2022.

As Chief Counsel, Matt was a lead attorney on the joint bipartisan investigation (with the Homeland Security and Governmental Affairs Committee) into the security planning and response to the January 6, 2021 attack on the Capitol. In that role, he oversaw the collection and review of documents, led interviews and depositions of key government officials, advised the Chairwoman and Committee members on two high-profile joint hearings, and drafted substantial portions of the Committees’ staff report on the attack. He also led oversight of the Capitol Police, Architect of the Capitol, Senate Sergeant at Arms, and executive branch agencies involved in implementing the Committees’ recommendations, including additional legislation and hearings.

Both in Congress and at the firm, Matt has prepared many corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at legislative, oversight, or nomination hearings before congressional committees, as well as witnesses appearing at congressional depositions and transcribed interviews. He is also an experienced legislative drafter who has composed dozens of bills introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas.

In addition to his policy work, Matt advises and represents clients on the full range of political law compliance and enforcement matters involving federal election, campaign finance, lobbying, and government ethics laws, the Securities and Exchange Commission’s “Pay-to-Play” rule, and the election and political laws of states and municipalities across the country.

Before law school, Matt worked as a research analyst in the Massachusetts Recovery & Reinvestment Office, where he worked on all aspects of state-level policy, communications, and compliance for federal stimulus funding awarded to Massachusetts under the American Recovery & Reinvestment Act of 2009. He has also worked for federal, state, and local political candidates in Massachusetts and New Hampshire.

Jayne Ponder

Jayne Ponder is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity Practice Group. Jayne’s practice focuses on a broad range of privacy, data security, and technology issues. She provides ongoing privacy and data protection counsel to companies, including on topics related to privacy policies and data practices, the California Consumer Privacy Act, and cyber and data security incident response and preparedness.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly advises clients on complying with federal, state, and global privacy and competition frameworks and AI regulations. He also assists clients in investigating compliance issues, preparing for federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.