On February 27, California State Senator Scott Wiener (D-San Francisco) released the text of SB 53, reviving efforts to establish AI safety regulations in a state that is home to both Silicon Valley and the nation’s first comprehensive privacy law. SB 53 takes a significantly narrower approach than Senator Wiener’s Safe & Secure Innovation for Frontier AI Models Act (SB 1047), which Governor Gavin Newsom (D) vetoed last year despite its overwhelming passage in both chambers of California’s legislature. Instead, SB 53 focuses on rights for employee whistleblowers who work for developers of certain foundation models.
Like SB 1047, SB 53 focuses on developers of “foundation models” deemed to present a “critical risk.” The bill would apply to AI models that are trained on broad sets of data, use “self-supervision in the training process,” and have a wide range of use cases. It defines “developers” as persons who have trained at least one foundation model with a quantity of computational power that costs at least $100 million, and defines “critical risks” as death or serious injury to more than 100 people, or more than $1 billion in damage, resulting from (1) the “creation or release” of chemical, biological, radiological, or nuclear weapons; (2) a cyberattack; (3) conduct by a foundation model that would be criminal if committed by a human; or (4) a foundation model “evading the control of its developer or user.”
SB 53 would specifically protect employees of foundation model developers who disclose information to the California Attorney General, federal authorities, or other employees concerning potential critical risks posed by the developer’s activities, or concerning any allegedly false or misleading statements about the developer’s risk management practices. The bill would prohibit developers from preventing such disclosures and from retaliating against employees who make them, and would require developers to give all employees clear notice of their rights under the bill. Finally, SB 53 would require developers to establish internal processes through which employees can anonymously report developer activities that pose critical risks, with monthly status updates to reporting employees on any resulting investigation and quarterly updates to the developer’s officers and directors.
By contrast, SB 53 lacks some of the broader safety and security requirements included in SB 1047, such as third-party safety audits, mandatory shutdown capabilities, safety and security protocols, and incident reporting. In vetoing SB 1047 last year, Governor Newsom criticized the bill for attempting to impose stringent AI standards based on the “cost and number of computations needed to develop an AI model” and announced a Joint California Policy Working Group on AI Frontier Models to develop recommended AI guardrails. The working group’s draft recommendations are expected in the coming weeks.
Notably, SB 53 provides employees with a private right of action, permitting courts to enjoin developer violations of the bill’s requirements and to award reasonable attorney’s fees.
SB 53 is just the latest in a wave of AI legislation currently under consideration by state legislatures, including foundation model safety legislation introduced in Colorado, Illinois, Massachusetts, and Rhode Island. We will continue to monitor these developments across our Global Policy Watch, Inside Global Tech, and Inside Privacy blogs.