New York Legislature Passes Sweeping AI Safety Legislation

By Jennifer Johnson, Micaela McMurrough, Jayne Ponder, August Gweon & Analese Bridges on June 24, 2025

On June 12, the New York legislature passed the Responsible AI Safety & Education (“RAISE”) Act (S 6953), a frontier model public safety bill that would establish safeguard, reporting, disclosure, and other requirements for large developers of frontier AI models.  If signed into law by Governor Kathy Hochul (D), the RAISE Act would make New York the first state in the nation to enact public safety regulations for frontier model developers and would impose substantial fines on model developers in scope. 

The bill, which passed the New York Senate on a 58-1-4 vote and the New York Assembly on a 119-22 vote, advances a similar purpose to California’s Safe & Secure Innovation for Frontier AI Models Act (SB 1047), an AI safety bill that was vetoed by California Governor Gavin Newsom (D) in 2024.  The RAISE Act’s substantive provisions, however, are narrower than SB 1047.  For example, the RAISE Act does not include third-party independent auditing requirements or whistleblower protections for employees.  Following the bill’s passage, New York State Senator and bill co-sponsor Andrew Gounardes stated that the bill would “ensure[] AI can flourish,” while requiring “reasonable, commonsense safeguard[s] we’d expect of any company working on a potentially dangerous product.”

Covered Models and Developers.  The RAISE Act solely addresses “frontier models,” defined as an AI model that is either (1) trained on more than 10²⁶ FLOPS and costing more than $100 million in compute or (2) produced by applying “knowledge distillation” to a frontier model and costing more than $5 million in compute.  The bill’s obligations would apply to “large developers” of frontier models, i.e., persons that have trained at least one frontier model and have spent more than $100 million in aggregate compute costs to train frontier models.

Critical Harms and Safety Incidents.  Similar to SB 1047, the RAISE Act’s requirements focus on reducing risks of “critical harm.”  The bill defines “critical harm” as death or serious injury to 100 or more people, or at least $1 billion in damage to rights in money or property, caused or materially enabled by a large developer’s use, storage, or release of a frontier model through (1) the creation or use of a chemical, biological, radiological, or nuclear weapon or (2) the AI model engaging in conduct (i) with no meaningful human intervention and (ii) that, if committed by a human, would constitute a crime under the New York Penal Code requiring intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.

Safety Incident Reporting.  The bill also would require large developers to report “safety incidents” affecting their frontier models to the New York Attorney General and the New York Division of Homeland Security and Emergency Services within 72 hours after a safety incident occurs, or after facts establish a “reasonable belief” that a safety incident has occurred.  The bill defines “safety incident” as (1) a known incidence of a “critical harm” or (2) circumstances that provide “demonstrable evidence of an increased risk of critical harm” resulting from an incident of frontier model autonomous behavior other than at the request of the user, unauthorized release or access, critical failure of technical or administrative controls, or unauthorized use.

Pre-Deployment Safeguards.  Prior to deploying a frontier model, large developers would be required to implement “appropriate safeguards” to prevent unreasonable risk of critical harm.  After development, large developers would be prohibited from deploying frontier models that create an unreasonable risk of critical harm, or from making false or materially misleading statements or omissions related to documents retained under the Act.

Pre-Deployment Documentation and Disclosure Requirements.  The RAISE Act would also impose several documentation and disclosure requirements on large developers prior to deploying a frontier model, including:

  • Safety and Security Protocols.  Large developers would be required to implement, publish, and annually review a written “safety and security protocol” that describes the developer’s (1) procedures and protections to reduce risks of critical harm; (2) cybersecurity protections that reduce risks of unauthorized access or misuse; (3) testing procedures for evaluating unreasonable risks of critical harm or misuse; and (4) senior personnel responsible for ensuring compliance.
  • Documentation.  Large developers would be required to retain an unredacted copy of their safety and security protocols, records of updates and revisions, and information on specific frontier model tests and test results or information sufficient for third parties to replicate testing procedures for as long as the frontier model is deployed, plus five years.
  • Disclosure.  Large developers would be required to disclose copies of their safety and security protocols, with appropriate redactions, to the New York Attorney General and New York Division of Homeland Security and Emergency Services, and to provide access to the safety and security protocol with redactions limited to those required by federal law, upon request.

The RAISE Act omits the third-party auditing requirements and whistleblower protections that were cornerstones of the vetoed California SB 1047 proposal.  On June 17, the Joint California Policy Working Group on AI Frontier Models released the final version of its report on Frontier AI Policy, recommending that frontier model regulations incorporate third-party risk assessments and whistleblower protections, in addition to public-facing transparency requirements and adverse event reporting.

Enforcement.  The Act would be enforced by civil actions brought by the New York Attorney General.  Violations would be punishable by up to $10 million in civil penalties for first violations and up to $30 million for subsequent violations, in addition to injunctive or declaratory relief.  The Act does not create a private right of action.

Under New York Senate rules, the RAISE Act must be delivered to the Governor within 45 days of the date of passage, i.e., by July 27, 2025.  Governor Hochul will then have 30 days to sign or veto the bill.  If enacted, the RAISE Act would come into effect 90 days after it is signed into law.

*              *              *

We will continue to provide updates on meaningful developments related to artificial intelligence and technology across our Inside Global Tech, Global Policy Watch, and Inside Privacy blogs.

Jennifer Johnson

Jennifer Johnson is co-chair of the firm’s Communications & Media Practice Group.  She represents and advises broadcast licensees, trade associations, and other media entities on a wide range of issues, including:  regulatory and policy advocacy; network affiliation and other programming agreements; media joint ventures, mergers and acquisitions; carriage negotiations with cable, satellite and telco companies; media ownership and attribution; and other strategic, regulatory and transactional matters.

Ms. Johnson assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission and Congress and through transactions and other business arrangements.  Her broadcast clients draw particular benefit from her deep experience and knowledge with respect to network/affiliate issues, retransmission consent arrangements, and other policy and business issues facing the industry.  Ms. Johnson also assists investment clients in structuring, evaluating and pursuing potential media investments.  She has been recognized by Best Lawyers, Chambers USA, Legal 500 USA, Washington DC Super Lawyers, and the Washingtonian as a leading lawyer in her field.

Micaela McMurrough
Jayne Ponder

Jayne Ponder is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity Practice Group. Jayne’s practice focuses on a broad range of privacy, data security, and technology issues. She provides ongoing privacy and data protection counsel to companies, including on topics related to privacy policies and data practices, the California Consumer Privacy Act, and cyber and data security incident response and preparedness.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients for complying with federal, state, and global privacy and competition frameworks and AI regulations. He also assists clients in investigating compliance issues, preparing for federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.

  • Posted in:
    International
  • Blog:
    Global Policy Watch
  • Organization:
    Covington & Burling LLP
