State lawmakers are considering a diverse array of AI legislation, with hundreds of bills introduced in 2025.  As described further in this blog post, many of these AI legislative proposals fall into several key categories: (1) comprehensive consumer protection legislation similar to the Colorado AI Act, (2) sector-specific legislation on automated decision-making, (3) chatbot regulation, (4) generative AI transparency requirements, (5) AI data center and energy usage requirements, and (6) frontier model public safety legislation.  Although these categories represent just a subset of current AI legislative activity, they illustrate the major priorities of state legislatures and highlight new AI laws that may be on the horizon.

  • Consumer Protection.  Lawmakers in over a dozen states have introduced legislation aimed at reducing algorithmic discrimination in high-risk AI or automated decision-making systems used to make “consequential decisions,” embracing the risk- and role-based approach of the Colorado AI Act.  In general, these frameworks would establish developer and deployer duties of care to protect consumers from algorithmic discrimination and would require risks or instances of algorithmic discrimination to be reported to state attorneys general.  They would also require notices to consumers and disclosures to other parties and establish consumer rights related to the AI system.  For example, Virginia’s High-Risk AI Developer & Deployer Act (HB 2094), which follows this approach, passed the state legislature this month.
  • Sector-Specific Automated Decision-Making.  Lawmakers in more than a dozen states have introduced legislation that would regulate the use of AI or automated decision-making tools (“ADMT”) in specific sectors, including healthcare, insurance, employment, and finance.  For example, Massachusetts HD 3750 would amend the state’s health insurance consumer protection law to require healthcare insurance carriers to disclose the use of AI or ADMT for reviewing insurance claims and to report AI and training data information to the Massachusetts Division of Insurance.  Other bills would regulate the use of ADMT in the financial sector, such as New York A773, which would require banks that use ADMT for lending decisions to conduct annual disparate impact analyses and disclose such analyses to the New York Attorney General.  Relatedly, state legislatures are considering a wide range of approaches to regulating employers’ uses of AI and ADMT.  For example, Georgia SB 164 and Illinois SB 2255 would both prohibit employers from using ADMT to set wages unless certain requirements are satisfied.
  • Chatbots.  Another key trend in 2025 AI legislation focuses on AI chatbots.  For example, Hawaii HB 639 / SB 640, Idaho HB 127, Illinois HB 3021, Massachusetts SD 2223, and New York A222 would either require chatbot providers to display prominent disclosures informing users that they are not interacting with a human or impose liability on chatbot providers for misleading or deceptive chatbot communications.
  • Generative AI Transparency.  State legislatures are also considering legislation to regulate providers of generative AI systems and platforms that host synthetic content.  Some of these bills, such as Washington HB 1170, Florida HB 369, Illinois SB 1929, and New Mexico HB 401, would require generative AI providers to include watermarks in AI-generated outputs and provide free AI detection tools for users, similar to the California AI Transparency Act, which passed last year.  Other bills, such as Illinois SB 1792 and Utah SB 226, would require generative AI owners, licensees, or operators to display notices to users that disclose the use of generative AI or warn users that AI-generated outputs may be inaccurate, inappropriate, or harmful.
  • AI Data Centers & Energy.  Lawmakers across the country have introduced legislation to address the growing energy demands of AI development and related environmental concerns.  For example, California AB 222 would require data centers to estimate and report to the state the total energy used to develop certain large AI models, and would require covered AI developers to estimate and publish the total energy used to develop each model.  Similarly, Massachusetts HD 4192 would require both AI developers and operators of sources of greenhouse gas emissions to monitor, track, and report environmental impacts and mitigations.
  • Frontier Model Public Safety.  Following the California legislature’s passage of SB 1047 last year and the Governor’s subsequent veto of that bill, California State Senator Scott Wiener filed SB 53 with the goal of “establish[ing] safeguards for the development of [AI] frontier models.”  Lawmakers in other states are also considering legislation to address public safety risks posed by “frontier” or “foundation” models, generally defined as AI models that meet certain computational or monetary thresholds.  For example, Illinois HB 3506 would require developers of certain large AI models to conduct risk assessments every 90 days, publish annual third-party audits, and implement foundation model safety and security protocols.  As another approach, Rhode Island H 5224 would impose strict liability on developers of covered AI models for all injuries to non-users that are factually and proximately caused by the covered model.

*              *              *

Although the likelihood of passage for these AI bills remains unclear, any state AI legislation that does pass is likely to have significant effects on the U.S. AI regulatory landscape, especially in the absence of federal action on AI.  We will continue to monitor these and related AI developments across our Inside Global Tech, Global Policy Watch, and Inside Privacy blogs.

Jennifer Johnson


Jennifer Johnson is co-chair of the firm’s Communications & Media Practice Group.  She represents and advises broadcast licensees, trade associations, and other media entities on a wide range of issues, including:  regulatory and policy advocacy; network affiliation and other programming agreements; media joint ventures, mergers and acquisitions; carriage negotiations with cable, satellite and telco companies; media ownership and attribution; and other strategic, regulatory and transactional matters.

Ms. Johnson assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission and Congress and through transactions and other business arrangements.  Her broadcast clients draw particular benefit from her deep experience and knowledge with respect to network/affiliate issues, retransmission consent arrangements, and other policy and business issues facing the industry.  Ms. Johnson also assists investment clients in structuring, evaluating and pursuing potential media investments.  She has been recognized by Best Lawyers, Chambers USA, Legal 500 USA, Washington DC Super Lawyers, and the Washingtonian as a leading lawyer in her field.

Jayne Ponder


Jayne Ponder is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity Practice Group. Jayne’s practice focuses on a broad range of privacy, data security, and technology issues. She provides ongoing privacy and data protection counsel to companies, including on topics related to privacy policies and data practices, the California Consumer Privacy Act, and cyber and data security incident response and preparedness.

August Gweon


August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients for complying with federal, state, and global privacy and competition frameworks and AI regulations. He also assists clients in investigating compliance issues, preparing for federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.