U.S. Tech Legislative & Regulatory Update – First Quarter 2026

By Jennifer Johnson, Nicholas Xenakis, Jayne Ponder, August Gweon, Conor Kane, Grace Howard, Irene Kim, Evan Chiacchiaro, Rosie Moss & Micah Telegen on April 6, 2026

This update highlights key legislative and regulatory developments in the first quarter of 2026 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and Internet of Things (“IoT”).

I. Federal AI Legislative Developments

In the first quarter, members of Congress introduced several AI bills related to nonconsensual images, chatbots, support for small businesses, and preemption in response to President Trump’s December 2025 AI Preemption Executive Order.  For example:

  • Nonconsensual AI-Generated Imagery: Following the enactment of the federal TAKE IT DOWN Act, the Senate passed the DEFIANCE Act (S.1837) in January, which would provide individuals who are victims of nonconsensual, AI-generated intimate imagery with a private right of action.  The bill has been “held at the desk” in the House since it passed the Senate, meaning it has not yet been referred to specific committees for consideration.  Because of this delay in referral, the full House could vote on the legislation directly once there is sufficient support, if the relevant committees agree to waive jurisdiction.
  • Chatbots: Several legislative proposals have focused on chatbot safeguards, including for minor users. For instance, Sen. Ed Markey (D-MA) introduced the Youth AI Privacy Act (S.4199), which would require entities that make AI chatbots available to minors to implement certain safe design features.  In the House of Representatives, Rep. Brett Guthrie (R-KY) introduced the SAFE BOTs Act as part of the KIDS Act (H.R.7757), an omnibus online child safety bill.  The SAFE BOTs Act would require chatbot providers to provide disclosures and implement safety guardrails for minor users, among other requirements.
  • Preemption: Members of Congress continue to debate AI preemption. In March, Rep. Don Beyer (D-VA) and other Democratic lawmakers introduced the GUARDRAILS Act (H.R.8031), which would state that the White House’s AI Preemption Executive Order “shall have no force or effect” and prohibit federal funds from being used for its implementation. In contrast, Sen. Marsha Blackburn (R-TN)’s discussion draft of the TRUMP AMERICA AI Act, discussed in detail below, prohibits preemption of any “generally applicable law,” and in some cases would expressly prohibit preemption of state laws that are more stringent than, or do not conflict with, the bill’s provisions.
  • Omnibus Bills: Legislators also have proposed comprehensive legislative packages covering a broad range of AI-related topics. For instance, Sen. Marsha Blackburn’s proposed TRUMP AMERICA AI Act contains a number of AI legislative proposals beyond preemption, including the Kids Online Safety Act (online platform minor safeguards), NO FAKES Act (prohibiting unauthorized digital replicas), GUARD Act (companion chatbot minor safeguards), TRAIN Act (copyright and AI model training data), AI LEAD Act (AI product liability standards), AI Risk Evaluation Act (frontier model evaluations), Future of AI Innovation Act (voluntary AI standards), CREATE AI Act (codifying the National AI Research Resource), and COPIED Act (synthetic content provenance).  Notably, large policy-focused legislative packages typically face challenges passing Congress as a whole, though individual components may survive.  The package also contains KOSA, which is not primarily focused on AI and has faced separate challenges to passage due to its scope.

II. Federal AI Regulatory Developments

In the first quarter of 2026, the White House and federal agencies took several steps related to AI regulation and AI adoption by federal agencies.  For example:

  • White House:  In March, the Trump Administration released its National Policy Framework for AI, encompassing numerous AI-related recommendations to Congress that it framed as promoting a “light touch” approach to AI regulation, protections for minors, IP protection, free speech, innovation, and protection of workers.  The framework also calls for preempting state AI laws that “impose undue burdens” on AI development and use.
  • Department of Justice:  In January, the Department of Justice established its AI Litigation Task Force, which has the “sole responsibility” to challenge state AI laws that unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or “are otherwise unlawful in the Attorney General’s judgment.”
  • NIST:  The National Institute of Standards and Technology (“NIST”) launched several initiatives focused on establishing standards for agentic AI systems.  In January, NIST’s Center for AI Standards and Innovation (“CAISI”) issued a Request for Information related to practices and methodologies for measuring and improving the secure development and deployment of agentic systems.  NIST also launched the AI Agent Standards Initiative to support the development of industry standards for agents, and released a concept paper on agentic identity standards.

III. State AI Legislative Developments

State lawmakers have introduced over 600 AI bills with requirements for private entities in the 2026 legislative sessions so far. Laws that have been enacted, or passed but not yet signed, show a continued focus on companion chatbots; AI transparency; digital replicas and other synthetic content; and the use of AI by mental health providers and health insurers. For example:

  • Chatbot Safety: AI companions and chatbot safety continued to be a focus of state lawmakers this quarter, with new laws enacted in Washington (HB 2225), Oregon (SB 1546), and Idaho (Conversational AI Safety Act (SB 1297)).  Oregon SB 1546 establishes disclosure and mental health protocol requirements for “operators,” i.e., entities that make publicly available or control access to “AI companion chatbots.”  In addition to these requirements, Washington HB 2225 also will require operators to implement reasonable measures to prevent AI companion chatbots from claiming to be human or engaging in manipulative engagement techniques, while Idaho SB 1297 will require operators to provide tools for managing “privacy and account settings” to users and parents of users under 13.
  • Transparency & Content Provenance:  Multiple states have adopted or may soon adopt transparency requirements similar to those in the 2025 California AI Transparency Act. New laws in Utah (HB 276) and Washington (HB 1170) will require certain providers of genAI systems to include “latent disclosures,” with Washington also requiring covered providers to provide free “provenance detection tools” and optional “manifest disclosures.”  Additionally, New York lawmakers passed A3411, which awaits the Governor’s signature and would require certain entities to display a notice that the outputs of generative AI systems “may be inaccurate.”  
  • Harmful AI-Generated Content Regulation: State lawmakers enacted laws regulating the creation or distribution of harmful AI-generated content. Wyoming (HB 102) and Utah (HB 276) focus on restricting creation or distribution of nonconsensual AI-generated sexual material, with Wyoming’s law also prohibiting the development or distribution of AI systems designed, intended, or known to be used to (1) create, promote, or distribute AI-generated sexual material or child pornography, or (2) promote self-harm. Utah also enacted SB 256, establishing an individual right to consent to the use of one’s “personal identity” created through generative AI, and prohibiting the use of generative AI as a defense to a slander or libel claim.
  • Health Insurance & Healthcare: Lawmakers in multiple states passed laws to regulate the use of AI in healthcare settings. Indiana (HB 1271), Utah (SB 319), and Washington (SB 5395) enacted new laws regulating the use of AI by health insurers to evaluate claims and prohibiting health insurers from using AI as a sole basis for denying or modifying claims.  Legislation passed in Tennessee (SB 1580) and Delaware (HB 191), if signed by their respective governors, would prohibit AI systems from being represented or marketed as qualified mental health professionals or licensed professional healthcare workers, respectively.

Additionally, Colorado Governor Jared Polis released a draft bill that would replace the 2024 Colorado AI Act to impose requirements on developers and deployers of covered automated decision-making technology (“ADMT”), i.e., ADMT that is used to “materially influence a consequential decision.”  We will continue to closely monitor changes to the Colorado law.

IV. Connected & Automated Vehicles

The first quarter of 2026 brought activity related to CAV legislation, enforcement, and regulation. For example:

  • Federal Legislative Activity: Federal legislators considered a number of CAV-related bills this quarter. On January 13, the House Energy and Commerce Committee’s Subcommittee on Commerce, Manufacturing, and Trade held a hearing covering a number of CAV-related bills, including the Safely Ensuring Lives Future Deployment and Research In Vehicle Evolution Act of 2026 (the “SELF DRIVE Act of 2026”) (H.R.7390) (Rep. Latta (R-OH)), which would create a federal framework for AV deployment.  (The America Drives Act (H.R.4661), introduced in 2025, proposes a similar framework for deployment of large, commercial AVs.)  The Senate Commerce, Science, and Transportation Committee also held a hearing on CAV development, safety, and regulation on February 4.
  • NHTSA:  NHTSA continues to focus on updating its regulatory approach to accommodate advances in CAV technology.  On January 23, NHTSA requested input on how the U.S. should proceed with respect to a proposed UN draft Global Technical Regulation for Automated Driving Systems; it received more than fifty comments in response.  On March 10, NHTSA held an AV Safety Forum, which covered advances in CAV technology and steps the agency is taking to support CAV innovation and safety.  Speakers included Transportation Secretary Sean Duffy, NHTSA Administrator Jonathan Morrison, White House Office of Science and Technology Policy Director Michael Kratsios, and representatives from Zoox, Waymo, Uber, and other industry members.  The agency announced plans to update a number of safety rules that don’t account for AVs and to roll out new voluntary technical guidance for the industry.
  • FTC Settlement: On January 14, the FTC issued a settlement with GM and OnStar to resolve the FTC’s January 2025 complaint alleging that GM used a misleading enrollment process to sign up consumers for its OnStar connected vehicle service in violation of Section 5 of the FTC Act. The FTC’s complaint also had alleged that GM failed to clearly disclose that it collected consumers’ precise geolocation and driving behavior via an OnStar feature and sold that data to third parties without consumers’ consent.

V. Internet of Things

In the first quarter of 2026, the FCC reopened applications for the Lead Administrator and Label Administrator roles in its Cyber Trust Mark program, following the FCC’s original 2024 selection of 11 Label Administrators and a Lead Administrator.  The new application period comes after the company formerly serving as Lead Administrator withdrew from the position in December 2025.  No new selections have been announced to date.

We will continue to update you on meaningful developments in these quarterly updates and across our blogs.  Please also stay tuned for our upcoming quarterly video briefings on AI developments!

Jennifer Johnson

Jennifer Johnson is co-chair of the firm’s Communications & Media Practice Group.  She represents and advises broadcast licensees, trade associations, and other media entities on a wide range of issues, including:  regulatory and policy advocacy; network affiliation and other programming agreements; media joint ventures, mergers and acquisitions; carriage negotiations with cable, satellite and telco companies; media ownership and attribution; and other strategic, regulatory and transactional matters.

Ms. Johnson assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission and Congress and through transactions and other business arrangements.  Her broadcast clients draw particular benefit from her deep experience and knowledge with respect to network/affiliate issues, retransmission consent arrangements, and other policy and business issues facing the industry.  Ms. Johnson also assists investment clients in structuring, evaluating and pursuing potential media investments.  She has been recognized by Best Lawyers, Chambers USA, Legal 500 USA, Washington DC Super Lawyers, and the Washingtonian as a leading lawyer in her field.

Nicholas Xenakis

Nick Xenakis draws on his Capitol Hill experience to provide regulatory and legislative advice to clients in a range of industries, including technology. He has particular expertise in matters involving the Judiciary Committees, such as intellectual property, antitrust, national security, immigration, and criminal justice.

Nick joined the firm’s Public Policy practice after serving most recently as Chief Counsel for Senator Dianne Feinstein (D-CA) and Staff Director of the Senate Judiciary Committee’s Human Rights and the Law Subcommittee, where he was responsible for managing the subcommittee and Senator Feinstein’s Judiciary staff. He also advised the Senator on all nominations, legislation, and oversight matters before the committee.

Previously, Nick was the General Counsel for the Senate Judiciary Committee, where he managed committee staff and directed legislative and policy efforts on all issues in the Committee’s jurisdiction. He also participated in key judicial and Cabinet confirmations, including of an Attorney General and two Supreme Court Justices. Nick was also responsible for managing a broad range of committee equities in larger legislation, including appropriations, COVID-relief packages, and the National Defense Authorization Act.

Before his time on Capitol Hill, Nick served as an attorney with the Federal Public Defender’s Office for the Eastern District of Virginia. There he represented indigent clients charged with misdemeanor, felony, and capital offenses in federal court throughout all stages of litigation, including trial and appeal. He also coordinated district-wide habeas litigation following the Supreme Court’s decision in Johnson v. United States (invalidating the residual clause of the Armed Career Criminal Act).

Jayne Ponder

Jayne Ponder is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity Practice Group. Jayne’s practice focuses on a broad range of privacy, data security, and technology issues. She provides ongoing privacy and data protection counsel to companies, including on topics related to privacy policies and data practices, the California Consumer Privacy Act, and cyber and data security incident response and preparedness.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients for complying with federal, state, and global privacy and competition frameworks and AI regulations. He also assists clients in investigating compliance issues, preparing for federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.

Conor Kane

Conor Kane advises clients on a broad range of privacy, artificial intelligence, telecommunications, and emerging technology matters. He assists clients with complying with state privacy laws, developing AI governance structures, and engaging with the Federal Communications Commission.

Before joining Covington, Conor worked in digital advertising helping teams develop large consumer data collection and analytics platforms. He uses this experience to advise clients on matters related to digital advertising and advertising technology.

Grace Howard

Grace Howard is an associate in the firm’s Washington, DC office. She represents and advises clients on a range of cybersecurity, data privacy, and government contracts issues including cyber and data security incident response and preparedness, regulatory compliance, and internal investigations including matters involving allegations of noncompliance with U.S. government cybersecurity regulations and fraud under the False Claims Act.

Prior to joining the firm, Grace served in the United States Navy as a Surface Warfare Officer and currently serves in the U.S. Navy Reserve.

Irene Kim

Irene Kim is an associate in the firm’s Washington, DC office, where she is a member of the Privacy and Cybersecurity and Advertising and Consumer Protection Investigations practice groups. She advises clients on a broad range of issues, including U.S. state and federal AI legislation, comprehensive state privacy laws, and regulatory compliance matters.

Rosie Moss

Rosie Moss is an associate in the firm’s Washington, DC office. She is a member of the Data Privacy and Cybersecurity Practice Group and the Technology and Communications Regulation Practice Group.

Rosie advises clients on a wide range of data privacy and technology regulatory issues, including emerging artificial intelligence compliance matters. She assists clients in complying with federal and state privacy laws and Federal Communications Commission (FCC) regulations. Rosie also maintains an active pro bono practice.

Micah Telegen

Micah Telegen is an associate in the firm’s Washington, DC office and a member of the Litigation and Investigations Practice Group. He also maintains an active pro bono practice and has experience litigating on behalf of tenants facing eviction.

Prior to joining the firm, Micah served as a law clerk to Judge Jacques L. Wiener, Jr., of the U.S. Court of Appeals for the Fifth Circuit and Judge Lance M. Africk of the U.S. District Court for the Eastern District of Louisiana. 

  • Posted in:
    International
  • Blog:
    Global Policy Watch
  • Organization:
    Covington & Burling LLP
