This update highlights key legislative and regulatory developments in the first quarter of 2026 related to artificial intelligence (“AI”), connected and automated vehicles (“CAVs”), and Internet of Things (“IoT”).
I. Federal AI Legislative Developments
In the first quarter, members of Congress introduced several AI bills related to nonconsensual images, chatbots, support for small businesses, and preemption in response to President Trump’s December 2025 AI Preemption Executive Order. For example:
- Nonconsensual AI-Generated Imagery: Following the enactment of the federal TAKE IT DOWN Act, the Senate passed the DEFIANCE Act (S.1837) in January, which would provide victims of nonconsensual, AI-generated intimate imagery with a private right of action. The bill has been “held at the desk” in the House since it passed the Senate, meaning it has not yet been referred to specific committees for consideration. The delay in referral could allow the full House to vote on the legislation once there is sufficient support, if the relevant committees agree to waive jurisdiction.
- Chatbots: Several legislative proposals have focused on chatbot safeguards, including for minor users. For instance, Sen. Ed Markey (D-MA) introduced the Youth AI Privacy Act (S.4199), which would require entities that make AI chatbots available to minors to implement certain safe design features. In the House of Representatives, Rep. Brett Guthrie (R-KY) introduced the SAFE BOTs Act as part of the KIDS Act (H.R.7757), an omnibus online child safety bill. The SAFE BOTs Act would require chatbot providers to make certain disclosures and implement safety guardrails for minor users, among other requirements.
- Preemption: Members of Congress continue to debate AI preemption. In March, Rep. Don Beyer (D-VA) and other Democratic lawmakers introduced the GUARDRAILS Act (H.R.8031), which would state that the White House’s AI Preemption Executive Order “shall have no force or effect” and prohibit federal funds from being used for its implementation. In contrast, Sen. Marsha Blackburn (R-TN)’s discussion draft of the TRUMP AMERICA AI Act, discussed in detail below, prohibits preemption of any “generally applicable law” and in some cases would expressly prohibit preemption of state laws that are more stringent than, or do not conflict with, the bill’s provisions.
- Omnibus Bills: Legislators also have proposed comprehensive legislative packages covering a broad range of AI-related topics. For instance, Sen. Marsha Blackburn’s proposed TRUMP AMERICA AI Act contains a number of AI legislative proposals beyond preemption, including the Kids Online Safety Act (online platform minor safeguards), NO FAKES Act (prohibiting unauthorized digital replicas), GUARD Act (companion chatbot minor safeguards), TRAIN Act (copyright and AI model training data), AI LEAD Act (AI product liability standards), AI Risk Evaluation Act (frontier model evaluations), Future of AI Innovation Act (voluntary AI standards), CREATE AI Act (codifying the National AI Research Resource), and COPIED Act (synthetic content provenance). Notably, large policy-focused legislative packages typically face challenges passing Congress as a whole, though components may survive. The package also includes KOSA, which is not primarily focused on AI and has faced separate challenges to passage due to its scope.
II. Federal AI Regulatory Developments
In the first quarter of 2026, the White House and federal agencies took several steps related to AI regulation and AI adoption by federal agencies. For example:
- White House: In March, the Trump Administration released its National Policy Framework for AI, encompassing numerous AI-related recommendations to Congress that it framed as promoting a “light touch” approach to AI regulation, protections for minors, IP protection, free speech, innovation, and protection of workers. The framework also calls for preempting state AI laws that “impose undue burdens” on AI development and use.
- Department of Justice: In January, the Department of Justice established its AI Litigation Task Force, which has the “sole responsibility” to challenge state AI laws that unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or “are otherwise unlawful in the Attorney General’s judgment.”
- NIST: The National Institute of Standards and Technology (“NIST”) launched several initiatives focused on establishing standards for agentic AI systems. In January, NIST’s Center for AI Standards and Innovation (“CAISI”) issued a Request for Information related to practices and methodologies for measuring and improving the secure development and deployment of agentic systems. NIST also launched the AI Agent Standards Initiative to support the development of industry standards for agents and released a concept paper on agentic identity standards.
III. State AI Legislative Developments
State lawmakers have introduced over 600 AI bills with requirements for private entities in the 2026 legislative sessions so far. Enacted and/or passed (but not yet enacted) laws show a continued focus on companion chatbots; AI transparency; digital replicas and other synthetic content; and the use of AI by mental health providers and health insurers. For example:
- Chatbot Safety: AI companions and chatbot safety continued to be a focus of state lawmakers this quarter, with new laws enacted in Washington (HB 2225), Oregon (SB 1546), and Idaho (Conversational AI Safety Act (SB 1297)). Oregon SB 1546 establishes disclosure and mental health protocol requirements for “operators,” i.e., entities that make publicly available or control access to “AI companion chatbots.” In addition to these requirements, Washington HB 2225 will require operators to implement reasonable measures to prevent AI companion chatbots from claiming to be human or engaging in manipulative engagement techniques, while Idaho SB 1297 will require operators to provide tools for managing “privacy and account settings” to users and parents of users under 13.
- Transparency & Content Provenance: Multiple states have adopted or may soon adopt transparency requirements similar to those in the 2025 California AI Transparency Act. New laws in Utah (HB 276) and Washington (HB 1170) will require certain providers of genAI systems to include “latent disclosures,” with Washington also requiring covered providers to provide free “provenance detection tools” and optional “manifest disclosures.” Additionally, New York lawmakers passed A3411, which awaits the Governor’s signature and would require certain entities to display a notice that the outputs of generative AI systems “may be inaccurate.”
- Harmful AI-Generated Content Regulation: State lawmakers enacted laws regulating the creation or distribution of harmful AI-generated content. Wyoming (HB 102) and Utah (HB 276) focus on restricting creation or distribution of nonconsensual AI-generated sexual material, with Wyoming’s law also prohibiting the development or distribution of AI systems designed, intended, or known to be used to (1) create, promote, or distribute AI-generated sexual material or child pornography, or (2) promote self-harm. Utah also enacted SB 256, establishing an individual right to consent to the use of one’s “personal identity” created through generative AI, and prohibiting the use of generative AI as a defense to a slander or libel claim.
- Health Insurance & Healthcare: Lawmakers in multiple states passed laws to regulate the use of AI in healthcare settings. Indiana (HB 1271), Utah (SB 319), and Washington (SB 5395) enacted new laws regulating the use of AI by health insurers to evaluate claims and prohibiting health insurers from using AI as the sole basis for denying or modifying claims. Legislation passed in Tennessee (SB 1580) and Delaware (HB 191), if signed by their respective governors, would prohibit AI systems from being represented or marketed as qualified mental health professionals or licensed professional healthcare workers, respectively.
Additionally, Colorado Governor Jared Polis released a draft bill that would replace the 2024 Colorado AI Act to impose requirements on developers and deployers of covered automated decision-making technology (“ADMT”), i.e., ADMT that is used to “materially influence a consequential decision.” We will continue to closely monitor changes to the Colorado law.
IV. Connected & Automated Vehicles
The first quarter of 2026 brought activity related to CAV legislation, enforcement, and regulation. For example:
- Federal Legislative Activity: Federal legislators considered a number of CAV-related bills this quarter. On January 13, the House Energy and Commerce Committee’s Subcommittee on Commerce, Manufacturing, and Trade held a hearing covering a number of CAV-related bills, including the Safely Ensuring Lives Future Deployment and Research In Vehicle Evolution Act of 2026 (the “SELF DRIVE Act of 2026”) (H.R.7390) (Rep. Latta (R-OH)), which would create a federal framework for AV deployment. (The America Drives Act (H.R.4661), introduced in 2025, proposes a similar framework for deployment of large, commercial AVs.) The Senate Commerce, Science, and Transportation Committee also held a hearing on CAV development, safety, and regulation on February 4.
- NHTSA: NHTSA continues to focus on updating its regulatory approach to accommodate advances in CAV technology. On January 23, NHTSA requested input on how the U.S. should proceed with respect to a proposed UN draft Global Technical Regulation for Automated Driving Systems; it received more than fifty comments in response. On March 10, NHTSA held an AV Safety Forum, which covered advances in CAV technology and steps the agency is taking to support CAV innovation and safety. Speakers included Transportation Secretary Sean Duffy, NHTSA Administrator Jonathan Morrison, White House Office of Science and Technology Policy Director Michael Kratsios, and representatives from Zoox, Waymo, Uber, and other industry members. The agency announced plans to update a number of safety rules that do not account for AVs and to roll out new voluntary technical guidance for the industry.
- FTC Settlement: On January 14, the FTC announced a settlement with GM and OnStar to resolve the FTC’s January 2025 complaint alleging that GM used a misleading enrollment process to sign up consumers for its OnStar connected vehicle service in violation of Section 5 of the FTC Act. The FTC’s complaint also had alleged that GM failed to clearly disclose that it collected consumers’ precise geolocation and driving behavior via an OnStar feature and sold that data to third parties without consumers’ consent.
V. Internet of Things
In the first quarter of 2026, the FCC reopened applications for Lead Administrator and Label Administrators as part of its Cyber Trust Mark program, following the FCC’s original selection of 11 Label Administrators and a Lead Administrator in 2024. The new application period comes after the company formerly serving as Lead Administrator withdrew from that role in December 2025. No new selections have been announced to date.
We will continue to update you on meaningful developments in these quarterly updates and across our blogs. Please also stay tuned for our upcoming quarterly video briefings on AI developments!