On March 20, 2026, the White House announced a comprehensive national legislative framework (the “Framework”) that tracks its December 2025 AI Preemption Executive Order and its July 2025 AI Action Plan. The Framework takes aim at hot-button AI policy topics – such as child safety and privacy, AI training and copyright, liability protections and preemption of state laws – that the Trump administration believes must be addressed to maintain America’s leadership in AI innovation. The Administration set an ambitious deadline, calling on Congress to turn this policy blueprint into law by year’s end (some administration advisors are cautiously optimistic that a bipartisan solution based on the Framework is within reach). At the same time, Senator Marsha Blackburn released a 291-page discussion draft for national AI legislation, or “one federal rulebook for AI,” that would generally codify White House policy but diverges from it in several important ways.
At this juncture – with doubts about whether Congress can form a consensus around AI regulation even as state legislatures step up to fill the void – developers and deployers are left to watch and wait. For now, a patchwork of state AI laws remains, covering everything from child protection and health and safety to transparency measures and automated decisionmaking.
White House AI Framework – Selected Issues
- Preemption: The Framework asks Congress to preempt a large share of state AI laws applicable to developers and general-purpose systems that impose “undue burdens,” including laws that regulate AI development and laws targeting AI uses that would otherwise be lawful if performed without AI. It also calls for a form of liability shield protecting developers from penalties related to unlawful use of their models by third parties.[1] The Framework characterizes AI development as an “inherently interstate phenomenon,” yet states would retain general police powers to enforce laws protecting children and consumers, preventing fraud, governing zoning, and regulating a state’s own use of AI.
The proposed AI developer liability shield differs from existing CDA Section 230 protection of interactive computer services, which prohibits treating platforms as the publisher or speaker of third-party content; the White House proposal is instead framed as a limit on state-law penalties for third-party unlawful conduct involving AI models. Still, it would serve a similar purpose by breaking the chain of liability between the model maker and the bad acts of users or downstream actors. The Framework stops short of full immunity by preserving generally applicable state police-power laws, leaving the precise contours of any such shield unclear.
- Child Protection: The Framework encourages development of tools allowing parents to manage children’s privacy settings and features to protect minors against self-harm. It also states that Congress should affirm that existing child privacy protections apply to AI systems, including limits on data collection for model training and targeted advertising.
- Copyright Issues: The White House Framework takes a restrained, pro-training position. It states the Administration believes “training of AI models on copyrighted material does not violate copyright laws,” while supporting the courts, not Congress, as the preferred venue to resolve this and related fair use questions. The Framework also suggests Congress should consider enabling licensing regimes or collective rights systems through which rights holders could negotiate compensation from AI providers.
- Right of Publicity: The Framework urges Congress to consider a federal standard protecting individuals from the unauthorized distribution or commercial use of AI-generated digital replicas of their voice or likeness, with clear exceptions for parody and other expressive works.
- AI Development: To remove barriers to innovation, the White House urges Congress to establish regulatory sandboxes and make federal datasets accessible to industry and academia in “AI-ready formats.” Congress should also provide AI resources to small businesses, such as grants or tax incentives, to support wider deployment of AI tools.
The Blackburn Discussion Draft
On March 18, 2026, Senator Blackburn released a discussion draft of her comprehensive AI bill. The Framework and Blackburn’s draft are similar in overall direction: both seek to avoid a state-law patchwork, both stress protections for minors and for individuals’ digital replicas, and both include pro-innovation elements. Translated into legislation, the Framework would presumably look something like this draft bill. However, the two approaches differ on liability and preemption, and Blackburn’s bill would not leave AI training issues to the courts; rather, it would amend the Copyright Act to provide that unauthorized copying or computational processing of copyrighted works for AI training, fine-tuning, development or creation is not fair use. The exact language remains to be seen once the bill is formally introduced and shaped by further negotiation with the White House and members of Congress.
[Note: Given the preliminary nature of this discussion draft, a deeper discussion of the specifics of the bill is beyond the scope of this post.]
Looking Ahead
As a practical matter, the sweeping preemption envisioned by the Framework may prove challenging to enact in its current form. The most compelling evidence is the Senate’s July 1, 2025 vote of 99-1 to strip a ten-year moratorium on state AI regulation from a budget reconciliation bill. The areas showing clearer bipartisan traction are narrower, harm-specific measures, such as child protection, deepfakes and digital replicas, along with general bipartisan support for maintaining American leadership in AI.
To complement White House AI policy, the Federal Trade Commission (FTC) has maintained a light-touch approach to AI, preferring to enforce existing laws and combat AI-powered fraud, deceptive behaviors and false representations that harm consumers. As Commissioner Melissa Holyoak stated in April 2025: “The Commission will promote AI growth and innovation, not hamper it with misguided enforcement actions or excessive regulation.” On the enforcement front, the FTC has continued to bring actions against entities that make false claims about AI-related goods and services (as evidenced by the recent Air AI, Inc. settlement). The agency also appears to be following the Administration’s AI Action Plan, which directed the FTC to review investigations commenced under the prior administration to ensure they “do not advance theories of liability that unduly burden AI innovation.” For example, in December 2025, the FTC reopened and set aside a prior consent order against Rytr, LLC, reached during the Biden administration, that banned the AI-enabled writing assistance service from generating consumer or customer reviews or testimonials. The current FTC explained that the prior order “condemn[ed] a technology or service simply because it potentially could be used in a problematic manner.”
For deployers and businesses, the right way to read the release of the Framework is neither “nothing will happen” nor “a national reset is imminent.” The Framework itself is not law, but it is part of a broader executive branch campaign that began with a December 2025 executive order calling for an AI Litigation Task Force, federal review of state laws, and possible federal reporting and disclosure standards. As the President’s prior executive order stated: “It is the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.” Thus, even if the Framework is not enacted as federal legislation, it is best viewed as a serious signal about the Administration’s preferred roadmap on AI and the direction of federal agency enforcement. It may also have the intended effect of dissuading state legislatures from enacting broad AI laws in the near term, encouraging them to wait instead for federal leadership on the issue – though not all state governments have heeded that signal (compare the Florida “AI Bill of Rights” bill, which has stalled, with California Gov. Gavin Newsom’s executive order directing agencies to heighten standards and safeguards for California’s own state AI procurement processes). At this time, AI developers and deployers should keep building state-law compliance around the risk areas already drawing legislation – children, chatbot safety, deepfakes, transparency, health uses, and other high-risk deployments – while carefully tracking Washington for narrower federal bills that could create a national standard in other discrete areas.
[1] Interestingly, as the White House pushes for a CDA Section 230-style shield for AI developers against claims related to illegal third-party use of AI models, there is a pending Senate bill to repeal CDA Section 230 (S.3546). Senator Blackburn, a co-sponsor of the Senate’s Section 230 sunset bill, also includes a provision to repeal Section 230 in her AI discussion draft (though, in contrast to the Framework, her draft would allow state AI regulation in certain areas and a limited private right of action).