On December 11, President Trump signed an Executive Order on “Ensuring a National Policy Framework for Artificial Intelligence” (“AI Preemption EO”), the culmination of months of efforts by Republican lawmakers to assert federal primacy over AI regulation.  The AI Preemption EO, which follows the release of a draft version in November, states that “[t]o win” the race “for supremacy” in AI, U.S. AI companies must be “free to innovate without cumbersome regulation” and that “excessive State regulation thwarts this imperative,” including state laws that “requir[e] entities to embed ideological bias within models” and “impermissibly regulate beyond [s]tate borders.”  To address these concerns, the AI Preemption EO states that the Trump Administration “must act with the Congress to ensure that there is a minimally burdensome national standard,” which must “ensure that children are protected, censorship is prevented, copyrights are respected, and communities are safeguarded.”  However, the AI Preemption EO states that, “[u]ntil such a national standard exists,” the Administration has an “imperative” to “check the most onerous and excessive [state AI] laws.”  On December 8, prior to issuing the AI Preemption EO, President Trump stated that there “must be only One Rulebook if we are going to continue to lead on AI,” and that the involvement of states in AI regulation will “destroy[]” U.S. AI innovation “in its infancy.” 

To implement its policy of “sustain[ing] and enhanc[ing] the United States’ global AI dominance through a minimally burdensome national policy framework for AI,” the AI Preemption EO directs White House officials and federal agencies to take various steps either to preempt state AI laws directly where possible, or otherwise to challenge them as preempted by existing federal laws or regulations:

AI Litigation Task Force.  The AI Preemption EO directs the Attorney General to establish an “AI Litigation Task Force” with the “sole responsibility” of challenging state AI laws that unconstitutionally regulate interstate commerce, are preempted by federal regulations, or are otherwise unlawful “in the Attorney General’s judgment,” including laws identified as “onerous” state AI laws in evaluations published by the Commerce Secretary. 

On December 8, the White House Special Advisor for AI and Crypto, David Sacks, indicated that a potentially wide range of state AI laws may be subject to challenge under the AI Litigation Task Force’s mandate, stating that, when “an AI model is developed in state A, trained in state B, inferenced in state C, and delivered over the internet through national telecommunications infrastructure,” it is “clearly interstate commerce . . . reserve[d] for the federal government to regulate.”  On the other hand, according to Sacks, AI preemption would not apply to “generally applicable” state laws—such as those prohibiting or penalizing child sexual abuse material (CSAM)—or to local decisions regarding the construction of AI data centers. 

The decision to issue an EO preempting state laws in piecemeal fashion, and to pursue litigation and regulation to challenge potentially conflicting state laws, suggests that Congress is unlikely to act soon to preempt state AI regulations and replace them with uniform national standards.  While litigation and rulemaking under the AI Preemption EO may ultimately result in some state AI laws being struck down, or chill states from adopting new ones, the AI Preemption EO’s reliance on disparate federal agencies and authorities—most of which are not AI-specific—may increase uncertainty as to what rules govern the development, deployment, and use of AI in the short term.  In the meantime, states will generally be free to continue enacting and enforcing new AI laws.

Evaluation of State AI Laws.  The AI Preemption EO directs the Commerce Secretary, in consultation with White House officials, to publish an evaluation of state AI laws that identifies “onerous laws” that conflict with the AI Preemption EO’s policy and state AI laws that should be referred to the AI Litigation Task Force.  Consistent with President Trump’s July 23 Executive Order on “Preventing Woke AI in the Federal Government,” the AI Preemption EO requires this evaluation to identify, “at minimum,” state AI laws that “require AI models to alter truthful outputs” or that “may compel” AI developers or deployers to disclose or report information in violation of First Amendment or other constitutional rights.  Additionally, the AI Preemption EO provides that the evaluation may identify state AI laws that “promote AI innovation consistent with the policy” of the AI Preemption EO.

Although it is unclear what other state AI laws may be considered “onerous” under this provision, the White House Office of Science and Technology Policy (OSTP)’s September 26 Request for Information on federal AI regulatory reform outlined five categories of “barriers” that hinder AI development, deployment, and adoption and that may inform the AI Preemption EO’s implementation: (1) regulations that are “based on human-centered assumptions”; (2) regulations that “assume human actors”; (3) regulations that lack sufficient “regulatory clarity” regarding their application to AI; (4) regulations that “directly target” and “are a major hindrance” to AI; and (5) regulations that are inconsistently enforced due to “organizational factors.”

Funding Restrictions for States with AI Laws.  Similar to the approach of the moratorium on state and local AI laws that was overwhelmingly rejected by the Senate in July, the AI Preemption EO directs the Commerce Secretary to issue a Policy Notice specifying conditions of state eligibility for Broadband Equity, Access, and Deployment (BEAD) funds, which must specify that states are ineligible for such funds if they have “onerous AI laws,” as identified by the Commerce Secretary above, and describe how a “fragmented State [AI] regulatory landscape” may undermine the purpose and mission of BEAD funding, including the “growth of AI applications reliant on high-speed networks” and BEAD’s “mission of delivering universal, high-speed connectivity.”  

Additionally, the AI Preemption EO directs federal agencies to take “immediate steps” to determine whether to condition their discretionary grant programs on states “not enacting an AI law that conflicts with the policy” of the AI Preemption EO, including “onerous state AI laws” identified by the Commerce Secretary or state AI laws challenged by the AI Litigation Task Force.  Agencies must also consider whether, “for those States that have enacted such [AI] laws,” to condition discretionary grants on such states “entering into a binding agreement” with the agency “not to enforce any such [AI] laws” for the performance period of the grant.

FCC Federal AI Reporting and Disclosure Standard.  Consistent with its policy of establishing a “uniform national policy framework for AI,” the AI Preemption EO directs the Chair of the Federal Communications Commission (FCC) to “initiate a proceeding” on adopting a “Federal reporting and disclosure standard for AI models” that “preempts conflicting State laws.”  Although not stated explicitly, this provision may be intended to preempt state laws like California’s Transparency in Frontier Artificial Intelligence Act, which establishes AI developer reporting and disclosure obligations that the draft version of the AI Preemption EO described as “complex and burdensome.”  This provision also echoes recommendations in President Trump’s July 23 AI Action Plan, which called on the FCC to evaluate whether state AI regulations may be preempted under the Communications Act. 

FTC Section 5 Preemption Policy Statement.  The AI Preemption EO directs the Chair of the Federal Trade Commission (FTC) to issue a policy statement on the “application of the FTC Act’s prohibition on unfair and deceptive acts or practices” under Section 5 of the FTC Act “to AI models.”  Similar to language in the President’s Woke AI Executive Order, the AI Preemption EO requires the FTC policy statement on AI models to specifically explain where “State laws that require alterations to the truthful outputs of AI models” may be preempted by Section 5’s prohibition on deceptive acts or practices.  This provision appears intended to challenge the Colorado AI Act, a 2024 law that imposes various governance requirements for developers and deployers of “high-risk AI systems” in order to minimize risks of algorithmic discrimination.  The AI Preemption EO’s statement of purpose argues that the Colorado AI Act could “force AI models” to “produce false results in order to avoid a ‘differential treatment or impact’” on the basis of protected characteristics. 

Legislative Recommendations for Federal AI Framework.  The AI Preemption EO directs the White House Special Advisor for AI and Crypto and the Office of Legislative Affairs to jointly prepare a “legislative recommendation establishing a uniform Federal policy framework for AI that preempts state AI laws” that conflict with the AI Preemption EO’s policy of “global AI dominance through a minimally burdensome, uniform national policy framework for AI.”  The AI Preemption EO further provides that this legislative recommendation “shall not” preempt “otherwise lawful” state AI laws that relate to (1) child safety protections, (2) AI compute and data center infrastructure “other than generally applicable permitting reforms,” (3) state government procurement and use of AI, or (4) “other topics as shall be determined.” 

While the substance of the AI Preemption EO’s contemplated “legislative recommendation” is not specified, any future proposed federal AI framework legislation could be informed by a growing number of AI legislative proposals that have emerged in Congress in recent years.  For example, the Strengthening AI Normalization and Diffusion by Oversight and eXperimentation (“SANDBOX”) Act, introduced by Senate Commerce Committee Chair Ted Cruz (R) in September, would allow sandbox program participants to request waivers or modifications to federal regulations to enable the deployment of AI tools.  Additionally, the Safeguarding Adolescents From Exploitative (SAFE) Bots Act (H.R. 6489), introduced by Representatives Erin Houchin (R-IN) and Jake Auchincloss (D-MA) on December 5, would preempt state AI laws that “cover[] a matter described” in that bill’s AI chatbot safety provisions for minors. 

The AI Preemption EO is the most decisive step taken by the White House to date to halt an expanding array of state AI laws.  In recent years, state lawmakers in both parties have enacted dozens of new AI laws, from frontier model public safety regulations and AI consumer protection laws to chatbot safeguards and bans on harmful AI-generated deepfakes and nonconsensual impersonations.  The AI Preemption EO also could face legal challenges from state officials.  On December 8, California Attorney General Rob Bonta (D) stated that his office would “take steps to examine the legality or potential illegality” of the AI Preemption EO, and Florida Governor Ron DeSantis (R), who recently proposed an “AI Bill of Rights” to protect Florida consumers, stated that an “executive order doesn’t/can’t preempt state legislative action.”

The issuance of the AI Preemption EO follows a series of legislative efforts to preempt state AI laws that have stalled in Congress.  In July, the Senate rejected, 99-1, a proposed budget reconciliation bill amendment that would have imposed a sweeping moratorium on the enforcement of state and local AI regulations.  And earlier this month, Republican congressional leaders abandoned an attempt to include an AI preemption provision in the National Defense Authorization Act (NDAA), despite the backing of the White House. 

Matthew Shapanka

Matthew Shapanka draws on more than 15 years of experience – including on Capitol Hill, at Covington, and in state government – to advise and counsel clients across a range of industries on significant legislative, regulatory, and enforcement matters. He develops and executes complex, multifaceted public policy initiatives for clients seeking actions by Congress, state legislatures, and federal and state government agencies, many with significant legal and political opportunities and risks.

Matt rejoined Covington after serving as Chief Counsel for the U.S. Senate Committee on Rules and Administration, where he advised Chairwoman Amy Klobuchar (D-MN) on all legal, policy, and oversight matters within the Committee’s jurisdiction, including federal election law and campaign finance, and oversight of the Federal Election Commission, legislative branch agencies, security and maintenance of the U.S. Capitol Complex, and Senate rules and regulations.

Most significantly, Matt led the Rules Committee staff work on the Electoral Count Reform and Presidential Transition Improvement Act – landmark bipartisan legislation to update the antiquated process of certifying and counting electoral votes in presidential elections that President Biden signed into law in 2022.

As Chief Counsel, Matt was a lead attorney on the joint bipartisan investigation (with the Homeland Security and Governmental Affairs Committee) into the security planning and response to the January 6, 2021 attack on the Capitol. In that role, he oversaw the collection review of documents, led interviews and depositions of key government officials, advised the Chairwoman and Committee members on two high-profile joint hearings, and drafted substantial portions of the Committees’ staff report on the attack. He also led oversight of the Capitol Police, Architect of the Capitol, Senate Sergeant at Arms, and executive branch agencies involved in implementing the Committees’ recommendations, including additional legislation and hearings.

Both in Congress and at the firm, Matt has prepared many corporate and nonprofit executives, academics, government officials, and presidential nominees for testimony at legislative, oversight, or nomination hearings before congressional committees, as well as witnesses appearing at congressional depositions and transcribed interviews. He is also an experienced legislative drafter who has composed dozens of bills introduced in Congress and state legislatures, including several that have been enacted into law across multiple policy areas.

In addition to his policy work, Matt advises and represents clients on the full range of political law compliance and enforcement matters involving federal election, campaign finance, lobbying, and government ethics laws, the Securities and Exchange Commission’s “Pay-to-Play” rule, as well as the election and political laws of states and municipalities across the country.

Before law school, Matt worked as a research analyst in the Massachusetts Recovery & Reinvestment Office, where he worked on all aspects of state-level policy, communications, and compliance for federal stimulus funding awarded to Massachusetts under the American Recovery & Reinvestment Act of 2009. He has also worked for federal, state, and local political candidates in Massachusetts and New Hampshire.

August Gweon

August Gweon counsels national and multinational companies on data privacy, cybersecurity, antitrust, and technology policy issues, including issues related to artificial intelligence and other emerging technologies. August leverages his experiences in AI and technology policy to help clients understand complex technology developments, risks, and policy trends.

August regularly provides advice to clients for complying with federal, state, and global privacy and competition frameworks and AI regulations. He also assists clients in investigating compliance issues, preparing for federal and state privacy regulations like the California Privacy Rights Act, responding to government inquiries and investigations, and engaging in public policy discussions and rulemaking processes.