On December 11, the White House issued an Executive Order (EO) titled Ensuring a National Policy Framework for Artificial Intelligence (AI). The EO states a federal policy to sustain and enhance U.S. AI leadership through a minimally burdensome national policy framework and to limit conflicting state requirements. It directs rapid actions by multiple federal entities to evaluate, challenge, or preempt state AI laws viewed as inconsistent with that policy and to use federal funding and standard-setting to influence state approaches.

Key directives and timelines
The EO assigns near-term tasks to several federal entities aimed at identifying and contesting certain state AI laws and developing a national policy approach.

  • U.S. Department of Justice (DOJ): Within 30 days, establish an AI Litigation Task Force to challenge state AI laws that DOJ deems inconsistent with the EO’s policy (including potential claims involving interstate commerce or federal preemption).
  • U.S. Department of Commerce (Commerce): Within 90 days, publish an evaluation of existing state AI laws identifying “onerous” provisions for potential referral to DOJ, including laws that would require AI models to alter truthful outputs or compel disclosures or reporting that raise constitutional concerns, including under the First Amendment.

Funding and grant conditions
The EO uses federal funding levers to affect state AI regulatory choices.

  • Broadband Equity, Access, and Deployment (BEAD) Program: Within 90 days, Commerce must issue a policy notice specifying that states with “onerous” AI laws identified in the Commerce evaluation are ineligible for remaining non-deployment funds to the maximum extent allowed by law, and explain how fragmented state AI rules could undermine BEAD-funded deployments and AI-reliant applications.
  • Federal discretionary grants: Federal executive departments and agencies must assess whether grants can be conditioned on states not enacting conflicting AI laws or agreeing not to enforce such laws during the grant performance period.

Federal standard‑setting and preemption
The EO directs federal regulators to consider national standards and the preemption of conflicting state requirements.

  • Federal Communications Commission (FCC): Within 90 days of Commerce’s identification of state laws, initiate a proceeding to determine whether to adopt a federal AI reporting and disclosure standard that would preempt conflicting state laws.
  • Federal Trade Commission (FTC): Within 90 days, issue a policy statement explaining how the federal prohibition on unfair or deceptive acts or practices applies to AI models, including when state laws that require changes to truthful AI outputs would be preempted as deceptive.

Legislative recommendation
The EO calls for a proposal to Congress for a uniform federal AI framework that would preempt conflicting state AI laws, while not proposing to preempt otherwise lawful state laws in certain areas.

  • The Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology must jointly prepare a legislative recommendation for a uniform federal AI framework.
  • The recommendation will not propose preemption of otherwise lawful state laws concerning child safety, AI compute and data center infrastructure (other than generally applicable permitting reforms), and state procurement and use of AI, as well as other topics to be determined.

Implementation notes
The EO specifies it will be implemented consistent with applicable law and available appropriations, does not create enforceable rights, and assigns publication costs to Commerce.

Our Take
The EO signals movement toward a single national baseline for AI, with potential preemption of conflicting state mandates and use of funding levers to discourage fragmented requirements.

For financial institutions, the EO may pave the way for AI adoption across the financial sector, including among banks, fintechs, and others exploring ways to deploy AI tools and solutions for fraud detection, eligibility decisions, customer engagement, and other business functions. For example, the EO provisions about altering “truthful output” may be directed against state laws that would require companies to modify their AI to prevent algorithmic discrimination.

For financial institutions operating across multiple states, a uniform standard could reduce the complexity of complying with a patchwork of state AI laws, but near‑term uncertainty is likely as Commerce completes its evaluation, the FCC and FTC act, and DOJ initiates litigation. We anticipate that many states will vigorously oppose these efforts, in light of a letter by the National Association of Attorneys General on behalf of a bipartisan coalition of 36 state attorneys general rejecting a federal moratorium that would prohibit states from enacting or enforcing AI laws. Comparable legislative efforts to impose a ban on state AI laws have recently failed in Congress.

Kim Phan

Kim is a privacy and data security lawyer who counsels companies in federal and state privacy and data security statutes and regulations. Her work encompasses strategic planning and guidance for companies to incorporate privacy and data security considerations throughout product development, marketing, and implementation.

Lori Sommerfield

With over two decades of consumer financial services experience in federal government, in-house, and private practice settings, and a specialty in fair lending regulatory compliance, Lori counsels clients in supervisory issues, examinations, investigations, and enforcement actions.

Chris Willis

Chris is the co-leader of the Consumer Financial Services Regulatory practice at the firm. He advises financial services institutions facing state and federal government investigations and examinations, counseling them on compliance issues including UDAP/UDAAP, credit reporting, debt collection, and fair lending, and defending them in individual and class action lawsuits brought by consumers and enforcement actions brought by government agencies.

Chris also leverages insights from his litigation and enforcement experience to help clients design new products and processes, including machine learning marketing, fraud prevention and underwriting models, product structure, advertising, online application flows, underwriting, and collection and loss mitigation strategies.

Chris brings a highly practical focus to his legal advice, informed by balancing a deep understanding of the business of consumer finance and the practical priorities of federal and state regulatory agencies.

Chris speaks frequently at conferences across the country on consumer financial services law and has been featured in numerous articles in publications such as the Wall Street Journal, the New York Times, the Washington Post, American Banker, National Law Journal, BNA Bloomberg, and Bank Safety and Soundness Advisor.