Key Takeaways:
- In December 2025, President Donald Trump signed an executive order that, among other directives, instructed the attorney general to establish an Artificial Intelligence (AI) Litigation Task Force charged with challenging state AI laws deemed inconsistent with the administration’s goal of advancing a “minimally burdensome national policy framework for AI.”
- On Jan. 9, 2026, Attorney General Pam Bondi announced the launch of the Department of Justice’s (DOJ) AI Litigation Task Force, which – according to an internal DOJ memorandum – is tasked with challenging state AI laws inconsistent with federal policy on grounds including undue interference with interstate commerce, conflict with existing federal statutes or regulations, and any other theories the attorney general deems appropriate.
- Companies subject to state AI laws may soon face conflicting compliance obligations as federal challenges progress. Organizations should review their compliance programs and vendor agreements to ensure they can adapt to shifting requirements and remain fully informed as developments unfold.
I. Background
Since returning to office in January 2025, the Trump administration has taken steps to shift federal AI policy away from the regulatory approach adopted by the Biden administration. This effort began with the immediate revocation of President Joe Biden’s primary AI directive, an executive order aimed at establishing guardrails, safety testing requirements and civil rights protections for AI development and use in federal agencies. In January 2025, Trump issued a new executive order, Removing Barriers to American Leadership in Artificial Intelligence, directing federal agencies to prioritize AI innovation, competitiveness, and the reduction of regulatory burdens associated with AI development and deployment.
Throughout 2025, the administration continued to advance a more innovation-focused federal AI strategy by directing agencies to identify and roll back what it viewed as unnecessary regulatory barriers. In July 2025, the White House released America’s AI Action Plan, a comprehensive road map issued pursuant to the January 2025 executive order. The action plan – organized around accelerating innovation and building a national AI infrastructure – directed federal agencies to streamline AI-related regulations, review existing guidance and prioritize actions that promote domestic AI competitiveness. These directives signaled the administration’s intent to centralize federal oversight of AI and reduce variability in state requirements.
Accordingly, on Dec. 11, 2025, Trump issued an executive order, Ensuring a National Policy Framework for Artificial Intelligence (the EO), articulating the administration’s position that AI regulation should be governed at the federal level rather than by a mix of state requirements.[1] Among other directives, the EO instructed the U.S. attorney general to establish the AI Litigation Task Force (the Task Force) to challenge state-level AI laws that are “inconsistent” with U.S. AI policy, which seeks to “sustain and enhance” the United States’ global AI leadership through a uniform, minimally burdensome regulatory framework.[2]
II. The DOJ’s AI Litigation Task Force
On Jan. 9, 2026, DOJ employees received an internal memo from Bondi announcing the Task Force’s creation.[3] The memo provides that the Task Force will challenge state AI laws so that AI companies can “be free to innovate without cumbersome regulation.”[4] The memo also provides that the Task Force will consult David Sacks, the White House’s AI and crypto czar, regarding which state laws the Task Force may challenge. The Task Force will be led by Bondi or her designee and will include representatives from the offices of the deputy and the associate attorneys general, the DOJ’s Civil Division, and the Office of the Solicitor General.[5]
III. Anticipated Implications
The Task Force is expected to challenge state AI statutes on various legal grounds, including claims that such laws “unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or are otherwise unlawful in the Attorney General’s judgment.”[6] The administration’s position is that AI governance should be driven by a uniform federal framework rather than a state-by-state system, and the EO directs White House advisors to develop legislative recommendations to Congress to further that goal. As a result, federal courts in states that have recently enacted AI laws – including Colorado, California and Texas – may become early testing grounds for the Task Force’s litigation strategies and preemption theories.
That said, the EO itself does not displace any existing state AI requirements. State laws will therefore remain in effect unless and until they are successfully challenged in court or Congress enacts new federal legislation that preempts them. This necessarily creates uncertainty: Companies may face conflicting obligations under state regimes while the federal government simultaneously advances a national policy that disfavors those same requirements.
The administration is also expected to use federal funding tools to encourage alignment with federal policy, including potential restrictions on discretionary funding for states that maintain AI regulatory frameworks the administration considers burdensome or incompatible with national objectives. How states respond – whether through amendment, litigation or otherwise – will shape how quickly and how far the Task Force’s efforts proceed.
IV. Compliance and Risk Management Considerations
Organizations subject to state AI laws should prepare for a period of heightened regulatory uncertainty as the federal government pursues a national AI policy and the Task Force initiates litigation concerning state laws. Although the EO makes clear the administration’s intent to centralize AI governance, state statutes remain fully enforceable unless the courts invalidate them or Congress enacts preemptive federal legislation. Companies operating across multiple jurisdictions should therefore adopt a proactive strategy to manage incompatible state and federal requirements.
As a first step, organizations should review their AI deployments to identify those subject to overlapping state requirements, including in California, Colorado or Texas. Existing vendor agreements requiring compliance with state-specific AI transparency rules may also soon conflict with federal reporting standards and should be reviewed by counsel.
Additionally, compliance programs should be evaluated to ensure they can quickly accommodate rapid legal developments. This includes assessing internal governance structures, impact-assessment processes and oversight protocols to ensure they can be adapted to evolving federal expectations. Companies may also want to prepare for the possibility of increased agency inquiries or legal challenges as state authorities test the boundaries of their regulatory powers in light of federal action.
The BakerHostetler White Collar, Investigations and Securities Enforcement and Litigation team and Artificial Intelligence group are composed of dozens of experienced individuals, including attorneys who have served in the U.S. Department of Justice and at the U.S. Securities and Exchange Commission. Our teams have extensive experience in defending regulatory investigations and actions and in providing regulatory compliance counseling. We also advise clients across the full AI lifecycle, including designing governance programs, defending enforcement actions, and negotiating vendor AI provisions. Please feel free to contact any of our experienced professionals if you have questions about this alert.
[1] Exec. Order No. 14,365, 90 Fed. Reg. 58,499 (Dec. 16, 2025).
[2] Id. §§ 2-3.
[3] Memorandum from the Attorney General, Artificial Intelligence Litigation Task Force, Dep’t of Just. (Jan. 9, 2026), available at: https://www.justice.gov/ag/media/1422986/dl?inline.
[4] Id.
[5] Id.
[6] EO § 3.