President Trump’s sweeping AI Executive Order just stitched another layer onto the already tangled patchwork of state AI regulations — and employers may feel left out in the cold. But we have you covered. This week, we piece together the EO itself, the legal challenges that may cause it to unravel, and tips for employers to seamlessly navigate evolving AI obligations.
Another Piece in the AI Patchwork
The December 11, 2025, EO, Ensuring a National Policy Framework for Artificial Intelligence, forecasts an aggressive federal effort to challenge — and ultimately preempt — state AI laws, including those that regulate employer AI tools like automated hiring systems and employee monitoring software. The EO directs the federal government to file lawsuits against and cut funding to states that enact and enforce “excessive” AI laws, which “threaten to stymie innovation” and hinder American “global AI dominance.”
The EO’s key directives include:
- State Law Evaluations – The Commerce Department will review state AI regulations and refer to the DOJ those laws it considers “onerous” or inconsistent with federal objectives.
- Creating a DOJ “AI Litigation Task Force” – The attorney general must establish a task force within 30 days to challenge onerous state AI laws as unconstitutional, preempted, or otherwise unlawful.
- Funding Conditions – Within 90 days, the secretary of Commerce must issue a Policy Notice conditioning certain federal funds on state adherence to the federal AI framework. The EO specifically threatens funding under the Broadband Equity, Access, and Deployment (BEAD) Program (due to AI’s reliance on high-speed broadband networks) but also directs federal agencies to assess other discretionary grant programs and determine which funds may be conditioned on compliance with the EO.
- Legislative Recommendation – Federal officials must draft a legislative recommendation pushing Congress to enact a federal AI framework to preempt inconsistent state statutes, with exceptions for AI laws related to child safety, data center infrastructure, state government use of AI, and others “to be determined.”
What the EO does not do:
- Invalidate state AI laws – Although the EO triggers processes that could lead to legal challenges or future federal legislation, state AI laws remain in effect — for now.
- Target employers – There is no indication that the federal government has employers in its crosshairs. Instead, the EO is aimed at states and state officials who enforce AI laws.
- Protect employment-related state AI laws – Because employer AI tools were not included among the exceptions to future preemption, state regulation of workplace AI will likely be a litigation battleground.
- Immunize employers – Even if state AI laws are eventually blocked or preempted, employer obligations under federal statutes such as Title VII (race, sex, and other protected categories), the ADA (disability), and the ADEA (age) exist independently of state AI rules and continue to govern the use of AI in employment decisions. Employers could still be liable under federal law where, for example, a hiring software’s algorithm screens out individuals with disabilities or applicants of a certain race.
Future Challenges Could Unravel the Threads
A draft of the EO leaked to several news outlets weeks before it was signed, giving opponents a head start in preparing to push back against the EO’s perceived overreach and intrusion on state authority. State leaders are expected to raise challenges on multiple legal fronts, including the principles of federalism and state authority under the 10th Amendment. Because states — not the federal government — traditionally regulate employment and consumer protection, states with robust AI regulations, like Colorado, New York, and California, will likely be among the first to test the EO in court. We also anticipate that states will argue conditioning federal funds on state acquiescence to the EO amounts to coercion under the Spending Clause and that federal agencies lack clear statutory authority to carry out the EO’s directives.
The EO has also drawn pushback on Capitol Hill — from both sides of the political aisle. Critics argue that Congress, not an executive directive, is the only proper forum for crafting a national AI framework. Members of the Senate have openly opposed the order, particularly given that Congress overwhelmingly rejected similar preemption language in at least three bills just this year.
Tailor Your Policies for Change
For employers, these federal-state battles won’t erase current AI obligations overnight and may, in the meantime, increase regulatory uncertainty. It’s all the more reason employers should prepare to pivot as obligations evolve.
Here are six practical steps you can take now to stay proactive and reduce legal risk while AI regulations are in flux:
- Stay Compliant with State Law
- Identify AI-related laws in each state in which you operate. If you operate in multiple states with varying regulations, consider whether you should adopt one global policy that follows the most restrictive law.
- Ensure your HR and IT teams are aware of the relevant regulations and how they impact hiring, promotions, employee monitoring, and algorithm requirements.
- This interactive map is one of many tools you can use to track AI legislation.
- Update Internal Policies
- Ensure your policies and practices align with, and can adapt to follow, state and federal AI regulations. Consider updates to your policies on data retention, AI-driven hiring and promotion systems, and employee monitoring software. Be sure to include these policy updates in your employee handbook and employment agreements.
- Draft with an eye toward flexibility if state laws are challenged or preempted or if new federal standards are issued.
- Train staff on updated policies, legal requirements, bias mitigation, and compliant use of AI tools.
- Improve AI Governance and Documentation
- For each of your AI systems, document its intended use, including how data will be collected and protected, and your plan for human oversight.
- Periodically test your AI models using real-world scenarios to evaluate whether the output is fair and to prevent unintended discriminatory outcomes. Where possible, conduct your tests alongside counsel to preserve privilege.
- Enable end-users to report any biases they experience so you can adjust as needed.
- Maintain records of your mitigation and corrective efforts.
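To make the testing step above concrete: one common, long-standing benchmark for evaluating selection outcomes is the EEOC’s “four-fifths rule,” which flags a group whose selection rate falls below 80% of the highest group’s rate. The Python sketch below is a simplified illustration using hypothetical applicant numbers — the group labels, counts, and the bare ratio check are assumptions for demonstration, not a compliance tool or legal standard, and any real audit should be run with counsel.

```python
# Hedged sketch: a simple adverse-impact check based on the EEOC's
# "four-fifths rule." All data below is hypothetical; a real audit
# involves far more context (job-relatedness, sample size, etc.).

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group that the AI tool advanced."""
    return selected / applicants

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """A group's selection rate relative to the highest group's rate."""
    return group_rate / reference_rate

# Hypothetical audit data: applicants and selections per group.
groups = {
    "group_a": {"applicants": 200, "selected": 120},  # rate 0.60
    "group_b": {"applicants": 150, "selected": 60},   # rate 0.40
}

rates = {name: selection_rate(d["selected"], d["applicants"])
         for name, d in groups.items()}
reference = max(rates.values())  # highest group's selection rate

for name, rate in rates.items():
    ratio = adverse_impact_ratio(rate, reference)
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths threshold
    print(f"{name}: rate={rate:.2f} ratio={ratio:.2f} -> {flag}")
```

Here group_b’s ratio (0.40 / 0.60 ≈ 0.67) falls below 0.8 and would be flagged for review. A failed check does not itself establish discrimination, but documenting tests like this — and your corrective follow-up — supports the record-keeping step that follows.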
- Review and Update Vendor Agreements
- Make sure your third-party AI providers meet compliance requirements for each state in which you operate.
- Insist that your vendors disclose any bias audits performed on their tools, including the methodology and key results.
- Ask for clauses in your agreements that allow you to pivot if AI rules change.
- Monitor Federal and State Updates
- Monitor AI-related federal guidance, executive order developments, and congressional action, and check for corresponding state law changes.
- Track the EO deliverable dates for a DOJ Task Force (30 days) and federal funding Policy Notice (90 days).
- Consult Your Counsel
- In this and all other circumstances, your lawyer is your friend.
- Consult with employment or AI-compliance specialists to review your policies, contracts, and audit processes before making major AI-related changes.
As the federal and state governments tug at opposite ends of the AI regulatory fabric, employers don’t have the luxury of waiting to see whose thread holds. By mapping your current obligations, tightening internal governance, and implementing flexible policies, you can manage your risk as the AI rules shift — regardless of what unravels in court or in Congress.
Please contact our team if you have questions about AI governance, bias audits, or updating your policies in response to AI regulation. And subscribe to our blog as we monitor developments arising from the EO.
