Quick Hit: On April 24, 2026, the Department of Justice intervened in a lawsuit filed by xAI, a company owned by Elon Musk, challenging the Colorado Artificial Intelligence Act (the “AI Act”). If successful, the effort could prevent or delay the implementation of the AI Act, which is set to take effect on June 30, 2026.
Key Takeaway: The AI Act has been a source of concern for employers (and other entities utilizing AI) since it was enacted in 2024. It is the most comprehensive AI law in the country and, if it goes into effect without changes, will impose a host of onerous requirements on employers, including implementation of risk management programs, impact assessments, annual reviews, and disclosures regarding algorithmic discrimination.
Although there are efforts to amend the AI Act and delay its effective date, the AI Act currently is scheduled to take effect on June 30, 2026. As such, employers need to prepare for the AI Act as though it will be implemented on that date, while watching developments – such as the xAI litigation – in case they relieve employers of the need to comply with the AI Act’s onerous provisions.
More Detail: While there are efforts to amend and extend the effective date of the AI Act, the xAI lawsuit (filed on April 9, 2026) reflects a growing concern that such efforts may not succeed before the law goes into effect. In its lawsuit, xAI seeks, among other things, a preliminary injunction, contending the law violates the U.S. Constitution.
The U.S. Government’s intervention appears to be the first time it has sought to invalidate a state AI law since President Trump issued Executive Order 14365 (the “Order”), which sets forth a process by which such laws will be discouraged, challenged, and potentially preempted. The Department of Justice’s press release echoes the policy concerns articulated in the Order, stating that “America’s success in the AI race will depend on removing barriers to innovation and adoption across sectors.”
Notably, in challenging the law, the Government invokes the Equal Protection Clause of the Fourteenth Amendment, characterizing the law as “requir[ing] AI companies to infect their products with woke DEI ideology” and declaring “[t]he Justice Department will not stand on the sidelines while states such as Colorado coerce our nation’s technological innovators into producing harmful products that advance a radical, far left worldview at odds with the Constitution.”
Specifically, the Government contends the law “violates the Equal Protection Clause because it compels AI developers and deployers to discriminate based on race, sex, religion, or other protected characteristics. By requiring developers and deployers to prevent the ‘risk’ of disparate outcomes based on demographic characteristics, SB24-205 effectively requires developers and deployers to expressly use demographic characteristics, including race, sex, and religion when building and using algorithmic models.”
The Government also argues the AI Act violates the Equal Protection Clause because it “expressly authorizes intentional differential treatment” insofar as it: (1) “authorizes developers and deployers to engage in what would otherwise be illegal ‘algorithmic discrimination’ if the purpose is ‘to increase diversity’ (whatever that means),” and (2) “excepts from the definition of ‘algorithmic discrimination’ the offer, license, or use of AI to ‘redress historical discrimination.’”
We will continue to monitor developments in this case and in other matters related to the AI Act.