A recent enforcement action involving UCHealth could have been the case that defined how courts treat artificial intelligence in medical coding. Instead, it became yet another settlement—leaving providers with more questions than answers and regulators with more unchecked leverage.
What Happened?
At the center of the case was the use of AI-assisted coding tools that allegedly resulted in systematic upcoding of evaluation and management (E/M) services. The core allegation was deceptively simple: If a patient’s vital signs were taken a certain number of times, the AI-driven system would elevate the level of service billed—sometimes without sufficient clinical justification under Medicare rules. In other words, the algorithm equated frequency of data collection with medical complexity.
That is a dangerous shortcut. But it is also defensible, fixable, and easily explained. I recently presented at the Health Care Compliance Association (HCCA) Regional Conference in Anchorage, Alaska, where an in-house counsel from UCHealth was also presenting. I saw her session on the agenda and was eager to ask why UCHealth settled. Alas, she appeared virtually, so I was unable to inquire. If any of my readers know why UCHealth decided to settle instead of litigate, I would love to hear it.

Going back to AI and UCHealth, under Medicare’s E/M framework, the level of service is not determined by how many times vitals are taken. It is based on medical decision-making (MDM)—including the complexity of the patient’s condition, data reviewed, and risk of complications. An AI model that substitutes volume of inputs for clinical judgment is not just flawed—it is legally problematic.
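To make the alleged flaw concrete, here is a minimal Python sketch contrasting the two approaches. The frequency-based rule is a hypothetical reconstruction of the logic described in the allegations, not UCHealth's actual system, and the MDM-based version is a simplified illustration of the 2021 E/M framework for office visits. The function names and level mappings are assumptions for illustration only.

```python
# Hypothetical reconstruction of the alleged flaw: the E/M level
# is driven by how many times vitals were recorded, not by
# medical decision-making. Illustrative only.
def flawed_em_level(vitals_count: int) -> int:
    """Equates frequency of data collection with medical complexity."""
    if vitals_count >= 3:
        return 5  # highest-level visit, billed on input volume alone
    if vitals_count == 2:
        return 4
    return 3

# Simplified sketch of the MDM-based framework: the level follows
# from problem complexity, data reviewed, and risk of complications.
MDM_TO_LEVEL = {"straightforward": 2, "low": 3, "moderate": 4, "high": 5}

def mdm_em_level(problem: str, data: str, risk: str) -> int:
    """Two of the three MDM elements must support the billed level,
    which works out to the median of the three element levels."""
    ranks = sorted(MDM_TO_LEVEL[x] for x in (problem, data, risk))
    return ranks[1]

# A visit with frequent vitals but low-complexity decision-making:
# the flawed rule bills level 5; the documentation supports level 3.
print(flawed_em_level(vitals_count=4))         # -> 5
print(mdm_em_level("low", "low", "moderate"))  # -> 3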
The Real Issue: AI Mistakes vs. Fraud
Here is where this case becomes critically important—and frustrating.
The government’s theory effectively treated the alleged upcoding as a False Claims Act issue, implying knowledge, intent, or at least reckless disregard. But what if the root cause was not fraud at all? What if it was bad AI logic? Healthcare providers have long been held to standards of documentation and coding accuracy. But AI introduces a new variable: automated decision-making based on opaque algorithms. When those algorithms are wrong, the question becomes:
Is that fraud—or is that a technology error? We do not yet have a clear answer from the courts.
Why This Case Should Have Been Litigated
By settling, UCHealth avoided risk—but the broader provider community lost something far more valuable: legal clarity. If litigated, this case could have answered several critical questions:
1. What is the scienter standard when AI is involved?
Would a court require proof that the provider knew the AI logic was flawed? Or would reliance on a vendor or internal tool be enough to negate intent?
2. Can reliance on AI be reasonable?
Providers rely on EHR systems, billing vendors, and coding software every day. Would a court view AI tools as an extension of that reliance—or hold providers to a higher standard because of the technology’s risks?
3. Where does responsibility lie?
Is liability on the hospital? The vendor? The compliance team? The physicians? Litigation could have clarified how responsibility is allocated in an AI-driven environment.
4. How do courts distinguish “system error” from “systemic fraud”?
This is the most important question. The government often treats patterns as proof of intent, but AI, by design, creates patterns, even when it is wrong.
The Strategic Reality: Settlement vs. Litigation
From a defense perspective, this is where the decision becomes controversial. While settlement may feel like the safer route, in cases like this, litigation may actually be the more cost-effective and strategically sound option. Why?
- Early dismissal opportunities: A well-pleaded motion to dismiss could have challenged the government's ability to establish scienter based on AI-driven conduct.
- Narrowing the case: Even if not dismissed, litigation could have significantly limited the scope of claims.
- Setting precedent: A favorable ruling would have had industry-wide impact, deterring future overreach.
- Avoiding inflated settlement pressure: The government often leverages uncertainty, especially with new technologies, to drive higher settlements.
Instead, by settling, the case reinforces a troubling dynamic:
Providers are expected to adopt innovative technologies, but they are held strictly liable when those technologies fail. In this case, in my opinion, litigation heavily outweighed settling for $23 million. Not only might litigation have been cheaper, but it could also have set much-needed precedent.
The Bigger Picture for Providers
This case sends a clear, if unintended, message: If your AI makes a mistake, you may be treated as if you made the error intentionally.
That is not a sustainable standard for an industry increasingly reliant on automation.
Providers should take away three key lessons:
- Audit AI outputs, not just inputs (a minimal sketch follows below).
- Do not assume that more data equals better coding.
- Understand the logic behind your tools.
If you cannot explain how your AI reaches conclusions, you cannot defend it.
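To ground the first lesson, here is a minimal Python sketch of an output-side audit. The field names (billed_level, mdm_level, claim_id) are assumptions for illustration, not a reference to any particular EHR or billing system; the check simply flags claims where the AI-suggested level exceeds what the documented medical decision-making supports.

```python
# Minimal output-side audit sketch: compare each AI-suggested E/M
# level against the level supported by the documented MDM.
# Field names are assumed for illustration; adapt to your own data.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    billed_level: int   # level the AI tool suggested and billed
    mdm_level: int      # level the documented MDM actually supports

def audit_outputs(claims: list[Claim]) -> list[Claim]:
    """Return claims where the billed level exceeds documented support."""
    return [c for c in claims if c.billed_level > c.mdm_level]

claims = [
    Claim("A-001", billed_level=5, mdm_level=3),  # flagged: potential upcode
    Claim("A-002", billed_level=4, mdm_level=4),  # supported
]
for c in audit_outputs(claims):
    print(f"{c.claim_id}: billed {c.billed_level}, MDM supports {c.mdm_level}")
```

The design point is the direction of the check: rather than validating only the data fed into the tool, it audits what the tool concluded, which is where a frequency-for-complexity error of the kind alleged here would actually surface.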
Be prepared to litigate the right case. At some point, a provider will need to take a stand and force the courts to address these issues.
Final Thought
The UCHealth case could have been the “AI meets False Claims Act” decision that the healthcare industry desperately needs. Instead, it became another data point in a growing trend: Settle first, ask legal questions later.
From a defense perspective, that was a missed opportunity—not just for one hospital, but for every provider navigating the uncertain intersection of AI and compliance.