Pretrial Release: The Illusion of Algorithmic Neutrality
By Rick Horowitz, Fresno Criminal Lawyer
“Pretrial Release: The Illusion of Algorithmic Neutrality” is the first installment in a series I’m calling Detained by Design: Algorithmic Justice and the Erosion of Constitutional Bail in California.
Over the coming posts, I’ll examine how California’s pretrial release system — shaped by reform-minded case law, bolstered by risk assessment tools, and framed by laws like SB 36 — has quietly changed into something that may look data-driven and fair, but too often behaves like its shadowy (and unaccountable) punitive predecessor.
How the Series Will Work
This series is grounded in both empirical research and lived experience. On the empirical side, I'll draw briefly from recent studies — including a 2025 Apple paper, along with related work by Microsoft and others — that reveal a core limitation in how large reasoning models perform as the complexity of a task increases. These studies matter because the failure pattern they document mirrors what happens as our courts increasingly rely on Artificial Intelligence (AI) models and tools that perform well in easy cases but degrade — sometimes silently — when nuance is needed most.
On the ground, because I primarily practice criminal defense in Fresno, Kings, Tulare, and Madera counties, I'll be filing public records requests in each of those four counties to better understand how SB 36 is being implemented in practice. The public has been told these tools are transparent, validated, and fair. I intend to find out whether that holds up under scrutiny.
The series itself may occasionally be interrupted by other blog posts, but the structure will remain intact: each post title will begin with “Pretrial Release:”, each entry will note its place in the series, and every time I write a new post, I'll go back through the series and update the links. That way, every post will connect clearly to all the others, whether a reader arrives late in the series or simply wants to read the entire set of posts.
Think of this post as the introduction, the roadmap, and the invitation to follow along.
The Map of the Road Ahead
Each post in this series takes up a specific fault line in the system. Here’s what the entire series will cover:
- Pretrial Release: The Illusion of Algorithmic Neutrality — This post, which introduces the series, its themes, and its framing.
- Pretrial Release: When the Risk Tools Fail the Risky Cases — A look at how and why algorithmic risk assessments collapse under complex case conditions, drawing from recent research in large reasoning models.
- Pretrial Release: Silent Overrides and the Disappearing Record — A closer examination of judicial behavior in counties like Fresno, where overrides of algorithmic recommendations happen frequently and without explanation.
- Pretrial Release: A Law Written in Light, Enforced in Shadow — An analysis of SB 36, how it was supposed to promote transparency and accountability, and how it's functionally failing to deliver either.
- Pretrial Release: Humphrey, Hollowed Out — Revisiting California's landmark Humphrey decision and showing how its promise of individualized, non-punitive bail review is being eroded by silent automation and procedural sleight of hand.
- Pretrial Release: When Algorithms and Institutions Protect Each Other — How judicial discretion and algorithmic scoring create a self-reinforcing loop, with each obscuring the flaws of the other.
- Pretrial Release: Resisting the Conveyor Belt — The final post in the series, where I propose paths forward — from policy reforms to public records strategies to trial-level defense tactics that challenge the current system's structural failures.
Each post will include updated links to the rest of the series, so readers who enter at any point can still follow the full arc. Even if I pause to write on other topics along the way, the structure, titles, and focus of this series will remain consistent, and the cross-links will make it easy to find all the posts.
Why “Neutrality” is the Wrong Frame
One of the great myths of modern legal reform is the idea that algorithms are neutral. That they take the bias out of human decision-making. That they bring consistency to a process distorted by subjectivity. But this way of framing it collapses under scrutiny — especially in the pretrial context.
I’ve written about this most recently in Confabulations Cause Hallucinations: AI Lies Fool More Than Our Eyes.
Although the commonly used risk assessment tools in California do not utilize AI — they're actuarial tools, or statistical scoring systems based on pre-selected variables and logistic regression — they suffer the same basic defect. They don't see people. They process proxies; that is, they look at simplified data points like age, arrest history, and prior failures to appear, then presume this is the whole story. They don't know, for example, whether the person had no transportation, no child care, or never received proper notice of their case or court hearing. (The person might not even know they had a case until they got picked up on an arrest warrant during a traffic stop!)
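To make the mechanics concrete, here is a minimal sketch of how such an actuarial tool operates. The variables, weights, and formula below are invented for illustration; they are not the inputs or coefficients of any real California instrument. The point is structural: a handful of proxy variables, fixed logistic-regression coefficients derived from historical data, and a single number out the other end.

```python
import math

# Invented coefficients for illustration only -- NOT the weights of any real
# pretrial tool. In a real instrument, these come from fitting logistic
# regression to historical arrest and court data, which carries the biases
# of past policing and charging decisions along with it.
WEIGHTS = {
    "age_under_23": 0.6,             # flat bump for being young
    "prior_arrest": 0.3,             # per prior arrest
    "prior_failure_to_appear": 0.7,  # per prior FTA, regardless of reason
    "pending_charge": 0.5,           # another case already open
}
INTERCEPT = -2.0

def risk_score(age: int, prior_arrests: int, prior_ftas: int,
               pending_charge: bool) -> float:
    """Return a 'probability of pretrial failure' between 0 and 1.

    Notice what the inputs do NOT include: transportation, child care,
    housing, mental health, or whether notice of the hearing was ever
    received. The tool cannot weigh what it never sees.
    """
    z = INTERCEPT
    z += WEIGHTS["age_under_23"] * (1 if age < 23 else 0)
    z += WEIGHTS["prior_arrest"] * prior_arrests
    z += WEIGHTS["prior_failure_to_appear"] * prior_ftas
    z += WEIGHTS["pending_charge"] * (1 if pending_charge else 0)
    return 1 / (1 + math.exp(-z))  # logistic (sigmoid) function

# The person who missed court because the bus never came and the person who
# deliberately skipped town produce exactly the same score:
print(risk_score(age=30, prior_arrests=2, prior_ftas=3, pending_charge=False))
```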
Relatedly, though often touted as capable of predicting an individual’s likelihood of pretrial success, the tools are actually incapable of making individualized predictions. They instead study data from many individuals and then forecast aggregate group risk. A risk score therefore indicates that a person shares traits with a group who succeeded or failed at a certain rate. But the score provides no information about how a specific individual will behave if released.
— Brandon Buskey & Andrea Woods, Making Sense of Pretrial Risk Assessments, Champion, June 2018, at 18.
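A tiny numerical illustration of that point, with made-up numbers: suppose everyone who receives a particular score historically appeared for court about 70% of the time. The score reports that group rate; it cannot say which individuals will fall into the other 30%.

```python
import random

random.seed(0)

# Hypothetical: 100 people who all received the same risk score, in a score
# band whose historical appearance rate is 70%.
same_score_group = [random.random() < 0.70 for _ in range(100)]

group_rate = sum(same_score_group) / len(same_score_group)
print(f"Group appearance rate: {group_rate:.0%}")

# That group rate is all the score conveys. Which particular person will be
# among the roughly 30 who miss court is not in the number -- and no amount
# of staring at the score will put it there.
```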
These actuarial risk assessment tools don’t understand the dynamics of systemic racism, or why someone might have multiple bench warrants for failing to appear while navigating housing instability or mental illness. These tools are built on decades of data that reflect biased policing, charging, and prosecutorial discretion. In that sense, like AI systems, they’re not neutral — they’re feedback loops. They feed our own biases back to us.
But it’s worse than that. The tools break down under pressure. Recent studies relating to AI systems — including a 2025 paper out of Apple and complementary work from Microsoft — have shown that large reasoning models perform less effectively as problem complexity increases. They give the appearance of deliberation in simple cases, but once multiple contingencies or contradictory variables enter the picture, their reasoning process falters. The model doesn’t warn the user that it’s failing. It just proceeds — confidently wrong.
The same is true of the actuarial models used for pretrial risk assessment, only more so, because there is no AI there to start with: just bare algorithms built on biased data.
And it’s not just the tool that fails. It’s the whole criminal justice system around it. A 2023 field study in Pennsylvania — aptly titled Ghosting the Machine — documented how judges routinely dismissed their state’s risk assessment tool, describing it as “useless” or “not helpful,” or simply ignored it altogether without explanation.
In addition,
human decision-makers can selectively follow algorithmic recommendations to the detriment of individuals already likely to be targets of discrimination. In Kentucky, a pretrial risk assessment tool — intended as a bail reform measure — increased racial disparities in pretrial releases and ultimately did not increase the number of releases overall because judges ignored leniency recommendations for Black defendants more often than for similar white defendants. Likewise, judges using a risk assessment instrument in Virginia sentenced Black defendants more harshly than others with the same risk score.
— Dasha Pruss, Ghosting the Machine: Judicial Resistance to a Recidivism Risk Assessment Instrument, in ACM Conference on Fairness, Accountability, and Transparency (FAccT) (June 2023), https://doi.org/10.1145/3593013.3593999.
While California’s legal framework differs, the behavioral pattern feels familiar. In counties like Fresno, overrides of algorithmic recommendations are common, but rarely recorded. SB 36 was supposed to expose this kind of behavior through transparency and reporting. But in practice, it shields it behind procedural abstractions and missing data.
The Pennsylvania study reminds us that the existence of a tool doesn’t ensure its meaningful use — especially when the institution wielding it prefers opacity over constraint.
This is why “neutrality” is the wrong frame. What we’re dealing with, even after SB 36, isn’t a neutral system flawed by implementation. It’s a system designed to look “reformed” while preserving discretion, avoiding accountability, and automating the logic of preemptive incarceration. The tool is a prop.
The override is the real policy.
What’s Coming Next
In the next post — Pretrial Release: When the Risk Tools Fail the Risky Cases — I’m going to dig into the technical breakdown behind much of what I’ve covered so far.
I’ll explain why these tools — while simple and fast in low-stakes cases — start to fall apart when complexity enters the picture. (If you’ve read Apple’s June 2025 paper, this will sound familiar; it’s what that paper found for AI. As I noted above, the same difficulty holds true for the actuarial tools behind California’s pretrial risk assessment.)
That collapse isn’t just a matter of degree: it’s structural. It flows, in other words, directly from how the tools are built.
And it isn’t visible to the people relying on the output. (Frankly, I doubt most probation officers, or even judges, have even a basic understanding of these tools.) When the model fails to process nuance, it doesn’t flash red or pause for clarification. It just returns a number.
That number (sometimes) drives a detention decision, shapes judicial discretion, or “justifies” continued incarceration — all without anyone acknowledging its unreliability. (Sometimes defense attorneys make inchoate attempts to show this. But, frankly, most defense attorneys barely understand the defects in the tools any better than probation or the judges do. We do, however, usually know more about our clients than they do.)
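To illustrate what “it just returns a number” looks like at the final step, here is a hypothetical sketch of how a raw score gets turned into a recommendation. The cut-points and labels are invented, not any real tool’s thresholds. The instructive part is what the output does not contain: no confidence interval, no flag that the case falls outside the data the model was built on, no signal that the number should be distrusted.

```python
def recommendation(score: float) -> str:
    """Map a risk score to a recommendation label (invented cut-points).

    Whatever went wrong upstream -- missing context, facts unlike anything
    in the historical data, biased inputs -- the court sees only the label
    returned below. There is no error bar, no warning, no "I don't know."
    """
    if score < 0.3:
        return "Release on own recognizance"
    if score < 0.6:
        return "Release with supervision"
    return "Refer for detention review"

print(recommendation(0.67))  # -> "Refer for detention review", nothing more
```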
This isn’t a small problem. The system claims to be centered on fairness, individualized assessment, and due process. But when the tools flatten complexity and the judges either can’t see it or don’t care to, what’s really happening is the automation of preemptive punishment.
The next post will show why that’s not just a software limitation: it’s an institutional and constitutional one.
Stay tuned for the next post! Maybe a different bat time; same bat channel!