*Editor’s Note: This article first appeared in 2015. It has been edited to reflect technological changes. But…the point remains.
Two books published in the 2010s, well before the advent of generative AI, noted the significance of chess legend Garry Kasparov’s loss to IBM’s Deep Blue in the 1990s: “The Second Machine Age” by Erik Brynjolfsson and Andrew McAfee, and “How Google Works” by Eric Schmidt, Jonathan Rosenberg and Alan Eagle.
Interestingly, even back then the books pointed out that despite the loss, the world did not end and there was no replicant takeover. Sound familiar in 2026, with predictions that the AI robots are going to take over any day now?
Instead, what the authors emphasized is that the computer’s chess victory established that human knowledge coupled with technology is stronger than human knowledge or technology alone.
This is true not only in chess but in many other areas, including the practice of law. It is even more true now as more and more professionals enlist the help of artificial intelligence in their work.
Weak Human + Machine + Better Process is Superior to Strong Human + Machine + Inferior Process
In The Second Machine Age, Brynjolfsson and McAfee note that after Deep Blue beat Kasparov, “freestyle” chess tournaments became popular. In freestyle events, teams compete using any combination of humans and computers.
Kasparov noted that “the teams of human plus machine dominated even the strongest computers . . . The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers . . . Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.”
Similarly, the Googlers write that after Deep Blue’s victory “a virtuous cycle of computer-aided intelligence emerges: Computers push humans to get even better, and humans then program even smarter computers. This is clearly happening in chess; why not in other pursuits?”
Computer-assisted human excellence is occurring all around us, and the practice of law is no exception. Simply put, proper use of technology makes better lawyers. Technology permits lawyers to save time, become more productive, and learn things about legal matters that might not have been readily apparent using human faculties alone. Using technology might even make law practices more profitable.
Lawyer + Technology Trumps Lawyer Without Technology
For years, there have been many uses for legal technology, including law practice management and automation, document management systems, collaboration platforms and timekeeping.
Fast forward to the advent of large language models, and technology not only augments and automates legal work; it can actually help with brainstorming, reasoning and taking a first pass at legal tasks. This is especially true as the use of AI agents becomes more prevalent.
Now law practice management software has an AI chat feature (Clio Duo), Word has Copilot, e-discovery software has generative AI built directly into document review workflows (Relativity aiR), and apps like Harvey and Legora are full-fledged AI platforms built for legal work.
But…There is no Substitute for a Good Lawyer (The Reliability Layer)
Despite ever-increasing predictions of attorney obsolescence, the machines are not taking over tomorrow, and there is no substitute for informed and creative legal skill.
Indeed, now that AI can tackle some of the legal work itself, lawyers can serve as the “reliability layer”: using human judgment to steer artificial intelligence and knowing how to verify and when to trust the outputs.
Mastering the “reliability layer” is arguably the highest-value skill a lawyer can develop in the age of artificial intelligence.
What the Reliability Layer Looks Like in Legal Practice
In chess, the amateur players who beat grandmasters were not necessarily better at chess. They were better at human-machine collaboration, knowing when to trust the computer’s move and when to override it. That is, they had a process that combined technology with human judgment.
Lawyers using AI can use the same M.O.
Prompt crafting. The quality of AI output is directly correlated to the quality of the instructions given. A lawyer who understands how to frame a legal question for an AI — providing the right context, constraints, and format requirements — will consistently get better output than one who treats it like a search engine. Prompt engineering is a learnable skill, and increasingly, a professional differentiator.
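To make the contrast concrete, here is a hypothetical illustration (not drawn from a real matter or any particular AI tool) of a search-engine-style query versus a framed prompt with context, constraints, and format requirements:

```text
Weak (search-engine style):
  non-compete enforceability California

Stronger (context + constraints + format):
  You are assisting a California employment lawyer. Summarize the
  enforceability of employee non-compete clauses under California law,
  citing the governing statute. Flag any recent legislative changes.
  Limit the summary to 300 words, and end with a list of points I
  should independently verify before advising a client.
```

The stronger version tells the model who the audience is, what authority to anchor to, how long to be, and, critically, what still needs human verification.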
Output verification. Generative AI systems produce confident-sounding responses even when they are factually or legally incorrect. In fact, this is a structural feature of how large language models work. Lawyers (everyone, really) need to understand this and the corresponding need to verify AI output. That means checking citations, prompting the AI to pick holes in a legal argument the way opposing counsel would, and applying professional judgment to conclusions before they become advice or filings.
Knowing when not to use AI at all. Perhaps most importantly, the reliability layer also means recognizing the limits of AI’s usefulness. Privileged communications, sensitive client strategy, and matters where errors pose high risk all require human judgment for which AI cannot serve as a substitute. In other words, you cannot automate discretion.
The Centaur Lawyer
Kasparov, reflecting on his loss to Deep Blue, eventually came to describe the ideal as a centaur — half human, half machine — where neither component alone is superior to the other. The centaur chess player wins not because of raw computational power or raw human intuition, but because of the quality of the collaboration.
The centaur lawyer is the same. AI can process more contracts, flag more legal issues, and draft pleadings faster than any human. But it cannot exercise professional judgment, bear ethical responsibility, or stand behind its work in court. That is the lawyer’s job. In the age of AI, those tasks become more important, not less.
The legal teams that will succeed are the ones that figure out how to collaborate well, not only with technology but across disciplines: not just adopting AI tools, but developing the process discipline, the verification habits, and the professional judgment to use them reliably.
The bottom line is that good lawyering still reigns supreme, but utilizing technology makes good lawyers even better.
The post How Garry Kasparov Can Make You a Better (Centaur) Lawyer appeared first on Percipient – Legal Services Powered by Technology.