The Real Legal AI Risk is in the Handoffs

By Dennis Kennedy on March 18, 2026

Most legal AI talk is still focused on whether the engine starts, while the real danger is that no one knows who’s actually steering the car once it hits the highway.

We are still judging legal AI by the visible draft, but the real issue is the invisible chain behind it.

For the past two years, our conversations have focused on the visible surface of the technology. Can it draft a clause? Summarize a case? Answer a query? These were useful questions, and early efforts like prompt engineering and Retrieval-Augmented Generation (RAG) were our first attempts to build a reliable chain for those answers. But those efforts were only a start.

The more interesting shift is from tools to systems.

A chatbot helps at one point in the work. A more agentic setup starts to move the work itself: intake, classification, retrieval, drafting, routing, review, and knowledge capture. That shift matters because the leak has moved from the faucet to the foundation.
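
To make the shift concrete, here is a minimal sketch in Python, with entirely hypothetical names, of work moving through such a system. It is not any vendor’s real API; the point is simply that every arrow between stages is a handoff.

```python
# A minimal sketch, hypothetical names throughout, of an agentic workflow
# as a chain of stages. Every arrow between stages is a handoff: a point
# where context can be dropped, mislabeled, or silently rewritten.

from dataclasses import dataclass, field

@dataclass
class WorkItem:
    request: str
    data: dict = field(default_factory=dict)
    history: list = field(default_factory=list)   # the handoff trail

def intake(item):
    item.data["type"] = "nda"                     # classify the request
    return item

def retrieve(item):
    item.data["template"] = "nda_v3"              # which template? which policy version?
    return item

def draft(item):
    item.data["draft"] = f"[draft built from {item.data['template']}]"
    return item

def route(item):
    item.data["reviewer"] = "contracts_team"      # who actually reviews this?
    return item

def run_pipeline(item, stages):
    for stage in stages:
        item = stage(item)
        item.history.append(stage.__name__)       # record each handoff
    return item

result = run_pipeline(WorkItem("Please draft an NDA"), [intake, retrieve, draft, route])
print(result.history)  # ['intake', 'retrieve', 'draft', 'route']
```

Each stage transforms the item and passes it along. If that history list is the only record kept, nobody downstream can see what changed at each step, or why.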

It turns out the human in the loop isn’t a safety feature if the human doesn’t know which loop they’re currently standing in.

This isn’t a new problem. It’s a borrowed one. In systems engineering and medical malpractice, handoff risk refers to the danger that information is lost or distorted as it moves between teams or tools. It’s a bedrock principle. In a hospital, the risk isn’t just the surgery. The transfer from the OR to the ICU also creates risk. Legal AI is now entering its own handoff era.

Take a simple law department example. A contract request comes into an AI intake system. The system classifies it, pulls a template, suggests fallback language based on policy, generates a draft, and routes it for review. The agreement goes out with the wrong liability cap.

This is where a Columbo-style question becomes useful. The draft looked fine, but how did it get that way?

I spent enough years in law departments and enterprise systems to know that once a process crosses tools, teams, and approval layers, the handoff points become the whole game. The error rarely sits where people first want to pin it. We must look for the invisible links in the chain.

Was the RAG pipeline pulling from a stale index, causing it to ignore the most recent policy? Did the routing system bypass a critical human secondary check because of a tagging error? Does the vendor contract shield the provider from output errors, leaving the department to absorb the risk?

Want some more candidates? The model provider? The workflow vendor? The lawyer who reviewed it? The legal department that approved the system? The person who designed the routing logic?
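
Consider just the tagging-error candidate from that list. Here is a hedged sketch, with invented names and no real product implied, of how a routing rule can quietly skip the human secondary check:

```python
# Hypothetical routing logic; no real product is implied. The bug is an
# exact-match tag comparison, so a slightly different label from the
# classifier bypasses the human review gate without raising any error.

REQUIRES_HUMAN_REVIEW = {"liability", "indemnity", "high-risk"}

def route(draft_tags):
    if any(tag in REQUIRES_HUMAN_REVIEW for tag in draft_tags):
        return "human_review_queue"
    return "auto_send"

print(route(["liability"]))   # 'human_review_queue': the gate works
print(route(["Liability"]))   # 'auto_send': one capital letter skips review
```

No stage fails loudly. The draft simply takes a path no one intended, which is what a handoff error looks like from the inside.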

Now take a messier example. Strait of Hormuz risk spikes. A company starts trying to understand supply chain exposure. One system flags affected vendors. Another pulls contract language on force majeure, notice provisions, and termination rights. Another drafts internal guidance or customer communications.

The output looks impressive and on point, even covering items you might have missed in a time crunch. Then a notice deadline is missed, or a contractual right is overstated, or a business team acts on a summary that sounded more certain than it was. Again, we are left asking who owned the miss in that sequence of handoffs.

As Lt. Columbo might say, “Just one more thing…” We often assume the lawyer at the end of the chain is the safety net. But if that lawyer doesn’t understand the logic that prioritized one clause over another, supervision becomes ceremonial. You can’t catch a mistake in a system you don’t actually understand.

This is why I think the pressure point has changed. For a while, legal AI was treated mainly as an output problem. Could the tool produce something useful? The next phase looks more like a governance problem. Can the system move work in a way that makes authority, review, and responsibility legible?

That is a different problem. It’s no longer just about evaluating tools. It is about understanding systems well enough to see where accountability gets blurred and where the chain has links we haven’t even named yet, like data provenance, model drift, and third-party indemnity.
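
What would “legible” look like in practice? One assumption-laden sketch: every handoff writes a record of who acted, on what basis, and with which versions. The field names below are illustrative, not any standard.

```python
# Illustrative only: a handoff record that makes authority and provenance
# reconstructable after the fact. Field names and values are invented.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class HandoffRecord:
    stage: str         # e.g., "retrieval", "drafting", "routing"
    actor: str         # a human, or a model/service plus its version
    basis: str         # policy version, template id, retrieval snapshot
    output: str        # what was handed to the next stage
    timestamp: datetime

record = HandoffRecord(
    stage="retrieval",
    actor="rag-pipeline:v2.1",
    basis="policy corpus snapshot 2025-09-30",
    output="template:nda_v3 + fallback clause set B",
    timestamp=datetime.now(timezone.utc),
)
print(record.stage, record.actor, record.basis)
```

With records like this, the Columbo question, “how did it get that way?”, has somewhere to start. Without them, supervision is reconstruction by memory.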

The obvious objection I often hear is that true agents are still more marketing than reality. Demos are cheap, but workflow redesign is hard. But the speed of the hype doesn’t change the direction of the risk. Even if adoption is slow, the pressure point has moved.

Columbo wouldn’t spend much time admiring the polished draft on the desk. He’d be in the back room, asking the IT director and the insurance broker about the handoffs that no one bothered to document.

Lawyers should do the same.

We’ve spent three years debating whether the AI can write a brief while ignoring that we’re watching a game of Telephone played by black boxes. If you can’t explain the handoff, you don’t own the outcome. That makes you the last person sitting in the passenger seat when the car leaves the road.


[Originally posted on DennisKennedy.Blog (https://www.denniskennedy.com/blog/)]

DennisKennedy.com is the home of the Kennedy Idea Propulsion Laboratory
