Editor’s Note: AI has moved from speculative promise to operational reality in privilege review, and legal teams can no longer afford to treat it as an emerging side issue. Drawn from a Legalweek 2026 panel moderated by Esther Birnbaum of HaystackID, this article examines how courts, corporate counsel, and eDiscovery practitioners are confronting the practical and legal consequences of AI-assisted privilege workflows. It highlights the distinction between classification and logging, the growing importance of defensible validation protocols, and the role of Rule 502(d) orders, enterprise-grade safeguards, and audit-ready processes in reducing risk. For cybersecurity, data privacy, regulatory compliance, and eDiscovery professionals, the stakes extend well beyond litigation efficiency: the same controls that protect privilege also shape how organizations defend their data-handling decisions under scrutiny.
Industry News – eDiscovery Beat
Defensible by Design: What Legal Teams Must Get Right About AI Privilege Workflows
Disclosure: This article is based on a panel session sponsored by HaystackID at Legalweek 2026. The panel moderator, Esther Birnbaum, is Executive Vice President of Data Intelligence at HaystackID. The panel also featured independent voices from Morgan Lewis & Bockius LLP, Norton Rose Fulbright US LLP, and The Home Depot. The reporting draws on notes from the session and on publicly available court documents, as well as legal analysis from third-party sources cited in the News Sources section below.
Privilege review is changing quickly, and the shift from manual log drafting to AI-assisted workflows was a central theme of a Legalweek 2026 panel on defensible privilege processes last week in NYC.
While panels in previous years treated GenAI as a risky area where practitioners were not yet ready to trust the technology, the conversation this year has shifted dramatically from theoretical possibilities to proven efficiencies in core workflows. However, panelists emphasized that “proven” does not mean “unsupervised”; even as the scale of modern data makes AI adoption a “forcing function” for meeting deadlines, the process still requires a thoughtful, deliberative approach and “good lawyering” to remain defensible.
That was the opening frame set by Esther Birnbaum, Executive Vice President of Data Intelligence at HaystackID, who moderated a session titled “Automated Logs, Defensible Tags: AI-Driven Privilege Review without the Panic.” Joined by Sam Sessler of Norton Rose Fulbright, Liz Gary of Morgan Lewis & Bockius, and Nirav Shah, corporate counsel at The Home Depot, Birnbaum led a practical discussion on how legal teams are testing and deploying AI in privilege workflows.
“If you were here at Legalweek last year, you would know that GenAI for privilege was barely part of the conversation,” Birnbaum told the room. “But if you walk the floor this year, you’ll see that the conversation about AI has dramatically shifted… We have recognized the value; we’ve proven the efficiencies… we have already moved from theoretical to proven.”
This message lands differently depending on the risk profile of the task. The panel established a critical distinction between AI-driven logging and AI-driven classification. Drafting privilege log descriptions is a “safe, low-risk winner” for AI adoption, effectively solving the “blind page problem” and producing more consistent narratives than manual human review. Conversely, using AI for the primary determination of what is privileged remains high-risk, requiring robust validation, statistical sampling, and human-in-the-loop oversight to satisfy judicial requirements for attorney certification.
The Legal Stakes and a Fresh Warning from the Bench
Before the panel could get to workflow, it had to address the law, and, as it turned out, the law had just moved. Between the panel’s first prep call and its second, two federal courts issued competing rulings on AI, privilege, and the work-product doctrine on the same day: February 10, 2026.
In the Southern District of New York, Judge Jed Rakoff ruled in United States v. Heppner that 31 documents defendant Bradley Heppner generated using the consumer version of Anthropic’s Claude were protected by neither the attorney-client privilege nor the work product doctrine. Heppner, facing securities and wire fraud charges, had used the free version of Claude to outline defense strategies and then shared those outputs with his counsel, without counsel’s direction. Rakoff found three independent failures: Claude is not a lawyer; Heppner had no reasonable expectation of confidentiality given Anthropic’s privacy policy, which permits data disclosure to government authorities and use for model training; and Heppner was not acting at counsel’s direction.
“Even if this was attorney-client privilege, it would be waived,” Gary explained to the Legalweek audience, summarizing Rakoff’s analysis. “That is due to the nature of the tool he was using.”
On the same calendar day in the Eastern District of Michigan, Magistrate Judge Anthony Patti reached the opposite conclusion in Warner v. Gilbarco, Inc., a civil employment discrimination case in which a pro se plaintiff had used ChatGPT to prepare filings. Patti held that the materials were protected work product and that using a consumer AI tool did not waive that protection, in part because AI tools are “tools, not persons,” and work product waiver requires disclosure to an adversary, not merely to a third-party platform.
Gary also offered a narrower reading of Heppner, emphasizing that the decision should not necessarily be read as resolving every question about AI use and work product protection. On work product specifically, she argued the ruling leaves more room than it might appear. “Even Judge Rakoff’s ruling does not necessarily mean that using even free tools or training tools could be considered a waiver of work product,” she said, noting that the work product waiver standard, requiring disclosure to an adversary, not merely to a third party, is a higher bar than attorney-client privilege waiver. She went further, arguing that a privilege log may itself be treated as work product because it would not exist but for litigation and is prepared by lawyers. If courts were to adopt that view more broadly, it could shape how AI-assisted privilege logging is evaluated in future disputes.
The panel’s takeaway was measured but direct: two federal courts, two frameworks, no controlling answer yet. But the divergence does not mean paralysis. It means preparation. A Rule 502(d) order, which allows for court-entered claw-back agreements that do not require a showing of inadvertence, was identified as essential infrastructure for any team using AI in privilege workflows. Gary emphasized the point applies whether or not AI is involved. “I don’t know if I would necessarily opt for an AI tool without that protection of a 502 order,” she said. Birnbaum immediately seconded it: securing a 502(d) order is “just good lawyering, having nothing to do with AI.”
Classification vs. Logging: The Distinction That Changes Everything
With the legal landscape established, the panel turned to what practitioners in the room actually needed: practical guidance on how to build AI privilege workflows that survive scrutiny.
A central concept in the discussion was the distinction between AI-assisted privilege classification, which addresses whether a document should be marked privileged, and AI-assisted privilege logging, which focuses on drafting the description supporting that determination. These are not the same task; they carry different risk profiles, and conflating them is where teams get into trouble.
“Classification is going to be a workflow you need to validate, you need to sample, you need to test,” Birnbaum said. “When you talk about privilege log descriptions, how are you going to validate that? You have really different considerations for each of these pieces.”
Shah, speaking from the in-house perspective, put it plainly: AI-generated log descriptions are the lower-risk entry point. Rather than leaving a reviewer staring at a blank cursor, AI pre-populates the log entry, and the attorney reviews, validates, and adjusts. The cognitive load shifts from creation to quality control, and the time savings are dramatic. “It’s much quicker and much faster and much cheaper for me to give my attorneys a thousand log entries where they just look at the document, they look at the entry, and they validate it. Maybe they tweak it versus sitting there and sort of dealing with that blind page problem of, I have to type for each one,” he said.
Sessler, drawing on ROI analyses conducted by his team, said that pre-loading AI-generated privilege descriptions before review can improve workflow efficiency and may improve accuracy when layered with other review methods. “The layering approach is really where we see the ROI payoff,” he said.
A practical principle that flows from this: start with logging to build internal comfort with AI-assisted privilege work, then expand into classification as validation processes mature. The panel’s discussion suggested that organizations do not need to go all-in on day one. The incremental path is not a concession; it is good risk management.
The Forcing Function and the Limits of Patience
One of the session’s more quotable concepts emerged when Birnbaum introduced what she called the “forcing function,” the external deadline or scale event that compels adoption even for hesitant teams. Gary’s real-world example was a banking client whose servers were seized by a regulator, leaving a legal team with three months to review 10 million emails and attachments. “There is just no scale like a gen AI tool,” she said.
Birnbaum’s organization produced nearly 300,000 documents for a privilege log in a second request in just over one business week using AI-assisted workflows. “They literally came back and said, ‘We would not have made the deadline,’” she recalled of her team’s reaction. The adoption is not happening because the technology is novel; it is happening because the alternative has become untenable.
Shah framed the organizational risk of delay in unambiguous terms. Legal teams that postpone privilege review, letting the pile build in hopes a case will settle, eventually find themselves in a week-left crunch and then take shortcuts. Attorneys withhold meeting invites on the theory that any email with an attorney’s name is privileged. Shah argued that this kind of shortcut can create challenges, credibility problems, and judicial scrutiny that cost far more than investing in a defensible privilege workflow at the outset. “Once it’s gone, you will pay 10 times what that privilege log would have cost you to get your credibility back,” he said. More broadly, the panel’s point was that adoption is being driven less by novelty than by the practical limits of manual review at scale.
Validation, Transparency, and the Audit-Ready Workflow
The panel’s most direct message for anyone deploying or considering AI privilege tools was about defensibility: the process must be documented, repeatable, and explainable.
Validation takes different forms depending on which AI function is in play. For log descriptions, the quality check is more subjective, a qualitative assessment of whether the entry is accurate and sufficiently descriptive. For privilege classifications, statistical sampling is the standard. Teams should be sampling consistently, running null set tests, and verifying precision and recall metrics.
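To make the sampling concept concrete, here is a minimal, illustrative sketch (not drawn from the panel, and with a hypothetical `validation_metrics` helper) of how a team might compute precision, recall, and an elusion estimate from a manually validated sample of AI privilege classifications. Elusion, the rate of privileged documents hiding in the “not privileged” null set, is the figure a null set test is designed to estimate.

```python
# Illustrative sketch: validating AI privilege classifications against
# a reviewer-confirmed sample. Each pair is (ai_said_privileged,
# reviewer_confirmed_privileged).

def validation_metrics(sample):
    """Return (precision, recall, elusion) from (predicted, actual) booleans."""
    tp = sum(1 for p, a in sample if p and a)          # correctly flagged privileged
    fp = sum(1 for p, a in sample if p and not a)      # over-designated
    fn = sum(1 for p, a in sample if not p and a)      # missed privilege
    tn = sum(1 for p, a in sample if not p and not a)  # correctly released
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # Elusion: privileged documents found in the null (not-privileged) set.
    elusion = fn / (fn + tn) if (fn + tn) else 0.0
    return precision, recall, elusion

# A hypothetical validation sample of 200 reviewed documents.
sample = ([(True, True)] * 90 + [(True, False)] * 10
          + [(False, True)] * 5 + [(False, False)] * 95)
p, r, e = validation_metrics(sample)
print(f"precision={p:.2f} recall={r:.2f} elusion={e:.2f}")
```

The acceptance thresholds themselves (what recall or elusion rate is “good enough”) remain a legal judgment call to be documented in the validation protocol, not a number the tooling can supply.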
Sessler described using eDiscovery-adjacent tools to interrogate a completed privilege log and identify gaps, essentially asking the tool to challenge its own output from an adversary’s vantage point. Shah built on that idea, suggesting practitioners load a privilege log and prompt an AI tool to respond as plaintiff’s counsel looking for weaknesses. The exercise, he argued, takes minutes and can surface real vulnerabilities or confirm the log is sound. Either outcome is useful, and the approach represents an underrated application of generative AI: using it to hold your own work accountable.
Customization was another theme the panel returned to repeatedly. Privilege classifications are not one-size-fits-all. The entities involved, the jurisdiction, the judge, and the relevance of the privileged population to the core allegations all shape how a privilege review should be configured. AI tools that allow for building a “corporate brain,” accumulating knowledge about how a specific company’s employees and counsel communicate over time, can deliver increasingly consistent results across matters. For in-house counsel like Shah, that cross-matter consistency is the north star: the ability to defend not just one case but every case, using processes that are documented, repeatable, and ready to explain to a general counsel or a judge.
Gary offered what may be the clearest structural principle for privilege workflow design. “I personally think that the best setup for success is when that person or that set of people is involved from the start, has a seat at the table strategizing with the team, helps negotiate those agreements that really govern the rest of the process,” she said, citing ESI protocols, 502(d) orders, and protective orders specifically.
On the question of attorney certification, where some jurisdictions and individual judges require attorneys to certify that privilege determinations were made in good faith, the panel was direct: know the local rules before the project starts. Build a QC process that puts the certifying attorney in a position to make that representation. AI-assisted workflows, properly validated, make that certification easier to support, not harder.
A Practical Guide for In-House Counsel Evaluating Outside Counsel AI Proposals
For in-house counsel receiving a proposal from outside counsel to use AI in privilege review, Shah’s contributions throughout the panel amount to a clear due diligence framework. First, ask for the workflow, not a product name, but a documented description of how the AI tool integrates with human review, what the attorney oversight steps are, and how decisions are recorded. Second, ask how the process will be validated: what sampling methodology will be used, what the acceptance threshold is, and who on the outside counsel team has authority to override an AI classification. Third, ask what the cross-matter deliverable looks like. Shah was explicit that one of the most valuable things outside counsel can provide is a reusable privilege framework, a documented set of classifications, entity lists, and process notes that the in-house team can carry into the next matter, regardless of which firm handles it. “What can you deliver me that adds value?” he said. “Here’s a takeaway. Here’s a little goody bag of a privilege that you can use on your next case.” Outside counsel that cannot answer these three questions with specificity are not ready to deploy AI in privilege review on your matter.
Where the Field Goes Next
The panel closed with a glimpse at emerging applications that go beyond the workflows already being deployed. Birnbaum described AI systems that can generate a preliminary list of privileged entities (attorneys, paralegals, and legal hold custodians) before a privilege review even begins, eliminating the mid-review scrambles that occur when a new lawyer surfaces in a large document population.
The panel also acknowledged the open problem that may keep practitioners busy for years: inconsistent privilege redactions across duplicate documents. Current tools offer partial solutions (hashing to apply consistent redactions, scripting to flag inconsistencies), but no one in the room had a complete answer. “Anyone wants to build a good AI company to solve this problem,” Shah offered, half-joking. No one disagreed.
For cybersecurity and information governance professionals, the session’s implications extend beyond the courtroom. The same principles of documented processes, audit-ready workflows, and attorney direction that protect privilege in litigation also define sound data governance in regulatory investigations, breach response, and compliance reviews. The question is not whether AI will be used in these workflows (it already is) but whether the organization can explain, defend, and repeat what it did.
One term recurred throughout the session without being fully defined, and it deserves precision: “enterprise-grade.” In the context of AI privilege tools, the Heppner ruling makes the definition a legal question, not just a procurement preference. In practical terms, legal teams evaluating an enterprise-grade AI tool for privilege workflows should look for clear contractual limits on training, strong matter-level confidentiality protections, data handling terms that support reasonable expectations of confidentiality, and security terms that counsel can defend if later scrutinized. Birnbaum flagged this directly: “make sure it’s an enterprise-grade model and all the security is really acceptable.” The panel’s discussion of Heppner underscored the risks of using consumer-grade or free AI tools in privilege-sensitive workflows, particularly where confidentiality terms and data-use practices may undermine privilege arguments. Before deploying any AI tool in a privilege workflow, legal and cybersecurity teams should obtain and review the vendor’s data processing agreement, confirm any no-training commitment in writing, and assess whether the confidentiality terms would support a defensible privilege position.
The Defensibility Safety Net
A standout recommendation from the session was the necessity of procedural safeguards, specifically the Rule 502(d) order. Liz Gary of Morgan Lewis noted that she would hesitate to opt for an AI tool without this court-entered protection, which allows the claw-back of inadvertently produced documents without the burden of proving “inadvertence.” Defensibility also relies on developing a “corporate brain,” repeatable processes, and customized privilege entities that ensure privilege is handled consistently across multiple matters, protecting a legal team’s credibility over the long term.
As courts, regulators, and opposing counsel grow increasingly sophisticated about AI use in legal workflows, the organizations that will fare best are not those with the most powerful tools; they are those with the most defensible processes. In 2026, what question keeps your legal operations team up at night: whether to use AI for privilege review, or whether your current process would survive scrutiny if you had to explain it in court?
News Sources
- Automated Logs, Defensible Tags: AI-Driven Privilege Review without the Panic (HaystackID / Legalweek 2026)
- SDNY Rules AI-Generated Documents Are Not Protected by Privilege (Debevoise Data Blog)
- Two Courts, Two Answers: When Does Using AI Waive Privilege? (JLE Legal Analysis)
- Michigan Federal Court Protects AI-Assisted Litigation Work Product (Proskauer Rose LLP)
- Your AI Conversations Are Not Privileged: What a New SDNY Ruling Means for Every Lawyer and Client (Jones Walker LLP)
Assisted by GAI and LLM Technologies
Additional Reading
- At Legalweek, Judges Deliver a Stark Warning on Threats, Intimidation, and the Strain on the Rule of Law
- When a Comedian Walks Into a Legal Conference
- Latitude59 Opens Pitch Applications as Investors Raise the Bar on Operational Readiness
- FutureLaw 2026 Preview: The Practical Path to Defensible AI in Legal Workflows
- The 2026 Event Horizon: Early Outlook for eDiscovery, AI, and European Innovation
- Data Provenance and Defense Tech: IG Lessons from Slush 2025
- Lessons from Slush 2025: How Harvey Is Scaling Domain-Specific AI for Legal and Beyond
Source: ComplexDiscovery OÜ

ComplexDiscovery’s mission is to enable clarity for complex decisions by providing independent, data‑driven reporting, research, and commentary that make digital risk, legal technology, and regulatory change more legible for practitioners, policymakers, and business leaders.
The post Defensible by Design: What Legal Teams Must Get Right About AI Privilege Workflows appeared first on ComplexDiscovery.



