During the Generative AI and Litigation CLE Panel at the New York State Bar Association’s Annual Meeting, the panelists discussed whether AI “prompts” that are typically used to create output from generative AI are discoverable and whether all such prompts can be deemed privileged. The audience seemed surprised to learn that the short answers are yes and no, respectively.
There are several ways attorneys may be leveraging AI tools in litigation:
- During document review to quickly analyze large volumes of documents to identify relevant information, patterns, or anomalies – saving time and reducing human error;
- Assisting with research and identifying relevant case law, statutes, and legal precedents, providing comprehensive insights and suggestions;
- Scanning multiple legal databases, delivering results in minimal time;
- Helping draft pleadings, motions, and other legal documents by generating templates and suggesting language based on existing legal standards;
- Analyzing past case outcomes to predict the likelihood of success in litigation, helping lawyers make informed decisions;
- Assisting in developing litigation strategies by analyzing data and suggesting potential approaches based on similar cases;
- Evaluating the risks associated with a case by analyzing various factors and providing a risk profile; and
- Assessing settlement offers by comparing them to historical data and predicting potential trial outcomes.
Such uses in pending or anticipated litigation, if kept confidential, may be protected from discovery by the attorney-client privilege and the work product doctrine. But when these AI tools are queried or prompted to provide legal and/or business guidance, are those prompts discoverable? And are they protected?
Although there is limited applicable case law (and what does exist has arisen exclusively from AI copyright litigation), the consensus among courts is that prompts are just another form of eDiscovery, potentially discoverable under generally applicable discovery principles. Indeed, the limited federal case law reveals that courts apply strict relevance standards under Federal Rule of Civil Procedure 26(b)(1), requiring parties seeking AI-related discovery to demonstrate specific relevance to the allegations at issue.
Relevance
In determining whether AI prompts were discoverable, the Southern District of New York applied Rule 26(b)(1)’s relevance and proportionality requirements to deny a motion to compel production of evidence related to the adversary’s use of generative AI tools, creation of its own AI products, and positions regarding generative AI. Noting that Rule 26(b)(1) permits discovery of “any nonprivileged matter that is relevant to any party’s claim or defense and proportional to the needs of the case,” the court held that “[t]he party moving to compel … ‘bears the initial burden of demonstrating relevance and proportionality.’”
Other courts have ordered the production of AI prompts authored by a party’s non-lawyer employees, subject to the usual proportionality constraints, while finding that irrelevant AI prompts and their results are, by definition, non-discoverable. Concord Music Group, Inc. v. Anthropic PBC, 2025 WL 2267950, at *1-2 (N.D. Cal. Aug. 8, 2025) (ordering production of prompts/outputs generated by the defendant’s “founder, executive or managing agent” and any other “identif[ied]” employee).
Privilege Protection Standards for AI and AI Development Communications
Application of general legal principles also means that, to the extent counsel is using AI – and prompting it (i.e., communicating with it) – in anticipation of or during litigation, the prompts and the resulting output may be protected from discovery by work-product principles.[1] In Concord Music, the court addressed this issue and rejected as “unpersuasive” an argument that the prompts of an attorney were not privileged:
[Defendant’s] initial argument, that the information it seeks (undisclosed prompts and outputs, and the settings therefore) is not privileged is unpersuasive. [Plaintiffs] cite cases where courts . . . have found precisely this information to constitute attorney work product. [Defendant] distinguishes only [the] denial of waiver, . . . but does not distinguish the basic finding that the failed prompts and related settings are attorney work product. This Court agrees. . . .
Id. at *2 (citations omitted). See also 2024 WL 3748003, at *2 (N.D. Cal. Aug. 8, 2024) (“prompts were queries crafted by counsel and contain counsel’s mental impressions and opinions about how to interrogate [the AI tool], in an effort to vindicate Plaintiffs’ [case]”).
If a prompt or output incorporates client communication or legal advice, there may be an argument that the content is privileged. However, one court recently held that a client’s conversations with AI tools are not protected by attorney-client privilege. Earlier this month, the Honorable Jed Rakoff of the Southern District of New York ruled that dozens of documents a criminal defendant generated using a non-enterprise consumer version of an AI tool are neither privileged nor protected as “work product.” See United States v. Heppner.[2] The court held that the AI communications at issue in that case did not qualify for protection under either the attorney-client privilege or the work product doctrine, for the reasons explained below.
Attorney-Client Privilege Analysis
Judge Rakoff applied the established three-element test for attorney-client privilege, which protects “communications (1) between a client and his or her attorney (2) that are intended to be, and in fact were, kept confidential (3) for the purpose of obtaining or providing legal advice.” United States v. Heppner, — F.Supp.3d —- (2026), citing United States v. Mejia, 655 F.3d 126 (2011). The court found that Heppner’s AI communications failed all three elements.
- First Element – Attorney-Client Relationship: The court held that the AI documents were not communications between Heppner and his counsel because “Heppner does not, and indeed could not, maintain that [the AI tool] is an attorney.” The court emphasized that “in the absence of an attorney-client relationship, the discussion of legal issues between two non-attorneys is not protected by attorney-client privilege.”
- Second Element – Confidentiality: The communications failed the confidentiality requirement because the AI tool’s privacy policy explicitly reserves the right to collect both user “inputs” and tool “outputs,” use such data to “train” the tool, and disclose data to various “third parties,” including “governmental regulatory authorities,” thus eliminating any reasonable expectation of confidentiality.
- Third Element – Purpose of Legal Advice: The court found that Heppner did not communicate with the tool for the purpose of obtaining legal advice because the tool disclaims providing legal advice. When the government tested this by asking the tool whether it could give legal advice, it responded, “I’m not a lawyer and can’t provide formal legal advice or recommendations” and recommended consulting “a qualified attorney.”
Work Product Doctrine Analysis
The work product doctrine “shelters the mental processes of the attorney, providing a privileged area within which he can analyze and prepare his client’s case” and provides qualified protection for materials prepared by an attorney acting for his client in anticipation of litigation. Judge Rakoff found the AI documents failed work product protection because they were not “prepared by or at the behest of counsel” and did not reflect defense counsel’s strategy.
Legal Standards and Waiver Analysis
Judge Rakoff applied the black-letter rule that non-privileged communications do not become privileged simply by being shared with counsel. Gould Inc. v. Mitsui Min. & Smelting Co., Ltd., 825 F.2d 676 (1987). Because the AI documents “would not be privileged if they remained in Heppner’s hands,” they did not “acquire protection merely because they were transferred” to counsel. The court also noted that even if certain information Heppner input into the tool was originally privileged, he waived the privilege by sharing that information with the tool, just as if he had shared it with any other third party.[3]
Similarly, other decisions demonstrate that courts apply heightened scrutiny to privilege claims for AI development communications, rejecting broad assertions of attorney-client privilege for technical discussions. For example, the Southern District of New York held that “a document will not become privileged simply because an attorney recommended its preparation if it contains merely business-related or technical communications between corporate employees.” The court found that most technical discussions among employees about AI training data, model development, and repository management were not privileged even when attorneys were copied on communications. The court also held that attorney-client privilege cannot be based on the mere discussion of potential legal issues between non-attorney employees and that copying a lawyer on a communication does not render it privileged.
Conclusion
Recent case law suggests that discovery strategy in AI litigation must demonstrate specific relevance to a defendant’s conduct or the allegations in the pleading rather than general industry practices, as broad discovery requests into AI training data or prompts may be denied as overbroad or irrelevant. Similarly, practitioners may wish to avoid relying on blanket privilege claims for internal communications about AI development or counsel’s use of AI, as courts scrutinize each communication independently and require clear evidence of legal advice rather than business or technical discussions.
Moreover, as discussed in earlier posts, while the use of AI is growing, it is not without risk. Given the prevalence of AI hallucinations and fake citations in recent federal cases, practitioners may wish to carefully authenticate all AI-generated work product and evidence and verify all AI-produced materials before relying on them in litigation. The judicial focus on Rule 11 violations related to AI-generated false citations demonstrates the courts’ increasing awareness of AI limitations and the risks attendant to leveraging these tools.
[1] In the Concord Music litigation, counsel was held to have waived the work product privilege to the extent they turned over AI prompts to an expert witness. Concord Music Group, Inc. v. Anthropic PBC, 2025 WL 3677935, at *3 (N.D. Cal. Dec. 18, 2025).
[2] Bradley Heppner, a Dallas financial services executive charged with securities and wire fraud, used an AI LLM to research legal questions related to the government’s investigation after receiving a grand jury subpoena and engaging counsel, but before his arrest. Heppner communicated with an AI platform to prepare defense strategy reports and analyze potential legal arguments, generated 31 documents of prompts and responses, and transmitted them to counsel. United States v. Heppner, — F.Supp.3d —- (2026). When the FBI seized the documents during a search of Heppner’s home, his attorneys claimed the documents were privileged. The government moved to compel.
[3] This decision establishes important precedent for the intersection of artificial intelligence and traditional legal privilege doctrines. Judge Rakoff concluded that while “generative artificial intelligence presents a new frontier in the ongoing dialogue between technology and the law,” AI’s “novelty does not mean that its use is not subject to longstanding legal principles, such as those governing the attorney-client privilege and the work product doctrine.” The decision aligns with existing ethics guidance that requires attorneys to understand the technology they use and its security risks under ABA Model Rule 1.1 Comment 8.
