Landmark Ruling Finds that Conversations with Chatbots Generally Aren’t Protected by Attorney-Client Privilege
With the rise of artificial intelligence and the popularity of chatbots, it was inevitable that courts would be forced to confront a fundamental question with profound consequences: can prosecutors access chats in which individuals ask for advice relating to alleged criminal conduct? In an attempt “to answer a question of first impression nationwide” (meaning the judge acknowledged this is the first case to address the issue), Judge Jed S. Rakoff of the influential United States District Court for the Southern District of New York found in a recently published opinion that the answer is a resounding “yes,” though with some caveats.
In the case at hand, Bradley Heppner, an executive of several corporate entities, learned through a grand jury subpoena and discussions with the government that he was the target of a federal criminal investigation. Heppner diligently searched for and retained criminal counsel in anticipation of future charges, a prudent step that anyone subject to a criminal investigation would be advised to take. However, Heppner then, on his own initiative and without his counsel’s suggestion, engaged with Anthropic’s Claude, the popular large language model, asking it several legal questions about his case and potential strategies, purportedly for the “purpose of speaking with counsel to obtain legal advice”; he “subsequently shared” the generated documents with counsel. Heppner was later indicted on securities fraud, wire fraud, and other charges, accused of bilking investors out of more than $150 million according to the indictment. During the execution of a search warrant after the indictment was unsealed and Heppner was arrested, federal agents seized electronic devices containing approximately 31 AI-generated documents and the underlying communications showing Heppner’s prompts to Claude for legal advice. After defense counsel asserted attorney-client and/or work-product privilege over the documents at issue, both parties recognized the potential issue and agreed to segregate the AI-generated documents until a court could rule on the matter.
After the government challenged the asserted privileges, the judge first found that the attorney-client privilege did not attach to these interactions because: 1) Claude is not an attorney (and says so when prompted for legal advice); 2) Heppner’s communications with Claude destroyed any claim to confidentiality, because Anthropic’s written privacy policy permitted collection, retention, and use of conversations for model training, as well as Anthropic’s “right to disclose such data to a host of ‘third parties,’ including ‘government regulatory authorities’…even in the absence of a subpoena”; and 3) because Heppner engaged the AI entirely on his own initiative rather than at counsel’s direction, he could not later claim the exchanges were privileged even if he undertook them for the “express purpose of talking to counsel” at a later date. The judge acknowledged that this third factor was a closer call, and that different facts, such as acting at counsel’s direction to prepare for strategy discussions with counsel, might have led to another outcome; nonetheless, the communications were not privileged when made, and a later wish to share the information with counsel does not retroactively create privilege. Similarly, the court ruled that the “work-product doctrine,” which protects work prepared by an attorney, their agents, or in some instances a client to assist in preparation for trial, did not apply because Heppner prepared the documents of his own volition, without the advice of counsel and without reference to counsel’s prior strategy. Further, the court ruled that Heppner’s later sharing of the documents with his attorneys did not cure the problem, because privilege must exist at the time of the communication. As a result, the judge held that the government may view these documents, despite the narrative they may contain and the quasi-admissions of guilt or trial strategy that Heppner thought might be helpful to his attorneys.
What Does It Mean for Clients Who Wish to Use This Emerging Technology?
This ruling is the first of its kind, and it is certainly possible that other judges will view the issue differently. Still, anyone should exercise caution before running to the internet or AI at the first sign of legal trouble, no matter how popular and prevalent these tools are. As the judge pointed out in his ruling, while “more than half of United States households have adopted AI in some form” and its proponents glowingly promise that it will “revolutionize the way we process information,” “AI’s novelty does not mean that its use is not subject to longstanding legal principles.” Thus, Heppner’s use of Claude does not merit legal protection from disclosure to the government or from potential use in his subsequent trial.
This ruling is a critical reminder that anyone who is under investigation, facing litigation, or even anticipating legal trouble should under no circumstances use a public AI chatbot, whether ChatGPT, Claude, Gemini, Copilot, or any other LLM, to research their perceived legal situation, ask it questions, draft defense theories, or analyze what the government might charge them with based on their past conduct. This should not be altogether shocking: Google and other search engine histories have long been used to support criminal charges and convictions, with prosecutors offering as evidence alleged murderers’ searches for phrases such as “ways to dispose of a body” or “how long before a body starts to smell,” digital footprints of criminal behavior. This case shows that the logic of admitting this type of online “confession” or evidence of a crime has now been extended, by at least one judge, to chatbots as well. The bottom line is simple: clients should leave legal questions to hired and trained professionals, where privilege attaches, and know that chatting with an AI bot about their legal problems is not a private conversation and could lead to a conviction.
