It’s unclear whether people use the internet and technology without any real grasp of what their use entails, or whether they just don’t care. But with the advent of chatbots, people are seeking legal, medical, sexual, and other interactions that involve their most intimate and personal thoughts, queries and acts. And people believe it’s just between them and their chatbot. It’s not.
On New Year’s Day, Jonathan Rinderknecht purportedly asked ChatGPT: “Are you at fault if a fire is lift because of your cigarettes,” misspelling the word “lit.” “Yes,” ChatGPT replied. Ten months later, he stands accused of having started a small blaze that authorities say reignited a week later to start the devastating Palisades fire.
Mr. Rinderknecht, who has pleaded not guilty, had told the chatbot months earlier how “amazing” it had felt to burn a Bible, according to a federal complaint, and had also asked it to create a “dystopian” painting of a crowd of poor people fleeing a forest fire while a crowd of rich people mocked them from behind a gate.
There are four types of privileged communications: Lawyer/client, doctor/patient, priest/penitent and spousal. There are reasons that these communications are protected by law.
All legal privileges rest on the idea that certain relationships — lawyer and client, doctor and patient, priest and penitent — serve a social good that depends on candor. Without assurance of privacy, people self-censor and society loses the benefits of honesty.
Historian Nils Gilman sees the problem as one of self-censorship, which undermines the “benefits of honesty.” This seems somewhat optimistic and theoretical. People don’t self-censor even when no privilege is involved. People tell their neighbors, buddies, garage mechanics and pretty much anyone who will listen all manner of personal information they would prefer not be made public. The problem isn’t that people self-censor, but that people can’t control themselves, and end up having their words used against them.
When it comes to the traditional legal privileges, the law has determined that the need for confidentiality is more important than the need to find any evidence against a person and nail them with it. Yes, it’s because of the inherent need for confidentiality in the nature of the relationship. Yes, it’s a means of assuring honest communication that’s vital to the nature of the relationship. No, it’s not just because of a concern that people will lie to their lawyer or doctor, but because the sanctity of the relationship is different from that of communications with one’s BFF.
So where does a chatbot fit in?
People speak increasingly freely to A.I. systems, not as diaries but as partners in conversation. That’s because these systems hold conversations that are often indistinguishable from human dialogue. The machine seemingly listens, reasons and provides responses — in some cases not just reflecting but shaping how users think and feel. A.I. systems can draw users out, just as a good lawyer or therapist does. Many people turn to A.I. precisely because they lack a safe and affordable human outlet for taboo or vulnerable thoughts.
Chatbots are replacing lawyers, doctors, therapists and best friends. The engagement may seem, to the user, to be little different from the real deal, and certainly easier and far cheaper, even if largely unreliable. And users often fail to grasp that chatbots not only remember what you said, but make your queries readily available to their owners and, subject to certain limitations, the government.
At present, most digital interactions fall under the Third-Party Doctrine, which holds that information voluntarily disclosed to other parties — or stored on a company’s servers — carries “no legitimate expectation of privacy.” This doctrine allows government access to much online behavior (such as Google search histories) without a warrant.
But when chatbots are used in lieu of lawyers and doctors, should users get the benefit of privilege that would apply if they were the real deal?
But are A.I. conversations “voluntary disclosures” in this sense? Since many users approach these systems not as search engines but as private counselors, the legal standard should evolve to reflect that expectation of discretion. A.I. companies already hold more intimate data than any therapist or lawyer ever could. Yet they have no clear legal duty to protect it.
Gilman argues that the law should evolve to recognize the way people use chatbots and clothe those communications in the privilege that would apply if users were communicating with actual lawyers or doctors rather than a computer.
A.I. interaction privilege should mirror existing legal privileges in three respects. First, communications with the A.I. for the purpose of seeking counsel or emotional processing should be protected from forced disclosure in court. Users could designate protected sessions through app settings or claim privilege during legal discovery if the context of the conversation supports it. Second, this privilege must incorporate the so-called duty to warn principle, which obliges therapists to report imminent threats of harm. If an A.I. service reasonably believes a user poses an immediate danger to self or others or has already caused harm, disclosure should be not just permitted, but obligated. And third, there must be an exception for crime and fraud. If A.I. is used to plan or execute a crime, it should be discoverable under judicial oversight.
Does the fact that a user may mistake a bot for a lawyer or doctor somehow make a bot the equivalent of a lawyer or doctor? Are bots liable for malpractice? Of course not, just as asking for your neighbor’s legal or medical advice over the backyard fence wouldn’t give rise to a malpractice claim for bad advice. People do it all the time, with stunning honesty, and yet the confession is readily available for testimony against the speaker in court. Bummer, neighbor.
We use privilege to encourage people to speak honestly with doctors and lawyers, and we protect those communications because the listeners are doctors and lawyers and the speakers should not be punished for their candor. But there is no reason why the law would similarly encourage people to seek legal or medical advice from a chatbot.
Much as Dr. Gilman has legitimate concerns about the use to which people put chatbots, and the use to which the government puts people’s “confessions” to chatbots, the better answer is to require that chatbots notify users that their communications with a computer aren’t privileged and can be used against them in court. Will this stifle communication with chatbots? You bet it will, and given the quality of AI information on matters of life and death importance, that’s hardly a bad thing. The danger isn’t that communications with a chatbot aren’t privileged, but that a chatbot’s answers may ruin people’s lives, and even kill people. This is not something that’s so vital to society that it deserves to be encouraged and protected.