Last month, I had the privilege of speaking at the American Council of Life Insurers Annual Compliance & Legal Conference in New Orleans. I was tasked with an hour-long presentation on legal ethics, which isn’t always the most exciting topic but is a great way to get lawyers to show up to your early-morning session.

Specifically, I presented on lawyers’ ethical obligations in their use of technology, a topic I’ve grown more and more interested in through helping law firms (and in-house legal departments) respond to data breaches, participating in our firm’s generative artificial intelligence (AI) pilot program, and training newer attorneys on our team.

While many of the Rules of Professional Conduct apply to lawyers’ use of technology, including preparing for and responding to data breaches, the most pertinent is Rule 1.1, which states that a “lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation.” Comment 8 to Rule 1.1, which has been adopted by most states, applies the duty of competence to technology: “A lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.…” My goal for the session was to help the attendees do just that: learn more about the proper – and improper – use of technology in their representation of clients.

Unsurprisingly, the use of AI is top of mind for lawyers. It seems that not a day goes by without a new headline telling the same story – that another lawyer was sanctioned for citing hallucinated (i.e., fake) cases generated by AI. These stories are not new. The first viral headline about such an incident hit the (virtual) newsstands in 2023. And yet these incidents are happening more and more often, and now even judges have issued orders citing hallucinated cases.

In response, some courts have issued rules addressing the use of AI in court filings. Some of these local rules require an attorney to disclose the use of AI-assisted technology in the preparation of a filing. Others outright ban the use of AI. These local rules and seemingly constant headlines underscore the general lack of understanding of what AI is and what it does (and doesn’t do). What courts are really worried about is the citation of hallucinated cases – cases completely made up by a generative AI platform. Yet the local rules go further and apply to the use of all AI-assisted technology.

But AI-assisted technology is everywhere and has been for a long time. Spell checkers and grammar tools use AI. Legal research platforms use AI. But these tools are not generative AI. What’s generative AI, you ask? Well, it’s in the name; it’s a tool that creates content using artificial intelligence. Too many lawyers believe these tools are like search engines, which locate already existing content containing specified search terms. Unfortunately, that is not so. If you ask generative AI for a case on a specific issue in a certain jurisdiction, it will create that case for you, whether or not it actually exists.

It’s not a difficult leap to see how this problem can lead to ethical violations, particularly of Rule 1.1. Of course, lawyers should be checking citations and making sure they – or co-counsel – aren’t citing hallucinated cases. More fundamentally, however, lawyers should make sure they understand the technology they are using. Not doing so can land them in hot water.