Here is my recent Daily Record column.
New Hampshire Ethics Committee On Drafting Legal Documents with AI
Despite repeated warnings from ethics committees, lawyers continue to make headlines for citing fake cases, invented by generative artificial intelligence, in their legal briefs. Most recently, attorneys at some of the nation's largest law firms have committed this transgression.
To be clear, these errors occurred because lawyers failed to verify the accuracy of filings before submitting them to the court. Regardless of who, or what, prepares a document, you have an ethical obligation to carefully review it for accuracy.
So why does this keep happening? One key reason is that document drafting is an obvious legal use case for generative AI. According to the 2025 AffiniPay Legal Industry Report, 40% of the respondents who had adopted generative AI in their practices have used it to draft documents.
And with good reason. These tools can write initial drafts of legal documents in seconds, providing a foundational starting point for a more complex, carefully reviewed final draft. The problem is that many lawyers skip the second part, failing to confirm the accuracy of the draft.
Ethics opinions have addressed the obligations at play when lawyers incorporate generative AI into their legal workflows, but are there any unique issues to consider when using this technology for document creation? This question was recently posed to the New Hampshire Bar Association’s Ethics Committee, which responded in an “Ethics Corner Article.”
Before answering the query, the Committee pointed out the obvious—that AI-assisted document creation is not a new concept: “The truth is, lawyers have been using artificial intelligence…to help us draft documents, e-mails, and correspondence, for some time.” Predictive text is one example, as are tools that correct grammar, such as Grammarly.
Next, the Committee explained that since embedded enterprise-grade AI is still in its infancy, it would focus on the ethical issues surrounding the application of consumer-grade AI tools like ChatGPT and Perplexity to legal work.
The Committee reiterated what was discussed above: these tools can produce errors, and as part of the duties to provide competent representation and candor to a tribunal, “lawyers must verify the authenticity of any information produced,” whether it’s legal research or a document.
Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally recognized author of “Cloud Computing for Lawyers” (2012) and co-author of “Social Media for Lawyers: The Next Frontier” (2010), both published by the American Bar Association. She also co-authors “Criminal Law in New York,” a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at niki.black@mycase.com.