Here is my recent Daily Record column. My past Daily Record articles can be accessed here.
****
Ethical Jury Selection in the AI Era
Choosing jurors is a complex undertaking that requires focused attention, strong people skills, and the ability to think on your feet. Based on limited information, you’re required to make swift decisions that will significantly impact the course of the trial and, with it, your client’s life.
Fortunately, technology is available that can simplify the voir dire process and assist you in choosing a jury panel that is (hopefully) receptive to your client’s position.
But what if the technology you rely on is based on biased programming, leading you to unknowingly exercise peremptory challenges grounded in discrimination? Are you ethically required to prevent that from happening?
With the advent of artificial intelligence (AI), including generative AI, this issue is all the more pressing. AI technology is advancing rapidly, and while it offers great potential to streamline the voir dire process, it’s well known that many of these tools have unintentional bias built into their algorithms. What’s a lawyer to do?
The short answer? Read American Bar Association Formal Opinion 517, which was published on July 9, 2025.
This opinion offers guidance on how to ethically select juries without engaging in unlawful discrimination, especially when relying on input from clients, consultants, or software.
Foundationally, Model Rule 8.4(g) prohibits lawyers from engaging in “conduct that the lawyer knows or reasonably should know is harassment or discrimination on the basis of race, sex, religion, national origin, ethnicity, disability, age, sexual orientation, gender identity, marital status or socioeconomic status in conduct related to the practice of law.”
However, lawyers violate this mandate only if they know or reasonably should know “that the exercise of a peremptory challenge is impermissibly discriminatory.”
Importantly, not all discriminatory challenges are impermissible. If applicable law does not protect the class in question, a lawyer may strike a juror on that basis without violating the rule; in many jurisdictions, age and marital status are examples.
Unlawful discrimination can occur when lawyers accept input from others during the jury selection process, whether from a client or a jury consultant. A lawyer knowingly engages in unlawful discrimination if told that the recommendation to strike is grounded in race- or gender-based reasons. The fact that the lawyer acted at the direction of another “does not make otherwise unlawful conduct legitimate.”
This prohibition applies even when “the lawyer does not personally intend to discriminate on the basis of a protected class but may be advancing someone else’s intent to do so.” The standard is whether a lawyer of “reasonable prudence and competence…would have known that the challenges were impermissible.”
According to the Committee, the “reasonably should know” standard requires lawyers to inquire about the reason for the suggestion to exercise a peremptory challenge.
Notably, this same standard applies when lawyers rely on software, including AI tools, to make decisions during voir dire. Before doing so, lawyers have an ethical duty to “conduct sufficient due diligence to acquire a general understanding of the methodology employed by the juror selection program” and should avoid using the software if there is a reason to believe that it is based on programming that results in unlawfully discriminatory recommendations.
Does this mean that you should avoid using AI software to assist with the juror selection process? Of course not. This opinion simply confirms that the duty of technology competence applies when using AI tools for voir dire, just as it does with all other types of technology.
The bottom line: don’t let ethical concerns hold you hostage. Incorporate these tools into your courtroom arsenal with insight, integrity, and care, and success will follow. Technology, AI included, can be a powerful ally in the courtroom, providing a strategic advantage over your less tech-savvy opponents.
Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally recognized author of “Cloud Computing for Lawyers” (2012) and co-author of “Social Media for Lawyers: The Next Frontier” (2010), both published by the American Bar Association. She also co-authors “Criminal Law in New York,” a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at niki.black@mycase.com.