In this article, I propose a useful way for the legal profession to think about artificial intelligence (AI). I describe how the technology works and how lawyers currently tend to classify material, before arguing that AI should be treated as a “tertiary” source of information.
Why lawyers should care
I was (and still partially am) a skeptic. I’d heard that AI will replace junior talent, or that AI won’t replace lawyers but will replace lawyers who don’t use it. Demis Hassabis, the CEO of Google’s DeepMind and a Nobel laureate, believes AI could help “cure all disease” in five to 10 years. Still, an inability to identify sources, or worse, a tendency to cite made-up ones, was an obvious deterrent.
The nuanced view taken by the State Bar of Texas Taskforce for Responsible AI in the Law in its 2023-24 Year-End Report is that AI “is a rapidly emerging and potentially disruptive technology that presents attorneys and judges with risks and opportunities.” According to the task force, nearly a quarter of judges are already experimenting with AI. There are anecdotal examples of its use to review the record for particular concepts and to brainstorm oral argument questions.
Consistent with this view, the Professional Ethics Committee for the State Bar of Texas provided AI-related guidance in Opinion 705, noting that lawyers should not “unnecessarily retreat” from new technology that may save clients’ time and money. But lawyers should also appreciate the risks of repeating hallucinated answers and of exposing confidential information: some AI platforms store user inputs and share them with third parties, for example, and some states are beginning to require disclaimers.
Thus, there is room for us to be both bullish and squeamish about AI. Either way, it is penetrating the industry. Being able to recognize both its shortcomings and its efficiencies can help us better serve our clients, our teams, and even our courts.
Finally, aside from these practical considerations, there are also philosophical questions on which the profession can exercise leadership, such as whether government regulation of AI’s inputs and outputs is a free speech issue. Nonpartisan think tanks are already surveying American and global audiences about this question.
How AI technology works
There is an important distinction to be made between integrated AI that is already in platforms we use—such as autocorrect, search algorithms, and website chatbots—and generative AI that creates content like photographs (including deepfakes) and sentences. The large language models that generate sentences are generally what I mean by “AI” in this article.
Large language models can review massive amounts of information across a database and synthesize it in an impressively organized manner, aggregating and distilling concepts. Popular platforms include Claude by Anthropic and ChatGPT by OpenAI.
However, large language models are fundamentally designed to predict the next word based on patterns. The technology may choose an awkward word, cite a broken link, and fail to hedge while doing so. Its training data won’t be current, and it likely won’t be precise on a novel or unique issue.
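To make that concrete, consider a toy sketch, written in Python purely for illustration, of what “predicting the next word based on patterns” means. Real models are vastly more sophisticated, but the core mechanic, choosing the statistically likeliest continuation whether or not it is true, is the same:

```python
from collections import Counter, defaultdict

# Toy "next word" predictor built only from patterns in training text.
# An illustration of the principle, not how commercial models are built.
training_text = (
    "the court held that the motion was denied "
    "the court held that the claim was barred"
)

counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1  # tally which word follows which

def predict_next(word: str) -> str:
    """Return the word most frequently seen after `word` in training."""
    if word not in counts:
        return "<unknown>"  # no pattern to draw on
    return counts[word].most_common(1)[0][0]

print(predict_next("court"))    # -> "held" (a pattern, not a verified fact)
print(predict_next("statute"))  # -> "<unknown>" (never seen in training)
```

Notice that the predictor never verifies anything; it only repeats what usually comes next. Hallucinated citations are this mechanic operating at scale.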
AI as a tertiary source
In our professional context, we already tend to think of material as a primary or secondary source. Primary sources represent the rules we must follow—cases, statutes, and regulations. The heart of the matter. The cases can be distinguished; they can be compared (a case from 1975 may have less weight than one from 2005, unless it was a Supreme Court case setting the standard, and so on through the fact-specific exceptions). That is among the interesting work of a lawyer.
The objective of a secondary source is commonly to lead you to your primary source. In fact, many law professors recommend starting your research with secondary sources, such as treatises or articles. The source material can also add unique value, such as providing expert analysis, identifying cross-jurisdictional patterns, and condensing information into accessible takeaways. Notably, secondary sources can be evaluated by the reputations of their authors and the rigor of their peer-review processes.
The concept of AI as a “tertiary” source thus provides a useful framework for understanding and leveraging it, and it resolves common objections to AI. Because AI is based on statistical pattern recognition, it should not be relied upon as an end in itself. It has neither authors nor peer-review processes. There are open questions about whether anyone can be held accountable for its algorithmic outputs, and whether anyone should be.
But AI can at times be a tool to lead you to the right ideas and sources. When prompted, it can refresh your recollection of popular treatises, listing Wright & Miller’s Federal Practice and Procedure and Moore’s Federal Practice for federal procedure, Nimmer on Copyright, and Chemerinsky for constitutional law. It can be a first step in the brainstorming process, democratizing that process for professionals who don’t have the privilege of working on collaborative teams (or at least, not at all times).
And unlike a proposal that would overhaul our entire workplace, this is simple habit stacking. Next time you have a general question for your search engine, try an AI-powered alternative. And rather than merely asking the question, add context, like “I am a lawyer in Texas … Provide response in bullet points with links.” But don’t take its word as gospel. And skip the “please” and “thanks”: such niceties reportedly cost providers millions in electricity.
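For those comfortable with a little code, the same habit translates directly to the programmatic interfaces these platforms offer. The sketch below assumes the OpenAI Python SDK; the model name and prompt wording are illustrative placeholders, and Anthropic’s SDK follows a similar pattern:

```python
# Minimal sketch of adding lawyerly context to a prompt via an API.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable. Model name is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; choose a current model
    messages=[
        # Context first, as suggested above
        {"role": "system", "content": "I am a lawyer in Texas."},
        {
            "role": "user",
            "content": (
                "What should I read before appearing in the new Texas "
                "Business Court? Provide response in bullet points with links."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

As with the chat interface, treat the output as a tertiary source and verify every citation against primary and secondary sources. And remember Opinion 705’s caution before pasting anything confidential into a prompt: some platforms store and share user inputs.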
Daniela Peinado Welsh is a business and fiduciary lawyer at Graves, Dougherty, Hearon & Moody in Austin. She tries cases, defends them on appeal, and proactively studies evolving issues affecting her clients, such as data security and the relatively new Texas Business Court.