One recurring thought relating to artificial intelligence is its impact on new staff. The promise of AI is that it will routinize the lower value work, the repetitive tasks, and allow knowledge workers to focus on higher value projects and outputs. This is great, but it raises the question of how someone new to a field gets the experience to vault over the work that is assigned to AI. How do you learn without starting from the bottom?
This article on journalists is only the latest example of people discussing this issue. Even if you buy into the use case that AI (primarily generative AI) can be used to shift work—one example being what Microsoft terms the Agent Boss and Frontier Firm model—that work belongs on a continuum. If you look at any field, there is an entry level starting point. People progress past that starting point towards expertise and higher value output.
As people move higher along that continuum, AI can potentially fill in behind them. Not because it has to, but because, in theory, people with greater expertise are paid more, so the organization gets more value for its investment if those higher paid people are focused on higher value (revenue generating) work.
What AI might be doing to the skill-building of junior staffers is a tertiary concern, at best. Left unchecked, however, this problem has the potential to be existential: How do you produce competent senior staff when the junior staff is either replaced by AI or—as the New York piece suggests—replacing themselves with AI.
How AI Could Sabotage the Next Generation of Journalists, Pete Pachal, Fast Company, May 16, 2025.
Starting Point
The legal profession talks a lot about minimum competence. We spend a lot of time thinking about how to ensure people are learning the foundational skills to achieve that. But as the Fast Company post asks, how do you produce competent staff when the tasks usually assigned to a junior role are automated? In the past, automation might have been the ideal: it usually required some human oversight to ensure it was operating properly, so a person was still in the loop. With AI, though, the goal seems to be for the automation, using machine learning, to fill part of a role where a human might otherwise have been placed, a role in which the person, not the AI, would have done the learning.
I think most of us still engage in this sort of activity, in part because expertise does not eliminate the need to remain informed. Perhaps the most obvious shift is away from serendipitous information gathering (reading a print news source or website “cover to cover”, one at a time) to aggregated information. I used to get periodical subscriptions that, when they came in, I’d flip through, front to back, looking for relevant information. Most of the ones I used to read—Law Technology News and Law Office Computing, for example—have either gone defunct or been converted into websites. Now, even if I visit those websites, it tends to be for a specific article or to follow a link I found elsewhere.
When I want to gather information, I use RSS feeds and a reader to aggregate it, which saves the time otherwise spent visiting individual websites. We use news alerts built on keywords and terms of art derived from our expertise, again eliminating time we might once have spent sifting through information.
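As a rough illustration of that workflow, here is a minimal Python sketch using the feedparser library. The feed URLs and keyword list are placeholders for illustration only, not my actual subscriptions; the point is simply that aggregation plus keyword filtering replaces the front-to-back skim.

```python
# Illustrative sketch: aggregate a few RSS feeds and surface only the entries
# that match a keyword list, standing in for the manual skim described above.
# Feed URLs and keywords below are placeholders, not real subscriptions.
import feedparser

FEEDS = [
    "https://example.com/law-tech/feed",       # hypothetical feed URL
    "https://example.com/library-news/feed",   # hypothetical feed URL
]
KEYWORDS = {"generative ai", "legal research", "knowledge management"}

def matching_entries(feed_urls, keywords):
    """Yield (source, entry title, link) for entries that mention a keyword."""
    for url in feed_urls:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            text = f"{entry.get('title', '')} {entry.get('summary', '')}".lower()
            if any(kw in text for kw in keywords):
                yield (feed.feed.get("title", url),
                       entry.get("title", ""),
                       entry.get("link", ""))

if __name__ == "__main__":
    for source, title, link in matching_entries(FEEDS, KEYWORDS):
        print(f"[{source}] {title}\n  {link}")
```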
We could use AI to curate that information further, perhaps consolidating and summarizing topics that appear in multiple places. But, for me, the summary is often just what I already know; it's the nuance and detail that I actually need. So it remains an automated task, but one with human oversight.
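To make that consolidation step concrete without handing it to a generative model, a crude sketch might do no more than cluster overlapping headlines so a human can review the details. The headlines here are invented for illustration; the clustering is simple title similarity via Python's difflib, not AI.

```python
# Illustrative sketch only: group near-duplicate headlines from aggregated
# entries so overlapping coverage can be reviewed together by a person.
# No summarization happens here; a human still reads the detail and nuance.
from difflib import SequenceMatcher

def group_similar(titles, threshold=0.6):
    """Greedily cluster titles whose similarity ratio meets the threshold."""
    groups = []
    for title in titles:
        for group in groups:
            if SequenceMatcher(None, title.lower(), group[0].lower()).ratio() >= threshold:
                group.append(title)
                break
        else:
            groups.append([title])
    return groups

headlines = [
    "Court sanctions lawyers over AI-generated citations",    # placeholder data
    "Lawyers sanctioned for AI-generated citations in brief",  # placeholder data
    "Library association updates research competencies",       # placeholder data
]
for group in group_similar(headlines):
    print(" | ".join(group))
```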
Someone in an entry-level role may not yet have the expertise to know what's important. In a way, AI could be helpful in bringing that person information. At the same time, it would cost them the opportunity to start making those connections themselves. The envisioned AI world still involves human expertise at some level, and someone who is AI-dependent from the start may not realize there are parts of the universe unplumbed by the AI tools they're using.
This piece by Clay Shirky in The Chronicle of Higher Education touched on this from a number of angles. The issue isn’t whether or not to use AI, but when and how. I particularly like how he contrasts it to the use of calculators in K-12 classrooms. Even if AI improves performance (or, in commercial settings, generates more revenue), if it does so at the expense of learning, then it will create a gulf between those who have gained expertise and those who haven’t yet.
Not Enough Experience
It is not hard to find job postings that require multiple years of experience for what is an entry-level role. I get that approach as someone who hires, but there is a real need to provide opportunities for people to learn on the job. Schooling can only take you so far. One only needs to look at law firms to see this issue.
The first few years as an associate are critical to a law firm. Retention is already an issue in these early years—a Major, Lindsey & Africa survey found that a quarter of associates who had been at their firm for fewer than 3 years planned to leave within a year—so one might think that moving new lawyers quickly onto challenging work that develops expertise would be important.
Historically, much of that third phase of learning [to actually be a lawyer] involves relatively low-stakes and potentially dreary tasks, which have, nevertheless, become important teaching tools as fledgling lawyers grow into their new professional identities. Today, however, generative artificial intelligence (GenAI) technologies pose a potential threat to this developmental model by fundamentally altering the types of work lawyers actually do.
The automation of routine tasks historically performed by first- and second-year associates results in a gap within the traditional talent pipeline, seriously challenging law firms’ conventional training models. After all, how are firms supposed to bring up senior associates if the work that traditionally transformed younger associates into more senior lawyers has been automated away?
The Law Firm Associate Gap and How To Fix It, Bryce Engelland, Thomson Reuters, January 9, 2025
The sooner new lawyers are moved through the more prosaic tasks (tasks that, frankly, can build confidence at lower stakes), the better. But skipping over those tasks assumes education, skills, or knowledge that most new hires won't possess. If they do not learn it now, it is not clear when they would gain that experience, and that creates a professional competence risk.
I’m not sure this is true with librarianship, though. I think the nature of what librarians do makes it far more difficult to outsource, even in those early years. The mere fact that most of us work directly with other people—which can’t be delegated to an AI—means that the role of AI will remain somewhat tangential unless we decide that in-person service is no longer part of our mandate. This is not to say automation and AI wouldn’t be useful, or wouldn’t intrude into areas that used to be learning opportunities for new librarians. I just don’t think it creates as broad an impact as it might for, say, lawyers, medical professionals, and journalists.
Who Is Learning: The Machine or the Person?
The temptation to lean on AI during early-career lawyers' training makes sense, though. One estimate puts the expense of bringing in a new associate at about $500,000, including nearly 500 hours of a partner's time in oversight and training. Given the 3-5 year turnover issue, a law firm would want to recoup that expense quickly, and AI might seem like a shortcut to getting associates to be profitable.
The tasks I think of most, of course, are legal research and professional writing. These are things lawyers get better at by doing. That matters especially at a time when a new lawyer may not be able to tell, on their own, whether what they are reading or retrieving is correct. Approaching it as a librarian, I would want to build the expertise to validate the AI research, the same way I would have validated other research automation.
Sure, sometimes that’s as easy as reading the supporting output (or even clicking through to be sure it’s not a confabulation). And it’s hard to tell whether the recent problems with AI-generated court submissions are the fault of associates or of more experienced lawyers (although it would seem a more experienced lawyer would be involved, if only through oversight, in all of these cases). But even the lawyering skill of preparing a clean brief, where you know the resources that are cited and why, and have verified that all of the information is correct as Rule 11 requires, seems to be the sort of skill one only gets by doing it.
¶17 ….The initial, undisclosed use of AI products to generate the first draft of the brief was flat-out wrong. Even with recent advances, no reasonably competent attorney should out-source research and writing to this technology – particularly without any attempt to verify the accuracy of that material. And sending that material to other lawyers without disclosing its sketchy AI origins realistically put those professionals in harm’s way. Mr. Copeland candidly admitted that this is what happened, and is unreservedly remorseful about it.
¶18 Yet, the conduct of the lawyers at K&L Gates is also deeply troubling. They failed to check the validity of the research sent to them. As a result, the fake information found its way into the Original Brief that I read. That’s bad. But, when I contacted them and let them know about my concerns regarding a portion of their research, the lawyers’ solution was to excise the phony material and submit the Revised Brief – still containing a half-dozen AI errors.
Lacey v. State Farm Gen’l Ins. Co., CV-24-5205 (C.D. Cal. May 5, 2025)
It’s still very much early days. I tend to believe AI will go the way of most automation in law firms: useful where it can be validated and where it saves time, but otherwise just another productivity tool submerged in larger tools. One advantage legal publishers have is that a lawyer, associate or otherwise, is likely to be safer using one of their RAG-based legal databases than the publicly available version of the same AI.
But we sometimes make grave errors when something new comes along and we try to leverage it. The risk here is that, like other fields, the legal profession implements a new technology that leaves new lawyers behind. By pulling up the ladder of expertise behind them, lawyers may not realize the professional risk they’re creating until it’s too late.