
Tiara Time and Data Center Politics: Vanderbilt’s AI Governance Playbook with Cat Moon and Mark Williams

By Greg Lambert & Marlene Gebauer on January 12, 2026
Cat Moon and Mark Williams of Vanderbilt

Cat Moon and Mark Williams return to The Geek in Review wearing two hats, plus one tiara. The conversation starts at Vanderbilt’s inaugural AI Governance Symposium, where “governance” means wildly different things depending on who shows up. Judges, policy folks, technologists, in-house leaders, and law firm teams all brought separate definitions, then bumped into each other during generous hallway breaks. Those collisions led to new research threads and fresh coursework, which feels like the real product of a symposium, beyond any single panel.

One surprise thread moved from wonky sidebar to dinner-table topic fast: AI’s energy appetite and the rise of data centers as a local political wedge issue. Mark describes needing to justify the topic months earlier, then watching the news cycle catch up until no justification was needed. Greg connects the dots to Texas, where energy access, on-site generation, and data-center buildouts keep lawyers busy. The point lands: AI governance lives upstream from prompts and policies, down in grids, zoning fights, and infrastructure decisions.

From there, the episode pivots to training, law students, and the messy transition from “don’t touch AI” to “your platforms already baked AI into the buttons.” Mark shares how students now return from summer programs having seen tools like Harvey, even if firms still look like teams building the plane during takeoff. Cat frames the real need as basic, course-by-course guidance so students gain confidence instead of fear. Greg adds a perfect artifact from the academic arms race: Exam Blue Book sales jumping because handwritten exams keep AI out of finals, while AI still helps study through tools like NotebookLM quiz generation.

Governance talk gets practical fast: procurement, contract language, standards, and the sneaky problem of feature drift inside approved tools. Mark flags how smaller firms face a brutal constraint problem: limited budget, limited time, one shot to pick from hundreds of products, and no dedicated procurement bench. ISO 42001 shows up as a shorthand signal for vendor maturity, though standards still lag behind modern generative systems. Marlene brings the day-to-day friction: outside counsel guidelines, client consent, and repeated approvals slow adoption even after a tool passes internal reviews. Greg nails the operational pain: vendors ship new capabilities weekly, sometimes pushing teams from “closed universe” to “open internet” without much warning.

The closing crystal ball lands on collaboration and humility. Cat argues for a future shaped by co-creation across firms, schools, and students, not a demand-and-defend standoff about “practice-ready” graduates. Mark zooms out to the broader shift in the knowledge-work apprenticeship model, fewer beginner reps, earlier specialization pressure, and new ownership models knocking on the door in places like Tennessee. Along the way, Cat previews Women + AI Summit 2.0, with co-created content, travel stipends for speakers, workshops built around take-home artifacts, plus a short story fiction challenge to write women into the future narrative, tiara energy optional but encouraged.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube

[Special thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

LINKS:

CAT MOON
Vanderbilt AI Law Lab
VAILL Substack
Women + AI Summit
Practising Law Institute (PLI)
American Arbitration Association (AAA)
Legal Technology Hub

MARK WILLIAMS
Hotshot Legal
The Information
Understanding AI (Timothy B. Lee)
One Useful Thing (Ethan Mollick)
SemiAnalysis
ISO/IEC 42001 standard overview
EU AI Act (Regulation (EU) 2024/1689)
Chip War (Chris Miller, publisher page)

Transcript

Marlene Gebauer (00:07)
Welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. I’m Marlene Gebauer.

Greg Lambert (00:14)
And I’m Greg Lambert, and today we are bringing back a couple of our favorites. Cat Moon and Mark Williams are back on the show. So thank you both. Cat’s a professor of the practice of law, co-director of the Program on Law and Innovation, and founding co-director of the Vanderbilt AI Law Lab, and apparently has no personal life whatsoever because she has at least five jobs.

Marlene Gebauer (00:19)
Yay!

But she has a crown.

Greg Lambert (00:44)
And Mark is a professor of the practice of law and co-director of VAILL. Do you say VAILL or “vail”? VAILL. All right, let’s do VAILL. And so co-director and co-founder of VAILL. So Cat, at least six-timer, seven-timer?

Mark Williams (00:51)
VAILL Yep.

Marlene Gebauer (00:53)
like in Colorado.

Cat Moon (00:54)
Yeah, like you’re seeing.

I know. That’s why I have this. I have the tiara. I’m excited that I had it ready to go.

Greg Lambert (01:03)
Yes, and came sporting the tiara. Yeah, I think Pablo

Marlene Gebauer (01:04)
Yeah, that’s why she ⁓

Yeah.

Greg Lambert (01:10)
Arredondo is the closest to getting his own tiara right now. So, well, anyways.

Cat Moon (01:14)
Maybe

Marlene Gebauer (01:14)
So Pablo, if you hear this.

Cat Moon (01:14)
do an episode with both of us in our tiaras ⁓

Greg Lambert (01:19)
That would be so fun. All right, well, Cat and Mark,

Marlene Gebauer (01:20)
I would love that.

Greg Lambert (01:22)
welcome back to the Geek in Review.

Cat Moon (01:25)
Thanks. So good to be here.

Mark Williams (01:26)
Thanks for having us.

Marlene Gebauer (01:28)
So I’ll kick it off. ⁓ You both hosted Vanderbilt’s inaugural AI governance symposium this fall, which brings together academics and practitioners and technologists. What did you find? What were the biggest insights or tensions that surfaced during the sessions? And, you know, once everyone goes home, what are the next steps? Are there going to be any type of outputs or working groups, frameworks, ⁓ anything that’s going to happen as a result of this?

Mark Williams (01:58)
Yeah, so the governance symposium was kind of a crazy, harebrained idea that sort of formed in my head over the last year or so. Because, I mean, VAILL started out as, and still very much is, sort of applied AI for legal services. But what you find when you go any sort of direction after that is the pitch isn’t just, how do we use AI in legal services? It’s, well, what does it mean to responsibly deploy this, to manage it within an organization?

What are the regulatory frameworks? What are the compliance frameworks? And ⁓ it’s just sort of this thing in my head that everybody means something different when they say governance, honestly. And ⁓ we wanted to create an event that sort of represented all those various perspectives and definitions, and put all those people in the same room, and demonstrate just how truly interdisciplinary

it is, was, or should be, and introducing that in a law school setting in a way that maybe is a little bit new. So we also co-organized this with a couple other folks here at the law school that I want to give credit to. So Asad Ramzanali who is our director of AI policy at our Vanderbilt Policy Accelerator. He’s a former Biden White House staffer in the OSTP. And then ⁓

Sean Perryman, who is an alum of Vanderbilt Law School and came back and taught an AI governance course with me ⁓ this fall, right around the same time. We decided to teach a short course and run a symposium in the same week, ⁓ which I think when we do this again, we will probably try to space out a little bit. Yeah, yeah, but I really just wanted to represent

Marlene Gebauer (03:36)
Because why not?

Greg Lambert (03:38)
Yeah.

Live and learn. Live and

Mark Williams (03:52)
all of those different perspectives in one room. So we had sessions on thinking about building compliance and governance frameworks within a law firm, but also if you’re in-house at a large corporation. So we had people from large firms out of New York City, we had ⁓ in-house counsel from HCA who do responsible AI.

And I was thankful we were able to do it at not quite AI 101, but maybe 102 or 103, where you could assume a little bit of information, but there was something for everybody in the room to learn a new topic in a new way. So talking about, you know, if you’re in-house counsel: what level of ownership should AI have at the board of directors? Do you need a task force for it? Does somebody on the board need to own it? Where does that sit within your legal team?

So discussions around that. Another really interesting one that we had was just around AI and energy and governance and policy around energy use, especially with data centers, ⁓ which is a very different discussion than the one on AI one on one for compliance within a law firm, but also related. And it was interesting because when we first were coming up with that topic,

Like in late spring, early summer, I kind of had to make the case for why AI and energy use would be an interesting topic for a governance symposium. But by the time we launched the symposium, I didn’t have to make that case at all because, you know, local elections were being decided by candidates running as either “I’m the anti-data-center candidate” or “I’m the pro-data-center candidate.” And just in defining what everybody meant by that, there was just a lot more awareness about what those things meant. ⁓

So we just had a variety of different topics and people there. And you all got to go to TLTF. I did not this year, but one of the best things I liked about TLTF was ⁓ the sort of 15, 20 minute breaks in between, where everybody gets to talk to each other and make connections. And I very deliberately spaced out the sessions at ours so people could have those kinds of conversations and accidental connections.

You know, so we had federal judges from the Middle District of Tennessee talking with Biden White House staffers, talking with other professors at Vanderbilt Law School. And I just had a meeting this week where several research projects had sort of bounced out of those sessions, some of those happy accidents that I was sort of hoping would come forward. ⁓ So it really was just my attempt to

put my arms around what we all mean when we say governance, and to sort of lay bare that we all mean very different things, and to add some definitions to that. So we will definitely be doing another one ⁓ next year at some point; we got enough momentum out of that. So, figuring out what the topics will be. And then we’re also starting to expand our coursework: we did a short course this fall, and we’ll definitely be running those again. But also making connections. So just making the law school, and

Vanderbilt in general, seen as a place where governing AI systems, thinking holistically about it, putting our law students in the same room as computer science students and making them sort of have a common language of what we all mean when we’re trying to translate these things is something we’re all going to be doing going forward, and just laying the groundwork for that.

Greg Lambert (07:28)
Cat, anything to add to that?

Cat Moon (07:31)
It was Mark’s brainchild and masterpiece, and I did attend, and I will say that session on energy and AI use was fascinating. It was. And I think that Mark hit the highlights, but really bringing together this diverse group of people who could speak to different aspects and

Mark Williams (07:34)
Yeah.

Cat Moon (07:54)
I think everyone in the room, no matter how expert they felt they were in their particular domain, walked away learning something. And so that’s always our objective.

Greg Lambert (08:03)
Yeah, it was interesting. I was in an energy practice group meeting yesterday and data centers are like the huge talk, especially here in Texas. And I mean, they were literally going, this is such an energy issue, and there are such legal issues around the energy policy and access, ⁓ how they’re going to set up. A lot of these data centers are thinking that they’re just going to be

self-sufficient in their energy. They’re going to build on site. So it’s going to be interesting to see how that goes, and just the governance over those kinds of policies. ⁓

Mark Williams (08:43)
Yeah, it was

just, when I first brought this up, it was a very niche sort of wonk issue. And one of the things that we talked about in our governance course, which ran around the same time, is my co-teacher Sean Perryman put on a six o’clock news reel, I think it was from Virginia, where there are a ton of data centers, talking about AI energy use and data centers. And his point was that this is now,

like, my mom knows about this kind of issue, right? Which is different than a policy wonk issue. This is something that your parents might ask you about, something that elections are going to be decided on, especially as the midterms are coming. This will be a topic of discussion. And so it’s gone from a kind of bespoke sort of thing that only AI policy and energy people cared about to a big deal. ⁓ And that’s what was interesting to me about all of this. I could go on and talk about this for hours on end.

Greg Lambert (09:49)
Well, Mark, I’ll give you something else to talk about. I actually kind of snuck my way onto the same board, so I want to get your perspective. You have a role on Hotshot’s AI advisory board, and Hotshot, with Ian Nelson and others, does this kind of on-demand training. And so, you know,

Vanderbilt has a collaboration with them as well. What’s actually changed in the past year for how law students or lawyers are being trained on AI? And what kind of modules or experiments or outcomes are already taking shape?

Mark Williams (10:31)
Yeah, so I was actually just reviewing some of their material. That’s part of what the board does. Yeah, and the one I was reviewing today was, you know, business development kind of stuff, which was fascinating for me because, well, I mean, Cat, you teach a whole course on laws of business. I have always taught, in non-AI courses, competitive intelligence and business development

Greg Lambert (10:36)
I’ve got to review it too because it’s due tomorrow.

Mark Williams (10:58)
and a variety of skills-based courses that I’ve done, so to turn that into a generative AI kind of thing was a really useful exercise for me. And I think the thing that I like about that, and ⁓ sort of the expectation, is moving beyond just one-shot chat, you know, using it for legal research, and tapping into the more dynamic aspects of, especially, generative AI

bolted on to good financial services and business data resources. Of course, competitive intelligence or business development is an extremely useful ⁓ case for these kinds of tools, one that law students who have not necessarily been exposed to that may not wrap their brain around right away. You don’t just have to use this to search Westlaw. It’s way more dynamic than that. ⁓ So that is

where I was really interested, because all of the sort of use cases I’ve seen so far with Hotshot have been kind of around that: they were moving it beyond the 101. The 101 stuff is there, but you can move it into other things as well. As far as expectations, I would say over the last two years, what I’ve noticed is the students are, well, I guess the future is here, but unevenly distributed. I’ve noticed more students come back now,

you know, I think I mentioned to you, Greg, we had somebody who was in your office, your Houston office, that took my class, ⁓ who had gone through some kind of formal summer training. This last year, I noticed that more than in years before, where we were only really talking about it abstractly. Now they’ve come back with some experience. They’ve used a bespoke legal tool, they’ve used Harvey, or at least they know to ask about Harvey. I’ve never experienced

students come back and ask about a specific legal tool like they did this summer, either asking me if we had Harvey or having used it before. Now, that doesn’t mean that everybody knows what to do with it. My sense, when they come back, is that the firms were still figuring out what exactly they were supposed to be doing with it. So, yeah, I would say it is all far from settled.

Greg Lambert (13:13)
No we got it all figured out.

Marlene Gebauer (13:14)
Yeah. Totally clear roadmap. ⁓

Greg Lambert (13:18)
Yeah, absolutely.

Mark Williams (13:26)
conversation is at least taking place more and more. When I have that first class when they come back, for some reason I ask them, where did you go this summer and what was your experience? And the answers have definitely matured over time, in terms of what the firms were comfortable allowing them to do, to play around with, to experiment with. Whereas, you know, this time two years ago, even last year, I would say it was still much more risk averse.

And part of that is because, I mean, as you know, whether it’s Westlaw or Lexis, it’s no longer this side choice of, I’m going to go over here and use generative AI, and then I’m going to go back and do this other thing. With Westlaw now, the platform is just generative AI. So there’s no choice to be made. You’re going to be using it no matter what, which kind of reminds me a lot of, you know, ten years ago, when natural language processing and all those systems made the Google interface

kind of the central feature. That was a big deal, right? You could no longer just choose to go off and do your own little bespoke thing. The decision was being made for you. So that’s how I’ve noticed the discussion has changed, too, where it’s not this optional thing. It’s now just infused in everything. And even if you don’t know how to use it, guess what? You’re using it. So that’s what I’ve seen so far.

Greg Lambert (14:47)
And Cat, I’d be remiss not to have you in front of me and ask this question. I know that Vanderbilt was pretty advanced in pushing the AI curriculum, pushing it to get into the non-AI classes, just the generic classes as well. But as the recipient of

law students that are coming in as summer associates: the past couple of years, we’ve gotten a group that’s been a little gun-shy, because I think a lot of the schools are really, you know, kind of telling them don’t use it, because it’s, you know, an ethics violation to use it. And then they come here and we expect them to be, you know, essentially ready to go and telling us what it is they’re doing.

Marlene Gebauer (15:33)
Very proficient.

Greg Lambert (15:37)
Now this year, I will say it was a lot better, and especially our fall associates that came in, they had real strategies where they used AI to help them with their studies, to prepare. And in fact, our managing partner just touts the story about, you know, one of the fall associates who told him that she uses NotebookLM.

And she takes all the assignments and she puts them into NotebookLM, and then it creates quizzes for her, and she can then review the work and it better prepares her for it. So are you seeing, especially at the peer institutions, are you seeing this loosening of “AI is an ethics violation” and “you shouldn’t use it”? Or are you seeing the regular professors being a little bit more open to using it?

Cat Moon (16:34)
So I can share a couple of observations. ⁓ I do think that there is a broader attempt to embrace the technology across legal education, right? I do think that that is still probably not evenly distributed. I think a fundamental challenge we have, so Mark and I both have been teaching in the innovation and technology space the entire time we’ve been at Vanderbilt. And so for me, that’s eight and a half years

Mark Williams (16:40)
basically.

Cat Moon (17:03)
now. And throughout that time, even though Vanderbilt has embraced it, and we’ve had incredible support from our dean and the faculty, what we do has been on the side, right? It’s been an ⁓ add-on thing that happens around the edges. And

Cat Moon (17:25)
My observation is that legal education is continuing its embrace of generative AI in that same way for the most part. Of course, you are seeing some schools that are providing some mandatory training. And I know some are partnering with Hot Shot, for example. So all students are taking some kind of, you know, entry level something to get up to speed. What doesn’t…

Cat Moon (17:51)
seem to be happening, or at least if it is, folks aren’t talking about it, so I’m assuming it’s probably not happening, is a real consistency in embracing a

central policy that really gives students guidance and confidence, right? And I’m not speaking about any particular institution when I say this. I’m speaking just kind of about legal education writ large. I do understand that the ABA has recently required syllabi to address.

Cat Moon (18:24)
⁓ the use of generative AI in a given course. And I think it really comes down to approaching this very simply. And so maybe one reason there hasn’t been a more consistent and broad approach is that folks have been thinking about it too much, making it too hard, too complicated, right? So what I hear consistently from my students is they just want guidance in a particular course, right? Like, what are the rules for your course,

Cat Moon (18:54)
what I can or can’t do, how I should approach it, and I’ll follow the rules, right? It’s no different than anything else when it comes to how you approach a course. And so I think that puts the onus back on professors to understand if, and how, they want to integrate and deal with the technology in their course. And I think it remains completely up to the individual person whether they’re going to integrate it or not, right? This isn’t mandatory. I don’t think it should be mandatory.

Cat Moon (19:24)
I think there is room for continuing to teach in ways that do not force the integration of this or any other technology. ⁓ But students are so much more confident, and I would say competent, when they just have some basic guidance. It’s really that simple. ⁓ So that’s kind of my big observation about that. And turning it into some kind of honor code

violation issue, that’s the fear, right? I think that’s what you were seeing: students arriving like, I don’t want to touch it, because at my school that’s an honor code violation. And I don’t think that’s helpful.

Greg Lambert (20:05)
I will say I

heard a really interesting statistic this morning driving into work, and that is that the blue books that you do exams in, the exam blue books, sales of those have increased 80% because we’re going back to, instead of doing tests on the computer, it’s handwritten, because they want to make sure that the students know it and aren’t using the AI to come up with the answers.

Mark Williams (20:15)
Yes. I’ve seen that here. Yeah.

Cat Moon (20:17)
Awesome. Yeah.

Greg Lambert (20:31)
⁓ But rather using it as an assistant throughout the learning process, not for the final result. So good for the blue book company.

Marlene Gebauer (20:40)
That’s giving me flashbacks.

Mark Williams (20:40)
Yeah.

Cat Moon (20:43)
Yeah,

a little bit of PTSD there, but yeah.

Marlene Gebauer (20:47)
So, so Cat, I want to focus on something else that’s going on. ⁓ You launched the inaugural Women + AI 1.0 Summit last year in February, and you’re gearing up for 2.0 ⁓ this February. And I’m excited about this because I think I’ve decided I’m going to treat myself and go. So, ⁓

Cat Moon (21:09)
So good.

Marlene Gebauer (21:11)
Why is it important to have, you know, a women-focused AI summit? ⁓ And what are, like, the top three practical lessons that, you know, you’re carrying forward from last year? How are they shaping what you’re going to do in terms of format and content for this summit?

Cat Moon (21:32)
Thank you for asking about it. And I’m so excited you’re going to join us. I was very excited when I saw your name roll in on the registration list. ⁓ So for a little bit of context, very quickly: the entire initiative, right, it’s really become an initiative. And so Women + AI, we call it WAY for short. ⁓ It is the way. ⁓ So it happened ⁓

Marlene Gebauer (21:38)
You

Cat Moon (22:00)
100% because I was on LinkedIn, in my feed, this time last year, and seeing, you know, things popping into my email inbox, and it was one thing after another: women are falling behind in the use of AI, and women aren’t embracing it, women aren’t leading in it. It was women aren’t, women aren’t, women aren’t.

And so my reaction was either this is a false narrative and that’s not okay or it is grounded in reality and that’s equally not okay. And so my instinct, my response is to say, okay, what can I do about this? The one thing I can do is I can convene. And so that was my solution is I’m going to convene and see who wants to come together and do something about this. Not just talk about it, but do something about it.

And so I conceived of the summit. Of course, the first thing I did was go see if the domain was available. It was. It still is. It is womenandaisummit dot com, dot org. I think I have them both. ⁓ And I just invited people.

Cat Moon (23:13)
And all people,

Cat Moon (23:14)
it is not for women only. We had men, not just Mark and the VAILL team, as well. We had other men who traveled to join us. The purpose became really clear. I’ll share a couple of things about the first one that led to the takeaways, Marlene. And so I will eventually answer your question. The…

Mark Williams (23:18)
That’s good.

Cat Moon (23:36)
The first time, it filled in like a week. I opened registration the day after December to a list of people who had indicated an interest, and within like three days, it was 90% full. Then I opened it to the general public, and so it sold out in like a week. And ⁓

well over 90% of the people who came either drove four or more hours or got on a plane to be there on very short notice, right? The content was co-created. I put out a call, the team put out a call: what do you want to talk about? What do you want to share? What do you want to learn about? We had so many responses that we couldn’t even accommodate everyone who wanted to come. And so that led to events in three other

Cat Moon (24:23)
places around the globe. I went to London and conducted a workshop. I went to Paris and conducted a workshop. And then finally I went to Sydney and did a keynote at an amazing event. Yeah, it is hard work, Greg. Hard, hard work. It is. And so Terri Mottershead

Greg Lambert (24:33)
How awful, how terrible.

Marlene Gebauer (24:36)
Terrible life you have, really.

Greg Lambert (24:40)
Hahaha!

Cat Moon (24:46)
put together an amazing event in Sydney, and so I had the privilege and honor of being able to keynote that event. And what was consistent everywhere I went, with everyone who engaged, is the fact that women

are very engaged, they are very much ⁓ concerned and interested in doing things in this space, and are very interested in reversing this narrative, again, whether it is based in fact or fiction. And I follow that line of

⁓ research, or commentary, and it’s still coming out on a regular basis that women are falling behind. So ⁓ it really was just my interest and a belief that something should be done, and that clearly resonated with a lot of other people, not just women. And so that led to ⁓ having it again. I announced the date at the first event, because the first question everyone was asking, midway through the day, is: are you going to do this again next year?

And so I’m like, yeah, we’ll do it the first February of every year, as long as people want to show up and do this.

So February 7th is when WAY 2.0 is happening at Vanderbilt Law School, and it is cross-disciplinary. Any and all are welcome. It’s not just for people who are AI experts. It is for the AI-curious as well. And so, back to your question, Marlene, some takeaways. One, very clearly, we wanted to maintain the co-creation aspect. We want

the event to really respond to and answer and get into what the people in the room are interested in getting into. Right? Like, what do you want to focus on? What do you want to talk about? What do you want to teach? What do you want to learn? And so we have another call out for the two primary kinds of content that we build the summit around, which I’ll talk about very briefly in just one second. And so that idea of co-creation is happening. ⁓

I have fabulous news for this year, because Practising Law Institute has come on as our ultimate partner, and because of their incredibly generous contribution, we’re going to be able to give a travel stipend to every person who comes to give a talk or facilitate a workshop.

Greg Lambert (27:23)
Nice.

Cat Moon (27:24)
And that

was really important to me. I think you all know, and probably most people listening to this podcast know, that I’m very vocal about folks getting at least their travel expenses covered when they come to present. Of course, this is a nonprofit event, and it really is built by and for the community. But I’m very excited about that, so thank you, PLI. So the content, let me talk about that really quickly. There are two primary

Cat Moon (27:53)
pieces, and this really was just an experiment last year, and we got incredibly positive feedback on it. So the morning is a series of what we call ACE talks, and ACE is an acronym for Action, Community, and Empowerment. So our objective really is for everything that comes out of the summit to prepare us for action, for us to build a community around this, and for everyone to be finding ways to empower themselves and others through

Cat Moon (28:27)
coming together in this way. And so action post-conference, post-summit, is very, very important. The second piece of content is workshops. And so we learned from last year and created a little bit more of a specific format for those. And we already have a half dozen pitches. ⁓ It’s only been open for like a week, maybe. ⁓ So that’s really exciting to see folks coming forward. So the afternoon will be workshops.

The objective is that everyone walks out of a workshop with some kind of actionable artifact, right? So this is very much about how we are building to do something after our time together. And finally, I’ll pick up on something that Mark mentioned when he was talking about his recent governance ⁓ conference, and that is building in very significant and intentional in-the-hall time. And so both morning and afternoon sessions will have

Cat Moon (29:27)
very large breaks in between the formal sessions for that kind of engagement, because, like Mark's event, last year that in-the-hall time led to some really phenomenal collaborations that would not have happened otherwise. Folks met who never would have met

and came together. And so just seeing that happen... yeah. I'm not sure if I answered your question, Marlene.

Marlene Gebauer (29:50)
That’s the best feeling, isn’t it?

You totally answered the question, above and beyond answering the question.

Greg Lambert (30:02)
Yeah, well, Cat, I wanted to follow

up, just briefly, because you had mentioned there was a statement out there that said women are falling behind, and you weren't sure if that was a false narrative or grounded in reality. Have you determined which it is?

Cat Moon (30:23)
Like

what, so there certainly are data to suggest that uptake by women has been slower. There are some studies out there; there are also some studies that are kind of

portending or predicting that if women are falling behind, that's only going to cause the employment gap to grow, the pay gap to grow. They're tying a lot of potentially negative things to this. I don't believe the data is super solid that this is an epidemic. And again,

Cat Moon (31:09)
does it matter, if that's the narrative? Because for me, that's a real question to be asking, right? I mean, we're living in the age of, is anything real? Does it have to be real to be a problem that manifests in the ways that are being predicted? So reshaping that narrative, I think, is incredibly important. I will share one more thing that is a difference, and this is answering your question, Marlene, about what we're doing differently. This year we are going to launch, I believe today,

a new piece of the summit, and that is a short-story fiction challenge. We are issuing a call for people to write a short story, either totally human-written or collaborating with AI, with certain parameters around that. And the point of that is to narrate

the future that we want to see, right? Tell that story that shows that women in the loop are

Mark Williams (32:08)
Okay.

Cat Moon (32:17)
leading, they are creating, they are part of this story. And I'm really excited to see what's going to become of that. I will give another shout out very quickly to the American Arbitration Association, because they are underwriting that contest. There will be two winners: a winner that is solely human-authored, and then a winner for the human-plus-AI collaboration. And

Cat Moon (32:47)
both grand prize winners will be given substantial travel stipends to come to the summit and share their story with summit goers, and we'll use that as a kind of jumping-off point, because you need to be able to see the future to make the future, right? So we want to see the creative future that folks can help us imagine.

Marlene Gebauer (33:16)
That is very cool.

Cat Moon (33:18)
Yeah. Women in the Loop.

Marlene Gebauer (33:20)
Yeah, exactly.

Greg Lambert (33:22)
Mark, I wanted to get back a little bit to AI governance in practice, because I know you guys have gotten into the weeds a lot with law firms and legal departments. I want to come at this from a law firm perspective; I might have a dog in this race. But when it comes to AI governance and

Mark Williams (33:23)
Perfect.

Greg Lambert (33:47)
implementation, what is it that you're seeing that we're not quite getting yet, or that we're behind on, or that we're just not talking about enough?

Mark Williams (34:03)
Because so much of what you're talking about right there is just procurement, and coming up with definable standards about what it means to procure AI systems, which is really getting back to a data question. And this is uneven for different firms. A firm like yours, which has a lot of experience doing this type of work, probably already has a checklist in place that easily brings this in.

But the kinds of firms we're oftentimes working with are smaller, mid-sized to boutique firms, so we kind of have this dual role. Inside the building, we teach all of our students to kind of a big law mindset, because almost all of them are going to big law. That's kind of the path. But a lot of our external work is with much smaller, boutique to mid-sized firms

Greg Lambert (34:49)
They’re all going to be appellate attorneys at big law firms.

Mark Williams (35:00)
who really are just overwhelmed. They're overwhelmed by choice. They have a limited amount of resources. They have one shot at this, to buy maybe one thing. And they're not as well-versed; they don't have somebody whose full-time job it is to do procurement, or to know what that means. So I think it's gotten better, and you've probably seen this too. Like in the early days, I'm putting my…

I'm not a collections librarian anymore, but when I first started this, I was, and I was reading through all of these vendor agreements. The way AI was even being defined in those agreements at that time, it was like, well, I can't even use your product. You've defined AI so liberally in this user agreement, in terms of what I can and can't do, that I couldn't even go run a natural language search. I'm not sure who this is for.

They would basically be worded in a way that I would have to sign over all of my data to them in some way. I've seen that mature over time, on the part of the vendors as well. And, you know, there are just now firms starting to adopt ISO 42001, which is…

Greg Lambert (36:14)
You know what, this

is the second time today that I've heard ISO 42001. So do you mind just talking real quick about what that means?

Mark Williams (36:25)
Yeah. So especially if you think about it from the data security, cybersecurity side, there are ISO standards or SOC 2 standards, sort of checklists you can look for that say, I as a vendor have gone through this level of protocol. It lays out certain things you have to hit about the way you govern, manage, and deploy AI, the same as you would do with data. It's kind of a shorthand for companies to say you've achieved this level of certification.

You can be thought of as a mature handler of this. It's still a very new area, but that's probably the best default standard out there. And as you also know, it's the same thing with data privacy and security: it's a good floor, but it isn't necessarily the ceiling, particularly with AI. What you find is a lot of the best

Greg Lambert (37:04)
Just when I think I’ve got it all figured out, something else happens.

Mark Williams (37:21)
standards, and the ways that we talk about it, whether it's public law standards or these private standards, is that they're really geared towards an older, mature version of machine learning and narrower, deterministic AI systems. They weren't written for this. Especially if you look at the EU AI Act, generative AI was this massive inconvenience to that law.

They were about ready to ship that thing out the door when ChatGPT hit. They really weren't thinking about these general-purpose systems when they originally designed those things, and a lot of the state and local laws are kind of geared that way too. So it goes from being an abstract exercise, but particularly in legal, if you wanted to read some of these laws liberally in certain ways, it's: wait a second, am I going to be the deployer of a high-risk

AI system that is engaged in legal decision making? And what is a legal decision? What is a legal service? So it goes into UPL and all these definitions that we never had any agreement on to begin with. It just creates all of these abstractions that find their way into these procurement contracts. I don't have any answers there, but those are the questions. And just now, over the last eight to 12 months, you're starting to see a little bit of maturity around

what actually is an acceptable standard that somebody can hold up and say, this shows that I can do a good job with your data and with AI, the same way conversations take place around cybersecurity standards. It's early days, but at least we have something to point to. I don't know, I guess, what's been your experience? You see these things as well.

Greg Lambert (39:10)
Well, luckily as the host of the podcast, I don’t have to answer questions. No, go ahead, Marlene. You probably have a better angle on this than I do.

Marlene Gebauer (39:14)
Well, I was actually going to, Greg. Sorry.

Mark Williams (39:14)
Hahahaha.

Marlene Gebauer (39:22)
I was going to

ask a question. Just, Mark, in your experience, you're talking about some of these things that might be hangups. When you're dealing with firms or legal departments, is security the be-all and end-all focus, and how badly is that holding things up?

Mark Williams (39:46)
Yeah, that's more it. I think right now, especially with kind of the long tail of the legal services industry that we often interact with when we do engagements with firms, it's these smaller boutique firms, and they just need help making sense of it. When that Legal Technology Hub market map goes up and there's like 700 different logos on it,

we need a framework for decision making about which of these are worth our time and worth the work for our book of business. We have a limited amount of budget. We have a limited amount of time to get buy-in. How do we develop a framework to narrow down that market map into something that is actionable for us, that we're going to get some ROI out of? But what I have noticed, and I think this is true for all of us, is that it's gone from

what is this, is this a curiosity, is this going to be a thing, to more engagements about: it's here. It's already in everything we do. It's all around us. How do we take it out of novelty and into production? Now, it's still a research and development process. It's still early days. In all my talks (well, I don't do it for my students, because they weren't born then) I put up the aol.com You've Got Mail logo, like,

this is the era of AI that we're in. When we look back on all of these conceptions of what we were talking about, we're going to laugh at how clunky some of these things were. But we don't know what that looks like yet. So you mentioned earlier the NotebookLM make-a-quiz feature. This time last year, even in my course on Coursera, Generative AI for Legal Services, I think…

Greg Lambert (41:09)
under construction.

Mark Williams (41:32)
One of the videos that I had up there before I updated the course was this elaborate how-to-make-a-quiz prompt, this elaborate system prompt that you could give it. And then you look back on that, like, that's hilarious, because literally that prompt now is just this button that you push that says, make me a quiz, right? So, you know, this time last year we all thought we were going to be these prompt engineers, and I think we've moved on to a bit of an understanding that that's not exactly the way it's probably going to play out going forward.

Greg Lambert (42:01)
Now we’re AI agents.

Mark Williams (42:01)
It's funny. I'm doing a training tomorrow, and I had a pre-call with the firm that I'm working with. They use Copilot and the Copilot agents features, and the person I was talking to was like, it felt like it was just a macro to me. And I'm like, yeah, because it probably was a macro that they just called an agent. A lot of the words... talk about AI literacy.

Greg Lambert (42:19)
Yeah, yeah, the word agent in Microsoft is used a little loosely.

Mark Williams (42:25)
I spend a lot of time on AI literacy, but it's less that 101 part now. It's more like, what does it mean when Matthew McConaughey is on Sunday football game commercials selling you AI agents through Salesforce? It's gone from a technical nerd term out of my Pearson artificial intelligence textbook to a buzzword, right? So dispelling some of that is where I've seen most of the conversations taking place.

What is this, and how can we come up with a framework? I can't decide for a firm which of the 700 products on the market map they should choose, but we can help guide that: here are your options, here are the ranges, what's everybody's comfort level, what are your use cases, and where might you go to frame that decision? Larger firms don't necessarily need us to help with that. You've got your own process for that.

Greg Lambert (43:23)
You’d be surprised.

Well, to kind of answer your question while I'm thinking about it, I would say probably the biggest issue we have, not just with information security but data governance, is the advancements in the tools that we already have, tools that have already passed all of these security requirements.

Mark Williams (43:25)
When you're a smaller…

Greg Lambert (43:47)
It's like every week there's a shift in what exactly these tools do. So, you know, now the big thing with Legora and Harvey is this external collaboration with clients and others. Well, that was not something that cleared security muster, right? And this is like a weekly event, that there are new things within existing products. It's not like we're

Marlene Gebauer (44:03)
agreed to.

Mark Williams (44:05)
No, no.

Greg Lambert (44:15)
buying this, it’s being introduced and sometimes we don’t even know it’s being introduced until it’s already, you know, it may be a month ago. And so yeah, I would say that’s one of the biggest things we struggle with is just keeping on top of what we have, let alone trying to figure out what we don’t have.

Mark Williams (44:22)
Yes.

Well, Harvey alone: they introduced a deep research tool, and that deep research tool searches the internet, right? So a lot of these bespoke tools, the whole reason you bought them is because they were a closed universe that kept things in house. That's what you contracted for. Then all of a sudden, at the press of a button, you're out in the outside world. So yeah.

Marlene Gebauer (44:58)
And like, I’ll add

a couple things to that. It's more like, once you get it, then what are the challenges? I will say information literacy and AI literacy are a big challenge: getting people to understand what the tools do and don't do, and what you should and shouldn't use depending on what you're trying to do. And the other thing is

the complexity and broadness of requirements, like outside counsel guidelines and engagement letters. That is a huge challenge, because it just sort of impedes things; you have to check that every time. Okay, if it's a client that says, yeah, go for it, that's great. But I can imagine that's probably not the norm, that it's sort of yes, but.

And, you know, contact us. So then that conversation has to happen each time. Maybe consent has to happen each time. And that definitely slows things down.

Mark Williams (46:06)
Well, we've had discussions in my class. We have a session called AI in the Wild where we bring in various practitioners, and they would describe situations where larger institutional clients come in with their own AI preferences. They're like, we will work with this tool; if you would like to continue, this is where we will be operating. Not everybody is going to have the leverage to do that, but if you're a large

institutional client, okay, well, I guess we are. Yeah.

Greg Lambert (46:37)
We all know who they are. Cat?

Cat Moon (46:39)
Well, I was going to say,

I think clients have been doing that forever, right? Whether it's, we have our portal that we want you to use for documents, whatever it is, right? So I think this takes it to a new level. A couple of responses to Marlene, who was talking about just basic AI literacy.

Mark Williams (46:43)
Great.

Cat Moon (47:01)
I don't think I've said that the entire theme of the Women in AI Summit 2.0 is AI literacy. So shout out to anyone out there who is interested in really getting your head around what that means for your work, for your discipline, for your industry: come and engage in that conversation and action. Another thought that occurred

to me, with respect to just the rapid rate of change, constantly new features being rolled out, not even knowing what those features might be, and how they challenge these pre-existing relationships and that impact: I think everything we're seeing is pointing in the direction that

we're going to have to figure out different ways of working with each other, right? Initially, everyone pointed to our existing ethics rules and said, well, our existing ethics rules can handle this just fine, right? Let's just point to the rules that already exist and figure out how this fits. I don't think it's that simple. And I think the more integrated the technology becomes, and the more we finally accept that this rate of

change is going to continue, that this isn't going to suddenly stop or slow down, the more it's going to force us to really grapple with how this fundamentally changes the way we do things. Yeah.

Marlene Gebauer (48:40)
There's gotta be a streamlined way to handle it, you know, as these changes happen, so that it doesn't shut everything down.

Cat Moon (48:48)
And the other thought that just keeps occurring when any of these issues pops up is that this is going to require a lot more communication, collaboration, and transparency. And I think, again,

as this technology rises to prominence and is just infused everywhere, it really is showing all the cracks in how we have been working for so long, right? I will not say I'm a techno-optimist, but I do choose to believe that we have this opportunity, right, to

fix some things, to do some things better in this moment, and to make really good choices. But we are at this intersection that's gonna require us to be making a lot more choices than we have in a long time, maybe ever.

Cat Moon (49:49)
So I choose to find that really exciting, right? One of the lab's mottos is replace fear with curiosity, right? And so, yeah, this is a moment for really radical curiosity.

Marlene Gebauer (50:07)
Well, speaking of curiosity, you both have been testing new approaches to AI literacy and governance and training inside Vanderbilt Law, just what we've been talking about. But I'm very curious: what has worked, what things have you done that worked, and also what hasn't worked?

And based on that learning opportunity, how are you doubling down for the next academic cycle?

Mark Williams (50:38)
The what worked and what hasn't worked are kind of related. When we first started, like, Greg, you came and talked to our first AI and Legal Practice class. And that didn't work. No, it didn't work. But I think about what limitations we had then versus what we have now. We just didn't have…

Marlene Gebauer (50:49)
And that definitely didn’t work.

Cat Moon (50:52)
It was awesome.

Greg Lambert (50:53)
Yeah, let me apologize now.

Mark Williams (51:02)
Everything was kind of theory. We didn't have a lot of tools to show them outside of regular old ChatGPT. Whereas the academic market for getting these tools into the hands of students, and letting them safely experiment with them, safely fail with them, just in the last year we've been able to open that up so much more. So we've been able to make it much more concrete and really show them: this is the actual

tool that you will have, and the training that you will get, when you are in your firm. Whereas, you know, the first year we were doing the best we could, and we did a great job, but you can only take it so far when I couldn't even get Harvey on the phone. Now they've given it to us; now everybody's throwing these tools at us. So a big part of it was that change. And what's been successful is that we,

with the help of our other VAILL collaborators, Emily Pavleria and Kyle Turner, have a very experienced team who knows how to intake these new tools, how to frame them in the right way, how to deploy them throughout an institution, particularly with our students, and how to develop the right measurables and experiments for them to get some experience with. That was not

clear cut when we first started, partly because, out of practicality, we didn't have access to the things we needed, and partly because some of them didn't exist yet. So we had to do some experimentation with that. And now I think we are really kind of a well-oiled machine on how to intake, deploy, track, measure, and iterate, in a way that

It took a little learning, but I think we’re really kind of a mature group now when it comes to doing that.

Cat Moon (52:58)
So I'll add, I agree with Mark completely, and a huge shout out to Emily and Kyle, because they are tremendous members of the VAILL team. I believe one of our superpowers is the fact that we've had the Program on Law and Innovation since 2016, and so rolling out the lab was an iteration on something we were already doing. I know we were ahead of the curve because of that, and having that

foundation, having that support has made it a lot easier for us to move very quickly with our experimentation.

Some things that we know work are the foundation of everything that Mark just described. And that really is our belief that the mindsets are the foundation, before you layer any technology or anything else on top. How are you approaching this work? How are we approaching how we design what the lab is going to do, what the curriculum is going to do? How do we approach getting the students,

Cat Moon (54:03)
giving them the foundation that they need? It all comes down to mindsets. And so that has really been affirmed: we've been able to move quickly, to be agile, to be responsive, to be iterative. And so that's very, very affirming. One thing early on, we were like, we're going to build stuff; we're a lab, and that means you build stuff. Yes and no. So

I think one thing we've learned is that it is very hard to build, deploy, and maintain something. And because of the nature of the institution in which we work, and our primary mission, which is to educate the students in the building and then, outward, the profession as a whole with respect to this technology, we are primarily a learning lab, a learning and teaching lab. And

it is not really practical to think that we're going to spin up,

you know, the next Harvey for access to justice, for example, as much as we would love to do that kind of work. And not to say that there isn't a role for a law school lab to do something like that. You can look at Suffolk: they've been building and shipping products, but they have a structure that's designed to support that, right? And so us figuring out where our superpowers are, what we can do, and building from there

Cat Moon (55:37)
has been a big part of our lesson. So what we do is partner with organizations who want to build, and we contribute our expertise and the technological resources we have through Vanderbilt to build with others, instead of being the entity that builds the thing, right? Because, and I know you all understand, and probably everyone listening

Marlene Gebauer (56:02)
combining superpowers.

Cat Moon (56:07)
to this podcast understands, that it is one thing to build something, launch it, and put it out into the world. It is a completely different thing to maintain the thing you've built, and that is hard and expensive, even still now, right? It is becoming easier and easier to launch the thing, but maintaining it, and

Mark Williams (56:19)
Yes, that’s the rub.

Cat Moon (56:35)
Yeah, it’s super, super hard.

Mark Williams (56:37)
Yeah, so

I would add on to that, because, I mean, we very much do build all the time. Just yesterday I was messing around with Claude Code all day; I wasted half my day building with Claude Code when I really should have been focusing on some other things. We do have a lot of projects where we build, but I think our mindset about that is exactly what Cat said: building as a vehicle for learning,

as a vehicle for putting students in the room from different backgrounds, from law backgrounds, data science backgrounds, computer science backgrounds, and forcing them to work on projects together and develop a common language. But yeah, one of the first ones where we figured that out was a Tennessee end-of-life planning tool we built. And it's great, we built it, but do we really want to be in the business of deploying and maintaining and ensuring the robustness of this thing over time?

Probably not. But that doesn't mean that building as a vehicle for learning isn't valuable. One of the magical things about generative AI is that the barriers to access are coming down, that somebody without a technical background can reach out and touch those concepts in a really dynamic way, writing little to no code. That's super valuable. But maybe, you know, we'll leave the VC fundraising

to others, at least for now. Yeah.

Greg Lambert (58:02)
I know there seems to be a lot of money out there. Grab it now.

Marlene Gebauer (58:04)
Yeah.

Cat Moon (58:05)
I know.

Well, I'll add really quickly. I do want to point out that Mark has collaborated with our Data Science Institute to build some things, right? You have your AI regulatory tracker. The lab is creating things that really fall within our remit and are maintainable, I guess. And we're working with students to build

Mark Williams (58:17)
Yes.

Yeah.

Cat Moon (58:34)
what we're calling The List. It's going to be a curated, interactive, and of course GenAI-powered repository of learning resources about generative AI. So if you need to get a group, a practice group, up to speed,

then you can send them to The List, and they can complete this interactive interview and end up with a playlist for learning about generative AI. That is going to be a tool. It's in production now. Students are there;

they're curating the content, and we're going to build the tool. Something like that we feel comfortable getting our hands around, using it primarily as a learning vehicle. But it also sits squarely within our mission, and I think the tools that Mark has been working on fall within that remit as well. So that's definitely part of the lesson, right? What are the things we can do that really leverage our superpowers, that extend what we have out into the world? How can we

How can we help?

Greg Lambert (59:49)
Well, Cat, that leads right into my next question, before we get to the crystal ball question. We've been asking our guests this recently: what are some resources, like one or two resources that you go to, to help you keep up with this constant shifting of everything that's going on in legal and AI?

Cat, you wanna start?

Cat Moon (1:00:21)
So probably like many folks, I have different little, I guess now we would call them agents, out in the world working for me, right? I have Claude doing something, I have Gemini doing something, I have ChatGPT doing something, to go scan the environment and bring things back to me on a regular basis. So certainly that: I'm using the tools themselves to kind of harvest.

Cat Moon (1:00:49)
You know, I still love a good email newsletter. So I have a number of email newsletters that land in my inbox, and I will say I most enjoy and get the most out of the

resources that are not legal-specific, right? I'm really trying to figure out where the puck is going, and legal is not the place to be looking for that. We'll see what's going on over here, and then eventually it will make its way into law. With that said, obviously just keeping up with what's going on in the landscape of GenAI and legal tech is very important, and the folks at Legal Technology Hub do a fantastic job of, I think, keeping

folks up to date with that information at really accessible levels. So it's kind of a broad swath I'm looking across. The lab also has a Substack, The AI of Law, at vaill.substack.com, and we share through that as well. Yeah, we recently got a shout out on the TaxProf Blog for that Substack, so.

Greg Lambert (1:02:04)
We’ll give you a shout out here too.

Marlene Gebauer (1:02:06)
We’ll have links in the show notes too.

Cat Moon (1:02:08)
Yay!

Greg Lambert (1:02:09)
Mark, how about you?

Mark Williams (1:02:11)
Yeah, I think, similar to Cat, I'll talk more about the non-legal ones, because the legal ones I think we can all guess. But one I do like a lot is The Information, which is a really good sort of inside-baseball read on the business of tech. It's a subscription service, but we have it through the law school. And then, I have way too many Substacks on AI. That's the answer.

I have Understanding AI by Timothy Lee, Ethan Mollick's Substack, obviously, and SemiAnalysis, which is really nerdy, deep-cut semiconductor and computer chip coverage. I've been radicalized: I read Chip War a couple of years ago, and now I just see the entire world order through the prism of semiconductors and computer chips.

So that's one. Even in my classes, when we talk about AI, we talk about the whole stack, from the application layer all the way down to semiconductors and, you know, TSMC and NVIDIA; these things are all related. And then a lot of podcasts as well. I'm a big AI podcast person, so a shout out to my friend Kevin Frazier and his podcast, Scaling Laws, a

podcast, which is really like deep level AI nerdery. so I’m really kind of in the weeds and then, and then I kind of bring it back. It’s kind of like an accordion. Like I go really broad and then I come really, like really way back. ⁓ But it’s, ⁓ it’s all, yeah, I probably have too many things that I’m consuming because it’s, ⁓ as they say with, with models, you know, you can overfit to your training dad. feel like sometimes my brain is maybe overfit to


Mark Williams (1:04:11)
AI training data a little bit. So maybe I’ve got to diversify. I don’t know.

Marlene Gebauer (1:04:19)
All right, well, let’s put all of the learning you guys are gathering from these great resources to the test. We’re looking at our crystal ball question; I know you guys are familiar with that. In the next few years, how are law schools going to prepare students for the transition to practice? Is there going to be more blended technology? And what do you think firms are going to be wanting? Because honestly, I’m very curious as to what firms are going to be wanting myself.

Mark Williams (1:04:46)
Yeah. Okay. I’ll let you go first.

Cat Moon (1:04:51)
So I’m gonna talk about the future I want to see. I really believe this is, again, a moment to do some things better when we think about how legal education is preparing lawyers for the time they’re entering. And this invites radical collaboration, right? So it’s not just firms sitting over here demanding something from legal educators. And it’s not legal educators saying, this is what we’re willing to do, the rest is up to you, right?

And that’s kind of where we’ve been. And the beauty of it is we don’t have to be, right? I think we’ve been focused on the wrong things. And so this is really an amazing opportunity, as the practice is figuring out what we really need from a human lawyer going forward, right? There are a lot of unanswered questions there.

Greg Lambert (1:05:24)
We’re not a trade school, we’re a profession.

Marlene Gebauer (1:05:27)
Ha

Cat Moon (1:05:48)
So that’s gonna shift. And as legal education is figuring out what we are best prepared and suited to provide and what we need to change, there’s so much room in the middle, right, for the stakeholders, and I would also say students. So often they seem kind of left out of these conversations, even though they’re the ones, right, paying these enormous tuition bills. So I remain…

optimistic. I think this is an incredible opportunity for the stakeholders to come together and really co-create what the future of legal education is going to look like.

I don’t know what that is. But I also believe we are entering this era of iteration, and that’s going to apply to legal education as well, right? We can’t be the one silo in this dynamic environment that we’re in. We don’t want to be, I don’t think.


Greg Lambert (1:06:34)
Yeah.

Marlene Gebauer (1:06:58)
Okay, Mark.

Mark Williams (1:06:59)
Yeah. Well, I’ll say this: we wouldn’t have to have a lab if we knew the answer to some of these questions. I’ve been encouraged. I have the luxury of teaching an AI class that second- and third-year law students choose to be in. So they’re coming at it from a self-selecting sort of mindset, but the degree to which they are hypervigilant, aware of their own learning, and protective of their own learning…


Mark Williams (1:07:27)
Being aware of cognitive offload, and asking, what things do I want to, what things do I need to sit with and be bad at for a while in order to master this learning curve, so that I can then take that and use AI with it later, when I actually know what I’m doing. I won’t say I’m certain that shortcuts aren’t being taken as well, but I’ve been encouraged by the level of thoughtfulness that I’ve seen from my students.

The thing that’s interesting to me, which is taking place outside of law school but directly impacts it, is that we are messing with that guild model of knowledge work. We are taking away a lot of those foundational bricklaying skills that used to be done by first- and second-year associates; that work is increasingly getting commoditized and shrunk down to zero. What do we do to get those students up to that level of skill?

And at the same time, they’re asked to make decisions on what their career path is going to be before their first wave of first-semester exam grades are out; the pressures for them to decide what they want to do and what they want to specialize in are happening sooner. So they’re facing that. They’re facing this really transitional platform shift in a technology and the way it’s being diffused. So they’ve got that.

And then I think for the first time, in a very real way, we’re seeing the ownership rules, the law firm model, potentially changing. Even here in Tennessee, we’re having some discussions about alternative ownership; private equity is kind of starting to knock on the door. We’re seeing AI-first law firms starting to pop up. What does that mean? How does skill building play into that? So I don’t have any answers, but I think those are the questions we’re going to be asking and the things we’re going to be thinking about. A lot of those are external to legal education, but they are bearing directly down on it and are going to shape how we think about it. So yeah, those are the challenges. And again, we wouldn’t have to have a lab if we knew all the answers to those, but those are the things we’re at least talking about. And I hope that we’re forcing those discussions in a more direct way throughout the way we educate our law students, I would say.

Greg Lambert (1:09:45)
It’s just good job security for the folks at the lab.

Cat Moon (1:09:48)
Yeah, look, everything is going to be a lab at some point, right? And I sadly had to remove the tiara. I think I need a larger size; I stretched it a little bit. So pretty. Anyhow, sorry.

Greg Lambert (1:09:51)
All right. Well, Cat, yeah.

You’ve gotten smarter since we gave it to you.

Well, thanks for wearing it. Mark Williams and Cat Moon from Vanderbilt, thank you very much for coming back on the show. It’s always a pleasure to talk to you guys.

Marlene Gebauer (1:10:16)
Yeah, thank you both.

Mark Williams (1:10:16)
Thank you for having us.

Cat Moon (1:10:17)
Thank you for having us.

Marlene Gebauer (1:10:19)
And of course, thanks to all of you, our listeners, for taking the time to listen to The Geek in Review podcast. If you enjoyed the show, share it with a colleague. We’d love to hear from you, so reach out to us on LinkedIn and Bluesky.

Greg Lambert (1:10:31)
And Cat and Mark, are there any particular places that you would point listeners to if they want to learn more about what you guys are doing?

Cat Moon (1:10:39)
So we have a website, ailawlab.org, and we have the Substack, VAILL v-a-i-l-l.substack.com, and we’ll share links to those in the show notes.

Marlene Gebauer (1:10:54)
Great. And as always, the music you hear is from Jerry David DeCicca. Thank you very much, Jerry.

Greg Lambert (1:10:59)
All right, thanks everyone.

Marlene Gebauer (1:11:01)
Bye.

Mark Williams (1:11:01)
Thanks.

Cat Moon (1:11:01)
Thanks!

 

Greg Lambert


Librarian-Lawyer-Knowledge Management-Competitive Analysis-Computer Programmer…. I’ve taken the Renaissance Man approach to working in the legal industry and have found it very rewarding. My Modus Operandi is to look at unrelated items and create a process that can tie those items together. The overall goal is to make the resulting information better than the individual parts that make it up.

Marlene Gebauer
  • Posted in:
    Technology
  • Blog:
    3 Geeks and a Law Blog
  • Organization:
    3 Geeks
