Today’s guest post is by Reed Smith’s Jamie Lanphear. She has long been interested in tech issues, and particularly in how they might intersect with product liability. This post examines product liability implications of using artificial intelligence (“AI”) for medical purposes. It’s a fascinating subject, and as always our guest posters deserve 100% of the credit.
Danny Decker: Making The Client The Hero Of Your Marketing
In this episode, Steve Fretzin and Danny Decker discuss:
- Putting the client at the center
- Building your brand around real client needs
- Turning client questions into content
- Taking action and refining along the way
Key Takeaways:
- Strong legal marketing shifts the spotlight away from the lawyer and toward the client’s problems, emotions, and desired outcomes.
…
From Document Review to Fact Intelligence, Gregory Mostyn on How Wexler.ai Is Reshaping Litigation
This week on The Geek in Review, we talk with Gregory Mostyn, CEO of Wexler.ai, about how his company is building a sharper form of legal AI for litigation. In a market crowded with broad platforms that aim to handle every legal task at once, Mostyn describes Wexler as a focused system built for one of the hardest problems in disputes, understanding the facts. He shares how the idea grew from watching his father, a judge, carry home stacks of ring binders and spend late nights reviewing case materials by hand. That early picture of legal work, heavy with paper and pressure, became the spark for a company aimed at helping lawyers work through massive records with more depth, speed, and precision.
A central idea in the conversation is Wexler’s view that the most useful unit of analysis in litigation is not the document, but the fact. Mostyn explains that lawyers are often handed a mountain of emails, messages, filings, and exhibits, yet what they need is a clear understanding of what happened, why it matters, and where the pressure points sit. Wexler is designed to pull out events, inconsistencies, and supporting details from that record so litigators are working from a factual map rather than a pile of files. That shift matters because disputes are rarely neat. Important evidence may be tucked inside an offhand message, a late footnote, or an exchange written in vague, coded language. Wexler’s aim is to turn that mess into something a trial team can use to shape strategy.
Mostyn also walks through the mechanics that separate Wexler from more general legal AI products. He describes a detailed fact extraction pipeline that processes unstructured material and turns it into structured data before the system reasons over it. That design helps Wexler deal with the disorder of litigation, where timelines blur, people contradict each other, and key details are easy to miss. He also points to the scale of the platform, noting that it handles large document sets and supports work such as deposition preparation, trial preparation, summary judgment briefing, and early case assessment. One of the more striking features is real-time fact checking during depositions, where the platform helps lawyers spot contradictions in testimony as the questioning unfolds. The effect is less like using a search box and more like working with a tireless junior team member who has read the whole file.
Trust, accuracy, and restraint are another major part of the discussion. Mostyn is careful not to oversell what AI can do. He openly states that no system is perfect, yet he argues that Wexler reduces risk by staying inside the record given to it. It does not search the internet, does not drift into outside material, and ties its outputs back to specific text in the source documents. That discipline is important in litigation, where a made-up citation or invented fact is more than embarrassing; it is dangerous. Mostyn presents Wexler as a tool that helps lawyers verify, question, and sharpen their understanding of the case. The result is less time spent slogging through repetitive review and more time spent thinking about how to use the facts in a meaningful way.
The conversation closes on a bigger question about where this kind of technology leads the profession. Mostyn believes that as AI takes on more of the burden of document review and fact development, the value of human lawyering rises in other areas. Strategy, advocacy, witness preparation, courtroom performance, and judgment all become more important when the groundwork is assembled faster and more thoroughly. He also suggests that clients are beginning to care less about how many hours were spent reviewing documents and more about whether their lawyers are prepared, informed, and effective. For listeners interested in litigation, legal AI, and the next stage of law firm economics, this episode offers a thoughtful look at a company betting that the future belongs to tools built for depth, discipline, and the hard realities of dispute work.
Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack
[Special Thanks to Legal Technology Hub for sponsoring this episode.]
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
Transcript:
Code of War: How AI Firms Are Rewriting the Rules of War and What That Means for International Criminal Law
Efthimia (Mariefaye) Bechrakis, Esq. On March 26, 2026, a federal judge blocked the Pentagon from branding Anthropic a “supply chain risk.” The ruling does more than grant an early legal victory for Anthropic. It exposes a deeper structural shift in how the boundaries of military power are being negotiated at a moment when contemporary warfare is increasingly…
Fresh From the Conference Floor: IAPP Global Privacy Summit 2026
I just returned from the IAPP Global Privacy Summit, and there was a lot to absorb. Sessions covered AI governance, children’s safety, advertising compliance, international data law, and more.
We learned SO much at IAPP that we can’t possibly fit it all into this one newsletter, so stay tuned for part 2…
A moral perspective on the Anthropic lawsuit against the Department of War!
An opinion piece on WashingtonPost.com by Thomas Rid, professor of strategic studies at Johns Hopkins University, opens with: “A cultural rift is opening between those who see artificial intelligence as a mere tool for humans, and those who see AI as a set of self-aware, sentient digital minds that perhaps will one day be our replacement.”…
Nonprofit Resources of the Week – 4/4/26
Nonprofit Resources of the Week curates timely articles, tools, and commentary to help nonprofit organizations, their leaders, and their advisors stay informed about legal developments, sector trends, and emerging issues affecting the nonprofit and philanthropic ecosystem, including those related to equity, climate change, and resilience. The series also seeks to share tools, perspectives, and sources…
Q1 2026: A Record Quarter, a Compressed Market, and a Window That Won’t Stay Open
KEY TAKEAWAYS
- Q1 2026 broke every VC record. Crunchbase reports $300 billion deployed globally, a 150% jump year over year and nearly 70% of the total VC deployed in all of 2025.
- AI captured 80% of venture dollars. $242 billion went to AI startups. Four of the five largest VC rounds in history closed in Q1. Jefferies tracks the AI
…
Amazon and OpenAI Push Ahead While Amadeus Scores a Court Win
Good Saturday afternoon from sunny Seattle . . . Our weekly Online Travel Update for the week ending Friday, April 3, 2026, is below. This week’s Update features a variety of stories, including an update from Amazon and its plans for Alexa Plus and travel. Enjoy.
- Amazon Seeks to Collapse Travel Discovery and Booking into
…
Guest Post: Private Credit – Risky Business?
James Sterling
Mike Newham
In the following guest post, James Sterling, Claims Counsel, Euclid Financial & Professional Risks, and Mike Newham, Partner, RPC, consider the economic and underwriting risks associated with the private credit markets. A version of this article previously was published on LinkedIn and on Euclid’s website. My thanks to James and Mike for allowing me to publish their article as a guest post on this site. Here is the authors’ article.