We typed the following question into a simple AI prompt: “What is the difference between admonish and deter?” The response started with “The primary difference between admonish and deter lies in their intent and timing: admonishing is a form of active, often verbal correction or warning regarding past or present behavior, while deterring is an act…
Drug & Device Law
The definitive source for intelligent commentary on the law that matters for drug and device product liability cases
Latest from Drug & Device Law
Courts Get Proactive on AI: Disclosure, Certification, and Consequences
Artificial intelligence isn’t going anywhere. Experts use it. Opposing counsel use it. Clients use it – and want their lawyers to use it too. It is becoming an increasingly standard legal research, drafting, and case strategy tool. But as a couple of our recent posts (here and here) have pointed out, AI is far…
Guest Post − AI Enters the Exam Room: Product Liability Implications of AI Health Tools
Today’s guest post is by Reed Smith’s Jamie Lanphear. She has long been interested in tech issues, and particularly in how they might intersect with product liability. This post examines product liability implications of using artificial intelligence (“AI”) for medical purposes. It’s a fascinating subject, and as always our guest posters deserve 100%…
AI Hallucinations in Court: A Case Study in How Bad It Can Get
SDNY Holds that Defendant AI Inquiries Made Without Counsel’s Input Were Not Shielded by Attorney-Client Privilege or Work Product Doctrine
We’ve become aware that some clients are using artificial intelligence (AI) to summarize or analyze things like complaints, briefs, internal documents, or even – horror of horrors! – law firm bills. If the client performing these tasks is an in-house lawyer, such work might be protected by the attorney-client privilege or work product doctrine…
A Modest Proposal Concerning AI Hallucinations
We added a new site to our blogroll recently – “AI Hallucination Cases,” which describes itself as:
This database tracks legal decisions in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. . . . While seeking to be exhaustive (972 cases identified so far), it is…
Guest Post: What the New Reference Manual for Scientific Evidence Teaches Us About AI in the Courtroom
Today’s guest post is from Nick Dellefave, an up-and-coming Holland & Knight litigator. The Blog has rolled out a few posts on the latest edition of the Reference Manual on Scientific Evidence. Nick adds to this opus with a dive into the intersection between scientific evidence, the role of trial judges,…
No Physical Injury, No Damages, Still No Medical Monitoring Class
Sometimes we feel as though we have gone back in time. The Super Bowl is in San Francisco this week, as it was 10 years ago, although this time around, the atrium lobby of our building has been converted into an ESPN studio. We are the temporary home of the Rich Eisen Show, with the…
Guest Post – The AI LEAD Act: A Step Toward Regulating AI Product Liability in the United States
Today’s guest post is by Reed Smith’s Jamie Lanphear. Like Bexis, she follows tech issues as they apply to product liability litigation. In this post she discusses a pro-plaintiff piece of legislation recently introduced in Congress that would overturn the current majority rule that electronic data is not considered a “product” for purposes…
Another Shameless Plug – Calling All Life Sciences In-House Counsel: Wrap Up Your 2025 CLE Requirements with Us
If you’re an in-house counsel working in the pharmaceutical, biotech, medical device, or digital health space (and still looking to complete CLE hours before year-end), we invite you to join Reed Smith’s annual Virtual Life Sciences CLE Week, taking place November 3–7, 2025.
This week-long event will feature a series of live webinars on the…