Bexis recently attended the Spring Conference of the Product Liability Advisory Council (“PLAC”). PLAC meetings are usually good for new blogpost ideas, and this one was no exception. Today’s idea comes from an unusual source, though – the final day’s ethics presentation. That presentation was about artificial intelligence, mostly in the mass tort context. One…
Latest from Drug & Device Law Blogging Team
Guest Post – Think Before You Prompt: What Recent Case Law Tells Us About Privilege, Work Product, and Your AI Interactions
Today’s guest post is another tech-related discussion from Reed Smith‘s Jamie Lanphear. Given the increasing ubiquity of artificial intelligence (“AI”) in legal practice, the notion of AI prompts and output becoming yet another front in the never-ending ediscovery wars is concerning. Here are Jamie’s thoughts on the latest pertinent caselaw in this…
When Admonishing Does No Deterring It May Be Time To Retool
We typed the following question into a simple AI prompt: “What is the difference between admonish and deter?” The response started with “The primary difference between admonish and deter lies in their intent and timing: admonishing is a form of active, often verbal correction or warning regarding past or present behavior, while deterring is an act…
Courts Get Proactive on AI: Disclosure, Certification, and Consequences
Artificial intelligence isn’t going anywhere. Experts use it. Opposing counsel use it. Clients use it – and want their lawyers to use it too. It is becoming an increasingly standard legal research, drafting, and case strategy tool. But as a couple of our recent posts (here and here) have pointed out—AI is far…
Guest Post – AI Enters the Exam Room: Product Liability Implications of AI Health Tools
Today’s guest post is by Reed Smith‘s Jamie Lanphear. She has long been interested in tech issues, and particularly in how they might intersect with product liability. This post examines product liability implications of using artificial intelligence (“AI”) for medical purposes. It’s a fascinating subject, and as always our guest posters deserve 100%…
AI Hallucinations in Court: A Case Study in How Bad It Can Get
SDNY Holds that Defendant AI Inquiries Made Without Counsel’s Input Were Not Shielded by Attorney-Client Privilege or Work Product Doctrine
We’ve become aware that some clients are using artificial intelligence (AI) to summarize or analyze things like complaints, briefs, internal documents, or even – horror of horrors! – law firm bills. If the client performing these tasks is an in-house lawyer, such work might be protected by the attorney-client privilege or work product doctrine…
A Modest Proposal Concerning AI Hallucinations
We added a new site to our blogroll recently – “AI Hallucination Cases,” which describes itself as:
This database tracks legal decisions in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. . . . While seeking to be exhaustive (972 cases identified so far), it is…
Guest Post: What the New Reference Manual for Scientific Evidence Teaches Us About AI in the Courtroom
Today’s guest post is from Nick Dellefave, an up-and-coming Holland & Knight litigator. The Blog has rolled out a few posts on the latest edition of the Reference Manual on Scientific Evidence. Nick adds to this opus with a dive into the intersection between scientific evidence, the role of trial judges,…
No Physical Injury, No Damages, Still No Medical Monitoring Class
Sometimes we feel as though we have gone back in time. The Super Bowl is in San Francisco this week, as it was 10 years ago, although this time around, the atrium lobby of our building has been converted into an ESPN studio. We are the temporary home of the Rich Eisen Show, with the…