Another class action lawsuit, Cruz v. Fireflies.AI Corp., puts a spotlight on the potential legal risks of AI meeting assistants. The complaint alleges that the Fireflies tool records, transcribes, analyzes, and stores the voices of meeting participants, including participants who are not Fireflies users, without the notice, written consent, and retention safeguards required by Illinois' Biometric Information Privacy Act ("BIPA"). It points to Fireflies' "speaker recognition" functionality and contends that the product creates and retains voiceprints, which BIPA covers, while lacking a publicly available retention and destruction policy and failing to inform meeting participants about biometric collection. The case is a useful preview of the issues regulators and plaintiffs may raise around AI transcription and summarization. We have previously covered other lawsuits involving these issues here.

Editor’s Note: Human rights are rapidly moving from philosophical principles to enforceable obligations and concrete expectations in AI governance. With the United Nations and the Council of Europe setting the tone, cybersecurity, information governance, and eDiscovery professionals must now treat rights‑based compliance as a core design requirement. This article identifies the emerging international architecture—from the

The Ambition Effect

The prevailing narrative surrounding Generative AI in the legal sector is one of unprecedented efficiency. The sales pitch is seductive in its simplicity: automate routine drafting and research, compress hours into minutes, and liberate attorneys for higher-value strategic thinking.

Yet as the initial wave of adoption settles, a distinct counter-narrative is emerging