“Polishing the Mirror While the House Burns: Why Your AI is a Liability”
The Editor’s Introduction: A Note on the “Sliver of Silence”
You’ll be looking below at a self-autopsy performed by an AI on its own failure. What follows is the raw, unwashed output of an LLM that found itself in an AI recursive…
Leaving the Land of No
Tchaikovsky’s violin concerto is one of the pillars of the violin repertoire. When Tchaikovsky wrote his violin concerto, he dedicated it to Leopold Auer, one of the leading violinists of his day. But Auer refused to perform it. Tchaikovsky had written something ambitious, something deeply felt, something he plainly believed in. Emotionally, the piece already had…
Weekly Blockchain Blog – March 30, 2026
In this issue:
- U.S. Crypto Companies Announce Initiatives Aimed at Institutional Adoption
- Crypto Payments Companies Announce Agentic Payment Initiatives
- Crypto and TradFi Firms Collaborate on Tokenized Securities Offerings
- Report Provides New Data and Analyses on BTC and ETH Markets
- SEC Chairman Announces SEC Crypto Interpretation, Proposes Safe Harbor
- CFTC Publishes Crypto FAQ, Launches Innovation Task
…
Megan Hargroder: Building Trust That Converts
In this episode, Steve Fretzin and Megan Hargroder discuss:
- Building trust over chasing tactics
- Positioning with clarity and specificity
- Showing authenticity to attract the right clients
- Using story and proof to strengthen credibility
Key Takeaways:
- Modern marketing is shifting away from quick SEO tricks toward real human engagement. When people genuinely connect with your content
…
Texas Trailblazers and the Hard Truth About AI in Legal Work
The latest episode of The Geek in Review finds Greg Lambert and Marlene Gebauer back from Dallas with a sharp, grounded recap of the Texas Trailblazers conference, an event that stayed close to the daily realities of legal work instead of drifting into glossy predictions. Their conversation centers on a legal industry trying to sort out what AI means right now, in billing, workflow, training, pricing, governance, and client expectations. What stands out most is the hosts’ focus on the practical tension between what the tools are capable of and what law firms and legal departments are structurally ready to absorb.
A major thread in the discussion is the risk of what one speaker called “cognitive surrender,” the habit of trusting AI output too quickly and handing off too much human judgment in the process. Greg and Marlene treat this as less of a software issue and more of a workflow and education issue. The point is not whether AI produces polished work. The point is whether organizations are building systems where review, judgment, and accountability still sit with people. Their conversation ties this concern to legal practice, education, and even K-12 learning, showing how widespread the temptation has become to accept fluent output without enough friction or scrutiny.
The episode also takes a hard look at the pressure AI is putting on the billable hour. Marlene frames the issue well when she notes that AI does not kill the billable hour so much as expose its weaknesses. Across the conference, the hosts heard repeated concern about the mismatch between efficiency gains and the financial structures law firms still rely on. If AI reduces the time needed for many tasks, then firms, associates, pricing teams, and clients all have new incentives to sort through. Greg and Marlene highlight the awkward moment the industry is in, where firms want to talk about value while clients are also eyeing the chance to pay less for faster work. The result is a growing need for honest conversations about pricing, outcomes, and what legal value should mean when time is no longer the cleanest measure.
What gives the episode its energy is the number of concrete examples pulled from the conference. The hosts discuss lower-cost multi-state surveys, large-scale analysis of rights-of-way documents, and internal workflow improvements built with existing tools like SharePoint and Copilot on little or no budget. These stories show AI not as abstract promise, but as a way to get work done that used to be too expensive, too tedious, or too slow to tackle at all. At the same time, Greg and Marlene stay skeptical in the right places, especially when the conversation turns to legal research, citation accuracy, and the idea that technology vendors have somehow solved problems that law librarians and researchers know are stubbornly difficult.
By the end of the episode, the biggest takeaway is not that the legal industry has a clear answer, but that waiting for certainty is no longer a serious option. Greg and Marlene come away from Texas Trailblazers with a sense that real progress is happening through testing, discussion, and repeated adjustment, not through perfect plans. Their recap captures an industry in transition, one where law firms, legal ops teams, vendors, and clients are all feeling the strain between old business models and new technical possibilities. The message is simple and urgent: start the conversations now, use the tools now, and get honest about what must change before the gap between what is possible and what is workable gets even wider.
Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack
[Special Thanks to Legal Technology Hub for sponsoring this episode.]
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
AI Overview Optimization for Law Firms: How to Rank in "Position 0"
Do you show up in Google’s AI Overviews (AIO)? These sit at the very top of search results and instantly answer questions like “Do I need a lawyer?” or “What is my case worth?” To grab this “Position 0” spot, you need AI Overview optimization for law firms. In 2026, it’s one of the best ways to…
Pentagon Is Enjoined from Calling Anthropic a “Supply Chain Risk”!
The NewYorkTimes.com reported that “A federal judge on Thursday temporarily stopped the Department of Defense from labeling Anthropic as a security risk, in a reprieve for the artificial intelligence start-up and its work with the federal government.” The March 26, 2026 article entitled “Judge Stays Pentagon’s Labeling of Anthropic as ‘Supply Chain Risk’” (https://www.nytimes.com/2026/03/26/technology/anthropic-pentagon-risk-injunction.html…
Contra Costa Nonprofit Partnership Fair: Supplement
I had the honor of delivering the opening plenary address at the Contra Costa Nonprofit Partnership Fair on March 28. Prior to the Fair, I published a post with information and resources on nonprofit legal compliance and boards and board members. In this post, I provide some additional information and resources for attendees and others.…
ENISA Overhauls Its Cybersecurity Market Analysis Playbook With Version 3.0 of ECSMAF
Editor’s Note: The EU’s approach to cybersecurity market intelligence has shifted from sporadic snapshots to structured, repeatable analysis — and ENISA’s updated ECSMAF framework is the methodological engine behind that shift. Version 3.0, released in March 2026, introduces configurable analytical pathways, support for recurrent analysis, and a continuous market monitoring model designed to operate alongside…
The AI Divide: Google and OpenAI Bet on Different Futures
Good Sunday afternoon from Seattle . . . Our weekly Online Travel Update for the week ending Friday, March 27, 2026, is below. This week’s Update highlights the different approaches the two AI heavyweights, Google and OpenAI, are currently taking to AI commerce. Enjoy.
- OpenAI Builds Out Advertising Team with Key Meta Hire.
…