AI Legal Journal
AI Governance is not doing well!

By Peter Vogel on February 3, 2026
Photo: Igor Omilaev, Unsplash

ComputerWorld.com reported that “Across industries, CIOs are rolling out generative AI through SaaS platforms, embedded copilots, and third-party tools at a speed that traditional governance frameworks were never designed to handle. AI now influences customer interactions, hiring decisions, financial analysis, software development, and knowledge work — often without being formally deployed in the classical sense.”  The February 2, 2026 article entitled “Why AI adoption keeps outrunning governance — and what to do about it” (https://www.computerworld.com/article/4122948/responsible-ai-gap-why-ai-adoption-keeps-outrunning-governance-and-what-to-do-about-it.html) included these comments about “Why legacy data governance struggles under genAI”:

Even where governance exists, it’s often built on assumptions that no longer hold. Fawad Butt, CEO of agentic healthcare platform maker Penguin Ai and former chief data officer at UnitedHealth Group and Kaiser Permanente, argues that traditional data governance models are structurally unfit for generative AI.

“Classic governance was built for systems of record and known analytics pipelines,” he said. “That world is gone. Now you have systems creating systems — new data, new outputs, and much is done on the fly.” In that environment, point-in-time audits create false confidence. Output-focused controls miss where the real risk lives.

“No breach is required for harm to occur — secure systems can still hallucinate, discriminate, or drift,” Butt said, emphasizing that inputs, not outputs, are now the most neglected risk surface. This includes prompts, retrieval sources, context, and any tools AI agents can dynamically access.

What to do: Before writing policy, establish guardrails. Define no-go use cases. Constrain high-risk inputs. Limit tool access for agents. And observe how systems behave in practice. Policy should come after experimentation, not before. Otherwise, organizations hard-code assumptions that are already wrong.
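The guardrail checklist above — define no-go use cases, constrain high-risk inputs, limit agent tool access — could be sketched in code as a pre-flight check. This is purely an illustrative sketch; all names, categories, and patterns here are assumptions for the example, not anything from the article or any real product:

```python
# Illustrative input-side guardrails for an AI agent invocation.
# Every list and name below is a hypothetical placeholder.

NO_GO_USE_CASES = {"hiring_decision", "medical_diagnosis"}   # banned outright
HIGH_RISK_PATTERNS = ("ssn", "password", "api_key")          # constrained inputs
ALLOWED_TOOLS = {"search_docs", "summarize"}                 # agent tool allowlist

def check_request(use_case: str, prompt: str, requested_tools: set[str]) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed agent call, checking
    use case, prompt contents, and requested tools before any model runs."""
    if use_case in NO_GO_USE_CASES:
        return False, f"no-go use case: {use_case}"
    lowered = prompt.lower()
    for pattern in HIGH_RISK_PATTERNS:
        if pattern in lowered:
            return False, f"high-risk input matched: {pattern}"
    extra = requested_tools - ALLOWED_TOOLS
    if extra:
        return False, f"tools outside allowlist: {sorted(extra)}"
    return True, "ok"
```

The point of the sketch is the ordering: these checks run on inputs (use case, prompt, tools) before invocation, matching Butt's observation that inputs are the neglected risk surface, rather than auditing outputs after the fact.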

What do you think?

First published at https://www.vogelitlaw.com/blog/ai-governance-is-not-doing-well

  • Posted in:
    E-Discovery, Technology
  • Blog:
    Internet, IT & e-Discovery
  • Organization:
    Peter S. Vogel PC

Copyright © 2026, LexBlog. All Rights Reserved.