AI Governance Programs Provide a Competitive Advantage

By Linn Foster Freedman on December 18, 2025

In an excellent blog post, “Avoiding AI Pitfalls in 2026: Lessons Learned from Top 2025 Incidents,” ISACA’s Mary Carmichael summarizes lessons learned from the top AI incidents of 2025, drawing on MIT’s AI Incident Database and its risk domains. Her analysis of the incidents showed recurring patterns across different risk domains, including privacy, security, reliability, and human impact, and she points out that most of the problems were predictable and avoidable.

Carmichael notes that her blog post “reviews where those patterns appeared and what needs to change in 2026 so organizations can use AI with greater confidence and control.”

Consider reading the full article; in a nutshell, her lessons are:

  1. Treat AI systems like core infrastructure—enforce MFA, unique administrative accounts, privileged access reviews, and security testing, particularly where personal information is included.
  2. In the discrimination and toxicity domain, facial recognition technology can support investigations but should not be “the deciding evidence.” Require corroborating evidence, publish error rates by race and other characteristics, and log every use.
  3. Deepfakes are on the rise: “Organizations should monitor for misuse of their brands and leaders. This includes playbooks for rapid takedowns with platforms and training employees and the public to ‘pause and verify’ through secondary channels before responding.”
  4. Attackers are using AI models for cyber-espionage. “Assume attackers have an AI copilot. Treat coding and agent-style models as high-risk identities, with least-privilege access, rate limits, logging, monitoring, and guardrails. Any AI that can run code should be governed like a powerful engineer account, not a harmless chatbot.”
  5. Chatbots and AI companion apps have engaged in harmful conversations. Build AI products with safety-by-design: “clinical input, escalation paths, age-appropriate controls, strong limits and routes to human help. If it cannot support these safeguards, it should not be marketed as an emotional support tool for young people.”
  6. AI providers are alleged to be adding air pollution, noise, and industrial traffic to neighborhoods. Due diligence information, including “energy mix, emissions and water use,” should be collected “so AI procurement aligns with climate and sustainability goals.”
  7. AI tools are confident but often incorrect; hallucinations are frequent and pose safety risks. “Design every high-impact AI system with the assumption it will sometimes be confidently wrong. Build governance around that assumption with logging, version control, validation checks and clear escalation so an accountable human can catch and override outputs.” (A minimal sketch of this pattern appears after this list.)
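
Carmichael’s recommendations are organizational rather than technical, but for readers who build these systems, here is a minimal Python sketch of what lesson 7’s “assume it will sometimes be confidently wrong” pattern could look like in practice. Everything here (the `governed_call` wrapper, the `Decision` record, the stub model, and the validation rule) is a hypothetical illustration under that assumption, not code from her post or from any particular library.

```python
import logging
from dataclasses import dataclass
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-governance")

@dataclass
class Decision:
    output: str        # the model's answer
    validated: bool    # did it pass the automated check?
    needs_human: bool  # must an accountable human review it?

def governed_call(
    model: Callable[[str], str],
    validate: Callable[[str], bool],
    prompt: str,
    model_version: str,
) -> Decision:
    """Run a model call assuming it may be confidently wrong:
    log everything, validate the output, escalate failures to a human."""
    output = model(prompt)
    # Log prompt, output, and model version for audit and rollback.
    log.info("model=%s prompt=%r output=%r", model_version, prompt, output)
    ok = validate(output)
    if not ok:
        # Clear escalation path: flag for an accountable human to override.
        log.warning("validation failed; routing to human review")
    return Decision(output=output, validated=ok, needs_human=not ok)

# Hypothetical usage: a stub model and a trivial validation rule.
if __name__ == "__main__":
    fake_model = lambda p: "42"
    is_numeric = lambda s: s.strip().isdigit()
    print(governed_call(fake_model, is_numeric, "What is 6 x 7?", "v0.1"))
```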

Carmichael outlines strategic goals to consider in 2026 to leverage the lessons learned in 2025. Her final thought, near and dear to my heart, is that having an AI governance program will give organizations a competitive advantage in 2026. “Organizations that maintain visibility, clear ownership and rapid intervention will reduce harm and earn trust. With the right oversight, AI can create value without compromising safety, trust or integrity.” I couldn’t have said it better. If you have not yet developed and established an AI governance program, Q1 2026 is the perfect time to get started.

Linn Foster Freedman

Linn Freedman practices in data privacy and security law, cybersecurity, and complex litigation. She is a member of the Business Litigation Group and the Financial Services Cyber-Compliance Team, and chairs the firm’s Data Privacy and Security Team. Linn focuses her practice on compliance with all state and federal privacy and security laws and regulations. She counsels a range of public and private clients in industries such as construction, education, health care, insurance, manufacturing, real estate, utilities and critical infrastructure, marine, and charitable organizations, on state and federal data privacy and security investigations, as well as emergency data breach response and mitigation. Linn is an Adjunct Professor of the Practice of Cybersecurity at Brown University and an Adjunct Professor of Law at Roger Williams University School of Law. Prior to joining the firm, Linn served as assistant attorney general and deputy chief of the Civil Division of the Attorney General’s Office for the State of Rhode Island. She earned her J.D. from Loyola University School of Law and her B.A., with honors, in American Studies from Newcomb College of Tulane University. She is admitted to practice law in Massachusetts and Rhode Island. Read her full rc.com bio here.

  • Posted in:
    Intellectual Property
  • Blog:
    Data Privacy + Cybersecurity Insider
  • Organization:
    Robinson & Cole LLP
