In an excellent blog post, “Avoiding AI Pitfalls in 2026: Lessons Learned from Top 2025 Incidents,” ISACA’s Mary Carmichael summarizes lessons learned from the top incidents of 2025, drawing on MIT’s AI Incident Database and its risk domains. According to Carmichael, an analysis of the incidents showed recurring patterns across different risk domains, including privacy, security, reliability, and human impact, and she points out that most of the problems were predictable and avoidable.
Carmichael notes that her blog post “reviews where those patterns appeared and what needs to change in 2026 so organizations can use AI with greater confidence and control.”
Consider reading the article, but in a nutshell, her lessons are:
- Treat AI systems like core infrastructure: require MFA and unique administrative accounts, conduct privileged access reviews, and perform security testing, particularly where personal information is involved.
- On discrimination and toxicity: facial recognition technology can be used to support investigations but should not be “the deciding evidence.” Require corroborating evidence, publish error rates by race and other characteristics, and log every use.
- Deepfakes are on the rise: “Organizations should monitor for misuse of their brands and leaders. This includes playbooks for rapid takedowns with platforms and training employees and the public to ‘pause and verify’ through secondary channels before responding.”
- Attackers are using AI models for cyber-espionage. “Assume attackers have an AI copilot. Treat coding and agent-style models as high-risk identities, with least-privilege access, rate limits, logging, monitoring, and guardrails. Any AI that can run code should be governed like a powerful engineer account, not a harmless chatbot.”
- Chatbots and AI companion apps have engaged in harmful conversations. Build AI products with safety-by-design: “clinical input, escalation paths, age-appropriate controls, strong limits and routes to human help. If it cannot support these safeguards, it should not be marketed as an emotional support tool for young people.”
- AI providers are alleged to be adding air pollution, noise, and industrial traffic to neighborhoods. Due diligence information, including “energy mix, emissions and water use,” should be collected “so AI procurement aligns with climate and sustainability goals.”
- AI tools are confident, but often incorrect. Hallucinations are frequent and pose safety risks. “Design every high-impact AI system with the assumption it will sometimes be confidently wrong. Build governance around that assumption with logging, version control, validation checks and clear escalation so an accountable human can catch and override outputs.”
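
That last lesson lends itself to a concrete illustration. Below is a minimal sketch, in Python, of what “build governance around that assumption” might look like in code: every call is logged with a model version tag, the output passes through validation checks, and anything that fails is flagged for an accountable human to review. The model client, version label, and validation rules here are hypothetical placeholders for illustration, not a prescribed implementation.

```python
import logging
from dataclasses import dataclass
from typing import Callable, Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-governance")

MODEL_VERSION = "summarizer-v1.3"  # hypothetical version tag kept for the audit trail


@dataclass
class Decision:
    output: str
    approved: bool
    needs_human_review: bool
    reason: str


def validate(output: str) -> Optional[str]:
    """Hypothetical validation checks; return a reason string if the output fails."""
    if not output.strip():
        return "empty output"
    if len(output) > 2000:
        return "output exceeds expected length"
    # A real system would add domain-specific checks (citations present, numbers in range, etc.)
    return None


def governed_call(prompt: str, model: Callable[[str], str]) -> Decision:
    """Wrap a model call with logging, validation checks, and a human-escalation path."""
    log.info("request | model=%s | prompt=%r", MODEL_VERSION, prompt)
    output = model(prompt)
    log.info("response | model=%s | output=%r", MODEL_VERSION, output)

    failure = validate(output)
    if failure:
        # Assume the model can be "confidently wrong": route to an accountable human.
        log.warning("escalating to human review: %s", failure)
        return Decision(output, approved=False, needs_human_review=True, reason=failure)

    return Decision(output, approved=True, needs_human_review=False, reason="passed validation")


if __name__ == "__main__":
    fake_model = lambda prompt: ""  # stand-in for a real model client
    print(governed_call("Summarize the incident report.", fake_model))
```

The specific checks matter less than the shape: the wrapper assumes the output may be wrong, records enough to reconstruct what happened, and makes the human override path a first-class part of the design rather than an afterthought.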
Carmichael outlines strategic goals to consider in 2026 to leverage the lessons learned in 2025. Her final thought, near and dear to my heart, is that having an AI governance program will give organizations a competitive advantage in 2026. “Organizations that maintain visibility, clear ownership and rapid intervention will reduce harm and earn trust. With the right oversight, AI can create value without compromising safety, trust or integrity.” I couldn’t have said it better. If you have not yet established an AI governance program, Q1 of 2026 is a perfect time to get started.