AI compliance and insurance are now directly connected. Most companies assume their existing insurance covers AI-related risks. In 2026, that assumption stopped holding. The "silent AI" era is over. Until recently, AI risks were absorbed into existing policies simply because nothing explicitly excluded them. Coverage existed by default, not by design.

AI insurance requirements changed when insurers began rewriting policy language. 

Effective January 1, 2026, Verisk introduced new general liability endorsements, CG 40 47 and CG 40 48, that allow carriers to exclude claims tied to generative AI. Once these endorsements are added, coverage for AI-related harm can disappear even if the rest of the policy remains intact.

Major carriers, including AIG, W.R. Berkley, and Great American, have filed or sought regulatory clearance for broad AI exclusions across D&O, E&O, EPLI, and CGL lines, as reported in recent industry analysis on AI insurance coverage trends. 

The wording matters. Some exclusion language is broad enough to bar coverage for any claim "arising out of" AI use, output, training, advice, or decision-making, a shift highlighted in multiple 2026 risk reports. That standard does not require AI to be the sole cause. If AI was involved anywhere in the process, the carrier may have grounds to deny coverage.

Most companies will not see this coming. Policy renewals may include new questionnaires or endorsements, but the impact is not always clear at the time of signing. The gap becomes visible only when a claim is filed. 

At that point, it is too late to adjust governance or documentation. 

Example. A company deploys an AI hiring tool. A discrimination complaint follows. The EPLI policy had been renewed with an AI questionnaire that was completed without supporting documentation. There is no record of bias testing or human oversight. The carrier applies a sublimit, and a five-million-dollar claim is capped at five hundred thousand.

This is the shift. Coverage is no longer assumed; it is earned through documented control.


AI risk is no longer abstract. It affects legal exposure, insurance coverage, and business operations. This overview explains how insurers and regulators now define AI risk and why it requires structured control.

WHAT INSURANCE UNDERWRITERS NOW REQUIRE 

The insurance market has split into two categories. 

Companies with documented AI governance receive coverage on workable terms. Companies without it face exclusions, sublimits, premium increases, or denial. 

Underwriters are no longer evaluating AI as a background risk. They are evaluating it as an operational system that must be controlled, documented, and monitored. 

That evaluation now mirrors how insurers assess cybersecurity posture. If controls are not documented, they are treated as absent. 

At a minimum, underwriters now require structured documentation of AI use and control across the organization. This shift aligns with broader industry guidance on AI risk and underwriting expectations. 

Governance must be defined. Companies need a clear framework that identifies where AI is used, how tools are approved, and who is responsible for oversight. This includes model inventories, risk registers, and defined approval processes for deploying new systems. 

Decision-making must include human oversight where appropriate. Underwriters look for evidence that AI outputs are reviewed before they affect customers, employees, or business outcomes. 

Employment-related use is under heightened scrutiny. If AI is used in hiring, promotion, or termination decisions, insurers expect documentation showing bias testing and human review. Without it, exposure shifts quickly into EPLI risk. 

Security expectations have expanded beyond traditional controls. Some carriers now require evidence of model risk assessments, adversarial testing, and safeguards aligned with frameworks such as the NIST AI Risk Management Framework. 

Third-party tools are also part of the review. Companies must demonstrate that they evaluated how external AI vendors handle data, security, and liability. This is not optional. Vendor risk is treated as company risk. 

Insurers also expect defined incident response processes. If an AI system produces inaccurate, harmful, or discriminatory output, there must be a documented plan for how the company responds, investigates, and mitigates impact. 

Regulatory awareness now factors into underwriting decisions. Recent regulatory tracking shows insurers are increasingly aligning underwriting with state AI laws and governance expectations. Companies are expected to understand how laws such as the Colorado AI Act and CCPA automated decision-making regulations apply to their operations. Lack of awareness signals unmanaged risk. 

The standard is clear. If you cannot document how AI is governed, validated, and controlled, insurers will adjust coverage to reflect that gap. 

THE SIX POLICY TYPES AFFECTED 

AI exclusions are appearing across multiple policy lines. Coverage changes across policy types have been documented in recent insurance market reviews focused on AI risk allocation (Future Workforce Systems). This is not limited to cyber coverage. The impact cuts across nearly every major commercial policy. 

Tech E&O policies are seeing some of the earliest changes. Certain carriers now exclude claims tied to AI-generated outputs, including chatbot responses, automated recommendations, and content produced through generative systems. If a client relies on AI and a failure occurs, coverage may not apply. 

Cyber liability policies remain the most protective, but the scope is narrowing. Carriers are beginning to carve out specific scenarios, including AI-generated deepfake fraud and certain types of automated social engineering attacks. Coverage still exists, but it is no longer as broad as it was. 

Directors and Officers policies are seeing some of the most aggressive language. Exclusions can extend to governance failures, regulatory investigations, and shareholder claims if AI played any role in the underlying issue. This creates exposure at the leadership level, not only operationally. 

Employment Practices Liability Insurance is now a focal point. AI-driven hiring, promotion, and termination decisions create measurable discrimination risk. Underwriters are actively targeting this area, and coverage depends heavily on documented controls. 

Commercial General Liability policies are also shifting. New endorsements allow carriers to exclude claims tied to AI outputs, including defamation, privacy violations, copyright infringement, and even certain bodily injury scenarios linked to automated systems. 

Standalone AI insurance products are emerging to address these gaps. Providers such as Testudo and Armilla are offering specialized coverage, but these policies require detailed governance documentation and are typically placed through surplus lines markets. 

The pattern is consistent. AI exposure is no longer isolated; it touches every layer of coverage.

THE REGULATORY LANDSCAPE DRIVING THIS 

Regulation is expanding AI-related liability, and insurers are adjusting coverage to match that exposure. 

The Colorado AI Act, effective June 30, 2026, introduces specific requirements for companies using high-risk AI systems. These include risk management policies, impact assessments, consumer notices, and public disclosures. The law creates direct compliance obligations and financial penalties of up to twenty thousand dollars per violation. 

The CCPA automated decision-making regulations, effective January 1, 2027, add another layer. Companies must provide advance notice of AI use, offer opt-out rights, and allow access to information about how automated decisions are made. This creates both compliance and documentation risk. 

Other states are moving in the same direction. Several have introduced AI liability bills that create new private rights of action. This expands the ability of individuals to bring claims tied to AI-driven decisions. 

The NAIC AI Model Bulletin has already been adopted by 23 states plus DC as of April 1, 2026. The bulletin sets expectations around governance, accountability, and fairness in AI systems. It provides guidance for insurers on evaluating AI risk and is shaping how underwriting decisions are made across jurisdictions. 

International exposure adds another layer. The EU AI Act is being implemented in phases through 2027 and affects companies operating across borders. It introduces classification systems, compliance requirements, and enforcement mechanisms that extend beyond US law. Broader regulatory developments across jurisdictions are summarized in recent legal outlook reports. 

These developments are not theoretical. They increase the likelihood of claims and expand the scope of liability. Insurers are responding by tightening coverage, introducing exclusions, and requiring stronger governance documentation. 

The direction is clear. As regulation increases, underwriting becomes more restrictive. 

WHAT YOUR COMPANY NEEDS TO DO NOW 

These are legal deliverables. Not internal guidelines. Not IT controls. 

If your company uses AI in operations, underwriting now depends on whether these documents exist and whether they reflect actual practice. 

Companies need: 

  • AI Acceptable Use Policy
    A company-wide policy governing employee use of AI tools, data input restrictions, approved systems, and training requirements.  
  • AI Governance Framework
    A documented structure covering model inventories, risk registers, approval workflows, human oversight, and internal accountability.  
  • AI Bias Testing and Documentation
    Evidence showing testing and validation of AI systems used in hiring, lending, pricing, or customer-facing decisions.  
  • AI Vendor Due Diligence
    Legal review of third-party AI tools, including data handling practices, security posture, and liability allocation.  
  • AI-Specific Contract Provisions
    Updated terms, data processing agreements, and vendor contracts addressing AI use, risk allocation, and compliance obligations.  
  • AI Incident Response Plan
    A defined process for handling harmful, inaccurate, or discriminatory AI outputs, integrated with existing incident response protocols.  
  • AI Regulatory Compliance Audit
    Mapping AI use against current and upcoming laws, including the Colorado AI Act and CCPA automated decision-making requirements.  
  • Board-Level AI Oversight Documentation
    Records demonstrating that leadership understands and oversees AI risk at the organizational level.  

These are not optional if you expect coverage. Traverse Legal works with companies to build the AI governance infrastructure that insurance underwriters now require. Before renewal. Not after denial. If your company is facing an upcoming renewal or has already received an AI questionnaire, now is the time to act. 
