An AI governance policy defines how AI is managed across the organization. It is the starting point for meeting current insurance expectations and reducing underwriting friction. Underwriters no longer accept informal oversight. They expect a documented system showing where AI is used, how it is approved, and who is accountable for its operation. 

At a minimum, that documented system includes a complete inventory of AI tools in use across departments. Many companies cannot produce this on request. That gap alone signals unmanaged risk. Approval processes must be defined before new AI tools are deployed. This includes evaluating purpose, data inputs, and potential impact on customers or employees. Without this step, companies cannot show that risk was assessed before deployment.
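
For illustration only, the sketch below shows one way an inventory entry could be structured so that it captures the points underwriters ask about. It is written in Python; the field names, tool, and vendor are hypothetical examples, not a required format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    """One entry in a company-wide AI tool inventory (illustrative fields only)."""
    name: str                    # tool name
    vendor: str                  # internal team or third-party provider
    business_purpose: str        # why the tool is used
    departments: list[str]       # where it is deployed
    data_inputs: list[str]       # categories of data the tool receives
    affects_customers: bool      # potential impact on customers
    affects_employees: bool      # potential impact on employees (e.g., hiring)
    approved_by: str             # accountable owner who signed off
    approval_date: date          # when pre-deployment review was completed
    human_review_required: bool  # whether outputs need human sign-off

# Hypothetical example entry
inventory = [
    AIToolRecord(
        name="ResumeScreener",
        vendor="ExampleVendor Inc.",
        business_purpose="Initial screening of job applications",
        departments=["HR"],
        data_inputs=["applicant resumes", "job descriptions"],
        affects_customers=False,
        affects_employees=True,
        approved_by="General Counsel",
        approval_date=date(2025, 1, 15),
        human_review_required=True,
    )
]
```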

Internal controls must also govern how AI-driven decisions are made. This is not limited to technical safeguards. It includes operational rules defining when AI can act and when human intervention is required. 

Human oversight is critical. Underwriters expect clear review points, especially in high-risk areas such as hiring, pricing, or customer interaction. Board-level visibility also matters. A strong AI governance policy should show that leadership understands AI risk and receives regular reporting.

Without a defined framework, companies cannot show control over AI use. Without control, underwriting outcomes deteriorate. 

AI governance is not a theoretical concept. It is a defined set of controls that keeps AI systems aligned with business, legal, and operational expectations. The following overview provides a clear baseline for how governance is understood in practice.

AI-SPECIFIC QUESTIONNAIRES AT RENEWAL 

Insurance renewals now include AI-specific questionnaires. These are not informational. They are underwriting tools. Carriers use these questionnaires to assess whether a company’s AI use creates unmanaged liability. The answers are measured against documentation, not intent. The focus is strongest in employment-related risk. EPLI underwriters are asking direct questions about the use of automated systems in hiring, promotion, and termination decisions. 

Typical questions include whether AI tools are used in employment decisions, whether bias testing has been conducted, and whether human review exists in the process. These questions point back to the same issue: whether the company has real AI governance and can prove it in practice. A policy that exists only on paper will not solve the problem.

Documentation is the deciding factor. Underwriters expect evidence showing that bias testing was performed and that outputs are reviewed before decisions are finalized. Without this, the response is incomplete. Incomplete or unsupported responses create underwriting risk. That risk usually appears as exclusions, sublimits, or adverse pricing. 

The questionnaire is not the problem. It is simply the first place where the absence of an AI governance policy becomes visible.

AI SECURITY AND VALIDATION 

AI systems require controls beyond traditional cybersecurity. An underwriter no longer looks only for endpoint protection, MFA, and incident logging. The question now is whether the company can show how AI systems are evaluated, monitored, and constrained in practice. NIST’s AI Risk Management Framework gives a useful benchmark because it focuses on managing risks tied to the design, development, deployment, and use of AI systems.  

Companies must demonstrate how they assess model risk, validate outputs, and prevent harmful or inaccurate results. That means documenting where the model can fail, what controls sit around it, and how the business confirms outputs before they drive real decisions. If an AI tool is used in hiring, pricing, customer service, fraud review, or internal operations, insurers want evidence that the company is not treating the model as self-proving.  

This includes documented processes for reviewing AI-driven decisions and confirming reliability before those decisions affect operations. In practical terms, that can include model-level risk assessments, adversarial testing, escalation paths, and defined human review points. The same governance logic appears in Colorado’s AI law, which requires deployers of high-risk AI systems to implement a risk management policy and program and to complete impact assessments.  
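
As a rough illustration of a defined human review point, the sketch below routes a model output to human review when confidence falls below a business-chosen threshold and preserves a record for later evidence. The function names and the 0.85 cutoff are assumptions made for the example, not recommended values.

```python
CONFIDENCE_THRESHOLD = 0.85  # illustrative cutoff chosen by the business, not a standard

def log_for_audit(record: dict, outcome: str) -> None:
    """Append the decision record to an audit trail so review can be evidenced later."""
    print({**record, "outcome": outcome})  # stand-in for a real audit store

def route_ai_decision(subject_id: str, model_score: float, model_confidence: float) -> str:
    """Send low-confidence AI outputs to human review instead of acting on them automatically."""
    record = {
        "subject_id": subject_id,
        "model_score": model_score,
        "model_confidence": model_confidence,
    }
    if model_confidence < CONFIDENCE_THRESHOLD:
        log_for_audit(record, outcome="escalated_to_human_review")
        return "human_review"
    log_for_audit(record, outcome="accepted_with_documented_check")
    return "proceed"
```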

An AI governance policy should connect security controls and output validation directly. If the company cannot show how outputs are tested, reviewed, and corrected, it will struggle to answer underwriter questions with confidence.

EMPLOYEE AI USE AND INTERNAL CONTROLS 

Employee behavior creates exposure. In many companies, the first meaningful AI risk does not come from a custom model. It comes from employees using public or third-party AI tools without clear internal controls. 

Companies must define which AI tools are approved, what data can be used, and how employees interact with these systems. That means the business needs a written AI acceptable use policy, not a loose internal assumption. The policy should identify approved tools, restricted data categories, required review steps, and training expectations. Without that structure, the company cannot show disciplined AI use during renewal.  
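
For illustration, the rules in an acceptable use policy can also be expressed in a form internal tooling can check. The Python sketch below is hypothetical; the tool names and data categories are invented for the example and would be replaced by the company's own lists.

```python
# Hypothetical acceptable-use configuration; every entry is an example, not a recommendation.
AI_ACCEPTABLE_USE = {
    "approved_tools": ["InternalCopilot", "ApprovedChatTool"],
    "restricted_data": ["customer PII", "employee health data", "trade secrets"],
    "required_review": {
        "customer_facing_content": "manager sign-off before publication",
        "employment_decisions": "HR and legal review required",
    },
    "training": "annual acceptable-use training for all employees",
}

def is_use_permitted(tool: str, data_categories: list[str]) -> bool:
    """Allow use only when the tool is approved and no restricted data category is involved."""
    if tool not in AI_ACCEPTABLE_USE["approved_tools"]:
        return False
    return not any(cat in AI_ACCEPTABLE_USE["restricted_data"] for cat in data_categories)

# Example: uploading customer PII to an approved tool would still be blocked
print(is_use_permitted("InternalCopilot", ["customer PII"]))  # False
```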

Uploading sensitive data into public AI tools can create unauthorized disclosure risk. Policies and training must address this directly. This point also connects to current regulations. The Colorado AI Act imposes documentation, risk management, and disclosure duties for high-risk systems, while the NAIC Model Bulletin emphasizes governance, accountability, compliance, and consumer protection in insurer use of AI systems. Those sources reinforce the same operational lesson: AI use must be controlled, documented, and supervised.  

Internal controls should also address who can approve AI use cases, when legal review is required, and how employees escalate issues involving inaccurate or harmful outputs. A strong AI governance policy does not stop at approval. It governs daily use. 

AI VENDOR DUE DILIGENCE 

Third-party AI tools must be evaluated before deployment. An insurer will treat vendor AI risk as company risk if the tool affects operations, customer interactions, employment decisions, or regulated data. That is why AI vendor due diligence now sits inside a practical AI governance policy, not only inside procurement. NIST’s AI Risk Management Framework supports this approach by focusing on governance, mapping, measurement, and management across the full AI lifecycle, including deployed systems and third-party tools.

Companies need documentation showing they reviewed vendor practices related to data handling, security, and compliance. That review should not be casual. It should answer a short list of business-critical questions: 

  • What data does the tool collect, retain, or use for training?
  • What security controls protect the system and its outputs?
  • What testing exists for accuracy, bias, and reliability?
  • What human oversight options exist for high-risk use cases?
  • What regulatory obligations does the vendor help the company satisfy?
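
One way to keep those answers consistent from vendor to vendor is a standard review record. The sketch below is a hypothetical Python template whose fields mirror the questions above; it is not a prescribed form.

```python
from dataclasses import dataclass

@dataclass
class VendorAIReview:
    """Standardized record of vendor AI due diligence (illustrative fields only)."""
    vendor: str
    data_collection_and_training_use: str   # what the tool collects, retains, or trains on
    security_controls: str                  # controls protecting the system and its outputs
    accuracy_and_bias_testing: str          # testing for accuracy, bias, and reliability
    human_oversight_options: str            # oversight available for high-risk use cases
    regulatory_obligations_supported: str   # obligations the vendor helps the company satisfy
    reviewed_by: str
    approved_for_deployment: bool
```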

Those questions matter because regulators are already moving toward documented risk management and impact assessment obligations for certain AI use cases. Colorado’s AI law, for example, ties high-risk AI use to risk management programs, impact assessments, and recordkeeping.  

Contracts must address liability allocation and define responsibility for AI-related outcomes. That includes data use rights, confidentiality, security obligations, compliance representations, limitations on vendor use of company data, and clear responsibility for claims tied to harmful or inaccurate outputs. If the contract stays silent, the company usually carries more downstream risk than it intended. This is where legal review matters. A company cannot credibly claim disciplined AI risk management if its vendor paper ignores the AI-specific issues underwriters now ask about.  

AI INCIDENT RESPONSE 

AI-related incidents require defined response procedures. A company should not wait until an inaccurate, harmful, or discriminatory output causes damage before deciding who investigates, who escalates, and who communicates with affected parties. NIST’s AI Risk Management Framework and the Colorado AI Act both reinforce the need for structured governance, ongoing monitoring, and correction of AI-related risk.  

Companies must document how they handle situations where AI produces inaccurate, harmful, or discriminatory outputs. In practice, that process should cover a few core steps: 

  • Identify the output and contain further use  
  • Preserve logs, prompts, inputs, and decision records  
  • Determine whether human review failed or never occurred  
  • Assess whether legal, regulatory, customer, or employee harm occurred  
  • Escalate to legal, compliance, and leadership where required  
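
As a rough sketch of how the steps above can be operationalized, the example below defines a hypothetical incident record and triage routine in Python. The field names and escalation rules are assumptions for illustration, not a prescribed workflow.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AIIncident:
    """Record of an AI-related incident (illustrative structure only)."""
    description: str
    system: str
    detected_at: datetime
    contained: bool = False
    evidence: list[str] = field(default_factory=list)        # logs, prompts, inputs, decision records
    human_review_occurred: bool = False
    potential_harm: list[str] = field(default_factory=list)  # legal, regulatory, customer, employee
    escalated_to: list[str] = field(default_factory=list)

def triage(incident: AIIncident) -> AIIncident:
    """Apply the core steps: contain the output, preserve evidence, assess harm, escalate."""
    incident.contained = True
    incident.evidence.append("preserved model logs, prompts, and decision records")
    if not incident.human_review_occurred or incident.potential_harm:
        incident.escalated_to.extend(["legal", "compliance", "leadership"])
    return incident
```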

This is not an IT-only exercise. It is a legal and operational control. If the company uses AI in employment, pricing, customer-facing workflows, or regulated data environments, incident response must connect directly to governance and documentation.  

This process should align with existing incident response frameworks but account for the unique risks created by AI systems. Traditional cyber incident plans focus on intrusion, unauthorized access, and system compromise. AI incident response must also address flawed outputs, model drift, discriminatory effects, and reliance on generated content that appears credible but is wrong. An AI governance policy should make that distinction explicit so the business can respond quickly and show underwriters that AI risk is controlled rather than improvised. 

REGULATORY COMPLIANCE 

AI use must align with applicable legal requirements. An AI governance policy is not complete unless it maps internal AI use to current and emerging law. 

Companies need to map their AI systems against current and emerging laws and ensure their practices meet regulatory expectations. That means identifying where AI is used, whether those uses fall into high-risk or regulated categories, and what documentation, disclosures, testing, or review processes apply. Colorado’s AI law requires deployers of high-risk AI systems to implement a risk management policy and program and complete impact assessments. California’s updated CCPA rulemaking package also includes automated decision-making technology regulations and related compliance obligations.

Failure to align creates exposure across multiple areas, including privacy, employment, and consumer protection. It also affects underwriting. The NAIC Model Bulletin on the use of AI by insurers emphasizes fairness, accountability, compliance with state law, transparency, and secure system design, and it has already been adopted by 23 states plus DC. That means regulatory compliance is no longer a side issue. It is part of how insurers evaluate whether AI risk is managed or ignored.

A company does not need a perfect answer for every future rule. It does need a documented process for identifying applicable requirements, assigning responsibility, and updating controls as the legal landscape changes. Without that discipline, AI use becomes harder to defend to regulators, insurers, and counterparties at the same time. 

Traverse Legal provides AI governance, compliance, and policy development aligned with insurance requirements. If your company is preparing for renewal or expanding AI use, early legal preparation reduces risk and improves coverage outcomes. 
