
By Dr. Anat Lior and Sonal Madhok | December 09, 2025

AI is transforming risk profiles across industries, and the insurance market continues to adapt. Can we expect policies to explicitly address AI in the near future, ending the silent coverage era?

Artificial intelligence is transforming risk profiles across industries, and the insurance market continues to adapt. Today, AI-related risks are largely covered implicitly under traditional insurance policies (“silent AI” coverage). This “silent” coverage is similar to how early cyber risks were handled under standard policies before dedicated cyber insurance existed. As a result, an AI incident may be covered by current policies (cyber, liability, etc.) even if they do not specifically mention “AI.” However, this silent coverage creates ambiguity. Insurers are now moving to clarify coverage for AI, either by introducing endorsements that affirm coverage for certain AI risks or by adding exclusions to standard policies to avoid unanticipated exposure. According to Dr. Anat Lior’s “Insuring the AI Age” paper, we can expect policies to explicitly address AI in the near future, ending the silent coverage era.

Why does this matter?

Currently, companies often rely on a patchwork of policies to cover AI risks. No single policy covers all AI perils, but different policies cover different aspects (for example, a data breach caused by AI falls under cyber insurance, while an AI-caused injury falls under general liability). Each policy may have gaps where an AI-related loss does not fit neatly. It is therefore critical for risk managers to understand these overlaps and gaps and not assume that AI-related losses are automatically covered. Insurers expect the AI insurance market to grow (one forecast is ~$4.7B in premiums by 2032) and are developing solutions. Specialized AI endorsements have been introduced (e.g., some cyber policies now explicitly cover or exclude AI-driven events such as data poisoning or AI-generated content), along with some standalone AI insurance offerings for specific needs. However, Dr. Lior states that insurers expect AI risks to be absorbed into mainstream products over time, once they have more data.

Parallels to Cyber: The ongoing situation with AI coverage resembles the evolution of cyber insurance. Initially, cyber losses were paid unintentionally under property or liability policies (“silent cyber”); eventually, insurers added cyber exclusions and created dedicated cyber policies when the risk became significant and better understood. For AI, Dr. Lior suggests we are at that inflection point. Insurers are starting to tighten terms, for example, by adding wording to clarify whether (and how) losses from autonomous decisions or algorithmic errors are covered. Reviewing policy language carefully at renewal is therefore essential, whether to secure an affirmative endorsement or to confirm that another policy will cover any gaps.

Coverage by traditional policies (with AI considerations)

Most organizations will address AI risks through existing insurance policies. Below is a shortened summary of Dr. Lior’s work on how 12 common policy types relate to AI. It outlines each policy’s role, key gaps, potential AI-related exclusions, and an example scenario:


Table 1: AI-related risks mapped to existing insurance policies

Source: “Insuring the AI Age” (contents abridged)

Policy Type | Coverage for AI | Gaps/Limitations | Potential Exclusions | Example Scenario
Cyber Insurance | Data breaches, AI-aided hacks, privacy violations | Requires breach trigger; own data loss not covered | Unauthorized AI use, AI-generated content | AI chatbot leaks proprietary code; own data loss not covered
Tech E&O | Errors in tech services/products, incl. AI systems | Needs clear negligence; excludes injury/damage | High-risk AI apps, lack of oversight | AI trading algorithm causes client losses
EPLI | Discrimination from AI in HR or customer service | No coverage for fixing AI; limited to covered groups | Algorithmic bias sub-limits/exclusions | AI hiring tool discriminates against older applicants
Professional Indemnity (PI) | Errors by professionals using AI tools | Assumes human oversight; autonomous AI unclear | Unsupervised/unapproved AI use | Doctor misdiagnoses due to AI tool
General Liability (CGL) | Injury/damage from AI systems; defamation | No coverage for financial loss; cyber exclusions | Intangible harm, data-only events | AI robot injures visitor
Workers’ Comp | Employee injuries from AI-operated equipment | Only covers employees; limited benefits | None; premiums may adjust | Factory worker injured by AI robot
IP Liability | Infringement from AI-generated content | Patent infringement often excluded; intentional acts not covered | Training-data IP claims, generative AI | AI-generated image infringes copyright
Property Insurance | Physical damage from AI failures | No coverage for downtime or data loss | Cyber exclusions; software loss | AI system causes explosion; repairs covered
Crime Insurance | Theft/fraud incl. AI-enabled scams | Sub-limits for social engineering; non-fraud losses | Deepfake fraud, insider fraud | Deepfake voice scam leads to wire transfer
D&O | Executive liability for AI oversight failures | Excludes fraud; doesn’t cover fixing AI | None yet; scrutiny of AI governance | Shareholders sue board over failed AI project
Product Liability | Injury/damage from defective AI-enabled products | AI as “product” unclear; excludes financial loss | Non-compliance with safety standards | Self-driving car’s AI fails, causes crash
Media Liability | Defamation/privacy/IP from AI content | Needs editorial review; excludes intentional acts | Unreviewed AI content, patent issues | AI-generated article fabricates quotes

As the table shows, most AI-related risks can be mapped to an existing insurance policy, but each has limitations. For example, an AI-caused data breach is covered by cyber insurance, but if the breach exposes the company’s own confidential data, there may be a coverage gap. Similarly, if a company’s AI-driven advice causes only financial harm (no injury or property damage), CGL and product liability policies will not respond; the company would need E&O coverage.

Underwriting adaptations

Insurers have started adjusting underwriting practices for AI exposures. Key points include:

  • Data scarcity: There isn’t much historical loss data on AI incidents, so, according to Dr. Lior, insurers are using analogies to known risks (comparing AI scenarios to similar past claims in cyber, tech, etc.) and running scenario analyses. Some are building internal databases of AI “near misses” and lawsuits to identify trends. In underwriting applications, insureds may face more detailed questions about AI usage and controls (e.g., “Do you use AI in making decisions? How do you prevent bias? What happens if the AI fails? Is there human override?”). Providing thorough answers and having strong AI governance can favorably influence terms, or at least ensure coverage fits the risks.
  • Human oversight vs. autonomy: Underwriters generally favor a human in the loop for critical AI decisions. This preference implies that if something goes wrong, it is likely due to human error, which traditional insurance covers readily. If an AI is fully autonomous, insurers may treat it as akin to a product that must be extremely well designed (or covered via product liability). Some insurers are willing to cover fully autonomous risks, but often at lower limits or with strict conditions. According to Dr. Lior, one insurer metaphorically called a human overseer a “liability sponge” absorbing blame. Over time, if AI systems prove very reliable, this bias may fade, but for now, documented human oversight and safety checks will help in underwriting.
  • “Hyper-scaler” tech firms vs. others: Very large companies developing advanced AI often self-insure a chunk of the risk (e.g., through large balance sheets or captive insurers). For example, in Dr. Lior’s research, a major retailer’s risk manager noted they had not bought a specific “AI insurance” product; they rely on existing cover and their own capacity for now. Insurers are focusing new AI coverage on small-to-medium enterprises, where the need to transfer risk is greater and the exposures are more bounded. This suggests that smaller companies can find insurers willing to work with them on AI risk, whereas large corporations may negotiate bespoke arrangements. Regardless of size, transparency is key: disclosing AI activities to insurers can help prevent future complications and mitigate risk.
  • Broker behavior: According to Dr. Lior, insurance brokers have tended to reassure clients that existing policies suffice for AI unless there is an apparent gap. This conservative approach suggests that many companies have not yet purchased new AI-specific policies. Even where brokers advise that existing policies are sufficient for AI-related risks, it is helpful to review how specific AI scenarios would be addressed under a company’s current coverage. If policy language is unclear, seeking clarification from the company’s broker or insurer can provide greater confidence in its risk management strategy. As insurers begin to introduce AI-specific exclusions or endorsements in some markets, brokers may respond by identifying alternative solutions. Understanding potential coverage gaps early supports more informed decision-making and helps organizations prepare for future changes in the insurance landscape.

Market & regulatory landscape

The external environment heavily influences AI insurance:

  • No broad mandates yet, but discussions ongoing: Governments have not required companies to carry AI liability insurance across the board. However, targeted requirements could emerge. For instance, regulators might mandate insurance for autonomous vehicle operators (similar to auto insurance requirements) or for AI in critical sectors (healthcare AI devices, etc.) once frameworks are in place. The EU considered an AI liability fund and mandatory insurance in earlier debates, but the final EU AI Act did not impose insurance; instead, it focuses on compliance and lets liability fall under existing laws. In the U.S., there is interest in how insurance can support AI risk management, but no federal mandates as of now. Implication: for now, buying AI insurance is a choice guided by risk appetite and contractual demands, not by law, although some jurisdictions already require certain coverages in specific industries such as autonomous vehicles.
  • Emerging liability laws: Legal clarity is coming, and it will drive insurance changes. The EU AI Act (expected enforcement in 2025–2026) will impose strict obligations and heavy fines on providers of high-risk AI systems. Insurers are developing products to cover these new liabilities related to AI Act violations (e.g., endorsements for regulatory defense costs or fines, where insurable). Separately, the EU is updating its Product Liability Directive so that victims of AI-caused harm can sue under strict liability (with an eased burden of proof). This means companies deploying AI in products will face a liability environment much like traditional product manufacturing, making product liability insurance and adherence to safety standards even more crucial. In the U.S., there have been proposals (such as a California bill) for strict liability on AI developers for certain harms. While not law, they indicate a trend: if AI causes harm, someone (developer, deployer, etc.) will be held accountable even absent negligence. As these laws evolve, insurance will adjust. Insurers may offer coverage for the new liabilities where possible, but they may also require that companies follow any legally mandated risk controls (if you violate a law or regulation, insurance may not cover the resulting penalties).
  • Insurers as risk partners (quasi-regulators): Insurers often fill governance gaps by requiring policyholders to implement safety measures. For AI, underwriters might stipulate conditions such as “the insured must conduct regular bias audits on their AI” or “a human must review AI outputs in high-stakes cases” as binding terms. While these aren’t public laws, failing to meet them could void coverage. Insurance can thereby enforce a level of AI oversight. Additionally, insurers provide resources: many have published AI risk bulletins, and some offer consultations to clients. For example, an insurer might help a client test an AI system’s security as part of underwriting. This benefits both sides, as it reduces risk and potential claims. Thus, engaging with an insurer’s risk improvement services can strengthen a company’s AI risk management and make insuring those risks easier and cheaper.
  • Potential for government backstops: There is recognition that some AI risks could be so systemic (affecting many companies simultaneously) that they challenge insurability. For instance, if a widely used AI platform or cloud service fails and triggers massive losses across the economy, it could be akin to a natural catastrophe or terrorism event in terms of insurance impact. Some experts suggest a need for a government-supported reinsurance program for catastrophic AI events, similar to how many countries have terrorism insurance pools. While hypothetical now, this idea might gain traction if there is a major AI-driven catastrophe (e.g. an AI cybersecurity incident that causes global infrastructure downtime). For everyday AI risks, this isn’t needed, but it is worth noting that public-private solutions could emerge for the extreme tail risks, ensuring coverage remains available.

Key takeaways

In sum, AI is introducing new risks, but it is also pushing the insurance industry to innovate. For risk managers, the central message is proactive engagement:

  • Map AI risks to policies: Identify any exposures that don’t clearly fall under an existing coverage or that might exceed current limits.
  • Close the gaps: Begin conversations with the company’s broker and insurer about endorsements or new policies if needed (for example, adding a rider for AI copyright liability if the company heavily uses generative AI, or increasing social engineering fraud limits given deepfake risks).
  • Stay informed: Monitor how laws like the EU AI Act or new industry guidance will affect liability, and ensure insurance policies adapt accordingly. For companies operating globally, laws differ, and something covered in one country might not be covered in another if liability standards vary.
  • Implement strong AI governance: Insurance is not a substitute for good risk management; underwriters will favor companies that control their AI risk. Document all AI development and oversight processes. If a company can show an insurer that it follows best practices (data privacy, bias mitigation, human oversight, contingency plans), not only is the chance of loss reduced, but the company is also likely to secure better insurance terms.

Insurance is becoming a key enabler for AI adoption

Much like cyber insurance gave businesses confidence to engage in e-commerce, robust AI coverage can give stakeholders (investors, customers, boards) confidence to deploy AI technologies. It provides a financial safety net if things go wrong. As one expert in Dr. Lior’s paper said, insurance for AI is an important piece in “ensuring the safe integration of new technologies into society.” By carefully aligning risk transfer strategy with AI strategy, companies can innovate with greater peace of mind. In the coming years, the currently blurry lines of coverage are expected to sharpen, with explicit terms, specialized products for complex risks, and, hopefully, a track record of claims that proves manageable.

Authors


Dr. Anat Lior
Assistant Professor of Law, Thomas R. Kline School of Law, Drexel University; Affiliated Fellow, Information Society Project (ISP), Yale Law School


Sonal Madhok
Technology Risks Analyst
