Generative AI is no longer a future opportunity for insurers; it’s becoming a competitive dividing line.
The European Insurance and Occupational Pensions Authority (EIOPA)’s 2025 market-wide study, issued on 2 February 2026 and based on responses from 347 insurance undertakings across 25 EU/EEA markets, underscores this shift: the industry is moving rapidly toward AI-enabled operations, but with strong human oversight and a cautious approach to risk.
Against this backdrop, we’ve distilled EIOPA’s analysis through the lens of our market experience, client engagements, and our vision for a next-generation Human + AI Agent operating model—highlighting what leading analytical teams need to thrive.
01
AI enhances insurance analytical decision-making but does not replace it
EIOPA’s focus on human oversight confirms a core belief: Gen AI can greatly enhance insurers’ analytical capabilities, but accountability must remain with humans. AI will speed up analysis, surface complex patterns, and automate routine tasks, but it cannot hold responsibility in a regulated industry.
This means oversight will shift from manual line-by-line, item-by-item reviews to principle-based, policy-driven governance. Leaders will need to define clear guardrails, thresholds, and escalation rules that determine when AI operates autonomously and when human intervention is required. AI agents will monitor continuously, escalating only where human assessment is required.
Future workflows will blend deterministic and non-deterministic steps to achieve the appropriate control whilst retaining the benefits of the flexibility Gen AI inherently provides, all by leveraging the transparency and explainability built into human user interfaces to support confident decision-making. This aligns with EIOPA’s view that today’s Gen AI systems remain heavily supervised but will gain controlled autonomy as governance matures and operational confidence strengthens.
For example, insurers can deploy AI-driven monitoring to flag Actual vs Expected (AvE) deviations in near real time. Only material exceptions (say, a sudden 10% frequency surge in a micro-segment) are routed to human review committees, reducing noise and focusing expert attention where it matters most. This operationalises EIOPA’s “assisted” approach under strong supervision, and it mirrors WTW’s “predict and act” active portfolio management cycle.
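A minimal sketch of such exception routing, assuming a simple 10% materiality threshold and hypothetical segment names (real thresholds would come from the insurer’s own governance policy):

```python
from dataclasses import dataclass

@dataclass
class SegmentAvE:
    segment: str
    expected_frequency: float  # expected claim frequency for the segment
    actual_frequency: float    # observed claim frequency

# Hypothetical materiality threshold: only deviations above 10% reach humans.
MATERIALITY_THRESHOLD = 0.10

def route_exceptions(results):
    """Return only the segments whose AvE deviation is material enough for review."""
    flagged = []
    for r in results:
        deviation = abs(r.actual_frequency - r.expected_frequency) / r.expected_frequency
        if deviation > MATERIALITY_THRESHOLD:
            flagged.append(r)
    return flagged
```

Everything below the threshold stays with the monitoring agents; only the flagged segments consume expert committee time.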
02
Future insurance analytical teams will be insurance domain experts fluent in AI
EIOPA’s report highlights a looming talent gap in AI skills within the industry. We see this challenge not as a shortfall of talent, but as a shift in the required skill set of existing individuals. The insurance analysts of the future won’t just be data scientists or actuaries; they will need to be insurance domain experts who are also fluent in AI. These professionals will design, interpret, and challenge AI agents’ outputs, integrating AI-driven insights seamlessly into business decisions. They will operate in continuous, interconnected analytical cycles as part of networked teams aligned to a new AI-driven target operating model.
Generative AI will dramatically compress analytical cycle times; instead of periodic or annual reviews, insurers will move to near real-time updates. Continuous feedback loops among functions such as pricing, reserving, underwriting, and claims will become the norm, enabling increasingly proactive and collective analytics cultures rather than siloed ones.
For example, a cross-functional portfolio management team (encompassing pricing, claims, underwriting, and reserving) might use WTW’s Radar™ Vision tool to sift through emerging data and automatically surface areas of concern, such as a creeping increase in auto parts costs leading to higher claims severity.
The cross-functional team can then agree on targeted micro-actions (for instance, adjusting certain price levers, updating underwriting rules, or tweaking claims triage and total loss criteria) and measure the impact within days or weeks, rather than waiting for quarterly results. Knowing how to work with AI agents – asking the right questions, requesting detailed explanations, and challenging assumptions – is the type of skill the team will need.
03
Leading insurers operationalize AI instead of just experimenting
EIOPA found that most Gen AI use cases in insurance are still in the pilot or proof-of-concept stage. In our view, insurers must move decisively beyond experimentation and “AI labs” to truly operationalize AI across their enterprises. The winners in this new era will be those who integrate AI into workflows, not those who regard it as a separate, non-integrated experiment. This means establishing scaled operating models that embed human-AI collaboration. Key elements of such an operating model include:
- Human-and-agent teams, in which AI agents handle high-volume, routine tasks while humans focus on oversight, complex cases, and strategic analysis where expert assessment adds the most value.
- Connected functions, breaking down silos by enabling AI-driven insights to flow seamlessly across teams (for example, allowing claims and pricing functions to share a frequently updated view of inflation).
- Shared outcome alignment across pricing, reserving, underwriting, and claims, so that all functions work towards common business goals using insights derived from both human expertise and AI.
We envision a future in which a single analytics team might consist of five human experts, supported by up to fifty AI agents. In this scenario, AI takes on much of the heavy lifting for data processing and initial analysis, while humans provide guidance, make final decisions, and handle exceptions.
For example, an insurer could establish a central active portfolio management “run” function that operates a near-continuous insight → action → feedback cycle. AI agents would automatically prepare cross-functional “change sheets” that highlight segments to grow, defend, or adjust in response to emerging data. Human managers would then review, approve, and deploy these changes within predefined guardrails. This kind of rapid, iterative cycle moves far beyond a one-off pilot to embed AI into everyday analytical operations.
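The change-sheet step could be sketched as a simple classifier over portfolio segments. The loss-ratio and growth thresholds below are illustrative assumptions, not WTW parameters:

```python
def change_sheet(segments):
    """Classify each segment as grow, defend, or adjust from simple signals."""
    sheet = []
    for s in segments:
        if s["loss_ratio"] > 0.75:
            action = "adjust"   # profitability under pressure: rate/rule action
        elif s["loss_ratio"] < 0.60 and s["growth"] > 0:
            action = "grow"     # profitable and expanding: push for volume
        else:
            action = "defend"   # hold position and monitor
        sheet.append({"segment": s["name"], "action": action})
    return sheet
```

In a live operating model, agents would prepare such a sheet continuously from emerging data, and human managers would approve or amend it within the predefined guardrails.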
04
Insurance domain expertise must be embedded into AI agents
EIOPA’s observation of a heavy reliance on third-party AI providers underscores a critical point: generic, one-size-fits-all AI solutions are insufficient for a heavily regulated, highly specialized industry like insurance. Insurers will gain greater value (and control) by embedding deep insurance-domain knowledge directly into their AI tools and models. At WTW, we are designing AI solutions with insurance-specific expertise at the core. We continue to invest in:
- Insurance domain-specific AI systems that understand the unique characteristics of insurance processes, decisions, and data.
- Deterministic models and rules blended with Gen AI reasoning: combining more traditional insurance analytics (which are deterministic in nature) with generative AI provides consistency, supports regulatory compliance, and enables human oversight.
- Embedded governance, explainability, and compliance capabilities built into AI systems from the ground up.
- Advanced model monitoring and auditability, so that AI decisions can always be tracked and verified.
This combination of insurance domain-centric design and controls ensures accuracy, regulatory alignment, and the ability to apply human oversight, ultimately providing material differentiation from competitors.
For example, a specific AI pricing agent could be designed with insurance-specific knowledge (such as recognising retention elasticity or catastrophe exposure levels) to detect risk-adjusted underwriting margin drift. It could then suggest targeted actions at a segment level (adjusting rates, changing underwriting rules, revising risk appetite) that are consistent with capital, reserving, and regulatory requirements. In this way, the AI is not just technically sophisticated, but also “speaks insurance” by incorporating the same domain expertise that human specialists apply.
05
Governance and controls must shift to more frequent and more action-oriented oversight
As AI becomes ingrained in decision-making, traditional periodic governance will need to give way to embedded, real-time oversight of AI activities. We strongly agree with EIOPA’s emphasis on updating governance frameworks for the age of AI. The next phase of AI governance will be continuous and agent-driven. In practice, future control environments are likely to include:
- AI agents for risk and compliance – for example, critic agents, guardrail agents, and compliance agents continuously monitoring AI activities in real time.
- Automated, continuous checks on data privacy, policy thresholds, regulatory compliance, and financial limits – ensuring that AI systems operate within predefined bounds and any deviation is instantly flagged.
- Comprehensive logging of AI actions for transparency and auditability, so that every AI-driven decision can be traced and reviewed by humans at any time.
Simply put, static quarterly or annual governance processes will no longer suffice for organisations using AI at scale. Governance must become a built-in part of the AI’s operation.
For example, consider a scenario in which every portfolio action proposed by an AI agent – such as recommending a 2.5% rate increase for a particular microsegment – is automatically subjected to a series of policy and limit checks before implementation. Each recommendation would generate an audit trail accessible to second-line oversight teams and regulators. This real-time control mechanism ensures that AI-driven decisions comply with all regulatory and business rules and standards, allowing human reviewers to focus only on exceptions or breaches.
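A minimal sketch of such a pre-implementation check, with hypothetical policy limits, approved segments, and an in-memory audit trail (a production system would persist the log for second-line and regulatory review):

```python
import datetime

# Hypothetical guardrails: rate changes are capped and only approved
# segments may be changed without escalation.
MAX_RATE_CHANGE = 0.05
APPROVED_SEGMENTS = {"motor_urban_young", "home_coastal"}

def check_and_log(proposal, audit_log):
    """Run policy checks on an agent proposal, log the outcome, return pass/fail."""
    breaches = []
    if abs(proposal["rate_change"]) > MAX_RATE_CHANGE:
        breaches.append("rate change exceeds policy limit")
    if proposal["segment"] not in APPROVED_SEGMENTS:
        breaches.append("segment not on approved list")
    audit_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "proposal": proposal,
        "breaches": breaches,
        "status": "auto-approved" if not breaches else "escalated",
    })
    return not breaches
```

Every proposal, compliant or not, leaves an audit entry; only breaching proposals are escalated to human reviewers.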
06
Shift to direct agent-to-agent communication for easier integration
Many insurers rely on multiple third-party technology providers, which can make integration slow and cumbersome. We believe agent-to-agent communication will increasingly replace traditional APIs and middleware, reducing friction between systems. In an AI-native integration model where AI agents communicate directly with one another, legacy systems, cloud services, and cross-functional workflows can interoperate seamlessly. This shift promises lower integration costs, faster deployment of new tools and models, and improved quality control across the enterprise.
For example, imagine a monitoring AI agent flags an uptick in slippage in the Actual vs Expected (AvE) loss ratio. It could automatically trigger a cascade of agent-to-agent interactions: a refit agent recalibrates the demand model, and a deployment agent pushes updated rate recommendations to the pricing engine, subject to human approval. All these steps occur within a controlled, fully logged pipeline where integration happens behind the scenes through agent interactions, while humans maintain ultimate governance and oversight.
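The cascade can be sketched as plain function-to-function message passing; the agent names, tolerance, and message fields below are illustrative, not a real protocol:

```python
def monitor_agent(ave_slippage):
    """Emit an event only when AvE loss-ratio slippage breaches a tolerance."""
    TOLERANCE = 0.03  # assumed tolerance
    if ave_slippage > TOLERANCE:
        return {"event": "ave_slippage", "value": ave_slippage}
    return None

def refit_agent(event):
    """Recalibrate the demand model (stubbed here) and hand off a recommendation."""
    return {"recommendation": "rate_update", "uplift": round(event["value"] / 2, 4)}

def deployment_agent(recommendation, human_approved, pipeline_log):
    """Push the update only with human sign-off; every step is logged."""
    pipeline_log.append({"step": "deploy", "rec": recommendation, "approved": human_approved})
    return human_approved

def run_cascade(ave_slippage, human_approved, pipeline_log):
    event = monitor_agent(ave_slippage)
    if event is None:
        return False  # no breach, nothing propagates
    rec = refit_agent(event)
    return deployment_agent(rec, human_approved, pipeline_log)
```

The integration surface is just the messages the agents exchange; humans remain the final gate before anything is deployed.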
07
Insurance analytics software must embrace AI
EIOPA notes the growing importance of reliable vendor tools in supporting AI adoption.
In this context, Radar™ 5, the latest version of WTW’s market-leading pricing and analytics software, enables insurers to integrate Gen AI features in a governed, industry-specific manner. It:
- Offers rapid, transparent insights into business performance, allowing users to quickly identify changes in business KPIs and predictive models.
- Automatically identifies and clearly explains portfolio shifts, highlighting emerging trends or anomalies in the book that may require management action.
- Provides AI-augmented model development assistance, offering underwriters and actuaries a “co-pilot” to interpret complex model results and suggest data-driven next steps.
08
Deep expertise becomes more, not less, valuable in the AI era
Far from replacing human expertise, the rise of AI makes deep insurance knowledge and analytical assessment more critical than ever. As routine tasks are automated, an organization’s true competitive advantage will lie in uniquely human strengths such as:
- Interpretation of nuanced trends and model outputs.
- Expert assessment in areas of ambiguity or ethical consideration.
- Governance and oversight of automated processes.
- Strategic decision making that sets long-term direction.