Article | Willis Research Network Newsletter

Beyond our imagination: How Generative AI promises to reshape scenario analysis in the insurance industry

By Jessica Boyd and Cameron Rye | May 15, 2024

Recent advances in generative AI promise to change how (re)insurers approach scenario development. Given the rapid evolution of this technology, what might the future hold?

Scenarios are narratives about how the future might unfold, designed to raise awareness and stimulate discussion among stakeholders. In the (re)insurance industry, scenario analysis is a cornerstone of risk management, crucial for understanding tail risks, identifying emerging risks, strategic planning, and managing risk aggregations.

Peter Schwartz, an early pioneer of scenario planning, likens the use of scenarios to “rehearsing the future”[1], where the objective is to run through (or practice) simulated events as if we are already living them. Much like rehearsing a theatre production, scenario development requires the collaborative effort of numerous individuals and days, weeks, or months of refinement before the scenarios are ready for their intended audience. This traditional approach is notably time-consuming and resource-intensive.

However, over the past 18 months, advances in Generative Artificial Intelligence (AI) tools, including Large Language Models (LLMs), have enabled the rapid generation of numerous scenario narratives across a wide range of disciplines. This raises important questions for the (re)insurance industry: Could scenarios generated by AI be beneficial? Do these scenarios make logical sense? What are the potential limitations? And given the rapid development of this technology, what might the future hold?

An example of a failure of imagination was evident during Hurricane Katrina in 2005, when the levees protecting New Orleans failed, resulting in devastating flooding and nearly 2,000 fatalities. Despite the known risk of levee breaches in New Orleans prior to the event[3], such scenarios were not incorporated into the catastrophe models used for risk management at the time. As a result, many (re)insurers unwittingly held large concentrations of flood exposure in the city, which translated into substantial losses when the levees failed, making Katrina the costliest insured event on record at the time.

This problem stems from limitations of the human brain. Human thinking is riddled with cognitive biases[4] that skew our judgment. Our ability to imagine potential future outcomes is constrained by the availability bias, which causes us to overestimate the likelihood of events that are more memorable; the recency bias, which draws too heavily on the most recent experiences; and the hot hand fallacy, whereby a string of successes leads to an overestimation of future success. But the point of scenario development is to imagine the unimaginable – but possible – future events. How can we achieve this with brains that are inherently wired to cling to the familiar?

Furthermore, while using LLMs can help avoid introducing human cognitive biases, scenarios produced by generative AI may inadvertently reflect biases present in their training data or model design. And although LLMs can produce scenario narratives, they currently struggle with quantitative tasks, such as estimating losses or evaluating business impacts.

Given these caveats, many applications will necessitate an AI-assisted approach to scenario development. This process includes sense-checking and adjusting scenarios for specific business use cases, as well as translating narratives into measurable business impacts. LLMs should therefore be viewed as tools to assist with the heavy lifting of generating scenario narratives, rather than a turnkey solution.

It is also important to note that the quality and specificity of a prompt provided to an LLM can significantly influence the accuracy, relevance, and usefulness of the scenario produced. Investing time in prompt engineering – the practice of carefully crafting inputs to elicit the desired outputs from generative AI – is therefore vital. At WTW, we have been refining this practice to aid our insurance clients in developing a broad range of scenarios relevant to their exposures.
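As a simple illustration of this practice, a scenario prompt might be assembled from structured components before being sent to an LLM, so that the peril, region, and business constraints are always stated explicitly. The sketch below is hypothetical: the function name, fields, and wording are illustrative assumptions, not WTW's actual prompts.

```python
# Hypothetical sketch of a structured prompt template for scenario generation.
# All field names and wording are illustrative assumptions.

def build_scenario_prompt(peril: str, region: str, line_of_business: str,
                          horizon_years: int, constraints: list[str]) -> str:
    """Assemble a structured scenario-generation prompt for an LLM."""
    constraint_text = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are assisting a (re)insurance risk team.\n"
        f"Write a plausible but severe scenario narrative for a {peril} "
        f"affecting {region} within the next {horizon_years} years.\n"
        f"Focus on impacts to the {line_of_business} line of business.\n"
        f"Constraints:\n{constraint_text}\n"
        f"Describe the chain of events, affected exposures, and secondary "
        f"effects, but do not estimate monetary losses."
    )

prompt = build_scenario_prompt(
    peril="hurricane with levee failure",
    region="the US Gulf Coast",
    line_of_business="commercial property",
    horizon_years=10,
    constraints=["Assume current building codes", "Include demand surge"],
)
print(prompt)
```

Templating of this kind makes prompts reviewable and repeatable: the same scenario request can be rerun with a different peril or region, and the constraints section gives a natural place to encode business-specific guardrails.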

What’s on the horizon?

AI is advancing quickly, with breakthroughs now spanning beyond language models to areas like weather forecasting, including hurricane landfall predictions[6]. It is entirely plausible that within a few years, AI will not only generate natural catastrophe scenario narratives but also produce synthetic hazard data for these scenarios, such as hurricane wind fields. Eventually, we might even see AI-generated catastrophe models capable of simulating probabilistic losses. The potential applications are as vast as they are exciting, and our engagement with this technology can unlock the door to new capabilities in catastrophe risk assessment.

Footnotes

  1. Schwartz, P. (1996). The Art of the Long View: Planning for the Future in an Uncertain World. New York: Currency Doubleday.
  2. Shell Scenarios.
  3. Louisiana State University, The Climate Change and Public Health Law Site, Hurricane Pam Exercise.
  4. Daffron (2023), How do you weigh a biased perception of risk?, WTW.
  5. OpenAI.
  6. Lam et al. (2023), Learning skillful medium-range global weather forecasting, Science.

Authors


Jessica Boyd
Head of Model Research, Willis Research Network

Cameron Rye
Head of Modelling Research and Innovation, Willis Research Network
