
Deep dive on flood

Willis Research Network Digital Dialogue

February 3, 2021

Bringing insights from the world of science into practical risk and resilience discussions.
Climate | Environmental
Climate Risk and Resilience

Welcome to the Willis Research Network (WRN) Digital Dialogues series. The WRN aims to bring insights from the world of science into practical discussions around risk and resilience.

“…we are delighted to introduce these Digital Dialogues, to continue the discussion with virtual expert panels…”

Hélène Galy
Managing Director
Willis Research Network

Throughout the year, we hold several events covering a wide range of topics, but these gatherings often leave many of us wanting to know more, so we are delighted to introduce these Digital Dialogues, to continue the discussion in a more digital way, with key questions tackled by a virtual panel of experts, both in-house and in our worldwide network of partners.

We hope that you find this series interesting and keep coming back as we add to it, and we would also be delighted if you wanted to suggest topics or questions to feed this discussion.

Hélène Galy
Managing Director
Willis Research Network



The Panel

Chris Kilsby
Professor of Hydrology and Climate Change, Newcastle University

Chris is Director of Civil and Geospatial Engineering and Professor of Hydrology and Climate Change in the School of Engineering at Newcastle University.


Shie-Yui Liong
Consultant to Tropical Marine Science Institute of National University of Singapore

Dr. Liong was with the Tropical Marine Science Institute of the National University of Singapore (NUS) for 15 years, until 1 September 2019, after spending about 20 years with the Department of Civil and Environmental Engineering of NUS.

Dr. Liong’s most recent research focus is on climate downscaling in Southeast Asia and deriving valuable information from the downscaled climate to evaluate the impacts of climate change on water resources, flooding, crop yields, etc.


Emma Raven
Head of Research and Development, JBA Risk Management

Dr Raven is Head of Research and Development at JBA Risk Management. She has sixteen years’ experience in flood modelling, statistical analysis, catastrophe model development, and climate change research.

Before joining JBA in 2011, Dr Raven gained her PhD in fluvial geomorphology and spent a further four years at Durham University specializing in flood patterns and correlations as a Willis Research Network Fellow. Most recently, Dr Raven has driven key projects exploring methods to incorporate climate change into industry data and tools, and to quantify the impact of climate change on flood risk. This work has included multiple academic collaborations around the world, as well as the development of JBA’s Climate Change Analytics – ground-breaking data designed to support long-term risk management.


Introduction from Nalan Senol Cabi, Flood Risks Hub, Willis Research Network

Nalan Senol Cabi

Climate change is a slow-onset process, and academics have been studying its impacts on flood risk for more than two decades.

As a result, the scientific community has been urging different industries to act, and providing guidance on measures to slow the impacts. Scientists have been joined by the insurance industry, which has been paying particular attention to the impact of climate change on natural disasters.

The industry’s motivations are obvious: it wants to quantify the risk, find out what it can do to optimise portfolios under the ‘new normal’, and be prepared for it. There are many reasons for that, but the main current one, I think, is regulatory requirements. Faced with the concrete consequences of burgeoning regulation, re/insurance companies and financial institutions need to further quantify their exposure to climate-related risks and to provide strategic plans to reduce that risk, so that they hold sufficient capital in reserve to address unexpected, more frequent, higher-intensity events.

Moreover, due to mandatory financial disclosures, some industries/companies are quite diligent about their actions towards climate-related risk mitigation. This has created a societal push within the market. Every company is feeling the pressure to be more transparent and to take material action.

Climate change impacts flood risk in various ways, but I believe the most important ones for the industry are the changing intensity and frequency of extreme weather events. How these changes affect losses and portfolio risk profiles are the two main questions. Therefore, right now, understanding these changes, quantifying them, finding sensible and credible adjustments to existing tools are industry priorities, at least for the short term.

Of course, the industry has been using catastrophe models to quantify nat-cat risk for some time now. Insurers are familiar with probabilistic event sets and realistic disaster scenarios. From this starting point, how can they adjust these tools in a sensible and credible way based on various climate change projections, validate their approaches, and reflect what the science is saying in a practical way in day-to-day business decisions?

Let’s get the views of our panel of experts.

Q1. What sort of climate-related changes in flooding are important for insurers to know about and understand?

Emma Raven

I work for JBA Risk Management and we've been producing climate change models and data specifically for the insurance industry for a couple of years.

The industry wants to understand what impact the changes will have on aggregate loss. How might their annual average loss change in the coming decades? How might that change over time, and how does it differ between low, medium and high emissions scenarios?

Insurers are also particularly interested in location-specific information: which locations might be more or less susceptible to changes in flood risk due to climate change, and how might that vary depending on the different types of flooding?

We're being asked for quite specific detail on what might happen, and whether we can quantify the changes in flood risk we expect in the future, specifically at property and postcode level. There are also changes in defences, local individual protection and adaptation measures. How might those influence the expected changes in average annual losses or location-specific climate change impacts?

Chris Kilsby

I take Nalan’s point that the key change in attitude seems to have been the introduction of the regulatory stress tests. These have a longer-term perspective than the annual or two-to-five-year view of risk.

To assess whole-portfolio risk ‘now’, and how it might look at some ‘future’ time under hugely uncertain climate change, you need to identify spatially the risks driving your portfolio. You can start with three main flood perils: riverine flooding, surface water flooding and coastal flooding. Then you need to look at where the biggest changes in the frequency of these three hazard categories are likely to occur, because each will have a different profile under climate change.

Shie-Yui Liong

I think at this point the industry is concerned about the return period shifts in flooding due to climate change. For a given area, identify the return period of a given extreme rainfall intensity for the present day, and investigate what it would be 25 or 50 years from now, for instance. That return period shift information is what the industry needs in order to take the necessary mitigation measures.
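One way to make that return-period shift concrete is to fit an extreme-value distribution to present-day annual rainfall maxima and ask what return period the present-day 100-year intensity would have under a perturbed future distribution. The sketch below uses synthetic data, a Gumbel fit and an assumed 10% intensity uplift purely for illustration.

import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(42)

# Synthetic "observed" annual maximum daily rainfall (mm) -- illustrative only.
obs_annual_max = gumbel_r.rvs(loc=80, scale=20, size=50, random_state=rng)

loc, scale = gumbel_r.fit(obs_annual_max)

# Present-day 100-year rainfall intensity (annual exceedance probability 1/100).
x100_today = gumbel_r.isf(1 / 100, loc=loc, scale=scale)

# Assume a downscaled projection implies a ~10% uplift in intensities by 2050
# (scaling a Gumbel variable by 1.10 scales both parameters by 1.10).
future = gumbel_r(loc=1.10 * loc, scale=1.10 * scale)

# What return period does today's 100-year intensity have in the future climate?
p_future = future.sf(x100_today)
print(f"Today's 100-year rainfall: {x100_today:.0f} mm")
print(f"Return period of that intensity in the assumed future climate: "
      f"{1 / p_future:.0f} years")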

Q2: What variables should we look at to extract the climate change signal and calculate the impact on various types of flood risk?

Chris Kilsby

This is exactly why I’m keen to break things into categories, because there are different signals there. As Yui has already mentioned, rainfall is key for pluvial and fluvial flooding; changes in the rainfall signal are where we look. There is already a huge emphasis on monitoring and analysing rainfall, and this gives us the ability to look at different event durations, from 15 to 30 minutes up to one hour for pluvial flooding, all the way up to multi-day accumulations for large-basin fluvial flooding.
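A simple way to look across those event durations is to compute rolling accumulations of a high-resolution rainfall series over different windows and take the maximum for each. The sketch below uses a synthetic hourly series and arbitrary window lengths purely as an illustration.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly rainfall for one year (mm/hour) -- illustrative only.
hourly = rng.gamma(shape=0.1, scale=2.0, size=365 * 24)

def max_accumulation(series, window_hours):
    """Maximum rolling accumulation over the given window length (in hours)."""
    kernel = np.ones(window_hours)
    rolling = np.convolve(series, kernel, mode="valid")
    return rolling.max()

# Short windows matter for pluvial (surface water) flooding,
# multi-day accumulations for large-basin fluvial flooding.
for hours in (1, 6, 24, 96):
    print(f"max {hours:>3}-hour accumulation: "
          f"{max_accumulation(hourly, hours):6.1f} mm")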

For coastal flooding, sea level rise is pretty well established and monitored but that's not the real issue here. The issue is the storm surge and that has quite a localised effect, so I think that's rather a difficult area to generalize. I'm not proposing we just ignore river flow measurements, of course they're useful where we have them. To get those out of climate models, we have to put extra models in the chain to do it. There is a whole list of variables to track, but rainfall is key.

Shie-Yui Liong

I think Chris has already covered most of it. When we talk about coastal flooding, there are three main components to consider: sea level rise, storm surges and waves. In principle, if we do not have downscaled data for the domain of interest, we could consider the Intergovernmental Panel on Climate Change (IPCC) sea level rise projections for different time horizons. Unlike rainfall, sea level rise varies quite linearly from one grid point to the next. Also, the sea level rise impact is permanent, so once you have the data originating from different General Circulation Models (GCMs), that is enough to capture the anticipated sea level rises. The timescale for storm surge varies from a couple of days to two weeks or so, while waves act on a much shorter timescale. To assess coastal flooding, you can consider all three components, or focus on sea level rise and surges only.
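As a very rough illustration of combining those components, the sketch below adds an assumed sea level rise increment and a surge height on top of a tidal level and compares the result against a hypothetical defence crest. This is a simple additive screening under stated assumptions, not a coastal flood model, and all the numbers are invented.

# Illustrative additive screening of a coastal flood level (values in metres
# above a local datum, all purely hypothetical).
mean_sea_level = 0.00
tide_at_peak   = 1.40   # assumed high-tide level during the event
storm_surge    = 0.90   # assumed surge height
slr_2050       = 0.25   # assumed sea level rise by 2050 (scenario dependent)

defence_crest  = 2.30   # assumed local defence level

level_today = mean_sea_level + tide_at_peak + storm_surge
level_2050  = level_today + slr_2050        # SLR treated as a permanent offset

for label, level in (("today", level_today), ("2050 (assumed SLR)", level_2050)):
    freeboard = defence_crest - level
    print(f"{label}: water level {level:.2f} m, freeboard {freeboard:+.2f} m")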

As far as climate modelling is concerned, there are several challenges. The three most difficult are the selection of the Regional Climate Model (RCM) domain, the resolution, and the number of GCMs selected to be downscaled. First, the RCM domain needs to be large enough to cover the starting points of typhoons or hurricanes in order to simulate their impact properly. Second, we need a resolution finer than 20x20 kilometres; otherwise you need to apply further downscaling. Third, there are more than 40 GCMs, so you have to select and test some of them. As they have to be run on supercomputers, this is computationally expensive. Of course, you also have to consider different emission scenarios to create a range of possibilities.

We have been doing dynamical downscaling for almost the entire Southeast Asia domain (85E–125E; 15S–26N). In the latest version available, the spatial resolution is 20x20 kilometres, and the domain is not large enough to cover the starting points of typhoons. For two emission scenarios, Representative Concentration Pathway (RCP) 4.5 and 8.5, we downscaled three GCMs: Germany’s ECHAM, Japan’s MIROC and Australia’s ACCESS.
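To make the computational burden concrete, the sketch below simply enumerates the downscaling runs implied by a choice of GCMs and scenarios like the one just described; the time slices are hypothetical additions for illustration, and each combination stands for a separate multi-decade RCM simulation.

from itertools import product

# Experiment design: GCMs and scenarios follow the discussion above;
# the time slices and the overall design are illustrative assumptions.
gcms       = ["ECHAM", "MIROC", "ACCESS"]
scenarios  = ["RCP4.5", "RCP8.5"]
timeslices = ["1986-2005 (baseline)", "2040-2059", "2080-2099"]

runs = list(product(gcms, scenarios, timeslices))
for gcm, rcp, period in runs:
    print(f"RCM run: {gcm:7s} x {rcp:6s} x {period}")
print(f"\n{len(runs)} multi-decade RCM simulations, before any ensemble members "
      "or higher-resolution nests are added.")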

Emma Raven

To build on the points already made about rainfall data and the different types of flooding, we need to analyse rainfall at different resolutions to pick up different signals. For surface water events, we need to examine shorter-duration, sub-daily or hourly records, whereas for river flooding you need to look at accumulations of rainfall over longer periods and larger geographical areas. Rainfall is critical here, but whether you can identify a signal depends on how you analyse it.

When you look at river flows, there are quite a few studies looking at patterns and trying to pick up climate trends within river flow records. If you were to look at just the annual maximum flow, you might find one trend or pattern, but if you were to look at the daily time series, you might find different changes in frequency or in severity. Climate change impacts might manifest themselves in different ways: in frequency, in severity or, more likely, in a combination of the two. The resolution and scale of the data you’re analysing, and how you analyse it, are key to capturing climate change signals.
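For example, with a synthetic daily flow series and an arbitrary high-flow threshold, a severity-style statistic (the annual maximum) and a frequency-style statistic (the count of threshold exceedances per year) can move quite differently, which is why the choice of statistic matters. Everything below is invented for illustration.

import numpy as np

rng = np.random.default_rng(7)

# Synthetic daily river flow for 30 years (arbitrary units), with an assumed
# mild gradual increase applied in later years purely for illustration.
years, days = 30, 365
base   = rng.gamma(shape=2.0, scale=10.0, size=(years, days))
uplift = 1 + 0.01 * np.arange(years)[:, None]
flows  = base * uplift

annual_max = flows.max(axis=1)                           # severity-style statistic
threshold  = np.quantile(flows, 0.99)                    # arbitrary high-flow threshold
exceedances_per_year = (flows > threshold).sum(axis=1)   # frequency-style statistic

print("trend in annual maxima   :", np.polyfit(np.arange(years), annual_max, 1)[0])
print("trend in exceedance count:", np.polyfit(np.arange(years), exceedances_per_year, 1)[0])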

Q3: To what extent should we be relying on physically based models rather than observational data? What value does the historical data still have?

Shie-Yui Liong

We were commissioned to do a climate change study for the government of Singapore, using both climate projection data and historical data. Both are equally important, as they complement each other.

For the regional climate model (RCM) we used the Weather Research and Forecasting (WRF) model. We calibrated WRF with reanalysis data, a “best estimate of the observed climate”, and verified its projected present-day climate against locally observed climate data. Only then is the reasonably well calibrated and validated RCM used to downscale GCMs. Which GCMs are selected depends on how well their projected present-day climate matches the observed climate.

For the ‘future’, you simply run the RCM with the already “filtered” GCMs under different emission scenarios (e.g. RCP 8.5). I think that’s the way to use the observational data for the RCM. Once you have that, the rest is almost smooth sailing: you get the boundary forcing for your RCM from the GCM under consideration.
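The workflow Dr. Liong outlines can be summarised as a short pipeline. The sketch below is schematic: the functions are trivial stand-ins for very large modelling steps, not WRF commands, and are included only to make the calibrate-validate-downscale sequence explicit.

# Schematic sketch of the calibrate-validate-downscale workflow described above.
# Every function here is a placeholder for a much larger modelling step.

def calibrate_rcm(reanalysis):
    return {"calibrated_against": reanalysis}

def run_present_day(rcm, gcm):
    return f"{gcm}-present"      # stand-in for a downscaled present-day climate

def matches_observations(simulated, station_obs):
    return True                  # stand-in for a skill assessment against observations

def run_future(rcm, gcm, scenario):
    return f"{gcm}-{scenario}"   # stand-in for a downscaled future climate

def downscaling_workflow(gcm_candidates, reanalysis, station_obs, scenario):
    # 1. Calibrate the RCM (e.g. WRF) against reanalysis data.
    rcm = calibrate_rcm(reanalysis)
    # 2./3. Keep only GCMs whose downscaled present-day climate matches observations.
    selected = [g for g in gcm_candidates
                if matches_observations(run_present_day(rcm, g), station_obs)]
    # 4. Run the RCM with boundary forcing from each selected GCM under the scenario.
    return [run_future(rcm, g, scenario) for g in selected]

print(downscaling_workflow(["ECHAM", "MIROC", "ACCESS"],
                           "reanalysis data", "station records", "RCP8.5"))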

Emma Raven

We still need that present-day view of risk: a baseline against which to make comparisons before even looking at the climate view. Historical records offer many benefits. They provide data against which to validate our models, they underpin statistical flood frequency estimation, and extreme value methods work particularly well with historical records if they are long. We’ve already mentioned resolution, and historical records can give us good resolution for capturing sub-daily rainfall, which climate models are not yet doing.

The problem is that historical data represents a period of time in the past and that might not be representative of today. Does the climate over the past 30 years or more really reflect today’s climate? That's the real challenge.

When looking at flood frequency estimation, there is a tension between getting decent statistical estimates from long records and incorporating more of a past that isn’t reflective of today.

Moving on to global climate models, the fact that they can give us insights into future projections is crucial, but the challenge is that we’ve got thousands of different scenarios to choose from, and decisions to make over which models, which scenarios and which time periods to use. These are questions bouncing around the industry right now. Most likely, we will want multi-model, multi-member ensembles, and then suddenly you have lots of extra data to deal with and build into models. There can be difficulties around commercial-use access too. Let’s say you’ve got all of that sorted out; you still need to handle the ensemble question. Do you just take the mean of them all, or do you use some more sophisticated algorithm for weighting different ensemble members? How do you build that into your methods?
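That ensemble question, a plain mean versus some weighting scheme, can be written down very simply. The sketch below compares an equal-weight average with a skill-weighted one, using made-up projected changes and made-up skill scores.

import numpy as np

# Hypothetical projected changes in a flood metric (% change) from five ensemble
# members, and made-up skill scores from present-day validation (higher = better).
changes = np.array([ 8.0, 15.0, 22.0,  5.0, 30.0])
skill   = np.array([ 0.9,  0.7,  0.5,  0.8,  0.3])

equal_weight  = changes.mean()
skill_weights = skill / skill.sum()
weighted      = np.sum(skill_weights * changes)

print(f"equal-weight ensemble mean  : {equal_weight:5.1f}%")
print(f"skill-weighted ensemble mean: {weighted:5.1f}%")
print(f"ensemble range              : {changes.min():.1f}% to {changes.max():.1f}%")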

So, I think there are pros and cons of both datasets. Traditionally, we've always used historical records and there still is a lot of validity in using historical records. There is a shift now towards using climate models because you can't use historical records for the future.

Chris Kilsby

Yui and Emma have made the key points already. Basically, at the heart of this, there is a dilemma: observational data, as Emma just said, are not a reliable guide to the future. I think there is a quite alarming trend of researchers trying to extrapolate records into the future; we simply can’t do that, the uncertainties are huge.

Looking at long records and looking for trends, I think, is an interesting first exercise, but it's no more than that. I don't think it's the basis for future projection.

Emma knows this because, I remember, many years ago when she was working in the Willis Research Network she was looking at these flood-rich and flood-poor periods. That’s the way the atmosphere, and flooding, behaves: we get shifts of regime. We shouldn’t be looking for trends. There is no logic in looking for a linear trend in flooding, flood risk, flood intensity or rainfall. We can look for a steady trend in temperature or sea level rise, but the atmosphere just doesn’t work like that with extremes and rainfall; we tend to get regimes that jump up and down. There’s no evidence, in general, for linear trends, and I think it’s very dangerous to think about it in that way.

That sounds as though I’ve dismissed observational records, but no, we need them. Yui and Emma pointed out that they are the baseline for a start, but more importantly they’re the only data we’ve got that are real, so we use them for validation. Secondly, they are the only datasets we’ve got with high enough resolution in time for pluvial flooding and in space for fluvial and pluvial flooding.

Looking at the climate models, the dilemma here is that they are the only source of information we can get for the future, but they’re not accurate enough. They can’t currently reproduce the observed climatology and the intensities at the resolution we need. We need to look at hourly data to reproduce the extremes well enough in space and time.

Twenty to 25 years ago, climate models were hopeless. We had 300-kilometre grid squares that just averaged everything out; they really were bad. They’ve improved enormously. Regional climate models, which are available for some but not all parts of the world, are performing a lot more reliably and accurately. The issue we’ve got at the moment is that, even though these models are running at 1.5 to 2-kilometre resolution and are producing extreme rainfall from thunderstorms better than previously, we are only using them for 10- or 30-year simulations because they are so expensive to run. So we’re still stuck without enough accurate, reliable data from climate models, and we need to use ensembles to represent all the uncertainties and possible emissions scenarios.

So, what's the answer to the “exam question”? To what extent should we rely on models? We need both models and observed data. What’s emerged and is getting stronger and stronger is that the insurance industry and the cat modelling industry are going down the route of mixing these climate model outputs with the statistics from the observed data. I think the powerful way of combining them is in a stochastic framework with a stochastic rainfall model to put together an events set which has the characteristics of the observed data, where it's important, but then we perturb them with the important characteristics we can get from the climate models. There are many ways of doing that, but I think the answer is getting the reliable information from the observed data and the climate model outputs and combining them in a good way.

My favourite way is to use large event sets in a Monte Carlo framework to cover all the variability and uncertainties. You need to use long, continuous data in simulations to capture the longer and larger events, the history of the event and the antecedent rainfall. So, my answer to the question is that we’ve got to rely on both of them, but choose the information from the observations and from the climate models and use it in a smart way. That means a lot of computational power is needed, but that’s what big computers are for, so we shouldn’t be scared of doing that now!
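A minimal sketch of the Monte Carlo idea Chris describes: sample many simulation years from an event set by drawing event occurrences from their annual rates, then read off the distribution of annual losses. The event table, rates and losses below are invented for illustration only.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stochastic event set: annual occurrence rates and losses.
rates  = np.array([0.20, 0.05, 0.01, 0.002])
losses = np.array([1e6, 6e6, 2.5e7, 1.0e8])

n_years = 100_000
# Number of occurrences of each event in each simulated year (Poisson sampling).
counts = rng.poisson(lam=rates, size=(n_years, rates.size))
annual_loss = counts @ losses

aal        = annual_loss.mean()
loss_200yr = np.quantile(annual_loss, 1 - 1 / 200)   # 1-in-200-year annual loss

print(f"simulated AAL            : {aal:,.0f}")
print(f"1-in-200-year annual loss: {loss_200yr:,.0f}")
# Perturbing the rates and losses with climate-model change factors and
# re-running gives a climate-adjusted view of the same metrics.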

Q4: What are the largest sources of uncertainty in climate models?

Chris Kilsby

We have been using climate model outputs, such as UKCP09 and UKCP18, where there is a wealth of data which has been validated and set up for Europe or for the UK. Other parts of the world may not be as well served with data.

There is a well-known cascade of uncertainty sources. To start with, 50 years into the future we don’t know what our emissions will be; that can easily make a difference of a factor of two or three. So the usual approach is to do multiple future projections for different emissions pathways.

The next biggest uncertainty is, I think, from the climate model parameterizations. Again, you could get a factor of two quite easily, looking at differently set up climate models.

Another source we usually ascribe uncertainty to is natural climate variability, where we could have a 10-year period which is flood rich, followed by a 10-year period which is flood poor. So we can get another factor of two, three, four, even 10 from there.

That's where the uncertainties come from and the wicked problem is that we have to deal with all of them. We can't just eliminate them; we must live with them. That's why one of these stochastic frameworks with large event sets is a possible way, an expensive but necessary way, of dealing with it. We cover the whole uncertainty range by looking at things in this way. But that's what the stress tests and everything else are asking for, I think; it all fits in.

I’d also mention briefly the credibility and confidence that come with each climate model, because there are different models out there. The question is, do you use them all? Political consensus says “yes, use them all”, but we know some of them are not as good as others. This is quite a difficult problem to deal with: do we cherry-pick the ones that perform well, or throw them all in? That’s a political decision and an institutional question.

Shie-Yui Liong

I just want to add a few words to what Chris has already said. In principle, whether you consider an ensemble or not, for each RCM simulation driven by a GCM you need to specify the uncertainty bounds, i.e. the upper and lower limits. You also need to execute RCM runs for various emission scenarios, RCP 4.5, RCP 8.5 and so on, and the above needs to be done for various GCMs. On the ensemble, you can assign equal weighting to every member, if you wish. At least with these uncertainty bounds and an equally weighted ensemble we know the range of uncertainties, and we should provide that useful information to insurance companies.
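As a minimal illustration of that bookkeeping, the sketch below takes hypothetical downscaled changes for each GCM-driven run per scenario, assigns equal weight, and reports the ensemble mean together with its upper and lower bounds. The percentage changes are invented for illustration.

import numpy as np

# Hypothetical projected changes (% change in extreme rainfall) for each
# GCM-driven RCM run, per emission scenario. Numbers are illustrative only.
runs = {
    "RCP4.5": {"ECHAM": 6.0, "MIROC": 11.0, "ACCESS": 9.0},
    "RCP8.5": {"ECHAM": 14.0, "MIROC": 24.0, "ACCESS": 18.0},
}

for scenario, by_gcm in runs.items():
    values = np.array(list(by_gcm.values()))
    # Equal weighting: the ensemble mean, with min/max as the uncertainty bounds.
    print(f"{scenario}: mean {values.mean():5.1f}%, "
          f"bounds {values.min():.1f}% to {values.max():.1f}%")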

Also worth noting is the temporal resolution of the data used to drive the RCM. Six-hourly is the highest temporal resolution you can get from the hosting institutions, such as ECHAM from Germany’s Max Planck Institute; that’s the best they could share with us. We can of course get GCM data from the IPCC Data Distribution Centre, but its resolution is monthly, and monthly data is not meaningful for flood analysis.

Emma Raven

Uncertainty in cat modelling is already an important topic, and we are now adding the extra uncertainty associated with climate change. It is critical that the industry recognises, as we start to produce climate change data, that it is not a black-and-white view of what will happen; there is going to be a wide spread of outcomes that we need to consider. I’m working with my team on what Chris was talking about, getting that climate data into stochastic event sets. There is uncertainty built into many of the decisions we make in that process, and every scenario we consider can give a whole new event set and whole new model output. It’s also crucial that users don’t misinterpret it or use it in an incorrect way – what does the model you are using represent, what isn’t included and what are the main limitations?

Q5. To what extent is science able to attribute a heavy rainfall event/flood to climate change?

Shie-Yui Liong

I think you could, but this goes back to what we discussed earlier. Your RCM, the regional climate model, must cover the domain where the typhoon originated, and the spatial resolution has to be fine enough. We have tried using 20x20 km outputs, but that is not fine enough to attribute a heavy rainfall event to climate change; you need to downscale further to a much higher resolution, such as 10x10 km or even 8x8 km. Then again, it is a computationally very expensive exercise. How long can you run it? Can you afford to run it all the way to 2050 or even 2100? Remember, because of the uncertainty issues mentioned above, one has to run several GCMs. This can be a long journey.

Chris Kilsby

Attribution of a single event: this is something that’s been investigated quite a lot. The first question would be, why would the insurance industry want to attribute an event or probability of a single event to climate change? Is it a useful thing to do or is it just a headline?

Second point, and I think we’re probably all on the same page here: it’s mad to try to attribute a single event to climate change. The question we should be asking is, what fraction of the risk, or of the probability of that event occurring, has changed due to climate change? Nonetheless, a couple of research groups have done a lot of work on this, attributing a fraction of the risk to climate change. So, mechanically, using climate models, it can be done. It’s only as reliable as the climate models are.

My view is attribution of a single event may not be that useful or informative.
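The “fraction of the risk” framing Chris mentions is usually expressed as the fraction of attributable risk, FAR = 1 - p0/p1, where p0 is the event’s probability in a counterfactual climate without human influence and p1 its probability in the factual, present-day climate. A toy calculation with invented probabilities:

# Toy fraction-of-attributable-risk calculation with invented probabilities.
p_counterfactual = 0.010   # annual exceedance probability without human influence
p_factual        = 0.025   # annual exceedance probability in today's climate

far        = 1 - p_counterfactual / p_factual
risk_ratio = p_factual / p_counterfactual

print(f"risk ratio: {risk_ratio:.1f}x more likely")
print(f"fraction of attributable risk: {far:.0%}")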

Emma Raven

Chris has answered the question for me. We were involved in a project with the Centre for Ecology & Hydrology (CEH) and Oxford University that looked at this exact question for the winter 2013-14 floods, to identify whether that event was more likely because of greenhouse gas emissions. We ran models (climate, hydrological and our cat model) to simulate a pre-industrial climate and compared the probability of getting that event with the probability under the current climate. We later extended that research to also consider the impact on property risk.

Chris raises a very good point about whether event attribution is useful – it’s interesting but what benefit do you gain and what relevance does it have in industry applications? It might be useful for validation. If you know to what extent an event was more likely to have been driven by a changing climate, you might then use that perhaps as a way to help validate or understand your climate cat models, for example.

Contact

Nalan Senol Cabi
Flood Research Manager, Willis Research Network

Nalan Senol Cabi is the Flood Risks Hub Leader in the Willis Research Network (WRN) and is responsible for translating academic research into insights that support decision-making and business needs.

She has worked extensively in the flood risk arena for more than a decade as a catastrophe model developer and as a civil/water resources engineer. She joined Willis Towers Watson in 2016. In her previous role at Willis Re, she was responsible for interpreting and assessing catastrophe risk models related to flood. She provided expertise on model validation of vendor and in-house flood models, and she provided knowledge and direction for client-specific bespoke solutions to understand and manage flood risk in international territories.

