Podcast

Data science risk

(Re)thinking Insurance: Season 3 episode 12

October 20, 2023

In this episode, Kartina Tahir Thomson and Pardeep Bassi talk about the risks and challenges an insurer faces when embedding data science into their organisation.

They discuss what we are currently observing in the market and what firms should be considering and/or doing when integrating data science, as well as some of the pitfalls that many have run into.

(Re)Thinking Insurance Podcast Season 3, Episode 12: Data science risk

Transcript for this episode:

PARDEEP BASSI: So data science is spreading. You can't stop it. Your ability to control and govern it is going to be your competitive advantage.

SPEAKER: You're listening to (Re)thinking Insurance, a podcast series from WTW where we discuss the issues facing P&C, life, and composite insurers around the globe, as well as exploring the latest tools, techniques, and innovations that will help you rethink insurance.

KARTINA TAHIR THOMSON: Hello, everyone. Welcome to (Re)thinking Insurance. I'm Kartina Tahir Thomson, and I'm your host for today. The topic we're discussing today is data science risks. And I'm delighted to be joined by Pardeep Bassi, data science proposition lead at WTW. Welcome, Pardeep.

PARDEEP BASSI: Hi, everyone.

KARTINA TAHIR THOMSON: So we're going to talk about the risks associated with data science and what firms should be considering as part of their work on technology, data, and models. So let's put everything in context, right? What should firms be considering, and what are you seeing in the market?

PARDEEP BASSI: So what we're seeing at the moment is that the rate at which data science techniques are developing and being adopted is increasing. And it's increasing at a faster rate than we can develop our understanding of risk governance and ethics. And what's making all of this worse is that we have two groups within most insurers: those who are practising the latest techniques but don't necessarily understand all the insurance risk frameworks, and those who understand the frameworks but not the latest techniques.

That leaves insurers themselves, as well as individuals within insurers, exposed to risks. So striking the right balance of governance and control, whilst still allowing the adoption of data science and the value it brings, is effectively the magic middle ground that insurers are aiming for.

KARTINA TAHIR THOMSON: And I think I identify with that, and I'm seeing it at the moment as well. And regulators are also trying to catch up with this. What are you seeing there in terms of what regulation is doing? And how is it looking at balancing what you just said?

PARDEEP BASSI: Existing risk frameworks have key components. One of those components is the regulatory aspect. Others include insurance risk, credit risk and customer vulnerability, just to name a few. What we're seeing is that data science-specific risks feed into many of these. And it's the ability to understand which of the data science risks are existing risks that have become more prevalent, and which are new risks that you need to consider.

I think to really bring this to life, one of the key risks to consider is, as more and more complex models are used, how do you address the bias problem? So let's start by understanding what exactly bias is.

Bias is differentiating individuals or groups based on particular characteristics. And you need to understand what's causing that bias. Is it the data that you have, which isn't representative of the entire population? Or is it human decision-making that has fed into the creation of the data your AI and machine learning models are being trained on? Or is it the inherent model form you're using, which is adding to existing biases or creating new ones?

And it's the ability to understand where the bias is coming from that allows you to define the appropriate measure, monitor it and mitigate the risk.

KARTINA TAHIR THOMSON: I think that's a really good point. And bias is something that I think is overlooked by firms at the moment. So can you talk about the stages at which firms should consider bias?

PARDEEP BASSI: So what usually happens is you have your group of data scientists building models, and bias is considered as an afterthought. Whereas what you really need to do is think about it at every stage of the model-building process: when you first explore your data, when you build your model, and at the very end when you take the outputs of the model to inform a business decision. At each of those three stages, you need specific measures and monitoring in place to pick up the forms of bias most applicable to the model form and to the decision you're trying to make.
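
To make those three stages concrete, here is a minimal Python sketch of the kind of checks that could sit at each one. It is an illustration only, not a description of any particular firm's approach: the data, the protected characteristic called "group" and the referral threshold are all hypothetical.

    # A minimal sketch of bias checks at three stages of the model-building process:
    # (1) data exploration, (2) model building, (3) the business decision.
    # The data, the "group" attribute and the threshold are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])  # protected characteristic
    # risk_score stands in for the output of a fitted model.
    risk_score = rng.normal(loc=np.where(group == "B", 0.55, 0.50), scale=0.1)

    # Stage 1 - data exploration: is each group adequately represented?
    for g in ("A", "B"):
        print(f"group {g}: {np.mean(group == g):.1%} of the training data")

    # Stage 2 - model building: compare average model output by group.
    mean_a = risk_score[group == "A"].mean()
    mean_b = risk_score[group == "B"].mean()
    print(f"mean score A={mean_a:.3f}, B={mean_b:.3f}, ratio={mean_b / mean_a:.2f}")

    # Stage 3 - business decision: how do outcomes differ by group once the
    # score is turned into a decision, e.g. referral for manual review?
    threshold = 0.60  # hypothetical referral threshold
    for g in ("A", "B"):
        rate = np.mean(risk_score[group == g] > threshold)
        print(f"group {g}: {rate:.1%} referred for manual review")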

KARTINA TAHIR THOMSON: So when you talked about bias, it made me think about transparency, and transparency in using data. Can you talk a bit more about what you're seeing there and what firms should be considering when they think about transparency in the context of risk?

PARDEEP BASSI: So the need for transparency within insurance could be a regulatory need. Or it could be the need to explain decisions to customers. Or it could be the need to understand certain measures, such as bias, to make internal decisions and ensure that you're strategically aligned to business goals. And where this leads us is that you have to pick the most appropriate algorithm for your need, to get the right balance between interpretability and transparency as well as predictive power.

So insurance is quite a unique problem, and we've seen some custom algorithms being developed. Layered GBMs are a great example, where you have the full transparency to meet the identified need but also the predictive power that gradient boosting machines give you.
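
The layered GBM referred to here is a WTW-specific algorithm and is not reproduced below. Purely as a generic sketch of how the interpretability side of that trade-off can be probed on a standard gradient boosting machine, the following uses scikit-learn's GradientBoostingRegressor on synthetic data; the features and the cost formula are invented for the example.

    # A generic sketch (not the layered GBM mentioned above): fit a standard GBM
    # on synthetic data, then probe it with feature importances and a one-way
    # sensitivity check that varies one feature while holding the rest at their median.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    n = 5_000
    X = np.column_stack([
        rng.uniform(18, 80, n),     # hypothetical feature: driver age
        rng.uniform(0, 30, n),      # hypothetical feature: vehicle age
        rng.uniform(0, 40_000, n),  # hypothetical feature: annual mileage
    ])
    # Synthetic claim cost with a non-linear age effect plus noise.
    y = 500 + 8 * (X[:, 0] - 45) ** 2 / 45 + 0.01 * X[:, 2] + rng.normal(0, 50, n)

    model = GradientBoostingRegressor(random_state=0).fit(X, y)
    print("feature importances:", np.round(model.feature_importances_, 3))

    # One-way sensitivity: vary driver age over a grid, hold other features fixed.
    grid = np.linspace(18, 80, 7)
    probe = np.tile(np.median(X, axis=0), (len(grid), 1))
    probe[:, 0] = grid
    for age, pred in zip(grid, model.predict(probe)):
        print(f"driver age {age:5.1f} -> predicted cost {pred:8.1f}")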

KARTINA TAHIR THOMSON: That's great. So we've talked a lot about existing risks and what firms should be looking at. Let's talk about future or emerging risks. Can you talk a bit more about what firms should be considering in that context?

PARDEEP BASSI: Open source is an interesting one. It is already widely adopted, but the rate at which it's being adopted is increasing. It allows incredible flexibility and innovation but opens you up to an increasing number of risks. Of those risks, governance and security are key ones. There could be vulnerabilities in the code, or malicious code hiding in open source packages. Governance is one component and security is another, but you also need to think about the maintenance and support model around open source packages.

Key-person dependencies increase. The stability of the code is in your own hands. You need to ensure that you're meeting the business criticality needs. In addition to that, there's the compute cost, which also needs to be managed.
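
As one small, concrete example of that governance and maintenance point, the sketch below flags dependencies in a pip-style requirements file that are not pinned to an exact version, which is one of the simplest controls a team can put around the stability of its open source stack. The file path is hypothetical, and in practice a check like this would sit alongside dedicated vulnerability-scanning tooling rather than replace it.

    # A minimal sketch: flag open source dependencies that are not pinned to an
    # exact version in a pip-style requirements file. The path is hypothetical.
    from pathlib import Path

    REQUIREMENTS = Path("requirements.txt")  # assumed location for illustration

    def unpinned_dependencies(path: Path) -> list[str]:
        """Return requirement lines that do not pin an exact version with '=='."""
        flagged = []
        for line in path.read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            if "==" not in line:
                flagged.append(line)
        return flagged

    if __name__ == "__main__":
        for req in unpinned_dependencies(REQUIREMENTS):
            print(f"Unpinned dependency (review before production use): {req}")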

KARTINA TAHIR THOMSON: So you mentioned considerations in terms of allowing this within governance and controls, essentially. So what are you seeing? And what are you suggesting people do when they consider writing software as part of their models?

PARDEEP BASSI: I think you really need to understand where you add the value. So if there is a latest technique, and you're willing to take some risk to gain a competitive advantage, you can do that using open source. But there's no reason for you to take that risk throughout the whole process. So it's about understanding where open source gives you an advantage, and how you can integrate that open source with proprietary software to provide you with governance and control and get the right balance.

KARTINA TAHIR THOMSON: OK. Are there any other future or emerging risks that should be considered?

PARDEEP BASSI: We can't not talk about large language models. This is a perfect example of technology moving very quickly and being adopted absolutely everywhere, while the governance, risk and control frameworks we have haven't kept pace. One of the key risks with using large language models is data privacy and IP.

You could potentially lose sensitive and proprietary data through the use of large language models. You have no control over where that data is used, now or in the future. So it could potentially be used to build future models and help competitors. The potential loss of IP is huge.

The second component is hallucinations. These could be driven by bad prompts to the large language model, but they could also come from inherent weaknesses in the model itself, giving you results that are wrong but delivered with a lot of certainty.

An insurance-specific example would be where images are used to assess the extent of damage after a motor accident. Deepfakes could be used by bad actors to game the system. So having the right controls in place there, from an insurance perspective, becomes even more critical.

KARTINA TAHIR THOMSON: So what you just said there made me think that it has a massive impact on the risk profile of organisations, right? It impacts competition risks, and it impacts reputational risks. How do you think that fits into the regulatory aspect of the whole market and what regulators are expecting firms to do?

PARDEEP BASSI: So your ability to compete in this new world means you can't not adopt data science, but you need to do it in the right way. Regulators are increasingly looking at this. And one really good example would be the EU AI Act. Large language models came out and really became the hot topic at the same time as the EU AI Act was being discussed and developed. And now the EU AI Act has a specific component dealing with large language models.

And I think the takeaway message here is, rather than sitting at the receiving end of regulation and acting only on what's being enforced on you, if you think about it from a principles-based perspective, asking what we should be doing by design in terms of all of our processes, our people, and our systems, you can set yourself up to meet not only current regulatory requirements but also future ones.

KARTINA TAHIR THOMSON: Great. So we've talked a lot about existing, emerging and future risks. So let's talk about what firms can do to address them. Can you give us a few points?

PARDEEP BASSI: So I think there needs to be a gradual evolution of governance, to ensure you have the right oversight and compliance with regulation as well as with internal requirements and restrictions, but also an evolution of the risk management framework, not only to capture new risks but emerging ones too. And one of the key things to help with all of this is a clear definition of roles and responsibilities. So we really need to think about who is making what decision, to ensure we have the right accountability, visibility, and challenge to decisions at every level.

So if your data scientists are making decisions, do they need to wear the hat of an SMCR material risk taker? That's the kind of question you need to be asking.

KARTINA TAHIR THOMSON: So it sounds like everybody is accountable for this topic, right? From the board of directors all the way to data handlers. So firms need to be able to flex their governance processes to allow that accountability to take place. Is that fair?

PARDEEP BASSI: Definitely. There needs to be a level of flexibility but also education throughout the organization that this is important, and everybody needs to contribute.

KARTINA TAHIR THOMSON: Great. So we're coming to the end of it. In summary, this is a very important topic, and firms should really take it into account as part of their risk profile. It runs from dependencies between different risks all the way to reputational risks. But underlying all of this is the importance of governance, right? So if there's one thing you would like the audience to take away, what would it be?

PARDEEP BASSI: So data science is spreading. You can't stop it. Your ability to control and govern it is going to be your competitive advantage.

KARTINA TAHIR THOMSON: Thank you, Pardeep. That was really great and very insightful.

SPEAKER: Thank you for joining us for this WTW podcast featuring the latest perspectives on the intersection of people, capital, and risk.

Podcast host


Kartina Tahir Thomson
Senior Director

Kartina is a Fellow of the Institute and Faculty of Actuaries (IFoA) with 25 years’ experience in actuarial, risk, governance and regulatory roles. She is the President-Elect of the IFoA and holds a number of senior positions in the actuarial profession in the UK and globally.


Podcast guests


Pardeep Bassi
Global Proposition Lead, Data Science

Pardeep joined the Insurance Consulting and Technology (ICT) division of WTW in 2022. In his current role as Global Proposition Lead for Data Science, he is responsible for driving WTW's increased focus on growing data science from a software and consulting perspective.

