It doesn't even feature in the top half of the results; it really isn't ranked very highly. It doesn't feature in the top seven for any region or for any industry.
If we contrast that with data loss and cyber attack: data loss is the number two result overall this year, and it has been at number one or number two, in any case among the top results, for many years, as has cyber attack. Cyber attack is number three overall this year. Its position fluctuates more from region to region, but both risks are in the top seven for all regions.
Overall, though, AI is not: it is in the bottom half of the table and it isn't in the top seven for any region. We also asked respondents not just to rank these different risks, but to consider the extent to which they thought board directors have the necessary skills and knowledge to provide effective oversight of a number of areas, including AI.
Then we asked them which of those areas they thought the board needed to spend more time on, and how material they thought each of those areas is to the business. Now, AI actually features at the bottom of the table for this selection of risks as well: only 48% of respondents thought that the board had sufficient knowledge and skills to provide effective management or oversight of AI.
And it features second lowest among the areas where respondents thought more time is needed. So despite only 48% saying the board has the knowledge to manage this, respondents still don't think the board needs to spend more time on it. It was also the lowest result of these areas in terms of materiality of risk, so they didn't even think AI was material to the business.
So having given that lead-in, Hannah, what is your reaction to those as a set of results?
HANNAH TINDAL: I'm definitely a bit surprised by those results. There's been no shortage of press around the use of artificial intelligence and the risks and rewards of artificial intelligence. And with the survey results around cyber, AI is a natural parallel: it's top of mind for shareholders and regulators, and so it definitely should be top of mind for boards.
ANGUS DUNCAN: So as a D&O underwriter, how important do you think this is as an issue when you're doing your D&O underwriting?
HANNAH TINDAL: It's definitely something that we're monitoring very closely. We've got a lot of regulatory movement. Once that framework is set and regulators really start on the enforcement side, there's a lot of concern about whether boards are able to react and be set up for it, and really on a global basis, because we've got the EU leading the charge on that.
They've been at the forefront, starting their regulation in 2024. We've also got pockets of the United States where various states have very specific regulations, so regardless of what's happening at the federal level, there are a lot of frameworks in place. So it's definitely something that we're watching very closely, and we need to have a good understanding that directors and officers are not just considering it, but are also prepared to react to it.
ANGUS DUNCAN: So if we can dig into that in a little bit more detail: what is it about AI? What are the risks to do with AI that you think should be impacting D&O underwriters? If I can just highlight some of the things in the press that people are talking about as risks for directors.
So we've got allegations about failure to disclose the use of AI in decision making, and misrepresentations regarding AI. That's the opposite: instead of failing to disclose it, it's AI washing, saying that you are using AI when in fact you're not. Then there's failure to manage the use of AI within the business. For example, we've seen cases regarding discrimination where AI was being used to filter which CVs came through, and it turned out that that was discriminatory.
Failure to manage the use of AI so as to avoid system failure: if you are, in fact, using AI, making sure as a director that you're looking at the system failure issues that could come from that, or tainting of the data, to make sure that the data is being maintained. Enhanced data privacy risks, that's another one that people talk about. And failure to ensure that you've actually got insurance for the AI risk. So are those the things that you're thinking about, or are any of those ones specifically more concerning for you?
HANNAH TINDAL: All of the above are definitely considerations. I think it's about understanding how a corporation is using and deploying it, making sure that they're deploying it in a way that makes sense for the organization and that it's not deployed too broadly before it's had the time to work through any potential issues that could come in.
It can have biases built into it, so you need to make sure that it's in an environment that's testing those biases, and that those are being assessed appropriately, before there's mass deployment. It really touches on so many things.
As you mentioned, it can be a human resources concern or an employment discrimination concern, all the way up to relying on facts that aren't necessarily facts and that already have biases built in, because you're not utilizing the AI tool broadly enough to fact-check itself. So there's definitely a broad range of risk factors that we're considering, and it touches on pretty much every component of the business and its operations.
ANGUS DUNCAN: So looking at AI washing more specifically, I always think of AI washing in the same context as greenwashing. And for greenwashing, there was lots of regulatory action, lots of action against companies, but in fact, it hasn't led to lots of claims against directors and officers. So in my head, I do wonder whether or not AI washing could be the same thing.
Lots of lawyers talk about it. They're really focused on it. But they were also really focused on the prospect of greenwashing as an exposure for directors and officers. So I do have in my head this question about whether directors and officers in the survey are right to say, well, right now it's not a big risk, it's not giving rise to claims against directors and officers. Do you agree with that, or do you think, in fact, there is more reason to be concerned about AI washing?
HANNAH TINDAL: I definitely think AI washing is a real concern. It's tangible right now. We do have securities class actions on AI with a broad range of allegations, from misrepresentation to over-representation. So I think boards are going to have to be very cautious and specific in their disclosures around the usage and implementation of AI, and the benefits and risks of that implementation.
ANGUS DUNCAN: So you've already basically said you are aware of claims, because there are claims out there, securities class actions, and that obviously could impact the D&O portfolio. Obviously we're in this very soft D&O market, so is it something that is leading you to offer different terms for certain clients, or is that just simply not possible at the moment with the market the way it is?
HANNAH TINDAL: I think the market, the way it is, is quite competitive, of course, and what we're trying to do right now is really make sure that we're educated on the topic and we're monitoring it very closely. If an insured was very bold and came out very strong, and we thought that was going to lead to further complications, then you might see an adjustment in terms and conditions. But I think more broadly, we're all in an education phase.
So we're just making sure that we understand our organizations: how they're utilizing it, their disclosures around how they're utilizing it, and also that regulatory component, really monitoring how the regulators are going to react to it. The enforcement of that is going to be key, I think, and that's probably what's also going to drive change among underwriters.
So we do tend to lag behind some of the issues. But I think that we're all very keenly watching the area and making sure that we have an understanding about where it's headed.
ANGUS DUNCAN: Thank you, Hannah. And thank you, everybody, for joining us. As you can see, the results from the survey really do show that people at the moment are not giving AI a huge amount of priority. But for underwriters this is a concern: notwithstanding where the market's at, it is something that the market is spending a lot of time on and thinking about.
So it is something that, as an insurance market, I think people would say directors and officers probably should be spending more of their time thinking about than perhaps they are currently. Thank you for joining us.