GPU computing has existed for many years in the modelling world, but recent investments in AI have brought the technology into the spotlight. Have those investments meant a rebirth of the technology? Can it deliver on claims to make financial reporting hundreds of times faster or cheaper? The second article in our technology series explores this further.
Central processing units (CPUs) are the "chips" that have traditionally powered desktop computers and servers. They have a small number of very fast and highly flexible processors that can tackle a wide range of problems. With cloud computing making thousands of CPUs available to insurers, workloads can easily scale to run thousands of times faster for the same total cost.
Graphics processing units (GPUs) are another type of chip. They contain large arrays of low-powered processors – known as cores – that perform the same calculation in parallel across large datasets. In effect, they’re a low-powered and hence simpler and slower “grid on a card” that is used alongside your CPUs.
They were first developed 25 years ago, primarily to support real-time computer graphics for films or games. But they have become more widely used, including in high-performance computing, Bitcoin mining and, more recently, artificial intelligence (AI), which now consumes almost all new GPU cards produced.
The needs of each of these purposes are subtly different, and GPU development follows market demands. The industry measures calculation precision in significant figures (s.f.), reflecting how accurately numbers are stored. Scientific and financial uses often need a higher level of precision (16 s.f.) than computer graphics (7 s.f.). For AI, a very low level of precision suffices (4 s.f.); this is where most of the investment and performance gains are happening.
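These precision tiers map onto the standard floating-point formats (64-bit, 32-bit and 16-bit). As an illustrative sketch, not taken from the article, NumPy can show how many decimal digits each format reliably stores, and what happens when a calculation exceeds that precision:

```python
import numpy as np

# The article's precision tiers correspond to standard floating-point formats:
# 64-bit (~15-16 s.f., scientific/financial), 32-bit (~6-7 s.f., graphics)
# and 16-bit (~3-4 s.f., AI).
for dtype in (np.float64, np.float32, np.float16):
    info = np.finfo(dtype)
    print(f"{np.dtype(dtype).name}: ~{info.precision} reliable decimal digits")

# A concrete effect: a small increment survives at 64-bit but is lost at 16-bit.
print(np.float64(1.0) + np.float64(1e-4))  # 1.0001
print(np.float16(1.0) + np.float16(1e-4))  # 1.0 (the increment is below precision)
```

For financial reporting, losses like the one in the last line are exactly why precision requirements matter when choosing hardware.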
Actuarial models are typically large parallel loops, over data or scenarios, that distribute well.
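A minimal sketch of that shape (the cashflow logic and names here are illustrative, not from any actual actuarial model): an outer loop over independent scenarios, each iteration of which could run on a separate core.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def project_scenario(discount_rate: float, n_years: int = 40) -> float:
    """Toy projection: present value of a unit cashflow paid each year."""
    years = np.arange(1, n_years + 1)
    return float(np.sum((1.0 + discount_rate) ** -years))

# Each iteration is independent of the others, so the loop distributes
# naturally across the cores of a CPU grid or a GPU.
scenario_rates = rng.normal(loc=0.03, scale=0.01, size=1_000)
results = [project_scenario(r) for r in scenario_rates]
print(f"mean PV across {len(results)} scenarios: {np.mean(results):.2f}")
```

Because no scenario depends on another, the work can be split across as many cores as are available, which is what makes both CPU grids and GPUs candidates for running it.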
Both CPU grids (or clouds) and GPUs are massive forms of parallelisation. With appropriate code, both can perform general calculations and distribute them to deliver results 1,000x faster than a single core. Elapsed time ("speed") is therefore comparable under both setups.
Typically, GPU cards are cheaper than CPU grids with an equivalent number of cores, which means there's potential for cost savings. However, the GPU cores are simpler and slower, which can offset those savings. They also have a more complex architecture and less memory to use, which can cause calculation bottlenecks. How do these constraints balance against the cost savings?
The question is not really whether a GPU can do the calculations, but whether using one gives you a cost benefit.
As a rough rule of thumb, one CPU core can do the work of 10 GPU cores for specific algorithms, but costs the same as 50 GPU cores, making GPUs roughly five times cheaper per unit of well-suited work. However, WTW's own analysis shows that calculations performed on GPU cards can be anywhere between 10x cheaper and 10x more expensive than on a CPU grid, even for a well-built model.
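The rule-of-thumb arithmetic can be made explicit (figures are the ones quoted above; real ratios vary by algorithm and hardware):

```python
# Sketch of the rule-of-thumb arithmetic, using the illustrative figures above.
work_ratio = 10   # one CPU core does the work of ~10 GPU cores
cost_ratio = 50   # one CPU core costs the same as ~50 GPU cores
gpu_saving = cost_ratio / work_ratio
print(f"For well-suited work, GPU is ~{gpu_saving:.0f}x cheaper per unit of work")
```

When the work is poorly suited, the effective `work_ratio` rises sharply and the saving evaporates, which is how the same hardware can end up 10x more expensive.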
On its own, that range is not a helpful guideline. But it does illustrate that, when considering GPU, it's important not only to test your calculations, but to make sure that test includes all the complexity you need.
For example, the research shows GPUs perform best when a model or its data exhibit the following properties:

- the calculations tolerate lower numerical precision;
- the working data fits within the GPU's more limited memory;
- the logic is uniform across scenarios, with little conditional branching.
Different approaches can mitigate these constraints, such as reducing the precision of the calculations, repeating calculations rather than caching values, and refactoring logic into mathematics.
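As a hedged illustration of the last mitigation (the example and names are ours, not from the article), "refactoring logic into mathematics" can mean replacing a per-element branch with a branch-free arithmetic expression that every core executes identically:

```python
import numpy as np

values = np.array([120.0, 80.0, 150.0, 60.0])
cap = 100.0

# "Logic" version: a per-element branch, which parallel GPU cores execute poorly
capped_logic = np.array([v if v < cap else cap for v in values])

# "Mathematics" version: the same result as a single branch-free expression
capped_math = np.minimum(values, cap)

assert np.array_equal(capped_logic, capped_math)
print(capped_math)
```

The two versions are numerically identical, but the second keeps all cores doing the same operation at once, which is the execution pattern GPUs are built for.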
These paradigms are different to those used in traditional models, and it may feel uncomfortable to fit a model to the technology rather than to the business needs. More importantly, if you need to update your model, you may need to reconsider and reimplement the mitigations.
This points clearly to using commercial software to construct your GPU-based calculations, rather than building your own. The best tools here allow you to switch between CPU and GPU architectures and re-optimise for model changes without passing that task to the user.
With a number of factors needing to come together to make GPU cost-effective, it’s no wonder they’re not yet commonplace.
Because not all models run better on GPU, you need access to a range of hardware, which points to cloud computing.
Actuaries and modellers have also traditionally been very detail-focused. Commercially, they seek margins in capital requirements or policy management through more detailed risk analytics and more detailed models. They also seek more interactive capital models that better reflect ALM management policies. These model features suit CPUs better than GPUs.
However, we should consider whether simpler models used more frequently can give better risk management than more complex models run to a schedule.
And of course, using GPU within a modelling platform has a cost: the marginal costs of GPU hardware and potentially software licensing. If that modelling platform is new, or doesn't support hybrid technologies, you may also face a transition project cost.
Where GPU architectures fit closely to the model characteristics, such as Variable Annuity business, the argument for GPU is clear. The cost benefits can easily finance the transition effort.
Outside those domains, the case is not proven. Clients seeking GPU acceleration, but without a clear view of their appetite for change in detail, would do best to work with a platform that supports both technologies and to use cloud-based compute. Further, that platform should allow a single set of model code to run on both architectures.
This ensures that they can choose between both technologies based on their business (and hence model) requirements, and that they’re not locked in or constrained if their business requirements change.
This summary was correct as of November 2025.
We actively monitor developments in technology and research how they may be applied to financial modelling in a sustainable way. Talk to us to find out more about how this technology can help your business and how we are incorporating GPUs into our own technology solutions.
Email software.enquiries@wtwco.com and we’ll connect you to our local experts.