Most insurers continue to rely on equation-based algorithms to develop pricing even as new advanced analytics and machine learning techniques become readily available and promise more sophisticated pricing models.
The simplicity and interpretability of equation-based algorithms are their main advantages. They are easier for IT systems to implement, require less demanding computing resources and are compatible with existing rating engines and policy administration systems. More importantly, such pricing is simpler to explain to key stakeholders, namely regulators, agents and brokers.
Many advanced analytics methods are difficult to explain in an intuitive fashion. The National Association of Insurance Commissioners (NAIC) defines machine learning as “an automated process in which a system begins recognizing patterns without being specifically programmed to achieve a pre-determined result.” This is different from a standard algorithm in that “an algorithm is a process or set of rules executed to solve an equation or problem in a pre-determined fashion.”
While the NAIC has a working group to help regulators better understand machine learning and artificial intelligence (AI), most regulators are currently not prepared to review such data-intensive methodologies. Until they are, machine learning and AI will likely aid in decision making, supplementing more traditional equation-based techniques such as generalized linear models (GLMs).
Some of the promises of machine learning include:

- Reducing the time needed to analyze and improve pricing models, even if you are limited to a more traditional approach for your ultimate pricing algorithm
- Increasing speed to market, a valuable advantage as insurers struggle to take rate quickly
- Enabling actuaries to focus on analytics and results instead of manual review
Let’s examine some common real-world scenarios that actuaries face to see how machine learning tools might help.
Many personal lines insurers are experiencing rapid deterioration in results, exacerbated by inflation. An insurer might need to change its underwriting rules, rate order calculation or rate on additional variables. Such changes could result in significant premium disruption, prompting questions from senior management as well as regulators and agents.
Instead of providing anecdotal examples, you can use a gradient boosting machine (GBM) to identify the drivers of dislocation, quantifying which variables have the largest impact.
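As a sketch of this idea, the example below fits a standard GBM (scikit-learn's GradientBoostingRegressor, not WTW's layered variant) to a synthetic book of policies, using each policy's hypothetical percentage premium change under the proposed rates as the target. The fitted feature importances then rank the drivers of dislocation. All column names and effect sizes here are illustrative assumptions, not real rating variables.

```python
# Sketch: ranking the drivers of premium dislocation with a GBM.
# The data and effect sizes are synthetic and purely illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5_000
policies = pd.DataFrame({
    "driver_age": rng.integers(18, 80, n),
    "vehicle_age": rng.integers(0, 20, n),
    "territory": rng.integers(1, 10, n),
    "prior_claims": rng.poisson(0.3, n),
})

# Hypothetical dislocation: percentage premium change under the new rates,
# here driven mostly by territory and prior claims
pct_change = (
    0.05 * policies["territory"]
    + 0.10 * policies["prior_claims"]
    + rng.normal(0, 0.02, n)
)

# Fit the GBM to the premium change, then rank the drivers
gbm = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
gbm.fit(policies, pct_change)

importance = pd.Series(gbm.feature_importances_, index=policies.columns)
print(importance.sort_values(ascending=False))
```

The same pattern works with the actual pre- and post-change premiums from a rate impact run: the target is each policy's dislocation, and the importances quantify which rating variables are responsible for it.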
As another example, a traditional personal lines pricing approach is to compare, for each coverage, the current relativity for a given rating factor against the indication from a predictive model and make selections, a process that is labor intensive.
As an alternative, we can use a GBM to prioritize which variables to study. For example, we can focus on the 20% of rating factors where the current relativity is furthest from the indication. This accelerates the modeling and decision-making process, improving speed to market, which is crucial in an inflationary environment.
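A minimal sketch of this triage step, using made-up relativities: compute the gap between the current and indicated relativity for each factor level, then keep only the top quintile of gaps for detailed review.

```python
# Sketch: flag the 20% of rating-factor levels whose current relativity is
# furthest from the model indication. Relativities here are illustrative.
import pandas as pd

relativities = pd.DataFrame({
    "factor_level": ["age_18-25", "age_26-40", "age_41-60", "terr_1", "terr_2"],
    "current":   [1.80, 1.10, 0.95, 1.00, 1.25],
    "indicated": [2.10, 1.05, 0.96, 1.30, 1.20],
})

# Relative gap between the selected and indicated relativity
relativities["gap"] = (relativities["current"] / relativities["indicated"] - 1).abs()

# Review only the worst 20% (top quintile of gaps)
cutoff = relativities["gap"].quantile(0.80)
to_review = relativities[relativities["gap"] >= cutoff]
print(to_review[["factor_level", "gap"]])
```

In practice the "indicated" column would come from the GBM or GLM indications for each factor level, and the review threshold can be tuned to the team's capacity.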
When building a GLM, discovering interactions can be challenging and time consuming. With a layered GBM, the modeling process can be streamlined to quickly discover important interactions.
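Short of a layered GBM, one simple way to confirm that interactions matter at all is to compare an additive GBM built from stumps (max_depth=1, which by construction cannot represent interactions) against a deeper GBM on the same data. The sketch below uses synthetic data where the response depends on a pure two-way interaction; the jump in out-of-sample R² is the tell. This is a generic diagnostic, not the layered-GBM method itself.

```python
# Sketch: detecting whether interactions matter by comparing an additive GBM
# (stumps, no interactions possible) against a deeper GBM. Synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 4_000
X = rng.normal(size=(n, 3))
# Response depends on x0 plus a pure x1*x2 interaction
y = X[:, 0] + X[:, 1] * X[:, 2] + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# max_depth=1 trees split on one variable at a time: strictly additive
additive = GradientBoostingRegressor(max_depth=1, n_estimators=300, random_state=0)
# max_depth=3 trees can combine variables within a tree: interactions allowed
deep = GradientBoostingRegressor(max_depth=3, n_estimators=300, random_state=0)
additive.fit(X_tr, y_tr)
deep.fit(X_tr, y_tr)

print(f"additive R^2: {additive.score(X_te, y_te):.2f}")
print(f"deeper   R^2: {deep.score(X_te, y_te):.2f}")
```

A large gap between the two scores says the book contains interactions worth hunting for; it does not yet say which pairs of variables are involved, which is where tools like a layered GBM come in.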
Suppose you are building a model to determine which agents' books of business should be audited. Figure 1 shows an example of factor importance without interactions being considered. Note that convictions are not a relevant variable.
With WTW’s patent-pending approach to interpretable AI, a layered GBM, the effect of interactions can be separated out. Figure 2 shows factor importance with interactions included. It becomes clear that convictions are an important variable, but only when interacted with other variables.
You can also look at factor importance with the interactions completely separated. As shown in Figure 3, agent tenure interacted with convictions is the second most important variable. This information can be taken back to the traditional GLM framework to build interactions directly rather than guessing which interactions might be predictive.
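Once a candidate interaction is identified, carrying it into a GLM is a one-column change to the design matrix. The sketch below fits a Poisson GLM (scikit-learn's PoissonRegressor standing in for a production rating GLM) on synthetic data with a hypothetical tenure-by-convictions interaction; the names, coefficients and data are all illustrative assumptions.

```python
# Sketch: adding a GBM-suggested interaction (tenure x convictions) to a
# Poisson frequency GLM. Data and coefficients are synthetic.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(2)
n = 5_000
tenure = rng.integers(1, 30, n).astype(float)
convictions = rng.poisson(0.2, n).astype(float)

# Hypothetical claim frequency with a true tenure x convictions interaction
lam = np.exp(-2.0 + 0.02 * tenure + 0.05 * tenure * convictions)
claims = rng.poisson(lam)

# Design matrix: main effects plus the interaction column flagged by the GBM
X = np.column_stack([tenure, convictions, tenure * convictions])
glm = PoissonRegressor(alpha=0.0, max_iter=1000).fit(X, claims)

names = ["tenure", "convictions", "tenure:convictions"]
print(dict(zip(names, glm.coef_.round(3))))
```

The interaction column lets the GLM recover an effect that the main effects alone would miss, which is exactly the information the layered GBM's separated factor importance is pointing at.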
As the above examples show, machine learning can be used to build better models and improve speed to market without directly being included in a pricing algorithm. Advanced techniques such as a layered GBM bring the predictive power of GBMs to key stakeholders from regulators to product management.