
Article series: Mastering the art of model approximation – Part 1

Setting the decision-making compass

By Cheryl Angstadt, Karen Grote and Nik Godon | December 2, 2021

The authors explore how precise life insurance actuaries need to be in their modeling simplifications and approximations.

There are no hard and fast rules on how granular or accurate actuarial models should be. The truth is it will depend on the circumstances. So how do actuarial and risk teams decide what is appropriate in a given situation? This is the first of a planned three-part series where we explore several of the factors actuaries need to consider, starting with some guiding principles.

Simplifications and approximations are common in modeling in the actuarial world, as almost everything we do is some form of estimate. Their use may be driven by time or resource constraints, or they may be a natural outcome of a lack of data or of limitations in the model itself. Inherently, there's nothing wrong with this standard industry practice, which balances pragmatism with precision.

But equally, there’s a danger that simplifications or approximations might not be fit for purpose or might go awry. Sometimes this may simply be the result of bad practice. Other times, problems can arise over time from not reflecting changes in the estimates used in models, resulting in simplifications going stale.

So, a common question asked by the actuarial teams we talk to is: How precise do we need to be in our modeling simplifications and approximations?

The trouble is the answer is not black and white, but rather it’s a distinct shade of gray. While there are various Actuarial Standards of Practice and papers on modeling best practices, none of them provide specifics about how accurate a model should be.


Effectively, it’s up to individual actuaries and insurers to develop and apply their own judgment. We hope the guiding principles that we’re sharing from our wide experience of working with multiple companies across multiple business lines will help.

Get your bearings

As with any project or journey, the first thing to determine is where you’re trying to get to. With the objective clear, models should be as complex, granular and sophisticated as needed to capture the nature of the risks to be modeled. Since the concept of proportionality applies, it follows that the structure of the model ought to reflect the nature, size and complexity of an insurer’s risk.

How might this work? Generally, a seriatim model is a good starting point from which to accept “compromises” in areas that pose little risk. One example might be compressing seriatim records, as long as companies ensure that the overall characteristics of the blocks have not been materially altered and that key areas of risk have not been muted. With variable annuity guaranteed minimum benefits and universal life secondary guarantees, for instance, it would be very easy for inappropriate compression to lose the in-the-moneyness risk of certain policies. Modelers should also “step through” each change in isolation to understand its impact and look for inappropriate offsets that may occur among the policies.
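To make this concrete, here is a minimal sketch of the kind of compression-with-validation check described above. The column names (account_value, guarantee_value, plan_code and so on), the in-the-moneyness buckets and the 2% drift check are illustrative assumptions, not a prescribed method.

```python
import pandas as pd

def compress_and_check(seriatim: pd.DataFrame, tolerance: float = 0.02) -> pd.DataFrame:
    """Compress seriatim records into model points, then verify key aggregates."""
    df = seriatim.copy()

    # Bucket policies by in-the-moneyness so that compression cannot blend
    # deep in-the-money guarantees with out-of-the-money ones.
    df["itm_ratio"] = df["guarantee_value"] / df["account_value"]
    df["itm_bucket"] = pd.cut(df["itm_ratio"], bins=[0.0, 0.8, 1.0, 1.2, float("inf")])

    keys = ["plan_code", "issue_year", "itm_bucket"]
    compressed = (
        df.groupby(keys, observed=True)
          .agg(policies=("policy_id", "count"),
               account_value=("account_value", "sum"),
               guarantee_value=("guarantee_value", "sum"))
          .reset_index()
    )

    # Static check: key totals should still reconcile to the seriatim file.
    for col in ("account_value", "guarantee_value"):
        drift = abs(compressed[col].sum() / df[col].sum() - 1.0)
        assert drift <= tolerance, f"{col} moved by {drift:.2%} after compression"
    return compressed
```

Stepping through one grouping choice at a time (for example, adding or removing itm_bucket from the grouping keys) is what reveals whether a given level of compression mutes the risk.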

For model approximations that cannot be quantified (as in the case of model limitations), actuaries will need to find a way to assess the implications and explore ways to address the shortcomings to the desired degree of accuracy. Whatever is done, solid documentation of the limitation, its potential impact and the chosen workaround is essential; however, documentation of the limitation cannot be the only step. Actuaries also need to continue refining the model estimates and to communicate to users of any generated results what the potential impact of the limitation may be.

Currently, many models are limited in their ability to project accurate principle-based reserves (PBR) over time, given the need for formulaic and deterministic/stochastic reserves as well as a roll forward of certain key assumptions (e.g., mortality improvement in the case of Chapter 20 of the National Association of Insurance Commissioners Valuation Manual [VM-20]). This often leads to a simplification of using only the formulaic reserves, or a gross-up of the formulaic reserves based on the relationship at the valuation date, which can result in either an aggressive or a conservative set of statutory reserve projections over time. Even without a gross-up, use of the formulaic reserve can still result in differing profit emergence. If actuaries are unable to improve the models and project the true PBR balances, they should document the limitation and work to estimate the potential prudence or aggressiveness at certain future points in time.
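As an illustration of how the gross-up simplification works (and why it can drift conservative or aggressive), the sketch below scales a projected formulaic reserve by the valuation-date ratio of the reported PBR balance to the formulaic reserve. The function name and figures are purely illustrative.

```python
def gross_up_reserves(formulaic_projection: list[float],
                      reported_pbr_at_valuation: float) -> list[float]:
    """Scale each projected formulaic reserve by the valuation-date PBR ratio."""
    ratio = reported_pbr_at_valuation / formulaic_projection[0]
    return [ratio * reserve for reserve in formulaic_projection]

# A 10% gross-up locked in at the valuation date is applied to every future
# period, even though the true deterministic/stochastic excess may shrink or grow.
projected = gross_up_reserves([100.0, 95.0, 88.0], reported_pbr_at_valuation=110.0)
# projected is approximately [110.0, 104.5, 96.8]
```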

Furthermore, just as actuaries need a clear idea and justification of which simplifications and approximations are appropriate when building a model, they also need to think about how things can change over time. This entails regularly reviewing compromises and adjusting them to recognize any changes in the risk they pose, not forgetting to document the adjustments and the rationale for them so that new users can understand why and how they were made. This becomes all the more important when actual model updates are needed.

The timing of such adjustments and updates will typically hinge on a strong testing regime. Testing frequency and methodology should be set, documented and adhered to, remembering that wherever possible, it is best to keep things simple. Complexity can increase the risk of error.

Another common mistake we see is insufficient model validation (both static and dynamic), such as not capturing the appropriate cash flows, or the timing of the cash flows, for the situation at hand. If something does not validate well, the actuary would be well advised to investigate and determine the cause before proceeding with simplifications or gross-ups. A useful rule of thumb is to address the real problem first, where it can be determined and fixed, and then assess the best way to simplify or approximate if that is the route one has to take.

A matter of degrees

If we view the above points as the road map for simplifications and approximations (understand the objective, start with seriatim, understand the potential impact of approximations, validate, update, document), the actual impact — negative or positive — will be a matter of the degree of inaccuracy a company can accept or is comfortable with. This, we would argue, should vary with the purpose of the modeling exercise, the target audience and who might see or use the modeling results.

External reporting, such as statutory or GAAP/IFRS reporting, will naturally require a higher degree of accuracy. External reporting will also commonly have materiality standards dictated by audit practices (e.g., a percentage of earnings or a percentage of capital). Conversely, certain internal exercises might have less need for accuracy. The potential for litigation is another factor to bear in mind in determining the accuracy required.


In the case of statutory reporting, which allows for conservatism, companies might use some conservative simplifications; an exception might be where capital is scarce. Another example is tax reserves, which are often stated as a percentage of statutory reserves (with a cash value floor), but use of that simple formula might not be appropriate if the assumptions underlying the statutory reserves are conservative.
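A minimal sketch of the shape of that simple tax reserve formula follows: a fixed percentage of the statutory reserve, floored at the cash surrender value. The percentage is deliberately left as a parameter; the point is only that conservative statutory assumptions flow straight through the shortcut.

```python
def simple_tax_reserve(stat_reserve: float, cash_value: float, pct: float) -> float:
    """Fixed percentage of the statutory reserve, floored at cash surrender value."""
    return max(cash_value, pct * stat_reserve)

# If the statutory reserve is deliberately conservative, pct * stat_reserve is
# conservative too, which is why the shortcut may not be usable in that case.
```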

Practical guidance

We think a plus or minus 2% tolerance to actual values is a reasonable guideline for static validation exercises; however, another consideration is that, in practice, percentage-based tolerances alone may not always work well. For large models, the implied dollar amount of inaccuracy may be too large. For small models, the degree of precision demanded may be too great. Instead, companies are likely to need a combination of measures; for example, a $5 million minimum and a maximum of plus or minus 2% of liability or $25 million. The dollar limits can be tailored to the particular exercise and intended audience, as well as to the degree of precision that is ultimately desired for the purpose.
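A short sketch of such a combined band is shown below, using the illustrative figures from the text (a $5 million floor, 2% of the reported liability and a $25 million cap); these are examples, not a standard.

```python
def within_tolerance(model_value: float, reported_value: float,
                     pct: float = 0.02, floor: float = 5e6, cap: float = 25e6) -> bool:
    """True if the model-to-reported difference sits inside the combined band."""
    allowed = max(floor, min(pct * abs(reported_value), cap))
    return abs(model_value - reported_value) <= allowed

# For a $2 billion liability, 2% alone would allow a $40 million miss, so the
# $25 million cap binds; for a $100 million block, the $5 million floor applies.
```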

Here, in fact, it's worth noting that VM-20 contains one of the few specific references to a targeted degree of precision in models: a collar concept under which starting assets (i.e., your modeled reserve) must be within plus or minus 2% of reported statutory reserves.

In particular, the 2% tolerance is also well suited to future projection accuracy, where projected cash flows or reserves are compared with those of a non-simplified model. When performing dynamic validations of items such as life insurance death claims, however, a plus or minus 2% tolerance will likely not work over a short time period; in those instances a wider tolerance for short time periods would be appropriate. To the extent projections deviate from historical cash flows, the actuary needs to determine what is driving the deviation and whether it is appropriate.
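As a sketch of that idea, the check below widens the tolerance for shorter observation windows, where random claim fluctuation dominates, and tightens back to 2% over longer horizons. The square-root-of-time widening rule and the 10% cap are purely illustrative assumptions.

```python
def claims_tolerance(months: int, long_run_pct: float = 0.02) -> float:
    """Wider tolerance for short windows, converging to the long-run 2%."""
    if months >= 60:
        return long_run_pct
    return min(0.10, long_run_pct * (60 / months) ** 0.5)

def validate_claims(actual: float, projected: float, months: int) -> bool:
    return abs(actual / projected - 1.0) <= claims_tolerance(months)

# A 6% miss over one quarter may be acceptable noise; the same miss over five years is not.
print(validate_claims(actual=10.6e6, projected=10.0e6, months=3))    # True (tolerance ~8.9%)
print(validate_claims(actual=106.0e6, projected=100.0e6, months=60)) # False (tolerance 2%)
```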

Another danger with simplifications and approximations is the possibility of offsetting errors when they are considered in aggregate. It's important to drill down by key segments of the business, or even to a policy level, to reconcile any potential discrepancies. Ideally, any differences (even those within tolerance) should have an explanation of what is driving them. The ultimate goal should be to drive toward the plus or minus 2% tolerance at an appropriate level of granularity, with seriatim accuracy being the ultimate level of accuracy. One should also avoid using true-ups/gross-ups merely to fall within these tolerance limits, and instead continue to refine accuracy until the tolerance goal is met.
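The short sketch below shows how a segment-level drill-down can expose offsets that an aggregate comparison hides. The segment labels and amounts are purely illustrative.

```python
import pandas as pd

def segment_differences(model: pd.Series, actual: pd.Series) -> pd.DataFrame:
    """Rank segments by the size of their model-versus-actual deviation."""
    diff = model - actual
    out = pd.DataFrame({"model": model, "actual": actual,
                        "diff": diff, "pct_diff": diff / actual})
    return out.sort_values("pct_diff", key=abs, ascending=False)

model = pd.Series({"Term": 480.0, "UL": 530.0, "VA": 1010.0})
actual = pd.Series({"Term": 500.0, "UL": 510.0, "VA": 1000.0})
print(segment_differences(model, actual))
# In aggregate the model is off by only 0.5% (2,020 vs. 2,010), comfortably
# within a 2% tolerance, yet Term is off by -4.0% and UL by +3.9%, an offset
# the aggregate view would never reveal.
```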

A road map for compromise

For actuaries and risk managers who are used to pursuing granularity and accuracy, model simplifications and approximations can be troubling. But their use is unavoidable and, in most cases, a business imperative.


While historical limitations on modeling time and granularity have largely gone away thanks to significantly more robust computing power and speed, achieving the greatest speed and accuracy may still not be affordable. Actuarial and risk teams will find the use of simplifications far more straightforward with a clear rationale, supported by robust documentation, of what is an appropriate approach and level for a range of risks and end users of model outputs.


The second article in this series will discuss the impracticality of a perfect modeling world and some common areas where simplification and accuracy issues arise.

