Professional Insight

The Future of Loss Reserving May Be “Outside the Triangle”

Loss reserving — the art-slash-science of property and casualty actuaries — can seem arcane to outsiders, even mystical. To mathematicians and actuaries, however, it is fairly straightforward.

The basic method, known as the chain-ladder, assumes the losses a company has incurred to date reveal how much more in losses the company will incur. Other popular methods are offshoots of that idea.
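
To make the mechanics concrete, here is a minimal chain-ladder sketch in Python on a made-up cumulative loss triangle; the figures and the four-year layout are purely illustrative, not data discussed at the session.

```python
# Minimal chain-ladder sketch on a hypothetical cumulative loss triangle.
# Rows are accident years, columns are development ages; None marks cells
# not yet observed. All figures are invented for illustration.

triangle = [
    [1000, 1800, 2100, 2200],   # oldest accident year, fully developed
    [1100, 2000, 2350, None],
    [ 950, 1700, None, None],
    [1200, None, None, None],   # newest accident year
]

n = len(triangle)

# Age-to-age ("link") ratios: for each development step, divide the sum of
# the later column by the sum of the earlier column, using only rows where
# both cells are observed.
factors = []
for j in range(n - 1):
    rows = [r for r in triangle if r[j] is not None and r[j + 1] is not None]
    factors.append(sum(r[j + 1] for r in rows) / sum(r[j] for r in rows))

# Project each accident year to ultimate by applying the remaining factors
# to the latest observed value.
for row in triangle:
    last = max(j for j, v in enumerate(row) if v is not None)
    ultimate = row[last]
    for f in factors[last:]:
        ultimate *= f
    print(f"latest {row[last]:>7.0f}  projected ultimate {ultimate:>8.0f}")
```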

After that, an actuary’s knowledge, skill and judgment find ways to hone the estimate. Much of the loss reserving craft depends on understanding nuances of the method and its brethren.

Are there better ways to estimate loss reserves?

A panel of property and casualty actuaries addressed the question at the CAS Centennial Celebration and Annual Meeting in New York in November.

The panelists had lots of help, with robust participation from an audience of more than 500 and the results of a free-form survey conducted in advance of the meeting.

Research indicates that actuarial reserving methods — built on the loss triangle familiar to most in the industry — tend to give cyclical answers, said Jessica Leong, FCAS, business execution lead for predictive analytics at Zurich North America. Estimates tend to be too high for several years, then too low for several. In both cases, early estimates are way off; they then stair-step toward the correct number.

To do a better job, Leong said, actuaries should look “outside the triangle.” They should bring in external information, much as economists consider a myriad of data to refine their forecasts. Panelists noted that having accurate information on exposures or rate changes improves an estimate, even if the information does not come from a company’s own data. More important, said panelist David Clark, FCAS, senior actuary at Munich Reinsurance America, Inc., is that the data act as a good predictor of events that drive estimates higher or lower.

Cost comes into play, said audience member Mary D. Miller, FCAS. Actuaries and management tend to invest in analytics for pricing, not reserving. A refined pricing model can maximize profitable business. A refined reserving model gets to the right answer faster, but it does not change the amount of losses incurred.

In part, a limited methodology hampers the reserving process, said panelist James Guszcza, FCAS, U.S. chief data scientist at Deloitte. Current methods were devised in the era of pencil-and-paper statistical analysis. In today’s era of open-source statistical computing packages and inexpensive computing power, there is no necessity for actuaries to restrict themselves to traditional methods.

Today it is practical to build sophisticated models using summarized triangle data as well as to analyze the individual claim-level data underlying loss triangles. When actuaries restrict themselves only to loss triangles, they are summarizing away information, Guszcza said.

Panelists offered three solutions. Leong suggested using a more sophisticated class of models known as generalized linear models (GLMs), which have become the preferred method of pricing insurance. These models allow the actuary to explicitly incorporate economic or other changes into an estimate.

The approach has other advantages. Mathematically, the traditional methods are a special case of a GLM, so property and casualty actuaries have a leg up in understanding it. And because GLMs have priced policies for years, executive management has heard of them, a fact that helps create buy-in.
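
That connection can be made concrete. The sketch below, which assumes Python with the pandas and statsmodels packages, fits an over-dispersed Poisson GLM with a log link to the incremental losses of the same hypothetical triangle as above; with only accident-year and development-age factors as predictors, the fitted future cells reproduce chain-ladder reserves.

```python
# Sketch: the chain ladder as a special case of a GLM. Data are the same
# hypothetical cumulative triangle as above, converted to incremental form.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

cumulative = [
    [1000, 1800, 2100, 2200],
    [1100, 2000, 2350, None],
    [ 950, 1700, None, None],
    [1200, None, None, None],
]

# One record per observed cell: accident year, development age, incremental loss.
records = []
for ay, row in enumerate(cumulative):
    prev = 0.0
    for dev, value in enumerate(row):
        if value is None:
            continue
        records.append({"ay": str(ay), "dev": str(dev), "incr": value - prev})
        prev = value
df = pd.DataFrame(records)

# Quasi- (over-dispersed) Poisson GLM: log link, factor effects only.
model = smf.glm("incr ~ C(ay) + C(dev)", data=df,
                family=sm.families.Poisson()).fit(scale="X2")

# Predict the unobserved future cells; their sum by accident year is the
# indicated reserve, and adding the latest diagonal gives the ultimate.
future_cells = []
for ay in range(4):
    last_dev = df.loc[df["ay"] == str(ay), "dev"].astype(int).max()
    future_cells.extend({"ay": str(ay), "dev": str(dev)}
                        for dev in range(last_dev + 1, 4))
future = pd.DataFrame(future_cells)
future["fitted"] = model.predict(future)
print(future.groupby("ay")["fitted"].sum())   # reserve by accident year
```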

Clark recommended that actuaries conduct research to find variables that predict shifts in loss reserves. He focused on latent variables, or elements that do not directly cause losses but that happen to be proportional to them.

Sometimes these can be hard to measure. Clark said social scientists, for example, try to study the results of a happy childhood, but struggle to figure out what one means by “happy.” So they ask a series of questions and shape the answers into a score.

In insurance, credit-based scores are classic latent variables. A low score correlates with a poor driving record. The scores do not directly cause a person to drive worse, but the higher the credit score, on average, the better the driver.

Clark has found that the calendar year loss ratio for commercial auto physical damage business is a good predictor for accident year commercial auto liability results, even though the latter takes much longer to play out. All of the external predictors that Clark suggested can be incorporated within the GLM framework that Leong introduced.
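
As a purely illustrative extension, the snippet below continues the GLM sketch above (same data frame and imports) and adds a hypothetical calendar-year loss-ratio series as one more predictor; the numbers are invented, not Clark's.

```python
# Continuing the reserving GLM sketch above (same df, imports). The external
# series is a hypothetical calendar-year loss ratio; calendar year equals
# accident year plus development age.
pd_loss_ratio = {0: 0.62, 1: 0.65, 2: 0.71, 3: 0.68, 4: 0.70, 5: 0.72, 6: 0.73}

df["cal"] = df["ay"].astype(int) + df["dev"].astype(int)
df["pd_lr"] = df["cal"].map(pd_loss_ratio)

# The external predictor simply joins the usual accident-year and
# development-age factors in the model formula.
model_ext = smf.glm("incr ~ C(ay) + C(dev) + pd_lr", data=df,
                    family=sm.families.Poisson()).fit(scale="X2")
print(model_ext.params["pd_lr"])  # the fitted pull of the external series
```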

Perhaps the most radical departure came from Guszcza. He recommended cultivating a more sophisticated mathematical approach, using what statisticians call Bayesian data analysis.

Bayesian approaches have been a growing trend in the statistical world since 1990, he said. They differ from standard approaches because they use probabilities to model all uncertain quantities in an analysis.

For example, a person predicting the next flip of a coin would weigh the information contained in the data (past flips of the coin) against the probability initially assigned as part of the analysis. Guszcza analogized judging the next flip of a coin that has been flipped only a handful of times to forecasting the future development of a cohort of insurance claims. In each case, the limited data available for analysis might not contain all of the information relevant for making the forecast. The Bayesian approach offers a formal framework for combining fresh data with prior knowledge.
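
A toy numerical version of the coin-flip analogy, with an invented prior and flip count, shows the mechanics: a Beta prior on the heads probability is updated by the handful of observed flips.

```python
# Toy Bayesian update for the coin-flip analogy. The Beta(10, 10) prior
# ("the coin is probably close to fair") and the five observed flips are
# invented for illustration.
heads_seen, flips_seen = 4, 5           # limited data: 4 heads in 5 flips
prior_a, prior_b = 10.0, 10.0           # prior belief about the heads probability

# Conjugate update: add observed heads and tails to the prior parameters.
post_a = prior_a + heads_seen
post_b = prior_b + (flips_seen - heads_seen)

prior_mean = prior_a / (prior_a + prior_b)   # 0.50, belief before the data
data_mean = heads_seen / flips_seen          # 0.80, what the sparse data say
posterior_mean = post_a / (post_a + post_b)  # 0.56, the blended forecast

print(prior_mean, data_mean, posterior_mean)
```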

Election prognosticators like Nate Silver use this method. They start with an econometric model that predicts an election, then update the prediction with polling information as it becomes available.

The resulting analysis would look familiar to an actuary, as it resembles credibility weighting.
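
To make the resemblance explicit: the Beta-binomial update in the coin-flip sketch can be rewritten as a credibility-weighted average of the data mean and the prior mean (standard notation, not the panel's).

$$
\hat{p} \;=\; \frac{a + k}{a + b + n}
\;=\; Z\,\frac{k}{n} \;+\; (1 - Z)\,\frac{a}{a+b},
\qquad Z = \frac{n}{n + a + b},
$$

where $k$ heads are observed in $n$ flips and the prior is $\mathrm{Beta}(a, b)$; in the example above, $Z = 5/25 = 0.2$ and $\hat{p} = 0.56$.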

“I’m not saying throw out the chain-ladder method,” Guszcza said. “The chain ladder is great.” But to improve the process, actuaries need to keep things “sophisticatedly simple,” meaning they should start off simple but be willing to add model structure as the situation demands. For example, Bayesian versions of the models Leong and Clark discussed are possible departures from the chain ladder or a Bayesian chain ladder. Guszcza pointed out that the great flexibility of Bayesian data analysis facilitates the approach of sophisticated simplicity.


James P. Lynch, FCAS, is chief actuary and director of research and information services for the Insurance Information Institute in New York.