Professional Insight

For a Quarter Century, Actuarial Research Has Led the Insurance Revolution

Remember 1991?

Two hundred fifty million people lived in the United States. Three hundred twenty million do now.

The Dow Jones Industrial Average sat at 2,500. Now it is more than 17,000.

Dan Quayle was vice president.

Looking back that far, it is easy to see how much has changed. The actuarial profession has changed, too.

There were 1,808 credentialed casualty actuaries in 1991. Now there are more than 7,000.

The profession has done more than just grow. Its research has remade the way that insurance companies price and monitor risk. Three actuarial veterans summarized the changes at the CAS Spring Meeting in Seattle in May in a session titled “Twenty-Five Years of Actuarial Research: Success and Open Problems.”

Stephen Mildenhall, FCAS, chairman of analytics at Aon Benfield, laid out four areas in which actuaries have made great strides. Veteran researchers Stephen Lowe, FCAS, a senior consultant at Willis Towers Watson, and Stephen D’Arcy, FCAS, a retired University of Illinois academic, added their insights.

Stephen Mildenhall, foreground, makes a point during the 2016 Spring Meeting session “Twenty-Five Years of Actuarial Research: Success and Open Problems.” In the background (left to right) are panelists CAS President Steve Lowe and Steve D’Arcy and moderator Benoit Carrier.

Lowe likened the rise of actuaries to the broader spread of data-driven analysis in fields as varied as baseball scouting and wine selection. “We are data-driven,” he said. “Others have clinical judgment.” In recent years, the analysts have beaten the clinicians. The Moneyball models outscout the baseball scouts.

According to Mildenhall, actuaries have made great strides in four areas: pricing, loss reserving, catastrophe risk modeling and the combined areas of enterprise risk management (ERM) and capital allocation. Of the four, he credited ERM and capital allocation with doing the most to enhance the understanding of risk.

Pricing

The most significant advance, Mildenhall said, was moving pricing from univariate to multivariate analysis. Though the distinction sounds technical, it has affected insurance profoundly.

Univariate analysis measures how one variable changes the results; e.g., a young driver is more likely to be in an accident than an older one.

In the early 1990s, most insurance was priced by a series of univariate analyses. In auto insurance, for example, actuaries looked at how age affected losses, how driving record affected losses or how much discount to give for increasing a deductible. But each variable was examined in isolation.

Multivariate analysis looks at all three at once and takes into account how the variables relate to each other; e.g., young drivers might deserve a different credit for raising a deductible than older drivers would.

Multivariate analysis, made possible by the growing power of computers and statistical software, improved the ability of actuaries to understand and price risk, Mildenhall said. The growing automation helped eliminate subtle, unconscious biases that could creep into rates when they were set judgmentally.
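To make the distinction concrete, here is a minimal sketch in Python of the two views side by side on invented data. The rating factors (a young-driver flag and a raised-deductible flag), the coefficients and the Poisson frequency model are illustrative assumptions, not any panelist’s actual method; the point is that a one-way view blurs a deductible credit that a GLM with an interaction term recovers.

```python
# Sketch: univariate vs. multivariate pricing analysis on invented data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 10_000
df = pd.DataFrame({
    "young": rng.integers(0, 2, n),            # 1 = young driver
    "high_deductible": rng.integers(0, 2, n),  # 1 = raised deductible
})

# Simulated claim counts: young drivers are riskier, and the deductible
# credit is worth less for them (an interaction a one-way view misses).
lam = 0.05 * np.exp(0.7 * df["young"]
                    - 0.3 * df["high_deductible"]
                    + 0.2 * df["young"] * df["high_deductible"])
df["claims"] = rng.poisson(lam)

# Univariate view: average claim frequency by each factor in isolation.
print(df.groupby("young")["claims"].mean())
print(df.groupby("high_deductible")["claims"].mean())

# Multivariate view: a Poisson GLM fits both factors plus their
# interaction simultaneously, so each relativity is net of the others.
model = smf.glm("claims ~ young * high_deductible", data=df,
                family=sm.families.Poisson()).fit()
print(model.params)  # log-relativities, including the interaction term
```

The one-way averages mix the two effects together; the GLM separates them and prices the interaction directly.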

But “it is an equivocal good,” Mildenhall said. He warned against embracing analysis that abandons the human touch.

“I hope we don’t move all the way to machine learning — to just throw it in the machine and see what comes out,” he said.

Loss Reserving

The biggest change, Mildenhall said, is the recognition that a reserve estimate is just that: an estimate. It exists within a range, and actuaries often use stochastic models to develop that range. Actuaries are also better able to test how well different reserving methods work.
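As a crude illustration of developing such a range, the sketch below resamples observed chain-ladder link ratios on an invented loss triangle to simulate a distribution of reserves. Production work would use a formal stochastic method such as Mack’s model or an overdispersed Poisson bootstrap; the triangle, the resampling scheme and the percentile choices are assumptions for illustration only.

```python
# Sketch: a reserve range via resampled chain-ladder link ratios.
import numpy as np

rng = np.random.default_rng(0)

# Invented cumulative paid triangle; rows = accident years,
# columns = development ages. NaN marks cells not yet observed.
tri = np.array([
    [100.0, 150.0, 170.0, 175.0],
    [110.0, 160.0, 185.0, np.nan],
    [120.0, 175.0, np.nan, np.nan],
    [130.0, np.nan, np.nan, np.nan],
])
n_dev = tri.shape[1]

# Observed age-to-age link ratios for each development period.
links = [tri[:, j + 1] / tri[:, j] for j in range(n_dev - 1)]
links = [l[~np.isnan(l)] for l in links]

def latest(row):
    """Index of the most recent observed development age."""
    return int(np.nonzero(~np.isnan(row))[0].max())

def simulate_reserve():
    """Project every open accident year with resampled link ratios."""
    reserve = 0.0
    for row in tri:
        j = latest(row)
        value = row[j]
        for k in range(j, n_dev - 1):
            value *= rng.choice(links[k])  # draw an observed ratio
        reserve += value - row[j]          # ultimate minus paid to date
    return reserve

sims = np.array([simulate_reserve() for _ in range(5000)])
print(f"mean reserve estimate: {sims.mean():.1f}")
print("5th-95th percentile range:", np.percentile(sims, [5, 95]).round(1))
```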

In the future, he said, actuaries are likely to look at how macroeconomic trends affect reserves, such as how falling gasoline prices in 2014 appeared to drive up the frequency of auto claims.

D’Arcy also noted that research should examine how inflation affects the emergence of losses.

Catastrophe Risk Modeling

In 1991, catastrophe risk modeling was in its infancy. Some models existed, but few companies used them; most relied on their own historical data to price risk. The next year, Hurricane Andrew, with its enormous losses, caught the industry by surprise. The methods in use at the time failed to capture the damage such a storm could inflict.

“It was the best possible advertisement for cat models,” said Mildenhall. Actuaries quickly folded them into their standard pricing tools.

There was a side benefit, Mildenhall said. The models required detailed, accurate data. Reinsurers — who were most at risk from a catastrophe — insisted on high-quality data, and they surcharged risks that lacked it. That spurred a data cleanup from which actuaries everywhere benefit.

In the future, he predicted, actuaries will expand the use of catastrophe models, particularly adapting them to handle new or nontraditional risks, like cyber liability.

D’Arcy agreed and added that the industry should focus less on perfecting property catastrophe models and more on modeling casualty catastrophes.

ERM and Capital Allocation

Most of the basics of understanding risk were in place in the early 1990s, but few people knew them, Mildenhall said.

Today, actuaries and other risk professionals have a better understanding of how providers of capital — shareholders — need to be compensated. That, in turn, has helped the industry focus on diversification from the perspective of shareholders, policyholders and regulators.

In the future, Mildenhall said, research will examine how risk tolerance differs between catastrophe risk and other risks; understanding both will help company management strike a better balance between the two.

Capital allocation is the key to effective ERM, D’Arcy said. For now, there is no universally accepted method of allocating capital. He recommended that actuaries look at several methods, then use their judgment to recommend a final allocation.
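A minimal sketch of that advice, on invented data: compute allocations under several common methods and set them side by side before applying judgment. The three lines of business, their loss distributions and correlations, the 1-in-200 capital standard and the choice of methods (standalone standard deviation, covariance with the total and co-TVaR) are all illustrative assumptions.

```python
# Sketch: compare several capital allocation methods on invented data.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Simulated annual losses for three correlated lines of business
# (lognormal, purely illustrative).
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
losses = np.exp(rng.multivariate_normal([3.0, 3.2, 2.8], corr, size=n))
total = losses.sum(axis=1)
capital = np.percentile(total, 99.5)  # total capital at a 1-in-200 level

def allocate(weights):
    """Scale nonnegative weights so they sum to total capital."""
    return weights / weights.sum() * capital

# Method 1: proportional to standalone standard deviation.
alloc_sd = allocate(losses.std(axis=0))

# Method 2: proportional to covariance with the company total,
# which rewards lines that diversify the portfolio.
alloc_cov = allocate(np.array([np.cov(losses[:, i], total)[0, 1]
                               for i in range(3)]))

# Method 3: co-TVaR, each line's average loss in the worst 0.5%
# of company-wide years.
tail = total >= np.percentile(total, 99.5)
alloc_cotvar = allocate(losses[tail].mean(axis=0))

for name, a in [("std dev", alloc_sd), ("covariance", alloc_cov),
                ("co-TVaR", alloc_cotvar)]:
    print(f"{name:>10}: {np.round(a, 1)}")
```

The methods can disagree materially, which is the point: seeing the spread across methods is what informs the judgment call.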

25 Years of Actuarial Research — A Summation

Mildenhall used the CAS research database, DARE (Database of Actuarial Research Enquiry), to see which areas dominated actuarial research since 1990. He looked separately at actuarial research on tasks and methods.

Research on the tasks actuaries perform was, not surprisingly, dominated by reserving and ratemaking. Dynamic risk modeling, capital management and ERM showed the biggest increases.

Among methods, papers on statistical and stochastic techniques, simulation and risk measures have grown the most.

The fastest-growing topics, he said, were generalized linear modeling and capital allocation. There was a big decline in articles on loss trend and increased limit factors.

He noted that today the individual actuary is far less likely to do research. The number of pages of research per CAS member has dropped 83 percent in a quarter century.

All the panelists recommended that actuaries pursue research. D’Arcy recommended actuaries follow research and chip in where they believe an actuary can make a difference: “Read research and get involved in research. … Find [a paper] you think you can do better and write a comment of that paper.”

Lowe said it helps to find a writing partner. “Focus on the issues and get in the game. It’s actually fun and quite rewarding.” Mildenhall pointed to understanding risk tolerance for non-catastrophe lines, the use of transactional data in loss reserving, and multi-year considerations in capital modeling as potentially fruitful areas of research.


James P. Lynch, FCAS, is chief actuary and director of research and information services for the Insurance Information Institute.