Professional Insight

Regulators and the Predictive Modeling Challenge

Big data, predictive models, neural networks — the world of insurance pricing is changing fast, as any actuary can attest.

It falls to the regulators — some of them actuaries — to figure out how to get the old rules to apply to this new world.

Three actuaries explored the evolving dialogue between company and regulator at the Casualty Actuarial Society Spring Meeting in Boston and looked for ways the two can work together to navigate the regulatory landscape.

The National Association of Insurance Commissioners (NAIC) recognizes the issue, said Robert Curry, FCAS, an assistant vice president and regulatory actuary at ISO, a division of Verisk Analytics.

The NAIC’s Big Data Working Group has regulators developing best practices for reviewing predictive models. The working group looks for ways to make it easier to train regulators in reviewing predictive models, and it is considering whether to develop predictive analytics webinars or insurance summits on the topic.

The regulatory consortium is also looking at ways to help individual states review models, considering whether to create a central resource that insurance departments could tap to review models — the regulatory equivalent of when baseball umpires send disputed calls to centralized video review experts in New York.

Regulators are acutely aware of the challenge, said Dorothy Andrews, an ASA and statistician who reviews filings for regulators as part of her work at Merlinos and Associates.

They are “working really hard to get up to speed and are very keenly interested in making sure consumers are protected from harm,” she said. They focus on rating variables, she said, asking, “Do these variables have a logical relationship to the risk being insured?”

One typical question concerns how “black box” variables like insurance scores are constructed. Regulators want to be sure that none of the components of the black box variable are used elsewhere in the rating plan, where they would be double-counted. Instead of a black box, Andrews recommended that companies build a “glass box,” one that lets regulators understand what is going on so they can ask the questions they need to ask to protect consumers.

Another typical question is whether any of the variables are intentional proxies for other disallowed variables. (The classic forbidden variable would be one that serves as a proxy for race.)

Andrews recalled that when she built predictive models as a company actuary, the insurer’s legal department had to review and sign off on every model variable. “I sometimes get the distinct impression that is not happening at other companies,” she said. Insurers should consider the time spent reviewing in advance as a good investment, however, especially in light of the alternative.

“A market conduct exam can take years to resolve at great expense to a company,” she said.

Within the insurance world, ISO is well known for making a tremendous number of loss cost filings annually in every state and the District of Columbia. Jim Weiss, FCAS, observed that at ISO he is often the actuary preparing responses to regulatory inquiries about predictive models.

He recommended choosing variables with great care. “The more parameters you have, the more quickly the models can deteriorate [in effectiveness],” he said. A live poll at the session found that parsimony was often the most important consideration in a model — even more so than practicability, tractability, stability and accuracy. Parsimony is defined by Oxford Reference as “the principle that the most acceptable explanation of an occurrence, phenomenon, or event is the simplest, involving the fewest entities, assumptions, or changes.”
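
Weiss’s warning about parameter count is, in statistical terms, the problem of overfitting. As a minimal sketch (synthetic data, not drawn from the session), the Python snippet below fits polynomials of increasing degree to data whose true relationship is linear; the extra parameters tend to fit noise, and out-of-sample error typically worsens:

```python
# Minimal sketch with synthetic data: the true relationship is linear,
# so extra polynomial parameters mostly fit noise, which tends to hurt
# out-of-sample accuracy -- an illustration of why parsimony matters.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 60)
y = 1.0 + 2.0 * x + rng.normal(0, 0.3, 60)              # training data
x_test = rng.uniform(-1, 1, 1000)
y_test = 1.0 + 2.0 * x_test + rng.normal(0, 0.3, 1000)  # holdout data

for degree in (1, 5, 15):
    coefs = np.polyfit(x, y, degree)                    # degree + 1 parameters
    mse = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: test MSE = {mse:.3f}")
```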

Weiss said that at ISO he works to make sure regulators can see the care that has gone into developing the models that underlie a filing.

In any filing, both company and regulator must be confident that the model is a valid tool for pricing. When evaluating generalized linear models (GLMs), regulators have a good understanding of p-values (a statistical measure of how likely a relationship at least as strong as the modeled one would be to appear by chance alone) but, Andrews said, regulators also consider other metrics. Weiss personally favors other metrics, as p-values are regarded by many as presenting false precision. The p-value is also more relevant for GLMs than for emerging analytical techniques, such as decision trees.
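
To make the p-value discussion concrete, the sketch below (a hypothetical example with made-up data and variable names, not drawn from any filing) fits a Poisson claim-frequency GLM in Python with statsmodels; the fitted summary reports a p-value for each rating variable, which is the kind of figure a reviewer would read:

```python
# Minimal sketch with hypothetical data and variable names: fitting a
# Poisson claim-frequency GLM and inspecting each parameter's p-value.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "driver_age": rng.integers(18, 80, n),
    "vehicle_age": rng.integers(0, 25, n),
})
# Simulated claim counts: frequency truly depends on driver_age only.
lam = np.exp(-2.0 - 0.01 * (df["driver_age"] - 40))
df["claims"] = rng.poisson(lam)

X = sm.add_constant(df[["driver_age", "vehicle_age"]])
model = sm.GLM(df["claims"], X, family=sm.families.Poisson()).fit()

# The summary lists each coefficient with its p-value; a large p-value
# for vehicle_age flags an estimated effect that may be pure noise.
print(model.summary())
```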

Weiss and Andrews (as well as respondents to a live poll) said that lift charts — bar charts that show how actual losses grow as the model’s predicted value increases — are a useful tool in validating a model.
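
Neither speaker prescribed a specific recipe, but one common construction is the decile lift chart: sort policies by the model’s prediction, split them into ten equal buckets, and compare average actual and predicted values in each. A minimal sketch, again with made-up data:

```python
# Minimal sketch with hypothetical data: a decile lift chart compares
# average actual vs. predicted losses within buckets of policies ordered
# by the model's prediction. Good lift separates low from high risks.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
n = 5000
predicted = rng.gamma(2.0, 0.05, n)   # model's predicted claim frequency
actual = rng.poisson(predicted)       # observed claim counts

df = pd.DataFrame({"actual": actual, "predicted": predicted})
df["decile"] = pd.qcut(df["predicted"], 10, labels=False)

lift = df.groupby("decile")[["actual", "predicted"]].mean()
lift.plot(kind="bar")
plt.xlabel("Decile of predicted frequency (low to high)")
plt.ylabel("Average claim frequency")
plt.title("Lift chart: actual vs. predicted by decile")
plt.tight_layout()
plt.show()
```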

For insurers and regulators, it can be a challenge to reach a common standard. “There’s no universal metric for what makes an ideal model,” Weiss said. “It’s kind of like figure-skating judging. It’s kind of subjective.”

For regulators, Andrews said, disparate impacts are important to avoid, no matter how statistically significant the model may be. An excessive number of variables with an unclear relationship to risk can be difficult to explain and justify, delaying the approval of a filing. “Keep your models simple enough that a 6-year-old can understand them,” she said, paraphrasing a quote made famous by Einstein. She also quoted noted 20th century statistician George Box: “Essentially, all models are wrong, but some are useful.”


James P. Lynch, FCAS, is chief actuary and director of research for the Insurance Information Institute. He serves on the CAS Board of Directors.