Professional Insight

Bias, Risk and Regulation: What Actuaries Should Know

Heightened regulatory attention to social bias is changing what fair insurance practices mean. Thankfully, there are ways to find and address bias, Jessica Leong and Cathy O’Neil said during their presentation, “Bias, Risk and Regulation,” at the 2023 CAS Spring Meeting. 

O’Neil is the founder and president of O’Neil Risk Consulting & Algorithmic Auditing (ORCAA) and the author of the New York Times bestseller Weapons of Math Destruction. ORCAA is currently working with insurance departments to test for bias. Leong is the founder of Octagram Analytics and a past president of the CAS. Together, they helped the audience understand the evolving landscape. 

For a long time, actuaries have relied on the concept that fair rates are those that reflect loss costs and do not use any prohibited variables. Lately, however, this standard has been evolving, and carriers are wondering how to keep up. For example, the Colorado Division of Insurance is taking the most comprehensive approach to addressing potential bias with the passage of SB 169, said Leong.  

Earlier this year, Colorado released a draft regulation that, Leong explained, would require insurers to: 

  1. Test model outcomes. 
  2. Have a plan if a model shows bias. 
  3. Instill accountability in a carrier’s board and C-suite. 
  4. Possess a robust governance framework to avoid bias created in-house or by a third party. 
  5. Report documentation to the insurance regulator. 

Understanding fair rates 

The presentation became a spirited discussion among the speakers and audience, who raised several questions about bias. Actuaries have long held that rates should reflect risk, so there were many questions on that topic. They included: 

  • If actuaries fail to use a model with good predictive power, aren’t they exposing insurers to risk, and isn’t that contrary to the Actuarial Standards of Practice? 
  • If models are corrected for bias, then doesn’t that mean that some customers will have higher profit margins than others, and isn’t that unfair? 

Leong said that actuaries already live in a world where a rate doesn’t perfectly reflect risk because some factors are not eligible for rating. Bias can also exist in other insurance practices, such as marketing and claims. For example, a class action lawsuit alleges that a homeowners’ insurer used a fraud-flagging algorithm that forced Black customers to jump through more hoops than their white counterparts to receive claim payments. 

In another example, researchers studied a health insurer’s program intended to provide extra help to patients with complex medical conditions. Because space in the program was limited, the insurer used an algorithm to identify the patients for whom extra help would save the most in future health care costs. The cost of services was intended as a proxy for medical need. However, it was a poor proxy because of inequity in U.S. health care: Black patients receive less treatment than white patients on average. So costs, and cost savings, are lower for Black patients, which made the algorithm less likely to identify them. The researchers showed that optimizing for medical need, instead of cost, would dramatically increase the number of Black patients in the program. 

In this case, a participant astutely noted that there are biased models and then there are accurate models that reflect a biased reality, and this example appeared to be the latter. 

The need for regulation

During the discussion, a number of participants raised questions around the same theme: Without new rules or standards, competitive pressure will lead insurers to continue the status quo. Insurers lack the incentive to trade predictive accuracy or profit for more fairness. As a result, actuaries will have to continue pricing to expected cost as best they can without using prohibited variables in their models.  

One participant asked, “Since insurance is in the business of discrimination, how should actuaries distinguish between ‘good’ and ‘bad’ discrimination? What is a bias, and what is the true difference in risk?” 

O’Neil said defining bias is primarily a public policy concern — not to be answered by actuaries, data scientists or artificial intelligence auditors. “It’s a question for regulators. There will be math consequences to the answer,” she continued, “but it is not a technical question.” 

How to test for bias

Another audience member pointed out that many factors correlate with loss cost and protected classes, so if a proxy is banned, another one will replace it. O’Neil agreed, “It’s a fool’s errand to ban specific features or inputs. Instead of prohibiting inputs, test outcomes,” she advised. 

Explicitly testing a model is the only way to be sure that there are no blind spots of bias, O’Neil said. Bias testing is already underway in Colorado and Washington, D.C. The good news is that testing a model is not too difficult, even if it is already in deployment, O’Neil said.  

O’Neil believes testing should focus on “outcomes of interest” that are palpable and salient to consumers in addition to standard actuarial statistics like loss ratios. “We should measure whether different groups are getting different outcomes,” she added. If there are dissimilar outcomes for distinctive groups, is there a legitimate reason why? For example, age may be considered a legitimate factor. One group may be younger, on average, than the other, which may explain some of the differences in outcomes. Ultimately, determining legitimate factors is up to the regulators. 
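To make the outcome-testing idea concrete, here is a minimal sketch using entirely made-up records and numbers (the group labels, age bands and premiums are illustrative assumptions, not data from the presentation). It compares mean outcomes across groups, both raw and within a single age band as a crude control for a “legitimate” factor like age:

```python
from statistics import mean

# Hypothetical policyholder records: group label, age band, and an
# outcome of interest (here, quoted annual premium). Purely illustrative.
records = [
    {"group": "A", "age_band": "25-34", "premium": 1200},
    {"group": "A", "age_band": "35-44", "premium": 1000},
    {"group": "B", "age_band": "25-34", "premium": 1350},
    {"group": "B", "age_band": "35-44", "premium": 1050},
    {"group": "B", "age_band": "25-34", "premium": 1300},
    {"group": "A", "age_band": "35-44", "premium": 980},
]

def mean_outcome_by_group(rows, key="premium"):
    """Raw mean outcome per group, before controlling for anything."""
    groups = {}
    for r in rows:
        groups.setdefault(r["group"], []).append(r[key])
    return {g: mean(v) for g, v in groups.items()}

def mean_outcome_within_band(rows, band, key="premium"):
    """Mean outcome per group within one age band -- a crude control
    for age as a legitimate rating factor."""
    return mean_outcome_by_group([r for r in rows if r["age_band"] == band], key)

print(mean_outcome_by_group(records))
print(mean_outcome_within_band(records, "25-34"))
```

A real analysis would control for all legitimate factors at once (for example, with a regression) rather than one band at a time, but the question it answers is the same one O’Neil poses: do different groups get different outcomes, and does a legitimate factor explain the gap?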

Leong followed up, asking, “What if there is a big difference after accounting for all the legitimate factors?” O’Neil answered, “Then you can decorrelate with race the way you decorrelate with beta at a hedge fund to make sure you are not betting on the S&P 500.” 
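The decorrelation O’Neil describes can be sketched as ordinary residualization: regress the score on the protected attribute and keep only the residual, just as a fund neutralizes its beta to an index. The sketch below is a simplified illustration under a single linear factor, with invented data; real de-biasing methods are more involved:

```python
def decorrelate(scores, attribute):
    """Remove the linear component of `attribute` from `scores`.
    Analogous to beta-hedging: estimate beta = cov(score, attribute)
    / var(attribute), then subtract the fitted attribute exposure so
    the residual score is (linearly) uncorrelated with the attribute.
    """
    n = len(scores)
    mean_s = sum(scores) / n
    mean_a = sum(attribute) / n
    cov = sum((s - mean_s) * (a - mean_a)
              for s, a in zip(scores, attribute)) / n
    var = sum((a - mean_a) ** 2 for a in attribute) / n
    beta = cov / var if var else 0.0
    # Residual: original score minus its linear dependence on the attribute
    return [s - beta * (a - mean_a) for s, a in zip(scores, attribute)]

# Illustrative use: scores that drift upward with a 0/1 attribute
adjusted = decorrelate([1.0, 2.0, 3.0, 4.0], [0, 0, 1, 1])
```

After adjustment, the within-group rankings are preserved but the group-level tilt is gone, which is exactly the property a beta hedge has against its index.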

Testing for bias requires data about race and gender. Both can be inferred by using a person’s first name, last name and address and leveraging U.S. Census data. This is a common method, initially developed by researchers at the RAND Corporation and used by public agencies, including the Consumer Financial Protection Bureau. 
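That inference method is known as Bayesian Improved Surname Geocoding (BISG): a surname-based probability and a geography-based probability are combined through Bayes’ rule, assuming surname and location are conditionally independent given race. The probability tables below are invented for illustration; real implementations draw them from Census surname lists and block-group demographics:

```python
def bisg_posterior(p_race_given_surname, p_race_given_geo, p_race_marginal):
    """BISG-style combination of surname and geography evidence.
    Under conditional independence, the posterior is proportional to
        P(race | surname) * P(race | geography) / P(race).
    All inputs are dicts keyed by race category.
    """
    unnorm = {
        r: p_race_given_surname[r] * p_race_given_geo[r] / p_race_marginal[r]
        for r in p_race_given_surname
    }
    total = sum(unnorm.values())  # normalize so the posterior sums to 1
    return {r: v / total for r, v in unnorm.items()}

# Illustrative (made-up) tables for one surname and one Census geography
posterior = bisg_posterior(
    {"black": 0.6, "white": 0.3, "other": 0.1},   # P(race | surname)
    {"black": 0.2, "white": 0.7, "other": 0.1},   # P(race | geography)
    {"black": 0.13, "white": 0.6, "other": 0.27}, # P(race) nationally
)
```

The output is a probability distribution over race categories for each person, which bias tests then use in aggregate rather than as an individual label.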

Actuaries can start the journey now

Although there will be some changes to insurer practices, discovering and addressing bias are important steps forward. O’Neil and Leong expressed optimism that solutions will continue to emerge. “Insurers and actuaries are problem solvers and maximizers within constraints,” O’Neil said. The good news, they said, is that nothing stands in the way of insurers starting this journey now. Current models are testable, and insurers can build on their existing risk management and governance structures to prepare for a new generation of regulatory requirements.


Annmarie Geddes Baribeau is a consultant and writer who has been covering insurance and actuarial topics for more than 30 years. You can email her at annmarie@insurancecommunicators.com.