
In his Annual Meeting session, “Fighting Confirmation Bias in Loss Reserving,” Chris Gross of Cognalysis challenged actuaries to confront a powerful force shaping our work: the human tendency to believe what we already think is true. Using real-world anecdotes, psychological research, and simulated reserve analyses, Gross made the case that confirmation bias is a daily operational risk embedded in the reserving process, not just an abstract behavioral science concept.
Gross drew on the work of Peter Wason, the English cognitive psychologist who coined the term “confirmation bias,” which is our tendency to favor information that supports existing beliefs and to cling to those beliefs once they’ve been “confirmed.”
This isn’t about intelligence; highly trained professionals are just as susceptible. It’s about emotion, what Gross referred to as the “lizard brain,” the primitive part of us that fears being wrong and reflexively defends prior views. Being wrong hurts, and pain often overrides rational thought.
For actuaries, that emotional dynamic is amplified. Reserving is an exercise in being always wrong, and wondering, “By how much?” We work with wide ranges of reasonable outcomes, especially in long-tailed lines where the truth takes years to emerge, and our methodologies often reinforce the status quo. Prior evaluations feed into actual-versus-expected studies, loss development factor selections, and the a priori estimates we use in Bornhuetter–Ferguson methods. Add in the legitimate need for consistency and the very real fear that frequent changes in estimates will undermine credibility with management, auditors, or regulators, and it becomes much easier to explain away new signals than to admit our prior views might be off.
Gross argued that confirmation bias has serious consequences. If prior estimates are consistently low, the company may be writing business at inadequate prices for years, letting under-reserving quietly compound. If estimates are consistently high, opportunities for growth and competitive pricing are left on the table.
One of the most thought-provoking parts of the session was Gross’s critique of “smoothing.” Many actuaries take a gradual approach, phasing in changes and feathering adjustments over time. To test this practice, Gross simulated a series of random shocks to an underlying reserve position over multiple years and compared two approaches: reacting fully to each new signal versus blending in only half of each new shock.
The blended approach did reduce the largest single-quarter change, but at a cost. It increased the length of streaks where reserves moved in the same direction quarter after quarter, sometimes for years. From a trust perspective, that’s dangerous, especially for those relying on our numbers to make big decisions. A pattern where every adjustment is up or every adjustment is down invites users to conclude that the actuary is systematically conservative or optimistic. Ironically, smoothing in the name of consistency can create the appearance of bias and erode credibility more than a few large but well-explained movements would.
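The trade-off Gross described can be reproduced in a few lines. The sketch below is not his actual simulation; it is a minimal illustration that assumes Gaussian quarterly shocks to the true reserve need and compares full recognition (booking the whole gap each quarter) against a 50% blend. The function and variable names are mine.

```python
import random

random.seed(0)

def reserve_changes(shocks, blend):
    """Booked reserve changes when the actuary recognizes `blend` of the
    gap between the true reserve level and what is already booked."""
    recognized = 0.0   # cumulative amount booked so far
    true_level = 0.0   # cumulative true reserve need
    changes = []
    for s in shocks:
        true_level += s
        change = blend * (true_level - recognized)
        recognized += change
        changes.append(change)
    return changes

def longest_streak(changes):
    """Longest run of consecutive same-direction reserve changes."""
    best = run = 1
    for prev, cur in zip(changes, changes[1:]):
        run = run + 1 if (prev >= 0) == (cur >= 0) else 1
        best = max(best, run)
    return best

# Same random shocks fed to both recognition policies, for a fair comparison.
shocks = [random.gauss(0, 1.0) for _ in range(400)]
full = reserve_changes(shocks, blend=1.0)   # react fully each quarter
half = reserve_changes(shocks, blend=0.5)   # phase in half of each signal

print("largest single change: full", round(max(map(abs, full)), 2),
      "vs blended", round(max(map(abs, half)), 2))
print("longest same-direction streak: full", longest_streak(full),
      "vs blended", longest_streak(half))
```

Because the blended policy carries half of each unrecognized shock forward, its changes are positively autocorrelated, which is exactly the streakiness Gross warned about.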
Gross then turned to practical tools actuaries can use to fight confirmation bias, starting with the idea of “starting blind.” He encouraged reserving actuaries, at least periodically, to rebuild their analyses from scratch without looking at the prior selections, booked ultimate losses, or even line-of-business labels. Once you’ve selected objective factors, methods, and a best estimate, compare them to your prior view and ask yourself questions like:
- How close is my current estimate to a purely objective indication?
- Are my subjective selections consistently higher or lower than the objective results?
- Which specific choices (tail factors, method weights, a priori losses) are contributing most to reconciling to my prior estimate?
- If I played devil’s advocate against myself, what would I attack first?
The key question is, “Is it possible I’m wrong, and am I giving that possibility enough weight?”
Peer review, already a standard practice in many organizations, takes on new importance through the lens of confirmation bias. Gross suggested that reviewers, where feasible, make their own independent selections before seeing the primary actuary’s work. Otherwise, the reviewer can inherit the same anchoring and end up justifying the same biased result. The most valuable peer review discussions, he emphasized, focus not on cosmetic differences but on the biggest drivers of divergence, such as the tail factor or the choice of a key method.
Gross next focused on building purely objective benchmarks into the selection process. For example, running a standard set of methods with fixed, non-judgmental rules for selecting factors (e.g., simple weighted averages over specified periods or a uniform tail methodology) creates a neutral comparison point, letting us quantify the magnitude and direction in which our judgment is pulling us. These tools aren’t about replacing judgment; they’re about stress-testing it.
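One such fixed rule can be sketched directly. The example below (my illustration, not a tool from the session) selects development factors as volume-weighted averages of the most recent link ratios in a cumulative loss triangle, with no judgmental overrides; the toy numbers are invented.

```python
def volume_weighted_ldfs(triangle, n_periods=3):
    """Fixed-rule selection: for each development age, take the
    volume-weighted average of the most recent `n_periods` link ratios.
    `triangle` is a list of accident-year rows of cumulative losses by
    development age; more recent rows are shorter."""
    n_ages = max(len(row) for row in triangle)
    ldfs = []
    for age in range(n_ages - 1):
        # (loss at age, loss at age + 1) for every row that has both cells
        pairs = [(row[age], row[age + 1])
                 for row in triangle if len(row) > age + 1]
        recent = pairs[-n_periods:]  # most recent accident years
        ldfs.append(sum(b for _, b in recent) / sum(a for a, _ in recent))
    return ldfs

# Toy cumulative paid-loss triangle (illustrative figures only)
triangle = [
    [100, 150, 165, 170],
    [110, 160, 178],
    [120, 175],
    [130],
]
print(volume_weighted_ldfs(triangle))
```

Comparing judgmental selections against the output of a rule like this makes the direction and size of the subjective adjustment explicit.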
Finally, Gross borrowed from predictive modeling practice and proposed using training/test splits within triangles: randomly selecting half of the policies to form a training triangle and using that to select development factors to apply to the remaining test data. He illustrated this concept with a case study, varying the weight given to the loss development and Bornhuetter–Ferguson methods. The results highlighted both the wide range around a central estimate and the inherent differences between paid and incurred indications.
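The holdout idea can be illustrated on synthetic data. The sketch below is not Gross’s implementation: it invents fully developed per-policy loss paths, randomly splits the policies in half, selects volume-weighted factors on the training half only, and measures how well they project the test half’s ultimate losses. All names and the data-generating assumptions are mine.

```python
import random

random.seed(7)

def make_policy():
    """Hypothetical cumulative loss path for one policy (ages 0..3),
    fully developed. Real analyses would use detailed claim data."""
    level = random.uniform(50, 150)
    path = []
    for growth in (1.0, random.uniform(1.3, 1.7),
                   random.uniform(1.05, 1.15), random.uniform(1.0, 1.05)):
        level *= growth
        path.append(level)
    return path

policies = [make_policy() for _ in range(200)]

# Randomly split the policies into training and test halves.
random.shuffle(policies)
train, test = policies[:100], policies[100:]

def ldfs(paths):
    """Volume-weighted link ratios from a set of loss paths."""
    return [sum(p[a + 1] for p in paths) / sum(p[a] for p in paths)
            for a in range(len(paths[0]) - 1)]

train_ldfs = ldfs(train)  # factors selected on training data only

# Apply the training factors to the test half's age-0 losses and
# compare the projection with the test half's actual ultimate.
cdf = 1.0
for f in train_ldfs:
    cdf *= f
projected = sum(p[0] for p in test) * cdf
actual = sum(p[-1] for p in test)
print(f"projected {projected:,.0f} vs actual {actual:,.0f} "
      f"({projected / actual - 1:+.1%} holdout error)")
```

The holdout error gives an empirical read on how much a factor selection generalizes, rather than how well it fits the data it was selected on.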
Throughout the session, Gross stressed that the real battleground is mindset. Institutional confirmation bias can be just as powerful as individual bias; once an organization has its view of the reserves, future actuaries inherit that view as a starting point. Before touching any data, they’re already anchored. The discipline he urged is simple to state but hard to practice: consciously ask, “What could be wrong with my prior conclusions?”, “What might have changed?”, and “If I were to see this for the first time, would I make the same selections?” That discipline needs to be applied both to our own prior work and to the analyses we inherit.
Ultimately, Gross acknowledged that robust defenses against confirmation bias, including fresh analyses, deeper peer review, objective benchmarks, and training/test experimentation, require time and effort. But he challenged actuaries to weigh that cost against the far larger cost of being wrong for too long: distorted business decisions, damaged professional credibility, and users who learn to “correct” our work rather than to trust it. Fighting confirmation bias isn’t a luxury. It’s central to delivering the unbiased, decision-useful reserve opinions our profession strives to provide.
Melissa Huenefeldt is a consulting actuary for Milliman and the CAS VP-Professional Education.