What to Do When Your Results Are Challenged

I cannot count the times I have done an analysis, only to have someone counter with, “That can’t be right. I have a different number.” It’s frustrating but common. What’s an actuary to do? I can’t just sit there and let the other side destroy my work and my credibility and let them go on their merry way. I am paid to present facts and objective analysis that enable management to make informed decisions. Letting outdated and incorrect perceptions stand leads to poorly informed decisions.

Letting others perpetuate erroneous beliefs is not good for your credibility and career, let alone your ego; however, that is not the worst of it. Failing to present the results of your analysis means you aren’t earning your pay and might even be violating one or more of the Actuarial Standards of Practice.

But insisting that my result is correct is, in a sense, trying to prove that their answer is wrong — a stance that could be politically disastrous.

While the other party is saying, “Your answer can’t be right. I have a different one,” I am essentially saying the same thing: “Your answer can’t be right. I have a different one.” Outright arguing, “I am right, and you are wrong,” only causes the other party to defend their position more vigorously and escalates the argument. Instead, I need to turn the outcome into a win-win situation, especially if the other opinionated person is my client, my boss or someone above either of them.

If the measure has been analyzed previously, I try to get that analysis so I can understand what has been reported in the past. Often, there is a preconceived idea, or “rule of thumb,” of which I am not aware. By definition, a rule of thumb is “a rough and practical approach, based on experience, rather than a scientific or precise one based on theory.”* In many such cases, I first learn of these preconceived notions when presenting my results. In those instances, as you are well aware, it’s important to always control yourself. Never attack the decision maker, their ally or the rule of thumb itself. Never voice an insult or even insinuate one. You could win the battle (the argument) and lose the war (your job).

For this column, I reflected on some of those instances, some of the reasons for the differences and some of the cases where I was able to bridge the gap.

Check Your Ego at the Door

First, I needed to realize that it wasn’t about me; it was about the other parties’ preconceived notions and their credibility. There was a logical reason for their preconceived answers, and they would defend them because they were protecting their own credibility. They wanted to save face. So I needed to find a way of allowing them to do that while still presenting the facts.

Too often, the other parties have used a rule of thumb or other preconceived notion for years, never having the occasion to update or validate the metric. My presentation was often the first time that their position was challenged. If my results confirmed their conventional wisdom, I could hear, “See, we didn’t need some smarty pants to spend a lot of time and money just to say I am right!” If the results contradicted the preconceived value, I might hear, “Well, the way you explained the analysis makes sense, but the answer is still wrong.”

Understand Their View

On a few occasions, I was able to determine the circumstances and assumptions required to make the other parties’ answer correct; then the discussion could center on which of the assumptions might need to be changed or how the data had changed. I recall a meeting in which I reported a cancellation rate that was significantly higher than the marketing estimate. Marketing excluded flat cancellations because of the company’s rule that any cancellation within 30 days of the sale date would be processed as a flat cancel, a provision meant to make the sale easier for the customer. Because my results included those flat cancellations, they came out higher. Once the flat-canceled contracts were removed, the difference disappeared.
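To make that kind of reconciliation concrete, here is a minimal sketch in Python, using invented contract records and a hypothetical cancellation_rate helper, of the check that bridges the two figures: compute the rate with and without contracts flat canceled inside the 30-day window.

```python
from datetime import date

# Invented contract records for illustration: (sale_date, cancel_date or None).
contracts = [
    (date(2023, 1, 10), date(2023, 1, 25)),  # flat cancel: dropped within 30 days of sale
    (date(2023, 1, 15), None),               # still in force
    (date(2023, 2, 1),  date(2023, 8, 20)),  # ordinary, later cancellation
    (date(2023, 3, 5),  None),               # still in force
]

def cancellation_rate(records, exclude_flat_cancels=False, window_days=30):
    """Share of contracts canceled, optionally dropping flat cancels entirely."""
    if exclude_flat_cancels:
        records = [
            (sold, canceled) for sold, canceled in records
            if canceled is None or (canceled - sold).days > window_days
        ]
    canceled_count = sum(1 for _, canceled in records if canceled is not None)
    return canceled_count / len(records)

print(cancellation_rate(contracts))                             # "my" figure: 0.50
print(cancellation_rate(contracts, exclude_flat_cancels=True))  # marketing's figure: ~0.33
```

In this hypothetical, stripping the flat cancels from both the numerator and the denominator pulls the higher figure back toward the number marketing had been quoting.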

In other instances, I had to tread carefully because the other parties had not kept up with the underlying activities that drove their measures, and their preconceived answer was way off the mark.

What’s in a Name?

One of the biggest sources of differences is the definition of the metric itself. This often happens when metrics are presented by different departments. For example, marketing might define “cancellation rate” as simply the number of policies canceled divided by policies in force; underwriting might omit policies canceled and rewritten; finance might estimate it using an accounting code for remitted commissions; and the actuary might use an entirely different calculation.
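As an illustration only, with made-up counts, this sketch shows how the same book of business yields different “cancellation rates” under two of those definitions; the finance and actuarial variants depend on accounting feeds and exposure measures that do not reduce to a one-line formula.

```python
# Hypothetical counts for a single book of business.
policies_in_force      = 10_000
policies_canceled      = 1_200   # every cancellation on the books
canceled_and_rewritten = 300     # canceled but immediately rewritten

# Marketing-style definition: every cancellation counts.
marketing_rate = policies_canceled / policies_in_force

# Underwriting-style definition: ignore cancel-and-rewrite transactions.
underwriting_rate = (policies_canceled - canceled_and_rewritten) / policies_in_force

print(f"marketing:    {marketing_rate:.1%}")     # 12.0%
print(f"underwriting: {underwriting_rate:.1%}")  # 9.0%
```

Neither number is wrong; they answer different questions, which is why the definition has to be pinned down before the figures are compared.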

Such differences in definitions can apply to just about any metric — even accounting metrics are not defined exactly the same way from company to company. Be careful of doing an analysis the “right way” (code for “how it was done at my last company”) without finding out how the metric is defined in this situation. Even the individual terms within a definition can carry different meanings from one company, or one department, to the next. Frankly, I have been shocked by some of the definitions I have come across.

The Straight Story

It’s imperative that you start the analysis and the discussion with the assumption that both measurements, yours and the other parties’, are correct. Conditions may have changed that make their measurements no longer current, in which case you should show how the driving metric has changed the outcomes. Tie the past (their number) with the present (your number) through an analysis of what has changed.

Under what conditions is the rule of thumb a reasonable estimate? To find out, it is best to ask what others think the outcome will be before the analysis begins. I ask this question not because I want to “work to their answer” but to have an idea of what preconceived values they may be bringing to the meeting. If the deliverable is a report, which we often provide as consultants in lieu of a meeting, we need to be able to communicate what we found in light of what was anticipated.

Also, before starting an analysis, I try to find out whether there is a preconceived opinion about the metric, how long that metric has been around, and what activities, demographic shifts, marketing initiatives and other factors have occurred since the last analysis that might alter the measure.

During the analysis, I look for alternate means of measuring what I am measuring, for example, net or gross, with or without self-insured retention, by customer or by vehicle, direct response or something else. I try to figure out under what circumstances the answer the client expects (the one I learned of before the analysis) could be correct. Establishing those conditions helps me bridge any difference between my results and the client’s. Then I look for where the metrics agree.

If I can, I like to meet with the people who gave me the pre-analysis value and go over my findings with them before a meeting with others. This enables them to see my work as well as point out potential errors, faulty assumptions or definitional differences before I present my analysis to a wider audience. This sort of meeting has saved me from embarrassment on numerous occasions.

Conclusion

Presenting new results doesn’t always go smoothly. But understanding and dealing with a recipient’s expectations and preconceptions can eliminate a lot of unnecessary bumps in the road so that, with luck, everyone can win.


* American Heritage® Dictionary of the English Language, Fifth Edition.