Professional Insight

A Traveler’s Guide to the Categorization Highway

“Knowledge of the social world and, more precisely, the categories that make it possible, are the stakes, par excellence, of political struggle, the inextricably theoretical and practical struggle for the power to conserve or transform the social world by conserving or transforming the categories through which it is perceived.”

—Pierre Bourdieu, French Sociologist, 1985

The journey begins: Where actuarial science meets social science 

Let’s rewind the clock a few decades. The 1970s and ’80s marked a period of legislative debate and activity, including the Equal Rights Amendment and the Equal Credit Opportunity Act. The National Organization for Women (NOW), a prominent group advocating for gender equality, turned its attention to banking and insurance, questioning the practices of risk analysis and categorization.

This activity is the central focus of a paper by sociologists Greta R. Krippner of the University of Michigan and Daniel Hirschman of Cornell University, “The Person of the Category: The Pricing of Risk and the Politics of Classification in Insurance and Credit.” While the technical elements of classification mechanisms would make a worthwhile discussion in their own right, the authors center their argument on the idea that NOW’s battles point to higher stakes in compartmentalizing risk:

  1. Our ability to define our identities as individuals.
  2. Our ability to mobilize around an identity.10

Identity and group formation are core concerns of sociology. Sociologist Ian Hacking presents a framework in his essay “Making Up People.” At the risk of oversimplification, group formation can occur in two broad senses:

  1. Organically, by a group of persons. Groups often emerge in response to some shared antagonist. Since they originate from a natural experience, Hacking claims they create a “new kind of person,” with each new type expanding the “space of possibilities for personhood.”
  2. Artificially, imposed by some outside agent. The key idea of groups of this kind is that they do not require shared experience.11

Artificial groups are the focus of Krippner and Hirschman’s research. This discussion will focus on two artificial forms of grouping, sorting or ranking people and examine the degree to which they, too, contribute to or detract from the possibilities of personhood.

On one side, we will have an “Actuarial System,” and its opposite will be an “Algorithmic System.” In my experience, the lines between the two are muddy. An actuarial system is an algorithm by definition, and algorithms may be informed by actuarial classifications. For the purposes of understanding the implications for identity and enabling collective action, the terms and their key features are defined as follows: 

Actuarial System | Algorithmic System
Class-based (group) | Attribute-based (individual)
Identity shared with others in the group | Each individual has their own value
Random members of a group have similar average risk, with known variances | Unique price/risk score per individual
Potential for “social salience” | Hard to create groups with a similar social experience
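
To make the contrast concrete, here is a stylized sketch in Python. The rating cells, attributes, weights and dollar amounts are all invented for illustration; the point is only that the first approach assigns a shared rate to everyone in a cell, while the second computes a price unique to each individual.

```python
# Stylized illustration of the two systems (all values invented for this example).

# Actuarial-style: a small rating table keyed on class membership.
# Everyone who falls into the same cell shares the same rate.
CLASS_RATES = {
    ("female", "urban"): 480.0,
    ("female", "rural"): 410.0,
    ("male", "urban"): 620.0,
    ("male", "rural"): 530.0,
}

def actuarial_price(gender: str, territory: str) -> float:
    """Look up the shared rate for the class this person belongs to."""
    return CLASS_RATES[(gender, territory)]

# Algorithmic-style: a score built from many individual attributes.
# Each person lands on their own point along a continuum of prices.
WEIGHTS = {
    "annual_miles": 0.012,
    "hard_brakes_per_100_miles": 35.0,
    "vehicle_age": 4.0,
    "prior_claims": 90.0,
}

def algorithmic_price(base: float, attributes: dict) -> float:
    """Combine individual attributes into a price unique to this person."""
    return base + sum(WEIGHTS[name] * value for name, value in attributes.items())

print(actuarial_price("female", "urban"))      # 480.0 -- shared with the whole cell
print(algorithmic_price(300.0, {
    "annual_miles": 9000,
    "hard_brakes_per_100_miles": 1.2,
    "vehicle_age": 6,
    "prior_claims": 0,
}))                                            # 474.0 -- specific to one driver
```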

 

According to Krippner and Hirschman, the ability to attach one’s identity to a broad but socially relevant group is a critical requirement for social action. Actuarial systems offer that potential. Algorithmic systems might erode it. As the world becomes more algorithmic, the “person of the category” may dissolve into a detached individual, isolated by their own permutation of a sea of predictive variables.12  

First stop: The actuarial system 

“Most actuaries cannot think of individuals except as members of groups.”  —“Sex Discrimination in Employer-Sponsored Insurance Plans: A Legal and Demographic Analysis” 13 

Krippner and Hirschman point out that when NOW went to court in the 1980s to sue insurers, they were able to mobilize because an identifiable group was present in the insurance rating table.14  

This observation challenges other interpretations of actuarial classification that suggest it is too artificial and arbitrary to create a group of real meaning.  

In 1988, Jonathan Simon, a law professor at the University of California, Berkeley, expressed concern over “Actuarialism,” as practiced in pensions and insurance, making its way into criminal justice and predicting things like criminal recidivism.15

In Simon’s view, the actuarial grouping is artificial, assigned and not self-chosen. Organic, self-defined collectives are displaced by “aggregates” — fabrications from the “imagination of the actuary.” Similarly situated individuals become lumped together in a “community of fate.” 

Without a grounding in lived experience, the grouping “unmakes persons.” A person would more readily identify by religious affiliation, generation or family position than claim something like “Preferred Plus” underwriting status as a label for who they are.

Krippner and Hirschman clarify that Simon’s position conflated actuarial and algorithmic classification systems; once the two are distinguished, it becomes apparent that actuarial classifications are not completely divorced from the real world.

“We suggest that the groups (or aggregates) contained in the cells of the insurance pricing table may be artificial, but these are still potential collectivities that can, under particular circumstances, be activated. This potentiality, we argue, reflects the fact that insurers assign individuals to membership in groups (however “thinly” conceived) based on characteristics held in common, leaving open the possibility for the construction of shared subjectivities and action in concert.”16  

Beyond the fact that the lawsuit happened at all, Krippner and Hirschman highlight that NOW’s case against the auto insurance industry revealed some interesting, occasionally paradoxical features of categorization for political goals versus categorization for pricing goals.

  1. Women were getting better rates! NOW argued that, even if advantageous, gender-based classifications impaired women’s equality by reinforcing stereotypes. “We don’t want the [insurance] industry to discriminate better, [we want it] not to discriminate at all.”17
  2. NOW argued for “miles driven” as a substitute for gender. It’s possible that the same relationship, in reverse, motivated the use of gender in the first place: gender as a proxy for usage, as strange as it may seem. Alternatively, there is evidence that some insurers leaned into gender as a causal variable. A newspaper ad from this era, “Our Case for Sex Discrimination,” showed two stacks of piled-up cars, one twice as high as the other, attributing the larger one to male drivers. This, in my opinion, says less about the insurer’s attitudes and more about its response to the cultural conversation of the time.

The ultimate result of the case was that Pennsylvania restricted the use of gender and the mileage proxy as pricing variables.18

Next exit: The algorithmic system 

“When human judgment and big data intersect there are some funny things that happen.” 

—Nate Silver, 2012.
(Statistician and Polling Analyst) 

Credit scoring, as developed by William Fair and Earl Isaac, had its roots in operations research. Deployed by factory managers and wartime generals, operations research applied quantitative techniques to an array of variables to allow “better decisions to be made more often.”19 For bankers, the appeal of the credit score was the efficient prediction of default.

Having successfully lobbied for the removal of gender and marital status from credit decisions (as well as the requirement for male cosigners), NOW found that its job of protecting against discrimination in lending became more difficult. As Krippner and Hirschman point out in “The Person of the Category,” “Even a relatively simple scoring system, such as the one introduced by Montgomery Ward in the 1960s, defined approximately 750,000 possible combinations of factors.”20
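
To make the combinatorics behind that number concrete, here is a minimal sketch. The factor names and level counts are hypothetical, chosen only to show how a handful of modest attributes multiply into roughly 750,000 distinct applicant profiles; they are not the actual Montgomery Ward factors.

```python
from math import prod

# Hypothetical scoring factors and level counts (not the actual Montgomery Ward
# system): eight attributes, each bucketed into a few levels, already define
# three-quarters of a million distinct applicant profiles.
factor_levels = {
    "occupation_class": 8,
    "age_band": 6,
    "years_at_address": 5,
    "years_at_job": 5,
    "income_band": 5,
    "housing_status": 5,
    "bank_references": 5,
    "prior_credit_history": 5,
}

combinations = prod(factor_levels.values())
print(f"{combinations:,} possible combinations of factors")  # 750,000
```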

If a woman was denied a loan, the lender was only required to give four reasons for the denial, without detailing any other factors that contributed. Yet even a full review of the entire rating algorithm would have been problematic, since it would be difficult to prove systemic discrimination when a point here or there in other categories could have made the difference for any particular applicant.

The way I interpret this: two strangers may share the outcome of denied credit, perhaps even end up with the exact same credit score, yet have no shared life experience, making it harder for them to unify.

An interesting exception to me is the “subprime” community that formed during the era of predatory lending and the subsequent housing market crash of 2007. While evidence of mobilization around credit scores is sparse and speculative, the later Occupy Wall Street and “We Are the 99%” demonstrations were centered on income and access to wealth, of which credit may be a component.  

In my view, as risk categorization becomes more individualized, the concern for discrimination will be forced to shift from the algorithm’s inputs to its outputs. Many modern algorithms are tested for bias based on their results. How does it all work out in the end? Are marginalized groups always in the same cluster?  
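
As one illustration of what testing on results can look like, here is a minimal sketch of an output-level check: compare approval rates across groups after the model has made its decisions, and summarize the gap as a single ratio. The data, column names and the ratio itself are assumptions for illustration, not a method drawn from Krippner and Hirschman.

```python
import pandas as pd

# Audit the model's outputs rather than its inputs: the decisions have already
# been made; we only ask how they fell across groups (toy data for illustration).
audit = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   0,   1,   1,   0,   0,   1,   0],
})

approval_rates = audit.groupby("group")["approved"].mean()

# Adverse impact ratio: lowest group approval rate over the highest.
# Values well below 1.0 flag a disparity in outcomes worth investigating.
impact_ratio = approval_rates.min() / approval_rates.max()

print(approval_rates)
print(f"Adverse impact ratio: {impact_ratio:.2f}")  # 0.33 for this toy data
```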

Yet, due to their complexity and individualization, pinning down exactly what causes an algorithm’s disparate outcomes may be a significant challenge. Further complicating things, algorithms are, to me, fickle creatures, constantly changing, each with its own unique “secret sauce.”

Everywhere I look, it feels like the world is becoming more algorithmic, from determining what show I watch next to directing my health care to my social media feeds. In such a world, do we lose the person of the category? 

The road ahead 

“Insurance provides a form of association which combines a maximum of socialization with a maximum of individualization. It allows people to enjoy the advantages of association while still leaving them free to exist as individuals.”  

—François Ewald, 1991.
(Philosopher)

To recap the sociological perspective on classification systems from Krippner and Hirschman, “It is classification that makes politics possible, insofar as we understand politics as action in concert.”21

  1. The way in which persons are classified has implications for collective action. Are commonalities observable and available, or are they bogged down in a quagmire of proxies and attributes? “Classificatory technologies shape political struggles in part by shaping the possibility of perception — what is visible versus what is hidden from view.”22
  2. The groups that emerge from classificatory systems form “possible lines of connection and fracture.”23 Is there a potential to find a shared social experience, or is it an artificial abstraction? 

Categorization is a central feature of the actuarial profession. It may manifest as sorting disability claims by occupation, understanding driving behaviors or deploying artificial intelligence to mine big data for new underwriting factors. Classification systems are critical to understanding and quantifying risk.

But perhaps the actuarial world goes beyond mathematics. It is a harrowing responsibility to realize that our decisions may influence the very constitution of a society or shape someone’s identity.

In the digital age with the individualization of risk, what will be the role of the actuary? When we reach the end of the categorization highway, will we be preserving the person of the category or accelerating their demise?

Nate Worrell, FSA, is a director of customer success at Moody’s. He is based in
Babcock Ranch, Florida.