The impact of technological progress on people's jobs varies by innovation and profession. Some inventions have had drastic impacts. For example, the gasoline-powered tractor caused significant declines in farm-related employment during the early 1900s, offset in part by a surge in manufacturing employment producing goods such as tractors. Other innovations have had gradual impacts. For instance, improvements in the speed and portability of computing have broadened actuaries' toolkits over recent decades. These improvements opened the door to partially competing professions such as data science but did not result in significant actuarial job losses. Actuaries' resilience to date supports the common belief that cognitively challenging work is relatively robust to disruption.
The recent emergence of generative artificial intelligence tools has been abrupt. ChatGPT gained one hundred million monthly active users in the few months following its initial release. Its ability to construct persuasive essays, computer code and exam responses from simple conversational prompts went viral. ChatGPT's capabilities resemble those of predecessors — such as spell checkers, customer service bots and smart speakers — but its aptitude for perceiving and responding to context and tone makes it more adaptable and broadly useful than its ancestors. This seemingly sentient nature has professions previously seen as disruption-proof worried about their future job prospects.
It is difficult to predict the future of employment, but actuaries are in the business of estimating how uncertain futures may materialize. They can use these aptitudes to envision and prepare for their own future in a job landscape defined by AI. This article presents four speculative scenarios regarding AI's potential impact on the actuarial profession and concludes with thoughts on how to build resilience, both to AI and in general.
Scenario 1 — Doomsday
The most discomforting scenario for actuaries, and many professions, is that they will cease to exist in their present form. Researchers from AI Impacts and the Future of Humanity Institute estimate a 50% chance that, within 120 years, all occupations will be fully automatable. Significant job reductions could occur even more quickly. The 2013 Oxford Martin study "The Future of Employment" estimated a 21% probability of actuarial jobs being automated within "the next decade or two." A more recent estimate from the website "Will Robots Take My Job," which uses similar methodology, raises the figure to 52%. Both fall within the researchers' "low-to-moderate" risk categories.
The studies above estimate each profession's automation risk by analyzing whether its different tasks utilize perception and manipulation (essentially, manual dexterity), creativity or social intelligence — each of which researchers deem difficult to computerize. The non-trivial estimates that result suggest actuarial work may not be as cognitively dynamic as one might think. The Federal Reserve Bank of St. Louis finds that within most "cognitive non-routine" lines of work, roughly half of workers still require detailed instructions or frequent interaction with supervisors. Actuaries involved in periodic rate or reserve reviews, or predictive model refreshes, may not find these numbers surprising. Even as different executions of these routines may lead to different conversations with stakeholders, the routines themselves will likely follow similar procedures during each iteration. The more dynamic conversation that follows may then be limited to a relatively small number of participants in the routine, such as managers or go-betweens. This helps explain why researchers found that management had the fewest routine aspects among the cognitive professions studied.
In a highly automated future, actuaries who have evolved into managerial roles may be the last few actuaries standing. “Today’s AI is conceptually similar to a summer intern,” said Ralph Dweck, FCAS, director of analytic products at Verisk. “It has limited context and requires a lot of coaching but can get certain jobs done very well.” As AI graduates to entry- or mid-career-level ability, actuaries could conceivably manage teams of bots rather than people. Each bot might have different aptitudes, such as language versus vision, and different training. The bots’ manager may have a lighter load than a people manager because he or she would not have to manage morale and could expect less variability in “employee” performance across any given skillset.
Actuaries in novel roles such as manager would not be immune to automation either, because AI can approximate skills as ostensibly human as creativity. An algorithm would be relatively unlikely to produce genuine novelty because it is captive to its training data. However, a person would also be unlikely to be truly novel. Even if someone synthesizes information in an apparently novel way, it is quite possible that someone, somewhere, already did the same — and documented it in a place where a large language model could discover and learn from it.
The benefits of using AI to expedite discovery could offset some of the value lost by forgoing occasional genuine breakthroughs in the Doomsday scenario. Jessica Leong, FCAS, CEO of Octagram Analytics, recently developed a continuing education session called "How to Find Data-Driven Insights When You Have No Data." Among other things, Leong illustrates how ChatGPT can help discover publicly available data. With minimal effort, Leong said, she asked what publicly available data existed for insurable events, and ChatGPT suggested the National Practitioner Database for medical malpractice insurance and provided a link. With parameter counts sometimes compared to the number of synapses in the human brain, and a trillion words of web-scraped content at its disposal, AI faces few limits on its ability to approximate creativity.
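For readers who want to experiment, a query along these lines can be scripted. The sketch below is a minimal illustration, not a record of Leong's session; it assumes the openai Python package (version 1.0 or later), an OPENAI_API_KEY environment variable, and an illustrative model choice and prompt wording.

```python
# A minimal sketch of scripting a data-discovery prompt. The prompt
# wording and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": (
            "What publicly available datasets exist for insurable "
            "events such as medical malpractice claims? "
            "Include links where possible."
        ),
    }],
)
print(response.choices[0].message.content)
```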
Scenario 2 — Groundhog Day
A more status quo scenario is that the nature and number of actuarial jobs remain about the same. Even if AI provides a cheaper and comparably effective alternative to humans in some cases, that does not guarantee employers will utilize that alternative. "A lot of consensus building is already required for models people build," said Leong. "Would stakeholders ever accept a model that AI built?" As a case study of this mindset, nearly half of U.S. adults surveyed by Pew Research felt that widespread use of automated vehicles (AVs) is a bad idea for society. Nearly a quarter of respondents felt the technology is likely to increase traffic deaths, even though human error causes most vehicle collisions. Over three-quarters worried about AVs' impact on job availability. People's and organizations' reluctance to buy into AI could similarly slow its roll into actuaries' lane.
Companies also are not categorically opposed to long-term investments in people at the potential short-term expense of productivity. Many organizations provide paid study time and pay for exam fees and study materials while the analysts pursue CAS credentials early in their careers. The companies essentially pay for one month or more per year of study time, during which there is no direct or immediate output from the analyst. While analysts' work is often routine and ripe for automation, actuarial employers do not appear to be in a rush to divert expenditure away from study programs and toward automating analyst roles. Doing so would cut off a critical leadership development pipeline. Even the Doomsday scenario requires a few actuarial leaders to mind the store.
Tenured actuaries with routine aspects to their roles may also survive Groundhog Day unscathed. Some actuaries may worry that if, say, a third of their tasks vanished, then there would not be enough new work to fill the resulting void. Dweck does not necessarily see this as a concern. "Quality trumps quantity," he said. "Having more time to focus on non-routine work, without the distractions of the daily routine, could lead to much higher quality output on what remains." This could provide enough value that new work would not even be necessary to support continued demand.
Scenario 3 — Training Day
A third scenario involves the actuarial role transforming into something more like what data scientists do. AI tools such as ChatGPT can "hallucinate" inaccurate results. This can occur due to poor prompts, inaccuracies in training data, the prediction errors inherent to any model, or the model bluffing an excuse when moral constraints prevent a direct answer. Such difficulties have helped create high-paying opportunities for "prompt engineers" to extract higher-quality responses from AI. Some of these roles do not even require STEM skillsets. However, asking good questions about complex risk dynamics will likely require some of the skills of an actuary.
David Wright, market solutions leader at Acrisure and host of the Not Unreasonable podcast, recently administered CAS Exam 9 to ChatGPT, and it "failed miserably." "When you get to the upper end of any domain, nuance increases by 100 times," Wright said. "Large language models do not handle nuance well yet." One challenge Wright experienced when asking ChatGPT exam questions was formulating actuarial concepts as prompts. "Think about how to communicate something as simple as a loss development triangle," he said. AI would likely require significant tuning to get hip to the intricacies of actuaries' unique geometric representation of claim valuations across multiple time dimensions.
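To make the communication challenge concrete, here is a minimal sketch of one way to flatten a small triangle into prompt-friendly text. The figures and formatting conventions are invented for this example, not drawn from Wright's exam experiment.

```python
# A minimal sketch of flattening a cumulative loss development
# triangle into text an LLM can parse. All figures are invented.
triangle = {
    2019: [1000, 1500, 1750, 1800],  # cumulative paid losses
    2020: [1100, 1600, 1900],
    2021: [1200, 1700],
    2022: [1300],
}
ages = [12, 24, 36, 48]  # development ages in months

rows = ["Accident year | " + " | ".join(f"{a} mo" for a in ages)]
for year, losses in triangle.items():
    cells = [f"{x:,}" for x in losses] + ["n/a"] * (len(ages) - len(losses))
    rows.append(f"{year} | " + " | ".join(cells))

prompt = (
    "The table below is a cumulative loss development triangle. "
    "Rows are accident years; columns are development ages in months. "
    "'n/a' means that valuation has not yet occurred.\n\n"
    + "\n".join(rows)
)
print(prompt)
```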
Wright does not see large language models making quantum leaps in domain nuance right away, but he feels professionals such as actuaries can help expedite AI's learning process. "Training AI on industry specific data can enhance its effectiveness," he said. Wright points to BloombergGPT as an example of improving domain performance in this way. He also sees potential for actuaries to serve up their own models, such as triangles, to AI as plug-ins so that AI does not need to learn such concepts itself. Domain-specific training and plug-in development leverage actuarial expertise but require a deeper data science toolkit than is customary for many actuaries today.
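As a rough illustration of the plug-in idea, the sketch below wraps a simplified chain-ladder calculation in the kind of JSON schema that OpenAI-style function calling expects. The function, schema and figures are illustrative assumptions, not an implementation of Wright's.

```python
# A sketch of exposing an actuarial calculation as an LLM "tool."
# The chain-ladder logic is deliberately simplified.

def age_to_age_factors(triangle: list[list[float]]) -> list[float]:
    """Volume-weighted age-to-age development factors from a
    cumulative triangle (one row per accident year, ragged by age)."""
    factors = []
    n_ages = max(len(row) for row in triangle)
    for j in range(n_ages - 1):
        num = sum(row[j + 1] for row in triangle if len(row) > j + 1)
        den = sum(row[j] for row in triangle if len(row) > j + 1)
        factors.append(num / den)
    return factors

# JSON schema an LLM host could use to call the function, letting the
# model delegate triangle math instead of reasoning through it.
TOOL_SPEC = {
    "type": "function",
    "function": {
        "name": "age_to_age_factors",
        "description": "Compute chain-ladder age-to-age development "
                       "factors from a cumulative loss triangle.",
        "parameters": {
            "type": "object",
            "properties": {
                "triangle": {
                    "type": "array",
                    "items": {"type": "array", "items": {"type": "number"}},
                    "description": "Cumulative losses, one row per "
                                   "accident year, ragged by age.",
                }
            },
            "required": ["triangle"],
        },
    },
}

print(age_to_age_factors([[1000, 1500, 1750], [1100, 1600], [1200]]))
```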
Wright also navigated relatively pedestrian challenges when testing AI. “Copying and pasting questions and responses in and out of the ChatGPT interface repeatedly became tedious,” he said — especially given that Wright regenerated the response to each question several times to simulate the various thought processes a student could take. As a workaround, Wright licensed programmatic access to ChatGPT’s application programming interface (API) and started sending questions via Python — at a typical cost of a few dollars each. When I spoke with him, Wright was also experimenting with teaching ChatGPT to grade its own performance, which would require a high level of actuarial acumen.
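A loop along the following lines could reproduce that workflow. The model choice, placeholder question and regeneration count are illustrative assumptions rather than Wright's actual setup; the sketch again assumes the openai Python package, version 1.0 or later.

```python
# A sketch of sending one exam question to the API and regenerating
# the response several times via the `n` parameter.
from openai import OpenAI

client = OpenAI()
question = "..."  # an exam question would go here

response = client.chat.completions.create(
    model="gpt-4",    # illustrative model choice
    messages=[{"role": "user", "content": question}],
    n=5,              # regenerate five candidate answers per question
    temperature=1.0,  # allow variation across attempts
)
for i, choice in enumerate(response.choices, start=1):
    print(f"--- Attempt {i} ---\n{choice.message.content}\n")
```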
To summarize the Training Day scenario, AI could add efficiency and scale to actuarial work, but actuaries themselves would continuously train AI to be able to scale these greater heights.
Scenario 4 — Judgment Day
In our final scenario, actuaries would pivot in more of a social science than a data science direction in response to AI. Dorothy Andrews, ASA, senior behavioral data scientist and actuary at the National Association of Insurance Commissioners, has seen "wacky stuff" coming out of models long before AI started hallucinating. She recalls once listening to a debate over whether dog ownership is a reasonable explanatory variable for predilection to smoke. "People hearing this debate may start to formulate hypotheses for why this could make sense," she said, as opposed to questioning whether there is a spurious correlation or confounding phenomenon at work. As models become increasingly complex, it is easy for model stakeholders to fall into the cognitive trap of "attribute substitution" — that is, replacing a difficult judgment task with an easier one.
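A small simulation shows how a hidden confounder can manufacture exactly this kind of correlation. The "lifestyle" factor and all effect sizes below are invented purely for illustration.

```python
# A toy simulation: a hidden "lifestyle" confounder drives both dog
# ownership and smoking, manufacturing a correlation between two
# variables that do not cause each other. All numbers are invented.
import random

random.seed(42)
n = 100_000
dog, smoke = [], []
for _ in range(n):
    lifestyle = random.random()  # unobserved confounder
    dog.append(random.random() < 0.2 + 0.4 * lifestyle)
    smoke.append(random.random() < 0.1 + 0.3 * lifestyle)

owners = sum(dog)
p_smoke_dog = sum(s for d, s in zip(dog, smoke) if d) / owners
p_smoke_no_dog = sum(s for d, s in zip(dog, smoke) if not d) / (n - owners)
print(f"P(smoke | dog owner) = {p_smoke_dog:.3f}")
print(f"P(smoke | no dog)    = {p_smoke_no_dog:.3f}")
# Dog owners smoke noticeably more often here, yet neither variable
# causes the other -- the hidden lifestyle factor drives both.
```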
However, there is a fine line between simplifying and oversimplifying complexity. "Data is created by human activity," said Andrews. "AI is most likely to miss the mark where people have historically missed the mark." For example, analysis of Lyft data by researchers at Johns Hopkins and the University of Chicago indicated that minorities are significantly more likely to experience police encounters over otherwise identical speeding infractions. This paradox exposes both the positive potential of high-dimensional data analysis and the peril of accepting math at face value. A finding as ostensibly mundane as speeding being risky may be distorted by decades of social biases. Because AI deeply learns, it can memorialize biases hidden deep within tomes of data.
Andrews sees actuaries and others playing an important role in managing the risks of AI. She points to a need for regulators to continue enhancing their teams' abilities to review complex algorithms, which could create some new roles for actuaries. There is also an opportunity to grow more diverse modeling and model review teams, with researchers at Columbia University finding that modelers' prediction errors often correlate with their demographic groups. However, STEM skillsets alone will not necessarily generate all the right questions.
The World Economic Forum estimates that, within “jobs of tomorrow” for which there is consistently growing global demand, the majority of workers will transition to different job families than those they work in today. For example, educators, health care workers and artists may land in data and AI jobs. Where is the actuary of tomorrow working today? Andrews feels some may be working in or studying the social sciences. “People who speak in highly technical language about how models work often do not understand the social dynamics in the data,” said Andrews. “Social science is about unpacking the why.” In the Judgment Day scenario, actuaries’ resilience derives in part from supplying this human touch to cases where AI may rush to judgment.
Creating the future
In reality, the four scenarios above may not be mutually exclusive, and the future of the profession could bear resemblance to each of the four in one way or another. Moreover, the future is not fully deterministic. Actuaries can influence how the AI-driven future looks for the profession and themselves individually.
As I drafted this article, the Writers Guild of America strove to influence its own future by going on strike — with one point of contention being writers’ desire to regulate AI’s use in content creation. To one extent or another, most professions create content, and one of AI’s essential threats is that it also creates intellectual property. Therefore, the considerations pertaining to Hollywood generalize to the present conversation. I asked Dominic Lee, ACAS, senior solutions advisor at SAS — who creates content as The Maverick Actuary — how he was thinking about AI, and he indicated that he was not overly worried. “I try to bring my unique voice to content,” said Lee. “While a large language model can be utilized to create content, there are objective limitations that would affect the breadth and quality of that content.”
Lee cites AI's current lack of higher-order thinking as one barrier to its impact. For example, he notes that if someone prompts ChatGPT to develop topically similar content that optimizes different criteria such as reach or engagement, "the different outputs generally would not reflect the difference in their intent — because the model is not trained in pursuit of these higher order objectives. It's simply trying to predict the next word in a sequence." Also, many AI tools deliver content in a single form such as text (ChatGPT) or imagery (DALL·E), as opposed to multimedia content such as a tracking-text meme that overlays text on video. (Speaking of memes, pundits mostly agree AI has yet to master the intricacies of humor.)
Another major limitation on AI relates to very specific gaps in the data available to train it. "The most obvious example where the absence of data limits the usefulness of these models is a personal story post," Lee said. "If I wanted to write a post using an experience from my childhood known only to me, ChatGPT's output would be based on an entirely fictional premise." He also points to AI's reliance on past data as limiting its ability to envision the future. "On LinkedIn, for example, I create short-form text posts focused on expanding the domains in which actuaries add value. So, I may do a post on what emerging-risk challenges actuaries are equipped to solve, how their skills can be positioned, and so on," he said. "Given the lack of historical context and the need to incorporate nuanced professional perspective, ChatGPT would have trouble bridging the gap between an actuary's value proposition and a domain in which actuaries have not traditionally participated."
I asked Lee to reflect on his process for generating differentiated content.
"I focus on adding value to my community whenever I create content, regardless of the platform. I don't operate on an advance schedule like many creators. My process is highly flexible," he said. "When I feel conviction around something that inspires me, or I know something is on the mind of my community members, that's when I'm most likely to create. I try to be as intentional as I can. Before I post, I ask myself, how will the content make a positive difference for someone in my community?"
For actuaries worried about all the things they cannot control, the best path may well be to focus on the things they can. Creating a better future for their stakeholders will ultimately have the most positive impact on their own future.
Jim Weiss, FCAS, CSPA, is a vice president for Crum & Forster. He will complete his term as CAS vice president-research in November 2023.