
The narrative around artificial intelligence (AI) is everywhere, from news sites to trade reports: a dizzying array of headlines, each more definitive than the last. One claims, “AI is coming for your $100K job.”[1] Another cites a study predicting that 40% of agentic AI projects will be abandoned by 2027.[2] The narrative swings between breathless hype and cautious retreat, but beneath it all is an undeniable truth: AI is having its cultural moment, and it is rewriting the rhythm of nearly every industry, including ours.
This moment has arrived with a velocity unmatched in technological history. Consider this: ChatGPT reached 100 million users within two months, far outpacing the adoption curves of the internet and personal computers. Two years after its public debut, generative AI — in the form of tools like ChatGPT — achieved a 39.5% adoption rate, a milestone that took the internet a decade to reach and telephones nearly a century.[3] Technology-related roles are among the fastest growing, with AI and information processing alone projected to create 11 million new jobs by 2030. This growth is a key driver of the fundamental shift in how humans and technology coexist.[4] The global AI market, valued at $64 billion in 2023, is projected to surpass $1 trillion by 2030, underscoring how expansive its reach will become.[5]

But if AI’s promise feels limitless, its challenges are equally daunting. At the 2025 CAS Spring Meeting in Toronto, a town hall poll asked participants whether they had used AI in their work. More than half said they had not. While the statistic was only a snapshot, it pointed to a larger truth: in an industry built on risk analysis and predictive modeling, adoption has been cautious and progress uneven. CAS President David Cummings noted during the discussion that many companies have firewalls in place to block AI tools, aiming to protect sensitive company data.
AI is not just another technology; it is a revolution unfolding at a complex moment for humanity, where excitement about innovation and anxiety about unintended consequences are intertwined.
The pattern extends well beyond the profession. Research from MIT Sloan and the National Bureau of Economic Research (NBER) shows that AI adoption clusters in “superstar cities” and in large companies within manufacturing and healthcare. In finance, insurance, and real estate — industries built on data and predictive modeling — high-intensity AI usage remains below 2%.[6] Why is adoption so slow in the sectors that seem poised to benefit most?

The answer lies partly in culture. Insurance, and its actuarial function in particular, is an industry defined by caution, evidence, and the rigorous assessment of risk. Actuaries, who specialize in quantifying uncertainty, are naturally skeptical of tools that promise much but often remain opaque. As CAS Board Chair and past CAS President Frank Chang, who has navigated nontraditional actuarial roles at Google and Uber, put it on the “Almost Nowhere” podcast: “AI is transformative. But the replacement of humans by AI is where my skepticism starts. Human judgment calls are very difficult for AI to comprehend.”[7]
The complexity of AI lies not only in its technical sophistication but in how it reshapes decision-making itself. One insurer may find an algorithm excellent at predicting which claims will escalate but lacking the nuance to understand the context of a vulnerable customer’s situation. Another may discover that AI models can flag anomalies in reserving data but still require an actuary’s judgment to interpret what those flags mean within a business and regulatory environment. This friction reminds us that actuarial work is not just about data; it is about understanding people, policy, and the consequences of decisions.
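That division of labor is easy to see in practice. The sketch below flags age-to-age development factors that deviate sharply from their column’s history; the triangle values, the z-score method, and the threshold are all illustrative assumptions. The flags tell an actuary where to look, not what to conclude:

```python
import numpy as np

def flag_reserving_anomalies(factors, z_threshold=2.0):
    """Flag development factors far from their column's historical mean.

    Illustrative only: a real reserving review would use richer
    diagnostics and, always, actuarial judgment on what a flag means.
    """
    factors = np.asarray(factors, dtype=float)
    mean = np.nanmean(factors, axis=0)
    std = np.nanstd(factors, axis=0)
    std = np.where(std == 0, np.nan, std)  # guard against zero variance
    z = (factors - mean) / std
    return np.abs(z) > z_threshold  # True = worth a closer look

# Hypothetical 12-24 and 24-36 month age-to-age factors by accident year
ldf = [[1.50, 1.20],
       [1.48, 1.22],
       [1.52, 1.19],
       [1.49, 1.21],
       [1.51, 1.20],
       [2.10, 1.21]]  # the 2.10 in the latest year stands out

flags = flag_reserving_anomalies(ldf)  # only the 2.10 entry is flagged
```

The interesting work starts after the flag: whether that 2.10 reflects a coding change, a large loss, or genuine deterioration is exactly the judgment call the model cannot make.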

Jim Weiss, FCAS, outlined four scenarios in a 2023 issue of Actuarial Review — Doomsday, Groundhog Day, Training Day, and Judgment Day — to describe actuaries’ uncertain futures in an AI-driven world. Each scenario reflects a nuanced vision of coexistence between humans and machines.[8] Today’s reality can be a blend of all four. AI is neither a perfect solution nor an existential threat. It is a negotiation, an evolving balance between human expertise and technological capability.

At its core, this negotiation requires cultural change. Josh Meyers, FCAS, who helped design the CAS Institute’s (iCAS) AI Fast Track Bootcamp, captured this on a recent “Almost Nowhere” podcast: “The AI Fast Track built a community. It wasn’t just about learning AI, it was about collaboration, sharing insights, and practical use cases.”[9] Technology may be revolutionary, but its adoption demands human skills — communication, collaboration, and strategic thinking — as much as computational power.

Recognizing this, CAS and iCAS have stepped into roles as guides and facilitators. Alicia Burke, director of portfolio at iCAS, put it plainly: “AI isn’t a topic you just learn once and walk away. The topic is complex and will be evolving for decades to come. This is exactly why we built the AI Fast Track with a vibrant community discussion board. We want individuals to feel welcome to share their successes and challenges with one another to further build the profession.” This reflects the thoughtful, incremental approach required to integrate AI within the cautious world of insurance.

Sessions at recent CAS Annual, RPM, and Spring Meetings further illustrate how actuaries are engaging with these complexities. At the 2025 RPM Seminar in Orlando, Sergey Filimonov explored the promise and challenges of using large language models and unstructured data. “The models are black boxes, and that goes against the transparency that’s valued in actuarial work,” he noted on the “Almost Nowhere” podcast. “There’s a lot of really interesting discussion around some of these big open questions that we’re figuring out.”[10] His reflection captures the tension between the appeal of AI’s capabilities and the actuarial commitment to transparency, nuance, and careful judgment.

These challenges and opportunities are not limited to the United States. Globally, regulators are taking a keen interest in AI, from the transparency and accountability requirements of the European Union’s 2024 AI Act to the NAIC’s ongoing discussions on AI model governance. Charlie Stone, an actuary speaking at the Spring Meeting, demonstrated AI’s potential in his “Bridging the Data Divide” session, detailing how U.K. regulators use machine learning to identify reserve deterioration and act faster than traditional methods allow. “AI tools highlight exactly where attention is needed,” he said. Yet Stone was clear: technological innovation alone is not enough. Adoption requires cultural change and overcoming operational inertia, challenges often more significant than the technology itself.
As Stone also noted on the “Almost Nowhere” podcast, U.K. regulators are already leveraging AI to monitor reserving practices while carefully balancing fairness and transparency requirements. For actuaries, these signals are clear: the era of AI governance is arriving, and their expertise will be essential in shaping how these systems align with public trust and regulatory expectations.
These barriers are echoed in recent research from the NBER, which found that successful AI adoption is slowed not only by technical challenges but also by entrenched human resistance and organizational inertia. Startups embracing AI often have younger leaders open to new methods, while large insurance organizations face the challenge of layered processes and established workflows. In many cases, incremental change remains the only viable path forward.
Technical obstacles, organizational resistance, and deeply rooted concerns about data privacy and security further complicate the journey. Insurance companies and actuarial teams grapple with the tension between innovation and the protection of sensitive customer data, facing the reality that even the most advanced AI cannot fully eliminate the risks associated with digital transformation.
Yet small, focused AI initiatives are making meaningful impacts. Consider the Air France-KLM Group, which began experimenting with 10 automation bots in 2016 and gradually scaled to 179 bots that now save 200,000 staff hours annually across customer service, cargo, and engineering operations.[11] The airline didn’t seek a sweeping overhaul; it invested in targeted, practical automation that freed staff to focus on higher-value work while improving workflows. For actuaries and insurers, the takeaway is clear: meaningful AI adoption does not require grand transformation. It can begin with small, carefully chosen projects that align with operational needs while building confidence and capacity for broader innovation.
This tension between ambition and caution, between rapid technological change and the deliberate pace of trust, defines the insurance industry’s relationship with AI. Yet the cautious pace of adoption is not shortsighted. Actuaries’ commitment to deliberation, ethical scrutiny, and rigorous standards protects both the profession and the public. A key challenge for the profession is understanding how Actuarial Standards of Practice apply to models that were not built by actuaries. Technology may advance rapidly, but the profession’s ethical foundation must remain firm.

Brian Fannin, ACAS, echoed this need for foundational clarity in predictive modeling. “Individual claim reserving and predictive modeling have similar objectives — to find predictive elements. Actuaries need to continuously update their predictive skill sets,” he said on the “Almost Nowhere” podcast.[12] His insight captures the paradox of technological progress: innovation often requires a return to first principles, reinforcing the importance of human judgment and ethical frameworks, particularly in matters of data privacy, security, and bias.
Actuaries are not merely spectators of this technological shift. Through initiatives like the CAS Institute’s AI Fast Track program, they are actively learning how to distinguish hype from substance, ensuring the promise of AI’s usefulness translates into meaningful, responsible action. The program opens by demystifying AI, reminding participants that beneath the buzz lies a set of sophisticated algorithms, not a replacement for human judgment. Sessions guide actuaries from the fundamentals of search techniques and rules-based AI to advanced discussions on machine learning, deep learning, and generative AI, equipping them with practical skills while emphasizing the irreplaceable value of domain expertise.
A key theme running through the Fast Track is that actuaries, with their grounding in data ethics and risk, are uniquely positioned to guide how AI is implemented responsibly within insurance. In the program’s capstone session, “Mind, Model, Morality,” the discussion moves beyond technical considerations to the ethical and philosophical implications of AI. It challenges participants to consider bias, judgment, and governance, ensuring that new tools are used in ways consistent with actuarial standards and the public trust.

Max Martinelli, who co-designed the Fast Track, emphasized the importance of actuaries claiming their seat at the table as companies build AI strategies. “That domain knowledge is key,” he said. “It’s not the crunching of numbers, it’s the domain knowledge, and actually getting your hands dirty can really be part of the process.”
He has consistently urged actuaries to resist the narrative that AI will replace them, advocating instead for a focus on practical, well-scoped applications that build confidence and demonstrate value. “We try to root ourselves in practical innovation,” he explained. “It really just goes back to the fact that insurance risk is typically very multivariate in nature. It’s about using the right tools for the right use case.”
Martinelli also dispelled the notion that actuaries need to become AI engineers before engaging with these tools meaningfully. “You don’t have to read a ton of textbooks to get started. You can learn by doing,” he said, explaining that repeated, hands-on use is what gives people the knowledge to tie it to use cases.
Perhaps most importantly, he encourages actuaries to see the potential of their existing modeling skills to unlock new business opportunities and drive organizational improvement. “Actuaries already have these powerful modeling skills, and more tools are coming out that allow us to model quickly,” Martinelli noted. “We can start using them to solve questions we’re already answering, better and faster.”
Taking the first step
For actuaries, embracing AI doesn’t require becoming engineers overnight. It begins with curiosity and participation: seeking out training like the AI Fast Track On Demand program or sessions from the 2025 iCAS Data Science & Analytics Forum, including “Reserve with Machine Learning” and “Tech for Pros: An Overview of Modern Ops.” It means attending CAS meetings with AI-focused sessions, testing small projects such as claims triage or reserving data exploration, and joining CAS and iCAS community discussions to learn from peers. It also means advocating for ethics and governance in your company’s AI rollouts, ensuring these tools align with the profession’s mission of protecting the public while delivering practical value.
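A small project really can be small. The sketch below ranks hypothetical claims for early adjuster attention; the features, the hand-set weights, and the logistic form are illustrative assumptions, not a fitted model:

```python
import numpy as np

# Toy claim records: (report_lag_days, initial_estimate, attorney_involved)
# Fields and values are hypothetical, chosen only to show the mechanics.
claims = np.array([
    [3,   2_000, 0],
    [45, 18_000, 1],
    [10,  5_500, 0],
    [60, 40_000, 1],
], dtype=float)

def triage_score(claims):
    """Score 0-1 for 'needs early adjuster attention' via a hand-set
    logistic combination of standardized features (illustrative only)."""
    x = (claims - claims.mean(axis=0)) / claims.std(axis=0)
    weights = np.array([0.8, 1.2, 1.0])  # assumed, not fitted to data
    return 1 / (1 + np.exp(-(x @ weights)))

scores = triage_score(claims)
ranked = np.argsort(-scores)  # indices of claims to review first
```

A production version would fit the weights to historical outcomes and face all the governance questions discussed above, but even a toy like this builds the hands-on familiarity Martinelli describes.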
Where does this leave actuaries today? In many ways, at the center of a quiet but profound transformation. AI is neither a cure-all nor a curse. It is a powerful tool that, to be effective, requires careful judgment, critical analysis, and ethical rigor — traits that have long defined the actuarial profession.
As organizations race to deploy AI tools, a clear message has emerged from the actuarial community and beyond: building these systems responsibly requires structure, accountability, and sustained oversight. It is not enough to launch a model and move on. The World Economic Forum has called for the creation of dedicated heads of AI ethics to guide implementation, advocating for policies that keep models in beta longer, require thorough documentation, utilize external assessments, and commit to ongoing employee training. MIT Sloan researchers echo this, urging leaders to treat AI not as a one-off deployment but as a dynamic system that demands continuous scrutiny and adaptation.
This perspective resonates with Martinelli’s objectives emphasized in the CAS Institute’s AI Fast Track program. Actuaries, data scientists, and analysts increasingly seek not just to use AI tools, but to have a seat at the table as their organizations develop AI strategies and governance frameworks. Their message is clear: in a field where decisions are only as good as the assumptions and processes behind them, ensuring AI is deployed responsibly is not a sideline concern. It is essential.
The conversation about AI in actuarial work will continue, evolving with each new technology and challenge. Alicia Burke’s invitation remains a clear call to action: “We want to hear how AI is showing up in your work and where you need support. Reach out. Let’s keep this conversation evolving.” This is how actuaries, and indeed all professionals, will navigate the uncertainties of an AI-driven future.
It is this careful, collaborative negotiation between human expertise and technological innovation that may ultimately offer the world a valuable model: a thoughtful approach to progress that balances optimism with skepticism, anchored in ethics and wisdom. In a moment defined by rapid technological change, such a model is not just valuable. It is essential, and it is precisely the role actuaries have long prepared to play, ensuring that the promise of AI serves people first while safeguarding the trust at the heart of the P&C profession.
Dan Jackman is the CAS Senior Marketing Consultant and Executive Producer of the “Almost Nowhere” podcast.
- [1] John Hope Bryant, “Yes, AI Is Coming for Your $100K Job. But It Could Build Great Jobs for Many More,” Time, July 31, 2025, https://time.com/7306692/ai-taking-jobs-more-opportunities/.
- [2] Gartner, “Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027,” press release, June 25, 2025, https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027.
- [3] Alexander Bick, Adam Blandin, and David J. Deming, “The Rapid Adoption of Generative AI” (working paper, Federal Reserve Bank of St. Louis, September 18, 2024), 3, https://ctstate.edu/images/Forms-Documents/AI-presidential-fellows/The-Rapid-Adoption-of-Generative-AI.pdf.
- [4] World Economic Forum, “Future of Jobs Report 2025” (2025), https://reports.weforum.org/docs/WEF_Future_of_Jobs_Report_2025.pdf.
- [5] Bloomberg Intelligence, “Generative AI 2024, Assessing Opportunities and Disruptions in an Evolving Trillion-Dollar Market” (2024), https://assets.bbhub.io/promo/sites/16/Bloomberg-Intelligence-NVDA-Gen-AIs-Disruptive-Race.pdf.
- [6] Kristina McElheran et al., “AI Adoption in America: Who, What, and Where,” NBER Working Paper No. 31788 (National Bureau of Economic Research, October 2023), 1, https://www.nber.org/system/files/working_papers/w31788/w31788.pdf.
- [7] Alicia Burke and Max Martinelli, “Frank Chang, The Growth Set,” Almost Nowhere, June 4, 2025, podcast, https://open.spotify.com/episode/20P2bA3SiW4dXI48biTwvS?si=zaOtzHc0RIW1SRMIgk4HAQ.
- [8] Jim Weiss, “Four Futures for Actuaries in the Wake of AI,” Actuarial Review, July 13, 2023, https://ar.casact.org/four-futures-for-actuaries-in-the-wake-of-ai/.
- [9] Alicia Burke and Max Martinelli, “Josh Meyers, Actuaries in the Age of AI,” Almost Nowhere, February 13, 2025, podcast, https://open.spotify.com/episode/3Qy8mGBvlK45lHG6cIRwdT.
- [10] Alicia Burke and Max Martinelli, “Sergey Filimonov,” Almost Nowhere, February 18, 2025, podcast, https://open.spotify.com/episode/3FVgt2rsxnkrpMd7nm7ijd?si=3G0uiOrORIGhtv3OQy3TPQ.
- [11] Karl Flinders, “Air France-KLM to Increase Intelligence of Bots That Saved 200,000 Hours,” Computer Weekly, July 3, 2025, https://www.computerweekly.com/news/366627136/Air-France-KLM-to-increase-intelligence-of-bots-that-saved-200000-hours.
- [12] Alicia Burke and Max Martinelli, “Charlie Stone and Brian Fannin,” Almost Nowhere, July 1, 2025, podcast, https://open.spotify.com/episode/5mATz1FsAGoyDMZHc2Ao2y?si=kg70cmH1ROOPAnxbPMWcGw.
COMPENDIUM
List of AI Articles and Products from the CAS and iCAS
Actuarial Review Articles:
- Agentic AI: Your New Actuarial Coworker, May 16, 2025
https://ar.casact.org/agentic-ai-your-new-actuarial-coworker/
- A Focus on Research and Volunteers, March 20, 2025
https://ar.casact.org/a-focus-on-research-and-volunteers/
- Artificial Intelligence Gone Nuclear, March 20, 2025
https://ar.casact.org/artificial-intelligence-gone-nuclear/
- AI Regulation in Insurance: A Road to Unintended Consequences, March 20, 2025
https://ar.casact.org/ai-regulation-in-insurance-a-road-to-unintended-consequences/
- From AI to Climate Risk: Updates from the recent IAA Meeting in Tallinn, Estonia, January 23, 2025
https://ar.casact.org/from-ai-to-climate-risk-updates-from-the-recent-iaa-meeting-in-tallinn-estonia/
- Rapidly Evolving Technology and Its Implications for the Reserving Process, January 23, 2025
https://ar.casact.org/rapidly-evolving-technology-and-its-implications-for-the-reserving-process/
- Four Futures for Actuaries in the Wake of AI, July 13, 2023
https://ar.casact.org/four-futures-for-actuaries-in-the-wake-of-ai/
CAS Signature Events:
- Application of AI and Machine Learning in (Re)Insurance, 2025 Reinsurance Seminar
https://www.pathlms.com/cas/events/12221/event_sections/17909/video_presentations/361288
- AI: A Multi-faceted Cyber Threat, 2025 Spring Meeting
https://www.pathlms.com/cas/events/12068/event_sections/17771/video_presentations/356915
- Risk Evaluation for a Cloud-Based AI Model, 2025 Spring Meeting
https://www.pathlms.com/cas/events/12068/event_sections/17770/video_presentations/356912
- How Actuarial Science Can Benefit from AI… and Vice Versa, 2025 Spring Meeting
https://www.pathlms.com/cas/events/12068/event_sections/17771/video_presentations/356916
- AI-Empowered Actuaries: An Introduction to AI Agents, 2025 Spring Meeting
https://www.pathlms.com/cas/events/12068/event_sections/17770/video_presentations/356950
- AI Insurance: Managing and Underwriting Enterprise AI Risks, 2025 Spring Meeting
https://www.pathlms.com/cas/events/12068/event_sections/17770/video_presentations/356909
- Bridging Data Divides: AI as a New Paradigm for Unstructured Data, 2025 RPM Seminar
https://www.pathlms.com/cas/events/11818/event_sections/17370/video_presentations/347807
- The Use of A.I. in Insurance, 2025 RPM Seminar
https://www.pathlms.com/cas/events/11818/event_sections/17371/video_presentations/347814
- Reserve with Machine Learning, 2025 iCAS Data Science & Analytics Forum
https://www.pathlms.com/cas/courses/104478/video_presentations/348393
- Tech for Pros: An Overview of Modern Ops, 2025 iCAS Data Science & Analytics Forum
https://www.pathlms.com/cas/courses/104478/video_presentations/348395
- ERM: Using AI in Scenario and Stress Testing for Optimizing Insurance Strategy (Part 1), 2024 Annual Meeting
https://www.pathlms.com/cas/events/10243/event_sections/16507/video_presentations/328264
Webinars, Workshops, Bootcamps, & Bundles
- The Intersection of Actuarial Science and Artificial Intelligence, October 15, 2024
https://www.pathlms.com/cas/courses/74642
- iCAS AI Fast Track (on demand version coming soon)
- Actuarial Workflows with AI, 2024
https://www.pathlms.com/cas/courses/90048/video_presentations/332504
- iCAS Data Science & Analytics Forum (Reserve with Machine Learning and Overview of Modern Ops), March 2025
https://www.pathlms.com/cas/courses/104478
- P&C AI Bundle (2022-2023)
https://www.pathlms.com/cas/product_bundles/4848
Podcasts: Almost Nowhere
- Episode 1: Joshua Meyers
https://open.spotify.com/episode/3Qy8mGBvlK45lHG6cIRwdT
- Episode 2: Sergey Filimonov
https://open.spotify.com/episode/3FVgt2rsxnkrpMd7nm7ijd
- Episode 6: Jim Guszcza
https://open.spotify.com/episode/1FKXH9kXPkuhVkcqvig8Li?si=xpBWktRoRgeuKw2kRORqsg
- Episode 7: Frank Chang
https://open.spotify.com/episode/20P2bA3SiW4dXI48biTwvS?si=zaOtzHc0RIW1SRMIgk4HAQ
- Episode 8: Charlie Stone & Brian Fannin
https://open.spotify.com/episode/5mATz1FsAGoyDMZHc2Ao2y?si=kg70cmH1ROOPAnxbPMWcGw