Professional Insight

The AI Cheat Code: How ChatGPT (and AI Tools) Will (and Won’t) Forever Alter Human Work

Alex Salkever — co-author of the books Your Happiness Was Hacked: Why Tech is Winning the Battle to Control Your Brain — And How to Fight Back and The Driver in the Driverless Car: How Our Technology Choices Can Change the Future — was the featured speaker at the CAS 2023 Annual Meeting.

In these books and in dozens of articles published online, he explores exponentially advancing technologies such as robotics, genomics, renewable energy, quantum computing, artificial intelligence, open-source software, drones and driverless cars.

Salkever served as a technology editor at BusinessWeek.com and as a guest researcher at the Duke University Pratt School of Engineering.

He opened his remarks by presenting the background behind the rapid growth of artificial intelligence (AI). Throughout history, he argued, technological growth has been exponential, and he traced examples from the harnessing of electricity through radio and mobile phones. The latest such technology is AI, whose most familiar example, ChatGPT, amassed 200 million users in just three months.

He attributed this rapid growth to four factors that improve every year: computing power, networks, sensors and data. Technologies that were once cost-prohibitive have become affordable. Salkever cited gyroscopes as an example; once bulky and expensive, they can now be bought for a few dollars. As a result, the volume of data available for machine learning has soared.

He noted the increasing number of jobs in which AI performs better than humans do. However, we do not need to worry just yet. Salkever presented three stages of human work that AI attempts to perform:

  • Basic work entails writing emails, blogs, basic research, creating Excel formulas, writing computer code and creating images, videos and text.
  • Medium work involves creating business plans, performing detailed research, writing simple computer programs and building websites.
  • Advanced work includes negotiating among multiple parties, writing entire computer programs, autonomously creating businesses, navigating complicated systems and conducting original research.

Despite news of AI solving hard mathematical optimization problems and passing difficult tests such as bar exams and medical licensing exams, Salkever asserted that today’s AI can only do basic work and has a long way to go before it can perform more complicated tasks.

He used the calculator as an example: Did it replace accountants? Actuaries? Of course not. Instead, he shared two illustrative use cases for how AI can help us in our work:

  • By freeing up our time, AI lets us accomplish more work.
  • AI helps less-experienced workers to learn their jobs.

Despite these benefits, Salkever reminded us of the risks and limitations of AI systems. Just like any other algorithm, AI is only as good as the data it is given. Any bias in the output reflects the bias already present in the input. He said, “AI does not understand people, gender and physics” and can sometimes produce nonsensical results.
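
To make the "bias in, bias out" point concrete, here is a minimal sketch of my own, not something from the talk: a toy hiring model, fit with scikit-learn on historical data in which one group was hired less often at equal skill, learns to penalize group membership itself.

```python
# Toy illustration of "bias in, bias out": a model trained on biased
# hiring history reproduces that bias, even though skill is identically
# distributed across the two groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # protected attribute: 0 or 1
skill = rng.normal(0.0, 1.0, n)      # same skill distribution in both groups
# Biased historical labels: group 1 was hired less often at equal skill.
hired = skill + rng.normal(0.0, 0.5, n) - 0.8 * group > 0

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Score 1,000 equally skilled candidates from each group: the model
# reproduces the historical gap, because the gap was in its training data.
for g in (0, 1):
    X = np.column_stack([np.full(1000, g), np.zeros(1000)])
    print(f"group {g}: predicted hire rate "
          f"{model.predict_proba(X)[:, 1].mean():.2f}")
```

Every hypothetical candidate scored here has identical skill, yet the two groups receive different predicted hire rates; the bias came in with the data and went out with the predictions.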

AI misuse is also rampant, and Salkever pointed to the example of Cigna using AI to deny hundreds of thousands of claims without any second-level human review. Further, he emphasized that AI does not understand the concept of truth, and therefore does not filter lies.

Salkever cautioned against over-reliance on AI: it is dangerous and can lead to the loss of repositories of public knowledge. He used navigation as an example: When was the last time you drove anywhere without GPS?

He closed by sharing several ways AI is already in use today, such as:

  • Transcribing meetings.
  • Analyzing long documents (see the sketch after this list).
  • Conducting initial business research.
  • Drafting business writing.
  • Inspecting vegetation with infrared satellite scans when power lines are knocked down.
  • Creating better risk models (e.g., Kettle).
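
The long-document use case maps naturally onto today’s large-language-model APIs. Below is a minimal sketch, my illustration rather than anything Salkever presented; it assumes the OpenAI Python SDK, an OPENAI_API_KEY in the environment, and a model name that may need updating. It splits a document into chunks, summarizes each chunk, then summarizes the summaries.

```python
# Sketch of long-document analysis: chunk, summarize each chunk,
# then summarize the summaries. Assumes the OpenAI Python SDK and
# an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(text: str) -> str:
    """Ask the model for a short summary of one piece of text."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own
        messages=[{
            "role": "user",
            "content": f"Summarize the following in three bullet points:\n\n{text}",
        }],
    )
    return resp.choices[0].message.content

def analyze_long_document(doc: str, chunk_chars: int = 8000) -> str:
    """Summarize a document too long for a single prompt."""
    chunks = [doc[i:i + chunk_chars] for i in range(0, len(doc), chunk_chars)]
    partial = [summarize(chunk) for chunk in chunks]
    return summarize("\n\n".join(partial))  # summary of the summaries
```

Character-based chunking is the crudest possible splitter; real tools split on sections or tokens, but the map-then-combine shape is the same.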

The session ended with a Q&A, paraphrased below for brevity.

Q: What insurance problems do you think AI will solve?

A: Mitigating climate risk, health insurance assessment and microinsurance.

Q: Are AI training sets mostly in English? If yes, would bias exist?

A: Yes, this is a known problem; most of the data comes from the West. One example of how this bias is addressed is in China, where models are tuned so that Chinese citizens are portrayed as equals without a class system. Another approach is training smaller language models on textbook-style data, as Microsoft is doing.

Q: What are your thoughts about adding quantum computing to AI?

A: It is too early; the applications are limited for now. When it happens, it will turbocharge AI. We will have much faster problem solving, leading to new problems we have not even thought about. The problems of the future are going to be some version of, “What questions should I ask?” instead of, “How can I solve this?”

Q: What are your thoughts on data protection with respect to using AI for personal versus business use?

A: I am skeptical of disclaimers. While it is hard to extract private data, an AI could theoretically still do it given the right prompts. For now, be cautious with private data, and definitely do not enter it into public AI systems.

Q: Would over-reliance on AI make our skills weaker?

A: Definitely. Studies show that the part of the brain responsible for geospatial navigation atrophies when people rely on GPS even for short drives, as everyone does nowadays. Outsourcing swaths of our core knowledge is therefore harmful.

Q: Has there been an increase in gatekeeping information from AI?

A: 100%. Big organizations are getting their data scraped by GPT, and it is now a battle royale for data. Many of AI’s existing problems stem from its being trained on social media, a suboptimal training source.

Q: Are there intellectual property issues related to the use of AI, such as copyright/trademark infringement?

A: There are. However, the providers of AI such as Microsoft, Adobe, etc. have clauses that will indemnify you in the event you are sued for using their AI. The question now is: What constitutes fair use?


Nick Witras, ACAS, MAAA, is a member of the Actuarial Review Working Group. He is a senior actuarial analyst for Chubb as well as a radar expert, crypto enthusiast and automation innovator.