A Proposed New Direction for the CAS Syllabus and Exams

This is an extended version of the “Actuarial Review” article on this topic. It covers topics in more detail than space permitted there, for those who want to hear more about the reasoning behind the proposals. The purpose of this proposal is to start a conversation among our members and CAS Leadership on the direction our syllabus and exam process should take. I anticipate that there will be a wide range of responses, which is healthy.

My proposal has six principal goals, with the first two relating to syllabus changes and the last four to the exam format:

  • Increase coverage of applied statistics or data science topics
  • Improve integration of applied statistics or data science topics with ratemaking and reserving
  • Decrease exam result release time
  • Decrease volunteer grading time
  • Improve the feedback given to candidates on exam results
  • Increase the number of practice questions available to candidates

We are at a point where we need to make material changes to our syllabus and exams. The Society of Actuaries (SOA) gives our current preliminary exams, and it will begin making changes to them (Exams 1, 2 and 3F) starting in 2023, with Exam 3F (SOA Exam IFM) being eliminated. Statistical modeling technology has changed rapidly in recent years while our syllabus has remained largely static, which means our syllabus needs to catch up if we are to provide the training our candidates need to succeed.

I will acknowledge that there are actions underway to change our syllabus. There was a recent announcement that in 2023 Exam 3F will be replaced by On-Line Course 3, a course like DS1 – Data Concepts and Visualization from the iCAS syllabus, and that there will be changes to MAS I & II. The details of the Exam 3F replacement have not yet been published, though, and there is no word on what the revised MAS I & II exams will look like. The Admission Transition Plan published by the CAS mentions that there will be changes to other exams in later years, but at this point no details are available. As a result, I will focus on the published syllabus for 2022 for most of this article, but I will briefly mention On-Line Course 3.

A comparison of the current syllabus to the proposed one is shown in the appendix. I recommend printing those exhibits and keeping the current vs. proposed summary at hand while reading this article; it is the best way to follow the reasons for the proposed changes.

The bulk of this article is devoted to explaining the transition from the current to the proposed syllabus. I have not included details on a new syllabus in this article, since that would be the work of many people.

The expanded article consists of eight sections:

  • Syllabus Assumptions
  • Proposed Syllabus Changes Summary
  • Preliminary Exams
  • Validation by Educational Experience (VEE) and Online Courses
  • Associateship Exams
  • Fellowship Exams
  • Exam Testing Format
  • Conclusion

Syllabus Assumptions

Before discussing changes to any of the exams, a general set of statements on the goals of the syllabus and exams, as well as the constraints we face, seems useful. The list below is long, which means deciding which items belong on the syllabus will involve some difficult tradeoffs. The assumptions and limitations follow.

  • The content of our syllabus should be consistent with the idea that we are similar to the data science group. We are good at math, but our members need to have a broad range of skill sets.
  • There are limits to the exam study time required after graduation from college to achieve Fellowship (travel time) that the candidates and employers will tolerate. We should be selective when choosing material that we need to test at the level of rigor of our exam process versus using either the Validation by Educational Experience (VEE) or online courses route.
  • If we add new material, we need to find older material that can be eliminated or reduced in coverage.
  • The CAS lacks the resources to expand the number of topics covered by VEE on its own. The VEE process is supported jointly by the CAS, CIA, and SOA today.
  • Our clients view our primary competency as being good at the math that matters for Property & Casualty (P&C) ratemaking and reserving which means we should focus our exams on those areas. Given that assumption, the primary focus for our exams should be to equip our candidates to use the results of commonly available, open-source modeling tools in our work. Given the regulatory and financial reporting environment in which we operate, we also need an exam that is focused on those topics.
  • Our members should be able to speak the language of finance, accounting, economics, and policy forms and coverage, but we do not need to test that material at the same level of rigor as our exam process. We can use a combination of VEE and online courses to satisfy our requirements for those topics. Acquiring some programming skill is necessary, but it need not be at the level expected of full-time programming staff. We can use online vendors with certificate programs, such as Coursera, to verify that our candidates have some familiarity with programming.
  • Our syllabus should be updated on a regular basis to reflect changes in the open-source modeling tools as they become commonly available. The syllabus and exam group should talk regularly with our research group to gauge the need to make changes.
  • We need to retain adequate coverage of the more traditional methods as currently covered under Exam 5. Our current Exam 5 techniques are still widely used, and candidates need to understand those techniques to perform on the job.
  • We should define the level of competency expected for the application of modeling tools to match that of upper-level undergraduate or first-year graduate courses in Statistics or Data Science. We are not in the business of minting PhD-level statisticians, yet results from modeling tools cannot appear as a “black box” to our members in practice.
  • Today, we have one syllabus and set of exams that all candidates take, even though our candidates will go down different career paths. I assume that we will continue to have one syllabus for all candidates. Not all of our candidates will do hands-on modeling work, but all of them need to understand the results of predictive analytics tools well enough to use them when writing a statement of actuarial opinion on reserving or ratemaking assignments for clients or regulators.
  • We should test a candidate’s grasp of the underlying assumptions behind the modeling tools covered on our syllabus and their ability to interpret the output from those tools, but we should not test their ability to write code to build models as one would in a data science unit.
  • Our textbooks and training material should come from the same pool of material used by the applied statistics and data science group whenever possible for the proposed Exam 3 and the two revised MAS exams. We have limited volunteer time available and there is no lack of either textbooks or self-study guides we can use. Then too, if we use the same reference material as the applied statistics and data science group, we will speak their language.
  • Some exposure to hands-on modeling is necessary to internalize the concepts covered in the modeling theory exams (Exam 3 and the two MAS exams) and later in the Fellowship exams (revised Exams 7-9). We should not test the ability to code models in an exam setting, though; rather, we should assign homework in the form of simple modeling assignments to be uploaded and checked mechanically to confirm they run correctly.
  • This is a long list, which means deciding which items are on the syllabus will be an ongoing discussion.

Proposed Syllabus Changes Summary

Exam 3/IFM is going away, which implies we can drop related material on our current Exam 9. The combined result is that we have some space available on the exam portion of our syllabus. The later sections of this article covering individual exam changes describe how that space could be used on an exam-by-exam basis, but the general goals are:

  • Improve our coverage of machine learning
  • Move some modeling coverage to an earlier exam
  • Improve our linkage between modeling theory in the earlier exams and application in the later exams
  • Scale back coverage of our traditional ratemaking and reserving techniques

For the VEE and online portion of our syllabus the changes are:

  • Scale back the online courses from The Institutes
  • Introduce online courses to cover programming

The FCAS designation would still require Exams 7-9 plus the ACAS designation. The ACAS designation would still require VEE credits, Online course completion, Exams 1-3, 5, 6, MAS I, and MAS II plus completing the Course on Professionalism.

The proposed syllabus is consistent with the idea that when new material is added (better coverage of predictive analytics) other material must be eliminated or condensed.

I attempted to keep the scope of the proposed changes to the syllabus small enough to achieve them within the next two years, although I would say that completing all of the changes within a two-year period will be challenging. There are some changes that I believe would be useful but are out of scope for completion in the next two years, which I mention in some of the sections below.

Proposed Preliminary Exams

The proposed preliminary exams are:

  • Exam 1/P
  • Exam 2/FM
  • Exam 3: Introduction to Modeling:
    • Include Statistics from current MAS I Section B (except for Section B.3)
    • Include Extended Linear Models from current MAS I Section C
    • Include Time Series with Constant Variance from current MAS I Section D

Exam 1/P is currently on the syllabus and would cover Probability, with the syllabus determined by the SOA.

Exam 2/FM is currently on the syllabus and would cover Financial Mathematics with the syllabus determined by the SOA.

Exam 3 is a new exam and would cover about 75% of the current MAS I exam, which frees up space for improving our coverage of machine learning topics and parametric distributions on the MAS exams. The Statistics section would cover material commonly found in the second semester of a two-semester Introduction to Mathematical Statistics sequence. The Extended Linear Models section would cover Ordinary Least Squares and Generalized Linear Models and give exposure to extended linear modeling topics like Generalized Additive Models. Time Series with Constant Variance would cover an introduction to ARIMA plus adapting regression models to time series analysis. Courses covering most of this material should be available at the undergraduate level.
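To make the Extended Linear Models bullet concrete, here is a minimal sketch of the simplest technique on that list, Ordinary Least Squares, fit in closed form on made-up data. Any course-level treatment would go much further; this only illustrates the kind of mechanics candidates would study.

```python
# Minimal illustration of Ordinary Least Squares, one of the topics the
# proposed Exam 3 would cover: fit y = b0 + b1*x by minimizing squared error.
# Pure Python; the data are made up for illustration.

def ols_simple(x, y):
    """Closed-form OLS estimates for a one-predictor linear model."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b1 = sxy / sxx          # slope
    b0 = ybar - b1 * xbar   # intercept
    return b0, b1

# Example: data generated exactly on the line y = 2 + 3x, so OLS recovers it.
x = [1.0, 2.0, 3.0, 4.0]
y = [5.0, 8.0, 11.0, 14.0]
b0, b1 = ols_simple(x, y)
```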

Our reference material for Exam 3 should come from commonly used college level textbooks.

Candidates who complete the three preliminary exams while in college will have covered some commonly used class plan modeling tools before starting work, which is a benefit to employers. Completing Exams 1-3 is a good indication to employers that a candidate could pass the later exams.

For college students enrolled in an applied statistics or data science program, studying for Exams 1 & 3 while in college would mean the candidates are simply reviewing coursework they would cover in any event. Time spent reviewing that material is a low-risk investment which would help them in a variety of career paths which, in turn, helps us when competing for candidates. College students who do well in applied statistics or data science courses will have attractive career options outside of actuarial science.

Currently, we are scheduled to replace Exam 3/IFM in 2023 with On-Line Course 3, which covers data concepts. As I understand the proposed content, many of the concepts covered will not make sense to a candidate until they have covered the material on Exam 5, making it unsuitable for a preliminary exam. The data visualization portions of the data concepts course would be covered by Exploratory Data Analysis or actual-to-predicted results examples in the Study Notes for the proposed later exams.

At one time, we had an exam testing Calculus and Linear Algebra. Sometime in the future, we should review whether resuming testing on those topics as part of the preliminary exams would reduce travel time for passing the predictive analytics exams after graduation. Such an exam would motivate candidates to take Linear Algebra while in college as part of a typical two-year sequence in Calculus and Linear Algebra. Reviewing that coursework could help candidates retain enough of it that reading the mathematics behind the predictive analytics tools would be less challenging.

Validation by Educational Experience (VEE) and Online Courses

The proposed syllabus would:

  • Retain the five current VEE topics
  • Condense On-Line Courses 1 & 2 from The Institutes into one course describing policy forms/coverage and insurance operations
  • Create three new programming topics (Python, R and SQL) to be covered via a vendor certificate for each topic

Validation by Educational Experience

The proposed syllabus would keep the current set of VEE courses which give our candidates exposure to commonly used business concepts and vocabulary. Those courses are given below:

  • Introduction to Finance
  • Introduction to Financial Accounting
  • Introduction to Cost Accounting
  • Introduction to Microeconomics
  • Introduction to Macroeconomics

Eventually, expanding the VEE course list would have value, but I believe it’s beyond the scope of what we should try in the next two years given the work required to bring improved coverage of predictive analytics online.  Comments on where expanding VEE could be helpful are given below.

I believe there would be a great deal of value in expanding the VEE list to include additional Finance topics. We are in a forced-choice environment for space on the exam syllabus, and predictive analytics should win that contest over additional finance topics like those in Sections A & B of the current Exam 9. Yet additional material yielding a better understanding of how finance professionals think about risk and reward has merit.

Ideally, we would add policy forms and coverage, plus a limited introduction to insurance operations, to the VEE course listing. That would let college students get credit for the topic and let us completely drop the online course from The Institutes proposed below.

Adding courses on R, Python and SQL to the list of VEE courses has appeal, since candidates would get credit towards graduation at their school through that route.

Online Courses from The Institutes

We would go from two On-Line Courses from The Institutes to one. That one course would cover policy forms and coverage plus some limited coverage of company operations.

Our candidates require some familiarity with P&C vocabulary relating to topics like policy coverage options or insurance company operations to work effectively in a P&C insurance company, but the two On-Line Courses we have today effectively add a mini-CPCU program to our syllabus. Each company will have its own policy forms and company structures will vary which means including a limited, general introduction to those topics on our syllabus is the appropriate course of action.

Also, a portion of the current On-Line Courses appears to duplicate later exam material. Our current Exam 6, which would continue as the proposed Exam 6, offers extensive coverage of insurance regulation and insurance accounting that appears to be duplicated in the current On-Line Course offerings. A portion of the current Exam 7, which would become the proposed Exam 9, covers risk management topics that are also found in the On-Line Course offerings.

Programming Certificates

The online programming certificates would be:

  • Introduction to R
  • Introduction to Python
  • Introduction to SQL

There are several online vendors who offer courses leading to a certificate. We would need to identify which vendors and courses meet our requirements.

Some level of programming skill in R and Python is necessary to work applied statistical modeling and data science examples, which is essential to understanding the modeling material on the proposed syllabus. An introduction to SQL would give candidates exposure to database concepts. Picking up an introduction to R, Python and SQL gives candidates marketable skills for a variety of career paths, which makes this a low-risk requirement for college students to complete while in college and, in many companies, a useful skill set for on-the-job assignments.

Associateship Exams

The list of proposed exams includes:

  • Exam 5: Basic Concepts for Ratemaking and Estimating Liabilities
  • Exam 6: Regulation and Financial Reporting
  • Exam MAS I: Modern Actuarial Statistics I
  • Exam MAS II: Modern Actuarial Statistics II

Exams 5 and 6 provide the specialized vocabulary and concepts someone new to the P&C actuarial field requires to work effectively in an actuarial area.

Exams MAS I & II provide the additional modeling tools beyond those covered on the preliminary exams required to deal with the complexities of solving P&C ratemaking and reserving problems. The modeling techniques covered under the MAS exams could be used in ratemaking, reserve analysis or setting risk management metrics; so, covering those topics before the proposed Fellowship exams is efficient.

One need not take the exams in the sequence I have listed as long as we do not use ratemaking or reserving concepts when writing test questions for the MAS exams or when creating homework assignments. There are advantages to continuing with the modeling material and taking the MAS exams immediately after passing Exam 3 but reading the material on Exam 5 immediately after gaining full-time employment has advantages too.

I suggest that we defer the use of P&C ratemaking and reserving examples in applying modeling techniques until the Fellowship exams. The details of those examples can be complex, and using simple examples to illustrate the modeling techniques lets the candidates focus on those techniques. Then too, the question of which modeling tool should be used for a given application will be a key concept to cover on the Fellowship exams.

One should note that both Machine Learning and Bayesian MCMC are techniques where advances in computing power, algorithms, and software in recent years have made it much more practical for actuaries to use them than in earlier times. Then too, the fact that we would use open-source software means candidates avoid the cost of acquiring proprietary or commercial software, which makes it practical to include using those tools as homework. Expanding coverage of those tools is consistent with the idea that our syllabus should adapt to changing technology.

In general, the packages used for applied statistics or data science continue to evolve, which makes testing how to code using those tools impractical. We should focus our exams on interpreting modeling output, on the assumptions behind those tools, and on setting up a model structure to meet a given modeling goal.

We should create homework assignments in the form of simple modeling assignments to go along with the topics covered in Exam 3 and the two MAS exams. Those modeling assignments would be uploaded by the candidates, and we would verify the results by mechanically executing the uploaded programs. Candidates would need to complete homework assignments for a given exam in addition to achieving a passing score on the exam to receive credit for the exam.  The homework assignments should not assume any knowledge of P&C ratemaking or reserving vocabulary since Exam 5 will not be a prerequisite.
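As a sketch of what “mechanically executing the uploaded programs” could look like, here is a minimal grader that runs a candidate's Python submission in a subprocess and compares its printed output to the expected answer. The file handling, exact-match check, and timeout are illustrative assumptions, not a worked-out grading system.

```python
# Hedged sketch of mechanical homework checking: run the uploaded script
# and pass it if it exits cleanly with the expected printed output.
import os
import subprocess
import sys
import tempfile

def grade_submission(script_text, expected_output, timeout_sec=30):
    """Run a candidate's script in a subprocess; pass if it exits cleanly
    and its printed output matches the expected answer exactly."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(script_text)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout_sec,
        )
        return result.returncode == 0 and result.stdout.strip() == expected_output
    except subprocess.TimeoutExpired:
        return False
    finally:
        os.unlink(path)

# Example: a trivial "assignment" that must print the mean of a small sample.
submission = "print(sum([1, 2, 3, 4]) / 4)"
ok = grade_submission(submission, "2.5")
```

In practice one would compare numbers with a tolerance rather than exact string matching, but the exact-match version keeps the sketch short.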

Rearranging the material in the proposed Exam 3, MAS I, and MAS II to cover the same content in four exams rather than three had a good deal of appeal, but there are limits to the amount of change we can accept over the next couple of years. Passing a four-hour exam like the ones we have today can be a test of stamina as much as of knowledge and skill. Then too, having fewer topics on a given exam makes it easier for the predictive analytics exam committee members to assemble an exam. I believe dividing the predictive analytics topics among four exams rather than the current three should be a topic for future discussion amongst the membership.

Exam 5: Basic Concepts for Ratemaking and Estimating Liabilities

The proposed Exam 5 would:

  • Include our current Exam 5 material but limit the depth at which some topics are covered.
  • Include Introduction to Credibility from MAS II Section A.

The proposed exam would give candidates an introduction to the mechanics of ratemaking and reserving. The material for Basic Concepts for Ratemaking and Estimating Liabilities would, in large part, come from the material covered on our current Exam 5. The issue of how to deal with complex analytical problems, like those created by changing claim practices, would be deferred to the later exams, which would allow us to compress the existing Exam 5 material.

Compressing the current Exam 5 material makes it possible for us to add Introduction to Credibility, and the current Exam 5 material mentions credibility repeatedly. In earlier times, introduction to credibility was covered on Exam 4/C which would normally have been taken before the current Exam 5. When the SOA eliminated Exam 4/C, we picked up the Introduction to Credibility topic on MAS II.

Exam 6: Regulation and Financial Reporting

The proposed Exam 6 U.S. would cover the same topics as the current Exam 6 U.S.:

  • Regulation of Insurance & Insurance Law
  • Government & Industry Insurance Programs
  • Financial Reporting and Taxation
  • Professional Responsibilities of the Actuary in Financial Reporting
  • Reinsurance Accounting Principles

The proposed Exam 6 would give candidates the regulatory context in which ratemaking and reserving operate. Given the role the regulatory environment and financial reporting requirements play in P&C insurance, the actuary must understand the requirements of insurance rate regulation and financial reporting to provide the services clients expect from P&C actuaries.

MAS I: Modern Actuarial Statistics I

The proposed MAS I exam would:

  • Retain Probability Models (Section A of MAS I today)
  • Move Statistics (Section B except for section B.3 of MAS I today) to proposed Exam 3.
  • Move Extended Linear Models (Section C of MAS I today) to proposed Exam 3.
  • Move Time Series with Constant Variance (Section D of MAS I today) to proposed Exam 3
  • Include Linear Mixed Models (Section B of MAS II today)
  • Include Bayesian MCMC (Section C of MAS II today)

Today in MAS I, the Probability sections cover some useful probability concepts not covered on the current Exam 1/P, and the proposed MAS I would continue with that coverage. Also, some Life Contingency topics are covered today to meet the International Actuarial Association (IAA) requirements and are included under the probability section of proposed MAS I.

The Linear Mixed Models material will link regression methods to least squares type credibility weighting assuming the dependent variable is Normally distributed. Linear Mixed Models also allow one to test for and, if necessary, incorporate correlation effects in the model forecast.

The Bayesian MCMC section will allow one to link regression methods both to least squares type credibility weighting and to the prior and posterior distribution form of credibility weighting. The prior and posterior form of credibility weighting gives the actuary a means to specify the degree to which their expert opinion should shape the forecast, and it publicly documents how that expert opinion was used to shape the model forecast. The modeler is free to choose from a wide range of distribution functions. The Bayesian MCMC tool will also allow one to test for and, if necessary, incorporate correlation effects in the model forecast. One can also have a non-linear model and incorporate the idea of a mixture of distributions.
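A minimal sketch of the prior-and-posterior idea: with a Normal prior and Normal data, the posterior mean is exactly a credibility-weighted average of the prior mean and the sample mean, and a tiny random-walk Metropolis sampler (the core of MCMC) recovers that weighted answer numerically. All numbers below are made up for illustration.

```python
# Hedged sketch: Bayesian posterior as credibility weighting, checked by
# a bare-bones random-walk Metropolis sampler. Illustrative numbers only.
import math
import random

random.seed(42)

data = [8.0, 10.0, 12.0, 9.0, 11.0]   # observed losses (made up)
sigma2 = 4.0                           # known data variance
mu0, tau2 = 5.0, 4.0                   # prior mean and prior variance

def log_posterior(mu):
    # log prior + log likelihood, dropping additive constants
    lp = -(mu - mu0) ** 2 / (2 * tau2)
    lp += sum(-(x - mu) ** 2 / (2 * sigma2) for x in data)
    return lp

# Random-walk Metropolis: propose a move, accept with the Metropolis rule.
mu, draws = mu0, []
for i in range(20000):
    prop = mu + random.gauss(0.0, 1.0)
    delta = log_posterior(prop) - log_posterior(mu)
    if delta >= 0 or random.random() < math.exp(delta):
        mu = prop
    if i >= 5000:                      # discard burn-in draws
        draws.append(mu)

post_mean = sum(draws) / len(draws)

# Conjugate (closed-form) answer for comparison: a credibility weight
# Z = tau2 / (tau2 + sigma2/n) applied to the sample mean.
n = len(data)
xbar = sum(data) / n
Z = tau2 / (tau2 + sigma2 / n)
exact = Z * xbar + (1 - Z) * mu0
```

The sampled posterior mean sits between the prior mean and the sample mean, matching the closed-form credibility-weighted answer; real reserving applications would of course use an established MCMC package rather than a hand-rolled sampler.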

At some point, we should consider dropping the Linear Mixed Model topic and replacing it with an expanded coverage of penalized regression which is a form of credibility weighting in regression models. We currently give a short introduction to penalized regression under the topic of Extended Linear Models which would move to Exam 3 under this proposal. At the time the current MAS exam syllabus was created, Bayesian MCMC model performance was much poorer than it is today and using a Mixed Model approach was often the practical route to get to an answer. Any Linear Mixed Model structure can also be solved via Bayesian MCMC which makes it increasingly difficult to justify spending syllabus space on Linear Mixed Models. Penalized regression can be a practical method to include credibility weighting in class plan modeling with large data sets.
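To illustrate the claim that penalized regression acts as a form of credibility weighting, here is a sketch of ridge regression for a single centered predictor, where the closed form shrinks the OLS slope toward zero as the penalty grows. The data and penalty values are illustrative assumptions.

```python
# Hedged sketch of penalized (ridge) regression as shrinkage: for one
# centered predictor the closed form is b_ridge = Sxy / (Sxx + lam),
# so larger penalties pull the slope toward zero, much like a credibility
# weight pulling an estimate toward a prior of zero effect.

def ridge_slope(x, y, lam):
    """Ridge estimate of the slope for centered one-predictor data."""
    xbar = sum(x) / len(x)
    ybar = sum(y) / len(y)
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    return sxy / (sxx + lam)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]

b_ols = ridge_slope(x, y, 0.0)        # no penalty: ordinary least squares
b_mild = ridge_slope(x, y, 5.0)       # some shrinkage toward zero
b_heavy = ridge_slope(x, y, 1000.0)   # heavy shrinkage: slope near zero
```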

MAS II: Modern Actuarial Statistics II

The proposed MAS II exam would:

  • Move Introduction to Credibility (Section A of current MAS II today) to proposed Exam 5.
  • Add Applications of Parametric Distributions (material like that from the Bahnemann monograph and Section B.3 of MAS I)
  • Move Linear Mixed Models (Section B of current MAS II) to MAS I
  • Retain and expand machine learning (Section D currently: trees, random forest, gradient boosting, clustering)
  • Move Bayesian MCMC (Section C of current MAS II) to MAS I.
  • Add coverage of Neural Networks

Application of parametric distributions would cover a range of topics. Some topics would come from the former Exam 4/C, like estimating loss elimination ratios and deductible factors or coverage of heavy-tailed distributions. It would also cover material like that in the Bahnemann monograph.

At times, an actuary may have a modeling problem where the form of the model is unclear, and Machine Learning techniques like trees or neural networks provide a means to let the model learn from the data to define the model form. Other times, parametric distribution-based models may not fit the data at hand and these techniques offer a means to arrive at a model under those circumstances. Clustering is a technique that can help describe the data set characteristics by grouping similar observations.
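As a sketch of the clustering idea, here is a bare-bones k-means on made-up one-dimensional claim sizes. Real class-plan work would use richer data and an established package; this only illustrates the mechanics of grouping similar observations.

```python
# Hedged sketch of clustering: a tiny k-means that groups similar
# observations. The data are two made-up clumps of 1-D claim sizes.
import random

random.seed(0)

def kmeans_1d(xs, k, iters=50):
    """Tiny k-means for 1-D data; returns final centers and assignments."""
    centers = random.sample(xs, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = [[] for _ in range(k)]
        for x in xs:
            j = min(range(k), key=lambda c: abs(x - centers[c]))
            groups[j].append(x)
        # Move each center to the mean of its group (keep it if empty).
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    assign = [min(range(k), key=lambda c: abs(x - centers[c])) for x in xs]
    return centers, assign

# Two obvious clumps: small claims near 1 and large claims near 100.
xs = [0.9, 1.1, 1.0, 1.2, 99.0, 101.0, 100.5, 98.5]
centers, assign = kmeans_1d(xs, 2)
```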

Fellowship Exams Proposal

The three proposed Fellowship exams are:

  • Exam 7: Advanced Estimation of Policy Liabilities
  • Exam 8: Advanced Direct Business Ratemaking
  • Exam 9: Risk Management

The purpose of the Fellowship exams is to combine the modeling tools covered on proposed Exam 3 and the MAS exams with the P&C insurance specific knowledge gathered from proposed Exams 5 and 6, to equip our candidates to provide advice to clients on P&C ratemaking, reserving and risk management problems.

Source of Syllabus Material for Proposed Exams 7 & 8

Given the age of much of the reading material on our syllabus for the current Exams 7 & 8 and the need to give P&C-specific advice and training on the application of current modeling tools, I anticipate that we will need to generate new Study Notes via requests for publication and replace a good portion of the existing reading material. Then too, we need to provide our candidates with a series of examples to give them guidance on how one would really apply predictive analytics in practice. There are two generic topics for Study Notes that we will need for Exams 7 & 8:

  • Outlines of the nature of the modeling problem for each exam
  • Examples of applying a given technique to the modeling problem at hand.

There is a wealth of information in the existing syllabus material for the current Exams 7 & 8 that can serve as a starting point for Study Notes outlining the nature of the modeling problems for reserving and ratemaking. While the numerical techniques in those earlier articles may be outdated, when creating Study Notes we can divorce how the authors solved a problem from their outline of the problem they were trying to solve, and synthesize the content of those earlier articles. Consolidating readings for Exam 5 into a Study Note was helpful to candidates, and I believe the same would be true for the advanced reserving and ratemaking exams.

We need Study Notes for the new Exams 7 and 8 with numerous examples showing how the modeling theory introduced in earlier exams can be applied in practice. Creating Rmarkdown files containing both the code required to run each routine and comments on what is taking place, along with the data sets used in the examples, would provide those examples. We could generate the data sets via simulation within each example and lead the candidate through the process of manipulating data to arrive at a useful input data set for a given analytical routine. A candidate could execute each of the modeling routines covered in a Study Note to get hands-on modeling exposure, alter the commands to experiment, and benefit from the comments adjacent to different steps in the modeling process. The exact code used to create a model would not be tested. The examples would enhance our ability to ask questions related to selecting the best modeling tool for the task at hand, setting up the model structure, and interpreting the results when applied to a ratemaking or reserving problem.

We will assume that the underlying modeling theory has been covered adequately in Exam 3 and the MAS exams and that the Fellowship exams will focus on applications. One should note, though, that comments in the examples referencing specific sections of the earlier exams will prompt the candidate that remembering the material on the earlier modeling exams will be essential to passing the proposed Exams 7 and 8. The hands-on examples in the Study Notes, together with references to the relevant material on earlier exams, will provide the real-life reinforcement of those modeling concepts required to safely use those techniques in preparing an actuarial statement of opinion.

The starting point for any analysis should be an Exploratory Data Analysis to check for bad data and to understand the nature of the ratemaking or reserving problem at hand, and it would be a part of any example. The Study Note compiling the insight from earlier reserving or ratemaking papers would provide guidance on what to check in the Exploratory Data Analysis.

Translating the results of the modeling exercise into useful information for clients is a key skill to acquire. The Study Notes would include examples illustrating how to use graphs to make those results useful to clients.

We should have some simple modeling assignments for homework, uploaded by the candidates, that would use the Study Notes as a starting point. Given that candidates should have completed Exam 5 before taking the advanced ratemaking and reserving exams, those homework assignments could use ratemaking and reserving vocabulary. This should satisfy the expectation from employers that our candidates will have some ability to do hands-on modeling work after completing the exams.

Proposed Exam 7: Advanced Estimation of Policy Liabilities

Proposed Exam 7 would:

  • Move Enterprise Risk Management (Section C today of Exam 7) to proposed Exam 9
  • Move Insurance Valuation (Section B today of Exam 7) to proposed Exam 9.
  • Retain Estimating Policy Liabilities (Section A today of Exam 7) and update the material to reflect current modeling techniques commonly available via open-source software

The Estimating Policy Liability topics to be covered are:

  • Limits of link ratio methodology
  • Primary unpaid loss estimates for direct business, ceded reinsurance, and salvage & subrogation
  • Reinsurance assumed and excess unpaid losses
  • Measuring the effect of changes in company operations (underwriting and claims) on reserve estimates
  • Effect of correlation on loss estimates between lines and measurement of correlation
  • Effect of varying loss cost inflation on unpaid loss estimates
  • Total company reserve distribution
  • Premium Liabilities

The proposed Exam 7 would move away from using link ratios for reserving. Given our history, an explanation of the limitations of link ratios is necessary, and two existing monographs (one by Shapland and the other by Meyers) provide a good starting point for that topic.

I suggest that we separate the segmentation portion of modeling for reserving from the forecasting portion. Machine learning tools provide an efficient means to identify potentially useful ways to segment the reserving data set. I use the term potentially useful because one could find in practice that a finer segmentation scheme does not improve the reliability of the forecast.

The data sets used in reserving often contain complex covariance structures and often require credibility weighting to achieve plausible results (results one would feel comfortable sharing with a peer group of experienced actuaries). That combination points to Bayesian MCMC as the tool to focus on in the modeling examples contained in the Study Note. Note that a Bayesian MCMC analysis automatically produces a posterior distribution, which can serve as the basis for the unpaid loss distribution used to advise clients when setting reserve levels.
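For readers who want a feel for the mechanics, here is a deliberately tiny sketch (not a production reserving model) of a random-walk Metropolis sampler recovering the posterior distribution of a single mean development factor. The observations, prior and noise level are all invented for the illustration:

```python
import math
import random

random.seed(42)

# Toy Bayesian MCMC illustration: posterior for a mean development factor
# given a handful of observed link ratios.  Normal likelihood with known
# noise, normal prior; all numbers are invented.
obs = [1.45, 1.52, 1.48, 1.60, 1.50]   # observed age-1-to-2 link ratios
sigma = 0.06                           # assumed known observation noise
prior_mu, prior_sd = 1.5, 0.25         # actuarial judgment enters here

def log_post(mu):
    lp = -0.5 * ((mu - prior_mu) / prior_sd) ** 2           # log prior (up to a constant)
    lp += sum(-0.5 * ((x - mu) / sigma) ** 2 for x in obs)  # log likelihood
    return lp

draws, mu = [], 1.5
for i in range(20000):
    prop = mu + random.gauss(0, 0.02)                  # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(mu):
        mu = prop                                      # accept
    if i >= 5000:                                      # discard burn-in
        draws.append(mu)

draws.sort()
print("posterior median:", round(draws[len(draws) // 2], 3))
print("95th percentile:", round(draws[int(0.95 * len(draws))], 3))
```

In practice one would use an established MCMC engine such as Stan rather than hand-rolled code, but the posterior draws play exactly the role described above: they become the unpaid loss distribution shared with clients.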

A sample of the types of analytical problems to be addressed includes:

  • How to use actuarial judgment in setting the prior distribution parameters
  • Estimating loss cost trends
  • Accounting for the effect of claim department changes on the forecast
  • Accounting for the effect of changes in underwriting on the forecast
  • Applying least-squares-type credibility weighting within a regression-type equation
  • Estimates for triangles with sporadic zero payments at either early or late development periods
  • Model averaging
  • Evaluating the model for correlation effects (if any)

The total company reserve distribution is needed to give management a view of reserve risk relative to surplus, and producing it requires:

  • Estimating the effect of correlation between lines in order to estimate the total reserve distribution
  • Incorporating Economic Scenario Generator (ESG) results for inflation variability in reserve forecasts
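The combined effect of line-to-line correlation and an inflation driver on the total reserve distribution can be sketched with a small Monte Carlo exercise. In the sketch below a single normal inflation draw, standing in for an ESG feed, is shared by two lines; every number is invented for the illustration:

```python
import random

random.seed(7)

# Two lines whose unpaid losses both respond to the same simulated inflation
# draw (a stand-in for an ESG feed).  The shared driver induces positive
# correlation between the lines, widening the total-reserve distribution.
n_sims = 20000
totals = []
for _ in range(n_sims):
    inflation = random.gauss(0.03, 0.02)          # one economy-wide draw per scenario
    line_a = random.gauss(100.0, 10.0) * (1 + inflation) ** 2   # shorter-tailed line
    line_b = random.gauss(60.0, 8.0) * (1 + inflation) ** 3     # longer-tailed line
    totals.append(line_a + line_b)

totals.sort()
mean = sum(totals) / n_sims
var99 = totals[int(0.99 * n_sims)]                # 99th-percentile reserve need
print(f"mean total: {mean:.1f}, 99th percentile: {var99:.1f}")
```

Because both lines respond to the same inflation draw, their simulated totals are positively correlated, so the tail of the combined distribution is wider than it would be under independence.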

Proposed Exam 8: Advanced Direct Ratemaking

The proposed Exam 8 would:

  • Move Reinsurance and Catastrophe pricing (Section C of Exam 8 today) to proposed Exam 9
  • Retain Classification, Excess, Deductible, and Individual Risk Rating and update the material to reflect current, commonly available modeling techniques available via open-source software
  • Add state level ratemaking utilizing current, commonly available modeling techniques available via open-source software
  • Add Issue Rate and Retention Models utilizing current, commonly available modeling techniques via open-source software

The modeling problems vary widely by topic, which means the modeling tools used in the Study Note examples will likely differ by topic as well. A few comments on the likely tools for each topic seem useful.

Classification would cover both the segmentation of the business and the estimation of rating factors on a countrywide basis. Machine learning tools provide an efficient means to identify potential segmentation plans. Generalized Linear Models have been the workhorse for class plan modeling and are accepted as a reliable tool, which means the examples should include their application. Often, the problem a modeler faces when building a GLM is less about finding additional rating classifications than about slimming a rating plan down to a manageable number of factors, which means using tools like penalized regression (a form of credibility weighting) or principal components analysis to reduce the number of variables to a manageable level. Generalized Additive Models may be used to smooth rating plan factors.
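As a small illustration of the penalized regression idea mentioned above: in the special case of an orthonormal design, the lasso estimate is simply a soft-threshold of the unpenalized coefficients, which shows directly how the penalty zeroes out weak rating factors. The factor names and coefficients below are hypothetical:

```python
# Minimal sketch of how an L1 (lasso) penalty trims a rating plan.  With an
# orthonormal design the lasso solution is a soft-threshold of the ordinary
# coefficients; weak factors are shrunk to exactly zero and drop out.
def soft_threshold(beta, lam):
    """Lasso solution for one coefficient under an orthonormal design."""
    if beta > lam:
        return beta - lam
    if beta < -lam:
        return beta + lam
    return 0.0

# Hypothetical unpenalized coefficients for candidate rating factors.
unpenalized = {"age": 0.42, "territory": 0.31, "vehicle_use": 0.05,
               "prior_claims": 0.58, "marital": -0.03}
lam = 0.10                                # penalty strength (chosen by validation)
penalized = {k: soft_threshold(b, lam) for k, b in unpenalized.items()}
kept = [k for k, b in penalized.items() if b != 0.0]
print("factors retained:", kept)
```

Real rating data is far from orthonormal, so in practice the lasso is fit by coordinate descent, but the shrink-and-drop behavior shown here is the same.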

State level ratemaking would cover the rate level indication process by coverage, territory relativities and territory definitions, as well as adapting the countrywide rating plan to any unique state level changes required by insurance regulations. Given the smaller data sets typically associated with state level ratemaking, this would mean employing Bayesian MCMC to link regression models with credibility weighting when arriving at base rate level changes or territory relativities.
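Setting the MCMC machinery aside, the credibility idea underneath can be illustrated with a classical Bühlmann-style weight; in a Bayesian model the same shrinkage toward the countrywide level emerges from the prior. All inputs below are hypothetical:

```python
# Buhlmann-style credibility sketch: blend a thin state indication with the
# countrywide rate.  Z = n / (n + k), where k is the ratio of expected process
# variance to the variance of true state means.  All inputs are hypothetical.
def credibility_weighted_rate(state_rate, countrywide_rate, n_claims, k):
    z = n_claims / (n_claims + k)                 # credibility assigned to the state
    return z * state_rate + (1 - z) * countrywide_rate, z

blended, z = credibility_weighted_rate(
    state_rate=520.0, countrywide_rate=480.0, n_claims=300, k=700)
print(f"Z = {z:.2f}, blended rate = {blended:.1f}")
```

The Bayesian MCMC approach generalizes this: the prior plays the role of the countrywide complement, and the credibility weight falls out of the posterior rather than being set by a separate formula.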

Clients will typically ask what the effect on written premium will be when implementing a new class plan or changing state level rates. Logistic regression models, from the Generalized Linear Modeling family, are commonly used for the task of creating issue rate or retention models.
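To make the client question concrete, here is a sketch of how a fitted logistic retention model (coefficients assumed, not estimated from real data) feeds an estimate of the written premium impact of a rate change:

```python
import math

# Sketch of how a fitted logistic retention model feeds a premium-impact
# estimate.  The coefficients and the book of business are illustrative only.
b0, b1 = 2.0, -8.0        # intercept and rate-change slope (hypothetical fit)

def renewal_prob(rate_change):
    """Logistic model: renewal probability falls as the rate change grows."""
    return 1 / (1 + math.exp(-(b0 + b1 * rate_change)))

# Hypothetical in-force book: (current premium, proposed rate change)
book = [(1200.0, 0.05), (800.0, 0.12), (1500.0, -0.02), (950.0, 0.20)]

expected_premium = sum(prem * (1 + chg) * renewal_prob(chg)
                       for prem, chg in book)
current_premium = sum(prem for prem, _ in book)
print(f"in-force premium {current_premium:.0f}, "
      f"expected written premium after change {expected_premium:.0f}")
```

A production model would of course carry many more covariates (tenure, channel, competitor position), but the chain from fitted probabilities to an expected premium figure for the client is the same.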

Excess and deductible pricing would cover calculating adjustments to filed rates to accommodate the terms applicable to a given insured. Given that we would increase the material on distribution applications on the MAS exams, we may be able to reduce the syllabus space devoted to excess and deductible pricing. We would still need practical examples in a Study Note.

Individual Risk Rating would include both experience rating and estimating the parameters for a retrospective rating plan. Developing an experience rating plan in essence means developing a large credibility weighting table, which is well suited to a Bayesian MCMC approach. One of the early issues to resolve in setting the learning objectives for this exam would be how many companies still use Individual Risk Rating tools like Table L or Table M, and what level of coverage is appropriate for those rating devices. Creating a Table L or M means building an aggregate loss distribution, one of the topics covered in Probability Models under proposed Exam 6, so we may be able to limit coverage to an example or two in a Study Note that references the material from proposed Exam 6.
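As a reminder of the aggregate loss distribution machinery behind a Table M-style insurance charge, here is a Monte Carlo sketch with a Poisson claim count and lognormal severities; the parameters and entry ratio are invented for the illustration:

```python
import math
import random

random.seed(3)

# Monte Carlo sketch of an aggregate loss distribution, the building block
# behind a Table M-style insurance-charge calculation.  Poisson frequency,
# lognormal severity, invented parameters.
lam = 4.0                       # expected claim count per risk
mu, sigma = 9.0, 1.2            # lognormal severity parameters

def poisson(lam):
    """Knuth's method: count draws until the running product falls below e^-lam."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

agg = []
for _ in range(20000):
    n = poisson(lam)
    agg.append(sum(random.lognormvariate(mu, sigma) for _ in range(n)))

agg.sort()
mean = sum(agg) / len(agg)
entry_ratio = 1.5               # aggregate limit as a multiple of expected losses
# Insurance charge: expected losses above the limit, expressed relative to the mean.
charge = sum(max(x - entry_ratio * mean, 0) for x in agg) / len(agg) / mean
print(f"mean aggregate loss {mean:,.0f}, insurance charge at r=1.5: {charge:.3f}")
```

Evaluating the charge over a grid of entry ratios is precisely what a Table M tabulates, which is why coverage here could lean on the proposed Exam 6 material.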

Proposed Exam 9: Risk Management

Proposed Exam 9 would:

  • Drop the current Finance topics (Sections A & B of today's Exam 9)
  • Include Enterprise Risk Management (from Section C of today's Exam 7)
  • Include Insurance Valuation (from Section B of today's Exam 7)
  • Include Reinsurance and Catastrophe pricing (from Section C of today's Exam 8)
  • Retain Economic Capital topics (Sections C.5-C.9 of today's Exam 9)
  • Retain Rate of Return, Risk Loads and Contingency Provisions (Section D of today's Exam 9)

The finance topics on the current Exam 9 match up with the current Exam 3/IFM, which is going away, and would be dropped from the proposed Exam 9. The focus of this exam is Insurance Risk Management.

The topics listed above are intertwined when measuring a company's risk exposure, so having them on one exam is useful. Economic Capital metrics like Value at Risk are an output of an Enterprise Risk model. Economic Capital models use a distribution of cash flows linked to interest rate and inflation forecasts, which is a part of Insurance Valuation. Rate of Return, Risk Loads and Contingency Provisions can rely on economic capital allocated from an Enterprise Risk model. Catastrophe model results are a key input to Enterprise Risk models. One use of Enterprise Risk models is to estimate the value of different reinsurance programs.

While Reinsurance and Catastrophe pricing are included in the list above, the level of coverage for those topics would not be as extensive as for the direct pricing topics on Exam 8. The coverage would be more in the nature of an introduction, providing enough background on the characteristics of reinsurance and catastrophe modeling that the inputs into Enterprise Risk Management modeling are not a "black box."

I recommend that we use the existing readings from the current set of exams for the proposed Exam 9 readings.

Testing Format

I propose that we return to the practice of releasing multiple-choice questions, answer keys and individual candidate test responses to candidates, and that we move to a multiple-choice testing format for all current exams over a two-year period. This would mean the new advanced reserving and ratemaking exams would come on stream using multiple-choice test questions.

Releasing Test Questions and Answer Key

For exams with the multiple-choice testing format, releasing the test questions, the answer key and candidates' test answers is a viable option.

There is a cost to releasing the test questions. The exam committee would need to produce more questions, and the exam question challenge period would likely need to be longer, since candidates will be in a better position to write effective challenges with the questions, the answer key and their own answers in front of them.

There are material benefits to the candidates under this option. We can do the following for the candidates:

  • Send each candidate a file containing their answers which will allow them to do a self-assessment should they fail.
  • Enable challenges on the answers and questions for each test in a reasonable period and avoid having the candidate use allotted exam time to compose challenges.
  • Provide a steady stream of new questions for the candidates to use when studying for future exams.
  • Provide concrete assurance to the candidate population that they are getting a “square deal.” The questions and answers would both be open to public scrutiny and challenge.
  • Recognize that past test questions for the advanced ratemaking and reserving exams may no longer be helpful to candidates studying for the new exams once we update the syllabus and continue to do so on a regular basis.

The cost/benefit issue merits careful consideration by the CAS members.

Advantages of Moving to a Multiple-Choice Testing Format

There are advantages both to the candidates and to our exam volunteers in moving to a multiple-choice testing format:

  • It makes it practical to give candidates the answers they gave during the exam, which, combined with the questions and the answer key, is an invaluable diagnostic tool for candidates who failed and need to see where to improve.
  • Candidates could write challenges after they have finished the exam, when they have the time and material to formulate an effective challenge.
  • It reduces the demand on volunteer grading time for all exams.
  • It recognizes that if we update our syllabus to reflect current predictive analytics applications on the advanced reserving and ratemaking exams, the pool of potential grading volunteers will shrink noticeably.
  • Machine grading produces the raw individual candidate scores, making it possible to give candidates, the Exam Committee and senior exam officers the raw scores within a day of the exam window closing.
  • The level of knowledge and skill required to pass should be consistent over time, with the passing percentage floating (we would not grade on a curve), which is easier when we avoid open-ended questions.
  • The evaluation of candidate answers should be consistent across candidates.
  • The scoring of candidate answers should be objective (there is a correct answer).
  • It lends itself to testing a representative sample of the exam material, with the understanding that the sample will vary from one sitting to the next given the volume of material.
  • It is easier to write questions with a range of difficulty, which avoids having many candidates with scores on or near the cut score that determines passing.

Our constructed response graders work hard to meet the timeliness and consistency requirements, but given the candidate volume we face today that is a challenging task, and it will become more difficult as we start to update the syllabus on a regular basis. Our volunteers work hard to be consistent and objective in awarding partial credit, but as the candidate numbers for an exam approach a thousand that is a heavy lift.

The multiple-choice format makes it easier to adapt to new predictive analytics material. One can safely train volunteers who are not experienced in a given topic to write test questions, since all new volunteer test questions are reviewed for accuracy, clarity, and relationship to the syllabus material by a wide group of volunteers. Grading constructed response questions is a different matter, since to evaluate answers and award partial credit fairly one must have some experience applying the technique in question. The volunteer time constraints we face today will become more difficult going forward, and the use of the multiple-choice format will help us.

The multiple-choice format does not lend itself to open-ended questions, which means there will be a definitive answer for each question on the answer key distributed to candidates.

It is easier to get a representative sample of the syllabus material using many small-point-value questions, as we do on the multiple-choice exams, than on constructed response exams, which tend to have relatively few but high-point-value questions.

We have used the multiple-choice exam format for several years on the MAS exams and have been able to test complex topics. There have been many comments on the MAS exams, but I do not believe anyone has said the questions are too simplistic.

We have written multiple-choice questions for years without the constraints of Bloom's Taxonomy, which has given us questions with a wide range of difficulty, which in turn spreads out candidate scores. Passing or failing a large percentage of candidates who are close to either side of the selected cut score is problematic. We have not had the same problems on the multiple-choice exams as those encountered on the constructed response exams that led to the implementation of Bloom's Taxonomy there.

Adapting to the Multiple-Choice Testing Format

One should note that bringing current predictive analytics into the proposed advanced reserving and ratemaking exams would require some changes even if we were to retain the constructed response format. It will, for the most part, no longer be possible to ask questions that require candidates to do the calculations leading to their answers in the CBT spreadsheet format for the advanced reserving and ratemaking exams.

We can adapt to testing integrated problems by including case studies that display the results of different models as the basis for integrated test questions, with the candidate asked to find the relevant test statistics, review graphs of the results and answer questions on the results. We can ask questions on how to set up a model to meet a given goal for a specific modeling technique. We can use triple true-false questions to evaluate how well a candidate understands the underlying functioning of a given technique and when it can be used safely. This is similar to how MAS II has operated. There may be a few limited areas where we can ask the candidate to calculate an item, but techniques like Gradient Boosting or Lasso Regression don't lend themselves to calculations in a spreadsheet.

For numerical questions on other exams, one can break a given process down into steps when setting up the problem, give candidates the information up to a given step and ask them to calculate the next step. Generally, since one cannot give partial credit, questions have to be set up to minimize the calculations the candidate must perform. One can test the same concepts as in the past, but how one frames the question will need to change.

I would move to a multiple-choice testing format for all exams over a two-year period to give item writers a chance to adapt, since writing valid multiple-choice test questions is an acquired skill. Converting all exams overnight would be a high-risk undertaking. I would suggest starting with Exam 6, since the content of the Exam 6 syllabus will likely show only minimal changes, which means those test questions, if released, would be useful to candidates studying for future exams.


The Society of Actuaries' change to its syllabus is a potential benefit to us. Moving some of the material on the current MAS I exam to replace Exam 3/IFM means we can move modeling material to an earlier exam. Candidates pick up valuable skills earlier than they would otherwise, and space is freed up on the MAS exams to cover additional material. I believe that we should allow at least a year to find the new study material on machine learning topics and write new questions for the revised MAS exams, which would mean implementation would start in 2024 at the earliest. We could implement the proposed Exam 3 option starting in the Spring of 2023 using selected portions of the current MAS I exam.

Altering the advanced reserving and ratemaking exams to make use of the modeling material covered on the earlier exams and of commonly available open-source modeling software will be a material change that will take time and resources (both volunteer and CAS staff) to implement. The lead time required to work out the details of the Learning Objectives and Knowledge Statements for those exams, put out a request for proposals and train item writers is substantial – at least two years.

Scaling back material like link ratios may be controversial, yet we need to adapt to changing times. At one point there was no commonly available alternative to link ratios, but that has changed, and there are other long-standing approaches to ratemaking and reserving problems that it may be time to update as well. In earlier times we had to go our own way and devise methods that did not use the statisticians' tools, since those tools were based on assumptions our data violated. Today, though, the tool kit produced by the work of statisticians, data scientists and computer scientists has expanded and can meet our needs, if we have the training to use those tools safely. Moving from devising our own techniques to solve reserving and ratemaking problems to a mindset where we learn which tool best matches a particular task, and how to adapt tools created by other disciplines to do our work, will require a change in outlook and training.

The scope of the proposed changes to our syllabus and exam process is substantial and may be difficult to accept. On the other hand, we have done little in recent years to adapt our syllabus to a rapidly changing applied modeling environment, which means we are playing catch-up. The MAS exams came online in 2018, and the syllabus for those exams was largely locked down by the end of 2016. For the most part, our other exams have been static for longer than the MAS exams.

Moving to multiple-choice as the universal exam format may be controversial as well; it is not the system that we all came up through to achieve our designations. I would ask you to consider, though, how practical it is to continue with the constructed response format. A good part of the material on the later exams has remained static for years, which means we have a large pool of potential volunteers familiar with that material, which in turn makes grading the constructed response exams possible. Once we start to update the syllabus, that pool of potential graders will likely shrink.

One open question from writing this proposal is how the regulatory authorities will respond. Today, the National Association of Insurance Commissioners (NAIC) requires our current Exam 7 plus an ACAS designation to sign a reserve opinion. My belief is that this would continue under the proposal in this article (substituting the proposed Exam 7 for the current Exam 7), with the likely addition that to sign a rate filing an actuary would need the ACAS plus the proposed Exam 8 on Advanced Ratemaking.

Some of you may recall filling out surveys related to your work as part of a Job Task Analysis (JTA) that the CAS launched in 2020 with the intent of making it the primary source driving our syllabus and exam questions. The JTA is a commonly used tool for setting credentialing standards using information gathered from members of a profession or trade, which makes it appealing when talking to either the NAIC or employers about our syllabus and exam process. Unfortunately, that tool assumes a static environment that does not match the world we live in today, and the results were not useful. I would not recommend using it as the primary source for changing our syllabus again, which is part of the motivation for writing this article.

We do need a coherent story for the NAIC and employers that explains why our syllabus and exam process meet their needs. I suggest that we develop some broad operating principles on what we want the syllabus and exam process to accomplish, which would serve as the foundation for that story. Feedback to our leadership group on this article could be the starting point for those operating principles. We would then have an exposure and comment period on the operating principles, similar to the process the American Academy of Actuaries uses when creating a new Actuarial Standard of Practice.

Going back to the start of the article: my hope is that this will generate a conversation among our members. I also hope it will prompt a response from the CAS on the plan going forward, in time for the membership to weigh in before that plan is enacted. Changing our syllabus and exam process is a major investment, and some dialogue on the direction we are taking before making that investment seems essential.