Until recently, the attitude of much of the U.S. property-casualty insurance industry towards floods has been straightforward: Everything is insurable for the right price — except floods, which are uninsurable.
Floods were considered uninsurable for good reason. There are adverse selection problems: The only people who want flood insurance are precisely the people most likely to suffer floods. There are complexity problems: Floods come in many forms, including storm surge, river flooding, levee failure, ice jams, dam failures and volcanic mudflows, each with its own risk profile. There are cost problems: The premiums needed to compensate for the risks involved would be unaffordable. And, of course, there are risk accumulation problems: Even high premiums probably wouldn’t cover the losses after a catastrophic flood event.
The reasons above illustrate the need for the U.S. federal government’s National Flood Insurance Program (NFIP), begun in the 1960s. But times are changing fast. The private flood insurance market reportedly grew by over 50 percent in 2017. The private market now accounts for 15 percent of all flood written premium in the U.S. Clearly, many insurers now believe flood risk can be insured.
During the CAS Annual Meeting General Session on private flood insurance, Matt Chamberlain, FCAS, principal at Milliman, argued that at least four forces are pushing insurers into the flood market:
- Recent catastrophic events and legislation are encouraging private flood insurance development.
- Reinsurer market capacity is increasing, which means that reinsurers need a place to deploy capital.
- Consumer demand is growing, especially as populations continue to grow in flood-prone regions such as Florida.
- Flood risk and catastrophe models are improving, allowing for custom flood rating plans.
The last point is particularly important. Insurers can’t provide flood insurance if they can’t determine how much it should cost. James R. Watje, senior vice president at Wright Flood and panelist during the session, remarked, “Flood isn’t uninsurable; it’s actually the pricing that’s hard.” But pricing is getting less hard, if not easy, thanks to improved (and ever-improving) modeling technology.
That being said, Chamberlain cautioned that catastrophe modeling is still in its infancy. Because there are substantial differences between the models currently available, Chamberlain stressed the importance of understanding why different models may give an actuary different estimates.
When looking at a model, “Open the hood,” he said, and ask questions. For example, model estimates need to be assessed for whether they’re valid on both an aggregate and a location-specific basis. On a location-specific level, are there risk discontinuities between jurisdictions, and do those discontinuities make sense?
For a specific risk, if a model spits out an average annual loss (AAL) of, say, zero, is the AAL actually zero or does the model simply not have enough events to assess the risk? If three different models give three very different AALs, which AAL makes the most logical sense? Does it make sense for risks close together to have wildly different AALs?
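Those sanity checks lend themselves to simple automation. The sketch below flags the two situations Chamberlain described: an AAL of zero that may just reflect a thin event set, and models that disagree wildly on the same risk. The model names, figures and disagreement threshold are illustrative assumptions, not actual catastrophe model output.

```python
def aal_flags(aals, rel_spread_limit=1.0):
    """Flag questionable AAL estimates for a single risk.

    aals: dict mapping model name -> average annual loss estimate.
    rel_spread_limit: max tolerated relative spread between nonzero AALs
    (1.0 means the largest estimate may be up to double the smallest).
    Returns a list of warning strings (empty if nothing looks off).
    """
    flags = []
    # A zero AAL may mean "no risk," or it may mean the model's event
    # set simply has no events hitting this location.
    for model, aal in aals.items():
        if aal == 0:
            flags.append(f"{model}: AAL of zero; true absence of risk, "
                         "or too few events to assess it?")
    nonzero = [v for v in aals.values() if v > 0]
    if len(nonzero) >= 2:
        spread = (max(nonzero) - min(nonzero)) / min(nonzero)
        if spread > rel_spread_limit:
            flags.append(f"models disagree by {spread:.0%}; investigate "
                         "before relying on any single estimate")
    return flags

# One coastal risk, three hypothetical models (figures are made up):
flags = aal_flags({"ModelA": 420.0, "ModelB": 55.0, "ModelC": 0.0})
```

For this example, both checks fire: one model returns zero and the nonzero estimates differ by several hundred percent, exactly the kind of result that calls for exogenous information.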
“Look at these things before you use the model. It’s important to bring in exogenous information to figure out if the model’s results are logical,” said Chamberlain.
Chamberlain also noted that actuaries should be aware of reinsurance costs. These costs may depend on what particular model the reinsurer — not the insurer — is using. That’s important to know, since the reinsurance cost will be calibrated to the reinsurer’s model regardless of what the insurer’s actuary may think the actual expected losses are.
Most of these issues also boil down to data quality. And indeed, access to clean, robust data remains a significant challenge. Watje pointed out that the private flood market does not yet have much data from past losses. The NFIP has data, but it can be difficult to obtain.
Chamberlain noted that there are ways for actuaries to compensate for sparse data. One technique is to create a “market basket,” a portfolio of hypothetical risks with a realistic distribution. This can give an actuary the ability to analyze regions with little to no current data.
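A market basket along those lines might be built as follows. The field names, geographic bounds and distributions here are hypothetical placeholders, not the portfolio construction actually used in practice; the point is only that a reproducible synthetic portfolio can stand in for sparse real data.

```python
import random

def build_market_basket(n_risks, seed=42):
    """Build a portfolio of hypothetical flood risks.

    A fixed seed keeps the basket reproducible, so repeated model runs
    are comparing models rather than sampling noise.
    """
    rng = random.Random(seed)
    basket = []
    for i in range(n_risks):
        basket.append({
            "risk_id": i,
            # Rough Florida-panhandle bounding box, purely illustrative.
            "lat": rng.uniform(29.5, 30.9),
            "lon": rng.uniform(-87.5, -84.0),
            "elevation_ft": rng.uniform(0.0, 120.0),
            "coverage": rng.choice([150_000, 250_000, 400_000]),
        })
    return basket

basket = build_market_basket(1_000)
# Each hypothetical risk can now be run through one or more catastrophe
# models to study how indicated rates behave in a data-sparse region.
```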
Another technique is to leverage geographic information systems (GIS) to enrich a model’s data. GIS data include geographic characteristics that correlate with flood risk, such as elevation, distance to bodies of water, size of bodies of water, hydrological features and flood protection infrastructure.
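One such GIS-derived feature, distance to the nearest body of water, can be computed directly from coordinates. The sketch below uses the standard haversine great-circle formula; the water-body coordinates are invented for illustration, not drawn from any real hydrological dataset.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in statute miles."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def distance_to_nearest_water(risk, water_bodies):
    """Smallest distance from a risk to any listed water body."""
    return min(haversine_miles(risk["lat"], risk["lon"], w["lat"], w["lon"])
               for w in water_bodies)

# Hypothetical water bodies and one insured risk:
water = [{"lat": 30.40, "lon": -86.60}, {"lat": 30.10, "lon": -85.70}]
risk = {"lat": 30.38, "lon": -86.55}
risk["miles_to_water"] = distance_to_nearest_water(risk, water)
```

The enriched record can then feed a rating plan or serve as a predictor when calibrating model output against known flood outcomes.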
Given these constraints — the limitations of catastrophe models and sparse datasets — how can actuaries begin developing rates for flood risks? Chamberlain identified four possible approaches:
- NFIP clone: rates and territories that follow those of NFIP.
- Refined rating plan: a complete rating plan that reflects characteristics related to flood risks within a territory.
- Grid rating plan: pre-compiled rating, with grids based on latitude and longitude; additional rating factors are employed.
- Risk-level modeling: using a catastrophe model to determine AALs for every risk, which are then loaded for expenses.
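The mechanics of the grid rating approach can be sketched briefly: coordinates snap to a pre-compiled cell with a base rate, and additional rating factors adjust the result. Grid resolution, rates and factors below are illustrative assumptions, not figures from any actual plan.

```python
GRID_STEP = 0.1  # degrees per grid cell (assumed resolution)

def grid_cell(lat, lon):
    """Snap coordinates to a coarse latitude/longitude grid key."""
    return (int(lat // GRID_STEP), int(lon // GRID_STEP))

# Pre-compiled base rates per $1,000 of coverage (hypothetical figures).
BASE_RATES = {
    grid_cell(30.38, -86.55): 4.10,  # coastal cell
    grid_cell(30.90, -86.55): 1.25,  # inland cell
}

# An additional rating factor beyond location (hypothetical values).
CONSTRUCTION_FACTORS = {"masonry": 0.9, "frame": 1.0}

def flood_premium(lat, lon, coverage, construction):
    """Look up the cell's base rate, then apply rating factors."""
    rate = BASE_RATES[grid_cell(lat, lon)]
    factor = CONSTRUCTION_FACTORS[construction]
    return rate * factor * coverage / 1000.0

# A $250,000 masonry home in the coastal cell:
premium = flood_premium(30.38, -86.55, 250_000, "masonry")
```

The appeal is speed at quote time: the expensive modeling happens once, when the grid is compiled, while the lookup itself is trivial.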
Each approach comes with its own advantages and disadvantages. For example, risk-level modeling is relatively easy and quick to develop, but it often requires relying on a single catastrophe model. As noted above, this may be ill-advised given the current state of catastrophe modeling. Or consider a refined rating plan: It gives the insurer control over pricing strategy and may have fewer discontinuities, but it can be costly to develop and maintain.
Fortunately, these approaches are not mutually exclusive. Chamberlain argued that they could be blended together or used separately to capitalize on the benefits of each. Either way, each insurer and its actuaries will need to determine which method (if any) aligns best with their goals and strategy.
And it’s not just private flood insurance that’s working to improve ratemaking. The NFIP is also moving to overhaul its rating methodologies to keep pace with these new modeling developments, noted Mitchell Waldner, FCAS, an actuary with the Federal Emergency Management Agency. Part of that overhaul includes better localized flood risk analysis using multiple models, including catastrophe models. More variables are also being incorporated into NFIP ratemaking, such as a risk’s relative elevation, construction type and distance to the coast.
What all the panelists made clear is that, despite current constraints, flood is most definitely an insurable risk. The ability to reliably model flood risk is a big reason that this attitude has changed. Modeling will continue to help grow private flood insurance as models and data improve.
The growth in private flood insurance could not come soon enough. In a separate CAS session on climate change, Dr. Peter Sousounis of Verisk’s AIR Worldwide and Paul Eaton, FCAS, associate director with Aon, discussed how climate change is increasing the frequency and severity of extreme weather events, including flood-producing events such as hurricanes. To compound matters, U.S. population shifts have largely been toward regions at greater risk of extreme weather, especially the Southeast. Both trends will increase consumer demand for flood insurance and other risk management solutions.
Lucian McMahon, CPCU, ARM-E, AU-M, is a senior research specialist at the Insurance Information Institute in New York City.