Fireside Chat—Rare Events and Insurance
Day 2 of the workshop began with a conversation between Michele Wucker and Markus Gesmann, co-founder of Insurance Capital Markets Research, which provides quantitative research on the global specialty reinsurance industry for insurance carriers, intermediaries, and investors. Wucker started the conversation by asking Gesmann to talk about the types of events on which reinsurance focuses. He first explained that specialty insurance is bespoke insurance for the risks for which a traditional insurance company does not have a standardized product, such as for a new solar park. Reinsurance is insurance for insurers to protect them against peak risks, such as earthquakes, wind storms, and other rare events. Both specialty insurance and reinsurance deal with rare events and risks for which there is little or no historical data.
Wucker commented that Gesmann’s work involves the predictions insurers have to make, such as the probability that a certain event will occur and what its impact might be, or even what the effect will be on the particular companies that a client firm insures versus all insured parties in the same market. His work is also relevant to the capital markets because it helps bridge the gap between how insurance companies assess risk and pricing and how the markets do. With that as context, she asked Gesmann how the reinsurance and specialty insurance industries address situations for which little or no historical information is available.
One approach, said Gesmann, is to find analogies. He pointed out that Lloyd’s of London was the first to insure a car, and it treated the car like a ship that navigated on land. Another approach, used for insuring a portfolio of properties against windstorms, is to let the historical record of storm tracks over the past century inform the risk assessment. He then noted that when the first commercial airline flights were happening, someone came up with the idea of offering insurance against mid-air collisions. Although there was no historical data to go on, because such an accident had not yet occurred, a simple Bayesian model estimated the probability of a mid-air collision at around 14 percent in any one year, which might seem high. In fact, however, there were 11 such incidents with more than 100 fatalities between 1955 and 2015, which shows that 14 percent was not a bad predictor after all: 11 events over 60 years equals roughly 18 percent. The analogy he used was the lottery: the probability that any one person will win is remote, yet almost every weekend, someone wins. To set a price for such insurance, the insurer then needs to estimate the probability that one of the planes it insures will collide with another plane, based on its market share.
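The chat does not specify the model behind the 14 percent figure, but the reasoning can be sketched with a conjugate Beta-Binomial update, treating each year as a trial in which a major collision either does or does not occur. The Beta(1, 6) prior below is purely an illustrative assumption chosen to reproduce a roughly 14 percent starting estimate, and counting the 11 incidents as 11 event-years is likewise a simplification:

```python
# Illustrative Beta-Binomial sketch (not the actual model from the chat).
# Each year is treated as a Bernoulli trial: "major mid-air collision" or not.

def posterior_mean(alpha, beta, events, years):
    """Posterior mean of the annual event probability under a
    Beta(alpha, beta) prior after observing `events` event-years
    out of `years` years (conjugate Beta-Binomial update)."""
    return (alpha + events) / (alpha + beta + years)

# Before any data, a Beta(1, 6)-style prior (an assumption for
# illustration) encodes a point estimate of 1/7, about 14 percent:
prior_estimate = posterior_mean(1, 6, events=0, years=0)   # ~0.14

# Updating with 11 event-years over the 60 years from 1955 to 2015
# pulls the estimate toward the observed 11/60, about 18 percent:
updated = posterior_mean(1, 6, events=11, years=60)        # ~0.18
print(f"prior {prior_estimate:.2f}, posterior {updated:.2f}")
```

The point of the sketch is the one Gesmann makes in the text: a sensible prior gets you close even with no data, and the estimate moves only modestly once 60 years of observations arrive.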
Gesmann explained the concept of risk sharing with the example of a terrorist attack on New York that destroys everything in a particular location. In that case, no one insurer would be able to cover the damage, so the industry practice is to syndicate the risk across many insurers spanning the globe. That led Wucker to prompt Gesmann
to explain the logic behind terrorism insurance and how the industry thinks when there has been a lull between attacks or immediately after a big attack. The short answer, he replied, is that the perceived risk after a major event is often much higher than the risk as the insurer assesses it, which creates an opportunity to charge more for such a policy. As a case in point, the World Trade Center attack was a disaster for the insurance industry, and many insurers retreated from the market. However, other companies felt prices would rise dramatically and that they would make a great deal of money if they could tolerate the risk. The key here, said Gesmann, is to build resilience, which can be accomplished by accumulating cash, for example, or by diversifying the insurance portfolio and testing that diversification with portfolio modeling.
Wucker divulged that a few years ago, a friend of hers who worked on modeling virus outbreaks was talking with reinsurance companies about developing pandemic insurance. The companies looked into this quite exhaustively and not a single one decided to offer such a product, which proved to be a good thing for the insurers given what has happened since. Today, however, as Gesmann pointed out, the industry is looking into offering such a product because the perception of risk is high enough that insurers could charge a price that aligns with their risk models. Similarly, 30 years ago, a catastrophe modeler named Karen Clark was talking to insurers about the huge risk that a major hurricane would tear through Florida. Although they ignored her warning, Hurricane Andrew taught those insurers a lesson and completely transformed how the industry looks at natural catastrophes and risk and how it uses models to better understand that risk.
Wucker prompted Gesmann to explain how insurance companies factor near misses into their calculations, given that they only learn about incidents that occur, not about how many times a policyholder was lucky. This is an issue for insurers, he replied. With hurricanes, the industry constantly recalculates risks following each near miss, using updated data on storm tracks. The auto insurance industry is benefiting in this regard from the increased use of telematics, the devices and apps that record a driver’s behavior and can provide data about near misses. In a similar manner, he predicted that insurers would deploy Internet of Things devices on container ships to gather more information about how many storms a ship narrowly missed that could have caused severe damage. Being able to monitor behavior and the conditions in which it occurs will be transformative for the industry, Gesmann said.
He then told a story from 20 years ago, when the Russian Mir space station was brought down into the Pacific Ocean. As part of an advertising campaign, an American fast-food chain marked a 40-foot-by-40-foot target in the Pacific Ocean off Australia and announced that everyone in the United States would get a free coupon if Mir landed inside the target area. Because the cost of providing free food at that scale would be huge, the company sought to insure itself. Allegedly, the insurance firm it engaged decided that the probability of this happening was so low that it did not matter how the policy was priced, but because the fast-food chain perceived the risk as high, the insurer charged as much as it could. Similarly, David Aldous wondered how much Berkshire Hathaway charged to insure the billion-dollar payout offered to anyone who correctly predicted every result of college basketball’s championship tournament. Gesmann replied that he would look up the financial statements of the company offering that prize and determine how much it could afford. “That is how much I would charge,” he said.
Mel Eulau, a senior program officer at the National Academies, asked if scenario-based mitigation strategies affect risk assessment for extreme events for which little or no data are available. Yes, replied Gesmann, because such an approach helps insurers limit their exposure to such events and helps set prices for such insurance in relation to the number of companies willing to accept a share of the total risk.
One challenge the insurance industry faces today arises from global climate change. Insurers have long relied on the huge historical weather record for their actuarial calculations, but as the past couple of years have demonstrated, the nature and frequency of extreme weather events are changing, casting doubt on the value of model outputs. A similar situation is developing in the auto insurance industry with the advent of self-driving cars. Gesmann said that in these cases the industry assesses how credible the historical information still is, using Bayesian modeling techniques that can provide insights into how such changes affect a model’s performance. He then took a moment to explain that Bayesian logic is just a formal approach to weighing the importance of different pieces of information and different assumptions to an outcome. For anyone interested in Bayesian concepts, Gesmann recommended two books: Thinking, Fast and Slow1 by Daniel Kahneman, as a good introduction to Bayesian concepts, and the more advanced Statistical Rethinking2 by Richard McElreath for learning to apply Bayesian concepts and tools.

1 D. Kahneman, 2011, Thinking, Fast and Slow, Farrar, Straus, and Giroux, New York.
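The idea of down-weighting historical data that has become less credible can be sketched with a textbook precision-weighted (conjugate normal) combination of a historical estimate and recent evidence. All numbers below are hypothetical, chosen only to illustrate the mechanism Gesmann describes:

```python
def combine(mean_hist, var_hist, mean_new, var_new):
    """Precision-weighted combination of a historical estimate and new
    evidence (conjugate normal update). Lower variance means more
    credibility and hence more weight. Illustrative sketch only."""
    w_hist = 1.0 / var_hist
    w_new = 1.0 / var_new
    post_mean = (w_hist * mean_hist + w_new * mean_new) / (w_hist + w_new)
    post_var = 1.0 / (w_hist + w_new)
    return post_mean, post_var

# Hypothetical numbers: a century of storm data suggests an average
# annual loss index of 100; recent, noisier data suggests 130.
stable = combine(100, var_hist=25, mean_new=130, var_new=100)

# If climate change makes the historical record less credible, we
# inflate its variance, and the estimate shifts toward the new data:
shifting = combine(100, var_hist=400, mean_new=130, var_new=100)
print(f"credible history: {stable[0]:.0f}, doubtful history: {shifting[0]:.0f}")
```

This is the "formal approach of weighing the importance of different pieces of information" in miniature: the same observation moves the answer a little or a lot depending on how much the prior is still trusted.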
Robert Axtell from George Mason University commented that human behavior is often the most important component in determining an outcome and wondered if insurers conduct after-the-fact analyses to determine whether they missed something structural or whether some aspect of human behavior was missing from a given risk model. The way insurers include human behavior in their models, said Gesmann, is to go into the field and observe the roles humans play in an event. For example, before insuring a power plant, insurers will send engineers to the plant to see how the plant is run and if the control room is neat and tidy, which provides an indication of how the company operates.
Gesmann pointed out that over the past 20 years, the insurance industry has significantly changed the way it assesses risk, in part because of the dramatic advances in computational power that have occurred during that time. In addition, modeling has evolved from a simplistic approach to a risk-based one, by which he meant that insurers now have to build models that assess risk at the 1-in-200 level and that increasingly incorporate Bayesian analysis using probabilistic programming languages such as Stan or PyMC.
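A 1-in-200 scenario corresponds to the 99.5th percentile of the annual loss distribution, which in practice is often estimated by simulation. The sketch below is a minimal Monte Carlo version using a standard frequency-severity setup (Poisson event counts, lognormal severities); every parameter value is an illustrative assumption, not an industry figure:

```python
import random

def one_in_200_loss(n_years=100_000, freq=1.5, mu=0.0, sigma=1.0, seed=42):
    """Monte Carlo sketch of a 1-in-200-year (99.5th percentile) annual
    loss under a frequency-severity model: event counts ~ Poisson(freq),
    severities ~ lognormal(mu, sigma). Parameters are assumptions."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_years):
        # Sample the Poisson event count by accumulating exponential
        # inter-arrival times until the simulated year is over.
        t, n = rng.expovariate(freq), 0
        while t < 1.0:
            n += 1
            t += rng.expovariate(freq)
        losses.append(sum(rng.lognormvariate(mu, sigma) for _ in range(n)))
    losses.sort()
    return losses[int(0.995 * n_years)]  # 99.5th percentile = 1-in-200

print(f"simulated 1-in-200 annual loss: {one_in_200_loss():.2f}")
```

Probabilistic programming languages such as Stan or PyMC go a step further than this sketch: rather than fixing `freq`, `mu`, and `sigma`, they infer full posterior distributions for those parameters from data, so the parameter uncertainty itself flows through to the 1-in-200 estimate.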
Lastly, Gesmann suggested that making predictions is one thing, but getting people to act on them and spend money to insure against them is another. No one buys insurance because they want to, only because they have to. This is a particular problem for policy makers, who must weigh spending money now against the chance of a huge disaster that may or may not occur in the future.
2 R. McElreath, 2020, Statistical Rethinking: A Bayesian Course with Examples in R and Stan, second edition, CRC Press, Boca Raton, FL.