Difficult questions raised by the 2005 hurricane season about the merits of the various types of analysis that underpinned trading decisions continue to worry senior management.

The Gulf of Mexico hurricane season in 2005 was the most expensive in history and set many unpalatable records for the insurance sector. With the passage of time, and the hardening of the market, much of the concern about the accuracy of the main catastrophe models used by the market has subsided. However, difficult questions raised by the results of the 2005 season about the merits of the various types of analysis that underpinned trading decisions continue to worry senior management.

During the last year, a considerable amount of effort has gone into adjusting the return periods used in catastrophe models to take into account the systematic changes in the patterns of windstorms over the last decade. While these are welcome enhancements, they are unlikely to be an adequate response to the underlying issues. Given that such seasons cause extensive damage to the capital base, it is critically important that such alterations to the current models are complemented by other changes in risk assessment methods which take into account the various layers of uncertainty inherent in modelling. The importance of this is highlighted in Figure 1.

The curves show the types of probability distributions of catastrophic losses projected by current models for a portfolio that is fictitious, but representative of the market. The difference between the two curves shows the sort of variation that can result from changing the underlying assumptions on return frequencies. Underwriting organisations make critical decisions, for example on reinsurance purchase, using results from such predictions in the region of the graphs where small changes in probability correspond to large variations in the associated loss figures. Put another way, alterations in the assumptions made by the models can shift the resulting probability curve in a way that produces a large change in the loss figure that will be exceeded at a given probability value. Given this sensitivity, which the short sketch after the list below illustrates, key questions that merit careful consideration are:

- What are the factors that can cause shifts of the loss probability curve?
- What are the capabilities and limitations of the current processes and the science incorporated in various models for dealing with these factors?
- Which of the capabilities of the currently available methods are best aligned with the commercial parameters that the company is measured by?
- How should the capabilities of the various techniques be used to optimise the commercial performance over the timeframes that capital providers use to assess investment opportunities?
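
To make the sensitivity described above concrete, the minimal sketch below simulates annual losses for a toy portfolio under two assumed event frequencies and reads off the loss exceeded at a given annual probability. The Poisson/lognormal form and every parameter value are illustrative assumptions, not figures drawn from any of the commercial models discussed here; the point is simply how a modest change in assumed frequency moves the tail of the curve.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_annual_losses(event_rate, n_years=100_000,
                           severity_mu=17.0, severity_sigma=1.2):
    """Simulate total annual losses for a toy portfolio.

    Event counts are Poisson(event_rate) per year; individual event losses
    are lognormal. All parameters are illustrative, not calibrated to any
    real catastrophe model.
    """
    counts = rng.poisson(event_rate, n_years)
    losses = np.zeros(n_years)
    for year, n in enumerate(counts):
        if n:
            losses[year] = rng.lognormal(severity_mu, severity_sigma, n).sum()
    return losses

def loss_at_exceedance(losses, prob):
    """Loss exceeded with annual probability `prob` (e.g. 0.01 = 1-in-100)."""
    return np.quantile(losses, 1.0 - prob)

base = simulate_annual_losses(event_rate=0.5)       # assumed long-term frequency
elevated = simulate_annual_losses(event_rate=0.65)  # ~30% higher assumed frequency

for rp in (25, 100, 250):
    p = 1.0 / rp
    print(f"1-in-{rp}: base {loss_at_exceedance(base, p)/1e6:8.1f}m, "
          f"elevated {loss_at_exceedance(elevated, p)/1e6:8.1f}m")
```

In the region of the curve that drives decisions such as reinsurance purchase, the two sets of assumptions produce materially different loss figures at the same exceedance probability, which is the kind of shift between the two curves shown in Figure 1.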


Figure 2 shows a stack of multiple factors whose cumulative influence determines the overall performance of a portfolio of risks underwritten in the Gulf of Mexico. The top layers of this stack are an illustrative, rather than a comprehensive, set of issues that can have a significant impact on the loss probability distribution curve. The colour coding in the chart gives an indication of either the level of influence that an underwriting organisation has on the nature and accuracy of a particular factor, or the precision with which it may reasonably expect to predict the impact of the layer. Factors marked in shades from orange to dark red are those likely to introduce increasing levels of uncertainty, while those in blue are ones the underwriter can fully control.

The organisation can set and record the policy details and outward reinsurance programme with complete accuracy. The level of detail available on the locations of insured assets depends more on what the market as a whole is prepared to tolerate. This issue received a considerable amount of attention following the 2005 hurricane season, resulting in a marked increase in the general demand for complete and accurate location detail. The result has been a significant improvement, but further advances are needed.

The geographical accumulation of risk can be determined with precision using aggregation models. These models can provide an accurate figure for net losses using damage matrices that define the spatial profile of damage. They have historically been used to model set scenarios and to assess losses from specific events retrospectively. They have the merit of being straightforward to use and are easy to understand and validate. An under-exploited strength of these models is that they can calculate loss scenarios across large portfolios rapidly.
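
As a rough illustration of the mechanics, the sketch below applies a hypothetical damage matrix (zone by storm-intensity band) to aggregated sums insured to produce gross scenario losses, and then nets down a simple excess-of-loss layer. The zones, damage ratios and reinsurance terms are all invented for the example; real aggregation models are richer, but the core calculation remains a fast matrix operation, which is the speed advantage referred to above.

```python
import numpy as np

# Aggregated sums insured by geographic zone (illustrative figures, $m).
zones = ["coastal", "near-inland", "inland"]
sum_insured = np.array([1_200.0, 3_500.0, 5_000.0])

# Damage matrix: rows are zones, columns are scenario intensity bands.
# Each entry is the fraction of sum insured destroyed in that zone for a
# storm of that intensity (purely illustrative numbers).
damage_matrix = np.array([
    [0.02, 0.08, 0.25, 0.45],   # coastal
    [0.01, 0.04, 0.12, 0.25],   # near-inland
    [0.00, 0.01, 0.04, 0.10],   # inland
])

def scenario_gross_loss(intensity_band):
    """Gross loss for a set scenario: damage ratio per zone times exposure."""
    return float(damage_matrix[:, intensity_band] @ sum_insured)

def net_loss(gross, retention=150.0, limit=800.0):
    """Apply a simple excess-of-loss reinsurance layer to obtain the net figure."""
    recovered = min(max(gross - retention, 0.0), limit)
    return gross - recovered

for band, label in enumerate(["cat 1-2", "cat 3", "cat 4", "cat 5"]):
    gross = scenario_gross_loss(band)
    print(f"{label:7s} gross {gross:8.1f}m  net {net_loss(gross):8.1f}m")
```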

Increasing uncertainty

The next three layers in Figure 2 introduce increasing uncertainty to the stack. Catastrophe models have played a central role in assessing the combined impact of such issues to date. When first developed, catastrophe models for windstorms brought together three key scientific developments: models for the spatial distribution of wind speeds in storms; engineering models facilitating realistic loss estimates for particular events; and modelling of return periods based on historical data. While other capabilities continue to be added, this trio forms the foundation.
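
The sketch below shows, in highly simplified form, how that trio fits together: a toy wind-field function for the hazard, a toy vulnerability curve for the engineering component, and a Poisson frequency assumption for the probabilistic component. Every function and parameter here is an illustrative stand-in rather than a representation of any vendor's model.

```python
import math
import numpy as np

rng = np.random.default_rng(11)

def wind_speed_at(distance_km, v_max, radius_max_winds_km=40.0):
    """Toy hazard module: peak gust (m/s) as a function of distance from the
    storm track. Real wind-field models are far more sophisticated."""
    if distance_km <= radius_max_winds_km:
        return v_max * distance_km / radius_max_winds_km
    return v_max * (radius_max_winds_km / distance_km) ** 0.6

def damage_ratio(wind_speed, threshold=25.0, scale=30.0):
    """Toy vulnerability curve: fraction of insured value lost at a given wind speed."""
    if wind_speed <= threshold:
        return 0.0
    return 1.0 - math.exp(-((wind_speed - threshold) / scale) ** 2)

# Portfolio locations: (distance from track in km, sum insured in $m) - illustrative.
portfolio = [(20.0, 200.0), (45.0, 450.0), (80.0, 900.0), (150.0, 1_500.0)]

def event_loss(v_max):
    """Ground-up loss for a single simulated storm across the portfolio."""
    return sum(damage_ratio(wind_speed_at(d, v_max)) * si for d, si in portfolio)

# Frequency module: Poisson number of landfalling storms per simulated year,
# each with a randomly drawn intensity.
annual_losses = [sum(event_loss(rng.uniform(40.0, 80.0)) for _ in range(n))
                 for n in rng.poisson(0.6, 10_000)]
print(f"mean annual loss: {np.mean(annual_losses):.1f}m, "
      f"1-in-100 loss: {np.quantile(annual_losses, 0.99):.1f}m")
```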

Historically, much of the emphasis in discussions of catastrophe models has tended to be on the complexity of the scientific algorithms embedded in them. The limitations of these models have only relatively recently become a focus of attention. The extent of the work that remains to be done to make these models as accurate as the market perceived them to be before the 2005 hurricanes was highlighted in the paper by Tom Conway in the April 2006 edition of Catastrophe Risk Management.

He presented the results from catastrophe models for 40 companies, run retrospectively for the 2004 season using the actual paths of hurricanes Charley, Frances, Ivan and Jeanne. The output from the models was considered to be reasonably accurate in only 17% of the cases.

An emerging problem for the probabilistic component of the catastrophe model is the compelling evidence of shifts in climatic patterns, with an increased propensity for severe events. Recent adjustments to these models, which alter return frequencies to take account of this perceived change in the pattern of intense storms, provide an additional buffer. However, considerably more work, over many years, will be needed before the level of confidence placed on the precision of predictions from the models prior to 2005 is justifiably restored.

An important point in this respect is that the probabilistic models aim to deal with related, but not entirely correlated, factors (i.e. the frequency and the paths of storms) using a common data set which is by its nature limited to around 10 decades. A single storm in one of the less active seasons can create record losses, as Hurricane Andrew highlighted in 1992. Figure 3 shows the areas of the Gulf region in which hurricane paths fell most frequently in the 50-year period between 1955 and 2005. Figure 4 shows the paths of hurricanes Katrina and Rita, and how they deviated from the most frequent paths.
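
The practical effect of a record of that length can be illustrated with a short calculation. The sketch below assumes, purely for illustration, a "true" long-run rate of intense Gulf landfalls and asks how much the rate estimated from a single 100-year record can vary by chance alone.

```python
import numpy as np

rng = np.random.default_rng(3)

# Suppose the "true" long-run rate of intense Gulf landfalls is 0.3 per year
# (an assumed figure, for illustration only), and all we ever observe is one
# 100-year record. How variable is the estimated rate?
true_rate, record_years, n_trials = 0.3, 100, 20_000

estimated_rates = rng.poisson(true_rate * record_years, n_trials) / record_years

low, high = np.percentile(estimated_rates, [2.5, 97.5])
print(f"95% of 100-year records give rates between {low:.2f} and {high:.2f} "
      f"per year (true rate {true_rate})")
# Typical output: roughly 0.20 to 0.41 - a wide band around the true value,
# before any question of shifting climate patterns is even considered.
```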

Important but not well represented

The top layer of the stack in Figure 2 gives an example of factors that can have a substantial impact but are not well represented in any current model. Some of them are known: for example, the funnelling effect, which can cause multiple events to follow similar tracks during particular seasons, with the result that the conditions prevalent when an intense storm strikes are not the pristine ones assumed by the models. The modelling of these scenarios will improve over time.

Other combined events have simply not been encountered before, such as the combined impact of the flooding from the levee breaches and hurricane winds in New Orleans as a result of Hurricane Katrina, and will not be incorporated in the models until they have had a major impact on the market. This will be a key challenge in the future, particularly in emerging markets where new combinations of hazards and exposures will occur.

Although much more work remains to be done to perfect the catastrophe models, the combination of an increased focus on accurate data capture, the management of the geographic accumulation of risk using aggregation software and the predictions from catastrophe models provides a good foundation for the future. This is true provided that the impact of the uncertainty introduced by the various layers in the stack in Figure 2 is considered carefully and appropriate adjustments are made when assessing the overall portfolio and the outward reinsurance programme.

Methods of adjustment

Various statistical models have been developed to adjust catastrophe model predictions. These techniques are a valuable addition to the underwriter's armoury. For example, an important recent advance is the emergence of algorithms that predict, with a high level of precision, the intensity of a coming hurricane season from anomalous wind patterns earlier in the year. The data available to date strongly indicate that this methodology can anticipate with considerable accuracy which quartile, in terms of overall damage, the coming season will fall into. Such predictions can be used to adjust the driving parameters of the various models to get a better understanding of the possible range of outcomes.
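
One way such a forecast could feed through to the modelled figures is sketched below: occurrence rates in a hypothetical event-loss table are scaled by a factor tied to the forecast quartile before exceedance probabilities are recalculated. The event table, the quartile factors and the independence assumption are all illustrative; this is not the published adjustment algorithm, only a picture of where such an adjustment would plug in.

```python
import numpy as np

# Hypothetical event-loss table of the kind a catastrophe model produces:
# each event has an annual occurrence rate and a modelled loss ($m).
event_rate = np.array([0.050, 0.020, 0.008, 0.003, 0.001])
event_loss = np.array([150.0, 600.0, 1800.0, 4500.0, 9000.0])

def exceedance_prob(threshold, rates, losses):
    """Annual probability of at least one event with loss >= threshold,
    treating event occurrences as independent Poisson processes."""
    lam = rates[losses >= threshold].sum()
    return 1.0 - np.exp(-lam)

# Assumed multipliers applied to all occurrence rates according to the
# quartile the pre-season forecast places the coming season in.
quartile_factor = {1: 0.6, 2: 0.9, 3: 1.2, 4: 1.6}
adjusted_rate = event_rate * quartile_factor[4]   # forecast: a very active season

for threshold in (1000.0, 4000.0):
    base = exceedance_prob(threshold, event_rate, event_loss)
    adj = exceedance_prob(threshold, adjusted_rate, event_loss)
    print(f"P(event loss >= {threshold:.0f}m in a year): "
          f"base {base:.2%}, adjusted {adj:.2%}")
```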

A practical problem encountered in overlaying the catastrophe models with statistical adjustment algorithms is the time taken to turn around large volumes of scenarios. Using an under-exploited strength of the aggregation models - their ability to turn around calculations quickly - can overcome this.

In conclusion, an approach which explicitly recognises the uncertainty from each of the layers in Figure 2, and uses a combination of the most valuable attributes of the various techniques currently available to make suitable adjustments, would enhance commercial performance. This calls for a much higher level of collaboration from all trading partners and software providers. This may seem a little risky, and it will be more costly in the short term, but it is essential for the retention of the confidence of the investment community in the medium term.

- Dr Ameet Dave is commercial director of ROOM Solutions. He is a physicist by background. Email: Ameet.Dave@roomsolutions.net Website: www.roomsolutions.net