Modelling marine risks
International trading hubs have become centres of high value concentration, with the larger ports handling goods worth several billion euros every day. The catastrophe at Tianjin highlighted the insurance industry’s difficulty in reliably quantifying the loss potential of such facilities.
This is not a new issue. Past major events have frequently caused considerable losses for marine insurers. In 2008, a hailstorm struck the port of Emden, a leading export hub of the German automotive industry. A rare event in meteorological terms, the storm not only caused significant damage to vehicles parked on the dockside, but also inflicted considerable losses on a number of cargo insurers.
Four years later, the US eastern seaboard was hit by Hurricane Sandy, its storm surge flooding port facilities in New York and New Jersey. The magnitude of the losses took insurers by surprise and underscored the need for better control over liability accumulations.
What is it that makes accumulation control so difficult in marine insurance? And why are insurers still so far from the quality standards achieved in property insurance, with its ability to model extreme loss scenarios? The problems are partly of the insurers’ own making, but they also stem from the specifics of marine insurance and the many special risks it covers.
The methods used to control accumulation are constantly being further developed. Falling hardware prices and higher computing power have led to the development of complex probabilistic catastrophe models capable of simulating the losses from tens of thousands of potential events. The models combine liability distribution and insurance terms with parameters specific to the event, such as wind speed or the intensity of an earthquake.
Like the models and their technical capabilities, the quality and granularity of exposure data have also matured. Many property insurers today are not only able to assess their portfolios at individual address level, they also store a wealth of other data in their systems on the individual risks. Aided by this mass of data, the models assign individual locations to the simulated scenarios and translate risk-relevant attributes such as occupancy type or year of construction into vulnerability functions. The modelling takes account of a portfolio’s features with increasing specificity as localisation becomes more precise and the database more complete.
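The mechanics described above can be illustrated with a toy simulation: a set of simulated events, each carrying a hazard intensity; a portfolio localised at address level with risk-relevant attributes; a vulnerability function translating those attributes and the intensity into a damage ratio; and insurance terms applied per location. This is a minimal sketch for illustration only, not any vendor's model, and all names, thresholds, and figures in it are hypothetical.

```python
import random

random.seed(42)

# Simulated event set: each event carries a hazard intensity
# (here, wind speed in m/s); real models use tens of thousands of events.
events = [{"id": i, "wind_speed": random.uniform(20.0, 70.0)}
          for i in range(10_000)]

# Portfolio at individual-address level, with risk-relevant attributes
# and insurance terms (deductible, limit) per location.
portfolio = [
    {"address": "Quay 1", "value": 5_000_000, "occupancy": "warehouse",
     "deductible": 50_000, "limit": 2_000_000},
    {"address": "Quay 2", "value": 8_000_000, "occupancy": "vehicle_lot",
     "deductible": 100_000, "limit": 5_000_000},
]

def vulnerability(occupancy: str, wind_speed: float) -> float:
    """Mean damage ratio as a function of occupancy type and hazard intensity."""
    # Hypothetical damage thresholds per occupancy type.
    threshold = {"warehouse": 35.0, "vehicle_lot": 25.0}[occupancy]
    return min(1.0, max(0.0, (wind_speed - threshold) / 50.0))

def event_loss(event: dict) -> float:
    """Ground-up loss per location, with deductible and limit applied."""
    total = 0.0
    for loc in portfolio:
        ground_up = loc["value"] * vulnerability(loc["occupancy"],
                                                 event["wind_speed"])
        insured = min(max(ground_up - loc["deductible"], 0.0), loc["limit"])
        total += insured
    return total

# Sort simulated event losses to read off exceedance levels, assuming
# (simplistically) one event per simulated year.
losses = sorted((event_loss(e) for e in events), reverse=True)
pml_100 = losses[len(losses) // 100 - 1]  # approx. 1-in-100 loss
print(f"Approx. 100-year loss: {pml_100:,.0f}")
```

A real marine module would, among other things, have to cope with the location uncertainty discussed below: the `portfolio` list assumes fixed addresses, which is exactly what moving cargo does not provide.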
Marine insurance has a fundamental problem in this context: localising the insured risks is often impossible or involves major uncertainty, as they are constantly moving. Although goods often follow routes determined by logistics, the insurer cannot know where an insured object is located at any given time; in other words, its location cannot be established precisely for an individual risk. This applies in particular to ports, where goods are regularly transshipped without delay. This lack of transparency not only makes loss appraisal following an event much more difficult; it also makes modelling loss potentials on the basis of detailed exposure data almost impossible.
In addition to the lack of information on liability, the models themselves can only meet the specific requirements of marine insurance to a limited extent, as they were traditionally developed for property insurance. The vulnerability of marine risks is more varied and, for a long time, the available loss data were too sparse and too coarse-grained to permit adequate validation of the modelled results. The industry only recognised the need to improve its accumulation control tools after the enormous losses caused by Sandy.
In cooperation with Munich Re and other selected insurers and brokers, the model provider Risk Management Solutions (RMS) has developed an initial module specifically for modelling marine risks, due to be launched in 2016. This establishes the technical functionality for more accurate simulation of loss scenarios. It remains to be seen whether the quality of exposure data will improve correspondingly.