The dangers of Big Data

Big Data might be the next big thing – but, writes Peter Kochenburger, the industry’s use of it is outstripping the traditional capabilities of regulators

BIG DATA gives insurers the ability to collect vast amounts of personalized information, create new predictive models and apply them to a variety of insurance functions, thereby enhancing the marketplace for insurance consumers. Unrestrained, however, Big Data will also negatively disrupt insurance regulation and essential insurance functions.

One concern is Big Data’s ability to evaluate consumer behaviour. Price optimization – the ability to predict how much a premium can be increased before a particular consumer is motivated to shop around – is the most recent battleground. This practice may well constitute unfair discrimination by differentiating among consumers with the same risk characteristics based on their propensity to comparison shop, and several states have issued bulletins banning the practice.

Perhaps building on this theme, several data analytics vendors are promoting claim optimization. This is good news if it means settlements will be quicker, fairer and more efficient. But the ability to ‘optimize’ can also mean establishing settlement offers on the statistical likelihood that the claimant will accept the offer, rather than on the fair value of the claim itself. In this area, the law is clear: Insurers are required to pay claims at their full value and not negotiate them downward based on unrelated factors, such as a policyholder’s need for a quick payout.

Neither practice is new, but Big Data allows companies to engage in them in a far more sophisticated manner, to the detriment of insurance consumers. It also challenges regulators’ ability to fulfill a number of traditional supervisory functions, including evaluating insurer rating and underwriting plans, and enforcing laws against discrimination.

There is a growing gap between the industry’s use of Big Data and regulators’ ability to supervise it. These models are increasingly complex, can include more than a thousand rating factors for a single insured, and are often created by third-party vendors over whom regulators have uncertain authority.

The most difficult challenge presented by Big Data and predictive analytics probably lies in its greatest potential: to match price to risk far more precisely, perhaps down to the individual policyholder. While this precision enhances actuarial fairness, it also fragments risk pools and reduces risk spreading, another essential insurance function.

To use an ‘old’ Big Data example, credit scoring might result in the majority of policyholders paying less than they would without it. But since the overall premiums collected are not reduced, a minority must pay more – sometimes a lot more. If, say, eight of ten policyholders in a pool each paying $1,000 see their premiums cut to $900, the remaining two must make up the shortfall at $1,400 apiece – a 40 per cent increase.

The political battles over the US National Flood Insurance Program’s rates for residential flood coverage illustrate the dilemma: charge homeowners the full cost of insuring their homes against flood, or continue the partial subsidization of rates – in essence, spreading some of the risk to all federal taxpayers. An article in the Redwood Times last October about wildfires and insurance rates noted that for some homeowners, rates had more than doubled in five years due to more individualized risk assessments and developments in fire metrics. “Today, insurance companies can zero in on the risk to a specific home and price the policy specific to that address,” said a state insurance regulator quoted in the article.

Big Data sharpens the divergence between risk precision and risk spreading. However, there are instances where we as a society want to subsidize policyholder risk; the best example may be forbidding health insurers from using an insured’s health status or pre-existing condition in setting rates. There are also good reasons why individuals who can legally drive (despite a series of accidents) should be able to obtain affordable auto insurance, and assigned risk plans are often subsidized in part by the voluntary market.

Determining the appropriate risk allocation is neither an actuarial decision nor one to be made by private actors, but a matter of public policy – meaning our elected officials and regulators will be responsible. If the political system in the US is as fragmented as many believe, effective legislative guidance may not be forthcoming, leading either to a continuation of de facto subsidization (as with the National Flood Insurance Program), or to essential insurance products becoming far more expensive and a growing number of policyholders being priced out of the market.
