The dark side of Big Data

Big Data is the superhero fighting insurance fraud, but with this power comes responsibility, writes Peter Kochenburger

Opinion

By Peter Kochenburger

Big data, predictive analytics and artificial intelligence are reshaping insurance underwriting, marketing and claims handling. Fighting fraud may be the most advanced use of these tools and is often the most championed. However, without appropriate regulation and transparency, these same tools can unfairly target individuals and companies, with potentially grave consequences.

The most frequent and promising use of these tools appears to be the analysis of particular claims. Data analytics companies and insurers are developing a far deeper and more comprehensive understanding of potential connections among individuals and claim service providers, types of claims, and how and where they occur. This allows them to make more accurate immediate estimates of the likelihood of fraud when a claim is first made, and to use the data to further refine their predictive models.

It’s a given that these models will produce false positives based on incorrect assumptions, information and predictions, just as they did before Big Data, when adjuster experience and instinct were the primary tools. It is essential that flagged claims are promptly evaluated by an experienced adjuster so that valid claims aren’t sidetracked and actual fraud is investigated.
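To make the triage point concrete, the sketch below shows how a flag-and-review pipeline might be wired up. Everything here is hypothetical (the `fraud_score` stand-in, the `anomaly_signal` feature, the 0.8 threshold); the point is only that the flagging threshold trades missed fraud against the false positives an adjuster must clear quickly.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    claim_id: str
    amount: float
    features: dict = field(default_factory=dict)  # signals a model might consume

def fraud_score(claim: Claim) -> float:
    """Hypothetical stand-in for a trained predictive model; returns a score in [0, 1]."""
    return min(1.0, max(0.0, claim.features.get("anomaly_signal", 0.0)))

FLAG_THRESHOLD = 0.8  # raising this cuts false positives but lets more fraud through

def triage(claims: list[Claim]) -> tuple[list[Claim], list[Claim]]:
    """Split claims into those handled routinely and those routed to an adjuster."""
    routine = [c for c in claims if fraud_score(c) < FLAG_THRESHOLD]
    flagged = [c for c in claims if fraud_score(c) >= FLAG_THRESHOLD]
    return routine, flagged

claims = [
    Claim("C-001", 1_200.0, {"anomaly_signal": 0.1}),
    Claim("C-002", 48_000.0, {"anomaly_signal": 0.93}),
]
routine, flagged = triage(claims)
# Flagged claims go to an experienced adjuster for prompt review, so valid
# claims are not sidetracked while genuine fraud is investigated.
```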

More troubling is the development and use of predictive models purporting to establish a ‘propensity for fraud,’ especially when based on information or characteristics unrelated to a particular claim. This data can include comprehensive criminal arrest (not just conviction) records, credit and bill-paying habits well beyond traditional credit reports, social media use (or non-use), and shopping habits. Further, these models may be used to screen applicants for this propensity at the underwriting stage, before the possibility of a claim could have arisen. Insurance has now entered the world of Minority Report.

Even leaving aside the appropriateness of pricing a product based on predictions of future criminality, these models create multiple concerns. First, is the data used by the model accurate, and are consumers aware of what personal information has been collected and how it is used? If not, then how can they determine whether this information is correct or change their behavior to improve their score?

Second, even if this data is accurate in one sense, all models contain preconceptions, including decisions on what data to use and what to omit, which can reflect improper biases. For example, using criminal records as an underwriting tool could ignore the fact that policing and prosecutorial decisions might not be race-neutral. As the Department of Justice documented in its investigation of the Ferguson, Missouri, police department, African-Americans may be arrested more frequently for the same crime, even when controlling for population size.

The more this type of non-claim-related information is used for insurance purposes, the more likely it is for implicit biases to creep in. When inaccurate data and flawed models result in higher premiums, denied coverage and the classification of valid claims as potentially fraudulent, individuals will not only be treated unfairly in a specific transaction but may also be tagged as undesirable customers in the future, contributing to a vicious downward cycle. The lack of transparency to consumers and regulators, along with inadequate rights for consumers to obtain and correct information, makes it much harder to evaluate and correct a model’s data and predictions.

Insurers and data modeling companies, rather than regulators or legislators, are the innovators in the use of Big Data and AI to combat insurance fraud. However, to paraphrase the Gospel of Luke – or Ben Parker, if you prefer – with this power comes responsibility. Insurers that use these technologies have an obligation to be the gatekeepers in determining whether and how a specific risk or other underwriting factor, claims process or fraud assessment tool is used.

They should understand the assumptions or factors built into any models they use, as well as the likely results for current and future policyholders. They should independently test and evaluate potential biases, and assess the potential costs and benefits to the company and policyholders. Equally important, they should be able to explain these issues clearly and in the level of detail needed by their regulators, whom the public and elected officials will call upon to justify their regulatory actions (or inactions).
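One way an insurer might independently test a model for the biases discussed above is to compare its false-positive rates across demographic groups, i.e., how often each group’s valid claims are wrongly flagged. The sketch below assumes a hypothetical audit table with `group`, `flagged` and `actual_fraud` columns; it illustrates the kind of check a regulator could ask for, not a complete fairness audit.

```python
import pandas as pd

def false_positive_rate_by_group(df: pd.DataFrame, group_col: str,
                                 flag_col: str = "flagged",
                                 label_col: str = "actual_fraud") -> pd.Series:
    """Per-group false-positive rate: the share of valid claims wrongly flagged."""
    legit = df[df[label_col] == 0]  # restrict to claims that were in fact valid
    return legit.groupby(group_col)[flag_col].mean()

# Hypothetical audit data: one adjudicated claim per row.
audit = pd.DataFrame({
    "group":        ["A", "A", "A", "B", "B", "B"],
    "flagged":      [1, 0, 0, 1, 1, 0],
    "actual_fraud": [0, 0, 0, 0, 0, 1],
})

print(false_positive_rate_by_group(audit, "group"))
# A persistent gap between groups suggests the model (or the data it was
# trained on) is penalizing one group's valid claims more than another's.
```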

Peter Kochenburger teaches insurance law at the University of Connecticut Law School. He is an NAIC consumer representative, has been elected to the American Law Institute and is a graduate of Harvard Law School.
