How can insurance avoid a loss of trust?

The CII’s Sian Fisher explains that there is a tool the industry can turn to

By Sian Fisher

This year, the regulatory agenda has been dominated by the ethics of data, from the controversy around Cambridge Analytica to calls for far tighter regulation of cryptocurrencies.

Are these issues relevant to insurance? How can we avoid a similar loss of trust in the way we use data?

Fortunately, we already have a tool with which we can build trust: professionalism.

Professionalism can be broken down into three constituent parts: competence, integrity and care for the customer. Applying these principles to technological change can protect us from reputational damage.

First, technology gives professionals opportunities to enhance their competence. For example, insurers are already using artificial intelligence to improve claims processes in motor insurance, leading to lower costs and more effective repairs for customers.

However, if we want to retain the trust we have gained from becoming more effective, we must use technology in an honest and transparent way.

For example, if we use artificial intelligence to make decisions around underwriting, we have to make sure that these decisions are fair and do not reflect biases that we have introduced unconsciously when we set up the system.

One way in which firms are building trust in the use of artificial intelligence is through open-sourcing. In the last three years, Google, Facebook and Microsoft - companies with a huge stake in intellectual property - have open-sourced software that they have developed for key services, for example around image searches. As we build AI systems in insurance, we must develop an approach to open-sourcing that reassures the public about our use of technology.

But transparency on its own is not enough: to retain public trust, we must go beyond merely discharging our legal obligations around openness.

The introduction of technology will have profound effects on society, and professionals have a responsibility, as experts in their field, to understand these effects and to speak up when they are producing the wrong outcomes. For example, professionals can help people to develop skills that will keep them employable, rather than being made redundant by robots; or they can help society to manage the risks created by new technology, such as the unfortunate way in which the cast-iron privacy of cryptocurrencies has helped organised crime to flourish.

To maintain public trust, we have to use technology in our clients' best interests, and in the wider public interest, too. This is a tough challenge, but like all difficult tasks, we can break it down - in this case, into the more manageable challenges of maintaining competence, acting with integrity and discharging a duty of care to consumers.
