What are the regulatory hurdles to launching an insurance chatbot?

From domestic regulation to global privacy scandals, insurtech co-founder discusses the challenges in getting a chatbot to market


By Ksenia Stepanova

Providing personalised advice through chatbots is an idea that has been hovering over the insurance sector for some time, and though opinions on their use vary wildly, their speed, simplicity and efficiency make them difficult to ignore.

The provision of robo-advice is currently only allowed if the FMA has specifically granted you approval – however, offering automated advice is far from a chatbot’s only use. Brokerage Cove Insurance uses its chatbot to collect information and generate quotes for car and phone insurance, and this does not require any regulatory approval – though that doesn’t mean that there are no regulatory hoops to navigate.

Cove co-founder Andy Coon will be a key speaker at Auckland’s upcoming TechFest, and will discuss how a chatbot can help enhance the customer experience. In the run-up to the event, Coon spoke in depth to Insurance Business about some of the regulatory difficulties of getting an insurance chatbot off the ground.

“We are regulated by the FMA, but what we do is more of an ‘assisted sale process’ rather than advice, so the direct regulatory burden on the chatbot wasn’t heavy,” Coon explained. “We did get involved in the robo-advice submissions, but we don’t actually provide robo-advice.”

“The bigger piece was getting it all approved by our insurance partner Allied World, and then by Lloyd’s,” he said.

“The chatbot context means you’ve got a low character limit to express complex things, so there was a lot of work getting that right. There was some back and forth with the policy wording to make sure we could cover what we wanted to cover, while still making it clear and concise on the customer journey.”

Coon says that despite the difficulties, Cove ended up very pleased with the final result and mapped their Facebook chatbot text over to the web version more or less verbatim. However, another issue they ran into was Facebook’s widely-publicised privacy scandal over Cambridge Analytica – a scandal which has resulted in a series of lawsuits for failure to safeguard personal data, and potentially billions of dollars in damages.

With data privacy thrust sharply into the public eye, tech companies are under massive pressure to ensure that their clients’ data is secure and isn’t being opportunistically harvested without their consent.

“The issue of Cambridge Analytica cropped up right when we were prepping for our final Lloyd’s submission,” Coon explained.

“That, combined with the GDPR framework that was coming into force in the UK (where Lloyd’s is based) at the time, was pretty unhelpful for getting a Facebook Messenger-hosted insurance process approved. But we were pretty clear on our approach to customer privacy and how to avoid Facebook seeing anything sensitive – when asked about criminal convictions, for example, we put that into a secure webview that Facebook couldn’t see, so ultimately, this ended up being fine.”
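The webview pattern Coon describes – routing sensitive questions out of the chat transcript and into a page hosted on the insurer’s own servers – can be sketched roughly as a Messenger button template pointing at an HTTPS form. This is a minimal illustration of the general approach, not Cove’s actual implementation; the URL, function name and wording below are hypothetical.

```python
# Sketch of the pattern described above: rather than asking a sensitive
# question (e.g. criminal convictions) in the chat itself, the bot sends
# a button that opens a secure webview on the insurer's own domain, so
# the chat platform never sees the answer. URL and names are hypothetical.

def sensitive_question_message(recipient_id: str, form_url: str) -> dict:
    """Build a Messenger Send API payload with a webview button."""
    return {
        "recipient": {"id": recipient_id},
        "message": {
            "attachment": {
                "type": "template",
                "payload": {
                    "template_type": "button",
                    "text": "We need a few confidential details to finish your quote.",
                    "buttons": [
                        {
                            "type": "web_url",
                            "url": form_url,  # must be HTTPS
                            "title": "Open secure form",
                            "messenger_extensions": True,
                            "webview_height_ratio": "tall",
                        }
                    ],
                },
            }
        },
    }

msg = sensitive_question_message("USER_ID", "https://example.com/secure-form")
```

The completed form would then be submitted directly to the insurer’s backend over HTTPS, so the confidential answers never pass through the chat platform’s messaging pipeline.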

To hear more from Andy Coon and other industry experts at the upcoming TechFest, register here.
