Are insurance customers ready for generative AI?

There's a "fundamental misunderstanding" over what ChatGPT and AI can do

By Gia Snape

Insurance companies are increasingly keen to explore the benefits of generative artificial intelligence (AI) tools like ChatGPT for their businesses.

But are customers ready to embrace this technology as part of the insurance experience?

A new survey commissioned by software company InRule Technology reveals that customers aren’t excited to encounter ChatGPT in their insurance journey, with nearly three in five (59%) saying they tend to distrust or fully distrust generative AI.

Even as cutting-edge technology aims to improve the insurance customer experience, most respondents (70%) said they still prefer to interact with a human.

Generational divide over AI attitudes

InRule’s survey, conducted with PR firm PAN Communications through Dynata, found striking generational differences in customer attitudes toward AI.

Most Boomers (71%) don’t enjoy or are uninterested in using chatbots like ChatGPT. That figure drops to a quarter (25%) among Gen Z.

Younger generations are also more likely to believe AI automation helps yield stronger privacy and security through stricter compliance (40% of Gen Z, compared to 12% of Boomers).

Additionally, the survey found that:

  • 67% of Boomers think automation lessens human-to-human interaction versus 26% of Gen Z.
  • 47% of Boomers find automation impersonal, compared to 31% of Gen Z.
  • A data leak would scare away 70% of Boomers and make them less likely to return as a customer, but the same is only true for 37% of Gen Z.

Why do customers distrust AI and ChatGPT?

Danny Shayman, AI and machine learning (ML) product manager at InRule, isn’t surprised by customers’ wariness over generative AI. Chatbots have existed for years and have produced mixed results, he pointed out.

“Generally, it's a frustrating experience to interact with chatbots,” Shayman said. “Chatbots can’t do things for you. They could run a rough semantic search over some existing documentation and pull out some answers.

“But you could talk to a human being and explain it in 15 seconds, and an empowered human being could do it for you.”

Additionally, AI-driven tools rely on high-quality data to serve customers effectively. Users may still see poor outcomes when engaging with generative AI, degrading the customer experience.

“Often, if anything in that data set is wrong, incorrect, or misleading, the customer is going to get frustrated. We feel like we spend an hour getting nowhere,” said Rik Chomko, CEO of InRule Technology.

The Chicago-headquartered firm offers process automation, machine learning and decisioning software to more than 500 financial services, insurance, healthcare, and retail firms. It counts the likes of Aon, Beazley, Fortegra, and Allstate among its clients.

“I believe [ChatGPT] is going to be better technology than what we've seen in the past,” Chomko told Insurance Business. “But we still run the risk of someone assuming [the AI is right], thinking a claim is going to be accepted, and finding out that's not the case.”

The risks of connecting ChatGPT with automation

According to Shayman, there’s a fundamental misunderstanding among consumers about how ChatGPT works.

“There's a big gap between generating text that says something and doing that thing. People have been working to hook APIs up to ChatGPT so it can connect to a system and go do something,” he said.

“But you end up with a disconnect between the tool’s capability, which is generating text, and being an efficient and accurate doer of tasks.”

Shayman also warned of a significant risk for businesses that set up automation around ChatGPT.

“If you're an insurer and have ChatGPT set up so that someone can come in and ask for a quote, ChatGPT writes the policy, sends it to the policy database, and produces the appropriate documentation,” he said. “But that's very reliant on ChatGPT having gotten the quote correct.”

Ultimately, insurance companies still need human oversight on AI-generated text – whether that’s for policy quotes or customer service.

“What happens if someone knows that they're interacting with a ChatGPT-based system and understands that you can get it to change output based on slight modifications to prompts?” Shayman asked.

“If you're trying to set up automation around a generative language tool, you need validations on its output and safety mechanisms to make sure that someone can’t steer it into doing what the user wants rather than what the company wants.”
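The kind of output validation Shayman describes can be sketched in a few lines. This is a purely hypothetical illustration, not InRule's or any insurer's actual system: the field names, policy types, and premium bounds are invented, and a real deployment would validate far more before any model-generated quote touched a policy database.

```python
# Hypothetical sketch: check a model-generated quote against hard business
# rules before any automated system acts on it. All names and thresholds
# here are invented for illustration.

def validate_quote(quote: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the quote passes."""
    errors = []

    # Required fields must all be present before further checks make sense.
    required = {"policy_type", "annual_premium", "coverage_limit"}
    missing = required - quote.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
        return errors

    # The premium must be numeric and within (invented) underwriting bounds.
    if not isinstance(quote["annual_premium"], (int, float)):
        errors.append("annual_premium is not numeric")
    elif not (100 <= quote["annual_premium"] <= 50_000):
        errors.append("annual_premium outside underwriting bounds")

    # Only known product lines are allowed through automation.
    if quote["policy_type"] not in {"auto", "home", "renters"}:
        errors.append(f"unknown policy_type: {quote['policy_type']!r}")

    return errors


# A generated quote reaches the policy database only if validation passes;
# anything that fails is routed to a human underwriter instead.
generated = {"policy_type": "auto", "annual_premium": 1200, "coverage_limit": 100_000}
print(validate_quote(generated))  # → []
```

The point of the design is that the language model's text never directly triggers an action: a deterministic rule layer sits between generation and execution, which is also the natural place to apply the human oversight the article calls for.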

What are your thoughts on InRule Technology’s findings about customers and ChatGPT? Share your comments below.
