AI in insurance – training the future workforce

Getting to grips with one of the industry’s hottest topics


By Bethan Moorcraft

Allstate CEO Tom Wilson predicted last year that artificial intelligence (AI) would rip through the service economy “like a tsunami,” with automation and technological advancements forcing some roles to change and making others completely redundant. In response, the Northbrook, Illinois-based insurer said it would invest $40 million to help train employees to thrive in an AI-driven environment.

As Wilson told Bloomberg: “Whether you’re an accountant, an auto adjuster, a computer programmer, technology is going to take over. We have to figure out: ‘How do we train them to do the new job, not the job that the computer can do?’”

The potential use cases for AI in insurance are plentiful, and they can be broken down into three core areas. The first revolves around engagement and how insurance firms interact with clients. As insurers develop digital platforms, a shift is occurring in how consumers engage, with many opting for digital channels over human interaction. In this context, insurance companies are trying to work out how to implement AI, for example in the form of chatbots, to create an intelligent digital service that delivers the same quality of service a client would receive from a human.

Last week, QBE North America announced a new AI-based customer communication service called TextQBE. With TextQBE, some customers reporting claims will receive immediate responses from an AI virtual assistant, which will then update them and move them through the claims process. QBE North America SVP of technical operations Alyssa Hunt said the intelligent conversation platform would help the insurer take its customer experience to the next level.

“We’re able to give customers the option to communicate how they prefer, and the virtual assistant’s ‘intelligence’ enables us to offer customers the answer to simple questions about deductibles, receipt of photos and other documents rapidly on a mobile device,” she said.
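
To make the idea concrete, here is a minimal sketch of how a claims virtual assistant might field the kind of simple questions Hunt describes. The intents, keyword lists and canned responses are hypothetical illustrations, not QBE’s actual TextQBE system, and production assistants rely on far more capable natural-language models.

```python
# Minimal sketch of a claims virtual assistant (hypothetical, not TextQBE).
# Simple questions are answered from claim data; everything else is
# escalated to a human adjuster.
from typing import Optional

INTENT_KEYWORDS = {
    "deductible": ["deductible", "out of pocket"],
    "documents": ["photo", "receipt", "document", "upload"],
    "status": ["status", "update", "progress"],
}

RESPONSES = {
    "deductible": "Your policy deductible is ${deductible}.",
    "documents": "We have received {doc_count} document(s) so far.",
    "status": "Your claim is currently in the '{stage}' stage.",
}

def classify_intent(message: str) -> Optional[str]:
    """Return the first intent whose keywords appear in the message."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return intent
    return None  # no match: escalate to a person

def respond(message: str, claim: dict) -> str:
    intent = classify_intent(message)
    if intent is None:
        return "Let me connect you with an adjuster who can help."
    return RESPONSES[intent].format(**claim)

claim = {"deductible": 500, "doc_count": 2, "stage": "review"}
print(respond("What's my deductible?", claim))   # Your policy deductible is $500.
print(respond("Did you get my photos?", claim))  # We have received 2 document(s) so far.
```

The important design choice is the fallback: any message the assistant cannot match is handed to a person, which leads directly into the automation-versus-augmentation question below.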

The majority of AI applications in insurance revolve around the automation of processes. That automation could also be called augmentation, according to Dr Charles Dugas, head of insurance at Element AI. Many ask whether AI will replace people and the tasks they perform. In Dugas’s mind, the answer is ‘no’.

“While easy cases can be automated, such as automated document processing, you still want to have a human in the loop for the more difficult cases,” he said. “Rather than seeing this as a black or white situation – 100% manual or 100% automated – there’s a happy middle you can reach where a portion of your easy cases are handled automatically, and the other portion is supported by AI but really managed by human beings.”
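
One common way to implement that happy middle is confidence-based routing: a model scores every case, predictions above a threshold are processed automatically, and the rest are queued for a person with the model’s suggestion attached. The sketch below assumes a hypothetical document classifier that emits confidence scores; the 0.95 cutoff is an illustrative value that would be tuned to the insurer’s tolerance for error.

```python
# Sketch of human-in-the-loop routing by model confidence (assumption:
# a hypothetical upstream classifier supplies the label and score).
from dataclasses import dataclass

AUTO_THRESHOLD = 0.95  # illustrative cutoff, tuned in practice

@dataclass
class RoutedCase:
    case_id: str
    label: str        # the model's suggested classification
    confidence: float
    handler: str      # "auto" or "human"

def route(case_id: str, label: str, confidence: float) -> RoutedCase:
    """Auto-process confident predictions; queue the rest for human
    review, keeping the model's suggestion to support (augment) it."""
    handler = "auto" if confidence >= AUTO_THRESHOLD else "human"
    return RoutedCase(case_id, label, confidence, handler)

# Example scores a document classifier might emit for incoming claims
for case_id, label, score in [("C-101", "invoice", 0.99),
                              ("C-102", "police_report", 0.71)]:
    print(route(case_id, label, score))
```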

A third goal insurance companies can strive for through the implementation of AI is the discovery of new insights, which comes from AI’s ability to ingest larger volumes of more complex and varied data. These new types of data can be processed to extract better insights and to inform faster, more accurate decisions, Dugas explained.

“Machine learning does particularly well with lots and lots of data. Oftentimes, the more data we throw at it, the better the predictive capabilities and the better the accuracy,” explained Dr Alex LaPlante, managing director of research at the Global Risk Institute. However, in today’s connected society, few consumers are aware of the extent to which their personal information is publicly available (through channels like social media, IoT devices and so on), so questions arise as to how ethical it is for insurance companies to tap into these data sources to feed machine learning.
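
LaPlante’s point can be demonstrated with a learning curve, which tracks accuracy as a model is trained on progressively larger slices of a dataset. The sketch below uses scikit-learn on synthetic data rather than real insurance records, so the upward shape of the curve, not the specific numbers, is what matters.

```python
# Learning curve on synthetic data: accuracy typically rises as the
# training set grows (illustrative only, no real insurance data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

sizes, _, test_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5,
)

for n, score in zip(sizes, test_scores.mean(axis=1)):
    print(f"trained on {n:4d} examples -> cross-validated accuracy {score:.3f}")
```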

“There’s always a question around the ethical usage of data,” said LaPlante. “In certain types of model, should we be considering race, sex, religion, and so on? I’m not necessarily saying those shouldn’t be used, but we have to determine when those uses are appropriate and when it makes sense in the decisions we’re making. Then there’s the issue of determinability and being able to explain why a machine learning algorithm has come to the conclusion it has.”
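
The first half of LaPlante’s point, the ethical use of attributes such as race or sex, has simple diagnostics. One is demographic parity: comparing a model’s approval rate across groups defined by a protected attribute. The records and groups below are hypothetical, and a gap does not by itself prove unfairness, but it flags decisions that deserve scrutiny before turning to the question of determinability.

```python
# Demographic parity check on hypothetical decisions: compare the
# model's approval rate across groups of a protected attribute.
from collections import defaultdict

decisions = [  # (protected-attribute group, model decision: 1 = approve)
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
print("approval rates:", rates)                                  # A: 0.75, B: 0.25
print("parity gap:", max(rates.values()) - min(rates.values()))  # 0.5
```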

Regulators around the world have started to address the issue of determinability. The European GDPR, for example, contains a ‘right to explanation’, which means consumers can ask how and why a decision has been made. But when it comes to AI and machine learning, it can be very difficult to parse out why an algorithm does what it does. This is where the retraining referred to by Allstate’s Tom Wilson takes on real significance. LaPlante added: “If you’re the machine learning expert and you can’t back up a decision, you’re in for some trouble.”
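
For simple models, backing up a decision can be as direct as itemising each feature’s contribution to the score. The sketch below assumes a hypothetical linear underwriting model with made-up weights; producing a comparable per-decision breakdown for complex models is what explainability tools such as SHAP approximate.

```python
# Per-decision explanation for a hypothetical linear scoring model:
# report how much each feature pushed the score up or down.
WEIGHTS = {"prior_claims": -0.8, "years_insured": 0.3, "vehicle_age": -0.1}
BIAS = 1.2
THRESHOLD = 0.0  # approve when the score is at or above this line

def explain(applicant: dict) -> None:
    contributions = {f: w * applicant[f] for f, w in WEIGHTS.items()}
    score = BIAS + sum(contributions.values())
    print("decision:", "approve" if score >= THRESHOLD else "refer",
          f"(score {score:+.2f})")
    for feature, c in sorted(contributions.items(),
                             key=lambda kv: abs(kv[1]), reverse=True):
        print(f"  {feature:>13} contributed {c:+.2f}")

explain({"prior_claims": 2, "years_insured": 5, "vehicle_age": 8})
# prior_claims contributed -1.60, years_insured +1.50, vehicle_age -0.80
```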
