Is ChatGPT an insurance fraudster's best friend?

Intelligence expert warns about the dark side of generative AI

By Gia Snape

The use of artificial intelligence (AI) to facilitate fraudulent insurance claims has grown over the past decade, especially as AI has become more sophisticated. But the arrival of generative AI could drive insurance fraud to levels yet unseen, a digital intelligence expert told Insurance Business.

“With the technology that’s being built now, it’s easy for just the regular person who wants to exaggerate their claim to do so, as well as organized crime groups that want to make a larger play across multiple companies at the same time,” said Joe Stephenson, a former special investigator and current director of digital intelligence at claims technology firm INTERTEL.

The problem is that generative AI, such as OpenAI’s ChatGPT, has made warping photo evidence – a key mechanism to verify claims – extremely easy and quick to do.

“The question becomes which is worse: a thousand individuals making false $100 claims, or an organized crime group making $50,000 in a single fraud claim?” Stephenson asked. “It just opens the door to problems.”

Fraud claims easier than ever?

Photo manipulation apps have long been used as a tool to commit insurance fraud. As these apps became increasingly accessible, fraud proliferated.

“The apps where you can upload a picture of a car and modify damage, such as put a crack in the windshield, then send it out to an insurance company and get $400 to replace the windshield that was never broken – those have been around for almost 10 years now,” Stephenson said.

“Advanced technology is making it so easy for the average person to be able to engage in a certain level of fraud.”

Insurance companies have become adept at identifying false claims over the years, but the development of AI tools like ChatGPT has been so rapid that carriers are struggling to keep up.

“Criminals are so quick to jump on this and start to leverage it in their fraud, that now we’re in this rush to try to compete against it,” Stephenson said. “Sometimes we don’t even recognize the fraud yet until we have massive losses.”

Total losses due to insurance fraud in the US reached $308 billion in 2022, according to the Coalition Against Insurance Fraud.

Across the border, the Insurance Bureau of Canada estimates that insurance fraud costs in Ontario alone range between CA$770 million and CA$1.6 billion a year.

Could generative AI fuel a rise in synthetic identity fraud?

Another worrisome development brought on by generative AI is the rise of synthetic identity fraud, according to Stephenson.

This type of fraud, more commonly seen in the banking and finance industries, occurs when individuals combine real and fake identity information to create accounts or make purchases.

“Synthetic ID is advancing at a much faster rate,” Stephenson said. “In the past, it typically involved a real person who didn’t have much of a footprint, say, somebody who has immigrated to the US.”

Individuals might be pulled in by organized crime groups, who pair their real Social Security numbers with false information, such as a date of birth, to make fraudulent purchases. Alternatively, fraud actors can cobble together pieces of personally identifiable information from different real people to create “Frankenstein” identities.

But with generative AI, criminals no longer need real humans to create synthetic IDs. Generative tools can produce realistic images of people from existing data. And with a greater proportion of insurance transactions happening purely online, verifying real identities has become a huge challenge for insurance companies, according to Stephenson.

“We’ve made it so much easier to do everything without that physical interaction that now synthetic IDs are building up quicker,” the intelligence expert said.

“Generative AI allows me to create a fake human being. I can pull an image of a person that exists, create a voice for them, and build a whole persona online of somebody that doesn’t exist, and then use that persona to commit crime.”

Stephenson said there may be as many as 20 different programs in the market that individuals – never mind organized crime groups – can access to create synthetic personas at scale.

Fighting AI-powered fraud with AI

On the other hand, insurance companies can also wield AI in the fight against claims fraud.

Experts have pointed out that by analyzing claims data, AI can identify suspicious patterns and anomalies that could indicate fraudulent activity, helping carriers reduce losses due to fraud.
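As a loose illustration of the kind of statistical screen described above, here is a minimal, hypothetical sketch in Python. It flags claims whose amounts are extreme outliers within a batch using a simple z-score; real carrier systems would use far richer features (claim history, identity signals, network links) and trained models, and the data shapes here are invented for the example.

```python
from statistics import mean, stdev

def flag_anomalous_claims(claims, z_threshold=3.0):
    """Flag claims whose amount is a statistical outlier within the batch.

    `claims` is a hypothetical list of (claim_id, amount) tuples.
    A claim is flagged when its amount lies more than `z_threshold`
    standard deviations from the batch mean.
    """
    amounts = [amount for _, amount in claims]
    mu = mean(amounts)
    sigma = stdev(amounts)  # sample standard deviation
    flagged = []
    for claim_id, amount in claims:
        if sigma > 0 and abs(amount - mu) / sigma > z_threshold:
            flagged.append(claim_id)
    return flagged

# Five ordinary windshield-sized claims and one suspicious outlier.
claims = [("C1", 420), ("C2", 390), ("C3", 510),
          ("C4", 450), ("C5", 480), ("C6", 50_000)]
print(flag_anomalous_claims(claims, z_threshold=2.0))  # ["C6"]
```

In practice an outlier score like this would only feed a referral queue for human investigators, not an automatic denial, since legitimate large claims would also score highly.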

Stephenson borrowed words from tech billionaire Elon Musk in his advice for insurance companies that are experimenting with generative AI technology and want to stay a step ahead of fraudsters.

“We’ve got to slow down,” he said. “In terms of fraud, you need to think about how somebody can manipulate this technology. You might change your process because you think you’re going to get a 10% revenue increase, but are you also going to see a 12% revenue loss because you’ve made it that much easier for a fraudster to come in?

“Legitimate policyholders make up most of insurance claims. Fraudsters are a very small part, but a fraudster can have a massive impact on your bottom line if you make it easier for them. So, slow down and understand the technology before you flip that switch.”
