Behind the scenes of the newly launched code of conduct for the use of AI in claims

"We must never shirk our accountability for AI outcomes"

By Mia Wallace

Yesterday saw the launch of an insurance industry first – a voluntary code of conduct for the development, implementation, and use of artificial intelligence (AI) in claims. Led by Eddie Longworth, director of JEL Consulting, the code is the result of collaboration among 127 experts, united by the ambition to establish and uphold the highest standards of behaviour and ethical responsibility when planning, designing, or utilising AI in the management and settlement of insurance claims.

At the core of the initiative is the ambition to understand both the potential and the actuality of AI, Longworth said, and in doing so, lay the foundations required for claims departments to ensure that they are implementing AI applications transparently, safely and securely. If they do, he believes that carriers and the wider insurance supply chain will be able to take advantage of the significant benefits AI can bring to their businesses, without endangering policyholders’ trust in a fair claims outcome.

His idea for this code of conduct would likely have remained just that, he said, except for the rapid and enthusiastic response of his contacts throughout the market, who have helped bring it to life over the past six or so months. These experts come from all across the insurance value chain and include insurers, lawyers, tech companies and suppliers. It’s a testament to the collaborative nature of all involved that the code has been launched – and is now available for any organisation or individual to sign up to.

Where does this code of conduct go next?

“Our objectives for participation are limitless,” he said. “There are hundreds of carriers across multiple lines of business and there are thousands of suppliers, including tech companies, but just about every supplier is going to be affected by AI in one shape or another. Then there are the consumer organisations and trade associations.

“We don’t have ambitions for participation but we certainly have expectations that it will be taken up by a very large number of organisations. And it’s a bit like a snowball rolling downhill; once the wheel starts to turn, we hope it will gather pace in the coming weeks.”

A great deal of work has gone into making the code of conduct both accessible and applicable to everyone in the insurance market, he said, and with this in mind, there are several principles behind the code. The first three relate to the fairness, accountability and transparency of any AI application or installation, ensuring that the outputs of AI are seen and acknowledged by all relevant stakeholders.

“The second thing was [identifying] that there must be a method of justifying AI-influenced decisions,” he said. “That’s so if it’s challenged, no-one can ever use the excuse, ‘the computer said no’. We can’t have any black-box type arrangements whereby we don’t know why the decision has been made because we don’t know the basis on which it was made. We must never shirk our accountability for AI outcomes.

“And on that basis, therefore, claimants must always have the right to redress with human oversight of the outcomes of that appeals process. We want the people and organisations signing up to the code to give consideration to the issues of potential bias and, where possible, to have an impartial approach to outputs because it’s all about how we can build trust and belief.”

Accessibility and transparency – the key to the successful implementation of AI

It’s with accessibility in mind that the code has been distilled into a single page, Longworth said, because this is not a regulatory document but rather a self-regulatory instrument that is entirely dependent on the morals and ethics of the individuals and organisations that sign up to it. This is about encouraging the wider insurance ecosystem to do the right thing by signing up to these core principles and to the ambition at their heart – setting high standards for the profession.

Adding to this, Consumer Intelligence CEO Ian Hughes emphasised the sheer scope of the potential generative AI has to transform organisations – and the need for this to be tempered by having ethics embedded into AI processes as early as possible. Generative AI has advanced rapidly even in the last year, he said, and this is the time to teach good ethics to these systems, because you won’t be able to retrofit them later.

“[This is about] allowing claims organisations and insurers to embrace AI, but at the same [time] understand that we have a moral obligation to the next generation, to the 100 claimants who are totally innocent, so that we don’t take them out by training an AI just to find the one person who’s the rogue, or just to find that one risk that’s the poor risk,” he said. “I’m delighted to be part of this and help [identify] the issues about transparency, openness, explainability and fairness – those all also speak to Consumer Duty and the things the FCA is talking about.

“Adoption of this code is not just something that’s going to help our business and [promote] good ethics and give us a good future, it’s also going to help with regulatory compliance going forward as well. So, I think it’s an exciting time.”

Bringing the industry together under a shared ambition

Chris Sawford, MD of claims for Verisk in the UK, paid tribute to how the UK insurance industry put aside its competitive differences to come together in service of the common goal of this code of conduct. It’s a display of teamwork that should not be underestimated or overlooked, he said, and it also highlights the level of cohesion across the market and its ambition to do the right thing when it comes to AI in claims.

While generative AI is still a relatively new term, he said, AI more generally has been around for quite some time now – but it’s the pace and the accessibility of AI outputs that are really transforming business processes. The availability of AI, the availability of data and the capabilities AI possesses are now posing new challenges for the industry, and there are a lot of questions still to be asked and answered around data ownership, deploying AI without bias and, critically, the human impact of AI and what it means for people in the workforce.

The newly launched code of conduct represents a launch pad for engaging with the ever-evolving topic of AI and the questions it poses for workforces, workplaces and work processes alike. And what’s especially great, Sawford said, is that the code really encapsulates the air of entrepreneurialism and innovation that is inherent within AI – but also channels it in a virtuous direction.

Longworth concluded the press briefing marking the code’s launch with a clear directive for the industry: “We’re asking people to use AI in such a way as we do it once and do it the right way, on behalf of all the stakeholders in the claims and supply chain ecosystem including, and especially, policyholders and claimants.”

You can now view and sign the Code of Conduct for the Use of AI in Claims here.
