US could fall behind on AI regulation, WTW Neuron CTO warns

But there's cause for optimism over regulatory efforts, he says

By Gia Snape

The United States stands to fall behind on critical artificial intelligence (AI) regulation if big tech firms continue to push back against landmark EU legislation governing the technology's use.

Alex Morris-Tarry, chief technology officer of Neuron, WTW’s digital insurance platform, said tech firms are sounding the alarm about the dangers of unfettered AI development while, at the same time, lobbying to weaken key parts of the EU AI Act’s framework.

“Some of the organizations that are involved in AI development are calling out for regulation, which is a weird irony. It's almost looking for self-regulation,” Morris-Tarry told Insurance Business at the Women in Insurance Summit in London on Thursday (November 30).

“The big tech firms, on the one hand, are saying ‘we need you to put controls in place,’ but on the other hand, are lobbying against that control.”

EU’s AI Act – where are talks now?

First proposed by the European Commission in April 2021, the EU’s AI Act would apply different levels of regulatory scrutiny to AI systems depending on their intended use.

Under the proposal, riskier use cases, such as those in the life and health insurance markets, would need stringent risk assessment and mitigation measures.

Talks to pass the legislation are in the final stages, but disagreements between EU member states remain over how foundation models should be regulated.

Foundation models, also known as “general purpose AI” or “GPAI,” can perform a range of general tasks such as text synthesis, image manipulation, and audio generation. Prime examples include OpenAI’s GPT-3.5 and GPT-4, the models that underpin ChatGPT.
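To illustrate that generality, here is a minimal sketch in which a single unmodified model handles two unrelated tasks via OpenAI's Python client. It assumes the `openai` package (v1+) is installed and an `OPENAI_API_KEY` environment variable is set; the model name and prompts are illustrative only, not drawn from the article.

```python
# Minimal sketch: one general-purpose model handling unrelated tasks.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """Send one prompt to the model and return its text reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# The same model, with no task-specific training, answers both requests.
print(ask("Summarize the EU AI Act's risk-based approach in two sentences."))
print(ask("Rewrite this claim note in plain English: 'Cov. denied per excl. 4b.'"))
```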

“The EU is the supranational organization that has the greatest opportunity to effect that change because we've seen it stand up against big tech on many issues,” Morris-Tarry said.

“Lobbying will probably result in the US being slightly further behind on regulation. But where one goes, the other typically follows because the tech firms don't want to create two sets of systems, one for one very large economic area and one for another. So, if the EU comes in and applies these rules, I think that will create benefit across the board.”

Insurance firms are also calling for broader AI regulation

Calls to regulate the development of AI, especially amid the rapid rise of generative AI tools like ChatGPT, are gaining ground in the insurance industry.

Most recently, Antonio Huertas, chairman and CEO of global insurer MAPFRE, called for legislation to address the ethical risks posed by AI, saying the technology has the potential to be one of the most disruptive in history.

Insurance technology providers have also called on the insurance industry to create its own AI code of conduct to mitigate legal and ethical risks.

Morris-Tarry said that the rapid development of AI in insurance highlights the urgency of regulation.

“The natural tendency with these new tools is to take things to the extreme and see how far they'll go. [AI] can be a dangerous tool, and if [it’s] not regulated well, there’s a potential for serious problems down the line,” he said.

“The regulations will probably be later than they should be. But hopefully, they'll be early enough to help make sure we maximize the positive impact of them versus the negative.

“I don’t think it’s too late; I just think that there's the possibility that governments move more slowly on technology [because] they always have. But I do think there's the impetus for them to enact appropriate change quickly, because these tools could be a massive boon to society.”

Do you think AI regulation should come sooner? How should organizations manage the risks around AI use? Sound off in the comments.
