Artificial intelligence regulation: Is it unfair to insurers?

One group has "expressed their disappointment"

By Bethan Moorcraft

The Council of the European Union (EU) has officially adopted its common position on the Artificial Intelligence (AI) Act, which is the first law on AI by a major regulator anywhere in the world.

The regulation’s scope encompasses all sectors (except for military) and aims to introduce a common regulatory and legal framework for AI, ensuring that all AI systems are safe and respect existing law on fundamental rights and values.

The AI Act follows a risk-based approach across industries, both public and private, assigning AI applications to one of three risk categories:

  • Unacceptable risk: The Act bans actors from using AI for social scoring, and prohibits the use of AI systems that exploit people who are vulnerable due to their social or economic situation.
  • High-risk applications: This tier includes systems that affect employment, credit, and health care, among others. High-risk AI applications are subject to specific legal requirements around the use of data, record-keeping, and the allocation of responsibilities and roles of actors in the value chains.
  • All others: Any AI applications not specifically banned or deemed high-risk are largely left unregulated.

Personally, I think AI regulation and governance are very important. We’ve all seen the sci-fi movies where artificially intelligent robots (sorry, beings) take over the world and attempt to bring about the end of humanity as we know it, until some bruised and battered hero saves the day. While that worst-case scenario is meant only for our screens, there are some real use cases for AI that are genuinely quite scary.

Think about deepfakes, for example, where AI is used to forge an image, video, or audio recording with such precision that the average human is unlikely to detect any manipulation. Created to mislead and deceive, deepfakes can be a dangerous tool if used maliciously against businesses and individuals – and as things stand, it remains unclear how insurance policies would respond to losses caused by deepfakes.

How does the AI Act classify the insurance sector?

Much to the concern of European insurers, the Council of the EU has included the industry in the AI Act’s high-risk list.

Specifically, algorithms used for the risk assessment and pricing of health and life insurance are considered high-risk and must meet more stringent regulatory requirements. Other common AI applications in the industry – such as using AI to improve customer service, increase efficiency, provide greater insight into customers’ needs, and improve fraud detection – are covered by sectoral legislation.

William Vidonja, head of conduct and business at Insurance Europe, said: “Europe’s insurers wish to express their disappointment with the Council’s decision. With the exception of a restricted use case related to safety components in digital infrastructure, insurance is the only sector to be included by the Council in the high-risk list of Annex III without any proper analysis or impact assessment being conducted. This decision goes against the objectives of the EU’s better regulation agenda and does not promote evidence-based EU policymaking.”

Vidonja made the point that insurers are already “subject to a robust EU regulatory framework in terms of both prudential and conduct rules” in addition to other national frameworks and EU legal requirements, like the General Data Protection Regulation (GDPR).

“An impact assessment should have been carried out to assess whether the existing regulatory and supervisory framework already appropriately addresses the potential risks resulting from the use of AI in insurance. This would have avoided inconsistencies and duplication of rules that would only hinder innovation without bringing any benefits to consumers.”

The Council of the EU is adamant the AI Act “promotes investment and innovation in AI, enhances governance and effective enforcement of existing law on fundamental rights and safety, and facilitates the development of a single market for AI applications.”

But Insurance Europe seems to be arguing that over-regulation could impede innovation in the sector – and I’m inclined to agree. Around the world, insurance is one of the most highly regulated and scrutinised industries, with rules that, much like the AI Act, aim to protect fundamental human rights and uphold prudential standards.

The industry is already using AI applications for the benefit of businesses and consumers, and as far as I can tell, all in compliance with the current (extremely complex) regulatory frameworks. Even with new AI applications popping up all the time, I believe it would be hard for the insurance industry to fail to comply with the AI Act, given the stringent rules it is already subject to.

Why, then, is there a need to shackle the industry with yet more regulation? Even if the AI Act’s demands are easy to meet, insurers may still have to collect and provide proof of compliance, which is burdensome and could slow the pace of AI innovation.

Will other countries copy the EU’s AI Act?

The EU’s AI Act could serve as a template for other countries looking to regulate technologies more effectively, much like the GDPR influenced data protection laws in other jurisdictions, such as the California Consumer Privacy Act (CCPA).

In the past five years, many countries have adopted some form of AI regulation, but the EU’s AI Act is the first multi-jurisdictional regulation of its kind. Fair and ubiquitous AI governance is important in today’s increasingly global supply chain – but achieving that through meaningful international cooperation is easier said than done.

I think Alex C. Engler, a Fellow in Governance Studies at The Brookings Institution, put it nicely when he wrote: “An ideal outcome would be the implementation of meaningful governmental oversight of AI, while also enabling these global AI supply chains.” Is the EU’s AI Act meaningful enough? That remains to be seen – but it’s certainly a good start and an important step in the oversight of AI innovation in the future.

Is regulation helping or hindering AI innovation in insurance? Let us know in the comments below.
