How can brokers ensure responsible AI usage in insurance operations?

How brokers can lead the charge in ensuring fair, secure, and transparent AI adoption


Artificial intelligence is transforming workflows across industries — and the insurance sector is no exception. According to a Statista report, AI usage in global insurance “surged to 48% in 2025, up significantly from 29% in 2024.” However, its rapid growth continues to spark concern. A Kenny’s Law 2025 report highlighted that “AI emerges as the top-ranked risk for the year ahead, displacing all other major concerns.”

According to Rajeev Gupta, co-founder and CPO at Cowbell: “AI brings a lot of advantages (specifically speed and efficiency) to the underwriting and claims process. But it can also bring risks like biases, lack of explainability, over-automation, and data privacy concerns.”

Emerging AI risks explained

AI-related risks often begin during model training. Gupta explained: “It can often be that biases develop during the training of an AI model, leading to discriminatory outcomes in underwriting or claims, so that’s something to be aware of.”

He added that AI can “hallucinate”, meaning it may present incorrect or nonsensical information as facts. Gupta also addressed reliability: “AI doesn’t always give consistent answers when asked the same questions, or the same output to the same input.”

On explainability, he said: “AI may also make a decision without a clear explanation, making it difficult to understand the rationale behind a non-payout, and heavily damage the trust of brokers and policyholders. This also increases the risk of lawsuits and reputational damage.” A recent legal case reflects this concern: Cigna faced a lawsuit over its AI algorithm PxDx, which allegedly denied over 300,000 claims, with physicians spending an average of just 1.2 seconds reviewing each one.

Unchecked automation can also rapidly scale flawed decisions, and the data AI relies on creates its own exposure. Gupta warned: “It often works with sensitive data, such as proprietary company data or employee information, which can then create an attack surface for data breaches or misuses.”

Brokers and insurers can support risk mitigation by:

  • Spotting and reporting biased decisions to push for fairer AI models
  • Reviewing AI-generated content to catch errors or misleading information
  • Flagging inconsistent outcomes to help improve model reliability
  • Demanding clear reasoning behind AI decisions, especially for denials
  • Escalating client issues early to reduce the risk of legal or PR fallout
  • Identifying patterns of faulty decisions before they spread widely
  • Ensuring clients understand how their data is used and promoting secure handling

Developing risk-aware AI systems

Gupta emphasised that “avoiding the risks mentioned above should be of the highest priority.” To do this, he said, “you need to begin the project by putting guardrails in place, carefully understanding which team is responsible for which deliverables (building, testing, reviewing, approving).”

He also stressed regular testing: “Once the model is built, it needs to be tested regularly for accuracy and biases and set to give alerts that catch inconsistencies or illogical output, like a sudden spike in rejections or unexpected patterns in predictions, so issues can be caught early.”

On transparency, Gupta said: “It is also important to create transparency by building a widely accessible dashboard that shows how your models are performing.”
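Gupta’s example of an alert that catches “a sudden spike in rejections” can be sketched as a simple rolling-window monitor. The window size, baseline rate, and spike factor below are illustrative assumptions, not values from any real underwriting system:

```python
from collections import deque


class RejectionSpikeMonitor:
    """Minimal sketch: flag a sudden spike in AI claim rejections.

    All thresholds are hypothetical defaults chosen for illustration.
    """

    def __init__(self, window: int = 100, baseline_rate: float = 0.10,
                 spike_factor: float = 2.0):
        self.decisions = deque(maxlen=window)  # rolling window of recent outcomes
        self.baseline_rate = baseline_rate     # expected long-run rejection rate
        self.spike_factor = spike_factor       # alert when rate exceeds factor * baseline

    def record(self, rejected: bool) -> bool:
        """Record one decision; return True if an alert should fire."""
        self.decisions.append(rejected)
        if len(self.decisions) < self.decisions.maxlen:
            return False  # not enough data to judge yet
        rate = sum(self.decisions) / len(self.decisions)
        return rate > self.spike_factor * self.baseline_rate
```

In practice such a check would feed the “widely accessible dashboard” Gupta describes, so a rejection rate drifting above its baseline is visible before it becomes a pattern of harm.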

He added that every AI-assisted underwriting decision should be traceable: “Any AI-assisted underwriting decision is logged, along with the model version used, data inputs, output scores, and any final action taken. These records will ultimately create a trail that can then be reviewed by a compliance team or regulators.”
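The traceable record Gupta describes (model version, data inputs, output scores, and final action) can be sketched as a structured log entry. The field names and hashing step here are hypothetical, intended only to show one way such a reviewable trail might be shaped:

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class UnderwritingAuditRecord:
    """Hypothetical shape of a per-decision audit entry, mirroring the
    elements Gupta lists: model version, inputs, score, final action."""
    model_version: str
    inputs: dict
    output_score: float
    final_action: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_log_line(self) -> str:
        """Serialise deterministically and fingerprint the inputs so a
        compliance reviewer can check the record was not altered later."""
        payload = asdict(self)
        payload["input_hash"] = hashlib.sha256(
            json.dumps(self.inputs, sort_keys=True).encode()
        ).hexdigest()
        return json.dumps(payload, sort_keys=True)
```

Appending each line to an append-only store would give regulators the kind of reviewable trail the quote envisions.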

Barriers to AI adoption

Despite growing adoption, hesitancy around AI persists. According to Gupta: “The most common misconceptions about AI are fear-based. Some people think that you cannot trust AI under any circumstances, as it is probabilistic and missing the ‘human touch’.”

Another common fear concerns job displacement. Gupta said: “There are also people who think that AI will completely replace humans, stealing jobs and taking over whole industries.” However, he argued these fears should not put users off: “In my opinion, AI should be used as an assistant and partner ... When used that way and combined with responsible human oversight, it can help make smarter, fairer, and more accountable decisions.”

Brokers and insurers can help to build confidence and fairness in AI decisions by:

  • Advocating for transparency in AI decisions
  • Monitoring for consistency and fairness
  • Educating clients on AI-driven processes
  • Raising concerns early
  • Requesting human oversight in critical cases
  • Ensuring data privacy and security awareness
  • Pushing for fair and ethical AI development
  • Staying informed and up-to-date
  • Supporting regulatory compliance
  • Advocating for client appeals and review mechanisms

Reassuring brokers about AI’s role

As for what AI will mean for the future of the insurance broker, Karli Kalpala, head of regions - UK & Ireland, and head of strategy transformation at Digital Workforce, offers an optimistic take: “AI doesn’t replace the broker—it elevates their capacity.”

Kalpala explained: “These tools do not replace the broker's judgment; instead, they augment it, offloading repetitive work while enabling faster, smarter interactions with carriers and clients.” Kalpala also envisioned a strategic future for brokers: “Brokers can evolve their roles - learning how to supervise, tune, and collaborate with AI tools. This positions brokers not only as risk experts but also as digital orchestrators.”
