Gallagher: AI goes mainstream, but insurers face skills, risk and coverage gaps

Many insurers are still stuck in "pilot purgatory"

Transformation

By Josh Recamara

Most large companies have now pushed artificial intelligence beyond pilot phase but are still grappling with skills, risk governance and proving return on investment, according to Gallagher's third annual AI Adoption and Risk Survey.

According to the broker, 63% of respondents have fully operationalized AI or implemented it in parts of their business, up from 45% in 2025, with the heaviest use in IT operations, client-facing functions and analytics.

The report also said 82% of firms already see positive impacts from AI and 83% expect it to drive revenue growth. Meanwhile, Gallagher noted that 93% of respondents rate their understanding of AI risks as “quite well” or “very well”.

At the same time, just under two-thirds (63%) of respondents are formally measuring AI return on investment, and those that do expect payback to take an average of 28 months. That long payback window mirrors what is emerging in insurance: industry studies suggest AI adoption is now nearly universal, but only about 7% of insurers have achieved true enterprise-wide deployment with consistent, measurable ROI.

Confidence high, frameworks less so

Despite the high self-reported confidence on risk, Gallagher’s findings highlighted several persistent concerns. AI errors, misinformation and “hallucinations” top the list of perceived threats (57%), followed by legal and reputational risk from AI misuse (56%) and data protection and privacy violations (55%).

Almost half of businesses (46%) said they have appointed an AI ethics officer, signaling an attempt to formalize oversight. However, wider research suggests robust risk frameworks are still the exception rather than the rule. A UK parliamentary Treasury committee recently warned that financial regulators were taking a “wait-and-see” approach to AI risks in finance and urged AI-focused stress tests and clearer guidance on accountability for harm.

For insurers, that regulatory direction matters twice over: they are heavily regulated users of AI in pricing, claims and distribution, and they carry the liability when AI-driven decisions go wrong in other sectors.

Heavy AI use, but “pilot purgatory” persists

External data also underlines how far AI has penetrated insurance. By 2025, around 90% of insurers had begun evaluating or implementing AI, with particularly high adoption in fraud detection and claims; in the US, roughly three-quarters of carriers reported using AI in claims and underwriting. Analysts estimate that, by 2026, about 80% of insurers will be deploying AI in at least one core function, with AI helping to automate 50 to 60% of claims and cut processing costs by 25 to 40%.

Yet only a small minority of carriers have scaled beyond pilots. Industry commentators describe a “pilot purgatory” in which dozens of proofs of concept run in parallel but few are embedded in core workflows, a pattern that Gallagher’s 28-month average ROI horizon tends to confirm.

Managing general agents (MGAs) are following a similar trajectory. Gallagher Bassett’s MGA Market Pulse found that while 61.3% of MGA and program administrator respondents use AI, only 35.5% are actively budgeting for AI tools, exposing a gap between experimentation and sustained investment. 

Insurers grow wary of open-ended AI exposures

While corporates and carriers are betting on AI to improve performance, some major insurers are starting to pull back from broad AI liability on the coverage side. In late 2025, several large underwriters, including AIG and W. R. Berkley, began introducing exclusions for AI-related risks in corporate policies, citing the potential for unpredictable, multibillion-dollar losses from systemic failures, deepfake-enabled fraud and high-profile AI misfires.

AI-related incidents often sit awkwardly between traditional cyber, tech E&O and general liability wordings. Brokers and coverage attorneys have warned that without clearer product design, disputes over which policy responds, and whether exclusions apply, are likely to increase after a large-scale AI event.

Human element and talent remain critical

On the workforce side, more than half of respondents report skills gaps and recruitment challenges around AI. That is consistent with sector-specific research showing that data science, model-risk and digital product skills are in short supply across insurance markets.

The broker stressed the importance of keeping a strong human element as AI spreads, emphasizing governance, training and personal accountability. Gallagher’s global chief digital officer Steve Rhee said the firm’s own AI journey has focused on investing in data, analytics and “digital workforce skill development” to keep tools aligned with client needs, not just technology for its own sake.
