This article was produced in partnership with Hiscox
Hiscox has met and tackled every technology inflection point since it first underwrote a technology specialist policy in 1994. When the Millennium Bug loomed, its underwriters built wording for a threat the world had never seen. As cyber risk moved from a fringe concern to board-level nightmare, Hiscox adapted again. The insurer introduced clearer policy language that gives software developers the confidence that their evolving exposures are understood and insurable.
“We are always horizon-scanning,” recalls Adam Atkins, now Head of Technology at Hiscox UK. “Our job is to spot the next trend that a tech business needs protection for and iterate the wording before the risk arrives.”
That forward-thinking approach is being tested once more as artificial intelligence (AI) becomes the tool of choice across the UK’s digital economy. A Moneypenny survey shows that nearly 70% of UK businesses are either already using AI or actively planning to. With 39% currently deploying AI tools and another 31% seriously considering adoption, the technology is moving quickly from experimental to essential.
The UK government’s 2025 action plan puts it plainly: Britain aims to lead the world in AI investment, innovation and deployment. The Prime Minister’s ambition contrasts with Europe’s more guarded posture, but it mirrors what Atkins hears every day from clients as businesses’ ambition meets contractual reality and liability questions multiply.
Businesses are already relying on AI in ways that go well beyond automation. From predicting signal changes on a live train and monitoring jet engine temperatures mid-flight to processing pharmaceutical data that informs a cancer diagnosis, the stakes are anything but low.
“When something goes wrong, it’s not just a nuisance,” Atkins adds. “It can seriously affect their ability to deliver on contractual obligations.”
Ultimately, “AI is only as good as the human who builds or uses it,” Atkins says. “There’s always a person shepherding it along — human oversight is still essential. You can’t shift the blame to the model, and we’re not at the point where you can sue a computer. The liability sits with you.”
Some threats, such as unpredictable AI behaviour or flawed outputs, feel novel, while others resemble the service failures insurers have priced for decades. The problem is pace: companies are racing ahead with AI, whereas their risk-transfer programmes often lag behind. This mismatch creates “silent AI” coverage: uncertainty over whether standard wording would respond to algorithmic errors, just as early cyber incidents once slipped through policy cracks.
Hiscox’s answer to AI’s sharpening risk profile is a rewrite of its Technology Professional Indemnity policy, unveiled in May and the first in the UK to grant clear, affirmative cover for AI-related claims. The document also introduces four wider clauses for network security, network interruption, personal-data loss and injury or property damage, keeping non-AI losses inside the same framework. Exclusions for insufficient resources and vulnerabilities such as Log4j disappear, while the old £250,000 cap on personal-data claims is lifted to track the policy limit.
The wording update spells out protection for instances such as when a client relies on an algorithm’s output to deliver a contract and the model fails; when faulty data fed into an AI tool produces defective software; or when a consultancy’s advice on AI suitability proves negligent. By naming the scenarios in the insuring clause, the policy removes the guesswork that often surrounds emerging tech risks.
Purpose-written AI liability insurance aims to close those gaps with explicit terms that address faulty outputs and downstream third-party harm, making risk transfer as modern as the code it protects.
Atkins calls it a reality check: “People will realise AI is here to stay, but it has always been here. We are looking at the next evolution of software, not a revolution. Insurance needs to evolve in the same measured way.”
For brokers, this update isn’t just about broader coverage, it’s about sharper tools. Behind the legal crafting of the policy sits a commercial objective: make complicated risks easy to place. Hiscox wrote the wording so that brokers do not need a data-science degree to explain coverage. That clarity applies at the claims stage too: if a consulting firm plugs an AI optimiser into a logistics chain and model drift blocks half the country’s supermarket deliveries, the policy responds without creative interpretation.
The insurer’s calculation is straightforward: if AI really is the next evolution of software, liability inflation is inevitable. If a start-up feeds flawed medical images into an oncology algorithm and hospitals sue for missed diagnoses, defence costs start on day one; if a broker advises a mid-sized manufacturer to license a generative-AI tool that later produces unusable designs, there’s clear advice exposure.
But both of those situations fall squarely within the new wording, because it’s built on the fundamental understanding that the duty of care remains with the user.
“Our wording recognises that and puts the safety net in black and white,” Atkins notes.
Time will tell how quickly customers embrace the cover, but early signals point to latent demand: venture-backed software firms are already negotiating warranty caps tied to AI performance, and regulated utilities face board pressure to prove resilience when automated systems fail. Liability is already a growing concern.
For now, Atkins is satisfied that Hiscox has drawn that map. The autonomous robots can wait. The real task is protecting the humans who write, sell and depend on the code that is already here.