AI-powered smear campaigns pose mounting risk to corporate reputation

AI-fueled smear campaigns are hitting corporate reputations harder – and faster – than ever before

By Chris Davis

In the digital age, reputational sabotage has found an ally in artificial intelligence – and insurance professionals are increasingly on the front lines of the fallout.

A growing number of companies are confronting coordinated disinformation attacks, often orchestrated by ex-employees, corporate rivals, or activist groups. These campaigns – once limited in reach and complexity – are now cheaper, faster, and more damaging thanks to generative AI, according to London-based law firm Schillings, which specializes in crisis response and reputation management.

Speaking to City AM, Schillings partner Juliet Young reported a striking 150% rise in smear campaigns over the past three years targeting high-performing firms and executives. “The strategies and technology needed to launch these attacks are now accessible globally, across a wide range of actors and budgets,” said Young. “We’ve encountered instances where the cost of initiating a campaign is less than £50.”

Weaponizing disinformation for leverage

Far from random acts of online hostility, these smear efforts are increasingly strategic – sometimes used as leverage during legal disputes. “It’s not uncommon for clients to face a campaign intended to pressure them into a settlement, under threat of enduring reputational damage,” Young said.

The attacks often deploy a multi-pronged approach: AI-generated content disseminated through phony news outlets, bot-driven amplification on social media, and deepfaked visuals designed to manipulate public perception. Some even insert compliance-triggering terms into online narratives to flag regulators or financial institutions.

“Left unchallenged, these efforts can alter search engine results, dent investor confidence, and expose businesses to regulatory scrutiny,” Young said.

Smears go high-tech

Deepfake technology is playing an outsized role. According to Young, some of the more sophisticated smear efforts involve fabricated screenshots of headlines designed to gain traction among unsuspecting audiences. Others are seeded with compliance “red flags” that trigger alerts within due diligence databases – creating obstacles to financing or M&A transactions.

The evolving nature of AI-driven disinformation has made mitigation not only more urgent but significantly more complex. “The anonymity of the actors and the scale of these attacks make them difficult to trace and counter,” Young said. She emphasized that an effective response typically requires a coordinated team – including investigators, legal counsel, and crisis communications specialists – who can dismantle the disinformation and reset the narrative.

In some cases, forensic efforts can uncover the digital fingerprints behind these campaigns, enabling firms to take legal action or issue takedown requests through platform providers.

Industry insight: Awareness is critical

For the insurance sector – already managing reputational risk in areas like cyber liability, D&O, and professional indemnity – this trend carries clear implications.

“AI has certainly enhanced our online experience, but it comes with its own slew of issues,” said Anamae Saavedra, vice president at RLA Insurance Intermediaries. “From bad actors using AI for financial gain to erroneous data served up to people who rely exclusively on online searches, using AI is no longer a choice. Companies need to be aware and not treat AI as the end-all, be-all, because its inaccuracies can do real damage to their reputations.”

Risk management in a post-truth era

For risk managers and insurers alike, the rise of AI-generated misinformation introduces new layers of exposure. From brand liability policies to the structure of cyber coverage, the insurance sector must grapple with the growing threat of synthetic reputational attacks – and the legal ambiguity surrounding them.

Young’s advice to clients is clear: don’t wait for the attack to start. Establish protocols, engage cross-functional crisis teams early, and monitor your digital footprint for anomalies.

“In today’s environment, speed matters,” she said. “The longer disinformation sits online unaddressed, the more damage it does.”
