Brokerage, law firm reveal how AI is reshaping insurance and risk in Australia

Key risk management and insurance strategies outlined

By Roxanne Libatique

The increasing use of artificial intelligence (AI) across industries is creating both opportunities and risks for businesses, prompting insurers to assess how coverage can address AI-related liabilities.

Legal firm Herbert Smith Freehills and brokerage Lockton Australia have shared insights on the evolving insurance landscape as AI adoption accelerates.

AI’s business impact and risk exposure 

Herbert Smith Freehills highlighted that AI is being integrated into industries such as financial services, healthcare, education, and manufacturing to enhance efficiency and automate processes. While the technology can improve productivity and streamline operations, its rapid adoption introduces new risks that require careful evaluation.

As with previous technological shifts, insurance will play a significant role in mitigating financial exposures. Businesses implementing AI must consider how liability may arise and whether their existing insurance policies provide adequate protection.

Insurance coverage for AI-related risks

The insurance market is responding to AI-related risks through both traditional policies and emerging AI-specific coverage options.

Herbert Smith Freehills spotlighted several insurers that have introduced products designed to address AI-related losses:

  • Munich Re has developed a policy that covers losses when an AI model fails to perform as expected. For example, if a financial institution uses AI for property valuations and the model produces inaccurate results, the policy may respond.
  • Armilla Insurance, an emerging provider, offers warranties ensuring that AI models perform as intended by developers.
  • Coalition, a cyber insurance provider, recently added an AI endorsement to its cyber policies, broadening coverage for AI-driven incidents.

The market for affirmative AI insurance remains in its early stages but is expected to expand as companies seek protection against AI-related failures.

Traditional policies and AI-related exposures

The law firm also noted that AI risks may already be covered under existing policies, even when AI is not explicitly addressed, a point insurers and brokers can raise with clients when reviewing their cover options:

  • Professional indemnity (PI) insurance may cover liabilities related to AI-driven services, including regulatory actions or customer claims.
  • Directors and officers (D&O) insurance could apply if executives face regulatory scrutiny over AI governance issues.
  • Product liability insurance is relevant if an AI-powered product malfunctions and causes consumer harm.
  • Cyber insurance addresses data breaches, security incidents, and potential ransomware attacks linked to AI.
  • Employment practices liability insurance may cover claims related to AI-driven decisions that result in workplace discrimination or unfair treatment.
  • Property damage and business interruption insurance could respond to losses if AI contributes to property damage or operational disruptions.

Challenges in AI insurance claims

Herbert Smith Freehills warned, however, that several factors could complicate insurance claims as AI technology evolves:

  • Determining liability: AI decision-making raises questions about accountability. When AI causes harm, it may be unclear whether liability falls on the company, the AI provider, or another party. This ambiguity can affect policy coverage.
  • Potential AI exclusions: While most policies do not yet include AI-specific exclusions, insurers may introduce them as risks become more defined.

Regulatory considerations for AI adoption

Lockton Australia emphasised that regulatory scrutiny of AI is increasing, particularly regarding data privacy and consumer protection.

The Privacy and Other Legislation Amendment Bill 2024 strengthens personal data protections and introduces significant penalties for breaches. Under the new framework, businesses could face fines of up to $50 million or 30% of annual revenue for serious privacy violations.

Companies using AI for automated decision-making must ensure transparency in how personal data is processed. Failure to comply could lead to regulatory action, reputational damage, and financial penalties.

AI risks in professional services

Lockton also addressed risks associated with AI-generated misinformation. AI models rely on training data, which may contain biases or inaccuracies. Professional services firms using AI-generated content must ensure its accuracy to avoid reputational harm.

Risk management and insurance considerations

The brokerage advised businesses adopting AI to assess their exposure and take proactive steps to mitigate risk. Key considerations include:

  • AI security – companies should evaluate how AI systems are protected against cyber threats and malicious actors
  • data protection – implementing encryption and segmenting systems can help reduce data breach risks
  • insurance coverage review – businesses should engage with insurers to ensure AI-related risks are covered under their policies
  • human oversight – AI-generated decisions should be reviewed by professionals to validate accuracy and reliability

As AI adoption continues to grow, insurers and businesses alike will need to navigate the evolving risk landscape.
