The increasing use of artificial intelligence (AI) across industries is creating both opportunities and risks for businesses, prompting insurers to assess how coverage can address AI-related liabilities.
Law firm Herbert Smith Freehills and brokerage Lockton Australia have shared insights on the evolving insurance landscape as AI adoption accelerates.
Herbert Smith Freehills highlighted that AI is being integrated into industries such as financial services, healthcare, education, and manufacturing to enhance efficiency and automate processes. While the technology can improve productivity and streamline operations, its rapid adoption introduces new risks that require careful evaluation.
As with previous technological shifts, insurance will play a significant role in mitigating financial exposures. Businesses implementing AI must consider how liability may arise and whether their existing insurance policies provide adequate protection.
The insurance market is responding to AI-related risks through both traditional policies and emerging AI-specific coverage options.
Herbert Smith Freehills spotlighted several insurers that have introduced products designed to address AI-related losses.
The market for affirmative AI insurance remains in its early stages but is expected to expand as companies seek protection against AI-related failures.
The law firm also noted that AI risks may be covered under existing policies, even when they are not explicitly addressed, a point insurers and brokers can raise with clients to remind them of their options.
However, Herbert Smith Freehills warned that several factors could complicate insurance claims as AI technology evolves.
Lockton Australia emphasised that regulatory scrutiny of AI is increasing, particularly regarding data privacy and consumer protection.
The Privacy and Other Legislation Amendment Bill 2024 strengthens personal data protections and introduces significant penalties for breaches. Under the new framework, businesses could face fines of up to $50 million or 30% of annual revenue for serious privacy violations.
Companies using AI for automated decision-making must ensure transparency in how personal data is processed. Failure to comply could lead to regulatory action, reputational damage, and financial penalties.
Lockton also addressed risks associated with AI-generated misinformation. AI models rely on training data, which may contain biases or inaccuracies. Professional services firms using AI-generated content must ensure its accuracy to avoid reputational harm.
The brokerage advised businesses adopting AI to assess their exposure and take proactive steps to mitigate risk.
As AI adoption continues to grow, insurers and businesses alike will need to navigate the evolving risk landscape.