What constitutes an AI risk – and how should the C-suite manage it?

"Potential can be harnessed" with the right moves

Risk Management News

By Kenneth Araullo

As artificial intelligence (AI) becomes increasingly integrated into corporate operations, it introduces a complex array of risks that require meticulous management. These risks range from potential regulatory infractions and cybersecurity vulnerabilities to ethical dilemmas and privacy concerns.

Given the significant consequences of mismanaging AI, it is essential for directors and officers to establish comprehensive risk management strategies to mitigate these threats effectively.

Edward Vaughan, a management liability associate at Lockton, has emphasized the intricate challenges and responsibilities that come with integrating AI into business operations, particularly the potential liabilities for directors and officers.

“To be prepared for the potential regulatory scrutiny or claims activity that comes with the introduction of a new technology, it is imperative that boards carefully consider the introduction of AI, and ensure sufficient risk mitigation measures are in place,” Vaughan said.

AI significantly enhances productivity, streamlines operations, and fosters innovation across various sectors. However, Vaughan notes that these advantages are accompanied by substantial risks such as potential harm to customers, financial losses, and increased regulatory scrutiny.

“Companies’ disclosure of their AI usage is another potential source of exposure. Amid surging investor interest in AI, companies and their boards may be tempted to overstate the extent of their AI capabilities and investments. This practice, known as ‘AI washing’, recently led one plaintiff to file a securities class-action lawsuit in the US against an AI-enabled software platform company, arguing that investors had been misled,” he said.

Furthermore, the regulatory landscape is evolving, as seen with legislation like the EU AI Act, which demands greater transparency in how companies deploy AI.

“Just as disclosures may overstate AI capabilities, companies may also understate their exposure to AI-related disruption or fail to disclose that their competitors are adopting AI tools more rapidly and effectively. Cybersecurity risks or flawed algorithms leading to reputational harm, competitive harm or legal liability are all potential consequences of poorly implemented AI,” Vaughan said.

Who is responsible for these risks?

For directors and officers, these evolving challenges underscore the importance of overseeing AI integration and understanding the risks involved. Responsibilities extend across various domains, including ensuring legal and regulatory compliance to prevent AI from causing competitive or reputational harm.

“Allegations of poor AI governance procedures or claims for AI technology failure as well as misrepresentation may be alleged against directors and officers in the form of a breach of the directors’ duties. Such claims could damage a company’s reputation and result in a D&O class action,” he said.

Additionally, protecting AI systems from cyber threats and ensuring data privacy are critical concerns, given the vulnerabilities associated with digital technologies. Vaughan notes that transparent communication with investors about AI's role and impact is also crucial to managing expectations and avoiding misrepresentations that could lead to legal challenges.

Directors might face negligence claims from AI-related failures, such as discrimination or privacy breaches, leading to substantial legal and financial repercussions. Misrepresentation claims could also arise if AI-generated reports or disclosures contain inaccuracies.

Furthermore, directors must ensure that appropriate insurance coverage is in place to address potential AI-induced losses, a point highlighted by insurers such as Allianz Commercial, which has specifically warned about AI's implications for cybersecurity, regulatory risk, and misinformation management.

Risk management for AI-related risks

To effectively manage these risks, Vaughan suggests that boards implement comprehensive decision-making protocols for evaluating and adopting new technologies.

“Boards, in consultation with in-house and outside counsel, may consider setting up an AI ethics committee to consult on the implementation and management of AI tools. This committee may also be able to help monitor emerging policies and legislation in respect of AI. If a business doesn’t have the internal expertise to develop, use, and maintain AI, this may be actioned via a third-party,” he said.

Ensuring that employees are well trained and equipped to manage AI tools responsibly is crucial for maintaining operational integrity. An AI ethics committee can offer valuable guidance on the ethical use of AI, monitor legislative developments, and address concerns related to AI bias and intellectual property.

In conclusion, Vaughan said that while AI offers significant opportunities for growth and innovation, it also necessitates a diligent approach to governance and risk management.

“As AI continues to evolve, it is essential for companies and their boards of directors to have a strong grasp of the risks attached to this technology. With the appropriate action taken, AI’s exciting potential can be harnessed, and risk can be minimized,” Vaughan said.
