The Australian Prudential Regulation Authority (APRA) is urging insurers, banks, and superannuation trustees to strengthen how they oversee and control artificial intelligence, pointing to a growing gap between the speed of AI deployment and existing risk management practices. In a letter to APRA‑regulated entities published on April 30, 2026, the regulator said current approaches to governance, risk management, assurance, and operational resilience are not keeping pace with the “scale, speed, and complexity” of AI adoption across the financial system.
The letter is based on a targeted supervisory review undertaken in late 2025 across insurance, banking, and superannuation. The review looked at how entities are using AI, how models are governed and monitored, and how AI‑enabled processes intersect with established risk and control frameworks. APRA found that the expanded use of advanced AI is introducing additional financial and operational vulnerabilities, while many entities’ information security capabilities are not developing at the same rate. The authority also drew attention to the potential impact of “frontier” models, including Anthropic’s Claude Mythos, which it said could help malicious actors identify and exploit weaknesses in systems, increasing the likelihood and speed of cyberattacks.
APRA member Therese McCarthy Hockey said entities need to adjust their cyber and operational practices on an ongoing basis as AI capabilities advance. “The AI revolution presents tremendous opportunities for banks, insurers, and superannuation trustees to deliver improved efficiency and enhanced customer services. We are already beginning to see these benefits materialise. But we cannot be blind to the risks of such powerful technology – whether in our own hands or the hands of those with malign intent,” McCarthy Hockey said.
According to APRA, AI use is moving quickly from pilots into business‑critical and customer‑facing applications, but supporting governance arrangements have not advanced at the same rate. Boards are showing interest in AI’s potential uses, yet many do not have the technical literacy needed to robustly challenge management on issues such as model risk, data quality, and AI‑specific controls. The review also identified concentration risk, with some institutions relying on a single AI or cloud provider across multiple use cases and lacking detailed contingency plans. Where AI components are embedded within broader software platforms or developer tools, entities often have limited visibility over how models are trained, updated, or constrained, which makes it harder to understand and manage the associated risks.
APRA noted that AI‑related risks span operational resilience, cyber and information security, privacy, data management, and third‑party arrangements. Existing change and assurance processes, which were designed for more traditional technology implementations, are often fragmented and may not provide a comprehensive view of AI‑enabled activities. “What we’ve observed from our supervisory engagement is that while AI adoption is continuing apace, the systems and processes required to safely govern its use aren’t keeping up. Likewise, the speed at which entities can identify and patch vulnerabilities needs to operate much faster, commensurate with the AI‑accelerated threat,” McCarthy Hockey said.
McCarthy Hockey said APRA is not introducing new, AI‑specific prudential standards at this time. Instead, the regulator is emphasising that AI should be managed within existing requirements for information security, operational risk management, governance, and data risk. “While we are not proposing to introduce additional requirements at this stage, we expect to see a significant improvement in how entities are closing the gaps between the power of the technology they are using and their ability to monitor and control it,” she said. She added that APRA will continue working with government agencies and peer regulators in Australia and overseas “to assess the implications of these technological advancements to ensure the ongoing safety and resilience of the financial system.”
APRA’s communication comes as insurers internationally step up AI investment across underwriting, pricing, claims, and distribution. Over the past 18 months, one large UK insurer has reduced the time taken to resolve complex liability claims by 23 days using AI‑enabled workflows. A major German carrier has built and implemented a multi‑agent AI claims system in less than 100 days. In the US, an insurtech has automated 55% of claims from end to end, with some settled in seconds, while Nationwide has announced a US$1.5 billion technology program, with about 20% of the funding allocated to AI.
Industry estimates suggest the AI in insurance market was valued at about US$8.63 billion in 2025 and could reach US$59.5 billion by 2033, implying a compound annual growth rate above 27%. Sector‑wide spending on AI is forecast to grow by more than 25% in 2026, with survey data indicating that 86% of insurers intend to increase AI budgets this year, particularly for generative and agent‑based applications. Survey work by Grant Thornton involving 950 executives shows that a little over half of insurance leaders report AI‑related revenue growth, while around two‑thirds say AI is influencing business decision‑making. Separate analysis suggests early AI adopters in insurance are generating significantly higher total shareholder returns than peers that have moved more slowly on the technology.
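The growth rate implied by those market estimates can be verified with a quick back‑of‑the‑envelope calculation. The sketch below uses only the figures cited above (US$8.63 billion in 2025, US$59.5 billion by 2033) and simply checks the arithmetic behind the "above 27%" compound annual growth rate:

```python
# Sanity-check the implied CAGR of the AI-in-insurance market estimates
# cited above: US$8.63bn in 2025 growing to US$59.5bn by 2033.
start_value = 8.63   # US$ billions, 2025 estimate
end_value = 59.5     # US$ billions, 2033 projection
years = 2033 - 2025  # 8-year horizon

# Compound annual growth rate: (end / start)^(1/years) - 1
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~27.3%, consistent with "above 27%"
```

The result lands just above 27% a year, so the quoted valuation, horizon, and growth rate are internally consistent.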
Within Australia, small and medium‑sized enterprises (SMEs) in finance and insurance report comparatively high rates of AI use, according to National Australia Bank’s (NAB) Embracing AI SME Business Insights report. Across all SME sectors, 42% say they currently use AI tools, 44% report no use, and 14% plan to adopt AI – a near‑even split between users and non‑users, with a smaller segment preparing for near‑term implementation. Industries with intensive data use and more developed digital systems record the highest uptake.
Property services has an AI adoption rate of 69%, followed by finance and insurance at 64% and business services at 61%, all above the SME average. Among finance and insurance SMEs that use AI, 65% report applying the technology in operations and logistics. The report also notes that 29% of finance and insurance SMEs identify cost reduction and efficiency as their primary AI opportunity, the largest proportion of any sector. Respondents link this to regulatory obligations, manual processes, and administrative workloads that may be partially automated or supported by AI‑based tools.
APRA’s letter signals that AI use will be viewed through a prudential lens that includes board capability, model risk governance, cyber resilience, and third‑party risk management. As AI becomes more integrated into core activities such as underwriting, pricing, and claims, and as consumer expectations and global competitive pressures evolve, local market participants are likely to face parallel pressures. They will need to realise efficiency and service gains from AI while demonstrating to APRA that their control frameworks, assurance activities, and incident‑response arrangements are keeping pace with the technology’s growing role in their businesses.