Australia is giving AI agents access faster than it builds guardrails

Almost all Australian organisations are using AI for sensitive security tasks, but few are confident they can recover if something goes wrong

By Roxanne Libatique

A study of 1,100 organisations across eight countries shows Australian firms are deploying AI into sensitive security functions while lacking the controls to manage a breach – a finding that sits alongside a fresh regulatory warning from the Australian Prudential Regulation Authority (APRA).

Australian organisations trail global peers on AI identity controls

Identity security firm Semperis released its State of Identity Security in the AI Era study on May 14, 2026, surveying organisations across Australia, the US, the UK, France, Germany, Italy, Spain, and Singapore. The research focused on how AI is affecting the attack surface of identity systems – the platforms that control who and what can access an organisation’s critical infrastructure. When the Australian data is examined separately, the numbers point to a local readiness problem that runs deeper than the global average.

Eighty percent of Australian respondents expect AI to drive an increase in attacks on identity infrastructure, compared with 74% globally. Nearly all Australian organisations surveyed – 95% – already use or plan to use AI agents for tasks touching security, including password resets and VPN access. Ninety-two percent report AI tools are installed on machines that can access SSH keys and encryption credentials. The confidence gap is where the Australian figures diverge most sharply. Only 21% of local organisations said they were certain they could recover control of their systems if AI exposed administrator credentials – against 32% globally. One in 10 Australian respondents said they lacked confidence in that capability entirely.

Formal tracking of AI-generated identities – referred to in the industry as non-human identities, or NHIs – is also less common in Australia. Just 52% of local organisations said their NHIs are fully registered, authenticated, and authorised through a structured system, compared with 65% globally. Gerry Sillars, Semperis vice president for Asia-Pacific and Japan, said the figures reflect a structural gap in how local organisations are approaching AI deployment. “The data reveals that Australian organisations are lagging behind their international peers when it comes to governing AI-related identities. Locally, organisations are racing to introduce AI identities, despite lacking the visibility and controls needed to securely manage them at scale. It is clear that AI is changing the identity threat landscape faster than Australian organisations can adapt,” Sillars said.

Help desk automation places AI close to critical systems

One of the study’s more direct implications for the insurance sector relates to how AI is being used in day-to-day security operations. Twenty-four percent of Australian organisations are already deploying AI agents to handle security-related help desk functions, including password resets and VPN requests. A further 69% plan to follow within 12 months. These functions sit at the identity layer of an organisation’s infrastructure – the point that attackers commonly target to move laterally through systems or escalate privileges. The study found most organisations undertaking this kind of deployment have limited confidence in their ability to recover if those systems are compromised.

Alex Weinert, Semperis chief product officer, said the volume of new NHIs being created is moving faster than governance frameworks can absorb. “The accelerated use of AI is introducing a bevy of new agents, each with its own non-human identity, throughout global enterprises, and many companies are just way too optimistic about their ability to recover their identity infrastructure following a breach, even as they expand this landscape,” Weinert said.

Chris Inglis, the first US National Cyber Director and a Semperis strategic advisor, said the gap between documented recovery plans and operational reality is a consistent pattern in how organisations respond to cyberattacks. “On paper, organisations have plans and backups; in practice, identity failures turn technical incidents into prolonged business crises, exposing a dangerous gap between perceived resilience and reality,” Inglis said.

Grace Cassy, partner at Ten Eleven Ventures, said the pace of AI integration at the identity layer demands a corresponding commitment to recovery capability. “Introducing AI at the identity layer offers operational advantages, but it must be accompanied by guardrails, observability, and recovery readiness. It is a new dimension of an old question, really: Are you resilient enough to respond in the event of critical disruption?” Cassy said.

On a constructive note, 79% of Australian respondents said AI identity governance is a priority for their organisation in the months ahead. The study recommends that organisations treat AI agents explicitly as NHIs within their identity frameworks, apply least-privilege and just-in-time access controls as rigorously as they do for human users, monitor for anomalous agent behaviour, and ensure identity systems can be recovered to a trusted state following a breach.
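The recommended controls – registering agents as NHIs, least-privilege scoping, and just-in-time access – can be sketched in a few lines. The following is a minimal, hypothetical illustration only; the class names, scope strings, and methods are assumptions for the sketch, not any vendor's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class Grant:
    scope: str
    expires_at: datetime  # just-in-time: every grant is time-boxed

@dataclass
class AgentIdentity:
    agent_id: str
    allowed_scopes: frozenset  # least privilege: only what the agent needs
    grants: list = field(default_factory=list)

class NHIRegistry:
    """Hypothetical registry that tracks AI agents as non-human identities."""

    def __init__(self):
        self._agents = {}

    def register(self, agent_id, allowed_scopes):
        # Formal registration, so the agent is governed like a human identity.
        self._agents[agent_id] = AgentIdentity(agent_id, frozenset(allowed_scopes))

    def grant_jit(self, agent_id, scope, ttl_minutes=15):
        # Short-lived grant; refused outright if the scope exceeds least privilege.
        agent = self._agents[agent_id]
        if scope not in agent.allowed_scopes:
            raise PermissionError(f"{agent_id} may not hold scope {scope!r}")
        grant = Grant(scope, datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes))
        agent.grants.append(grant)
        return grant

    def is_authorised(self, agent_id, scope):
        # Only an unexpired grant confers access; expired grants confer nothing.
        now = datetime.now(timezone.utc)
        agent = self._agents.get(agent_id)
        if agent is None:
            return False
        return any(g.scope == scope and g.expires_at > now for g in agent.grants)

registry = NHIRegistry()
registry.register("helpdesk-agent-01", {"password.reset"})
registry.grant_jit("helpdesk-agent-01", "password.reset", ttl_minutes=15)
print(registry.is_authorised("helpdesk-agent-01", "password.reset"))  # True
print(registry.is_authorised("helpdesk-agent-01", "vpn.grant"))       # False
```

The point of the sketch is the shape of the control, not the code: the help desk agent can reset passwords for a bounded window, and a request for VPN access fails at grant time rather than being discovered later in an audit.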

APRA letter reinforces the governance picture

The Semperis findings sit alongside a supervisory letter APRA sent to all regulated entities on April 30, 2026, based on a targeted review of large banks, insurers, and superannuation trustees conducted in late 2025. While the letter covers the financial sector broadly, its observations correspond closely with the identity and governance gaps the Semperis study identified. APRA found that identity and access management systems have not been updated to account for non-human actors such as AI agents. The regulator also observed that governance has not kept pace with deployment – most entities recognise that existing prudential standards apply to AI, but few have translated that recognition into operational practice. Board-level technical literacy on AI risks was noted as underdeveloped, with some boards relying on vendor presentations without sufficient independent examination of model risk or control design.

Concentration risk was another area of concern, with some institutions running multiple AI functions through a single provider and lacking tested fallback arrangements. Where AI is embedded within third-party software platforms or developer tooling, entities often have limited visibility over how underlying models are trained or updated. APRA member Therese McCarthy Hockey said the current environment requires entities to move faster on both governance and security remediation. “What we’ve observed from our supervisory engagement is that while AI adoption is continuing apace, the systems and processes required to safely govern its use aren’t keeping up. Likewise, the speed at which entities can identify and patch vulnerabilities needs to operate much faster, commensurate with the AI-accelerated threat,” McCarthy Hockey said.

McCarthy Hockey said APRA is not introducing AI-specific prudential standards at this stage, but that existing obligations in information security, operational risk, governance, and data management apply in full to AI activities. “We expect to see a significant improvement in how entities are closing the gaps between the power of the technology they are using and their ability to monitor and control it,” she said. APRA indicated it will pursue enforcement where entities fail to manage AI risks proportionate to their size and complexity.
