Google warns: for the first time, hackers used AI to find and exploit a security flaw

‘We believe this is the tip of the iceberg. Other AI-developed zero-days are probably out there,’ says John Hultquist, chief analyst at Google’s Threat Intelligence Group

Cybercriminals have used artificial intelligence to discover and weaponize a previously unknown software vulnerability – the first confirmed case of its kind – Google revealed Monday.

Google's Threat Intelligence Group (GTIG) detailed the finding in a report documenting an accelerating trend: malicious actors are no longer just experimenting with AI as a research aid, but are beginning to embed it directly into offensive operations.

"Frankly, the details of this event are not as important as the evidence that the era of adversary use is here," John Hultquist, chief analyst at Google Threat Intelligence Group, wrote in a LinkedIn post Monday. "We believe this is the tip of the iceberg. Other AI-developed zero-days are probably out there."

The attackers, described as prominent cybercriminals, had planned a large-scale exploitation campaign targeting a widely used open-source web-based system administration tool. The vulnerability they developed allowed them to bypass two-factor authentication on the platform. Google classified it as a zero-day exploit, meaning the affected vendor had no prior knowledge of the flaw and no fix existed at the time of discovery.

According to Google's analysis, the exploit script bore unmistakable hallmarks of AI-generated code: an abundance of educational annotations, a hallucinated severity score, and a clean, textbook-style structure characteristic of large language model output. Based on these indicators, GTIG said it had high confidence that an AI model was used to both identify and build the exploit, though it said the tool was most likely not Google's own Gemini.
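To make those indicators concrete, here is a deliberately benign, hypothetical sketch in Python of what such hallmarks often look like. This is not the exploit code from the report; the script merely checks an HTTP status code against a placeholder host, but it exhibits the traits GTIG describes: over-explanatory "educational" comments, a fabricated severity score, and a uniform, textbook-style structure.

```python
# Hypothetical, benign illustration of LLM-style code hallmarks.
# The severity score below is fabricated ("hallucinated") purely for
# illustration -- an invented score is one of the indicators GTIG cites.
#
# Severity: CVSS 9.8 (Critical)  <-- made up, no real CVE behind it

import requests  # Step 1: Import the requests library for HTTP calls.


def check_host(url: str) -> bool:
    """Step 2: Send a GET request and report whether the host responds.

    Educational note: an HTTP 200 status code means the server
    answered the request successfully.
    """
    # Step 3: Perform the request with a short timeout.
    response = requests.get(url, timeout=5)
    # Step 4: Return True only on a successful (200) response.
    return response.status_code == 200


if __name__ == "__main__":
    # Step 5: Run the check against a placeholder host.
    print(check_host("https://example.com"))
```

None of these traits alone proves machine authorship; as the report makes clear, GTIG's high-confidence attribution rests on several such markers appearing together.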

Google said it worked with the affected vendor to disclose the vulnerability responsibly, and the planned attack was disrupted before it could cause any damage.

Hultquist cautioned that the incident is unlikely to be isolated. "If criminals are doing it, then state actors with significant resources probably are too," he wrote. "Each new generation of models will reduce the need for expert-developed harnesses, but they are almost certainly out there… The race has started already."

The broader GTIG report paints a picture of an adversarial ecosystem rapidly maturing in its use of AI. Hacking groups linked to China, Russia, and North Korea are all documented as integrating AI tools into different phases of their operations — from reconnaissance and phishing to malware development and large-scale vulnerability research.

Chinese state-linked groups have been observed using AI to conduct vulnerability research into embedded devices and router firmware, while also experimenting with specialized vulnerability databases to train models to reason like seasoned security experts. North Korean group APT45 has been sending thousands of automated prompts to recursively analyze known vulnerabilities and validate exploits, building a more robust attack arsenal than would otherwise be feasible.

On the malware front, Russia-linked actors have leveraged AI to generate decoy code — large volumes of inert but plausible-looking instructions designed to conceal the malicious components of their tools from detection. Google identified two malware families, CANFAIL and LONGSTREAM, using this technique against Ukrainian targets.

Perhaps most striking is the emergence of what researchers describe as autonomous attack orchestration. An Android backdoor called PROMPTSPY uses Google's own Gemini API to independently navigate a victim's device interface, interpret what's on screen, and execute commands — all without human direction. Google said it has taken action against the actors behind it, and confirmed no apps containing PROMPTSPY are currently available on Google Play. Android devices with Google Play Services are automatically protected through Google Play Protect.

The report also highlights a growing underground infrastructure built around gaining anonymous, large-scale access to premium AI models. Threat actors have developed automated pipelines to register and cycle through accounts at major AI providers, using anti-detection tools and proxy services to evade bans and safety filters — effectively industrializing their use of commercial AI.

Google said it continues to use AI defensively as well, including through its Big Sleep agent, which proactively hunts for vulnerabilities in software, and CodeMender, an experimental tool that uses Gemini to automatically patch critical code flaws as they are found.
