
Google stopped a zero-day hack that it says was developed with AI

Google researchers found evidence in the exploit’s code that it may have been created using AI, like a ‘hallucinated’ CVSS score.



Photo illustration of a brain on a circuit board in red.
Illustration by Cath Virginia / The Verge | Photos from Getty Images
Stevie Bonifield
is a news writer covering all things consumer tech. Stevie started out at Laptop Mag writing news and reviews on hardware, gaming, and AI.

For the first time, Google says it has spotted and stopped a zero-day exploit developed with AI. According to a report from Google Threat Intelligence Group (GTIG), “prominent cyber crime threat actors” were planning to use the vulnerability for a “mass exploitation event” that would have allowed them to bypass two-factor authentication on an unnamed “open-source, web-based system administration tool.”

Google’s researchers found hints in the Python script used for the exploit that indicated help from AI, like a “hallucinated CVSS score” and “structured, textbook” formatting consistent with LLM training data. The exploit takes advantage of “a high-level semantic logic flaw where the developer hardcoded a trust assumption” in the platform’s 2FA system. This follows weeks of hand-wringing over the capabilities of cybersecurity-focused AI models like Anthropic’s Mythos and a recently disclosed Linux vulnerability that was discovered with AI assistance.


It’s the first time Google has found evidence that AI was involved in an attack like this, although Google’s researchers note that they “do not believe Gemini was used.” Google says it was able to “disrupt” this particular exploit, but also says hackers are increasingly using AI to find and take advantage of security vulnerabilities. The report also mentions AI as a target for attackers, saying “GTIG has observed adversaries increasingly target the integrated components that grant AI systems their utility, such as autonomous skills and third-party data connectors.”

Google’s report also details how hackers are using “persona-driven jailbreaking” to get AI to find security vulnerabilities for them, like an example prompt that instructs the AI to pretend it’s a security expert. Hackers are also feeding AI models whole repositories of vulnerability data and using OpenClaw in ways that suggest “an interest in refining AI-generated payloads within controlled settings to increase exploit reliability prior to deployment.”
