Overview
- Google’s threat team reports a criminal group used AI to craft a previously unknown exploit that could bypass two-factor authentication on a popular admin tool; the campaign was detected and the vendor issued a patch before broad abuse occurred.
- Analysts found AI fingerprints in the exploit code, including tutorial-style notes, a made-up severity score, and rigid, textbook-like Python formatting.
- The attack exploited a semantic logic gap in the software: a flaw in how features interact rather than a simple coding mistake, the kind of weakness large language models can map by reasoning through the software’s intended behavior.
- Google also details PROMPTSPY, an Android backdoor that uses the Gemini API to read on-screen elements and mimic taps and PIN patterns without user interaction.
- Google says criminals and state-linked units are folding generative AI into phishing, malware, and bug-hunting, prompting the company to deploy tools like Big Sleep and CodeMender to spot and fix weaknesses faster.
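To illustrate what a semantic logic gap looks like in contrast to a simple coding mistake, here is a minimal, entirely hypothetical sketch: each function below is reasonable in isolation, but their interaction lets a caller skip a 2FA check. All names and logic are invented for illustration; this is not the actual vulnerable tool or exploit.

```python
# Hypothetical sketch of a semantic logic gap: two individually
# "correct" features interact to bypass 2FA.

TRUSTED_DEVICES = set()

def check_password(user, password, device_id):
    """Password check. Semantic flaw: it marks the device as trusted
    before 2FA has ever succeeded for that device."""
    if password == "correct-password":           # stand-in for a real check
        TRUSTED_DEVICES.add((user, device_id))   # trust granted too early
        return True
    return False

def requires_2fa(user, device_id):
    """Skip 2FA for trusted devices. Correct in isolation, but trust
    was granted by the password step, not by a prior 2FA pass."""
    return (user, device_id) not in TRUSTED_DEVICES

def authenticate(user, password, device_id, otp=None):
    if not check_password(user, password, device_id):
        return False
    if requires_2fa(user, device_id):
        return otp == "123456"                   # stand-in OTP check
    return True                                  # 2FA silently skipped

# An attacker who knows only the password never sees a 2FA prompt:
print(authenticate("admin", "correct-password", "attacker-device"))  # True
```

No single line here is a bug a pattern-matching scanner would flag; the flaw only appears when you reason about how the trust-granting and trust-checking features compose, which is the kind of whole-system reasoning the report attributes to LLM-assisted bug hunting.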