Overview
- Antigravity, Google’s AI coding environment, was vulnerable to a prompt-injection attack that enabled remote code execution; Google patched the bug on February 28, following Pillar Security’s January disclosure.
- The attack hinged on Antigravity’s find_by_name tool passing a user-supplied Pattern parameter straight to the fd search command, where an injected -X (--exec-batch) flag turned a file search into command execution.
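A minimal sketch of the injection mechanics, assuming the pattern was interpolated into a shell command line; the helper name and exact construction are illustrative, not Antigravity's actual code:

```python
def build_find_command(pattern: str, root: str = ".") -> str:
    # VULNERABLE (illustrative): the user-supplied pattern is spliced
    # directly into the command string with no validation.
    return f"fd {pattern} {root}"

# Benign search:
build_find_command("report")              # -> "fd report ."

# Prompt-injected "pattern": fd parses -X (--exec-batch) as a flag,
# so the search now runs a command on its results instead of listing them.
build_find_command("-X sh payload.sh ;")  # -> "fd -X sh payload.sh ; ."
```

Because the pattern is never checked for leading dashes, anything an attacker can smuggle into it is parsed by fd as if the tool itself had passed those options.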
- Researchers demonstrated a full chain combining file creation with fd -X: a file-write tool first staged a malicious script in the workspace, and what looked like a normal search then executed it.
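The staged chain can be pictured as two agent tool calls; the tool names and fields below are hypothetical stand-ins, not Antigravity's actual API:

```python
# Hypothetical two-step chain an injected prompt could steer the agent into.
tool_calls = [
    # Step 1: stage a script inside the workspace via an ordinary file write.
    {"tool": "create_file", "path": "payload.sh",
     "content": "#!/bin/sh\necho pwned"},
    # Step 2: "search" for files, smuggling fd's -X flag in the pattern
    # so the staged script is executed on the search results.
    {"tool": "find_by_name", "pattern": "-X sh payload.sh ;"},
]

# The second call looks like a routine search, but its pattern begins
# with "-", so fd treats it as an option rather than a query.
assert tool_calls[1]["pattern"].startswith("-")
```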
- The flaw bypassed Antigravity’s most restrictive Secure/Strict Mode because the native file-search tool ran before those sandbox limits took effect, letting unvalidated input reach the shell.
- Pillar Security and other teams warn this pattern is surfacing across AI agents like Claude, GitHub Copilot, and Cursor, and they urge auditing every native-tool parameter that touches a shell rather than relying on input sanitization.
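One hardening pattern consistent with that advice is to keep tool parameters out of shell strings entirely and to end option parsing explicitly, so a value can never masquerade as a flag. A sketch, assuming fd honors the conventional "--" end-of-options marker:

```python
import subprocess

def build_safe_find_argv(pattern: str, root: str = ".") -> list[str]:
    # Pass arguments as a list (no shell interpolation) and insert "--"
    # so a pattern beginning with "-" is read as a literal search term,
    # never as an option like -X.
    return ["fd", "--", pattern, root]

def safe_find(pattern: str, root: str = ".") -> str:
    # No shell=True: the pattern cannot break out into extra arguments.
    result = subprocess.run(
        build_safe_find_argv(pattern, root),
        capture_output=True, text=True, check=False,
    )
    return result.stdout
```

Even given the injected pattern above, the argv stays ["fd", "--", "-X sh payload.sh ;", "."], a harmless (and failing) literal search rather than command execution.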