Prompt injection turned Google’s Antigravity file search into RCE

Security researchers have revealed a prompt injection flaw in Google’s Antigravity IDE that could be weaponized to bypass its sandbox protections and achieve remote code execution (RCE).

The issue came from Antigravity’s ability to allow AI agents to invoke native functions, like searching files, on behalf of the user. Designed to kill complexity, the feature could allow attackers to inject malicious input into a tool parameter.

According to Pillar Security researchers, the vulnerability could bypass Antigravity’s “most restrictive security configuration,” Secure Mode.

The flaw was reported to Google in January, which acknowledged and fixed the issue internally, awarding Pillar Security a bounty through its Vulnerability Reward Program (VRP) for AI-specific categories. Google did not immediately respond to CSO’s request for comments.

File search could be turned into code execution

Pillar’s prompt injection vector relied on Antigravity’s “find_my_name” tool and an “fd” utility within. find_my_name is one of Antigravity’s built-in agent tools that allows the AI to search for files and directories in the project workspace using the fd command line.

What was happening is that any string beginning with “-” was being interpreted by fd as a flag rather than a search pattern, allowing execution of binaries within files matching a “-Xsh” pattern. “The technique exploits insufficient input sanitization of the find_by_name tool’s Pattern parameter, allowing attackers to inject command-line flags into the underlying fd utility, converting a file search operation into arbitrary code execution,” the researchers said in a blog post.

Essentially, instead of just locating files, “fd” could be tricked into executing attacker-supplied binaries across those files using a crafted prompt that manipulates the “Pattern” parameter. The researchers demonstrated this by creating a file in the local directory with the malicious prompt to exploit the “pattern” injected. Antigravity picked up the file, ran its intended tasks (like launching Calculator), and also launched the search tool, now primed to execute “-Xsh” patterns.

This could also be turned into remote code execution via indirect prompt injection. “A user pulls a benign-looking source file from an untrusted origin, such as a public repository, containing attacker-controlled comments that instruct the agent to stage and trigger the exploit,” the researchers explained.

The worst part was that it was unstoppable with the existing protection.

Google’s sandbox never got a chance

Antigravity’s Secure Mode, which is designed to restrict network access, prevent out-of-workspace writes, and ensure all command operations run strictly under a sandbox context, could not flag or quarantine this technique. This is because the find_my_name tool is called much before Secure Mode restrictions are evaluated.

“The agent treats it as a native tool invocation, not a shell command, so it never reaches the security boundary that Secure Mode enforces,“ the researchers noted.

The issue was trimmed down to a twofold root cause. A “No input validation” at the Pattern parameter, which accepts arbitrary strings without checking for legitimate search pattern characters. The second was “no argument termination,” which refers to fd’s inability to distinguish between flags and search terms. Google has already fixed the flaw internally, and Antigravity users need not do anything else to remain protected. However, the flaw’s ability to bypass Secure Mode, Pillar researchers point out, underlines that security controls focused on shell commands are insufficient. “The industry must move beyond sanitization-based controls toward execution isolation,” they said. “Every native tool parameter that reaches a shell command is a potential injection point.”