Google Vertex AI security permissions could amplify insider threats

The discovery of fresh privilege-escalation vulnerabilities in Google’s Vertex AI is a stark reminder to CISOs that managing AI service agents is a task unlike any they have encountered before.

XM Cyber on Thursday reported two issues with Vertex AI in which default configurations allow low-privileged users to pivot into higher-privileged Service Agent roles. But, it said, Google told the firm the system is working as intended.

“The OWASP Agentic Top 10 just codified identity and privilege abuse as ASI03 and Google immediately gave us a case study,” said Rock Lambros, CEO of security firm RockCyber. “We’ve seen this movie before. Orca found Azure Storage privilege escalation, Microsoft called it ‘by design.’ Aqua found AWS SageMaker lateral movement paths, AWS said ‘operating as expected.’ Cloud providers have turned ‘shared responsibility’ into a liability shield for their own insecure defaults. CISOs need to stop trusting that ‘managed’ means ‘secured’ and start auditing every service identity attached to their AI workloads, because the vendors clearly aren’t doing it for you.”

Sanchit Vir Gogia, chief analyst at Greyhound Research, said the report is “a window into how the trust model behind Google’s Vertex AI is fundamentally misaligned with enterprise security principles.” In these platforms, he said, “Managed service agents are granted sweeping permissions so AI features can function out of the box. But that convenience comes at the cost of visibility and control. These service identities operate in the background, carry project-wide privileges, and can be manipulated by any user who understands how the system behaves.”

Google didn’t respond to a request for comment. 

The vulnerabilities, XM Cyber explained in its report, lie in how privileges are allocated to different roles associated with Vertex AI. “Central to this is the role of Service Agents: special service accounts created and managed by Google Cloud that allow services to access your resources and perform internal processes on your behalf. Because these invisible managed identities are required for services to function, they are often automatically granted broad project-wide permissions,” it said. “These vulnerabilities allow an attacker with minimal permissions to hijack high-privileged Service Agents, effectively turning these invisible managed identities into double agents that facilitate privilege escalation. When we disclosed the findings to Google, their rationale was that the services are currently ‘working as intended.’”
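
In practice, a first step toward surfacing these “invisible” identities is simply to inventory which Google-managed service agents hold broad roles in a project. The sketch below, which assumes Application Default Credentials, the google-api-python-client package, and the common (but not guaranteed) “gcp-sa-” naming convention for Google-managed service agents, flags project-level bindings worth reviewing; the project ID and role list are illustrative placeholders.

```python
# Minimal sketch: list project-level IAM bindings and flag Google-managed
# service agents holding broad roles. Assumes Application Default Credentials
# with permission to read the project's IAM policy.
from googleapiclient import discovery

PROJECT_ID = "your-project-id"  # hypothetical placeholder

# Roles broad enough to warrant review when held by a background identity;
# this list is illustrative, not exhaustive.
BROAD_ROLES = {"roles/owner", "roles/editor"}

crm = discovery.build("cloudresourcemanager", "v1")
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()

for binding in policy.get("bindings", []):
    role = binding["role"]
    for member in binding.get("members", []):
        # Google-managed service agents typically look like:
        # serviceAccount:service-<PROJECT_NUMBER>@gcp-sa-<service>.iam.gserviceaccount.com
        if "gcp-sa-" in member and (role in BROAD_ROLES or role.endswith("serviceAgent")):
            print(f"Review: {member} holds {role}")
```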

XM Cyber found that an attacker controlling an identity with only the “Viewer” role, the lowest level of privilege, could in certain circumstances manipulate the system into revealing a Service Agent’s access token and then use that agent’s privileges across the project.

Gogia said the issue is alarming. “When a cloud provider says that a low-privileged user being able to hijack a highly privileged service identity is ‘working as intended,’ what they are really saying is that your governance model is subordinate to their architecture,” he said. “It is a structural design flaw that hands out power to components most customers don’t even realize exist.”

Don’t wait for vendors to act

Cybersecurity consultant Brian Levine, executive director of FormerGov, was also concerned. “The smart move for CISOs is to build compensating controls now because waiting for vendors to redefine ‘intended behavior’ is not a security strategy,” he said.

Flavio Villanustre, CISO for the LexisNexis Risk Solutions Group, warned, “A malicious insider could leverage these weaknesses to grant themselves more access than normally allowed.” But, he said, “There is little that can be done to mitigate the risk other than, possibly, limiting the blast radius by reducing the authentication scope and introducing robust security boundaries in between them.” However, he added, “This could have the side effect of significantly increasing the cost, so it may not be a commercially viable option either.”

Gogia said the biggest risk is that these holes will likely go undetected, because enterprise security tools are not programmed to look for them.

“Most enterprises have no monitoring in place for service agent behavior. If one of these identities is abused, it won’t look like an attacker. It will look like the platform doing its job,” Gogia said. “That is what makes the risk severe. You are trusting components that you cannot observe, constrain, or isolate without fundamentally redesigning your cloud posture. Most organizations log user activity but ignore what the platform does internally. That needs to change. You need to monitor your service agents like they’re privileged employees. Build alerts around unexpected BigQuery queries, storage access, or session behavior. The attacker will look like the service agent, so that is where detection must focus.”
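
As a concrete starting point for that kind of monitoring, the sketch below pulls recent Cloud Audit Log entries attributed to the Vertex AI service agent so its normal behavior can be baselined. It assumes the google-cloud-logging package and the usual service-<PROJECT_NUMBER>@gcp-sa-aiplatform.iam.gserviceaccount.com naming for that agent; verify the exact principal email in your own project before relying on the filter.

```python
# Minimal sketch: read recent audit log entries whose authenticated principal
# matches the Vertex AI service agent's email pattern. Assumes Application
# Default Credentials with log-viewing access.
from google.cloud import logging

client = logging.Client(project="your-project-id")  # hypothetical placeholder

# Substring match on the audit log name and the service agent's principal email.
log_filter = (
    'logName:"cloudaudit.googleapis.com" AND '
    'protoPayload.authenticationInfo.principalEmail:"gcp-sa-aiplatform"'
)

for entry in client.list_entries(filter_=log_filter, max_results=50):
    # Audit entries carry a proto payload whose methodName names the API call.
    method = entry.payload.get("methodName") if isinstance(entry.payload, dict) else None
    print(entry.timestamp, method)
```

Feeding the same filter into a log-based alert, rather than running ad hoc reads, is the more durable version of what Gogia describes.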

He added: “Organizations are trusting code to run under identities they do not understand, performing actions they do not monitor, in environments they assume are safe. That is the textbook definition of invisible risk. And it is amplified in AI environments, because AI workloads often span multiple services, cross-reference sensitive datasets, and require orchestration that touches everything from logs to APIs.”

This is not the first time Google’s Vertex AI has been found vulnerable to privilege escalation: In November 2024, Palo Alto Networks issued a report describing similar issues in the Vertex AI environment, problems that Google told Palo Alto at the time it had fixed.