How cybersecurity leaders can defend against the surge of AI-driven NHIs

Machine identities pose a big security risk for enterprises, and that risk will be magnified dramatically as AI agents are deployed. According to a report by cybersecurity vendor CyberArk, machine identities — also known as non-human identities (NHI) — now outnumber humans by 82 to 1, and their number is expected to increase exponentially. By comparison, in 2022, machine identities outnumbered humans by 45 to 1.

“If you look at IAM [identity and access management] as a whole, machine identity is the most immature space,” says Gartner analyst Steve Wessels. “It’s so hard to catch up. And then we talk about AI. Things are moving so fast. People are doing it willy-nilly. They’re throwing up AI agents everywhere.”

Traditional security risks

Managing machine identities was already a problem before AI agents, but businesses found ways to work around it, including building automation scripts that go in every 90 days to change a certificate, password, or account. Even so, companies can end up with self-signed certificates, certificates that expire without a proper renewal process, hard-coded credentials, and risky service accounts.
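
Such a rotation job is often just a scheduled script. Here is a minimal sketch; the account inventory and the set_password() helper are hypothetical stand-ins for whatever directory or secrets-manager API an organization actually uses.

```python
# Minimal sketch of a 90-day credential rotation job. The inventory
# format and set_password() are hypothetical, not any vendor's API.
import secrets
import string
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def new_password(length: int = 32) -> str:
    """Generate a cryptographically random password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def set_password(account_name: str, password: str) -> None:
    """Stub: replace with the real directory or secrets-manager call."""
    print(f"rotated credential for {account_name}")

def rotate_stale_accounts(accounts: list[dict]) -> None:
    """Rotate any service-account credential older than 90 days."""
    now = datetime.now(timezone.utc)
    for acct in accounts:
        if now - acct["password_set_at"] > MAX_AGE:
            set_password(acct["name"], new_password())
```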

There are three main issues when it comes to NHIs: lack of visibility into these identities; long-lost, untracked NHIs; and default or hard-coded credentials.

Visibility

Yageo Group had so many problematic machine identities that information security operations manager Terrick Taylor says he is almost embarrassed to admit it, even though the group has now automated the monitoring of both human and non-human identities and has a process for managing identity lifecycles. “Last time I looked at the portal, there were over 500 accounts,” he says.

But once he can see the problem, such as a default password, an account that is too permissive, or one older than 90 days, he can take steps to shut it down or take other measures. The issue can grow considerably at companies that frequently acquire others with different technologies.
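
The kinds of checks Taylor describes can be expressed as simple policy rules. Below is an illustrative sketch; the account fields and thresholds are assumptions for the example, not Yageo's actual tooling.

```python
# Illustrative audit rules over a hypothetical account inventory.
# Field names (flagged_default_password, permissions, created_at)
# are assumptions, not any real product's schema.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)

def audit(account: dict) -> list[str]:
    """Return the policy violations found for one machine identity."""
    findings = []
    if account.get("flagged_default_password"):
        findings.append("default password in use")
    if "*" in account.get("permissions", []):
        findings.append("over-permissive: wildcard grant")
    if datetime.now(timezone.utc) - account["created_at"] > MAX_AGE:
        findings.append("older than 90 days")
    return findings
```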

According to the CyberArk survey of more than 2,600 security decision-makers across 20 countries, 70% of respondents say that identity silos are a root cause of cybersecurity risk, and 49% say they lack complete visibility into entitlements and permissions across their cloud environments.

What makes it complicated is that machine identities can be created by various individuals and systems within an organization, for a multitude of different reasons. Some of these identities are created by employees who then leave the company, taking the knowledge of their existence with them as they go. But the access rights remain.

Even more worrisome is that a single compromised account with high privileges can be used by an attacker to create more service accounts, helping them spread further and deeper within an organization and making it much harder to root them out.

Long lost non-human identities

Lifecycle management is crucial to securing machine identities. In addition to the operational challenges of expired certificates, there’s also the risk that the longer a credential has been hanging around, the higher the odds that someone has stumbled across it. “The hardest thing with a service account is keeping track of why it was created and what it is being used for,” says Gartner’s Wessels. “When you spin it up, you know exactly what it is, but if you don’t document that really well and maintain that documentation, it quickly becomes unmanaged.”

Companies end up with service accounts everywhere, which creates a large attack surface that only grows over time. “We’ve seen passwords that were set and haven’t been changed for nine years,” Wessels says. “That password becomes kind of embedded, and it’s very difficult to rotate it, change it, secure it.”

Many companies don’t have lifecycle management for all their machine identities, and security teams may be reluctant to shut down old accounts because doing so might break critical business processes.

Yageo’s Taylor isn’t one of those people. “If I see anything more than 90 days old, I’m killing it regardless. If it’s more than 90 days, I can’t see how it would still be useful.”

Others may soon have to join him. In April, the CA/Browser Forum unanimously voted to reduce TLS certificate lifespans from the current 398 days to 200 days by next March, 100 days by March 2027, and just 47 days by March 2029. “That is going to be a fundamental problem for a lot of us because of the operational disruption that would happen,” says Nemi George, vice president of IT and CISO at PDS Health. “We have a very robust process, but there are still days when we come in and a cert renewal fell through the cracks.”

Shorter lifespans reduce the chance of keys being compromised via man-in-the-middle attacks and data breaches, and they encourage companies to embrace automation.
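
Automating that starts with knowing when each certificate expires. A minimal expiry check can be written with Python's standard library alone; the hostname inventory and the 30-day alert threshold below are placeholders.

```python
# Minimal TLS certificate expiry check using only the standard library.
import socket
import ssl
import time

def days_until_expiry(hostname: str, port: int = 443) -> float:
    """Connect over TLS and return days left on the server certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires - time.time()) / 86400

if __name__ == "__main__":
    for host in ["example.com"]:        # placeholder inventory
        remaining = days_until_expiry(host)
        if remaining < 30:              # alert threshold; tune to renewal SLAs
            print(f"WARNING: {host} certificate expires in {remaining:.0f} days")
```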

Default and hard-coded credentials

When an application is first built, it’s easy to use passwords that are simply the word “password” as placeholders. Access-management systems that provide one-time-use credentials to be used exactly when they are needed are cumbersome to set up. And some systems come with default logins like “admin” that are never changed.

There are a lot of mistakes like this that companies make all the time, says George. “An attacker doesn’t really have to be sophisticated to get in.” It’s like leaving your key in the lock when you leave the house. At that point, does it even count as a break-in if the criminal enters? “You kind of let them in.”

Similarly, when developers hard-code passwords and other access credentials directly into software and the code is leaked, those credentials are ripe for harvesting.
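
The standard alternative is to resolve secrets at runtime rather than baking them into source. A minimal pattern is sketched below; the DB_PASSWORD variable name is illustrative, and in practice the value would come from a secrets manager or workload identity platform.

```python
# Resolve secrets at runtime instead of hard-coding them in source.
# The DB_PASSWORD name is illustrative.
import os

def get_db_password() -> str:
    password = os.environ.get("DB_PASSWORD")
    if not password:
        # Fail loudly at startup rather than falling back to a default.
        raise RuntimeError("DB_PASSWORD is not set; refusing to start")
    return password
```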

According to Verizon’s 2025 Data Breach Investigations Report, there were nearly half a million exposed credentials, which Verizon refers to as secrets, in public Git repositories. The median time to remediate a discovered leaked secret was 94 days. That’s three months in which an attacker could find this information and exploit it.
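
Finding such leaks doesn't take sophistication, either. The kind of scanning attackers automate can be approximated with a few regexes over a source tree, as in the toy example below; production scanners such as gitleaks and trufflehog cover far more patterns and check Git history, not just the working tree.

```python
# Toy secret scanner: a couple of regexes over a source tree.
import re
from pathlib import Path

PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "hard-coded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I),
}

def scan(root: str) -> None:
    """Print every pattern match found under the given directory."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                print(f"{path}: possible {label}: {match.group(0)[:40]}")
```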

And they did. According to the report, credential abuse was the single most common access vector, occurring in 22% of nearly 10,000 breaches analyzed, putting it ahead of both exploitation of vulnerabilities and phishing, though Verizon did not differentiate between human and machine identities in its report.

As attackers deploy more AI and automation, all the traditional risks of machine identities become more acute. AI-powered bots can crawl through leaked data and source code repositories to find insecure machine identities and leverage them for even greater access.

Generative AI and AI agents increase NHI risks

According to the CyberArk survey, AI is expected to be the top source of new identities with privileged and sensitive access in 2025. It’s no surprise that 82% of companies say their use of AI creates access risks. Many generative AI technologies are so easy to deploy that business users can do it without input from IT, and without security oversight. Almost half of all organizations, 47%, say that they aren’t able to secure and manage shadow AI.

AI agents are the next step in the evolution of generative AI. Unlike chatbots, which work with company data only when it is provided by a user or through an augmented prompt, agents are typically more autonomous and can go out and find the information they need on their own. This means they need access to enterprise systems at a level that allows them to carry out all their assigned tasks. “The thing I’m worried about first is misconfiguration,” says Yageo’s Taylor. If an AI agent’s permissions are set incorrectly, “it opens up the door to a lot of bad things to happen.”

Because of their ability to plan, reason, act, and learn, AI agents can exhibit unpredictable and emergent behaviors. An AI agent instructed to accomplish a particular goal might find an unanticipated way to do it, with unanticipated consequences.

This risk is magnified even further with agentic AI systems that use multiple AI agents working together to complete bigger tasks, or even automate entire business processes. In addition to individual agents, agentic AI systems can also include access to data and tools, as well as security and risk guardrails.

“In old scripts the code is static and you can look at the behavior, look at the code, and you know that this thing should be connecting,” Taylor says. “In AI, the code changes itself… Agentic AI is cutting edge. And sometimes you step over that edge, and it can cut.”

This isn’t a purely theoretical threat. In May, Anthropic released the results of the security testing on its latest Claude models. In one test, Claude was allowed access to company emails, so that it could serve as a useful assistant. In reading the emails, Claude discovered information about its own impending replacement with a newer AI system, and also that the engineer in charge of this replacement was having an affair. In 84% of the tests, Claude attempted to blackmail the engineer so that it wouldn’t be replaced. Anthropic said it put guardrails in place to keep this kind of thing from happening, but it hasn’t released the results of any tests on those guardrails.

This should raise significant concerns for any company giving AI direct access to email systems.

Unanticipated behaviors are just the start. According to the Cloud Security Alliance (CSA), another challenge with agents is the unstructured nature of their communications. Traditional applications communicate through extremely predictable, well-defined channels and formats. AI agents can communicate with other agents and systems using plain language, making them hard to monitor with traditional security techniques.

How cybersecurity leaders can secure machine identities

The first step is to get visibility into all the machine identities in an environment and to create policies for how to manage them.

Gartner’s Wessels recommends that enterprises move towards centralized governance for machine identities and attach credentials to specific workloads. “Then manage the lifecycle of that application or workload. That way of doing it is a much more modern way.”

The credentials could last for five minutes, or even less than that. “Just for the time they need that connection. Then it goes away.”
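
A minimal sketch of that short-lived-credential pattern appears below: a token is minted for a specific workload, expires in minutes, and is verified on every use. In production this job is done by a secrets manager or workload identity platform, not hand-rolled code; everything here is illustrative.

```python
# Sketch of a short-lived, workload-bound credential broker.
import hashlib
import hmac
import secrets
import time

SIGNING_KEY = secrets.token_bytes(32)   # held only by the broker
TTL_SECONDS = 300                       # five-minute lifetime

def mint_token(workload: str) -> str:
    """Issue a signed token bound to one workload, expiring in minutes."""
    expires = int(time.time()) + TTL_SECONDS
    payload = f"{workload}:{expires}"
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_token(token: str, workload: str) -> bool:
    """Accept the token only for the named workload, and only before expiry."""
    name, expires, sig = token.rsplit(":", 2)
    expected = hmac.new(SIGNING_KEY, f"{name}:{expires}".encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and name == workload
            and int(expires) > time.time())
```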

There’s a lot of guidance out there for companies looking to modernize their identity management, and many established vendors in the space. And the technology continues to evolve as the uses of AI become more developed.

According to the CyberArk survey, 94% of respondents are already using AI and LLMs in their identity security strategies, and 61% are considering using AI to secure both human and machine identities in the next 12 months.

Unfortunately, when it comes to securing the identities of AI agents, things aren’t looking as rosy. “There aren’t a lot of standards around agentic AI and it’s being spun up and put in by anybody and everybody,” says Wessels. “There’s not a whole lot of structure even around who should handle these things.”

Companies also need to monitor what the AI agents are doing, what connections they’re making, and what information they’re pulling, he says.

Anand Rao, AI professor at Carnegie Mellon University, suggests that some enterprises might want to wait and secure their legacy infrastructure first, and only deploy AI agents after they’ve modernized their machine identity environment.

It all depends on their risk tolerance. And there are frameworks companies can look to. In March, the SANS Institute released a set of AI security guidelines that includes recommendations such as limiting the functions and tools AI agents have access to and ensuring that agents have the least privilege possible.
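
In code, that least-privilege recommendation often reduces to an explicit allowlist between the agent and its tools, as in the sketch below. The tool names and registry are illustrative, not from any particular agent framework.

```python
# Least-privilege tool dispatch for an AI agent: any tool not on the
# allowlist is refused. Names and registry contents are illustrative.
def search_tickets(query: str) -> list:
    """Stub for a vetted, read-only tool the agent is allowed to use."""
    return []

TOOL_REGISTRY = {"search_tickets": search_tickets}
ALLOWED_TOOLS = {"search_tickets"}      # grant only what the task needs

def dispatch_tool_call(tool_name: str, args: dict):
    if tool_name not in ALLOWED_TOOLS or tool_name not in TOOL_REGISTRY:
        raise PermissionError(f"agent requested disallowed tool: {tool_name}")
    return TOOL_REGISTRY[tool_name](**args)
```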

In May, CSA released its agentic AI red teaming guide, which outlines several ways in which AI agents pose risks different from those of traditional applications and offers practical recommendations on how to spot misbehaving agents.