The ‘manager of agents’: How AI evolves the SOC analyst role

Every SOC analyst has heard it by now: “AI is coming for your job”.

I hear it in conversations with SOC teams. I see it in the hesitation during evaluations. And increasingly, I feel it as a source of resistance — especially from the very people AI is supposed to help.

But the reality is the opposite.

Instead of eliminating the Tier 1 analyst role, AI is elevating it — from a job defined by repetitive tasks to one defined by judgment, oversight and decision-making. In short, it turns analysts into SOC commanders.

The work was never the point

To understand what’s changing, we need to be honest about the historical role of Tier 1 analysts.

In a typical SOC, a Tier 1 analyst might spend 20–30 minutes investigating a single phishing alert — pivoting across email logs, endpoint data and threat intelligence tools, validating signals and documenting findings. It’s necessary work, but it’s also highly repetitive and time-consuming.

Modern security operations generate more data than humans can reasonably process. Investigating a single alert often requires pivoting across identity systems, endpoint telemetry, cloud logs and threat intelligence sources. Multiply that by hundreds or thousands of alerts per day, and you have a workload that is fundamentally misaligned with human capacity.

More importantly, SOC analysts are too talented to spend their days on work a machine can do. For years, we’ve accepted this as the cost of doing business. AI changes that equation.

From doing the work to directing it

What agentic AI introduces into the SOC is the ability to delegate.

Instead of analysts manually gathering evidence and stitching together context, AI agents can now autonomously execute investigative steps: Querying systems, correlating signals and building evidence chains in real time. It doesn’t remove the human from the process. It elevates them within it.

The emerging model is one where analysts manage a system of agents — each responsible for a piece of the investigation — rather than performing each step themselves. The human role shifts from operator to orchestrator.
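The operator-to-orchestrator shift can be made concrete with a small sketch. This is a hypothetical illustration, not any vendor's API: each "agent" here is a stand-in for an investigative step (email analysis, endpoint telemetry, threat intelligence), and the analyst reviews the assembled evidence chain instead of gathering it by hand. All names and thresholds are illustrative assumptions.

```python
# Hypothetical sketch of the "manager of agents" pattern. Each agent
# handles one piece of the investigation; the orchestrator assembles
# an evidence chain; the analyst stays "on the loop," reviewing only
# what falls below a confidence threshold.

from dataclasses import dataclass, field

@dataclass
class Finding:
    agent: str
    summary: str
    confidence: float  # the agent's own confidence, 0.0-1.0

@dataclass
class Investigation:
    alert_id: str
    findings: list[Finding] = field(default_factory=list)

    def needs_human_review(self, threshold: float = 0.8) -> bool:
        # Escalate only empty or low-confidence investigations;
        # everything else the analyst can approve at a glance.
        return not self.findings or any(
            f.confidence < threshold for f in self.findings
        )

def run_agents(alert_id: str, agents: dict) -> Investigation:
    """Delegate each investigative step to its agent and collect findings."""
    inv = Investigation(alert_id)
    for name, agent in agents.items():
        summary, confidence = agent(alert_id)
        inv.findings.append(Finding(name, summary, confidence))
    return inv

# Stand-ins for real investigative agents working a phishing alert.
agents = {
    "email":    lambda a: ("sender domain registered two days ago", 0.90),
    "endpoint": lambda a: ("no process execution after link click", 0.95),
    "intel":    lambda a: ("URL matches a known phishing kit", 0.85),
}

inv = run_agents("ALERT-1234", agents)
print(inv.needs_human_review())  # all findings above threshold -> False
```

The design point is the `needs_human_review` check: oversight is a property of the assembled investigation, not a gate on every intermediate step, which is what lets one analyst supervise many concurrent investigations.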

What I consistently hear from security leaders isn’t, “I need my analysts to move faster.” It’s, “I need my analysts to stop collecting data and start making decisions based on it.” Those are fundamentally different problems. And the gap between them is where AI creates the most value.

The rise of the ‘manager of agents’

This is where the Tier 1 role evolves — not disappears.

In this new model, entry-level analysts are effectively managing a swarm of AI agents. They are responsible for reviewing investigations, validating conclusions and ensuring actions align with business context and risk tolerance.

They are not “in the loop” for every step. They are “on the loop” — overseeing outcomes rather than executing tasks.

When analysts are forced to stay in the loop — checking every enrichment, every query, every intermediate step — they become a bottleneck. When they move to being on the loop, they can operate at scale, reviewing dozens or hundreds of investigations with the right level of oversight.

This is how trust in AI is built: Not by asking humans to verify everything, but by giving them the visibility to verify anything.

Transparency becomes the control plane. Analysts can see exactly what the AI did, how it reached a conclusion and where uncertainty exists. Over time, as accuracy proves out, they naturally increase their level of trust — just as they would with a new colleague joining the team.

Why cybersecurity is different

The fear of job displacement is understandable. In many industries, AI is reducing the need for entry-level roles. Cybersecurity is one of the few domains where AI won’t reduce work. It will expose how much work we’ve been unable to do.

The volume and complexity of threats are increasing faster than teams can scale. Attackers are already using AI to automate reconnaissance, generate code and accelerate exploitation. Defenders don’t have the option to sit this out.

Threat hunting, detection engineering and control optimization have historically been under-resourced because teams were consumed by alert triage. When AI removes that burden, it creates much-needed capacity for analysts to do what they were trained to do. The work doesn’t shrink. The right work finally gets done.

A new baseline for entry-level talent

This shift also changes what we expect from entry-level analysts.

Historically, Tier 1 roles were designed as places where analysts learned by doing repetitive tasks. That model no longer makes sense when those tasks can be automated.

The baseline is moving toward understanding how AI systems operate: Interpreting their outputs, questioning their reasoning and guiding their behavior. Human-centric skills become more important, not less. Curiosity, critical thinking and the ability to connect disparate signals into a coherent narrative — these are the differentiators in an AI-driven SOC.

We’re already seeing organizations rethink how they hire for these roles. There is less emphasis on credentials and more on how someone thinks and solves problems. When AI handles the mechanics, judgment is the job.

Building trust that holds

If the future is so clear, why is there resistance? In most cases, it comes down to trust — and trust must be earned, not assumed.

The deployments I’ve seen fail share a common pattern: Organizations treat AI as a binary shift from no automation to full autonomy. That’s not how security teams work, and it’s not how they should be asked to work.

What works is a progression. Start with limited, high-confidence use cases. Provide full transparency into how the system reaches its conclusions. Let analysts validate outcomes before expanding the scope. And critically, put practitioners in the room. Not implementation consultants or project managers, but people who have run SOC shifts, triaged thousands of alerts and earned credibility the hard way.

This is why, when we deploy, we bring former SOC leads, threat hunters and detection engineers to work directly alongside analyst teams. They’re not there to configure software. They’re there to build trust in the system — because they’ve already earned trust from the people using it. When analysts see that the people helping them deploy this technology have lived the same grind, the conversation changes. It stops being “will this replace me” and starts being “how do I use this well.”

That shift in orientation — from threat to tool — is what separates a successful deployment from one that stalls.

The trust gap isn’t a technology problem. It’s a human one. And it closes the same way any trust gap closes: Through demonstrated competence, shared context and time.

The future SOC is human-led

The end state here is not an autonomous SOC with no humans involved. It’s a human-led SOC, powered by AI.

AI agents handle the labor-intensive, evidence-gathering aspects of security operations. Humans provide direction, oversight and accountability. Together, they operate at a speed and scale neither could achieve alone. That’s not a theory — it’s what’s happening in production environments today.

Elevation, not elimination

The narrative that AI will eliminate Tier 1 analysts misses the point. The role isn’t going away. It’s being redefined.

The analysts who succeed in this new environment will be those who can manage intelligence systems, interpret complex outputs and make high-quality decisions under uncertainty.

They won’t be replaced. They’ll be promoted.

This article is published as part of the Foundry Expert Contributor Network.