{"id":15964,"date":"2026-03-17T07:06:22","date_gmt":"2026-03-17T07:06:22","guid":{"rendered":"https:\/\/newestek.com\/?p=15964"},"modified":"2026-03-17T07:06:22","modified_gmt":"2026-03-17T07:06:22","slug":"runtime-the-new-frontier-of-ai-agent-security","status":"publish","type":"post","link":"https:\/\/newestek.com\/?p=15964","title":{"rendered":"Runtime: The new frontier of AI agent security"},"content":{"rendered":"<div>\n<div id=\"remove_no_follow\">\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<section class=\"wp-block-bigbite-multi-title\">\n<div class=\"container\"><\/div>\n<\/section>\n<p>AI agents are already operating inside enterprise networks, quietly doing some of the work employees once handled themselves \u2014 writing code, drafting emails, retrieving files, and connecting to internal systems.<\/p>\n<p>Sometimes they also make costly mistakes.<\/p>\n<p>At Meta, an employee <a href=\"https:\/\/www.404media.co\/meta-director-of-ai-safety-allows-ai-agent-to-accidentally-delete-her-inbox\/\">asked an AI assistant<\/a> to help manage her inbox. It deleted it instead. At Amazon, an internal agent <a href=\"https:\/\/www.ft.com\/content\/00c282de-ed14-4acd-a948-bc8d6bdb339d\">autonomously decided<\/a> to tear down and rebuild a deployment environment, knocking an AWS service offline for 13 hours.<\/p>\n<p>These incidents offer glimpses of a larger shift security leaders are confronting: Autonomous software is now acting inside corporate environments with real permissions and real consequences.<\/p>\n<p>\u201cAgents are like teenagers,\u201d <a href=\"https:\/\/www.linkedin.com\/in\/joesu11ivan\/\">Joe Sullivan<\/a>, former chief security officer of Uber, Cloudflare, and Facebook, and now head of Joe Sullivan Security, tells CSO. 
\u201cThey have all the access and none of the judgment.\u201d<\/p>\n<p>For years, most efforts to <a href=\"https:\/\/www.csoonline.com\/article\/4033338\/how-cybersecurity-leaders-are-securing-ai-infrastructures.html\">secure AI have focused on prevention<\/a> \u2014 scanning models, filtering prompts, and analyzing AI-generated code before it reaches production. But as enterprises deploy autonomous agents that interact directly with internal systems, some security leaders say the real risk begins <a href=\"https:\/\/www.csoonline.com\/article\/4109999\/agentic-ai-already-hinting-at-cybersecuritys-pending-identity-crisis.html\">only after those agents are live<\/a>.<\/p>\n<p>\u201cIn security, we always assume prevention will fail,\u201d Sullivan says. \u201cThat\u2019s why detection and monitoring are equally important.\u201d<\/p>\n<p>The speed and autonomy of AI agents mean mistakes or unexpected actions can cascade quickly across systems. That dynamic is why a growing number of security leaders are rallying around, at least conceptually, what Sullivan calls <a href=\"https:\/\/www.linkedin.com\/posts\/joesu11ivan_someone-asked-what-is-your-word-for-2026-activity-7417653902076043264-D22a?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAABIJm4BsAtHK231QAJTQLC2tv15e-IHcMU.\">runtime security<\/a>, or continuously monitoring agents as they operate inside enterprise environments.<\/p>\n<p>In simple terms, runtime security focuses on what software does while it is running, rather than only evaluating it before deployment.<\/p>\n<h2 class=\"wp-block-heading\" id=\"why-agents-change-the-security-model\">Why agents change the security model<\/h2>\n<p>CISOs have spent years governing human behavior inside enterprise networks. 
They have identity management, role-based access controls, user behavior analytics, and endpoint detection tools.<\/p>\n<p>The question is whether those same frameworks \u2014 and those same tools for tracking employees \u2014 <a href=\"https:\/\/www.csoonline.com\/article\/4123246\/think-agentic-ai-is-hard-to-secure-today-just-wait-a-few-months.html\">can be extended to AI agents<\/a>. Security leaders studying the problem say the answer is: only partially. The traditional frameworks still apply conceptually, but the mechanisms required to observe agent behavior are fundamentally different.<\/p>\n<p>\u201cThe what isn\u2019t new \u2014 the how is new,\u201d says <a href=\"https:\/\/www.linkedin.com\/in\/hanah-marie-darley\/\">Hanah-Marie Darley<\/a>, co-founder and chief AI officer of Geordie AI. \u201cHow do you actually get this data? You get an agent\u2019s behavioral information mostly through logs, and not every AI agent platform means that you are getting logs, or that you have logs in the first place.\u201d<\/p>\n<p>Traditional security tools were built to intercept human behavior at perimeter checkpoints where employees access the internet, log into systems, or move data across boundaries. Agents frequently bypass those checkpoints entirely. They operate through API calls and <a href=\"https:\/\/www.csoonline.com\/article\/4087656\/what-cisos-need-to-know-about-new-tools-for-securing-mcp-servers.html\">MCP connections<\/a> that may never pass through the security tooling that would <a href=\"https:\/\/www.csoonline.com\/article\/3822459\/what-is-anomaly-detection-behavior-based-analysis-for-cyber-threats.html\">ordinarily flag anomalous behavior<\/a>.<\/p>\n<p>They also generate dramatically more activity. Where a typical employee might produce 50 to 100 log events in a two-hour period, an agent can generate 10 to 20 times that volume in the same window. 
And \u2014 critically \u2014 many agents produce no logs at all.<\/p>\n<p>Some agent platforms generate robust audit trails by default. Others don\u2019t. Coding agents can overwrite their own session logs when a previous session is replayed, meaning a security team investigating an incident may find the record of what happened has been erased.<\/p>\n<p>\u201cHaving the logs in the first place is often a bigger step than people realize because not every agent natively has logs,\u201d Darley tells CSO.<\/p>\n<h2 class=\"wp-block-heading\" id=\"the-inventory-problem\">The inventory problem<\/h2>\n<p>Before a CISO can monitor what agents are doing, they face a more elemental challenge: knowing which agents exist.<\/p>\n<p>That task is harder than it sounds. In many large enterprises, agents are proliferating faster than any central inventory can capture. Marketing teams deploy AI assistants. HR departments use agents for resume screening. Engineers run coding agents with broad filesystem access. Non-technical employees connect AI productivity tools such as <a href=\"https:\/\/www.csoonline.com\/article\/4077438\/manipulating-the-meeting-notetaker-the-rise-of-ai-summarization-optimization.html\">note-takers<\/a>, email managers, and scheduling assistants to corporate accounts, often without formal IT approval.<\/p>\n<p>\u201cCISOs right now are getting the hard question from their board and their CEO,\u201d Sullivan says. \u201cWhat AI is being run inside the company right now? You\u2019ve got to answer that question. What AI is being run, and what\u2019s it doing?\u201d<\/p>\n<p>Darley recommends starting with a structured inventory effort, ideally using tools built specifically for agent discovery, since general-purpose application management systems often can\u2019t see agents living in the cloud, in code repositories, or inside third-party SaaS platforms.<\/p>\n<p>\u201cStart with at least one system,\u201d she advises. 
\u201cIt will give you a sense of scale, help you understand who the owners are, and start to educate you on the kind of tooling you actually need.\u201d<\/p>\n<p>Without inventory, behavioral monitoring has nothing to which it can anchor. Security teams can watch the logs of the agents they know about. But the agents they have missed are precisely the ones most likely to deliver unwelcome consequences.<\/p>\n<h2 class=\"wp-block-heading\" id=\"what-runtime-monitoring-looks-like\">What runtime monitoring looks like<\/h2>\n<p>Once an organization knows where its agents are, the question is what to watch for \u2014 and how.<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/elia-zaitsev-2792694\/\">Elia Zaitsev<\/a>, CTO of CrowdStrike, tells CSO that existing <a href=\"https:\/\/www.csoonline.com\/article\/653052\/how-to-pick-the-best-endpoint-detection-and-response-solution.html\">endpoint detection and response (EDR) tools<\/a> already capture the kinds of behavior needed to track AI agents. They instrument operating systems like a flight data recorder, recording every application that runs, every file it touches, every network connection it makes, and every command it spawns.<\/p>\n<p>CrowdStrike\u2019s EDR, for example, builds a threat graph: a connected map of behaviors and their upstream causes. If a suspicious network connection occurs, the threat graph can trace it back through many degrees of separation to the application or agent that initiated the chain.<\/p>\n<p>\u201cEDR technology can associate this end behavior with the fact that it came from an application ultimately being driven by an agentic system,\u201d Zaitsev explains. \u201cA firewall just tells you something on this computer is trying to communicate with an AI model in the cloud. EDR allows you to say: This specific application is talking to this specific model.\u201d<\/p>\n<p>For AI agents specifically, this creates a new set of controls. 
A system that recognizes a known agent application \u2014 Claude Code, OpenAI\u2019s Codex, OpenHands \u2014 can apply a different policy to that application than it would to the same application running under human control. \u201cThere are activities that may be benign if a human is responsible,\u201d Zaitsev says, \u201cbut if it\u2019s an AI agent I don\u2019t necessarily trust, I may want to apply different policies on the fly.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"build-time-security-still-has-a-pivotal-role-to-play\">Build-time security still has a pivotal role to play<\/h2>\n<p>Not every enterprise will simply adopt AI agents off the shelf. Many will build such systems themselves.<\/p>\n<p>Because of this, runtime monitoring does not mean that build-time security \u2014 scanning code, evaluating models before deployment, and checking prompts \u2014 is yesterday\u2019s problem. <a href=\"https:\/\/www.linkedin.com\/in\/vbadhwar\/\">Varun Badhwar<\/a>, CEO of Endor Labs, pushes back on that kind of framing.<\/p>\n<p>\u201cI\u2019ll never say runtime isn\u2019t important,\u201d Badhwar tells CSO. \u201cBut you want to fix as much as you can early. The average cost of a runtime security finding is $4,000, versus $40 at build time. So, guess what? You want to fix as much as you can before it ever gets there.\u201d<\/p>\n<p>A vulnerability caught while a developer is still writing code takes minutes to fix. That same vulnerability, once deployed into a container, run through QA, and pushed to a production environment, requires retracing every step of that journey before it can be addressed \u2014 at roughly a hundredfold the cost. Badhwar uses the analogy of a car manufacturing line: Quality controls on the assembly line are always cheaper than recalling 70,000 cars from the street.<\/p>\n<p>His framework is simple: Shift left, shield right. 
Shift as many security controls as possible into the development process \u2014 catch problems while agents are being built, not after they\u2019re running. Then shield right with runtime monitoring as your last-mile safety net, because some things will always slip through, and zero-day vulnerabilities by definition can\u2019t be anticipated at build time.<\/p>\n<h2 class=\"wp-block-heading\" id=\"what-cisos-should-do-now\">What CISOs should do now<\/h2>\n<p>For CISOs, the shift is less about a single new tool and more about a new way of thinking about AI risk. Instead of focusing solely on how agents are built, security teams increasingly need visibility into how they behave once they begin operating inside enterprise systems.<\/p>\n<p>The path forward is therefore not a rip-and-replace of existing infrastructure but a methodical extension of security discipline to a new category of actor inside the enterprise.<\/p>\n<p>Zaitsev frames it using the security-in-depth model: You don\u2019t stop securing agents at build time just because runtime monitoring is available. You build both. \u201cEDR and runtime security is that last-level safety net,\u201d he says. \u201cYou still want all those other layers.\u201d<\/p>\n<p>But experts say the following are practical first steps toward implementing runtime security for most CISOs:<\/p>\n<p><strong>Build an inventory first.<\/strong> Pick one system \u2014 a major SaaS platform, your code repositories, your endpoint fleet \u2014 and map the agents operating within it. Identify the owners, the permissions, and the protocols. Without visibility, nothing else is possible.<\/p>\n<p><strong>Extend behavioral monitoring to agents.<\/strong> Whether through EDR, dedicated agent security tooling, or a combination, establish what normal looks like. What systems should each agent touch? What data should it process? Who should it communicate with? 
Deviations from that baseline are your signal.<\/p>\n<p><strong>Apply agent-specific policies.<\/strong> Don\u2019t govern agents with the same controls you use for employees. They have different access patterns, different risk profiles, and different failure modes. Agent-aware tools can differentiate policy based on whether an application is AI-driven.<\/p>\n<p><strong>Design for incident response before you need it.<\/strong> Know how you\u2019ll stop a misbehaving agent without destroying the evidence of what it did. Behavioral logs need to be captured in separate, write-protected stores \u2014 not just in the agent platform\u2019s native logging, which may be overwritten.<\/p>\n<p><strong>Plan for AI solutions to AI problems.<\/strong> You won\u2019t hire your way out of the volume challenge. Security teams will need automation to monitor systems that operate at machine speed.<\/p>\n<p><strong>See also:<\/strong><\/p>\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/www.csoonline.com\/article\/4047974\/agentic-ai-a-cisos-security-nightmare-in-the-making.html\">Agentic AI: A CISO\u2019s security nightmare in the making?<\/a><\/li>\n<li><a href=\"https:\/\/www.csoonline.com\/article\/4123246\/think-agentic-ai-is-hard-to-secure-today-just-wait-a-few-months.html\">Think agentic AI is hard to secure today? 
Just wait a few months<\/a><\/li>\n<li><a href=\"https:\/\/www.csoonline.com\/article\/4087656\/what-cisos-need-to-know-about-new-tools-for-securing-mcp-servers.html\">What CISOs need to know about new tools for securing MCP servers<\/a><\/li>\n<li><a href=\"https:\/\/www.csoonline.com\/article\/4012712\/misconfigured-mcp-servers-expose-ai-agent-systems-to-compromise.html\">Misconfigured MCP servers expose AI agent systems to compromise<\/a><\/li>\n<li><a href=\"https:\/\/www.csoonline.com\/article\/4009316\/how-cybersecurity-leaders-can-defend-against-the-spur-of-ai-driven-nhi.html\">How cybersecurity leaders can defend against the spur of AI-driven NHI<\/a><\/li>\n<li><a href=\"https:\/\/www.csoonline.com\/article\/4031749\/mcp-security-securing-the-backbone-of-agentic-ai.html\">MCP: Securing the backbone of agentic AI<\/a><\/li>\n<\/ul>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>AI agents are already operating inside enterprise networks, quietly doing some of the work employees once handled themselves \u2014 writing code, drafting emails, retrieving files, and connecting to internal systems. Sometimes they also make costly mistakes. At Meta, an employee asked an AI assistant to help manage her inbox. It deleted it instead. 
At Amazon, an internal agent autonomously decided to tear down and rebuild&#8230; <\/p>\n<p class=\"more\"><a class=\"more-link\" href=\"https:\/\/newestek.com\/?p=15964\">Read More<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-15964","post","type-post","status-publish","format-standard","hentry","category-uncategorized","is-cat-link-borders-light is-cat-link-rounded"],"_links":{"self":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/15964","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=15964"}],"version-history":[{"count":0,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/15964\/revisions"}],"wp:attachment":[{"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=15964"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=15964"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=15964"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}