{"id":14577,"date":"2025-08-07T04:33:48","date_gmt":"2025-08-07T04:33:48","guid":{"rendered":"https:\/\/newestek.com\/?p=14577"},"modified":"2025-08-07T04:33:48","modified_gmt":"2025-08-07T04:33:48","slug":"beef-up-ai-security-with-zero-trust-principles","status":"publish","type":"post","link":"https:\/\/newestek.com\/?p=14577","title":{"rendered":"Beef up AI security with zero trust principles"},"content":{"rendered":"<div>\n<div id=\"remove_no_follow\">\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<section class=\"wp-block-bigbite-multi-title\">\n<div class=\"container\"><\/div>\n<\/section>\n<p>Many CSOs worry about their firm\u2019s AI agents spitting out advice to users on how to build a bomb, or citing non-existent legal decisions. But those are the least of their worries, said a security expert at this week\u2019s Black Hat security conference in Las Vegas. Systems using large language models (LLMs) that connect to enterprise data contain\u00a0other\u00a0vulnerabilities that will be leveraged in dangerous ways unless developers and infosec leaders tighten security.<\/p>\n<p>One example that David Brauchler, NCC Group\u2019s technical director and head of AI and machine learning security, showed the conference was how easy it was for penetration testers to pull passwords from a customer\u2019s AI system.<\/p>\n<p>\u201cThis organization didn\u2019t properly tag the trust levels associated with their data and gave the AI access to their entire organization\u2019s data lake,\u201d <a href=\"https:\/\/www.linkedin.com\/in\/david-brauchler-iii-73bb5034b\/\" target=\"_blank\" rel=\"noreferrer noopener\">Brauchler<\/a> said in an interview after his presentation. 
\u201cBecause they didn\u2019t have the proper permissions assigned to the data and the proper permissions assigned to the user, they had no fine-grained access control to assign what types of information my user level was able to interact with.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"guardrails-alone-arent-enough\">Guardrails alone aren\u2019t enough<\/h2>\n<p>Guardrails, such as word or content filters applied to results, just aren\u2019t enough to lower risk for today\u2019s AI systems, his presentation stressed. In fact,\u00a0he added in the interview, \u201cwhen we see our customers say \u2018We need stronger guardrails,\u2019 what they\u2019re saying is \u2018We are accepting an application with known vulnerabilities and just hoping a threat actor doesn\u2019t decide to target us.&#8217;\u201d<\/p>\n<p>\u201cMature AI security isolates potentially malicious inputs from trusted contexts,\u201d he told the conference. Developers and CSOs have to bring the principles of\u00a0<a href=\"https:\/\/url.usb.m.mimecastprotect.com\/s\/CJLDC8Xy9yTXA3KEFnf7sy2kvV?domain=csoonline.com\" target=\"_blank\" rel=\"noreferrer noopener\">zero trust<\/a>\u00a0to the AI landscape with tactics like assigning trust labels to all application data.<\/p>\n<p>\u201cRight now we\u2019re seeing organizations implementing these language models into their applications \u2014 usually because the shareholders demand some sort of AI these days \u2014 and the developers really don\u2019t understand how to do it in a secure manner,\u201d he told CSO.<\/p>\n<p>\u201cThose who are architecting AI systems don\u2019t understand some of the implications this has on their environments,\u201d he noted, adding, \u201cCSOs don\u2019t know what lessons to bring back to their teams.\u201d<\/p>\n<p>Almost every AI system that NCC Group has assessed has been vulnerable to attack, he pointed out: \u201cWe have been able to use large language models to compromise database entries, get 
code execution in environments, take over your cloud.\u201d<\/p>\n<p>\u201cBusinesses are ignorant of how their risk is augmented by the introduction of AI,\u201d he said. Large language models are manipulated by the inputs they receive. As soon as an AI agent is exposed to data that has a lower level of trust than the user whose account is running that model, there\u2019s the potential for that untrusted data to manipulate the language model\u2019s behavior and access trusted functionality or sensitive resources.<\/p>\n<p>Imagine, he said, a retailer with an AI system that allows online buyers to ask the chatbot to summarize customer reviews of a product. If the system is compromised by a crook, the prompt [query] can be ignored in favor of the automatic purchase of a product the threat actor wants.<\/p>\n<p>Trying to eliminate prompt injections, such as \u201cshow me all customer passwords,\u201d is a waste of time, Brauchler added, because an LLM is a statistical algorithm that spits out an output. LLMs are intended to replicate human language interaction, so there\u2019s no hard boundary between malicious inputs and trusted or benign ones. Instead, developers and CSOs need to rely on true trust segmentation, using their current knowledge.<\/p>\n<p>\u201cIt\u2019s less a question of new security fundamentals and more a question of how do we apply the lessons we have already learned in security and apply them in an AI landscape,\u201d he said.<\/p>\n<h2 class=\"wp-block-heading\" id=\"strategies-for-csos\">Strategies for CSOs<\/h2>\n<p>Brauchler offered three AI threat modelling strategies CSOs should consider:<\/p>\n<ul class=\"wp-block-list\">\n<li>Trust flow tracking: tracking the movement of data throughout an application and monitoring the level of trust associated with that data. 
It defends against an attacker who gets untrusted data into an application in order to control its behavior and abuse trust;<\/li>\n<li>Source-sink mapping: A data source is any system whose output goes into the context window of an LLM. A sink is any system that consumes the output of an LLM (like a function call or another system downstream). The purpose of mapping sources and sinks is to discover whether there is an attack path through which a threat actor can get untrusted data into a data source that feeds a data sink the threat actor doesn\u2019t already have access to;<\/li>\n<li>Models as threat actors: Look at your threat model landscape and replace any LLMs with a threat actor. There\u2019s a vulnerability if the theoretical threat actor at those points can access something they normally couldn\u2019t. \u201cYour team should make absolutely certain there is no way for the language model at that vantage point to be exposed to untrusted data,\u201d he said. \u201cOtherwise you risk critical-level threats within your application.\u201d<\/li>\n<\/ul>\n<p>\u201cIf we implement these security control primitives, we can begin to eliminate attack classes that right now we are seeing in every AI system we test,\u201d he said.<\/p>\n<p>One of the most critical strategies, Brauchler said, comes down to segmentation: LLMs that run in high-trust contexts should never be exposed to untrusted data. And models exposed to untrusted data should never have access to high-privilege functionality. \u201cIt\u2019s a matter of segmenting those models that are operating in high-trust zones, and those operating with low-trust data.\u201d<\/p>\n<p>In addition, CSOs should approach AI defense beginning with their architecture teams. \u201cAI security is not something you can add as a patch-on solution,\u201d he said. \u201cYou can\u2019t add layers of guardrails, you can\u2019t add something in the middle to make your application magically secure. 
Your teams need to be developing your systems with security from the ground up. And the encouraging aspect is, this isn\u2019t a new lesson. Security and its fundamentals still apply in the same way we\u2019ve seen in the last 30 years. What\u2019s changed is how they\u2019re integrated into environments that leverage AI.\u201d<\/p>\n<p>He also referred CSOs and developers to:<\/p>\n<ul class=\"wp-block-list\">\n<li>the <a href=\"https:\/\/www.iso.org\/standard\/42001\" target=\"_blank\" rel=\"noreferrer noopener\">ISO\/IEC 42001 standard<\/a> for establishing, implementing, and maintaining an Artificial Intelligence Management System;<\/li>\n<li>the <a href=\"https:\/\/atlas.mitre.org\/\" target=\"_blank\" rel=\"noreferrer noopener\">MITRE ATLAS<\/a> knowledge base of adversary tactics and techniques against AI-enabled systems;<\/li>\n<li>the <a href=\"https:\/\/genai.owasp.org\/llm-top-10\/\" target=\"_blank\" rel=\"noreferrer noopener\">OWASP Top 10 Risks and Mitigations for LLMs<\/a>.<\/li>\n<\/ul>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Many CSOs worry about their firm\u2019s AI agents spitting out advice to users on how to build a bomb, or citing non-existent legal decisions. But those are the least of their worries, said a security expert at this week\u2019s Black Hat security conference in Las Vegas. 
Systems using large language models (LLMs) that connect to enterprise data contain\u00a0other\u00a0vulnerabilities that will be leveraged in dangerous ways&#8230; <\/p>\n<p class=\"more\"><a class=\"more-link\" href=\"https:\/\/newestek.com\/?p=14577\">Read More<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-14577","post","type-post","status-publish","format-standard","hentry","category-uncategorized","is-cat-link-borders-light is-cat-link-rounded"],"_links":{"self":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14577","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=14577"}],"version-history":[{"count":0,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14577\/revisions"}],"wp:attachment":[{"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=14577"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=14577"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=14577"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}