{"id":14773,"date":"2025-09-11T07:03:14","date_gmt":"2025-09-11T07:03:14","guid":{"rendered":"https:\/\/newestek.com\/?p=14773"},"modified":"2025-09-11T07:03:14","modified_gmt":"2025-09-11T07:03:14","slug":"ai-prompt-injection-gets-real-with-macros-the-latest-hidden-threat","status":"publish","type":"post","link":"https:\/\/newestek.com\/?p=14773","title":{"rendered":"AI prompt injection gets real \u2014 with macros the latest hidden threat"},"content":{"rendered":"<div>\n<div id=\"remove_no_follow\">\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<section class=\"wp-block-bigbite-multi-title\">\n<div class=\"container\"><\/div>\n<\/section>\n<p>Attackers are increasingly exploiting generative AI by embedding malicious prompts in macros and exposing hidden data through parsers.<\/p>\n<p>The switch in adversarial tactics \u2014 noted in a recent <a href=\"https:\/\/www.opswat.com\/resources\/reports\/ponemon-state-of-file-security\">State of File Security study from OPSWAT<\/a> \u2014 calls for enterprises to extend the same type of protection they already apply to software development pipelines into AI environments, according to experts in AI security polled by CSO.<\/p>\n<p>\u201cBroadly speaking, this threat vector \u2014 \u2018malicious prompts embedded in macros\u2019 \u2014 is yet another prompt injection method,\u201d <a href=\"https:\/\/www.linkedin.com\/in\/roberto-enea-572453ab\/?locale=en_US\">Roberto Enea<\/a>, lead data scientist at cybersecurity services firm Fortra, told CSO. 
\u201cIn this specific case, the injection is done inside document macros or VBA [Visual Basic for Applications] scripts and is aimed at AI systems that analyze files.\u201d<\/p>\n<p>Enea added: \u201cTypically, the end goal is to mislead the AI system into classifying malware as safe.\u201d<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/dane-sherrets-7a049973\/\">Dane Sherrets<\/a>, staff innovations architect at bug bounty platform HackerOne, said that embedding malicious prompts in macros is a prime example of where the capabilities of gen AI can be turned against the systems themselves.<\/p>\n<p>\u201cThis technique uses macros to deliver a form of prompt injection, feeding deceptive inputs that push the LLM to behave in an unintended way,\u201d Sherrets said. \u201cThis can cause the system to spit out sensitive or confidential data or help the malicious actor gain access to the back end of the system.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"zero-click-prompt-injection\">Zero-click prompt injection<\/h2>\n<p>Isolated examples of exploits and malware abusing gen AI only began emerging earlier this year.<\/p>\n<p>For example, Aim Security\u2019s researchers recently discovered <a href=\"https:\/\/nvd.nist.gov\/vuln\/detail\/cve-2025-32711\">EchoLeak (CVE-2025-32711)<\/a>, a zero-click <a href=\"https:\/\/www.csoonline.com\/article\/1294996\/top-4-llm-threats-to-the-enterprise.html\">prompt injection vulnerability<\/a> in Microsoft 365 Copilot, described as the first such attack on an AI agent.<\/p>\n<p>\u201cAttackers could embed hidden instructions in common business files like emails or Word documents, and when Copilot processed the file, it executed those instructions automatically,\u201d <a href=\"https:\/\/www.stratascale.com\/team\/quentin-rhoads-herrera\">Quentin Rhoads-Herrera<\/a>, VP of cybersecurity services at Stratascale, explained.<\/p>\n<p>In response to the vulnerability, Microsoft recommended patching, restricting 
Copilot access, stripping hidden metadata from shared files, and enabling its built-in AI security controls.<\/p>\n<p>Another similar attack, <a href=\"https:\/\/nvd.nist.gov\/vuln\/detail\/CVE-2025-54135\">CurXecute (CVE-2025-54135)<\/a>, allowed remote code execution through prompt injection in software development environments.<\/p>\n<p>\u201cAttackers will keep finding novel ways to embed their prompt injections in places that are out of sight for the user but are processed by the LLM nonetheless,\u201d said <a href=\"https:\/\/www.linkedin.com\/in\/itay-ravia-30ba13134\/?originalSubdomain=il\">Itay Ravia<\/a>, Aim Labs\u2019 head of research. \u201cEmbedding prompt injections in macros is just one of the latest trends.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"jedi-mind-trick-turned-against-ai-based-malware-scanners\">Jedi mind trick turned against AI-based malware scanners<\/h2>\n<p>The <a href=\"https:\/\/research.checkpoint.com\/2025\/ai-evasion-prompt-injection\/\">\u201cSkynet\u201d malware, discovered in June 2025, featured an attempted prompt injection<\/a> against AI-powered security tools. The technique was designed to manipulate AI malware analysis systems into falsely declaring no malware was detected in a sample through a form of \u201cJedi mind trick.\u201d<\/p>\n<p>Researchers at Check Point reckon the malware was most likely a proof-of-concept experiment by malware developers.<\/p>\n<p>\u201cWe\u2019ve already seen proof-of-concept attacks where malicious prompts are hidden inside documents, macros, or configuration files to trick AI systems into exfiltrating data or executing unintended actions,\u201d Stratascale\u2019s Rhoads-Herrera commented. 
\u201cResearchers have also demonstrated how LLMs can be misled through hidden instructions in code comments or metadata, showing the same principle at work.\u201d<\/p>\n<p>Rhoads-Herrera added: \u201cWhile some of these remain research-driven, the techniques are quickly moving into the hands of attackers who are skilled at weaponizing proof-of-concepts.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"under-the-radar\">Under the radar<\/h2>\n<p><a href=\"https:\/\/www.sans.org\/profiles\/ensar-seker\">Ensar Seker<\/a>, CISO at threat intelligence vendor SOCRadar, described the abuse of gen AI systems through prompt injection as an evolution in malware delivery tactics.<\/p>\n<p>\u201cIt\u2019s not just about dropping a payload anymore; it\u2019s about crafting dynamic instructions that can manipulate behavior at runtime, and then hiding or encoding those instructions so they evade traditional scanning tools,\u201d Seker said.<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/jasonkeirstead\/?originalSubdomain=ca\">Jason Keirstead<\/a>, VP of security strategy at security operations firm Simbian AI, said that many prompt injection attacks against gen AI systems are going under the radar.<\/p>\n<p>\u201cFor example, people are putting malicious prompts in resumes they upload to recruitment sites, causing the AIs used in job portals to surface their resume at the top,\u201d Keirstead explained. 
\u201cWe also have recently seen the malicious prompts that targeted the Comet browser, etc.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"stealthy-and-systemic-threat\">Stealthy and systemic threat<\/h2>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/dgranosa\/?originalSubdomain=hr\">Dorian Grano\u0161a<\/a>, lead red team data scientist at AI security specialists SplxAI, said that prompt injection has become a \u201cstealthy and systemic threat\u201d in real-world deployments tested by the firm.<\/p>\n<p>\u201cAttackers conceal instructions via ultra-small fonts, background-matched text, ASCII smuggling using Unicode Tags, macros that inject payloads at parsing time, and even file metadata (e.g., DOCX custom properties, PDF\/XMP, EXIF),\u201d Grano\u0161a explained. \u201cThese vectors evade human review yet are fully parsed and executed by LLMs, enabling indirect prompt injection.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"countermeasures\">Countermeasures<\/h2>\n<p>Justin Endres, head of data security at cybersecurity vendor Seclore, argued that security leaders can\u2019t rely on legacy tools alone to defend against malicious prompts that turn \u201ceveryday files into Trojan horses for AI systems.\u201d<\/p>\n<p>\u201c[Security leaders] need layered defenses that sanitize content before it ever reaches an AI parser, enforce strict guardrails around model inputs, and keep humans in the loop for critical workflows,\u201d Endres advised. \u201cOtherwise, attackers will be the ones writing the prompts that shape your AI\u2019s behavior.\u201d<\/p>\n<p>Defending against these types of attacks involves a combination of technical procedures and policy controls, such as:<\/p>\n<ul class=\"wp-block-list\">\n<li>Perform deep inspection of any file that enters an enterprise environment, especially from untrusted sources. 
\u201cUse sandboxing, static analysis, and behavioral simulation tools to see what the macros or embedded prompts actually do before opening,\u201d SOCRadar\u2019s Seker advised.<\/li>\n<li>Implement policies that isolate macro execution \u2014 for example, application sandboxing or Microsoft\u2019s Protected View.<\/li>\n<li>Evaluate content disarm and reconstruction (CDR) tools. \u201cCDR rebuilds files without active content, neutralizing embedded threats,\u201d SOCRadar\u2019s Seker explained. \u201cThis is especially effective for PDFs, Office files, and other structured documents.\u201d<\/li>\n<li>Sanitize any inputs (prompts) fed into generative AI systems.<\/li>\n<li>Design AI systems to include a \u201cverification\u201d component that reviews inputs and applies guardrails.<\/li>\n<li>Apply clear protocols for validating AI outputs.<\/li>\n<\/ul>\n<p>The most effective countermeasures come down to visibility, governance, and guardrails, according to Stratascale\u2019s Rhoads-Herrera.<\/p>\n<p>SOCRadar\u2019s Seker argued that enterprises should treat AI pipelines the same way they handle CI\/CD pipelines by extending zero-trust principles into their data parsing and AI workflows. In practice, this means introducing guardrails, enforcing output verification, and using contextual filters to block unauthorized instructions from being executed or acted on by LLM-based systems.<\/p>\n<p>\u201cI strongly encourage CISOs and red teams to begin testing AI-enabled workflows against adversarial prompts today, before threat actors make this mainstream,\u201d Seker concluded.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Attackers are increasingly exploiting generative AI by embedding malicious prompts in macros and exposing hidden data through parsers. 
The switch in adversarial tactics \u2014 noted in a recent State of File Security study from OPSWAT \u2014 calls for enterprises to extend the same type of protection they already apply to software development pipelines into AI environments, according to experts in AI security polled by CSO&#8230;. <\/p>\n<p class=\"more\"><a class=\"more-link\" href=\"https:\/\/newestek.com\/?p=14773\">Read More<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-14773","post","type-post","status-publish","format-standard","hentry","category-uncategorized","is-cat-link-borders-light is-cat-link-rounded"],"_links":{"self":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14773","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=14773"}],"version-history":[{"count":0,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14773\/revisions"}],"wp:attachment":[{"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=14773"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=14773"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=14773"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}