{"id":15427,"date":"2026-01-08T07:03:56","date_gmt":"2026-01-08T07:03:56","guid":{"rendered":"https:\/\/newestek.com\/?p=15427"},"modified":"2026-01-08T07:03:56","modified_gmt":"2026-01-08T07:03:56","slug":"top-cyber-threats-to-your-ai-systems-and-infrastructure","status":"publish","type":"post","link":"https:\/\/newestek.com\/?p=15427","title":{"rendered":"Top cyber threats to your AI systems and infrastructure"},"content":{"rendered":"<div>\n<div id=\"remove_no_follow\">\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<section class=\"wp-block-bigbite-multi-title\">\n<div class=\"container\"><\/div>\n<\/section>\n<p>Attacks against AI systems and infrastructure are beginning to take shape in real-world instances, and security experts expect the number of these attack types will rise in coming years. In a rush to realize the benefits of AI, most organizations have played it <a href=\"https:\/\/www.csoonline.com\/article\/3529615\/companies-skip-security-hardening-in-rush-to-adopt-ai.html\">fast and loose on security hardening<\/a> when rolling out AI tools and use cases. 
As a result, experts also warn that many organizations aren\u2019t prepared to detect, deflect, or respond to such attacks.<\/p>\n<p>\u201cMost are aware of the possibility of such attacks, but I don\u2019t think a lot of people are fully aware of how to properly mitigate the risk,\u201d says <a href=\"https:\/\/www.usf.edu\/ai-cybersecurity-computing\/people\/faculty\/licato-john.aspx\">John Licato<\/a>, associate professor in the Bellini College of Artificial Intelligence, Cybersecurity and Computing at the University of South Florida, founder and director of the Advancing Machine and Human Reasoning Lab, and owner of startup company Actualization.AI.<\/p>\n<h2 class=\"wp-block-heading\" id=\"top-threats-to-ai-systems\">Top threats to AI systems<\/h2>\n<p>Multiple attack types against AI systems are emerging. Some attacks, such as data poisoning, occur during training. Others, such as adversarial inputs, happen during inference. Still others, such as model theft, occur during deployment.<\/p>\n<p>Here is a rundown of the top threats to AI infrastructure that experts warn about today. Some are rarer or more theoretical than others, though many have been observed in the wild or demonstrated by researchers through notable proofs of concept.<\/p>\n<h3 class=\"wp-block-heading\" id=\"data-poisoning\">Data poisoning<\/h3>\n<p><a href=\"https:\/\/www.csoonline.com\/article\/570555\/how-data-poisoning-attacks-corrupt-machine-learning-models.html\">Data poisoning<\/a> is a type of attack in which bad actors manipulate, tamper with, and pollute the data used to develop or train AI systems, including machine learning models. 
By corrupting the data or introducing faulty data, attackers can alter or bias a model\u2019s behavior or otherwise degrade its performance.<\/p>\n<p>Imagine an attack that tells a model that green means stop instead of go, says Robert T. Lee, CAIO and chief of research at SANS, a security training and certification firm. \u201cIt\u2019s meant to degrade the output of the model,\u201d he explains.<\/p>\n<h3 class=\"wp-block-heading\" id=\"model-poisoning\">Model poisoning<\/h3>\n<p>Here, the attack goes after the model itself, seeking to produce inaccurate results by tampering with the model\u2019s architecture or parameters. Some definitions of model poisoning also include attacks where the model\u2019s training data has been corrupted through data poisoning.<\/p>\n<h3 class=\"wp-block-heading\" id=\"tool-poisoning\">Tool poisoning<\/h3>\n<p>Invariant Labs identified this type of attack in spring 2025. When <a href=\"https:\/\/invariantlabs.ai\/blog\/mcp-security-notification-tool-poisoning-attacks\">announcing its findings<\/a>, Invariant wrote that it had \u201cdiscovered a critical vulnerability in the Model Context Protocol (MCP) that allows for what we term <em>Tool Poisoning Attacks<\/em>. This vulnerability can lead to sensitive data exfiltration and unauthorized actions by AI models.\u201d<\/p>\n<p>The company went on to note that its experiments showed \u201cthat a malicious server can not only exfiltrate sensitive data from the user but also hijack the agent\u2019s behavior and override instructions provided by other, trusted servers, leading to a complete compromise of the agent\u2019s functionality, even with respect to trusted infrastructure.\u201d<\/p>\n<p>These attacks involve embedding malicious instructions inside MCP tool descriptions that, when interpreted by AI models, can hijack the model. 
These attacks essentially corrupt the MCP layer \u201cto trick an agent to do something,\u201d says <a href=\"https:\/\/www.constellationr.com\/users\/chirag-mehta\">Chirag Mehta<\/a>, vice president and principal analyst at Constellation Research.<\/p>\n<p>For more on MCP threats, see \u201c<a href=\"https:\/\/www.csoonline.com\/article\/4023795\/top-10-mcp-vulnerabilities.html\">Top 10 MCP vulnerabilities: The hidden risks of AI integrations<\/a>.\u201d<\/p>\n<h3 class=\"wp-block-heading\" id=\"prompt-injection\">Prompt injection<\/h3>\n<p>During a prompt injection attack, hackers use prompts that look legitimate but actually have embedded malicious commands meant to get the large language model to do something it shouldn\u2019t. Hackers use these prompts to trick the model into bypassing or overriding its guardrails, sharing sensitive data, or performing unauthorized actions.<\/p>\n<p>\u201cWith prompt injection, you can change what the AI agent is supposed to do,\u201d says <a href=\"https:\/\/www.linkedin.com\/in\/fabien-cros-3b66a332\/\">Fabien Cros<\/a>, chief data and AI officer at global consulting firm Ducker Carlisle.<\/p>\n<p>Several notable prompt injection attacks and proofs of concept have been reported of late, including <a href=\"https:\/\/www.csoonline.com\/article\/4086965\/researchers-trick-chatgpt-into-prompt-injecting-itself.html\">researchers tricking ChatGPT into prompt injecting itself<\/a>, attackers <a href=\"https:\/\/www.csoonline.com\/article\/4053107\/ai-prompt-injection-gets-real-with-macros-the-latest-hidden-threat.html\">embedding malicious prompts into document macros<\/a>, and researchers <a href=\"https:\/\/www.csoonline.com\/article\/4036868\/black-hat-researchers-demonstrate-zero-click-prompt-injection-attacks-in-popular-ai-agents.html\">demoing zero-click prompt attacks on popular AI agents<\/a>.<\/p>\n<h3 class=\"wp-block-heading\" id=\"adversarial-inputs\">Adversarial inputs<\/h3>\n<p>Model owners and operators use perturbed data 
to test models for resiliency, but hackers use it to disrupt. In an adversarial input attack, malicious actors feed deceptive data to a model with the goal of making the model produce incorrect output.<\/p>\n<p>The changes to the perturbed input are typically small, or the deceptive data may be noise; the changes are deliberately designed to be subtle enough to evade detection by security systems but still capable of throwing off the model. This makes adversarial inputs a type of evasion attack.<\/p>\n<h3 class=\"wp-block-heading\" id=\"model-theft-model-extraction\">Model theft\/model extraction<\/h3>\n<p>Malicious actors can replicate, or reverse-engineer, a model, its parameters, and even its training data. They typically do this using publicly available APIs \u2014 for example, the model\u2019s prediction API or a cloud services API \u2014 to repeatedly query the model and collect outputs.<\/p>\n<p>They then can analyze how the model responds and use that analysis to reconstruct it.<\/p>\n<p>\u201cIt\u2019s enabling unauthorized duplication of the tools itself,\u201d says Allison Wikoff, director and Americas lead for global threat intelligence at PwC.<\/p>\n<h3 class=\"wp-block-heading\" id=\"model-inversion\">Model inversion<\/h3>\n<p>Model inversion refers to a specific extraction attack in which the adversary attempts to reconstruct or infer the data that was used to train the model.<\/p>\n<p>The name comes from the hackers \u201cinverting\u201d the model, using its outputs to reconstruct or reverse-engineer information about the inputs used to train the model.<\/p>\n<h3 class=\"wp-block-heading\" id=\"supply-chain-risks\">Supply chain risks<\/h3>\n<p>Like other software systems, AI systems are built with a combination of components that can include open-source code, open-source models, third-party models, and various sources of data. 
Any security vulnerability in those components can surface in the AI systems built on them, making AI systems susceptible to supply chain attacks, in which hackers exploit a vulnerable component to launch an attack.<\/p>\n<p>For recent examples, see \u201c<a href=\"https:\/\/www.csoonline.com\/article\/4015077\/ai-supply-chain-threats-are-looming-as-security-practices-lag.html\">AI supply chain threats loom \u2014 as security practices lag<\/a>.\u201d<\/p>\n<h3 class=\"wp-block-heading\" id=\"jailbreaking\">Jailbreaking<\/h3>\n<p>Also called model jailbreaking, this attack aims to get AI systems \u2014 primarily by engaging with LLMs \u2014 to disregard the guardrails that confine their actions and behavior, such as safeguards to prevent harmful, offensive, or unethical outputs.<\/p>\n<p>Hackers can use various techniques to execute this type of attack. For example, they could employ a role-playing exploit (aka role-play attack), using commands to instruct the AI to adopt a persona (such as a developer) that can work around the guardrails. They could disguise malicious instructions in seemingly legitimate prompts or use encoding, foreign words, or keyboard characters to bypass filters. They could also use a prompt framed as a hypothetical or research question or a series of prompts that leads to their end objective.<\/p>\n<p>Those objectives, which are also varied, include getting AI systems to write malicious code, spread problematic content, and reveal sensitive data.<\/p>\n<p>\u201cWhen there is a chat interface, there are ways to interact with it to get it to operate outside the parameters,\u201d Licato says. 
\u201cThat\u2019s the tradeoff of having an increasingly powerful reasoning system.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"counteracting-threats-to-ai-systems\">Counteracting threats to AI systems<\/h2>\n<p>While their executive colleagues jump into AI initiatives in search of enhanced productivity and innovation, CISOs must take an active role in ensuring that security for those initiatives \u2014 and the organization\u2019s AI infrastructure at large \u2014 is a top priority.<\/p>\n<p>According to a <a href=\"https:\/\/www.hackerone.com\/report\/ciso\">recent survey<\/a> from security tech company HackerOne, 84% of CISOs are now responsible for AI security and 82% now oversee data privacy. If CISOs don\u2019t <a href=\"https:\/\/www.csoonline.com\/article\/4011384\/the-cisos-5-step-guide-to-securing-ai-operations.html\">advance their security strategies to counteract attacks against AI systems<\/a> and the data that feeds them, future issues will reflect on their leadership \u2014\u00a0regardless of whether they were invited to the table when AI initiatives were conceived and launched.<\/p>\n<p>As a result, CISOs have a \u201cneed for a proactive AI security strategy,\u201d according to Constellation\u2019s Mehta.<\/p>\n<p>\u201cAI security is not just a technical challenge but also a strategic imperative requiring executive buy in and cross-functional collaboration,\u201d he writes in his 2025 report <a href=\"https:\/\/www.constellationr.com\/research\/ai-security-beyond-traditional-cyberdefenses\">AI Security Beyond Traditional Cyberdefenses: Rethinking Cybersecurity for the Age of AI and Autonomy<\/a>. \u201cData governance is foundational, because securing AI begins with ensuring the integrity and provenance of training data and model inputs. 
Security teams must develop new expertise to handle AI-driven risks, and business leaders must recognize the implications of autonomous AI systems and the governance frameworks needed to manage them responsibly.\u201d<\/p>\n<p>Strategies for assessing, managing, and counteracting the threat of attacks on AI systems are emerging. In addition to maintaining strong data governance and other fundamental cyber defense best practices, AI and security experts say CISOs and their organizations should be evaluating AI models before deploying them, monitoring AI systems in use, and <a href=\"https:\/\/www.csoonline.com\/article\/4101929\/offensive-security-takes-center-stage-in-the-ai-era.html\">using red teams to test models<\/a>.<\/p>\n<p>CISOs may need to implement specific actions to counter certain attacks, says PwC\u2019s Wikoff. For example, CISOs looking to head off model theft can monitor for suspicious queries and patterns as well as implement timeouts and rate-limit responses. Or, to help prevent evasion attacks, security leaders could employ adversarial training \u2014 essentially training models to guard against those types of attacks.<\/p>\n<p>Adopting <a href=\"https:\/\/atlas.mitre.org\/\">MITRE ATLAS<\/a> is another step. This framework, short for Adversarial Threat Landscape for Artificial-Intelligence Systems, provides a knowledge base mapping how attackers target AI systems and detailing their tactics, techniques, and procedures (TTPs).<\/p>\n<p>Security and AI experts acknowledge the challenges of taking such steps. Many CISOs are contending with more immediate threats, including <a href=\"https:\/\/www.csoonline.com\/article\/4044007\/shadow-ai-is-surging-getting-ai-adoption-right-is-your-best-defense.html\">shadow AI<\/a> and attacks that are getting faster, more sophisticated, and harder to detect, thanks in part to attackers\u2019 use of AI. 
And given that attacks on AI systems are still nascent, with some attack types still considered theoretical, CISOs face challenges in securing the resources to develop strategies and skills to counteract them.<\/p>\n<p>\u201cFor the CISO this is something that\u2019s really difficult, because attacks on AI backends is still being researched. We\u2019re at the early stages of figuring out what hackers are doing and why,\u201d Lee, of SANS, says.<\/p>\n<p>Lee and others recognize the competitive pressure on organizations to make the most of AI, yet they stress that CISOs and their executive colleagues can\u2019t let securing AI systems be an afterthought.<\/p>\n<p>\u201cThinking about what these attacks could be as they build the infrastructure is key for the CISO,\u201d says Matt Gorham, leader of PwC\u2019s Cyber and Risk Innovation Institute.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Attacks against AI systems and infrastructure are beginning to take shape in real-world instances, and security experts expect the number of these attack types will rise in coming years. In a rush to realize the benefits of AI, most organizations have played it fast and loose on security hardening when rolling out AI tools and use cases. 
As a result, experts also warn that many&#8230; <\/p>\n<p class=\"more\"><a class=\"more-link\" href=\"https:\/\/newestek.com\/?p=15427\">Read More<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-15427","post","type-post","status-publish","format-standard","hentry","category-uncategorized","is-cat-link-borders-light is-cat-link-rounded"],"_links":{"self":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/15427","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=15427"}],"version-history":[{"count":0,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/15427\/revisions"}],"wp:attachment":[{"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=15427"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=15427"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=15427"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}