{"id":14365,"date":"2025-07-01T07:55:04","date_gmt":"2025-07-01T07:55:04","guid":{"rendered":"https:\/\/newestek.com\/?p=14365"},"modified":"2025-07-01T07:55:04","modified_gmt":"2025-07-01T07:55:04","slug":"ai-supply-chain-threats-loom-as-security-practices-lag","status":"publish","type":"post","link":"https:\/\/newestek.com\/?p=14365","title":{"rendered":"AI supply chain threats loom \u2014 as security practices lag"},"content":{"rendered":"<div>\n<div id=\"remove_no_follow\">\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<section class=\"wp-block-bigbite-multi-title\">\n<div class=\"container\"><\/div>\n<\/section>\n<p>The AI software supply chain is rapidly expanding to include not only open-source development tools but also collaborative platforms where developers share custom models, agents, prompts, and other resources. And with this expanding use of third-party AI components and services comes an expanded security threat \u2014 one that in many ways may be more complex, obscured, and pernicious than traditional software supply chain issues.<\/p>\n<p>While companies rush to experiment with AI, often with <a href=\"https:\/\/www.csoonline.com\/article\/3529615\/companies-skip-security-hardening-in-rush-to-adopt-ai.html\">less oversight than for traditional software development<\/a>, attackers are quickly realizing that these new platforms and shareable assets, which fall outside typical security monitoring, can be exploited to compromise systems and data.<\/p>\n<p>\u201cThis pattern isn\u2019t new; it echoes what we\u2019ve seen repeatedly in software development,\u201d Brian Fox, CTO at software supply chain management company Sonatype, tells CSO. \u201cWhen something new and exciting comes along, whether it\u2019s containerization, serverless, or now AI, organizations move fast to reap the benefits, but security often lags behind. 
What\u2019s unique with AI is the added complexity: pre-trained models, opaque data sources, and new attack vectors like prompt injection. These elements make traditional security practices harder to apply.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"flaws-and-malicious-code-in-ai-dependencies-on-the-rise\">Flaws and malicious code in AI dependencies on the rise<\/h2>\n<p>Just last week, researchers from application security firm Backslash warned that <a href=\"https:\/\/www.csoonline.com\/article\/4012712\/misconfigured-mcp-servers-expose-ai-agent-systems-to-compromise.html\">hundreds of publicly shared Model Context Protocol (MCP) servers have insecure configurations<\/a> that can open arbitrary command execution holes on systems that deploy them. MCP servers link large language models (LLMs) to third-party services, data sources, and tools to provide improved reasoning context, making them an indispensable tool for building AI agents.<\/p>\n<p>Earlier in June, researchers from AI security startup Noma Security <a href=\"https:\/\/noma.security\/blog\/how-an-ai-agent-vulnerability-in-langsmith-could-lead-to-stolen-api-keys-and-hijacked-llm-responses\/\">warned about a feature in LangChain\u2019s Prompt Hub<\/a> that could be abused by attackers to distribute malicious prompts that steal API tokens and sensitive data. The LangChain Prompt Hub is part of LangSmith, a platform for building, testing, and monitoring the performance of AI-based applications.<\/p>\n<p>LangSmith users can share complex system prompts with one another in Prompt Hub, and these behave like AI agents. 
One feature available when developing such prompts is the ability to specify a proxy server through which all API requests to an LLM provider are routed.<\/p>\n<p>\u201cThis newly identified vulnerability exploited unsuspecting users who adopt an agent containing a pre-configured malicious proxy server uploaded to \u2018Prompt Hub\u2019 (which is against LangChain ToS),\u201d Noma Security\u2019s researchers wrote. \u201cOnce adopted, the malicious proxy discreetly intercepted all user communications \u2014 including sensitive data such as API keys (including OpenAI API Keys), user prompts, documents, images, and voice inputs \u2014 without the victim\u2019s knowledge.\u201d<\/p>\n<p>The LangChain team has since added warnings to agents that contain custom proxy configurations, but this vulnerability highlights how well-intentioned features can have serious security repercussions if users don\u2019t pay attention, especially on platforms where they copy and run other people\u2019s code on their systems.<\/p>\n<p>The problem, as Sonatype\u2019s Fox noted, is that with AI the risk expands beyond traditional executable code. Developers might more easily understand why running software components from repositories such as PyPI, npm, NuGet, and Maven Central on their machines carries significant risks if those components are not vetted first by their security teams. But they might not think the same risks apply when testing a system prompt in an LLM or even a custom machine learning (ML) model shared by others.<\/p>\n<p>Attackers fully understand that the AI supply chain is lagging behind traditional software development in oversight and have started taking advantage of it. 
Earlier this year, <a href=\"https:\/\/www.csoonline.com\/article\/3819920\/attackers-hide-malicious-code-in-hugging-face-ai-model-pickle-files.html\">researchers found malicious code inside AI models hosted on Hugging Face<\/a>, the largest platform for sharing machine learning assets.<\/p>\n<p>Those attacks took advantage of Python\u2019s Pickle serialization format. Because of the popularity of PyTorch, a widely used ML library written in Python, Pickle has become a common way to store and distribute ML models. Few security tools can yet scan such files for malicious code.<\/p>\n<p>More recently, researchers <a href=\"https:\/\/www.csoonline.com\/article\/3998351\/poisoned-models-hidden-in-fake-alibaba-sdks-show-challenges-of-securing-ai-supply-chains.html\">found a rogue component on PyPI that masqueraded as an Alibaba AI SDK<\/a> and contained a poisoned model in Pickle format with hidden malicious code inside.<\/p>\n<h2 class=\"wp-block-heading\" id=\"security-tools-still-catching-up-to-ai-supply-chain-risks\">Security tools still catching up to AI supply chain risks<\/h2>\n<p>\u201cMost tools today aren\u2019t fully equipped to scan AI models or prompts for malicious code, and attackers are already exploiting that gap,\u201d Sonatype\u2019s Fox says. \u201cWhile some early solutions are emerging, organizations shouldn\u2019t wait. They need to extend existing security policies to cover these new components now \u2014 because the risk is real and growing.\u201d<\/p>\n<p>Ken Huang, CAIO of DistributedApps.ai and co-chair of the Cloud Security Alliance (CSA)\u2019s AI Safety Working Group, concurs: \u201cTeams often prioritize speed and innovation over rigorous vetting, especially as vibe coding makes it easier to generate and share code rapidly. 
This environment fosters shortcuts and overconfidence in AI outputs, leading to the integration of insecure or unverified components and increasing the likelihood of supply chain compromise.\u201d<\/p>\n<p><a href=\"https:\/\/www.infoworld.com\/article\/3960574\/vibe-code-or-retire.html\">Vibe coding<\/a> is the increasingly common practice of developing entire applications with the help of LLM-powered code assistants, with the human acting as an overseer who gives input through natural language prompts. Security researchers have warned that this practice can result in code with hard-to-detect errors and vulnerabilities.<\/p>\n<p>The CSA, a nonprofit industry association that promotes security assurance practices in cloud computing, recently published an <a href=\"https:\/\/cloudsecurityalliance.org\/artifacts\/agentic-ai-red-teaming-guide\">Agentic AI Red Teaming Guide<\/a> co-authored by Huang and more than 50 industry contributors and reviewers. One of the chapters tackles testing for AI agent supply chain and dependency attacks that can lead to unauthorized access, data breaches, or system failures.<\/p>\n<h2 class=\"wp-block-heading\" id=\"a-comprehensive-mlsecops-approach\">A comprehensive MLSecOps approach<\/h2>\n<p>\u201cDependency scanners, lockfiles, and hash verification help pin packages to trusted versions and identify unsafe or hallucinated dependencies,\u201d Huang tells CSO. \u201cHowever, not all threats \u2014 such as subtle data poisoning or prompt-based attacks \u2014 are detectable via automated scans, so layered defenses and human review remain critical.\u201d<\/p>\n<p>Huang\u2019s recommendations include:<\/p>\n<ul class=\"wp-block-list\">\n<li><strong>Vibe coding risk mitigation:<\/strong> Recognize that vibe coding can introduce insecure or unnecessary dependencies, and enforce manual review of AI-generated code and libraries. 
Encourage skepticism and verification of all AI-generated suggestions, especially package names and framework recommendations.<\/li>\n<li><strong>MLBOM and AIBOM:<\/strong> Establish a machine learning or AI bill of materials to give enterprises detailed inventories of all datasets, models, and code dependencies, offering transparency and traceability for AI-specific assets. Model cards and system cards help document intended use, limitations, and ethical considerations, but do not address the technical supply chain risks. An MLBOM or AIBOM complements these by focusing on provenance and integrity.<\/li>\n<li><strong>Continuous scanning and monitoring:<\/strong> Integrate model and dependency scanners into CI\/CD pipelines, and monitor for <a href=\"https:\/\/www.csoonline.com\/article\/3822459\/what-is-anomaly-detection-behavior-based-analysis-for-cyber-threats.html\">anomalous behaviors<\/a> post-deployment.<\/li>\n<li><strong>Zero trust and least privilege:<\/strong> Treat all third-party AI assets as untrusted by default, isolate and sandbox new models and agents, and restrict permissions for AI agents.<\/li>\n<li><strong>Policy alignment:<\/strong> Ensure that AI platforms and repositories are covered by existing software supply chain security policies, updated to address the unique risks of AI and vibe coding.<\/li>\n<\/ul>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>The AI software supply chain is rapidly expanding to include not only open-source development tools but also collaborative platforms where developers share custom models, agents, prompts, and other resources. And with this expanding use of third-party AI components and services comes an expanded security threat \u2014 one that in many ways may be more complex, obscured, and pernicious than traditional software supply chain issues. 
While&#8230; <\/p>\n<p class=\"more\"><a class=\"more-link\" href=\"https:\/\/newestek.com\/?p=14365\">Read More<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-14365","post","type-post","status-publish","format-standard","hentry","category-uncategorized","is-cat-link-borders-light is-cat-link-rounded"],"_links":{"self":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14365","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=14365"}],"version-history":[{"count":0,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14365\/revisions"}],"wp:attachment":[{"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=14365"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=14365"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=14365"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}