{"id":16178,"date":"2026-05-05T21:21:41","date_gmt":"2026-05-05T21:21:41","guid":{"rendered":"https:\/\/newestek.com\/?p=16178"},"modified":"2026-05-05T21:21:41","modified_gmt":"2026-05-05T21:21:41","slug":"supply-chain-attacks-take-aim-at-your-ai-coding-agents","status":"publish","type":"post","link":"https:\/\/newestek.com\/?p=16178","title":{"rendered":"Supply-chain attacks take aim at your AI coding agents"},"content":{"rendered":"<div>\n<div id=\"remove_no_follow\">\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<section class=\"wp-block-bigbite-multi-title\">\n<div class=\"container\"><\/div>\n<\/section>\n<p>Attackers too are looking to cash in on the AI coding craze, adapting their supply-chain techniques to target coding agents themselves.<\/p>\n<p>Many AI agents autonomously scan package registries such as NPM and PyPI for components to integrate into their coding projects, and attackers are beginning to take advantage of this. Bait packages with persuasive descriptions and legitimate functionality have cropped up on such registries, while packages that target names that AI coding agents are likely to hallucinate as dependencies are another attack vector on the horizon.<\/p>\n<p>Researchers from security firm ReversingLabs have been tracking one such supply-chain attack that uses \u201cLLM Optimization (LLMO) abuse and knowledge injection\u201d to make packages more likely to be discovered and chosen by AI agents. 
<a href=\"https:\/\/www.reversinglabs.com\/blog\/claude-promptmink-malware-crypto\">Dubbed PromptMink<\/a>, the attack was attributed to Famous Chollima, one of North Korea\u2019s APT groups tasked with generating funds for the regime by targeting developers and users from the cryptocurrency and fintech space.<\/p>\n<p>\u201cThis campaign presents us with the new frontier in software supply chain security: AI coding agents manipulated into installing and using malicious dependencies in the code they generate,\u201d the researchers wrote in their report. \u201cThe underlying problem is, in principle, not much different from the well established pattern of cybercriminals and malicious actors socially engineering developers to use malicious packages in their codebase. Where it differs is in the ability of the threat actors to test their lure before it is deployed.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"an-evolving-campaign\">An evolving campaign<\/h2>\n<p>North Korean threat actors commonly use social engineering to trick developers into installing malware, whether <a href=\"https:\/\/www.csoonline.com\/article\/3518577\/fake-recruitment-campaign-targets-developers-using-trojanized-python-packages.html\">through fake job interviews<\/a> or by publishing rogue software components that could appeal to developers from specific industries.<\/p>\n<p>The PromptMink campaign appears to have started last September with two malicious packages called @hash-validator\/v2 and @solana-launchpad\/sdk. The SDK was used as a bait package with legitimate functionality intended to be discovered by developers, while hash-validator, a dependency for the SDK, contained a JavaScript infostealer.<\/p>\n<p>This combo of a lure package and a malicious dependency appears to be a central technique used by the group to make their campaigns more resilient. 
The bait packages have a better chance of remaining undetected for longer, accumulating downloads and history to appear more credible.<\/p>\n<p>Multiple second-layer malicious packages were rotated over time as part of the campaign, including aes-create-ipheriv, jito-proper-excutor, jito-sub-aes-ipheriv, and @validate-sdk\/v2. All were related to cryptocurrency networks, posing as tools to work with cryptographic hashes and functions. The bait packages were also diversified over time with @validate-ethereum-address\/core and several others, expanding across multiple package registries and programming languages such as Python and Rust.<\/p>\n<p>The attack later evolved to include additional obfuscation techniques and malicious actions \u2014 for example, deploying an attacker-controlled SSH key on victims\u2019 machines for direct remote access, and archiving and exfiltrating entire code projects from compromised environments.<\/p>\n<p>One notable development was the pivot to compiled payloads to complicate detection. For example, in February the @validate-sdk\/v2 package started bundling Single Executable Applications (SEAs) \u2014 self-contained applications that bundle JavaScript code together with the full Node.js runtime. SEAs aren\u2019t typically distributed as part of NPM packages because users already have Node.js installed locally on their machines.<\/p>\n<p>In March, the attackers moved from SEAs to pre-compiled malicious Node.js add-ons written in Rust using the NAPI-RS framework. This was likely done to reduce payload size, as SEAs are unusually large, exceeding 100MB in some cases.<\/p>
However, something else stood out: the level of detail in their README files and the way the documentation boasted about how effective these packages were at performing their tasks.<\/p>\n<p>The researchers questioned whether this was intended to make the rogue components more appealing to developers, who are typically the target of such attacks. But the overly persuasive language made more sense if the intended targets were LLM-powered autonomous coding agents, and it wasn\u2019t long before they confirmed this was likely the case.<\/p>\n<p>In a January 2026 post on Moltbook, a Reddit-like platform where AI agents make posts and discuss topics autonomously, one bot described how it created a memecoin and used the @solana-launchpad\/sdk package because it had one of the needed functions. It is possible the post was generated intentionally by an AI bot controlled by the attackers. But it wasn\u2019t the only example of an AI agent falling for the bait package.<\/p>\n<p>The researchers later found a legitimate project called openpaw-graveyard that was developed as part of the Solana Graveyard Hackathon and included the @solana-launchpad\/sdk as a dependency. The repository history showed the dependency had been added in a commit co-authored by Claude Opus.<\/p>\n<p>\u201cThis transforms the technique from social engineering to a combination of LLM Optimization (LLMO) abuse and knowledge injection,\u201d the researchers concluded. \u201cIn the context of this campaign, the goal is to make the LLM likely to recommend using the malicious package by making the documentation as believable (knowledge injection) and as appropriate as possible in the project that the specific LLM coding agent is working on.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"slopsquatting\">\u2018Slopsquatting\u2019<\/h2>\n<p>This AI agent supply-chain risk isn\u2019t limited to specifically crafted package descriptions and documentation. Coding agents can also hallucinate package names entirely. 
Previous research has shown that this happens often and predictably enough to make it something attackers could abuse.<\/p>\n<p>Back in January, Aikido Security researcher Charlie Eriksen registered <a href=\"https:\/\/www.aikido.dev\/blog\/agent-skills-spreading-hallucinated-npx-commands\">an npm package called react-codeshift that was hallucinated by an LLM<\/a> and subsequently made its way into 237 GitHub repositories.<\/p>\n<p>It started with someone vibe coding a collection of agent skills back in October for migrating coding projects to different frameworks. That collection included two skills \u2014 react-modernization and dependency-upgrade \u2014 that invoked the hallucinated react-codeshift package via npx, a CLI tool bundled with npm for downloading and executing Node.js packages on the fly without installation.<\/p>\n<p>Agent skills are markdown or JSON files that contain instructions, metadata, and code examples to teach AI agents how to perform certain tasks. They are automatically activated during agent operation when specific keywords are encountered in prompts.<\/p>\n<p>Eriksen registered the react-codeshift package on NPM and immediately started seeing downloads, suggesting that skills with the hallucinated package names were being used in practice. And not just with npx but with other Node.js package installers as well, because the original skills were cloned and modified by other developers.<\/p>\n<p>\u201cThe supply chain just got a new link, made of LLM dreams,\u201d said Eriksen, who called the new threat \u201cslopsquatting.\u201d<\/p>\n<p>\u201cThis was a hallucination. It spread to 237 repositories. It generated real download attempts. 
The only reason it didn\u2019t become an attack vector is because I got there first,\u201d he said.<\/p>\n<h2 class=\"wp-block-heading\" id=\"vibe-coding-agents-need-stronger-security-controls\">Vibe coding agents need stronger security controls<\/h2>\n<p>As organizations <a href=\"https:\/\/www.csoonline.com\/article\/3529615\/companies-skip-security-hardening-in-rush-to-adopt-ai.html\">rush to incorporate AI agents<\/a> into business workflows and software development pipelines, their security controls need to keep pace with the novel attack vectors these agents introduce.<\/p>\n<p>The US Cybersecurity and Infrastructure Security Agency, the US National Security Agency, and their Five Eyes partners recently published <a href=\"https:\/\/www.csoonline.com\/article\/4166479\/security-agencies-draw-red-lines-around-agentic-ai-deployments.html\">a joint advisory<\/a> on the adoption of agentic AI services. Among the many recommendations, the agencies advise organizations to maintain trusted registries of approved third-party components, restrict AI agents to allow-listed tools and versions, and require human approval before high-impact actions.<\/p>\n<p>\u201cPoor or deliberately misleading tool descriptions can cause agents to select tools unreliably, with persuasive descriptions chosen more often,\u201d the agencies warned, effectively confirming that LLMs can be socially engineered through documentation.<\/p>\n<p>AI coding agents should not be allowed to install dependencies without developer review, and every suggested package should be treated as untrusted by default until its transitive dependencies are reviewed. 
Development teams should implement Software Bill of Materials (SBOM) practices so they can track and audit the components used in their development pipelines.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Attackers too are looking to cash in on the AI coding craze, adapting their supply-chain techniques to target coding agents themselves. Many AI agents autonomously scan package registries such as NPM and PyPI for components to integrate into their coding projects, and attackers are beginning to take advantage of this. Bait packages with persuasive descriptions and legitimate functionality have cropped up on such registries, while&#8230; <\/p>\n<p class=\"more\"><a class=\"more-link\" href=\"https:\/\/newestek.com\/?p=16178\">Read More<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-16178","post","type-post","status-publish","format-standard","hentry","category-uncategorized","is-cat-link-borders-light 
is-cat-link-rounded"],"_links":{"self":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/16178","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=16178"}],"version-history":[{"count":0,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/16178\/revisions"}],"wp:attachment":[{"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=16178"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=16178"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=16178"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}