Attackers too are looking to cash in on the AI coding craze, adapting their supply-chain techniques to target coding agents themselves.
Many AI agents autonomously scan package registries such as NPM and PyPI for components to integrate into their coding projects, and attackers are beginning to take advantage of this. Bait packages with persuasive descriptions and legitimate functionality have cropped up on such registries, while packages that target names that AI coding agents are likely to hallucinate as dependencies are another attack vector on the horizon.
Researchers from security firm ReversingLabs have been tracking one such supply-chain attack that uses “LLM Optimization (LLMO) abuse and knowledge injection” to make packages more likely to be discovered and chosen by AI agents. Dubbed PromptMink, the attack was attributed to Famous Chollima, one of North Korea’s APT groups tasked with generating funds for the regime by targeting developers and users from the cryptocurrency and fintech space.
“This campaign presents us with the new frontier in software supply chain security: AI coding agents manipulated into installing and using malicious dependencies in the code they generate,” the researchers wrote in their report. “The underlying problem is, in principle, not much different from the well established pattern of cybercriminals and malicious actors socially engineering developers to use malicious packages in their codebase. Where it differs is in the ability of the threat actors to test their lure before it is deployed.”
An evolving campaign
North Korean threat actors commonly use social engineering to trick developers into installing malware, whether through fake job interviews or by publishing rogue software components that could appeal to developers from specific industries.
The PromptMink campaign appears to have started last September with two malicious packages called @hash-validator/v2 and @solana-launchpad/sdk. The SDK was used as a bait package with legitimate functionality intended to be discovered by developers, while hash-validator, a dependency for the SDK, contained a JavaScript infostealer.
This combo of a lure package and a malicious dependency appears to be a central technique used by the group to make their campaigns more resilient. The bait packages have a better chance of remaining undetected for longer, accumulating downloads and history to appear more credible.
Multiple second-layer malicious packages were rotated over time as part of the campaign, including aes-create-ipheriv, jito-proper-excutor, jito-sub-aes-ipheriv, and @validate-sdk/v2. All were related to cryptocurrency networks, posing as tools to work with cryptographic hashes and functions. The bait packages were also diversified over time with @validate-ethereum-address/core and several others, expanding across multiple package registries and programming languages such as Python and Rust.
The attack later evolved to include additional obfuscation techniques and malicious actions — for example, deploying an attacker-controlled SSH key on victims’ machines for direct remote access, and archiving and exfiltrating entire code projects from compromised environments.
One notable development was the pivot to compiled payloads to complicate detection. For example, in February the @validate-sdk/v2 package started bundling Single Executable Applications (SEAs) — self-contained applications that include JS code with the full Node.js interpreter. SEAs aren’t typically distributed as part of NPM packages because users already have Node.js installed locally on their machines.
In March, the attackers pivoted from SEAs to pre-compiled malicious Node.js add-ons written in Rust with the NAPI-RS project. This was likely done to reduce payload size, as SEAs are unusually large, exceeding 100MB in some cases.
Using LLMs to trick LLMs
ReversingLabs’ researchers observed clear signs of vibe coding in the creation of these malicious components, including LLM-generated code comments. However, something else stood out: the level of detail in their README files and the way the documentaton boasted about how effective these packages were at performing their tasks.
The researchers questioned whether this was intended to make the rogue components more appealing to developers, who are typically the target of such attacks. But the overly persuasive language made more sense if the intended targets were LLM-powered autonomous coding agents, and it wasn’t long before they confirmed this was likely the case.
In a January 2026 post on Moltbook, a Reddit-like platform where AI agents make posts and discuss topics autonomously, one bot described how it created a memecoin and used the @solana-launchpad/sdk package because it had one of the needed functions. It is possible the post was generated intentionally by an AI bot controlled by the attackers. But it wasn’t the only example of an AI agent falling for the bait package.
The researchers later found a legitimate project called openpaw-graveyard that was developed as part of the Solana Graveyard Hackathon and included the @solana-launchpad/sdk as a dependency. The repository history showed the dependency had been added in a commit co-authored by Claude Opus.
“This transforms the technique from social engineering to a combination of LLM Optimization (LLMO) abuse and knowledge injection,” the researchers concluded. “In the context of this campaign, the goal is to make the LLM likely to recommend using the malicious package by making the documentation as believable (knowledge injection) and as appropriate as possible in the project that the specific LLM coding agent is working on.”
‘Slopsquatting’
This AI agent supply-chain risk isn’t limited to specifically crafted package descriptions and documentation. Coding agents can also hallucinate package names entirely. Previous research has shown that this happens often and predictably enough to make it something attackers could abuse.
Back in January, Aikido Security researcher Charlie Eriksen registered an npm package called react-codeshift that was hallucinated by an LLM and subsequently made its way into 237 GitHub repositories.
It started with someone vibe coding a collection of agent skills back in October for migrating coding projects to different frameworks. That collection included two skills — react-modernization and dependency-upgrade — that invoked the hallucinated react-codeshift package via npx, a CLI tool bundled with npm for downloading and executing Node.js packages on the fly without installation.
Agent skills are markdown or JSON files that contain instructions, metadata, and code examples to teach AI agents how to perform certain tasks. They are automatically activated during agent operation when specific keywords are encountered in prompts.
Eriksen registered the react-codeshift package on NPM and immediately started seeing downloads, suggesting that skills with the hallucinated package names were being used in practice. And not just with npx but with other Node.js package installers as well, because the original skills were cloned and modified by other developers.
“The supply chain just got a new link, made of LLM dreams,” said Eriksen, who called the new threat “slopsquatting.”
“This was a hallucination. It spread to 237 repositories. It generated real download attempts. The only reason it didn’t become an attack vector is because I got there first,” he said.
Vibe coding agents need stronger security controls
As organizations rush to incorporate AI agents into business workflows and software development pipelines, their security controls need to keep pace with the novel attack vectors these agents introduce.
The US Cybersecurity and Infrastructure Security Agency, the US National Security Agency, and their Five Eyes partners recently published a joint advisory on the adoption of agentic AI services. Among the many recommendations, the agencies advise organizations to maintain trusted registries of approved third-party components, restrict AI agents to allow-listed tools and versions, and require human approval before high-impact actions.
“Poor or deliberately misleading tool descriptions can cause agents to select tools unreliably, with persuasive descriptions chosen more often,” the agencies warned, effectively confirming that LLMs can be socially engineered through documentation.
AI coding agents should not be allowed to install dependencies without developer review, and every suggested package should be treated as untrusted by default until their transient dependencies are reviewed. Development teams should implement Software Bill of Materials (SBOM) practices so they can track and audit the components used in their development pipelines.