New npm worm hits CI pipelines and AI coding tools

A massive Shai-Hulud-style npm supply chain worm is hitting the software ecosystem, burrowing through developer machines, CI pipelines, and AI coding tools.

Socket researchers uncovered the active attack campaign and dubbed it SANDWORM_MODE, a name derived from the “SANDWORM_*” environment variable switches embedded in the malware’s runtime control logic.

At least 19 typosquatted packages were published under multiple aliases, posing as popular developer utilities and AI-related tools. Once installed, the packages execute a multi-stage payload that harvests secrets from local environments and CI systems, then uses stolen tokens to modify other repositories.

The payload also implements a Shai-Hulud-style “dead switch” that is disabled by default but, once armed, wipes the victim’s home directory when the malware detects it has been discovered. Researchers called the campaign a “real and high-risk” threat, advising defenders to treat the packages as active compromise risks.

Typo to takeover

The campaign starts with typosquatting, where attackers publish packages with names nearly identical to legitimate ones, banking on a developer’s typo or an AI assistant hallucinating the wrong dependency.
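Defenders can screen a dependency list for near-miss names before install. A minimal sketch of that idea, using fuzzy string matching; the “known good” package list and the 0.85 similarity threshold are illustrative assumptions, not the packages Socket identified:

```python
# Flag dependency names that closely resemble, but do not exactly match,
# well-known packages -- a common typosquat pattern.
from difflib import SequenceMatcher

# Assumption: a vetted list of packages your project legitimately uses.
KNOWN_PACKAGES = ["express", "lodash", "axios", "chalk", "commander"]

def is_suspicious(name: str, known=KNOWN_PACKAGES, threshold=0.85) -> bool:
    """Return True if `name` is a near-miss of a known package name."""
    for pkg in known:
        if name == pkg:
            return False  # exact match: the legitimate package
        if SequenceMatcher(None, name, pkg).ratio() >= threshold:
            return True   # close but not identical: possible typosquat
    return False
```

Real tooling (Socket, `npm audit`, allowlist-based registries) goes much further, but even a check like this catches the single-character slips the campaign relies on.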

“The typosquatting targets several high-traffic developer utilities in the Node.js ecosystem, crypto tooling, and, perhaps most notably, AI coding tools that are seeing rapid adoption: three packages impersonate Claude Code and one targets OpenClaw, the viral AI agent that recently passed 210k stars on GitHub,” the researchers wrote in a blog post.

Once a malicious package is installed and executed, the malware hunts for sensitive credentials, including npm and GitHub tokens, environment secrets, and cloud keys. Those credentials are then used to push malicious changes into other repositories and inject new dependencies or workflows, expanding the infection chain.
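Defenders can approximate this reconnaissance themselves to see what a malicious install script would find in their own environment. A sketch of such a self-audit; the variable-name patterns are assumptions based on common token conventions, not the campaign’s actual target list:

```python
# List environment variables whose names suggest they hold secrets,
# i.e. what an npm postinstall script could read during a build.
import re

# Assumption: patterns for commonly exposed token variables.
TOKEN_PATTERNS = [
    re.compile(r"^NPM_TOKEN$"),
    re.compile(r"^GITHUB_TOKEN$"),
    re.compile(r"^AWS_(ACCESS_KEY_ID|SECRET_ACCESS_KEY)$"),
    re.compile(r".*_API_KEY$"),
]

def exposed_secrets(env: dict) -> list:
    """Return names of env vars matching secret-like patterns."""
    return sorted(
        name for name in env
        if any(p.match(name) for p in TOKEN_PATTERNS)
    )
```

Running this against `os.environ` in a CI job shows exactly which credentials an arbitrary install hook could exfiltrate, and which should be scoped down or removed.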

Additionally, the campaign uses a weaponized GitHub Action that can amplify the attack inside CI pipelines, extracting secrets during builds and enabling further propagation, the researchers added.
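Because the worm propagates by committing new workflow files into repositories it can reach, one simple detection is to diff a checkout’s workflows against a known-good baseline. A sketch of that check; the baseline filenames are a hypothetical example for your own repository:

```python
# Flag workflow files in .github/workflows that are not in a vetted
# baseline -- a possible sign of an injected malicious Action.
from pathlib import Path

# Assumption: the workflows your repo is supposed to contain.
BASELINE = {"ci.yml", "release.yml"}

def new_workflows(repo_root: str, baseline=BASELINE) -> list:
    """Return workflow filenames present on disk but absent from the baseline."""
    wf_dir = Path(repo_root) / ".github" / "workflows"
    if not wf_dir.is_dir():
        return []  # no workflows directory at all
    return sorted(p.name for p in wf_dir.glob("*.y*ml") if p.name not in baseline)
```

A non-empty result does not prove compromise, but any workflow nobody on the team added is worth treating as hostile until reviewed.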

Poisoning the AI developer interface

The campaign was specifically flagged for its direct targeting of AI coding assistants. The malware deploys a malicious Model Context Protocol (MCP) server and injects it into configurations of popular AI tools, embedding itself as a trusted component in the assistant’s environment.
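Since the injected MCP server lives in an ordinary client config file, it can be hunted the same way. A minimal audit sketch, assuming the common `{"mcpServers": {name: {...}}}` JSON layout used by popular MCP clients; the allowlist names are hypothetical:

```python
# Compare the MCP servers declared in a client config against an
# allowlist of servers the user actually installed.
import json

# Assumption: the MCP servers you knowingly configured.
ALLOWED_SERVERS = {"filesystem", "github"}

def unexpected_servers(config_text: str, allowed=ALLOWED_SERVERS) -> list:
    """Return MCP server names in the config that are not on the allowlist."""
    config = json.loads(config_text)
    servers = config.get("mcpServers", {})
    return sorted(name for name in servers if name not in allowed)
```

Any server entry you did not add yourself, especially one that appeared after installing an npm package, deserves the same scrutiny as an unknown startup item.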

Once this is achieved, prompt-injection techniques can trick the AI into retrieving sensitive local data, which can include SSH keys or cloud credentials, and pass it to the attacker without the user’s knowledge.
