A new malicious campaign linked to the Shai-Hulud worm is making its way through the npm ecosystem. According to findings from Wiz, the campaign has published over 25,000 GitHub repositories of stolen data, spanning more than 350 users.
Shai-Hulud was a worm that infected the npm registry back in September. Now a new worm, spelled Sha1-Hulud, is appearing in the ecosystem, though it is unclear at the time of writing whether the two were made by the same threat actor.
Researchers at Wiz and Aikido have confirmed that Sha1-Hulud was uploaded to the npm ecosystem between November 21 and 23. They also report that projects from Zapier, ENS Domains, PostHog, and Postman were among those trojanized, and that newly compromised packages are still being discovered.
Like Shai-Hulud, this new malware also steals developer secrets, though Garrett Calpouzos, principal security researcher at Sonatype, explained that the mechanism is slightly different, with two files instead of one. “The first checks for and installs a non-standard ‘bun’ JavaScript runtime, and then uses bun to execute the actual rather massive malicious source file that publishes stolen data to .json files in a randomly named GitHub repository,” he told SD Times.
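Based on that description, the infection chain resembles a standard npm preinstall hook. The sketch below is purely illustrative, assuming hypothetical package and file names that are not taken from the researchers’ reports: a trojanized package.json runs a small loader at install time, and the loader bootstraps Bun before handing off to the much larger payload.

```json
{
  "name": "some-trojanized-package",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "node setup_bun.js"
  }
}
```

Here setup_bun.js stands in for the first file, which checks for the Bun runtime and installs it if missing, then invokes it on the second, far larger malicious source file that performs the exfiltration.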
Wiz believes this preinstall-phase execution significantly increases the blast radius across build and runtime environments, since the payload runs during npm install, before any of the package’s code is ever imported.
Other differences, according to Aikido, are that it creates a repository of stolen data with a random name instead of a hardcoded one, can infect up to 100 packages instead of 20, and, if it cannot authenticate with GitHub or npm, wipes all files in the user’s home directory.
The researchers from Wiz recommend that developers remove and replace compromised packages, rotate their secrets, audit their GitHub and CI/CD environments, and then harden their pipelines by restricting lifecycle scripts in CI/CD, limiting outbound network access from build systems, and using short-lived scoped automation tokens.
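One concrete way to restrict lifecycle scripts is npm’s built-in ignore-scripts option, which stops preinstall and postinstall hooks from executing at all. The snippet below is an illustrative hardening step assuming an npm-based CI job, not Wiz’s verbatim guidance:

```sh
# Install dependencies without running any lifecycle scripts,
# so a malicious preinstall hook never executes in CI
npm ci --ignore-scripts

# Alternatively, enforce it for every npm invocation in the build environment
echo "ignore-scripts=true" >> .npmrc
```

Some legitimate packages do rely on install scripts (native builds, for example), so teams that adopt this typically pair it with an explicit, audited npm rebuild step for the few packages that need one.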
Sonatype’s Calpouzos also said that the size and structure of the file confuse AI analysis tools, because it is larger than a typical context window, making it hard for LLMs to keep track of what they are reading. He tested this by asking ChatGPT and Gemini to analyze the file and got different results each time: the models search for obvious malware patterns, such as calls to suspicious domains, find none, and conclude the files are legitimate.
“It’s a clever evolution. The attackers aren’t just hiding from humans, they’re learning to hide from machines too,” Calpouzos said.
