
The Axios compromise shows how time-dependent dependency resolution makes exposure harder to detect and contain.

April 1, 2026
13 min read


Yesterday, we reported on a supply chain attack targeting Axios that introduced a malicious dependency (plain-crypto-js) into specific npm releases.
At first glance, the scope seemed contained.
Over the past 24 hours, we’re seeing many teams focus on checking their lockfiles and node_modules directories, but that only captures part of the picture, especially when tools are executed dynamically via npx.
During the exposure window, widely used tools, including CI systems, developer CLIs, build tools like Nx, and even MCP servers, could resolve the compromised version through normal dependency ranges, often without explicitly depending on Axios at all.
This incident is one of the clearest examples of dynamics that we have been warning about for years. Modern dependency resolution makes incidents like this far harder to reason about and far broader in impact than they initially appear.
This post explains how that happens, where common assumptions break down (especially around lockfiles and npx), and why the blast radius is often larger than it looks.
A malicious version of Axios (1.14.1) was published to npm. That version introduced a new dependency (plain-crypto-js@4.2.1) containing a multi-stage malware payload.
Any project installing Axios during that window could pull the malicious version. If Axios was already resolved in your lockfile and installs respected that lockfile, you were likely protected. That’s where most explanations stop. During this attack, we have observed common workflows where this assumption does not hold, particularly when tools are executed dynamically via npx.
What’s much harder to understand is how many systems could have installed that version without ever explicitly depending on Axios.
This comes down to one detail:
Most packages do not pin exact dependency versions.
Instead, they use version ranges like:
axios: "^1.13.5"
That range means: any version greater than or equal to 1.13.5 and below 2.0.0 is acceptable.
When axios@1.14.1 was published, it became the default resolution for that range, with no code changes and no alerts. Anything freshly installed simply picked it up.
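To make the mechanics concrete, here is a minimal, dependency-free sketch of how a caret range is evaluated for versions at or above 1.0.0. Real package managers use the full semver implementation (including prerelease handling); this captures only the core rule: same major version, and at least the declared minimum.

```javascript
// Simplified caret-range check for versions >= 1.0.0.
// Real resolvers use the full semver spec; this is a sketch of the core rule.
function satisfiesCaret(range, version) {
  const min = range.replace(/^\^/, "").split(".").map(Number);
  const v = version.split(".").map(Number);
  if (v[0] !== min[0]) return false; // major version must match
  for (let i = 1; i < 3; i++) {      // minor.patch must be >= the declared minimum
    if (v[i] > min[i]) return true;
    if (v[i] < min[i]) return false;
  }
  return true;                       // exactly the minimum version
}

console.log(satisfiesCaret("^1.13.5", "1.14.1")); // true:  the malicious release matches
console.log(satisfiesCaret("^1.13.5", "1.13.4")); // false: below the declared minimum
console.log(satisfiesCaret("^1.13.5", "2.0.0"));  // false: different major version
```

This is why a newly published 1.14.1 instantly became the preferred resolution for every package declaring ^1.x ranges.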
It’s easy to focus on a single example, but the pattern is widespread.
The key conditions for exposure were simple:

- A dependency range that matched 1.14.1
- A fresh dependency resolution (e.g. npm install without a lockfile, or installing a CLI outside a project context)

Under those conditions, the package manager will select the malicious version by default.
During the exposure window, a large number of widely used tools met these conditions.
Note: None of the packages listed below were compromised, and their dependency declarations are typical and appropriate for the npm ecosystem.
Using semver ranges is a deliberate design tradeoff that enables compatibility and deduplication across the dependency graph. These examples illustrate how that same mechanism can expand the blast radius of a short-lived malicious release.
The examples below are not exhaustive. They illustrate how common this pattern is across CI tooling, CLIs, and frameworks:
- Datadog CI (@datadog/datadog-ci): declares axios: "^1.13.5" across multiple sub-packages, such as @datadog/datadog-ci-plugin-coverage. npx execution during the window resolves to 1.14.1.
- Other CLIs in the sample declare ranges as broad as ^1.1.3 and ^1.12.2 for Axios.
- AWS Amplify (@aws-amplify/cli, others): run via npx or installed globally, declares an Axios range of ^1.11.0.
- Gatsby: declares ^1.6.4. Project scaffolding (npx gatsby new) or fresh installs would resolve to the latest matching version.
- Nx: declares ^1.2.0.
- wait-on: declares ^1.13.5.

In all of these cases, a fresh resolution during the exposure window would pull in the compromised version.
In one case we observed, a CI pipeline running a CLI via npx pulled in the malicious dependency through a transitive Axios range, producing command-and-control traffic during a build step.
This is what makes the blast radius unintuitive.
The question is not:
“Do you depend on Axios?”
It is:
“Did anything you executed resolve Axios during that window?”
Looking beyond traditional CI and CLI tooling, similar patterns show up in MCP servers and agent-oriented packages.
In a sample of MCP servers from a public leaderboard, a significant portion included Axios ranges that would have resolved to the compromised version during the exposure window.
A few examples:
- One server depends on pipenet@1.4.0, with range ^1.7.3.
- Another pins Axios at 1.13.5, but multiple transitive dependencies, including @1password/connect, ibm-cloud-sdk-core, snowflake-sdk, and others, use ranges such as ^1.10.0, ^1.13.5, and ^1.6.2, which would resolve to 1.14.1 during the attack window.
- One declares ^1.13.6, both directly and via agnost@0.1.10.
- One depends on agentic-flow@2.0.7 and pipenet@1.4.0, with range ^1.12.2.
- One uses @mendable/firecrawl-js@4.15.2, with range ^1.13.5.
- Others declare ranges such as ^1.11.0.

To understand how far this pattern extends, we looked at a sample of widely used SDKs and infrastructure packages commonly found in production environments.
They are core integrations used across CI systems, backend services, and developer workflows. Across this sample, the same pattern holds: broad semver ranges and transitive usage.
A few examples:
- @sendgrid/client@8.1.6, with range ^1.12.0.
- @slack/web-api, with range ^1.12.0.
- ibm-cloud-sdk-core and @sap/xssec, with mixed ranges including ^1.6.x.
- @apimatic/axios-client-adapter, with range ^1.8.4.
- Many others, with ranges such as ^1.10.0, ^1.13.4, ^1.13.5, ^1.7.4, and even a bare ^1.

While this post focuses on the Axios compromise, this pattern is not unique. Recent incidents, including the 2025 hijacking of the widely used is package and maintainer compromises affecting packages in the eslint and prettier ecosystems, have shown how malicious versions can be introduced and propagate quickly through dependency graphs. Any widely used package with broad semver ranges and transitive adoption can exhibit the same behavior under the right conditions.
There are three compounding factors that make incidents like this difficult to scope.
Most published packages intentionally avoid pinning dependencies. Pinning exact versions in a published package would force every consumer to install that exact version, leading to duplication and version conflicts across the dependency tree.
Using semver ranges allows package managers to share compatible versions across dependencies, reducing install size and avoiding conflicts. This is a deliberate ecosystem tradeoff, not negligence.
It keeps ecosystems flexible and avoids bloat, but it also means dependency graphs are constantly shifting based on what is available in the registry.
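The deduplication benefit is easy to see in miniature. This sketch (toy data, hypothetical dependent names, simplified caret semantics) shows how one installed copy can satisfy two different declared ranges:

```javascript
// Two hypothetical dependents declare different but overlapping axios ranges.
const ranges = { "tool-a": "^1.10.0", "tool-b": "^1.13.5" };
const available = ["1.10.0", "1.13.5", "1.14.1"]; // registry versions, ascending

// Simplified caret check: same major version, at least the declared minimum.
const satisfies = (range, version) => {
  const [ma, mi, pa] = range.slice(1).split(".").map(Number);
  const [vMa, vMi, vPa] = version.split(".").map(Number);
  return vMa === ma && (vMi > mi || (vMi === mi && vPa >= pa));
};

// The newest version satisfying BOTH ranges can be hoisted and shared:
// one install serves every dependent, benign or malicious.
const shared = available
  .filter(v => Object.values(ranges).every(r => satisfies(r, v)))
  .pop();
console.log(shared); // "1.14.1"
```

The same mechanism that saves disk space and avoids conflicts also means a single malicious release can be hoisted into many dependents at once.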
Lockfiles are often presented as the solution, but they only protect you under specific conditions: the lockfile must exist and be committed, and the install command must actually respect it (e.g. npm ci).
Many real-world workflows fall outside those conditions: tools executed via npx or installed globally, especially in CI environments.
Even when a lockfile exists, it does not apply to everything you execute.
Darcy Clarke, founder of vlt and former npm Engineering Manager of Community & Open Source, explained it this way:
When you're installing or executing something new, the dependency graph has to be recalculated. That’s how package managers work. Lockfiles don’t prevent net-new installs when updating/adding new dependencies. That’s the point.
vlt takes the approach that all third-party packages are untrusted & gates the execution of lifecycle scripts with the use of Socket's insights. The minute Socket flagged the malicious versions of Axios & plain-crypto-js, if you were using vlt exec-local, you were protected from this exploit.
Lockfiles make existing installs deterministic. They do not make new installs safe.
In most cases, well-configured CI workflows that rely on committed lockfiles and deterministic installs (e.g. npm ci) are not affected by this class of issue.
However, this protection breaks down when new dependency resolution is introduced, such as when adding or updating dependencies, executing tools dynamically, or automatically merging dependency update PRs.
This dynamic is amplified in environments where dependency updates are automated, including with bots or AI-driven workflows, where new dependency resolution can be introduced continuously and without direct human review.
Tools Run via npx Change the Model

CI install workflows are generally safe, but using npx in CI is not necessarily safe. This introduces another layer of complexity.
These tools are often executed on demand, outside any project context or lockfile.
When using npx, a locally installed version will be used if available. Otherwise, the package is fetched from the registry and its dependencies are resolved at execution time, which can introduce risk if a malicious version is briefly available.
That means every execution can trigger fresh dependency resolution against the current state of the registry. You are essentially trusting whatever versions exist at that exact moment.
Even explicitly pinning a version at execution time does not fully solve this.
For example, running npx foo@1.2.3 ensures that specific version of foo is used, but its dependencies are still resolved dynamically based on their declared version ranges. Those transitive dependencies are not pinned and will be resolved against whatever is available in the registry at that moment.
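A toy simulation makes this concrete (all package names, versions, and registry contents here are hypothetical): the outer package is pinned, but its declared dependency range is still resolved against whatever the registry holds at execution time.

```javascript
// Toy registry: package name -> published versions and their manifests.
// All names and versions are hypothetical illustrations.
const registry = {
  "foo":   { "1.2.3": { deps: { axios: "^1.13.5" } } },
  "axios": { "1.13.5": { deps: {} }, "1.14.1": { deps: {} } },
};

// Pick the highest published version satisfying a caret range
// (simplified: same major version, at least the declared minimum).
function resolve(name, range) {
  const min = range.replace("^", "").split(".").map(Number);
  return Object.keys(registry[name])
    .map(v => v.split(".").map(Number))
    .filter(v => v[0] === min[0] &&
      (v[1] > min[1] || (v[1] === min[1] && v[2] >= min[2])))
    .sort((a, b) => a[0] - b[0] || a[1] - b[1] || a[2] - b[2])
    .pop()
    .join(".");
}

// "npx foo@1.2.3": foo itself is pinned, but its axios range is resolved
// against the registry's current state, i.e. the newest matching version.
const fooDeps = registry["foo"]["1.2.3"].deps;
console.log(resolve("axios", fooDeps.axios)); // "1.14.1"
```

Pinning the entry point narrows the attack surface but does not close it; every unpinned transitive range remains a moving target.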
This is compounded by the fact that npm does not distribute lockfiles with published packages. Lockfiles are intentionally excluded from registry artifacts (locking transitive dependencies in a published package would cause version conflicts across the wider dependency tree and prevent deduplication), which means there is no way for package authors to enforce a fully pinned dependency graph for consumers at install time.
This is where things become genuinely difficult. After the malicious version is removed from the registry, fresh installs resolve to a clean version, and the artifacts that would prove exposure are gone.
So you might check your project today and see nothing unusual. That does not mean you were not exposed.
Reconstructing what happened during the window would require a complete snapshot of the ecosystem at that point in time. It requires:

- Registry resolution logs for every install
- Install and build logs from every CI run during the window
- Snapshots of node_modules as they existed at the time

Most environments do not retain all of this. And even when they do, it may not be complete enough to answer definitively.
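As a practical starting point, committed lockfiles can at least be checked for the known-bad versions. This is a sketch against the npm lockfile v2/v3 "packages" map; as discussed above, a clean result does not prove non-exposure, since npx runs and since-regenerated lockfiles leave no trace here.

```javascript
// Sketch: scan an npm package-lock.json (v2/v3 "packages" map) for
// known-compromised versions. A clean result does NOT prove non-exposure.
const BAD = { "axios": "1.14.1", "plain-crypto-js": "4.2.1" };

function findCompromised(lock) {
  const hits = [];
  for (const [path, meta] of Object.entries(lock.packages ?? {})) {
    // The package name is the final segment after the last "node_modules/".
    const name = path.split("node_modules/").pop();
    if (BAD[name] === meta.version) hits.push(`${name}@${meta.version} (${path})`);
  }
  return hits;
}

// Toy lockfile fragment for illustration:
const lock = {
  packages: {
    "": { name: "my-app", version: "1.0.0" },
    "node_modules/axios": { version: "1.14.1" },
    "node_modules/axios/node_modules/plain-crypto-js": { version: "4.2.1" },
  },
};
console.log(findCompromised(lock)); // finds both compromised entries
```

In a real sweep you would read every package-lock.json in the repository (JSON.parse over fs.readFileSync) and treat any hit as confirmed exposure, while treating no hits as merely inconclusive.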
There are additional complications:

- A system could have installed the malicious payload (plain-crypto-js) transitively without ever seeing Axios itself.

Even with perfect logs, you are often reconstructing behavior indirectly. In many cases, the best you can do is infer exposure based on timing and partial evidence.
In other words:
The absence of evidence after the fact is not strong evidence of absence.
At the center of this is a fundamental property of the ecosystem:
Dependency resolution is time-dependent.
Two identical install commands, run hours apart, can produce different results: one resolves a clean version, the other the compromised one.
Nothing in your code changed. The only thing that changed is the registry.
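The time dependence can be simulated directly (toy registry snapshots and simplified caret matching; real resolution involves the full semver spec):

```javascript
// Two snapshots of the registry around a short-lived malicious release.
const beforePublish = ["1.12.0", "1.13.5"];
const duringWindow  = ["1.12.0", "1.13.5", "1.14.1"]; // malicious version live

// Highest version matching ^1.13.5 (simplified: major 1, at least 1.13.5).
const pick = versions => versions
  .map(v => v.split(".").map(Number))
  .filter(([ma, mi, pa]) => ma === 1 && (mi > 13 || (mi === 13 && pa >= 5)))
  .sort((a, b) => a[0] - b[0] || a[1] - b[1] || a[2] - b[2])
  .pop().join(".");

// Identical range, identical code; only the registry state differs.
console.log(pick(beforePublish)); // "1.13.5"
console.log(pick(duringWindow));  // "1.14.1"
```

The resolver is deterministic; it is the input, the registry's state at that instant, that changes underneath you.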
There is no clean, universally accepted solution here. While there are some mitigations that reduce risk in scenarios like this, none of them fully eliminate it.

- Lockfiles and deterministic installs (npm ci, pnpm install --frozen-lockfile) protect existing projects, but not new installs, npx executions, or dynamic tooling in CI.
- Avoiding ad hoc execution (npx, global installs) reduces exposure, but these workflows are deeply embedded in modern tooling.
- Delaying adoption of new releases, for example via pnpm's minimumReleaseAge setting, narrows the window. Support is not standardized across ecosystems, and behavior can often be overridden via configuration (e.g. .npmrc), which limits its reliability.

The important point is not that these controls are ineffective. It's that they are context-dependent.
They work well in controlled environments, but break down in exactly the kinds of workflows that are now common across modern development and CI systems.
These tradeoffs are well understood by maintainers. They are also exactly what attackers are beginning to exploit.
For project installs:

- Create a package.json (or use a dedicated npm workspace).
- Run sfw npm install <package>@version for the desired packages.
- A package-lock.json will be generated.
- Edit package.json and pin every version to the specific one you are using; npm install defaults to ^ ranges.
- Run socket scan to verify safety.

In CI:

- Use npm ci where the package-lock.json was committed.
- Avoid npm install: this will override your package-lock.json and pull in new package versions (within applicable ranges).
- Change npx invocations in your CI pipeline to: npx --no --offline
  - --no ensures a package will not be installed if it is not present in the local project dependencies.
  - --offline forces full offline mode; any packages not locally cached will result in an error.
  - If your CI tools live in a dedicated workspace: npx --no --offline --include-workspace-root --workspace /path/to/ci-workspace

For MCP servers:

- Create a dedicated directory (e.g. cd $HOME/mcp) and create a new package.json (npm init --yes).
- In your AI client configuration, for each MCP server that uses npx, ensure the following:
  - Add --include-workspace-root --workspace $HOME/mcp --no --offline to every npx invocation.
  - Never use latest! Always specify the exact version of the package you want the AI agent to execute with npx.

Full example:
{
"mcpServers": {
"playwright": {
"command": "npx",
"args": [
"--include-workspace-root",
"--workspace",
"$HOME/mcp",
"--no",
"--offline",
"@playwright/mcp@v0.0.70"
]
}
}
}
Note:
You can also set npm_config_yes=false in a .npmrc, or set NPM_CONFIG_YES=false as an environment variable, instead of passing --no every time.
One of the few places this type of attack can be reliably stopped is at install time.
In this incident, the risk existed only while the malicious version was live on the registry and being resolved by package managers. Once installed, the payload executed immediately. After removal, the version was no longer available to analyze or reproduce.
That makes traditional approaches less effective: by the time you scan, the malicious version is no longer installed or available to analyze.
Controls that operate at install time address a different part of the problem.
For example, Socket Firewall intercepts package requests as they are made to the registry and checks them against known malicious packages and policy rules, blocking those that have already been identified as unsafe before they are downloaded or executed. (This tool is free, by the way, and we also offer enterprise support for additional features.)
This does not eliminate the underlying issues with dependency resolution, but it changes the outcome in scenarios like this one:
- The malicious version is blocked at request time, whether it is pulled in by npm install or by an npx execution.

This is one of the few control points where short-lived supply chain attacks can be stopped before execution. It helps in scenarios like this one, but it does not change the underlying complexity.
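Conceptually, the control point looks like this (a sketch with a hypothetical denylist and function names; a real firewall such as Socket's consults a live threat feed and policy rules rather than a static set):

```javascript
// Sketch of an install-time gate: before a package tarball is served to the
// package manager, check the exact name@version against a denylist.
// Hypothetical data and names for illustration only.
const DENYLIST = new Set(["axios@1.14.1", "plain-crypto-js@4.2.1"]);

function gate(name, version) {
  const id = `${name}@${version}`;
  if (DENYLIST.has(id)) {
    return { allowed: false, reason: `${id} is flagged as malicious` };
  }
  return { allowed: true };
}

console.log(gate("axios", "1.14.1").allowed); // false: blocked before download
console.log(gate("axios", "1.13.5").allowed); // true
```

Because the check happens before the tarball reaches the package manager, it works the same for npm install, fresh lockfile resolution, and npx execution.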
A package as widely used as Axios being compromised shows how difficult it is to reason about exposure in a modern JavaScript environment.
This is not a failure of one project or one team. It is a property of how dependency resolution in the ecosystem works today. And it is a problem that does not yet have a simple answer.

