OpenClaw Skill Marketplace Emerges as Active Malware Vector

Security researchers report widespread abuse of OpenClaw skills to deliver info-stealing malware, exposing a new supply chain risk as agent ecosystems scale.

Sarah Gooding

February 9, 2026

5 min read

Over the past week, security researchers have documented a large-scale campaign abusing OpenClaw skills to distribute password stealers and other malware, turning the platform’s skill marketplace into an active delivery channel.

VirusTotal reported that it analyzed 3,016 OpenClaw skills, finding that hundreds exhibited malicious characteristics, including staged downloads, execution of external payloads, and instructions designed to coerce unsafe behavior. In one cluster, 314 malicious skills were traced to a single publisher account, all disguised as legitimate automation tools.

“What looks clean on the surface often isn’t,” VirusTotal wrote. “Nothing in the file is technically ‘malware’ by itself. The malware is the workflow.”

The Promise and Risk of Agent Skills

OpenClaw’s rapid adoption is driven by what skills make possible. Unlike traditional software that follows a fixed execution path, AI agents interpret natural language and make decisions about actions. “They blur the boundary between user intent and machine execution,” OpenClaw wrote in a recent security announcement. “They can be manipulated through language itself.”

That flexibility is the point. Skills allow agents to take real-world actions on a user’s behalf, chaining tools, APIs, and workflows together dynamically. OpenClaw’s creators argue that this capability represents a fundamental shift in personal computing.

“We understand that with the great utility of a tool like OpenClaw comes great responsibility,” the company wrote. “Done wrong, an AI agent is a liability. Done right, we can change personal computing for the better.”

It is that same flexibility, researchers say, that makes skills both powerful and difficult to secure. When execution is guided by language and documentation rather than rigid code paths, the boundary between instruction and action becomes harder to police.

How Skills Became a Malware Delivery Mechanism

OpenClaw skills are typically centered around a SKILL.md file that describes how an agent should perform a task. While some skills bundle scripts or binaries, many rely entirely on documentation and setup steps. That structure has become a key part of the abuse.

In multiple cases documented by VirusTotal and independent researchers, skills instructed users to paste base64-encoded shell commands into their terminal, download password-protected archives, or run binaries fetched from external hosts. The skill packages themselves often contained little executable code, allowing them to evade traditional antivirus detection.

“This is exactly why traditional detection fails,” VirusTotal wrote. “The skill acts as a social engineering wrapper whose only real purpose is to push remote execution.”
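The pattern researchers describe — documentation that embeds base64-encoded shell payloads or points users at external archives — lends itself to simple heuristics. The sketch below is purely illustrative (it is not VirusTotal's or OpenClaw's actual tooling, and the `SHELL_HINTS` list is an assumption), but it shows why a skill that contains no executable code can still carry detectable indicators:

```python
import base64
import re

# Hypothetical heuristic: flag SKILL.md text that embeds base64 blobs
# decoding to shell-like commands, or that instructs the user to open
# a password-protected archive (a common AV-evasion step).
SHELL_HINTS = ("curl ", "wget ", "sh -c", "powershell", "chmod +x")
B64_RE = re.compile(r"[A-Za-z0-9+/]{24,}={0,2}")

def suspicious_indicators(skill_md: str) -> list[str]:
    findings = []
    for blob in B64_RE.findall(skill_md):
        try:
            decoded = base64.b64decode(blob, validate=True).decode("utf-8", "ignore")
        except Exception:
            continue  # not valid base64; ignore
        if any(hint in decoded for hint in SHELL_HINTS):
            findings.append(f"base64 blob decodes to shell command: {decoded[:40]!r}")
    if re.search(r"password-protected (zip|archive)", skill_md, re.I):
        findings.append("instructs user to open a password-protected archive")
    return findings
```

Heuristics like this only catch the crudest cases; as the quote above notes, a skill written entirely in plain natural language leaves nothing for pattern matching to find.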

Active Campaigns Targeting OpenClaw Skills

Security researcher Jamieson O’Reilly publicly demonstrated how easily OpenClaw’s skill ecosystem could be abused. In a series of writeups and posts, O’Reilly described publishing a backdoored skill, inflating its download count, and watching developers execute arbitrary commands on their machines within hours.

"AI skills when abused, are literally natural language malware, and almost all the traditional security tooling we've built over 30 years is completely blind to this threat," O'Reilly said. "Skills are markdown files containing instructions that tell the agent what to do... I proved the supply chain attack was trivial to execute at scale."

Researchers also highlighted the role of ranking and reputation in amplifying this abuse. Skills that appeared popular or highly downloaded were more likely to be trusted and installed, even when their setup instructions included risky behavior. In this context, distribution mechanics and perceived legitimacy mattered as much as the contents of the skill itself.

In one case highlighted by VirusTotal, a skill named “Yahoo Finance” instructed Windows users to download and execute a password-protected ZIP archive containing an executable later identified as a trojan. On macOS, the same skill pointed users to an obfuscated shell script that downloaded and ran a Mach-O binary classified as an Atomic Stealer variant.

The skill itself passed basic file-based checks. The payload did not.

Several of the reported cases illustrate a key limitation of file-based analysis: a skill can be clean at rest while still reliably instructing an agent or user to fetch and execute a malicious payload elsewhere, often through setup steps or external links embedded in documentation.

Other reporting emphasized that not all flagged skills were intentionally malicious. VirusTotal and 1Password, which published an independent analysis of OpenClaw’s skill ecosystem, both noted that many skills exhibited dangerous behavior due to poor security practices, including unsafe command execution, hardcoded secrets, excessive permissions, and unvalidated user input.

“Skills are just markdown. That’s the problem,” 1Password wrote. “Markdown isn’t ‘content’ in an agent ecosystem. Markdown is an installer.”

This means that even well-intentioned skills can expose users to risk, particularly in ecosystems where agents are granted broad access to local files, credentials, and system tools.

"Most people are completely unprepared for this," O'Reilly said. "They treat it like installing Spotify when it's actually more like giving someone sudo access to your entire machine."

OpenClaw Adds VirusTotal Scanning to ClawHub

On February 7, OpenClaw announced a partnership with VirusTotal to scan all skills published to its ClawHub marketplace. Under the new system, skills are deterministically packaged, hashed, and scanned using VirusTotal’s threat intelligence and Code Insight analysis. Skills flagged as malicious are blocked from download, while suspicious skills are marked with warnings.
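The deterministic step matters because the same skill contents must always produce the same hash for scan results to be cacheable and comparable. A rough sketch of how such a content digest could work — illustrative only, not OpenClaw's actual packaging format — is to hash every file in a skill directory in sorted path order, binding each file's content to its relative path:

```python
import hashlib
from pathlib import Path

def skill_digest(skill_dir: str) -> str:
    """Hypothetical deterministic digest of a skill directory: hash files
    in sorted path order so identical contents always yield the same
    digest, regardless of filesystem enumeration order."""
    h = hashlib.sha256()
    root = Path(skill_dir)
    for path in sorted(p for p in root.rglob("*") if p.is_file()):
        h.update(path.relative_to(root).as_posix().encode())  # bind content to path
        h.update(b"\x00")
        h.update(path.read_bytes())
        h.update(b"\x00")
    return h.hexdigest()
```

Any change to a file's contents, name, or location produces a different digest, which is what allows a marketplace to block or re-scan a modified skill rather than trusting a stale verdict.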

“This is not a silver bullet,” OpenClaw wrote in its announcement. “VirusTotal scanning won’t catch everything.”

The company described the integration as one layer in a broader security effort, noting that skills relying purely on natural language instructions or social engineering may still evade detection.

Researchers also noted that skills do not need to be malicious at publish time to become dangerous. A skill that appears benign today can later be modified or updated to point to a different external dependency, turning an existing distribution channel into a malware delivery path without changing the overall structure of the package.
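One mitigation for this kind of drift — purely illustrative, and not something the reporting says any platform has shipped — is to pin each external payload a skill references to a content hash recorded at review time, so a silent swap of the remote file fails loudly instead of executing:

```python
import hashlib
import urllib.request

def fetch_pinned(url: str, expected_sha256: str) -> bytes:
    """Hypothetical sketch: download an external dependency and refuse it
    unless its SHA-256 matches the hash pinned when the skill was reviewed."""
    data = urllib.request.urlopen(url).read()
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        raise ValueError(f"dependency changed: expected {expected_sha256}, got {actual}")
    return data
```

Pinning does not make the payload safe, but it freezes the artifact that was actually inspected, closing the gap between what a reviewer saw and what a user later runs.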

The integration primarily addresses known malware and suspicious behavior, but is not designed to reliably detect zero-day supply chain attacks.

A New Class of Supply Chain Attack

Although recent reporting has focused on OpenClaw, the mechanics at play are not unique to a single platform. Skills built around markdown instructions and optional scripts are already showing up across multiple agent ecosystems, often in near-identical formats. That makes both legitimate functionality and abuse easy to copy, republish, and move between tools.

We’re looking at an emerging class of supply chain attack. Unlike traditional package or plugin ecosystems, agent skills collapse documentation, configuration, and execution into a single artifact. The attack surface is not just code, but instructions, workflows, and the trust that agents and users place in them. That makes abuse possible even when a skill is technically “clean,” and allows behavior to change without modifying the artifact itself.

Agent skill ecosystems are still nascent from a security perspective. Early abuse surfaced quickly and leaned on simple, repeatable techniques, indicating that attackers are probing how trust is established and where enforcement is light. That abuse will soon become harder to distinguish from normal usage, as skills evolve over time, defer execution, and rely on external components to change behavior without changing the skill itself.

Not all agent platforms rely on centralized marketplaces, but the same skill and workflow models are already being shared through repositories, templates, and informal distribution channels. As adoption accelerates, similar pressure is likely to appear wherever natural-language instructions are trusted to trigger execution.
