Secure multi-platform AI skill installer — scan before you install

Version: 0.5.40 (npm) · Weekly downloads: 6.3K · Maintainers: 1

vskill

The package manager for AI skills.
Scan. Verify. Install. Across 49 agent platforms.


npx vskill install remotion-best-practices

The Problem

36.82% of AI skills have security flaws (Snyk ToxicSkills).

When you install a skill today, you're trusting blindly:

  • No scanning — malicious prompts execute with full system access
  • No versioning — silent updates can inject anything, anytime
  • No deduplication — the same skill lives in 3 repos, all diverging
  • No blocklist — known-bad skills install just fine

vskill fixes all of this.


How It Works

  ┌──────────┐     ┌──────────┐     ┌──────────┐     ┌──────────┐
  │  Source  │────>│   Scan   │────>│  Verify  │────>│ Install  │
  │          │     │          │     │          │     │          │
  │ GitHub   │     │ 38 rules │     │ LLM      │     │ Pin SHA  │
  │ Registry │     │ Blocklist│     │ analysis │     │ Lock ver │
  │ Local    │     │ Patterns │     │ Intent   │     │ Symlink  │
  └──────────┘     └──────────┘     └──────────┘     └──────────┘

Every install goes through the security pipeline. No exceptions. No --skip-scan.


Quick Start

# Install from any GitHub repo
npx vskill install remotion-dev/skills/remotion-best-practices

# Browse a repo and pick interactively
npx vskill install remotion-dev/skills

# Install a plugin (Claude Code)
npx vskill install --repo anton-abyzov/vskill --plugin mobile

Or install globally: npm install -g vskill

Getting E401 errors? If your project has a .npmrc pointing to a private registry (e.g. AWS CodeArtifact, GitHub Packages), npx may fail with npm error code E401. Fix it by overriding the registry:

npx --registry https://registry.npmjs.org vskill install <skill>

Or install globally once to avoid this entirely: npm i -g vskill --registry https://registry.npmjs.org


Three-Tier Verification

Tier        How                                                            Trust Level
Scanned     38 deterministic pattern checks against known attack vectors   Baseline
Verified    Pattern scan + LLM-based intent analysis for subtle threats    Recommended
Certified   Full manual security review by the vskill team                 Highest

Every install is at minimum Scanned. The vskill.lock file tracks the SHA-256 hash, scan date, and tier for every installed skill. Running vskill update diffs against the locked version and re-scans before applying.
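The pinning itself is plain content hashing. A minimal sketch of how an update check could diff against a locked hash — the lock-entry field names below are illustrative, not vskill's actual file format:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Content hash used to pin a skill at install time."""
    return hashlib.sha256(data).hexdigest()

# Illustrative lock entry -- field names are an assumption, not vskill's schema
lock_entry = {
    "skill": "remotion-best-practices",
    "sha256": sha256_hex(b"# demo SKILL.md\n"),
    "tier": "scanned",
}

def changed_since_lock(current: bytes, entry: dict) -> bool:
    """True if the fetched skill no longer matches the pinned hash,
    i.e. an update should trigger a re-scan before applying."""
    return sha256_hex(current) != entry["sha256"]
```

Unchanged content hashes back to the locked value; any modification flips changed_since_lock to True and forces a re-scan.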


49 Agent Platforms

vskill auto-detects your installed agents and installs skills to all of them at once.

CLI & Terminal — Claude Code, Cursor, GitHub Copilot, Windsurf, Codex, Gemini CLI, Amp, Cline, Roo Code, Goose, Aider, Kilo, Devin, OpenHands, Qwen Code, Trae, and more

IDE Extensions — VS Code, JetBrains, Zed, Neovim, Emacs, Sublime Text, Xcode

Cloud & Hosted — Replit, Bolt, v0, GPT Pilot, Plandex, Sweep


Plugin Marketplace

vskill ships 7 expert skills organized into 5 domain plugins. Each plugin has its own namespace — install only what you need.

npx vskill install --repo anton-abyzov/vskill --plugin mobile
npx vskill install --repo anton-abyzov/vskill --plugin marketing

Then invoke as /plugin:skill in your agent:

/mobile:appstore             /marketing:social-media-posting
/google-workspace:gws        /skills:scout

Available Plugins

  • mobile — React Native, Expo, Flutter, SwiftUI, Jetpack Compose, app store. Skill: appstore
  • marketing — Social media content creation, posting, and engagement across 11 platforms, plus Slack messaging. Skills: social-media-posting, slack-messaging
  • google-workspace — Google Workspace CLI (gws) for Drive, Sheets, Docs, Calendar, Chat, Admin. Skill: gws
  • skills — Skill discovery and recommendations. Skill: scout
  • productivity — Expert network survey completion and paid expertise sharing. Skill: survey-passing

Commands

vskill install <source>     Install skill after security scan
vskill find <query>         Search the verified-skill.com registry
vskill scan <path>          Run security scan without installing
vskill list                 Show installed skills with status
vskill remove <skill>       Remove an installed skill
vskill update [skill]       Update with diff scanning (--all for everything)
vskill audit [path]         Full project security audit with LLM analysis
vskill info <skill>         Show detailed skill information
vskill submit <source>      Submit a skill for verification
vskill blocklist            Manage blocked malicious skills
vskill init                 Initialize vskill in a project

Install flags

Flag                  Description
--yes, -y             Accept defaults, no prompts
--global, -g          Install to global scope
--copy                Copy files instead of symlinking
--skill <name>        Pick a specific skill from a multi-skill repo
--plugin <name>       Pick a plugin by name (checks marketplace, then plugins/ folder)
--plugin-dir <path>   Local directory as plugin source
--repo <owner/repo>   Remote GitHub repo as plugin source
--agent <id>          Target a specific agent (e.g., cursor)
--force               Install even if blocklisted
--cwd <path>          Override project root
--all                 Install all skills from a repo

Security Audit

Scan entire projects for security issues — not just skills:

vskill audit                              # scan current directory
vskill audit --ci --report sarif          # CI-friendly SARIF output
vskill audit --severity high,critical     # filter by severity

Skills vs Plugins

Skills are single SKILL.md files that work with any of the 49 supported agents. They follow the Agent Skills Standard — drop a SKILL.md into the agent's commands directory.

Plugins are multi-component containers for Claude Code. They bundle skills, hooks, commands, and agents under a single namespace with enable/disable support and marketplace integration.


Why Deduplication Matters

Even Anthropic ships the same skill in two places. Install both and you get duplicates; let them diverge and you get inconsistencies. vskill gives you one install path with version pinning and dedup, regardless of source.


Skill Evals

Every skill can include evaluations — standardized test cases that verify the skill actually improves LLM output. Skills with evals get quality scores on verified-skill.com and regression tracking across versions.

How it works

The eval system tests the skill's plan, not its execution. It doesn't post to social media, generate images, or call external APIs. Instead, it measures whether your SKILL.md successfully teaches an LLM the correct behavior.

The algorithm:

  • Your SKILL.md is loaded as a system prompt
  • The eval prompt (a realistic user request) is sent to the LLM
  • The LLM generates a text response describing what it would do
  • An LLM judge grades each assertion against that response

For example, a social media posting skill's eval might check: does the LLM mention checking for duplicate posts? Does it use the correct aspect ratios per platform? Does it wait for user approval? If the skill description is clear about these behaviors, the LLM will demonstrate them in its response. If it's vague, assertions fail — telling you exactly what to improve.

Think of it like testing a recipe book: you don't cook the food, you check whether someone reading your recipe would know the right steps, quantities, and order.
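The loop above can be sketched in a few lines. Here llm and judge are injected callables standing in for whatever provider is configured — this is a sketch of the flow, not vskill's real API:

```python
def run_eval_case(skill_md: str, case: dict, llm, judge) -> list:
    """Unit-eval sketch: the skill text is the system prompt, the model
    answers in plain text, and a judge grades each assertion against it."""
    response = llm(system=skill_md, prompt=case["prompt"])
    return [
        {"id": a["id"], "passed": judge(a["text"], response)}
        for a in case["assertions"]
    ]
```

Because both callables are injected, the same loop runs against any provider, or against stubs in tests.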

Three evaluation modes

Mode              What it does                                                     When to use
Benchmark         Runs prompts WITH skill, grades assertions                       Measure pass rate after edits
A/B Comparison    Runs each prompt WITH and WITHOUT skill, blind-judges both       Prove the skill adds value
Activation Test   Tests whether the skill correctly triggers on relevant prompts   Reduce false positives/negatives

The A/B comparison randomly shuffles outputs as "Response A" and "Response B" before scoring, so the judge can't tell which used the skill. Each response is scored on content (1-5) and structure (1-5). The delta between skill and baseline averages produces a verdict: EFFECTIVE, MARGINAL, INEFFECTIVE, or DEGRADING.
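Under those rules, the verdict reduces to bucketing a score delta. A hedged sketch — the threshold values are illustrative assumptions, since the real cutoffs aren't documented:

```python
def ab_verdict(skill_scores, baseline_scores, strong=0.75, weak=0.25):
    """Average the blind-judge scores (content + structure, 1-5 each)
    and bucket the with-skill vs. without-skill delta into a verdict.
    The strong/weak thresholds are illustrative, not vskill's values."""
    delta = sum(skill_scores) / len(skill_scores) \
          - sum(baseline_scores) / len(baseline_scores)
    if delta >= strong:
        return "EFFECTIVE"
    if delta >= weak:
        return "MARGINAL"
    if delta > -weak:
        return "INEFFECTIVE"
    return "DEGRADING"
```

A skill that lifts average scores well above baseline lands in EFFECTIVE; a skill that drags them below lands in DEGRADING.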

Unit testing vs integration testing

Skill evals are unit tests — they verify the skill's teaching quality in isolation, without calling external tools or APIs. This is a deliberate design choice:

                Unit Tests (current)                              Integration Tests
What            Does the SKILL.md teach the right workflow?       Does the end-to-end tool execution work?
Speed           ~30s per case                                     ~3min per case
Infrastructure  None — any LLM provider                           Real MCP servers, auth tokens, test data
CI/CD           Runs anywhere                                     Needs secrets, test workspaces
Flakiness       Low (deterministic text)                          High (external APIs, rate limits)
Coverage        Workflow, tool selection, formatting, parameters  API compatibility, auth, error recovery

Why unit tests are sufficient for most skills: The eval doesn't test whether Slack's API works — it tests whether your SKILL.md correctly teaches an LLM to use slack_search_channels before slack_read_channel, to use thread_ts for replies, and to format messages with *bold* instead of **bold**. If the teaching is correct, the execution follows.

MCP-dependent skills (Slack, GitHub, Linear, etc.)

Skills that reference MCP tools automatically get simulation mode during evals. The eval system detects MCP tool references in your SKILL.md and instructs the LLM to demonstrate the complete workflow with simulated tool responses. This means your assertions can test tool selection, parameter correctness, and workflow order — even without a real MCP connection.

Standard skill eval:               MCP skill eval (automatic):
┌──────────┐                        ┌──────────┐
│ SKILL.md │ → system prompt        │ SKILL.md │ → system prompt
└──────────┘                        └──────────┘   + simulation instructions
     ↓                                    ↓
┌──────────┐                        ┌──────────┐
│ LLM      │ → text response        │ LLM      │ → simulated workflow
└──────────┘                        └──────────┘   (tool calls + mock responses)
     ↓                                    ↓
┌──────────┐                        ┌──────────┐
│ Judge    │ → pass/fail            │ Judge    │ → pass/fail
└──────────┘                        └──────────┘

No configuration needed — if your SKILL.md mentions slack_*, github_*, linear_*, or gws_* tools, simulation mode activates automatically.
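As a rough illustration of that detection — the prefix list comes from the paragraph above, but the regex and function are assumptions, not vskill's implementation:

```python
import re

# Tool-name prefixes that trigger simulation mode (per the docs above)
MCP_PREFIXES = ("slack_", "github_", "linear_", "gws_")

def needs_simulation(skill_md: str) -> bool:
    """True if the SKILL.md references MCP-style tool names, in which
    case simulation instructions get appended to the system prompt."""
    tool_like = re.findall(r"\b[a-z]+_[a-z0-9_]+\b", skill_md)
    return any(t.startswith(p) for t in tool_like for p in MCP_PREFIXES)
```

A skill that says "call slack_search_channels first" trips the check; one that only discusses formatting does not.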

Activation testing

Skills can also include trigger accuracy tests in evals/activation-prompts.json:

{
  "prompts": [
    { "prompt": "check what's new in #engineering", "expected": "should_activate" },
    { "prompt": "send an email to the team", "expected": "should_not_activate" }
  ]
}

This tests whether your skill's description field in SKILL.md causes the skill to trigger on the right prompts (precision) and not miss relevant ones (recall). Results show TP/TN/FP/FN classification with precision, recall, and reliability metrics.
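Computing those metrics from the classified prompts is straightforward. A sketch — the pair-based input format is illustrative, not the tool's internal representation:

```python
def activation_metrics(results):
    """results: (expected_activate, did_activate) pairs, one per prompt.
    Returns TP/TN/FP/FN counts plus precision and recall."""
    tp = sum(1 for exp, got in results if exp and got)
    tn = sum(1 for exp, got in results if not exp and not got)
    fp = sum(1 for exp, got in results if not exp and got)
    fn = sum(1 for exp, got in results if exp and not got)
    precision = tp / (tp + fp) if tp + fp else 1.0  # no false triggers
    recall = tp / (tp + fn) if tp + fn else 1.0     # no missed triggers
    return {"TP": tp, "TN": tn, "FP": fp, "FN": fn,
            "precision": precision, "recall": recall}
```

Low precision means the description over-triggers on irrelevant prompts; low recall means relevant prompts slip past it.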

Cross-model testing

The eval system supports the Claude CLI, the Anthropic API, and Ollama. Testing across models reveals:

  • Whether your skill helps weaker models (Llama, Qwen) follow complex workflows
  • Whether base model improvements have made a skill unnecessary
  • Whether your simulation instructions are clear enough for smaller models

# Test with Opus (high-end)
VSKILL_EVAL_MODEL=opus npx vskill eval run my-skill

# Test with Ollama (open-source)
VSKILL_EVAL_PROVIDER=ollama VSKILL_EVAL_MODEL=llama3.1:8b npx vskill eval run my-skill

Directory structure

your-skill/
├── SKILL.md                          # The skill definition
└── evals/
    ├── evals.json                    # Test cases + assertions
    ├── activation-prompts.json       # Trigger accuracy tests (optional)
    └── benchmark.json                # Latest benchmark results (auto-generated)

evals.json format

{
  "skill_name": "your-skill",
  "evals": [
    {
      "id": 1,
      "name": "Descriptive test name",
      "prompt": "Realistic user prompt that tests the skill",
      "expected_output": "Reference output (not graded, for human context)",
      "files": [],
      "assertions": [
        { "id": "a1", "text": "Output includes specific technique X", "type": "boolean" },
        { "id": "a2", "text": "Code example compiles without errors", "type": "boolean" }
      ]
    }
  ]
}

Writing good evals

  • Prompts should be realistic user requests, not synthetic test inputs
  • Assertions must be objectively verifiable — avoid subjective criteria like "well-written"
  • Each eval case should test a distinct capability of the skill
  • 3-5 eval cases with 2-4 assertions each is a good starting point

CLI commands

npx vskill eval serve                     # Open visual eval UI (benchmark, compare, history)
npx vskill eval init <skill-dir>          # Scaffold evals.json from SKILL.md via LLM
npx vskill eval run <skill-dir>           # Run evals and grade assertions (CLI output)
npx vskill eval coverage                  # Show eval status for all skills
npx vskill eval generate-all              # Batch-generate for all skills

Visual eval UI

vskill eval serve launches a local web UI where you can:

  • Run benchmarks — all cases or individually, with real-time streaming results
  • Compare A/B — side-by-side with/without skill scoring and grouped bar charts
  • View history — previous benchmark results load automatically
  • Edit evals — add/remove assertions, create new eval cases
  • Switch models — dropdown to change provider (Claude CLI, Anthropic API, Ollama) and model

Previous benchmark results are displayed on the skill detail page without re-running. Per-case pass/fail status, time, and token usage are shown inline.

Model configuration

The eval system supports multiple LLM providers. Switch between them in the eval UI dropdown or via environment variables.

Provider       Models                                   Requirements
Claude CLI     Sonnet, Opus, Haiku                      Claude Max/Pro subscription + claude CLI installed
Anthropic API  Claude Sonnet 4.6, Opus 4.6, Haiku 4.5   ANTHROPIC_API_KEY env var
Ollama         Any locally installed model              Ollama running at localhost:11434

# Use Anthropic API with Opus
VSKILL_EVAL_PROVIDER=anthropic VSKILL_EVAL_MODEL=claude-opus-4-6 npx vskill eval run my-skill

# Use Ollama with a local model
VSKILL_EVAL_PROVIDER=ollama VSKILL_EVAL_MODEL=qwen2.5:32b npx vskill eval run my-skill

# Custom Ollama server
OLLAMA_BASE_URL=http://gpu-server:11434 VSKILL_EVAL_PROVIDER=ollama npx vskill eval run my-skill

Which model for what?

  • Skill creation/improvement: Claude (Sonnet or Opus) produces the best SKILL.md refinements. Other models like Gemini and Codex can create skills too — they understand the SKILL.md format — but output quality may vary. See Anthropic's Skill Creator for the reference methodology.
  • Benchmarks & A/B comparisons: Use any model. Cross-model testing reveals whether your skill helps weaker models, and whether base model improvements have made a capability uplift skill unnecessary.
  • Ollama: Free, local, no API key. Useful for rapid iteration and validating cross-model portability.

Platform integration

Skills with evals/evals.json get:

  • Quality evaluation results displayed at /skills/[name]/evals
  • Admin editing at /admin/evals (admin-only)
  • Regression tracking across eval runs

Registry

Browse and search verified skills at verified-skill.com.

vskill find "react native"          # search from CLI
vskill info remotion-best-practices # skill details

Contributing

Submit your skill for verification:

vskill submit your-org/your-repo/your-skill

License

MIT

Keywords

ai

Package last updated on 19 Mar 2026
