BotVisibility CLI

Scan any URL to check how visible and usable it is to AI agents. Like Lighthouse for AI agent readiness.

```shell
npx botvisibility stripe.com
```

What it does

BotVisibility runs 30+ automated checks across 4 levels to measure how well your site works with AI agents like Claude, GPT, Copilot, and autonomous agent frameworks.

Without agent-ready metadata and APIs, agents burn 5-100x more tokens through HTML scraping, trial-and-error discovery, and retry loops. A fully unoptimized site can cost agents 120,000-500,000+ excess tokens per session.

The CLI tells you exactly what's missing and how to fix it.

Install & run

No install needed. Just run:

```shell
npx botvisibility <url>
```

Or install globally:

```shell
npm install -g botvisibility
botvisibility stripe.com
```

Usage

```shell
# Basic URL scan
npx botvisibility https://example.com

# JSON output for CI/CD
npx botvisibility stripe.com --json

# Full scan with local repo analysis (unlocks Level 3 code checks + Level 4)
npx botvisibility https://myapp.com --repo ./

# Combined scan with JSON output
npx botvisibility mysite.com --repo ../my-backend --json
```

What it checks

Level 1: Discoverable (12 checks)

Bots can find you. These checks verify that AI agents can discover your site's capabilities without scraping HTML.

| Check | What it looks for |
| --- | --- |
| llms.txt | Machine-readable site description at /llms.txt |
| Agent Card | Capability declaration at /.well-known/agent-card.json |
| OpenAPI Spec | Published API specification |
| robots.txt AI Policy | AI crawler directives in robots.txt |
| Documentation Accessibility | Public dev docs without auth walls |
| CORS Headers | Cross-origin access for browser-based agents |
| AI Meta Tags | llms:description, llms:url, llms:instructions meta tags |
| Skill File | Structured agent instructions at /skill.md |
| AI Site Profile | Site manifest at /.well-known/ai.json |
| Skills Index | Skills catalog at /.well-known/skills/index.json |
| Link Headers | HTML link elements pointing to AI discovery files |
| MCP Server | Model Context Protocol endpoint discovery |
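
The llms.txt check looks for a plain-markdown site description at the site root. A minimal sketch of such a file might look like this (the company name, summary, and links are illustrative, not a format the tool mandates):

```markdown
# Example Corp

> Example Corp provides payment APIs for small businesses.

## Docs

- [API reference](https://example.com/docs/api.md): endpoints, auth, and error codes
- [Quickstart](https://example.com/docs/quickstart.md): first request in five minutes
```

An agent that finds this file can learn what the site does and where the docs live in a few hundred tokens instead of scraping HTML.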

Level 2: Usable (9 checks)

Your API works for agents. Authentication, error handling, and core operations are agent-compatible.

| Check | What it looks for |
| --- | --- |
| API Read Operations | GET/list/search endpoints in API spec |
| API Write Operations | POST/PUT/PATCH/DELETE endpoints |
| API Primary Action | Core value action available via API |
| API Key Authentication | Simple API key auth (not just OAuth) |
| Scoped API Keys | Permission-scoped API keys |
| OpenID Configuration | OIDC discovery document |
| Structured Error Responses | JSON errors with codes, not HTML error pages |
| Async Operations | Job ID + polling for long-running operations |
| Idempotency Support | Idempotency key support on write endpoints |
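
As an illustration of what the Structured Error Responses check rewards, an agent-friendly error is machine-parseable JSON rather than an HTML error page (the field names below are a common convention, not a schema the tool requires):

```json
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Too many requests; retry after 30 seconds.",
    "retry_after": 30
  }
}
```

An agent can branch on `error.code` directly instead of parsing markup to figure out what went wrong.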

Level 3: Optimized (7 checks)

Agents work efficiently. Pagination, filtering, caching, and MCP tools reduce token waste.

| Check | What it looks for |
| --- | --- |
| Sparse Fields | fields/select parameter to request only needed data |
| Cursor Pagination | Cursor-based pagination on list endpoints |
| Search & Filtering | Server-side filter and search parameters |
| Bulk Operations | Batch create/update/delete endpoints |
| Rate Limit Headers | X-RateLimit-* headers on API responses |
| Caching Headers | ETag, Cache-Control, Last-Modified headers |
| MCP Tool Quality | Well-described MCP tools with input schemas |

With --repo, Level 3 also scans your codebase for these patterns, catching implementations that the web scanner can't detect from HTTP responses alone.
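
For instance, the Rate Limit Headers and Caching Headers checks look at response headers. A response that would satisfy both might look like this (values illustrative):

```http
HTTP/1.1 200 OK
Content-Type: application/json
ETag: "33a64df5"
Cache-Control: max-age=300
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1735689600
```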

Level 4: Agent-Native (7 checks, --repo required)

First-class agent support. These checks require local code access.

| Check | What it looks for |
| --- | --- |
| Intent-Based Endpoints | High-level action endpoints (e.g., /send-invoice) |
| Agent Sessions | Persistent session management for multi-step interactions |
| Scoped Agent Tokens | Agent-specific tokens with capability limits |
| Agent Audit Logs | API actions logged with agent identifiers |
| Sandbox Environment | Test environment for safe agent experimentation |
| Consequence Labels | Annotations marking irreversible/destructive actions |
| Native Tool Schemas | Ready-to-use tool definitions for agent frameworks |
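
As a sketch of what a native tool schema with a consequence label might contain (the shape below is an assumption modeled on common agent-framework tool definitions, not the checker's documented format):

```json
{
  "name": "send_invoice",
  "description": "Send an invoice to a customer by email. Irreversible once sent.",
  "input_schema": {
    "type": "object",
    "properties": {
      "customer_id": { "type": "string" },
      "amount_cents": { "type": "integer" }
    },
    "required": ["customer_id", "amount_cents"]
  }
}
```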

Scoring

BotVisibility uses a weighted cross-level algorithm:

  • L1 Discoverable: Pass 50%+ of L1 checks
  • L2 Usable: Pass 50%+ of L1 AND 50%+ of L2 (or 35%+ L1 with 75%+ L2)
  • L3 Optimized: Achieve L2 AND pass 50%+ of L3 (or 35%+ L2 with 75%+ L3)
  • L4 Agent-Native: Requires --repo flag for code-level analysis

This rewards sites that invest in higher-level capabilities even if some lower-level items are still missing.
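
The cross-level rule can be sketched in shell. The check counts here are hypothetical; the percentage thresholds come from the list above:

```shell
# Hypothetical results from one scan
l1_pass=5;  l1_total=12
l2_pass=7;  l2_total=9

# Integer percentage of checks passed at a level
pct() { echo $(( $1 * 100 / $2 )); }
l1=$(pct "$l1_pass" "$l1_total")   # 41% of Level 1
l2=$(pct "$l2_pass" "$l2_total")   # 77% of Level 2

# L2 Usable: 50%+ of L1 AND 50%+ of L2, or 35%+ L1 with 75%+ L2
if { [ "$l1" -ge 50 ] && [ "$l2" -ge 50 ]; } || { [ "$l1" -ge 35 ] && [ "$l2" -ge 75 ]; }; then
  echo "Level 2: Usable"
fi
```

Here the site passes only 41% of Level 1, but its strong 77% Level 2 showing still earns it the Usable rating via the fallback rule.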

CI/CD integration

Add to your CI pipeline to catch agent-readiness regressions:

```yaml
# GitHub Actions
- name: Check BotVisibility
  run: |
    SCORE=$(npx botvisibility mysite.com --json | jq '.currentLevel')
    if [ "$SCORE" -lt 1 ]; then
      echo "BotVisibility score below Level 1"
      exit 1
    fi
```

The agent tax

Every unoptimized interaction costs AI agents extra tokens:

| Without | With | Savings |
| --- | --- | --- |
| Scrape HTML (30,000 tokens) | Read llms.txt (500 tokens) | 98% |
| Guess API endpoints (100,000 tokens) | Read OpenAPI spec (15,000 tokens) | 85% |
| Parse HTML errors (10,000 tokens) | Read JSON error (50 tokens) | 99% |
| Fetch all fields (2,000 tokens) | Sparse fields (200 tokens) | 90% |

At Claude Sonnet 4.6 rates, a single unoptimized session costs $0.83 vs $0.07 optimized. At 1,000 agent visits/day, that's $22,800/month in wasted tokens.
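
The monthly figure follows directly from the per-session numbers, assuming a 30-day month:

```shell
# Per-session cost: $0.83 unoptimized vs $0.07 optimized, in cents
saving_cents=$((83 - 7))                      # 76 cents saved per session
monthly=$((saving_cents * 1000 * 30 / 100))   # 1,000 visits/day for 30 days, in dollars
echo "\$${monthly}/month"                     # prints $22800/month
```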

Read the full analysis: The Agent Tax

License

MIT

Keywords

ai


Package last updated on 30 Mar 2026
