
botvisibility
Scan any URL to check how visible and usable it is to AI agents. Like Lighthouse for AI agent readiness.
npx botvisibility stripe.com
BotVisibility runs 30+ automated checks across 4 levels to measure how well your site works with AI agents like Claude, GPT, Copilot, and autonomous agent frameworks.
Without agent-ready metadata and APIs, agents burn 5-100x more tokens through HTML scraping, trial-and-error discovery, and retry loops. A fully unoptimized site can cost agents 120,000-500,000+ excess tokens per session.
The CLI tells you exactly what's missing and how to fix it.
No install needed. Just run:
npx botvisibility <url>
Or install globally:
npm install -g botvisibility
botvisibility stripe.com
# Basic URL scan
npx botvisibility https://example.com
# JSON output for CI/CD
npx botvisibility stripe.com --json
# Full scan with local repo analysis (unlocks Level 3 code checks + Level 4)
npx botvisibility https://myapp.com --repo ./
# Combined scan with JSON output
npx botvisibility mysite.com --repo ../my-backend --json
Level 1: Bots can find you. These checks verify that AI agents can discover your site's capabilities without scraping HTML.
| Check | What it looks for |
|---|---|
| llms.txt | Machine-readable site description at /llms.txt |
| Agent Card | Capability declaration at /.well-known/agent-card.json |
| OpenAPI Spec | Published API specification |
| robots.txt AI Policy | AI crawler directives in robots.txt |
| Documentation Accessibility | Public dev docs without auth walls |
| CORS Headers | Cross-origin access for browser-based agents |
| AI Meta Tags | llms:description, llms:url, llms:instructions meta tags |
| Skill File | Structured agent instructions at /skill.md |
| AI Site Profile | Site manifest at /.well-known/ai.json |
| Skills Index | Skills catalog at /.well-known/skills/index.json |
| Link Headers | HTML link elements pointing to AI discovery files |
| MCP Server | Model Context Protocol endpoint discovery |
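To make the first row above concrete, llms.txt is just a markdown file served at your site root. The snippet below writes a hypothetical example following the common llms.txt convention (H1 site name, blockquote summary, H2 sections of annotated links); the site name and URLs are invented for illustration.

```shell
# Hypothetical llms.txt for an imaginary site (content is illustrative, not a spec requirement)
cat > llms.txt <<'EOF'
# Example Inc.
> Hosted payments API for developers. REST endpoints, API-key auth, JSON errors.

## Docs
- [API reference](https://example.com/docs/api): endpoints, auth, and error codes
- [Quickstart](https://example.com/docs/quickstart): first charge in five minutes
EOF

grep -c '^- \[' llms.txt   # → 2 (annotated doc links an agent can follow directly)
```

An agent reading this file gets the site's purpose and key docs in a few hundred tokens instead of crawling HTML.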
Level 2: Your API works for agents. Authentication, error handling, and core operations are agent-compatible.
| Check | What it looks for |
|---|---|
| API Read Operations | GET/list/search endpoints in API spec |
| API Write Operations | POST/PUT/PATCH/DELETE endpoints |
| API Primary Action | Core value action available via API |
| API Key Authentication | Simple API key auth (not just OAuth) |
| Scoped API Keys | Permission-scoped API keys |
| OpenID Configuration | OIDC discovery document |
| Structured Error Responses | JSON errors with codes, not HTML error pages |
| Async Operations | Job ID + polling for long-running operations |
| Idempotency Support | Idempotency key support on write endpoints |
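To illustrate the "Structured Error Responses" row: an agent can extract everything it needs from a JSON error in a single jq expression, where an HTML error page would require scraping. The error shape below is an assumption for illustration, not a schema the tool mandates.

```shell
# A hypothetical JSON error body (field names are assumptions, not a required schema)
ERROR='{"error":{"code":"rate_limited","message":"Too many requests","retry_after_s":30}}'

# One expression recovers what the agent needs to decide its next step
echo "$ERROR" | jq -r '.error | "\(.code): retry in \(.retry_after_s)s"'
# → rate_limited: retry in 30s
```

Contrast this with parsing a themed HTML 429 page, which the savings table below prices at roughly 200x more tokens.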
Level 3: Agents work efficiently. Pagination, filtering, caching, and MCP tools reduce token waste.
| Check | What it looks for |
|---|---|
| Sparse Fields | fields/select parameter to request only needed data |
| Cursor Pagination | Cursor-based pagination on list endpoints |
| Search & Filtering | Server-side filter and search parameters |
| Bulk Operations | Batch create/update/delete endpoints |
| Rate Limit Headers | X-RateLimit-* headers on API responses |
| Caching Headers | ETag, Cache-Control, Last-Modified headers |
| MCP Tool Quality | Well-described MCP tools with input schemas |
With --repo, Level 3 checks also scan your codebase for these patterns, catching implementations that the web scanner can't detect from HTTP responses alone.
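As an example of the "Cursor Pagination" row, the loop below sketches how an agent walks a paginated list to exhaustion. The responses are canned here (real code would make HTTP calls), and the data/next_cursor field names are assumptions for illustration.

```shell
# Simulated paginated API responses (in practice these would be HTTP calls;
# field names "data" and "next_cursor" are illustrative)
fetch_page() {
  case "$1" in
    "")   echo '{"data":["a","b"],"next_cursor":"c2"}' ;;
    "c2") echo '{"data":["c"],"next_cursor":null}' ;;
  esac
}

CURSOR=""
while :; do
  PAGE=$(fetch_page "$CURSOR")
  echo "$PAGE" | jq -r '.data[]'                       # emit this page's items
  CURSOR=$(echo "$PAGE" | jq -r '.next_cursor // empty')
  if [ -z "$CURSOR" ]; then break; fi                  # null cursor = last page
done
```

The agent never guesses page offsets or re-fetches overlapping data; it follows the cursor until the server signals the end.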
Level 4: First-class agent support. These checks require local code access (run with the --repo flag).
| Check | What it looks for |
|---|---|
| Intent-Based Endpoints | High-level action endpoints (e.g., /send-invoice) |
| Agent Sessions | Persistent session management for multi-step interactions |
| Scoped Agent Tokens | Agent-specific tokens with capability limits |
| Agent Audit Logs | API actions logged with agent identifiers |
| Sandbox Environment | Test environment for safe agent experimentation |
| Consequence Labels | Annotations marking irreversible/destructive actions |
| Native Tool Schemas | Ready-to-use tool definitions for agent frameworks |
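One way to picture the "Consequence Labels" and "Native Tool Schemas" rows together: a tool definition can carry an explicit destructive flag so agent frameworks can require confirmation before invoking it. The JSON below is a sketch in the spirit of MCP-style tool definitions; the exact shape and the annotations field are assumptions.

```shell
# Hypothetical tool definition with a consequence annotation (shape is illustrative)
TOOL='{
  "name": "delete_invoice",
  "description": "Permanently delete an invoice. Irreversible.",
  "inputSchema": {"type":"object","properties":{"invoice_id":{"type":"string"}},"required":["invoice_id"]},
  "annotations": {"destructive": true}
}'

# An agent framework can gate execution on the flag
echo "$TOOL" | jq -r 'if .annotations.destructive then "confirm before calling \(.name)" else "safe" end'
# → confirm before calling delete_invoice
```

Labeling irreversible actions in the schema itself means every consuming framework gets the safety signal for free, rather than each one re-deriving it from prose descriptions.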
BotVisibility uses a weighted cross-level scoring algorithm; pass the --repo flag to include code-level analysis in the score. This rewards sites that invest in higher-level capabilities even if some lower-level items are still missing.
Add to your CI pipeline to catch agent-readiness regressions:
# GitHub Actions
- name: Check BotVisibility
  run: |
    LEVEL=$(npx botvisibility mysite.com --json | jq '.currentLevel')
    if [ "$LEVEL" -lt 1 ]; then
      echo "BotVisibility result below Level 1"
      exit 1
    fi
Every unoptimized interaction costs AI agents extra tokens:
| Without | With | Savings |
|---|---|---|
| Scrape HTML (30,000 tokens) | Read llms.txt (500 tokens) | 98% |
| Guess API endpoints (100,000 tokens) | Read OpenAPI spec (15,000 tokens) | 85% |
| Parse HTML errors (10,000 tokens) | Read JSON error (50 tokens) | 99% |
| Fetch all fields (2,000 tokens) | Sparse fields (200 tokens) | 90% |
At Claude Sonnet 4.6 rates, a single unoptimized session costs $0.83 vs $0.07 optimized. At 1,000 agent visits/day, that's $22,800/month in wasted tokens.
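The monthly figure follows directly from the per-session numbers. A quick sanity check, taking the quoted per-session costs at face value:

```shell
# (unoptimized cost - optimized cost) * visits per day * 30 days
awk 'BEGIN { printf "$%.0f/month\n", (0.83 - 0.07) * 1000 * 30 }'
# → $22800/month
```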
Read the full analysis: The Agent Tax
License: MIT