
@sanity/agent-context-explorer
Exploration tool for Sanity Agent Context — produces knowledge docs for production AI agents
Companion tool for Sanity Agent Context. Explores your Agent Context server, documents what works and what doesn't, and produces exploration-results.md — ready to copy directly into your Agent Context Document.
When building AI agents that work with Agent Context, agents need dataset-specific knowledge: which document types exist, which query patterns actually return data, and what data is missing or lives in another system.

Manual exploration is time-consuming and doesn't scale. This tool automates that process, producing structured documentation that helps production agents work correctly from day one.
After running the explorer, you'll have documentation that tells agents which schema types and fields to use, which query patterns work, and which data simply isn't in the dataset.
```shell
# Global install (recommended — reuse across projects)
npm install -g @sanity/agent-context-explorer

# Or install per-project
npm install @sanity/agent-context-explorer
```
Create a questions file (`questions.json`). Always include `expected_answer` — the explorer uses it to guide exploration and validate results:

```json
{
  "questions": [
    { "question": "What sizes does the Trailblazer Hiking Boot come in?", "expected_answer": "US 7-13, including half sizes" },
    { "question": "Is the Ultralight Tent waterproof?", "expected_answer": "Yes, it has a 3000mm waterproof rating with taped seams" },
    { "question": "What's the difference between the Summit Pack and the Daybreak Pack?", "expected_answer": "Summit is 65L for multi-day trips with a frame; Daybreak is 28L for day hikes" }
  ]
}
```
```shell
agent-context-explorer \
  --mcp-url https://api.sanity.io/vX/agent-context/YOUR_PROJECT_ID/YOUR_DATASET/YOUR_SLUG \
  --questions ./questions.json \
  --sanity-token $SANITY_API_READ_TOKEN \
  --anthropic-api-key $ANTHROPIC_API_KEY
```
When exploration finishes, copy `exploration-results.md` into your Agent Context Document's instructions field. The output directory will be timestamped (e.g., `./explorer-output-2026-02-11T09-22-30/`).

Each entry in the questions file follows this shape:

```jsonc
{
  "questions": [
    {
      "question": string,         // The question to explore (required)
      "expected_answer"?: string, // Strongly recommended — guides exploration and validates results
      "id"?: string               // Auto-generated as q1, q2, ... if omitted
    }
  ]
}
```
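As a sketch of the id rule described above (a hypothetical helper, not the tool's actual code), auto-numbering omitted ids could look like:

```typescript
// Hypothetical helper (not part of the tool): fills in ids as q1, q2, ...
// for questions that omit them, keeping any explicit id.
interface Question {
  question: string;
  expected_answer?: string;
  id?: string;
}

function withIds(questions: Question[]): Question[] {
  // Spreading q after the generated id means an explicit id always wins.
  return questions.map((q, i) => ({ id: `q${i + 1}`, ...q }));
}
```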
Always include `expected_answer`. The explorer uses it in two ways: to steer exploration toward the right data, and to validate whether the answer it found matches what you expected.

Without expected answers, the explorer can't tell if it found the right data. Even a rough or partial answer is better than none.
Tips for writing good questions:

- Include `expected_answer` — even approximate answers help the explorer validate findings

The tool generates several files in the output directory:
### exploration-results.md (Primary Output)

This is the file you copy into your Agent Context Document. It contains LLM-ready instructions with five sections: Schema Reference, Query Patterns, Critical Rules, Known Limitations, and Exploration Coverage.
The synthesis agent may add additional sections when findings naturally cluster (e.g., "Locale Handling" if locale issues were prominent).
Failures are the most valuable output — they document what would trip up a naive agent, preventing wrong answers and wasted queries.
### logs/*.json

Individual exploration logs for each question.
### metrics.json

Aggregated statistics: success rates, confidence distribution, category coverage, and validation results.
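The exact `metrics.json` schema isn't documented here, but aggregating per-question results into those statistics can be sketched like this (the field names are assumptions, not the tool's real schema):

```typescript
// Assumed per-question shape — the real log schema may differ.
interface QuestionResult {
  success: boolean;
  confidence: "high" | "medium" | "low";
}

// Compute the kind of statistics metrics.json reports:
// overall success rate and a confidence distribution.
function summarize(results: QuestionResult[]) {
  const successRate =
    results.filter((r) => r.success).length / results.length;
  const confidence: Record<string, number> = {};
  for (const r of results) {
    confidence[r.confidence] = (confidence[r.confidence] ?? 0) + 1;
  }
  return { successRate, confidence };
}
```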
When you provide expected_answer in your questions, the explorer validates the agent's answer against yours using an LLM comparison. This appears in the CLI output as:
```
[1/12] ✓ Success (high confidence)
       Validation: full
```
Match levels:

| Match | Meaning |
|---|---|
| `full` | Agent's answer conveys the same information as expected (even if worded differently) |
| `partial` | Answer contains some expected information but is missing parts |
| `none` | Answer is different or contradictory |
| `gap_identified` | Agent correctly determined the data doesn't exist in this dataset |
Validation results flow into the final exploration-results.md — full matches produce [High] confidence patterns, partial matches produce [Medium], and failures are documented in the Known Limitations section.
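That flow can be read as a simple mapping; the sketch below is an illustration of the rule just described, not the tool's actual code:

```typescript
type MatchLevel = "full" | "partial" | "none" | "gap_identified";

// Illustration only: how a validation match level maps to the label that
// ends up in exploration-results.md.
function resultLabel(match: MatchLevel): string {
  switch (match) {
    case "full":
      return "[High]"; // full matches become high-confidence patterns
    case "partial":
      return "[Medium]"; // partial matches become medium-confidence patterns
    case "none":
    case "gap_identified":
      return "Known Limitations"; // failures and gaps are documented as limitations
  }
}
```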
| Option | Description | Default |
|---|---|---|
| `--mcp-url <url>` | Agent Context server URL (required) | — |
| `--questions <path>` | Path to questions JSON file (required) | — |
| `--sanity-token <token>` | Sanity API read token for authentication | — |
| `--anthropic-api-key <key>` | Anthropic API key (or set `ANTHROPIC_API_KEY` env var) | — |
| `--model <model>` | Claude model for exploration | `claude-sonnet-4-20250514` |
| `--output <dir>` | Output directory | `./explorer-output-{timestamp}` |
| `--concurrency <n>` | Number of questions to explore in parallel | `3` |
| `--help` | Show help message | — |
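`--concurrency` caps how many questions are in flight at once. A minimal sketch of that kind of bounded worker pool (an illustration, not the tool's implementation):

```typescript
// Illustration only: run fn over items with at most `limit` in flight,
// the way a --concurrency flag implies.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker() {
    // Each worker pulls the next unclaimed index until none remain.
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, worker),
  );
  return results;
}
```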
The key insight: failures are more valuable than successes. A naive agent can figure out what works through trial and error. What it can't discover is why something that looks right doesn't work, or why data that should exist is missing.
Here's a snippet from a generated exploration-results.md:
## Schema Reference
| Document Type | Use For | Key Fields |
|---------------|---------|------------|
| product | Product info, specs | name, description, specs, variants |
| category | Product categorization | title, slug, products[] |
| support-article | Help content | title, body, relatedProducts[] |
## Query Patterns
### Product Details
**When to use:** User asks about a specific product's features or specifications
```groq
*[_type == "product" && name match $productName][0]{
  name, description, specs, variants
}
```
**Important:** Always use `name` not `title` — the `title` field is null on products.
### Product Comparison
**When to use:** User wants to compare two or more products
```groq
*[_type == "product" && name in $productNames]{
  name, specs, variants
}
```
**Important:** Filter results by locale if your dataset has multiple language variants.
## Critical Rules
- Always use `name` for product lookups — `title` is null on all product documents
- Always filter by locale when querying products to avoid duplicate results
- Never query `inventory` or `stockLevel` — these fields are always null (managed in external system)
## Known Limitations
- Inventory and stock data lives in Shopify, not this dataset [High confidence]
- Pricing data lives in Commerce API [High confidence]
- The `title` field on products is always null — use `name` instead [High confidence]
## Exploration Coverage
**Validated areas:** product specs, product comparison, category browsing, support content
**Confidence:** High — 12 questions explored with 92% success rate
**Not explored:** user reviews, order history, real-time inventory
After running the explorer:

1. Open `exploration-results.md` in your output directory
2. Copy its contents into your Agent Context Document's `instructions` field

This gives your agent dataset-specific knowledge from day one.
Requires an Anthropic API key (pass `--anthropic-api-key` or set the `ANTHROPIC_API_KEY` env var).

License: MIT