
# @toolrelay/cli
CLI for ToolRelay — validate configs, serve local MCP servers, and deploy to production with audit traces
Turn any REST API into an MCP server that AI agents can use — in one command.
Define your API endpoints in a JSON config, and the CLI gives you a local Model Context Protocol server that Claude, GPT, and other AI agents can connect to instantly. No SDK integration, no code changes, no infrastructure to manage.
AI agents need tools. Your API already exists. The gap between the two is boilerplate: building an MCP server, handling auth, mapping parameters, debugging what the agent actually sent. This CLI closes that gap.
`${VAR}` syntax keeps secrets out of committed config files.

## Quick start

```sh
npx @toolrelay/cli init
```
The wizard walks you through your API — name, base URL, auth, and endpoints. It generates a toolrelay.json config file.
Then start the MCP server:
```sh
npx @toolrelay/cli serve toolrelay.json
```
That's it. Add the printed URL to Claude Desktop's config and your API is available as tools.
```
toolrelay.json
        |
npx @toolrelay/cli serve toolrelay.json
        |
Local MCP Server (localhost:8787)
    /       |       \
tools/list  tools/call  health
        |
Your REST API backend
```
- `toolrelay.json` — endpoints, parameters, auth
- `serve` starts a local MCP Streamable HTTP server exposing `tools/list`

## Configuration

The `init` wizard generates this config, or create it by hand:
```json
{
  "app": {
    "name": "My API",
    "base_url": "http://localhost:3000",
    "auth_type": "static_token",
    "auth_config": { "token": "${API_TOKEN}" }
  },
  "tools": [
    {
      "name": "get_user",
      "description": "Get a user by ID",
      "http_method": "GET",
      "endpoint_path": "/api/users/{id}",
      "parameter_mapping": [
        { "name": "id", "type": "string", "required": true, "target": "path", "description": "The user ID" }
      ]
    }
  ]
}
```
**Tip:** `description` on both tools and parameters is how AI agents decide which tool to use. Clear descriptions directly improve how well agents interact with your API.
## Environment variables

Use `${VAR_NAME}` in any string field to reference environment variables. Secrets stay out of your config:
```sh
export API_TOKEN=sk-my-secret-key
npx @toolrelay/cli serve toolrelay.json
```
If a referenced variable is missing, the CLI fails with a clear error listing the missing names.
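Conceptually, the substitution behaves like the sketch below: scan for `${VAR}` references, fail fast if any are unset, otherwise replace each with the environment value. This is an illustration of the behavior described above, assuming a simple regex-based expansion; the CLI's actual implementation is not public.

```python
import os
import re

# Matches ${VAR_NAME} references in a string field.
VAR_PATTERN = re.compile(r"\$\{([A-Z0-9_]+)\}")

def expand_env(value: str, env=os.environ) -> str:
    """Expand ${VAR} references; raise listing every missing name."""
    missing = [name for name in VAR_PATTERN.findall(value) if name not in env]
    if missing:
        raise ValueError(f"missing environment variables: {', '.join(missing)}")
    return VAR_PATTERN.sub(lambda m: env[m.group(1)], value)
```

For example, `expand_env("Bearer ${API_TOKEN}", {"API_TOKEN": "sk-123"})` yields `"Bearer sk-123"`, while an unset variable raises an error naming it.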
## Authentication

| `auth_type` | `auth_config` | Description |
|---|---|---|
| `none` | — | No authentication |
| `static_token` | `{ "token": "..." }` | Bearer token in `Authorization` header |
| `api_key_relay` | `{ "header_name": "X-API-Key" }` | Consumer API keys relayed to backend |
| `custom_header` | `{ "headers": { "X-Custom": "value" } }` | Custom header name/value pairs |
| `oauth2` | `{ "authorize_url", "token_url", ... }` | OAuth 2.0 with PKCE (see OAuth2 section) |
## Parameter mapping

Each parameter in `parameter_mapping` tells the CLI how to translate what the agent sends into what your API expects:
| Field | Required | Description |
|---|---|---|
| `name` | yes | Parameter name (what the AI agent sends) |
| `type` | yes | `string`, `number`, `boolean`, `object`, or `array` |
| `required` | yes | Whether the parameter is required |
| `target` | yes | Where it goes: `path`, `query`, `body`, or `header` |
| `backend_key` | no | Backend field name if different from `name` |
| `default_value` | no | Default when not provided |
| `description` | no | Shown to AI agents — always include this |
Path parameters replace `{placeholder}` in the URL. Query parameters are appended as `?key=value`. Body parameters are sent as JSON (use dot-notation in `backend_key` for nesting, e.g. `address.city`). Header parameters are injected as HTTP headers.
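The dot-notation rule can be sketched as follows: split the key on dots and create intermediate objects on the way down. `set_nested` is an illustrative helper under that assumption, not the CLI's actual API.

```python
# Sketch of dot-notation nesting for body parameters.
def set_nested(body: dict, dotted_key: str, value) -> None:
    keys = dotted_key.split(".")
    node = body
    for key in keys[:-1]:
        node = node.setdefault(key, {})  # create intermediate objects as needed
    node[keys[-1]] = value

body = {}
set_nested(body, "address.city", "Berlin")
# body == {"address": {"city": "Berlin"}}
```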
A single tool can mix all four targets:
```json
{
  "name": "update_item",
  "http_method": "PUT",
  "endpoint_path": "/api/items/{item_id}",
  "parameter_mapping": [
    { "name": "item_id", "type": "string", "required": true, "target": "path" },
    { "name": "dry_run", "type": "boolean", "required": false, "target": "query", "default_value": false },
    { "name": "title", "type": "string", "required": true, "target": "body" },
    { "name": "idempotency_key", "type": "string", "required": false, "target": "header", "backend_key": "Idempotency-Key" }
  ]
}
```
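To make the four targets concrete, here is a sketch of how a mapping like the one above could resolve into request parts. The `resolve` function and its structure are hypothetical illustrations of the documented behavior, not the CLI's internals.

```python
# Illustrative resolution of a parameter_mapping into URL path, query,
# JSON body, and HTTP headers.
def resolve(endpoint_path, mapping, args):
    path, query, body, headers = endpoint_path, {}, {}, {}
    for p in mapping:
        key = p.get("backend_key", p["name"])            # backend name may differ
        value = args.get(p["name"], p.get("default_value"))
        if value is None:
            continue                                     # optional and not supplied
        if p["target"] == "path":
            path = path.replace("{" + p["name"] + "}", str(value))
        elif p["target"] == "query":
            query[key] = value
        elif p["target"] == "body":
            body[key] = value
        elif p["target"] == "header":
            headers[key] = value
    return path, query, body, headers

path, query, body, headers = resolve(
    "/api/items/{item_id}",
    [
        {"name": "item_id", "target": "path"},
        {"name": "dry_run", "target": "query", "default_value": False},
        {"name": "title", "target": "body"},
        {"name": "idempotency_key", "target": "header", "backend_key": "Idempotency-Key"},
    ],
    {"item_id": "42", "title": "New title", "idempotency_key": "abc"},
)
# path == "/api/items/42", query == {"dry_run": False},
# body == {"title": "New title"}, headers == {"Idempotency-Key": "abc"}
```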
## serve — Local MCP server

```sh
npx @toolrelay/cli serve toolrelay.json
npx @toolrelay/cli serve toolrelay.json --port 9000 --verbose
```
Starts a local MCP Streamable HTTP server. On startup it prints a Claude Desktop config snippet you can copy directly.
| Option | Description |
|---|---|
| `--base-url <url>` | Override `base_url` from config |
| `-p, --port <number>` | Port (default: 8787) |
| `-v, --verbose` | Show response headers and extra detail |
Endpoints:
| Endpoint | Description |
|---|---|
| `POST /mcp` | MCP JSON-RPC (`tools/list`, `tools/call`, etc.) |
| `GET /mcp` | SSE stream (with `Mcp-Session-Id`) |
| `GET /health` | Health check + OAuth status |
| `GET /timeline` | Call timeline as JSON |
| `GET /oauth/start` | Trigger OAuth flow (OAuth2 apps) |
| `GET /oauth/status` | Check OAuth state |
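`POST /mcp` speaks standard MCP JSON-RPC. As a sketch, a `tools/call` request body for the `get_user` tool from the example config might look like the following (the `id` and argument values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_user",
    "arguments": { "id": "123" }
  }
}
```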
Every tool call is tracked with sequence number, timing, arguments, status, and the gap between calls. On Ctrl+C, a formatted timeline and summary are printed. Sessions are saved to `.toolrelay/sessions/` — browse them with `toolrelay ui`.
## init — Generate a config

```sh
npx @toolrelay/cli init        # Interactive wizard
npx @toolrelay/cli init --yes  # Starter template
```
## validate — Check config

```sh
npx @toolrelay/cli validate toolrelay.json
```
Validates without making HTTP calls. Catches schema errors, path parameter mismatches, and missing auth fields. Run in CI to catch config issues before deploying.
## ui — Session viewer

```sh
npx @toolrelay/cli ui
```
Opens a local web UI to browse past serve sessions. Drill into individual tool calls to see parameter resolution, request/response bodies, diagnostics, and errors.
## publish — Deploy to production

```sh
npx @toolrelay/cli login
npx @toolrelay/cli publish toolrelay.json
npx @toolrelay/cli publish toolrelay.json --dry-run
```
Deploys your config to ToolRelay. The command is idempotent — creates on first run, updates only what changed after that.
What you get after publishing:
`https://proxy.toolrelay.io/mcp/<your-slug>`

| Option | Description |
|---|---|
| `--dry-run` | Show what would change without making API calls |
| `--prune` | Delete remote tools no longer in config |
For CI/CD, use `TOOLRELAY_DEPLOY_TOKEN` instead of `toolrelay login`. Generate one at toolrelay.io/dashboard/settings.
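As a sketch, a CI job could validate the config and then publish with the deploy token. This GitHub Actions fragment is an illustration, not official ToolRelay documentation; the job name and secret name are assumptions, while `TOOLRELAY_DEPLOY_TOKEN` comes from the text above.

```yaml
# Hypothetical CI job: validate first, then publish with a deploy token.
deploy:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - run: npx @toolrelay/cli validate toolrelay.json
    - run: npx @toolrelay/cli publish toolrelay.json
      env:
        TOOLRELAY_DEPLOY_TOKEN: ${{ secrets.TOOLRELAY_DEPLOY_TOKEN }}
```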
## OAuth2

When `auth_type` is `"oauth2"`, the `serve` command handles the full browser-based authorization code flow locally:
```json
{
  "auth_config": {
    "authorize_url": "https://provider.com/oauth/authorize",
    "token_url": "https://provider.com/oauth/token",
    "client_id": "${OAUTH_CLIENT_ID}",
    "client_secret": "${OAUTH_CLIENT_SECRET}",
    "scopes": "read write"
  }
}
```
On startup, a browser window opens for authorization. After you approve, tokens are stored in memory and injected into every tool call automatically. Refresh is handled transparently.
client_id and client_secret are optional — public client PKCE flows work without them.
Register http://localhost (any port) as a redirect URI in your OAuth provider.
PKCE is enabled by default. Set "use_pkce": false for providers that reject it (e.g., AWS Cognito confidential clients):
| Provider | PKCE | Recommendation |
|---|---|---|
| Auth0, Okta, Google, Supabase | Yes | Leave default |
| AWS Cognito (public client) | Yes | Leave default |
| AWS Cognito (confidential client) | May reject | Set `"use_pkce": false` |
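For background, PKCE (RFC 7636) pairs a random `code_verifier` with its S256 `code_challenge`; the authorization request carries the challenge and the token request proves possession of the verifier. A minimal sketch of that pairing (illustrative only, not the CLI's implementation):

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Return (code_verifier, code_challenge) per RFC 7636 S256."""
    # 32 random bytes -> 43-char base64url verifier, padding stripped.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge
```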
## Typical workflow

1. `npx @toolrelay/cli init` → generate `toolrelay.json`
2. `npx @toolrelay/cli serve` → test locally with Claude Desktop
3. Iterate on the config
4. `npx @toolrelay/cli publish` → deploy to production
## Files

| Path | Purpose |
|---|---|
| `toolrelay.json` | App + tool config (committed to git) |
| `.toolrelay/sessions/` | Local session logs (auto-gitignored) |
| `~/.toolrelay/credentials.json` | Login credentials (mode 0600) |
## License

Proprietary — see LICENSE for details.