claw-router

Smart LLM router — save 63% on inference costs. 30+ models, one wallet, x402 micropayments.

Source: npm · Version: 0.1.1 (latest) · Maintainers: 1
ClawRouter

Save 63% on LLM costs. Automatically.

Route every request to the cheapest model that can handle it. One wallet, 30+ models, zero API keys.

npm License: MIT TypeScript Node

Docs · Models · Telegram · X

"What is 2+2?"            → Gemini Flash    $0.60/M    saved 94%
"Summarize this article"   → DeepSeek Chat   $0.42/M    saved 96%
"Build a React component"  → Claude Sonnet   $15.00/M   best quality
"Prove this theorem"       → o3              $8.00/M    saved 20%

ClawRouter is a smart LLM router for OpenClaw. It classifies each request, picks the cheapest model that can handle it, and pays per-request via x402 USDC micropayments on Base. No account, no API key — your wallet signs each payment.

Quick Start

# 1. Install — auto-generates a wallet on Base
openclaw plugin install claw-router

# 2. Fund your wallet with USDC on Base (address printed on install)
#    A few dollars is enough to start — each request costs fractions of a cent

# 3. Enable smart routing
openclaw config set model blockrun/auto

Every request now routes to the cheapest capable model.

Already have a funded wallet? Bring your own: export BLOCKRUN_WALLET_KEY=0x...

Want a specific model instead? openclaw config set model openai/gpt-4o — you still get x402 payments and usage logging.

How Routing Works

Hybrid rules-first approach. Heuristic rules handle ~80% of requests in <1ms at zero cost. Only ambiguous queries hit the LLM classifier.

Request → Rule-based scorer (< 1ms, free)
            ├── Clear → pick model → done
            └── Ambiguous → LLM classifier (~200ms, ~$0.00003)
                              └── classify → pick model → done

Rules Engine

8 scoring dimensions: token count, code presence, reasoning markers, technical terms, creative markers, simple indicators, multi-step patterns, question complexity.

Score maps to a tier:

| Score | Tier | Primary Model | Output $/M | vs GPT-4o |
| --- | --- | --- | --- | --- |
| ≤ 0 | SIMPLE | gemini-2.5-flash | $0.60 | 94% cheaper |
| 1-2 | ambiguous | → LLM classifier | | |
| 3-4 | MEDIUM | deepseek-chat | $0.42 | 96% cheaper |
| 5-6 | COMPLEX | claude-sonnet-4 | $15.00 | higher quality |
| 7+ | REASONING | o3 | $8.00 | 20% cheaper |
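The score-to-tier mapping in the table above can be sketched as a small function; the function name and exact boundary handling are assumptions, not the package's actual code:

```typescript
// Hypothetical sketch of the score → tier mapping from the routing table.
type Tier = "SIMPLE" | "AMBIGUOUS" | "MEDIUM" | "COMPLEX" | "REASONING";

function scoreToTier(score: number): Tier {
  if (score <= 0) return "SIMPLE";
  if (score <= 2) return "AMBIGUOUS"; // handed off to the LLM classifier
  if (score <= 4) return "MEDIUM";
  if (score <= 6) return "COMPLEX";
  return "REASONING";
}
```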

LLM Classifier Fallback

When rules score in the ambiguous zone (1-2), ClawRouter sends the first 500 characters to gemini-2.5-flash with max_tokens: 10 and asks for one word: SIMPLE, MEDIUM, COMPLEX, or REASONING. Cost per classification: ~$0.00003. Results cached for 1 hour.
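A minimal sketch of that fallback step follows. The prompt wording, cache key, and `callLLM` callback are assumptions; only the 500-character truncation, the one-word answer, and the 1-hour cache come from the description above:

```typescript
// Hedged sketch of the LLM classifier fallback. callLLM stands in for a
// gemini-2.5-flash chat-completion request issued with max_tokens: 10.
const cache = new Map<string, { tier: string; expires: number }>();

async function classify(
  prompt: string,
  callLLM: (p: string) => Promise<string>,
): Promise<string> {
  const key = prompt.slice(0, 500); // only the first 500 characters are sent
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) return hit.tier;

  const word = (
    await callLLM(`Answer with one word (SIMPLE, MEDIUM, COMPLEX, or REASONING): ${key}`)
  ).trim().toUpperCase();

  cache.set(key, { tier: word, expires: Date.now() + 60 * 60 * 1000 }); // 1-hour TTL
  return word;
}
```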

Estimated Savings

| Tier | % of Traffic | Output $/M |
| --- | --- | --- |
| SIMPLE | 40% | $0.60 |
| MEDIUM | 30% | $0.42 |
| COMPLEX | 20% | $15.00 |
| REASONING | 10% | $8.00 |

Weighted avg: $3.67/M — 63% savings vs GPT-4o

Every routed request logs its decision:

[ClawRouter] deepseek-chat (MEDIUM, rules, confidence=0.85)
             Cost: $0.0004 | Baseline: $0.0095 | Saved: 95.8%

Models

30+ models across 5 providers, all through one wallet:

| Model | Input $/M | Output $/M | Context | Reasoning |
| --- | --- | --- | --- | --- |
| **OpenAI** | | | | |
| gpt-5.2 | $1.75 | $14.00 | 400K | * |
| gpt-5-mini | $0.25 | $2.00 | 200K | |
| gpt-5-nano | $0.05 | $0.40 | 128K | |
| gpt-4o | $2.50 | $10.00 | 128K | |
| gpt-4o-mini | $0.15 | $0.60 | 128K | |
| o3 | $2.00 | $8.00 | 200K | * |
| o3-mini | $1.10 | $4.40 | 128K | * |
| o4-mini | $1.10 | $4.40 | 128K | * |
| **Anthropic** | | | | |
| claude-opus-4.5 | $15.00 | $75.00 | 200K | * |
| claude-sonnet-4 | $3.00 | $15.00 | 200K | * |
| claude-haiku-4.5 | $1.00 | $5.00 | 200K | |
| **Google** | | | | |
| gemini-3-pro-preview | $2.00 | $12.00 | 1M | * |
| gemini-2.5-pro | $1.25 | $10.00 | 1M | * |
| gemini-2.5-flash | $0.15 | $0.60 | 1M | |
| **DeepSeek** | | | | |
| deepseek-chat | $0.28 | $0.42 | 128K | |
| deepseek-reasoner | $0.28 | $0.42 | 128K | * |
| **xAI** | | | | |
| grok-3 | $3.00 | $15.00 | 131K | * |
| grok-3-fast | $5.00 | $25.00 | 131K | * |
| grok-3-mini | $0.30 | $0.50 | 131K | |

Full list in src/models.ts.

Payment

No account. No API key. Payment IS authentication via x402.

Request → 402 (price: $0.003) → wallet signs USDC → retry → response

USDC stays in your wallet until the moment each request is paid — non-custodial. The price is visible in the 402 response before your wallet signs.
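The 402-then-retry flow can be sketched as a fetch wrapper. The header names (`x-price`, `x-payment`) and the `sign()` callback are illustrative assumptions, not the package's actual wire format:

```typescript
// Hedged sketch of the x402 flow: request → 402 with a quoted price →
// wallet signs → retry with the payment attached. Assumes plain-object headers.
type Fetcher = (url: string, init: RequestInit) => Promise<Response>;

async function payAndRetry(
  fetcher: Fetcher,
  url: string,
  init: RequestInit,
  sign: (price: string) => Promise<string>, // wallet signs the quoted USDC amount
): Promise<Response> {
  const first = await fetcher(url, init);
  if (first.status !== 402) return first;

  const price = first.headers.get("x-price") ?? "0"; // price visible before signing
  const payment = await sign(price);
  return fetcher(url, {
    ...init,
    headers: { ...(init.headers as Record<string, string>), "x-payment": payment },
  });
}
```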

Pricing formula:

Price = (input_tokens × input_rate) + (max_output_tokens × output_rate)
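The formula above in code, with rates expressed per million tokens (the function name is hypothetical):

```typescript
// Quote for a request: input tokens at the input rate plus the reserved
// output budget at the output rate, rates given in $ per million tokens.
function quotePrice(
  inputTokens: number,
  maxOutputTokens: number,
  inputRatePerM: number,
  outputRatePerM: number,
): number {
  return (inputTokens * inputRatePerM + maxOutputTokens * outputRatePerM) / 1_000_000;
}

// Example with deepseek-chat rates from the Models table ($0.28 in / $0.42 out):
quotePrice(2_000, 1_000, 0.28, 0.42); // $0.00098
```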

Funding your wallet — send USDC on Base to your wallet address:

  • Coinbase — buy USDC, send to Base
  • Any CEX — withdraw USDC to Base
  • Bridge — move USDC from any chain to Base

Usage Logging

Every routed request is logged as a JSON line:

~/.openclaw/blockrun/logs/usage-2026-02-03.jsonl
{"timestamp":"2026-02-03T20:15:30.123Z","model":"google/gemini-2.5-flash","cost":0.000246,"latencyMs":1250}
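Because each entry is one JSON object per line, the log is easy to aggregate. A short sketch that tallies cost per model (the function name is hypothetical; the entry shape follows the example line above):

```typescript
// Sum per-model cost from the contents of a day's JSONL usage log.
interface UsageEntry {
  timestamp: string;
  model: string;
  cost: number;
  latencyMs: number;
}

function costByModel(jsonl: string): Map<string, number> {
  const totals = new Map<string, number>();
  for (const line of jsonl.split("\n")) {
    if (!line.trim()) continue; // skip blank lines
    const e = JSON.parse(line) as UsageEntry;
    totals.set(e.model, (totals.get(e.model) ?? 0) + e.cost);
  }
  return totals;
}
```

Pass it the file contents, e.g. `costByModel(fs.readFileSync(path, "utf8"))`.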

Configuration

Override Routing

# openclaw.yaml
plugins:
  - id: "claw-router"
    config:
      routing:
        tiers:
          COMPLEX:
            primary: "openai/gpt-4o"
          SIMPLE:
            primary: "openai/gpt-4o-mini"
        scoring:
          reasoningKeywords: ["proof", "theorem", "formal verification"]

Pin a Model

Skip routing. Use one model for everything:

openclaw config set model openai/gpt-4o

You still get x402 payments and usage logging.

Architecture

src/
├── index.ts              # Plugin entry — register() + activate()
├── provider.ts           # Registers "blockrun" provider in OpenClaw
├── proxy.ts              # Local HTTP proxy — routing + x402 payment
├── models.ts             # 30+ model definitions with pricing
├── auth.ts               # Wallet key resolution (env, config, prompt)
├── logger.ts             # JSON lines usage logger
├── types.ts              # OpenClaw plugin type definitions
└── router/
    ├── index.ts           # route() entry point
    ├── rules.ts           # Rule-based classifier (8 scoring dimensions)
    ├── llm-classifier.ts  # LLM fallback (gemini-flash, cached)
    ├── selector.ts        # Tier → model selection + cost calculation
    ├── config.ts          # Default routing configuration
    └── types.ts           # RoutingDecision, Tier, ScoringResult

The plugin runs a local HTTP proxy between OpenClaw and BlockRun's API. OpenClaw sees a standard OpenAI-compatible endpoint at localhost. Routing is client-side — open source and inspectable.

OpenClaw Agent
    │
    ▼
ClawRouter (localhost proxy)
    │  ① Classify query (rules → LLM fallback)
    │  ② Pick cheapest capable model
    │  ③ Sign x402 USDC payment
    │
    ▼
BlockRun API → Provider (OpenAI, Anthropic, Google, DeepSeek, xAI)

Programmatic Usage

Use ClawRouter as a library without OpenClaw:

import { startProxy } from "claw-router";

const proxy = await startProxy({
  walletKey: process.env.BLOCKRUN_WALLET_KEY!,
  onReady: (port) => console.log(`Proxy on port ${port}`),
  onRouted: (d) => {
    const saved = (d.savings * 100).toFixed(0);
    console.log(`${d.model} (${d.tier}) saved ${saved}%`);
  },
});

// Use with any OpenAI-compatible client
const res = await fetch(`${proxy.baseUrl}/v1/chat/completions`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "blockrun/auto",
    messages: [{ role: "user", content: "What is 2+2?" }],
  }),
});

await proxy.close();

Or use the router directly:

import { route, DEFAULT_ROUTING_CONFIG } from "claw-router";

const decision = await route("Prove sqrt(2) is irrational", undefined, 4096, {
  config: DEFAULT_ROUTING_CONFIG,
  modelPricing,
  payFetch,
  apiBase: "https://api.blockrun.ai/api",
});

console.log(decision);
// {
//   model: "openai/o3",
//   tier: "REASONING",
//   confidence: 0.9,
//   method: "rules",
//   savings: 0.20,
//   costEstimate: 0.032776,
//   baselineCost: 0.040970,
// }

Development

git clone https://github.com/blockrunai/claw-router.git
cd claw-router
npm install
npm run build        # Build with tsup
npm run dev          # Watch mode
npm run typecheck    # Type check

# Run tests
npx tsup test/e2e.ts --format esm --outDir test/dist --no-dts
node test/dist/e2e.js

# Run with live proxy (requires funded wallet)
BLOCKRUN_WALLET_KEY=0x... node test/dist/e2e.js

Roadmap

  • Provider plugin — one wallet, 30+ models, x402 payment proxy
  • Smart routing — hybrid rules + LLM classifier, 4-tier model selection
  • Usage logging — JSON lines to disk, per-request cost tracking
  • Graceful fallback — auto-switch on rate limit or provider error
  • Spend controls — daily/monthly budgets, server-side enforcement
  • Cost dashboard — analytics at blockrun.ai

Why Not OpenRouter / LiteLLM?

They're built for developers — create an account, get an API key, prepay a balance, manage it through a dashboard.

ClawRouter is built for agents. The difference:

| | OpenRouter / LiteLLM | ClawRouter |
| --- | --- | --- |
| Setup | Human creates account, gets API key | Agent generates wallet, pays per request |
| Payment | Prepaid balance (custodial) | Per-request micropayment (non-custodial) |
| Auth | API key (shared secret) | Wallet signature (cryptographic proof) |
| Custody | Provider holds your money | USDC stays in YOUR wallet until spent |
| Routing | Proprietary / closed | Open source, client-side, inspectable |

As agents become autonomous, they need financial infrastructure designed for machines. An agent shouldn't need a human to sign up for a service and paste an API key. It should generate a wallet, receive funds, and pay per request — programmatically.

License

MIT

Keywords

llm

Package last updated on 04 Feb 2026
