@blockrun/llm v1.4.2 (npm)

@blockrun/llm (TypeScript SDK)

@blockrun/llm is a TypeScript/Node.js SDK for accessing 40+ large language models (GPT-5, Claude, Gemini, Grok, DeepSeek, Kimi, and more) with automatic pay-per-request USDC micropayments via the x402 protocol. No API keys required — your wallet signature is your authentication. Supports Base and Solana chains.


Supported Chains

| Chain        | Network                        | Payment      | Status      |
| ------------ | ------------------------------ | ------------ | ----------- |
| Base         | Base Mainnet (Chain ID: 8453)  | USDC         | Primary     |
| Base Testnet | Base Sepolia (Chain ID: 84532) | Testnet USDC | Development |
| Solana       | Solana Mainnet                 | USDC (SPL)   | New         |

XRPL (RLUSD): Use @blockrun/llm-xrpl for XRPL payments

Protocol: x402 v2 (CDP Facilitator)

Installation

# Base and Solana support (optional Solana deps auto-installed)
npm install @blockrun/llm
# or
pnpm add @blockrun/llm
# or
yarn add @blockrun/llm

Quick Start (Base - Default)

import { LLMClient } from '@blockrun/llm';

const client = new LLMClient();  // Uses BASE_CHAIN_WALLET_KEY (never sent to server)
const response = await client.chat('openai/gpt-4o', 'Hello!');

That's it. The SDK handles x402 payment automatically.

Quick Start (Solana)

import { SolanaLLMClient } from '@blockrun/llm';

// SOLANA_WALLET_KEY env var (bs58-encoded Solana secret key)
const client = new SolanaLLMClient();
const response = await client.chat('openai/gpt-4o', 'gm Solana');
console.log(response);

Set SOLANA_WALLET_KEY to your bs58-encoded Solana secret key. Payments are automatic via x402 — your key never leaves your machine.

Solana Support

Pay for AI calls with Solana USDC via sol.blockrun.ai:

import { SolanaLLMClient } from '@blockrun/llm';

// SOLANA_WALLET_KEY env var (bs58-encoded Solana secret key)
const client = new SolanaLLMClient();

// Or pass key directly
const client2 = new SolanaLLMClient({ privateKey: 'your-bs58-solana-key' });

// Same API as LLMClient
const response = await client.chat('openai/gpt-4o', 'gm Solana');
console.log(response);

// Live Search with Grok (Solana payment)
const tweet = await client.chat('xai/grok-3-mini', 'What is trending on X?', { search: true });

Setup:

  • Export your Solana wallet key: export SOLANA_WALLET_KEY="your-bs58-key"
  • Fund with USDC on Solana mainnet
  • That's it — payments are automatic via x402

Endpoint: https://sol.blockrun.ai/api
Payment: Solana USDC (SPL, mainnet)

How It Works

  1. You send a request to BlockRun's API
  2. The API returns a 402 Payment Required with the price
  3. The SDK automatically signs a USDC payment on Base
  4. The request is retried with the payment proof
  5. You receive the AI response

Your private key never leaves your machine - it's only used for local signing.
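The request/402/sign/retry flow above can be sketched as a small fetch wrapper. This is illustrative only, not the SDK's internals: the `Fetcher` type, the `x-price` header, and the `signPayment` callback are assumptions standing in for the real x402 plumbing.

```typescript
// Sketch of the x402 retry loop (hypothetical names; the real SDK may differ).
type Fetcher = (
  url: string,
  init: { headers: Record<string, string> },
) => Promise<{ status: number; headers: Record<string, string>; body: string }>;

async function fetchWithX402(
  url: string,
  fetcher: Fetcher,
  signPayment: (price: string) => string, // signs locally; key never leaves the process
): Promise<string> {
  // First attempt carries no payment.
  const first = await fetcher(url, { headers: {} });
  if (first.status !== 402) return first.body;

  // The 402 response quotes a price; sign a payment proof for it locally.
  const price = first.headers['x-price'];
  const proof = signPayment(price);

  // Retry with the proof attached; the body is the AI response.
  const second = await fetcher(url, { headers: { 'X-PAYMENT': proof } });
  return second.body;
}
```

A fake `fetcher` that answers 402 once, then 200, exercises the whole loop without touching the network.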

Smart Routing (ClawRouter)

Let the SDK automatically pick the cheapest capable model for each request:

import { LLMClient } from '@blockrun/llm';

const client = new LLMClient();

// Auto-routes to cheapest capable model
const result = await client.smartChat('What is 2+2?');
console.log(result.response);     // '4'
console.log(result.model);        // 'nvidia/kimi-k2.5' (cheap, fast)
console.log(`Saved ${(result.routing.savings * 100).toFixed(0)}%`); // 'Saved 78%'

// Complex reasoning task -> routes to reasoning model
const complex = await client.smartChat('Prove the Riemann hypothesis step by step');
console.log(complex.model);  // 'xai/grok-4-1-fast-reasoning'

Routing Profiles

| Profile | Description                              | Best For                 |
| ------- | ---------------------------------------- | ------------------------ |
| free    | nvidia/gpt-oss-120b only (FREE)          | Testing, development     |
| eco     | Cheapest models per tier (DeepSeek, xAI) | Cost-sensitive production |
| auto    | Best balance of cost/quality (default)   | General use              |
| premium | Top-tier models (OpenAI, Anthropic)      | Quality-critical tasks   |

// Use premium models for complex tasks
const result = await client.smartChat(
  'Write production-grade async TypeScript code',
  { routingProfile: 'premium' }
);
console.log(result.model);  // 'anthropic/claude-opus-4.5'

How ClawRouter Works

ClawRouter uses a 14-dimension rule-based classifier to analyze each request:

  • Token count - Short vs long prompts
  • Code presence - Programming keywords
  • Reasoning markers - "prove", "step by step", etc.
  • Technical terms - Architecture, optimization, etc.
  • Creative markers - Story, poem, brainstorm, etc.
  • Agentic patterns - Multi-step, tool use indicators

The classifier runs in <1ms, 100% locally, and routes to one of four tiers:

| Tier      | Example Tasks                | Auto Profile Model          |
| --------- | ---------------------------- | --------------------------- |
| SIMPLE    | "What is 2+2?", definitions  | nvidia/kimi-k2.5            |
| MEDIUM    | Code snippets, explanations  | xai/grok-code-fast-1        |
| COMPLEX   | Architecture, long documents | google/gemini-3.1-pro       |
| REASONING | Proofs, multi-step reasoning | xai/grok-4-1-fast-reasoning |
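The tiering above can be approximated with a few keyword rules. This sketch uses just three of the dimensions mentioned earlier (reasoning markers, code presence, token count) and is not ClawRouter's actual classifier:

```typescript
// Toy rule-based tier classifier in the spirit of ClawRouter
// (illustrative; the real classifier scores 14 dimensions).
type Tier = 'SIMPLE' | 'MEDIUM' | 'COMPLEX' | 'REASONING';

function classify(prompt: string): Tier {
  const p = prompt.toLowerCase();
  // Reasoning markers trump everything else.
  const reasoning = ['prove', 'step by step', 'derive'].some((m) => p.includes(m));
  // Programming keywords suggest at least a mid-tier model.
  const code = ['function', 'typescript', 'async', 'class '].some((m) => p.includes(m));
  // Very long prompts go to a long-context tier.
  const long = prompt.split(/\s+/).length > 200;

  if (reasoning) return 'REASONING';
  if (long) return 'COMPLEX';
  if (code) return 'MEDIUM';
  return 'SIMPLE';
}
```

Because the rules are plain string checks, classification costs microseconds and never leaves the process, which is the property the real router relies on.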

Available Models

OpenAI GPT-5 Family

| Model                | Input Price | Output Price |
| -------------------- | ----------- | ------------ |
| openai/gpt-5.2       | $1.75/M     | $14.00/M     |
| openai/gpt-5-mini    | $0.25/M     | $2.00/M      |
| openai/gpt-5-nano    | $0.05/M     | $0.40/M      |
| openai/gpt-5.2-pro   | $21.00/M    | $168.00/M    |
| openai/gpt-5.2-codex | $1.75/M     | $14.00/M     |

OpenAI GPT-4 Family

| Model               | Input Price | Output Price |
| ------------------- | ----------- | ------------ |
| openai/gpt-4.1      | $2.00/M     | $8.00/M      |
| openai/gpt-4.1-mini | $0.40/M     | $1.60/M      |
| openai/gpt-4.1-nano | $0.10/M     | $0.40/M      |
| openai/gpt-4o       | $2.50/M     | $10.00/M     |
| openai/gpt-4o-mini  | $0.15/M     | $0.60/M      |

OpenAI O-Series (Reasoning)

| Model          | Input Price | Output Price |
| -------------- | ----------- | ------------ |
| openai/o1      | $15.00/M    | $60.00/M     |
| openai/o1-mini | $1.10/M     | $4.40/M      |
| openai/o3      | $2.00/M     | $8.00/M      |
| openai/o3-mini | $1.10/M     | $4.40/M      |
| openai/o4-mini | $1.10/M     | $4.40/M      |

Anthropic Claude

| Model                       | Input Price | Output Price |
| --------------------------- | ----------- | ------------ |
| anthropic/claude-opus-4.6   | $5.00/M     | $25.00/M     |
| anthropic/claude-opus-4.5   | $5.00/M     | $25.00/M     |
| anthropic/claude-opus-4     | $15.00/M    | $75.00/M     |
| anthropic/claude-sonnet-4.6 | $3.00/M     | $15.00/M     |
| anthropic/claude-sonnet-4   | $3.00/M     | $15.00/M     |
| anthropic/claude-haiku-4.5  | $1.00/M     | $5.00/M      |

Google Gemini

| Model                         | Input Price | Output Price |
| ----------------------------- | ----------- | ------------ |
| google/gemini-3.1-pro         | $2.00/M     | $12.00/M     |
| google/gemini-3-flash-preview | $0.50/M     | $3.00/M      |
| google/gemini-2.5-pro         | $1.25/M     | $10.00/M     |
| google/gemini-2.5-flash       | $0.30/M     | $2.50/M      |
| google/gemini-2.5-flash-lite  | $0.10/M     | $0.40/M      |

DeepSeek

| Model                      | Input Price | Output Price |
| -------------------------- | ----------- | ------------ |
| deepseek/deepseek-chat     | $0.28/M     | $0.42/M      |
| deepseek/deepseek-reasoner | $0.28/M     | $0.42/M      |

xAI Grok

| Model                           | Input Price | Output Price | Context | Notes                      |
| ------------------------------- | ----------- | ------------ | ------- | -------------------------- |
| xai/grok-3                      | $3.00/M     | $15.00/M     | 131K    | Flagship                   |
| xai/grok-3-mini                 | $0.30/M     | $0.50/M      | 131K    | Fast & affordable          |
| xai/grok-4-1-fast-reasoning     | $0.20/M     | $0.50/M      | 2M      | Latest, chain-of-thought   |
| xai/grok-4-1-fast-non-reasoning | $0.20/M     | $0.50/M      | 2M      | Latest, direct response    |
| xai/grok-4-fast-reasoning       | $0.20/M     | $0.50/M      | 2M      | Step-by-step reasoning     |
| xai/grok-4-fast-non-reasoning   | $0.20/M     | $0.50/M      | 2M      | Quick responses            |
| xai/grok-code-fast-1            | $0.20/M     | $1.50/M      | 256K    | Code generation            |
| xai/grok-4-0709                 | $0.20/M     | $1.50/M      | 256K    | Premium quality            |
| xai/grok-2-vision               | $2.00/M     | $10.00/M     | 32K     | Vision capabilities        |

Moonshot Kimi

| Model               | Input Price | Output Price |
| ------------------- | ----------- | ------------ |
| moonshot/kimi-k2.5  | $0.60/M     | $3.00/M      |

MiniMax

| Model                   | Input Price | Output Price |
| ----------------------- | ----------- | ------------ |
| minimax/minimax-m2.7    | $0.30/M     | $1.20/M      |
| minimax/minimax-m2.5    | $0.30/M     | $1.20/M      |

NVIDIA (Free & Hosted)

| Model               | Input Price | Output Price | Notes                               |
| ------------------- | ----------- | ------------ | ----------------------------------- |
| nvidia/gpt-oss-120b | FREE        | FREE         | OpenAI open-weight 120B (Apache 2.0) |
| nvidia/kimi-k2.5    | $0.60/M     | $3.00/M      | Moonshot 1T MoE with vision         |

E2E Verified Models

All models below have been tested end-to-end via the TypeScript SDK (Feb 2026):

| Provider  | Model                     | Status |
| --------- | ------------------------- | ------ |
| OpenAI    | openai/gpt-4o-mini        | Passed |
| OpenAI    | openai/gpt-5.2-codex      | Passed |
| Anthropic | anthropic/claude-opus-4.6 | Passed |
| Anthropic | anthropic/claude-sonnet-4 | Passed |
| Google    | google/gemini-2.5-flash   | Passed |
| DeepSeek  | deepseek/deepseek-chat    | Passed |
| xAI       | xai/grok-3                | Passed |
| Moonshot  | moonshot/kimi-k2.5        | Passed |

Image Generation

| Model                     | Price             |
| ------------------------- | ----------------- |
| openai/dall-e-3           | $0.04-0.08/image  |
| openai/gpt-image-1        | $0.02-0.04/image  |
| google/nano-banana        | $0.05/image       |
| google/nano-banana-pro    | $0.10-0.15/image  |
| black-forest/flux-1.1-pro | $0.04/image       |

Testnet Models (Base Sepolia)

| Model               | Price          |
| ------------------- | -------------- |
| openai/gpt-oss-20b  | $0.001/request |
| openai/gpt-oss-120b | $0.002/request |

Testnet models use flat pricing (no token counting) for simplicity.
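Mainnet models, by contrast, bill per token at the per-million rates in the tables above. A request's cost works out as follows (an illustrative helper, not an SDK function):

```typescript
// Cost of one token-priced request, given per-million-token rates,
// e.g. openai/gpt-4o: $2.50/M input, $10.00/M output.
function estimateCost(
  inputTokens: number,
  outputTokens: number,
  inputPricePerM: number,
  outputPricePerM: number,
): number {
  return (inputTokens / 1_000_000) * inputPricePerM
       + (outputTokens / 1_000_000) * outputPricePerM;
}

// 1,000 prompt tokens + 500 completion tokens on gpt-4o:
// 0.001 * 2.50 + 0.0005 * 10.00 = $0.0075
```

Testnet's flat pricing simply skips this token accounting, which is why it is convenient for development.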

X/Twitter Data (Powered by AttentionVC)

Access X/Twitter user profiles, followers, and followings via AttentionVC partner API. No API keys needed — pay-per-request via x402.

import { LLMClient } from '@blockrun/llm';

const client = new LLMClient();

// Look up user profiles ($0.002/user, min $0.02)
const users = await client.xUserLookup(['elonmusk', 'blockaborr']);
for (const user of users.users) {
  console.log(`@${user.userName}: ${user.followers} followers`);
}

// Get followers ($0.05/page, ~200 accounts)
let result = await client.xFollowers('blockaborr');
for (const f of result.followers) {
  console.log(`  @${f.screen_name}`);
}

// Paginate through all followers ($0.05 per page)
while (result.has_next_page) {
  result = await client.xFollowers('blockaborr', result.next_cursor);
  for (const f of result.followers) {
    console.log(`  @${f.screen_name}`);
  }
}

// Get followings ($0.05/page)
const followings = await client.xFollowings('blockaborr');

Works on both LLMClient (Base) and SolanaLLMClient.

Search

Search the web, X/Twitter, and news without using a chat model:

import { LLMClient } from '@blockrun/llm';

const client = new LLMClient();

const result = await client.search('latest AI agent frameworks 2026');
console.log(result.summary);
for (const cite of result.citations ?? []) {
  console.log(`  - ${cite}`);
}

// Filter by source type and date range
const filtered = await client.search('BlockRun x402', {
  sources: ['web', 'x'],
  fromDate: '2026-01-01',
  maxResults: 5,
});

Image Editing (img2img)

Edit existing images with text prompts:

import { LLMClient } from '@blockrun/llm';

const client = new LLMClient();

const result = await client.imageEdit(
  'Make the sky purple and add northern lights',
  'data:image/png;base64,...',  // base64 or URL
  { model: 'openai/gpt-image-1' }
);
console.log(result.data[0].url);

Testnet Usage

For development and testing without real USDC, use the testnet:

import { testnetClient } from '@blockrun/llm';

// Create testnet client (uses Base Sepolia)
const client = testnetClient({ privateKey: '0x...' });

// Chat with testnet model
const response = await client.chat('openai/gpt-oss-20b', 'Hello!');
console.log(response);

// Check if client is on testnet
console.log(client.isTestnet()); // true

Testnet Setup

Available Testnet Models

  • openai/gpt-oss-20b - $0.001/request (flat price)
  • openai/gpt-oss-120b - $0.002/request (flat price)

Manual Testnet Configuration

import { LLMClient } from '@blockrun/llm';

// Or configure manually
const client = new LLMClient({
  privateKey: '0x...',
  apiUrl: 'https://testnet.blockrun.ai/api'
});
const response = await client.chat('openai/gpt-oss-20b', 'Hello!');

Usage Examples

Simple Chat

import { LLMClient } from '@blockrun/llm';

const client = new LLMClient();  // Uses BASE_CHAIN_WALLET_KEY (never sent to server)

const response = await client.chat('openai/gpt-4o', 'Explain quantum computing');
console.log(response);

// With system prompt
const response2 = await client.chat('anthropic/claude-sonnet-4', 'Write a haiku', {
  system: 'You are a creative poet.',
});

Smart Routing (ClawRouter)

Save up to 78% on inference costs with intelligent model routing. ClawRouter uses a 14-dimension rule-based scoring algorithm to select the cheapest model that can handle your request (<1ms, 100% local).

import { LLMClient } from '@blockrun/llm';

const client = new LLMClient();

// Auto-route to cheapest capable model
const result = await client.smartChat('What is 2+2?');
console.log(result.response);     // '4'
console.log(result.model);        // 'google/gemini-2.5-flash'
console.log(result.routing.tier); // 'SIMPLE'
console.log(`Saved ${(result.routing.savings * 100).toFixed(0)}%`); // 'Saved 78%'

// Routing profiles
const free = await client.smartChat('Hello!', { routingProfile: 'free' });     // Zero cost
const eco = await client.smartChat('Explain AI', { routingProfile: 'eco' });   // Budget optimized
const auto = await client.smartChat('Code review', { routingProfile: 'auto' }); // Balanced (default)
const premium = await client.smartChat('Write a legal brief', { routingProfile: 'premium' }); // Best quality

Routing Profiles:

| Profile | Description                   | Best For                 |
| ------- | ----------------------------- | ------------------------ |
| free    | NVIDIA free models only       | Testing, simple queries  |
| eco     | Budget-optimized              | Cost-sensitive workloads |
| auto    | Intelligent routing (default) | General use              |
| premium | Best quality models           | Critical tasks           |

Tiers:

| Tier      | Example Tasks              | Typical Models           |
| --------- | -------------------------- | ------------------------ |
| SIMPLE    | Greetings, math, lookups   | Gemini Flash, GPT-4o-mini |
| MEDIUM    | Explanations, summaries    | GPT-4o, Claude Sonnet    |
| COMPLEX   | Analysis, code generation  | GPT-5.2, Claude Opus     |
| REASONING | Multi-step logic, planning | o3, DeepSeek Reasoner    |

Full Chat Completion

import { LLMClient, type ChatMessage } from '@blockrun/llm';

const client = new LLMClient();  // Uses BASE_CHAIN_WALLET_KEY (never sent to server)

const messages: ChatMessage[] = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'How do I read a file in Node.js?' },
];

const result = await client.chatCompletion('openai/gpt-4o', messages);
console.log(result.choices[0].message.content);

List Available Models

import { LLMClient } from '@blockrun/llm';

const client = new LLMClient();  // Uses BASE_CHAIN_WALLET_KEY (never sent to server)
const models = await client.listModels();

for (const model of models) {
  console.log(`${model.id}: $${model.inputPrice}/M input`);
}

Multiple Requests

import { LLMClient } from '@blockrun/llm';

const client = new LLMClient();  // Uses BASE_CHAIN_WALLET_KEY (never sent to server)

const [gpt, claude, gemini] = await Promise.all([
  client.chat('openai/gpt-4o', 'What is 2+2?'),
  client.chat('anthropic/claude-sonnet-4', 'What is 3+3?'),
  client.chat('google/gemini-2.5-flash', 'What is 4+4?'),
]);

Prediction Markets (Powered by Predexon)

Access real-time prediction market data from Polymarket, Kalshi, and Binance Futures via Predexon. No API keys needed — pay-per-request via x402.

Polymarket

import { LLMClient } from '@blockrun/llm';

const client = new LLMClient();

// List markets with optional filters ($0.001/request)
const markets = await client.pm("polymarket/markets");
const filtered = await client.pm("polymarket/markets", { status: "active", limit: 10 });
const searched = await client.pm("polymarket/markets", { search: "bitcoin" });

// List events ($0.001/request)
const events = await client.pm("polymarket/events");

// Historical trades ($0.001/request)
const trades = await client.pm("polymarket/trades");

// OHLCV candlestick data for a specific condition ($0.001/request)
const candles = await client.pm("polymarket/candlesticks/0x1234abcd...");

// Wallet profile ($0.005/request — tier 2)
const profile = await client.pm("polymarket/wallet/0xABC123...");

// Wallet P&L ($0.005/request — tier 2)
const pnl = await client.pm("polymarket/wallet/pnl/0xABC123...");

// Global leaderboard ($0.001/request)
const leaderboard = await client.pm("polymarket/leaderboard");

Kalshi & Binance

// Kalshi markets ($0.001/request)
const kalshiMarkets = await client.pm("kalshi/markets");

// Kalshi trades ($0.001/request)
const kalshiTrades = await client.pm("kalshi/trades");

// Binance candles for supported pairs ($0.001/request)
const btcCandles = await client.pm("binance/candles/BTCUSDT");
const ethCandles = await client.pm("binance/candles/ETHUSDT");
// Also: SOLUSDT, XRPUSDT

Cross-Platform

// Cross-platform matching pairs ($0.001/request)
const pairs = await client.pm("matching-markets/pairs");

All current endpoints are GET. The pmQuery() method is available for future POST endpoints.

Works on both LLMClient (Base) and SolanaLLMClient.

Configuration

// Default: reads BASE_CHAIN_WALLET_KEY from environment
const client = new LLMClient();

// Or pass options explicitly
const customClient = new LLMClient({
  privateKey: '0x...',           // Your wallet key (never sent to server)
  apiUrl: 'https://blockrun.ai/api',   // Optional
  timeout: 60000,                // Optional (ms)
});

Environment Variables

| Variable              | Description                                                    |
| --------------------- | -------------------------------------------------------------- |
| BASE_CHAIN_WALLET_KEY | Your Base chain wallet private key (for Base / LLMClient)       |
| SOLANA_WALLET_KEY     | Your Solana wallet secret key, bs58-encoded (for SolanaLLMClient) |
| BLOCKRUN_API_URL      | API endpoint (optional, default: https://blockrun.ai/api)       |
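The interplay between explicit options, environment variables, and defaults can be sketched as a simple precedence chain (the ordering here is an assumption about the SDK's behavior, consistent with the table above):

```typescript
// Explicit constructor option > BLOCKRUN_API_URL env var > built-in default.
function resolveApiUrl(
  explicit: string | undefined,
  env: Record<string, string | undefined>,
): string {
  return explicit ?? env.BLOCKRUN_API_URL ?? 'https://blockrun.ai/api';
}
```

Passing `process.env` as the second argument reproduces the documented default when neither the option nor the variable is set.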

Error Handling

import { LLMClient, APIError, PaymentError } from '@blockrun/llm';

const client = new LLMClient();

try {
  const response = await client.chat('openai/gpt-4o', 'Hello!');
} catch (error) {
  if (error instanceof PaymentError) {
    console.error('Payment failed - check USDC balance');
  } else if (error instanceof APIError) {
    console.error(`API error: ${error.message}`);
  }
}

Testing

Running Unit Tests

Unit tests do not require API access or funded wallets:

npm test                          # Run tests in watch mode
npm test -- run                   # Run tests once
npm test -- --coverage            # Run with coverage report

Running Integration Tests

Integration tests call the production API and require:

  • A funded Base wallet with USDC ($1+ recommended)
  • BASE_CHAIN_WALLET_KEY environment variable set
  • Estimated cost: ~$0.05 per test run

export BASE_CHAIN_WALLET_KEY=0x...
npm test -- test/integration       # Run integration tests only

Integration tests are automatically skipped if BASE_CHAIN_WALLET_KEY is not set.

Setting Up Your Wallet

Base (EVM)

  • Create a wallet on Base (Coinbase Wallet, MetaMask, etc.)
  • Get USDC on Base for API payments
  • Export your private key and set as BASE_CHAIN_WALLET_KEY

# .env
BASE_CHAIN_WALLET_KEY=0x...

Solana

  • Create a Solana wallet (Phantom, Backpack, Solflare, etc.)
  • Get USDC on Solana for API payments
  • Export your secret key and set as SOLANA_WALLET_KEY

# .env
SOLANA_WALLET_KEY=...your_bs58_secret_key

Note: Solana transactions are gasless for the user - the CDP facilitator pays for transaction fees.

Security

Private Key Safety

  • Private key stays local: Your key is only used for signing on your machine
  • No custody: BlockRun never holds your funds
  • Verify transactions: All payments are on-chain and verifiable

Best Practices

Private Key Management:

  • Use environment variables, never hard-code keys
  • Use dedicated wallets for API payments (separate from main holdings)
  • Set spending limits by only funding payment wallets with small amounts
  • Never commit .env files to version control
  • Rotate keys periodically

Input Validation: The SDK validates all inputs before API requests:

  • Private keys (format, length, valid hex)
  • API URLs (HTTPS required for production, HTTP allowed for localhost)
  • Model names and parameters (ranges for max_tokens, temperature, top_p)

Error Sanitization: API errors are automatically sanitized to prevent sensitive information leaks.
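One plausible form of such sanitization is redacting anything that looks like a hex private key before an error message is surfaced. This is an assumed pattern for illustration, not the SDK's actual implementation:

```typescript
// Redact 0x-prefixed hex runs (8+ chars) so a leaked key fragment
// never reaches logs or error reporters.
function sanitizeError(message: string): string {
  return message.replace(/0x[0-9a-fA-F]{8,}/g, '0x[REDACTED]');
}
```

Short hex values such as status codes or small addresses below the length threshold pass through untouched.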

Monitoring:

const address = client.getWalletAddress();
console.log(`View transactions: https://basescan.org/address/${address}`);

Keep Updated:

npm update @blockrun/llm  # Get security patches

TypeScript Support

Full TypeScript support with exported types:

import {
  LLMClient,
  testnetClient,
  type ChatMessage,
  type ChatResponse,
  type ChatOptions,
  type Model,
  // Smart routing types
  type SmartChatOptions,
  type SmartChatResponse,
  type RoutingDecision,
  type RoutingProfile,
  type RoutingTier,
  APIError,
  PaymentError,
} from '@blockrun/llm';

Agent Wallet Setup

One-line setup for agent runtimes (Claude Code skills, MCP servers, etc.):

import { setupAgentWallet } from '@blockrun/llm';

// Auto-creates wallet if none exists, returns ready client
const client = setupAgentWallet();
const response = await client.chat('openai/gpt-5.4', 'Hello!');

For Solana:

import { setupAgentSolanaWallet } from '@blockrun/llm';

const client = await setupAgentSolanaWallet();
const response = await client.chat('anthropic/claude-sonnet-4.6', 'Hello!');

Check wallet status:

import { status } from '@blockrun/llm';

await status();
// Wallet: 0xCC8c...5EF8
// Balance: $5.30 USDC

Wallet Scanning

The SDK auto-detects wallets from any provider on your system:

import { scanWallets, scanSolanaWallets } from '@blockrun/llm';

// Scans ~/.<dir>/wallet.json for Base wallets
const baseWallets = scanWallets();

// Scans ~/.<dir>/solana-wallet.json and ~/.brcc/wallet.json
const solWallets = scanSolanaWallets();

getOrCreateWallet() checks scanned wallets first, so if you already have a wallet from another BlockRun tool, it will be reused automatically.

Response Caching

The SDK caches responses to avoid duplicate payments:

import { getCachedByRequest, saveToCache, clearCache } from '@blockrun/llm';

// Automatic TTLs by endpoint:
// - X/Twitter: 1 hour
// - Search: 15 minutes
// - Models: 24 hours
// - Chat/Image: no cache (every call is unique)

// Manual cache management
clearCache(); // Remove all cached responses
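The per-endpoint TTL behavior above can be sketched with an in-memory map (the cache keys, storage, and endpoint names here are assumptions; the SDK's real cache lives on disk):

```typescript
// TTLs mirroring the documented policy; endpoints absent from the
// map (chat, image) are never cached.
const TTLS_MS: Record<string, number> = {
  x: 60 * 60 * 1000,            // X/Twitter: 1 hour
  search: 15 * 60 * 1000,       // Search: 15 minutes
  models: 24 * 60 * 60 * 1000,  // Models: 24 hours
};

const store = new Map<string, { value: string; expires: number }>();

function getCached(endpoint: string, key: string, now: number): string | undefined {
  const entry = store.get(`${endpoint}:${key}`);
  return entry && entry.expires > now ? entry.value : undefined;
}

function putCached(endpoint: string, key: string, value: string, now: number): void {
  const ttl = TTLS_MS[endpoint];
  if (ttl === undefined) return; // chat/image: every call is unique, skip caching
  store.set(`${endpoint}:${key}`, { value, expires: now + ttl });
}
```

Passing `now` explicitly instead of calling `Date.now()` inside makes the expiry logic easy to test deterministically.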

Cost Logging

Track spending across sessions:

import { logCost, getCostSummary } from '@blockrun/llm';

// Costs are logged to ~/.blockrun/data/costs.jsonl
const summary = getCostSummary();
console.log(`Total: $${summary.totalUsd.toFixed(2)}`);
console.log(`Calls: ${summary.calls}`);
console.log(`By model:`, summary.byModel);
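A summary like the one above can be derived by folding over JSONL lines. The record shape (`model`, `usd`) is an assumption about what `costs.jsonl` contains, purely for illustration:

```typescript
// Aggregate cost-log lines (one JSON object per line) into the
// totals shown by getCostSummary-style reporting.
interface CostRecord {
  model: string;
  usd: number;
}

function summarize(lines: string[]): {
  totalUsd: number;
  calls: number;
  byModel: Record<string, number>;
} {
  const byModel: Record<string, number> = {};
  let totalUsd = 0;
  for (const line of lines) {
    const rec = JSON.parse(line) as CostRecord;
    totalUsd += rec.usd;
    byModel[rec.model] = (byModel[rec.model] ?? 0) + rec.usd;
  }
  return { totalUsd, calls: lines.length, byModel };
}
```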

Anthropic SDK Compatibility

Use the official Anthropic SDK interface with BlockRun's pay-per-request backend:

import { AnthropicClient } from '@blockrun/llm';

const client = new AnthropicClient();  // Auto-detects wallet, auto-pays

const response = await client.messages.create({
  model: 'claude-sonnet-4-6',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(response.content[0].text);

// Any model works in Anthropic format
const gptResponse = await client.messages.create({
  model: 'openai/gpt-5.4',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hello from GPT!' }],
});

The AnthropicClient wraps the official @anthropic-ai/sdk with a custom fetch that handles x402 payment automatically. Your private key never leaves your machine.

Frequently Asked Questions

What is @blockrun/llm?

@blockrun/llm is a TypeScript SDK that provides pay-per-request access to 40+ large language models from OpenAI, Anthropic, Google, xAI, DeepSeek, Moonshot, and more. It uses the x402 protocol for automatic USDC micropayments — no API keys, no subscriptions, no vendor lock-in.

How does payment work?

When you make an API call, the SDK automatically handles x402 payment. It signs a USDC transaction locally using your wallet private key (which never leaves your machine), and includes the payment proof in the request header. Settlement is non-custodial and instant on Base or Solana.

What is smart routing / ClawRouter?

ClawRouter is a built-in smart routing engine that analyzes your request across 14 dimensions and automatically picks the cheapest model capable of handling it. Routing happens locally in under 1ms. It can save up to 78% on LLM costs compared to using premium models for every request.

How much does it cost?

Pay only for what you use. Prices start at $0.0002 per request (GPT-5 Nano). There are no minimums, subscriptions, or monthly fees. $5 in USDC gets you thousands of requests.
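As a sanity check on that claim, the arithmetic at the quoted $0.0002/request floor:

```typescript
// $5 of USDC at the cheapest listed rate ($0.0002 per request).
const requestsForFiveDollars = Math.round(5 / 0.0002);
// roughly 25,000 requests before the wallet is empty
```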

Does it support both Base and Solana?

Yes. Use LLMClient for Base (EVM) payments and SolanaLLMClient for Solana payments. Same API, different payment chain.

License

MIT

Keywords

llm


Package last updated on 23 Mar 2026
