# tokenlens

A lightweight registry of LLM model information, such as model names and context sizes, for building AI-powered apps.
Typed model metadata and context/cost utilities that help AI apps answer: Does this fit? What will it cost? Should we compact now? How much budget is left for the next turn?
Works great with the Vercel AI SDK out of the box, and remains SDK‑agnostic.

## Highlights
- `ModelId` autocomplete and safe helpers.

## Install

```sh
npm i tokenlens
# or
pnpm add tokenlens
# or
yarn add tokenlens
```

## Quick Start
```ts
import {
  type ModelId,
  modelMeta,
  percentOfContextUsed,
  tokensRemaining,
  costFromUsage,
} from 'tokenlens';

const id: ModelId = 'openai:gpt-4.1';

// Works with provider usage or Vercel AI SDK usage
const usage = { prompt_tokens: 3200, completion_tokens: 400 };

const meta = modelMeta(id);
const used = percentOfContextUsed({ id, usage, reserveOutput: 256 });
const remaining = tokensRemaining({ id, usage, reserveOutput: 256 });
const costUSD = costFromUsage({ id, usage });
console.log({ meta, used, remaining, costUSD });
```
## Core Helpers

- `resolveModel`, `listModels`, `MODEL_IDS`, `isModelId`, `assertModelId`
- `normalizeUsage`, `breakdownTokens`, `consumedTokens`
- `getContextWindow`, `remainingContext`, `percentRemaining`, `fitsContext`, `pickModelFor`
- `estimateCost`
- `shouldCompact`, `contextHealth`, `tokensToCompact`
- `sumUsage`, `estimateConversationCost`, `computeContextRot`, `nextTurnBudget`
- `modelMeta`, `percentOfContextUsed`, `tokensRemaining`, `costFromUsage`

## Provider‑Agnostic Usage
```ts
import { normalizeUsage, breakdownTokens } from 'tokenlens';

// Works with many shapes, including Vercel AI SDK fields
const u1 = normalizeUsage({ prompt_tokens: 1000, completion_tokens: 150 });
const u2 = normalizeUsage({ inputTokens: 900, outputTokens: 200, totalTokens: 1100 });
const b = breakdownTokens({ inputTokens: 900, cachedInputTokens: 300, reasoningTokens: 120 });
```
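Normalization across provider shapes boils down to mapping snake_case and camelCase field names onto one canonical form. A hedged sketch of that mapping (field names inferred from the examples above, not tokenlens source):

```typescript
// Illustrative usage normalization (not tokenlens internals).
interface NormalizedUsage { inputTokens: number; outputTokens: number; totalTokens: number }

function normalize(raw: Record<string, number | undefined>): NormalizedUsage {
  const inputTokens = raw.inputTokens ?? raw.prompt_tokens ?? 0;
  const outputTokens = raw.outputTokens ?? raw.completion_tokens ?? 0;
  const totalTokens = raw.totalTokens ?? raw.total_tokens ?? inputTokens + outputTokens;
  return { inputTokens, outputTokens, totalTokens };
}

console.log(normalize({ prompt_tokens: 1000, completion_tokens: 150 }));
// { inputTokens: 1000, outputTokens: 150, totalTokens: 1150 }
```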
## Async Fetch (models.dev)
```ts
import {
  fetchModels,
  FetchModelsError,
  type ModelCatalog,
  type ProviderInfo,
  type ProviderModel,
} from 'tokenlens';

// 1) Fetch the full catalog (Node 18+ or modern browsers with global fetch)
const catalog: ModelCatalog = await fetchModels();

// 2) Fetch by provider key (e.g. 'openai', 'anthropic', 'deepseek')
const openai: ProviderInfo | undefined = await fetchModels({ provider: 'openai' });

// 3) Fetch a specific model within a provider
const gpto: ProviderModel | undefined = await fetchModels({ provider: 'openai', model: 'gpt-4o' });

// 4) Search for a model across providers when provider is omitted
const matches: Array<{ provider: string; model: ProviderModel }> = await fetchModels({ model: 'gpt-4.1' });

// 5) Error handling with typed error codes
try {
  await fetchModels();
} catch (err) {
  if (err instanceof FetchModelsError) {
    // err.code is one of: 'UNAVAILABLE' | 'NETWORK' | 'HTTP' | 'PARSE'
    console.error('Fetch failed:', err.code, err.status, err.message);
  } else {
    throw err;
  }
}

// 6) Provide a custom fetch (for Node < 18 or custom runtimes)
// import fetch from 'cross-fetch' or 'undici'
// const catalog = await fetchModels({ fetch });
```
## Picking Model Metadata
```ts
import { getModels, getModelMeta } from 'tokenlens';

// Build a static catalog (or use fetchModels() dynamically)
const providers = getModels();

// Provider info
const openai = getModelMeta({ providers, provider: 'openai' });

// Single model
const gpto = getModelMeta({ providers, provider: 'openai', model: 'gpt-4o' });

// Multiple models
const picks = getModelMeta({ providers, provider: 'openai', models: ['gpt-4o', 'o3-mini'] });

// Providerless id
const viaId = getModelMeta({ providers, id: 'openai:gpt-4o' }); // or 'openai/gpt-4o'
```
## Context Budgeting & Compaction
```ts
import { remainingContext, shouldCompact, tokensToCompact, contextHealth } from 'tokenlens';

const rc = remainingContext({ modelId: 'anthropic:claude-3-5-sonnet-20240620', usage: u1, reserveOutput: 256 });

if (shouldCompact({ modelId: 'anthropic:claude-3-5-sonnet-20240620', usage: u1 })) {
  const n = tokensToCompact({ modelId: 'anthropic:claude-3-5-sonnet-20240620', usage: u1 });
  // summarize oldest messages by ~n tokens
}

const badge = contextHealth({ modelId: 'openai:gpt-4.1', usage: u2 }); // { status: 'ok'|'warn'|'compact' }
```
## Advanced

`remainingContext` supports `strategy: 'provider-default' | 'combined' | 'input-only'`:

- `provider-default` (default): prefers `combinedMax` when available; otherwise uses `inputMax`.
- `combined`: always uses `combinedMax` (falls back to `inputMax` if missing).
- `input-only`: uses only `inputMax` for remaining/percent calculations.

`shouldCompact` defaults to `threshold: 0.85`; `contextHealth` defaults to `warnAt: 0.75`, `compactAt: 0.85`.

## Conversation Utilities
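The strategy and threshold behavior described above can be sketched as a pair of small self-contained functions (assumed semantics mirroring the documented defaults, not tokenlens source):

```typescript
// Sketch of context-window selection per strategy (illustrative, not tokenlens internals).
type Strategy = 'provider-default' | 'combined' | 'input-only';

function resolveWindow(strategy: Strategy, inputMax: number, combinedMax?: number): number {
  switch (strategy) {
    case 'provider-default':
    case 'combined':
      // Both prefer combinedMax; fall back to inputMax when it is missing.
      return combinedMax ?? inputMax;
    case 'input-only':
      return inputMax;
  }
}

// Compaction decision at the documented default threshold of 0.85.
function pastThreshold(usedTokens: number, window: number, threshold = 0.85): boolean {
  return usedTokens / window >= threshold;
}

console.log(resolveWindow('provider-default', 100_000, 200_000)); // 200000
console.log(pastThreshold(90_000, 100_000)); // true
```

Choosing `input-only` gives the most conservative budget for models whose combined window exceeds their input window.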
```ts
import { sumUsage, estimateConversationCost, computeContextRot, nextTurnBudget } from 'tokenlens';

const totals = sumUsage([turn1.usage, turn2.usage, turn3.usage]);
const cost = estimateConversationCost({ modelId: 'openai:gpt-4.1', usages: [turn1.usage, turn2.usage] });
const rot = computeContextRot({ messageTokens: [800, 600, 400, 300, 200], keepRecentTurns: 2, modelId: 'openai:gpt-4.1' });
const nextBudget = nextTurnBudget({ modelId: 'openai:gpt-4.1', usage: totals, reserveOutput: 256 });
```
## Listing Models

```ts
import { listModels } from 'tokenlens';

const stableOpenAI = listModels({ provider: 'openai', status: 'stable' });
```
## Data Source & Sync

```sh
pnpm -C packages/tokenlens sync:models
```

## Acknowledgements

## License

MIT