
token-shrinker
Model Context Protocol (MCP) server for compressing AI context to reduce token usage. Provides tools to shrink text, summarize files, and cache repository content.
TokenShrinker provides AI context compression tools via the Model Context Protocol (MCP). It reduces token usage by intelligently summarizing text, files, and repositories for MCP-compatible AI assistants.
AI Agent (MCP host) --> MCP request --> TokenShrinker (MCP server)
    (chat text)                        (shrink / summarize / select)
                                                  |
                                                  v
                                   compressed context (returned)
                                                  |
                                                  v
                    Agent forwards compressed payload to model backend
npm install -g token-shrinker
TokenShrinker supports multiple AI providers! Create a .env file in your project directory:
# Choose your provider (default: openrouter)
echo "AI_PROVIDER=openrouter" >> .env # Options: openrouter, openai, anthropic
# Provider-specific API keys (choose one based on your AI_PROVIDER)
echo "OPENROUTER_API_KEY=sk-or-v1-your-openrouter-key-here" >> .env
# OR
echo "OPENAI_API_KEY=sk-your-openai-key-here" >> .env
# OR
echo "ANTHROPIC_API_KEY=sk-ant-your-anthropic-key-here" >> .env
# Optional: Set your preferred model for your provider
echo "AI_MODEL=anthropic/claude-3.5-sonnet" >> .env
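Taken together, the commands above leave a .env file like this (placeholder values from the examples):

```
AI_PROVIDER=openrouter
OPENROUTER_API_KEY=sk-or-v1-your-openrouter-key-here
AI_MODEL=anthropic/claude-3.5-sonnet
```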
Environment Variables:

Provider Selection:
AI_PROVIDER - Choose your AI provider (openrouter, openai, anthropic). Default: openrouter (free tier model).

API Keys (choose based on your provider):
OPENROUTER_API_KEY - Get from openrouter.ai
OPENAI_API_KEY - Get from platform.openai.com
ANTHROPIC_API_KEY - Get from console.anthropic.com

Model Selection:
AI_MODEL - Generic model name that works across providers
OPENROUTER_MODEL, OPENAI_MODEL, ANTHROPIC_MODEL - Provider-specific model settings

Examples by Provider:
OpenRouter (Recommended for Free Tier):
AI_PROVIDER=openrouter
OPENROUTER_API_KEY=sk-or-v1-...
AI_MODEL=meta-llama/llama-4-maverick:free
OpenAI:
AI_PROVIDER=openai
OPENAI_API_KEY=sk-...
AI_MODEL=gpt-4o-mini
Anthropic:
AI_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...
AI_MODEL=claude-3-haiku-20240307
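To make the provider/key pairing concrete, here is a minimal Node.js sketch (function and object names are hypothetical, not TokenShrinker's actual code) of how a provider and its API key could be resolved from these variables:

```javascript
// Illustrative sketch only: map AI_PROVIDER to the API key variable it needs.
// Environment variable names match the docs above; everything else is assumed.
const PROVIDER_KEYS = {
  openrouter: "OPENROUTER_API_KEY",
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
};

function resolveConfig(env) {
  const provider = env.AI_PROVIDER || "openrouter"; // default per the docs
  const keyVar = PROVIDER_KEYS[provider];
  if (!keyVar) throw new Error(`Unknown AI_PROVIDER: ${provider}`);
  return { provider, apiKey: env[keyVar], model: env.AI_MODEL };
}

console.log(resolveConfig({ AI_PROVIDER: "openai", OPENAI_API_KEY: "sk-...", AI_MODEL: "gpt-4o-mini" }));
// { provider: 'openai', apiKey: 'sk-...', model: 'gpt-4o-mini' }
```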
Add to your claude_desktop_config.json:
For OpenRouter (default):
{
  "mcpServers": {
    "token-shrinker": {
      "command": "npx",
      "args": ["token-shrinker"],
      "env": {
        "AI_PROVIDER": "openrouter",
        "OPENROUTER_API_KEY": "sk-or-v1-your-openrouter-key-here",
        "AI_MODEL": "meta-llama/llama-4-maverick:free"
      }
    }
  }
}
For OpenAI:
{
  "mcpServers": {
    "token-shrinker": {
      "command": "npx",
      "args": ["token-shrinker"],
      "env": {
        "AI_PROVIDER": "openai",
        "OPENAI_API_KEY": "sk-your-openai-key-here",
        "AI_MODEL": "gpt-4o-mini"
      }
    }
  }
}
For Anthropic:
{
  "mcpServers": {
    "token-shrinker": {
      "command": "npx",
      "args": ["token-shrinker"],
      "env": {
        "AI_PROVIDER": "anthropic",
        "ANTHROPIC_API_KEY": "sk-ant-your-anthropic-key-here",
        "AI_MODEL": "claude-3-haiku-20240307"
      }
    }
  }
}
Add similar configurations to your MCP settings. You can switch between providers by changing the AI_PROVIDER and corresponding API key environment variables.
Once connected, you can switch providers on-the-fly using MCP tools:
# Ask Claude/Cursor to switch providers
"I want to use OpenAI instead of OpenRouter for compression"
# Or switch models
"Use Claude 3.5 Sonnet for better compression quality"
The set-provider, set-api-key, and set-model tools allow you to configure TokenShrinker dynamically through natural language!
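The set-provider and set-api-key tools are not documented with schemas in this README; as an illustrative assumption, their inputs likely mirror the single-field shape of the set-model tool documented below, for example:

```json
{
  "provider": "openai"
}
```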
All summaries are saved in a summaries/ directory in your project root:
your-project/
├── src/
│   ├── app.js
│   └── utils.js
├── summaries/
│   ├── src/
│   │   ├── app.js.summary.json
│   │   └── utils.js.summary.json
│   └── .cache.json
├── .env
└── package.json
File Structure:
summaries/ - Mirror of your source tree with .summary.json files
summaries/.cache.json - Cache metadata (file hashes and timestamps)

TokenShrinker provides 5 MCP tools for AI assistants:
shrink - Compress text content to reduce token usage

// Input
{
  "text": "Your large text content here..."
}

// Output
{
  "compressedText": "Shortened version...",
  "compressionRatio": "75%",
  "success": true
}
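As a rough illustration of where a figure like "75%" could come from, here is a character-count sketch (TokenShrinker's real metric may well be token-based; this function is hypothetical):

```javascript
// Illustrative only: derive a compression ratio from character counts.
function compressionRatio(original, compressed) {
  const saved = 1 - compressed.length / original.length;
  return `${Math.round(saved * 100)}%`;
}

console.log(compressionRatio("a".repeat(400), "a".repeat(100))); // 75%
```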
summarize - Generate summaries for text, files, or entire repositories

// Input
{
  "content": "your content or file path",
  "type": "text" // or "file" or "repo"
}
fetch-summary - Retrieve cached repository summaries

// Input
{
  "repoPath": "/path/to/repo" // optional, uses current dir
}
set-model - Set your preferred model for the current provider

// Input
{
  "model": "anthropic/claude-3.5-sonnet"
}

// Output
{
  "message": "Model set to: anthropic/claude-3.5-sonnet",
  "model": "anthropic/claude-3.5-sonnet",
  "note": "This setting persists for the current session..."
}
get-config - View current configuration and available models

// Input
{}

// Output
{
  "openRouterApiKey": "configured",
  "currentModel": "meta-llama/llama-4-maverick:free",
  "availableModels": ["anthropic/claude-3.5-sonnet", "openai/gpt-4o", "..."]
}
When connected to Claude Desktop or Cursor, you can use natural language:
"Can you compress this long code snippet for me?"
"Show me a summary of this entire codebase"
"What's the cached summary of our current repository?"
The MCP server handles everything automatically!
FAQs
We found that token-shrinker demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 1 open source maintainer collaborating on the project.