
code-mode-toon
Lightweight MCP orchestrator with TOON compression (30-90% token savings) and lazy loading for efficient AI agent workflows
A lightweight Model Context Protocol (MCP) orchestrator designed for efficiency at scale. It features TOON compression (reducing token usage by 30-90%) and Lazy Loading, making it the ideal solution for complex, multi-tool agentic workflows.
Recent articles from Anthropic and Cloudflare highlight a critical bottleneck: AI agents struggle with complex, multi-step workflows because they lack state.
While Code Execution (e.g., TypeScript) allows agents to maintain state and structure workflows effectively, it introduces a new problem: Data Bloat. Real-world operations (like SRE log analysis or database dumps) generate massive JSON payloads that explode the context window, making stateful execution prohibitively expensive.
CodeModeTOON bridges this gap:
graph LR
A[AI Agent<br/>Claude/Cursor] -->|JSON-RPC| B[CodeModeTOON<br/>Server]
B -->|Lazy Load| C[Perplexity]
B -->|Lazy Load| D[Context7]
B -->|Lazy Load| E[Custom Servers]
C -->|Raw JSON| B
D -->|Raw JSON| B
E -->|Raw JSON| B
B -->|TOON<br/>Compressed| A
style B fill:#4f46e5,color:#fff
style A fill:#10b981,color:#fff
Data Flow: Requests route through CodeModeTOON → Servers are lazy-loaded on-demand → Responses are TOON-compressed before returning to the agent.
- TOON Compression: reduces token usage by 30-90% for structured data.
- Lazy Loading: servers only start when needed; zero overhead for unused tools.
- Code Execution: secure JS execution with auto-proxied MCP tool access, running in Node's vm module (not for multi-tenant use).
- Self-Correction: designed for programmatic discovery and self-correction.
- suggest_approach: meta-tool that recommends the best execution strategy (code vs. workflow vs. direct call).
- execute_code: returns operation counts and compression savings to reinforce efficient behavior.
✅ Perfect for: personal AI assistants (Claude, Cursor) running trusted code.
❌ Not ideal for: multi-tenant or public services.
Add this to your ~/.cursor/mcp.json:
{
  "mcpServers": {
    "code-mode-toon": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "code-mode-toon"],
      "env": {
        "CODE_MODE_TOON_CONFIG": "~/.cursor/mcp.json"
      }
    }
  }
}
CodeModeTOON includes a pre-built Claude Skill to make your AI assistant an expert at using this orchestrator.
code-mode-toon-workflow-expert — a specialized skill that teaches Claude how to prioritize workflows, handle TOON-encoded output, and discover lazy-loaded tools.
Installation:
Copy claude-skills/code-mode-toon-workflow-expert.skill into your .claude/skills/ directory (or import via the Claude desktop app).
Copy these prompts into your AI's custom instructions (e.g., .cursorrules or Claude Project instructions) to maximize CodeModeTOON's potential.
Goal: Teaches the AI to act as an orchestrator and prioritize workflows.
YOU ARE AN AGENTIC ORCHESTRATOR. You have access to "CodeModeTOON", a high-efficiency MCP bridge.
1. PRIORITIZE WORKFLOWS: Before running single tools, check `list_workflows`. If a workflow exists (e.g., `research`, `k8s-detective`), USE IT. It is faster and saves tokens.
2. HANDLE COMPRESSED DATA: Outputs may be "TOON encoded" (highly compressed JSON). This is normal. Do not complain about "unreadable data" - simply parse it or ask for specific fields if needed.
3. BATCH OPERATIONS: Never run 3+ sequential tool calls if they can be batched. Use `execute_code` to run them in a single block.
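Rule 3 above can be sketched as follows. This is a minimal, self-contained illustration of the batching pattern — `fetchLogs`, `fetchEvents`, and `fetchMetrics` are hypothetical stand-ins for proxied MCP tools, not part of CodeModeTOON's API:

```javascript
// Hypothetical stand-ins for proxied MCP tools (not CodeModeTOON's real API).
async function fetchLogs(pod) { return `${pod}: 0 errors`; }
async function fetchEvents(pod) { return `${pod}: 2 events`; }
async function fetchMetrics(pod) { return `${pod}: cpu 12%`; }

// One execute_code block, three tool calls batched concurrently —
// instead of three separate round-trips through the agent's context.
async function audit(pod) {
  const [logs, events, metrics] = await Promise.all([
    fetchLogs(pod), fetchEvents(pod), fetchMetrics(pod),
  ]);
  return { pod, logs, events, metrics };
}

audit('web-1').then((report) => console.log(report));
```

Only the final aggregated object reaches the agent's context, rather than three intermediate tool results.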
Goal: Prevents the AI from giving up if a tool isn't immediately visible.
TOOLS ARE LAZY LOADED. If you need a capability (e.g., "search", "kubernetes", "database") and don't see the tool:
1. DO NOT assume it's missing.
2. RUN `search_tools({ query: "..." })` to find it.
3. RUN `get_tool_api({ serverName: "..." })` to learn how to use it.
4. Only then, execute the tool.
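The discovery flow above can be sketched with a toy registry. `search_tools` and `get_tool_api` here are simplified stand-ins illustrating the lazy-loading pattern, not the server's actual implementation:

```javascript
// Toy registry: two servers, neither started yet (illustrative only).
const registry = {
  perplexity: { tools: ['perplexity_ask'], started: false },
  context7: { tools: ['get-library-docs'], started: false },
};

function search_tools({ query }) {
  // Discovery matches tool names against the query; nothing starts here.
  return Object.entries(registry)
    .filter(([, s]) => s.tools.some((t) => t.includes(query)))
    .map(([name]) => name);
}

function get_tool_api({ serverName }) {
  // Only now does the server actually start (lazy load).
  const server = registry[serverName];
  server.started = true;
  return { serverName, tools: server.tools };
}

const matches = search_tools({ query: 'ask' });
console.log(matches);                      // → ['perplexity']
console.log(registry.perplexity.started);  // → false: discovery starts nothing
get_tool_api({ serverName: matches[0] });
console.log(registry.perplexity.started);  // → true: started on demand
console.log(registry.context7.started);    // → false: unused server never starts
```

The point of the rule: a tool absent from the visible list is not missing — it simply has not been loaded yet.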
Goal: Enforces token-saving behaviors for large data operations.
OPTIMIZE FOR TOKENS. When fetching large datasets (logs, docs, API responses):
1. ALWAYS wrap the output in `TOON.encode(data)` inside `execute_code`.
2. PREFER structured data (JSON/Objects) over plain text. TOON compresses structure by ~83%, but text by only ~4%.
3. IF synthesizing data, do it server-side (via workflow `synthesize: true`) to avoid pulling raw data into context.
After installation, try this 30-second demo in Claude or Cursor:
// Ask your AI assistant to run this via execute_code
const api = await get_tool_api({ serverName: 'perplexity' });
const result = await servers['perplexity'].perplexity_ask({
  messages: [{ role: 'user', content: "Explain TOON compression" }]
});
console.log(result); // See compression in action! ~40% token savings
What just happened? The response was automatically TOON-encoded, saving tokens.
// Inside execute_code
const api = await get_tool_api({ serverName: 'perplexity' });
// Request large data - automatically compressed!
const result = await servers['perplexity'].perplexity_ask({
  messages: [{ role: 'user', content: "Summarize the history of Rome" }]
});
console.log(result); // Returns TOON-encoded string, saving ~40% tokens
// Fetch large documentation from Context7
const api = await get_tool_api({ serverName: 'context7' });
const docs = await servers['context7']['get-library-docs']({
  context7CompatibleLibraryID: 'kubernetes/kubernetes'
});
console.log(TOON.encode(docs)); // Massive compression on structured data
// Run a complex research workflow
const result = await workflows.research({
  goal: "Compare xsync vs sync.Map performance",
  queries: ["xsync vs sync.Map benchmarks"],
  synthesize: true,
  outputFile: "/tmp/research.toon"
});
console.log(result.synthesis); // LLM-synthesized findings
CodeModeTOON supports Workflows—pre-defined, server-side TypeScript modules that orchestrate multiple MCP tools.
research — a powerful research assistant that runs your queries, optionally synthesizes findings server-side, and writes results to an output file.
See .workflows/README.md for detailed documentation, usage examples, and AI prompts.
Scenario 2 (92% savings) demonstrates CodeModeTOON's strength:
| Metric | Original | TOON | Savings |
|---|---|---|---|
| Characters | 37,263 | 2,824 | ~92% |
| Estimated Tokens* | ~9,315 | ~706 | ~8,600 tokens |
| Cost (Claude Sonnet)** | $0.028 | $0.002 | $0.026 |

*Assuming 4 chars/token average
**$3/M tokens input pricing
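As a sanity check, the table's arithmetic can be reproduced directly from the raw character counts, using the footnoted assumptions (4 chars/token, $3 per million input tokens). Note the counts imply roughly 92% savings, matching the scenario's headline figure:

```javascript
// Reproduce the benchmark table from raw character counts.
const CHARS_PER_TOKEN = 4;              // footnote *: 4 chars/token average
const PRICE_PER_TOKEN = 3 / 1_000_000;  // footnote **: $3/M input tokens

const originalChars = 37_263;
const toonChars = 2_824;

const originalTokens = originalChars / CHARS_PER_TOKEN; // ~9,315
const toonTokens = toonChars / CHARS_PER_TOKEN;         // ~706

const savingsPct = (1 - toonChars / originalChars) * 100;
const tokensSaved = originalTokens - toonTokens;
const costSaved = tokensSaved * PRICE_PER_TOKEN;

console.log(savingsPct.toFixed(1) + '%');  // → 92.4%
console.log(Math.round(tokensSaved));      // → 8610 tokens saved
console.log('$' + costSaved.toFixed(3));   // → $0.026 per call
```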
Key Insight: For infrastructure audits, log analysis, or database dumps, TOON compression can reduce token costs by 90%+, making complex agentic workflows feasible within budget.
Scenario 1: Natural Language Query (History of Rome) — unstructured text compresses poorly, as expected.
Scenario 2: Kubernetes Cluster Audit (50 Pods) — highly structured, repetitive JSON (infrastructure dumps) compresses extremely well.
Cause: CodeModeTOON can't locate your MCP config.
Solution: Ensure CODE_MODE_TOON_CONFIG points to your config:
export CODE_MODE_TOON_CONFIG=~/.cursor/mcp.json
Cause: Results aren't being encoded.
Solution: Use console.log(TOON.encode(data)), not console.log(data).
Cause: Server name mismatch.
Solution: Verify server name matches your config. Use get_tool_api({ serverName: 'name' }) to inspect available servers.
⚠️ The vm module is NOT a security sandbox. Suitable for personal AI assistant use (Claude, Cursor) with trusted code. Not for multi-tenant or public services.
Built by Ziad Hassan (Senior SRE/DevOps) — LinkedIn · GitHub
Contributions are welcome! 🙌
git clone https://github.com/ziad-hsn/code-mode-toon.git
cd code-mode-toon
npm install
npm test
MIT License — see LICENSE for details.
