# toon-optimizer

Smart TOON optimization with cost analysis and integrations, built on the official `@toon-format/toon` encoder.
## Why

Reduce LLM token costs by converting tabular JSON payloads to TOON text when it helps, and avoiding TOON when it hurts (e.g. deep nesting). This package adds:

- Token counting and cost estimates at multiple scales (per request, per 1K, per 100K, and per 1M requests)
- Smart recommendations with structure analysis and actionable suggestions
- Textual TOON output compatible with the official spec
- Streaming, middleware, and a CLI for easy adoption
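For orientation, TOON renders a uniform array of objects as a compact header plus comma-separated rows. The sample below is illustrative; see the `@toon-format/toon` spec for the authoritative syntax:

```text
users[2]{id,name}:
  1,Alice
  2,Bob
```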
## Install

```bash
npm install toon-optimizer
```
## Quick Start

```js
import { toTOON, analyzeSavings, smartConvert } from 'toon-optimizer';

// Plain TOON text, compatible with the official spec
const toonText = toTOON(data);

// Token counts, cost estimates, and a recommendation
const analysis = analyzeSavings(data, {
  cost: { provider: 'openai', model: 'gpt-4o-mini' },
  conciseSummary: true
});
console.log(analysis.summary);

// Convert only when the projected savings clear the threshold
const result = smartConvert(data, { autoDetect: true, threshold: 0.2, fallback: 'json' });
if (result.format === 'toon-text') {
  // TOON was worthwhile; send the TOON text to the model instead of JSON
}
```
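Conceptually, the `threshold` option compares projected token savings against a cutoff. A minimal sketch of that decision (`pickFormat` and `countTokens` are our own illustrative names, not the library's API, and the ~4-chars-per-token estimate is a crude stand-in for a real tokenizer):

```javascript
// Crude token estimate (~4 characters per token); the package uses a
// real tokenizer, this is only a stand-in for illustration.
const countTokens = (text) => Math.ceil(text.length / 4);

// Hypothetical threshold-based decision, NOT the library's internal code.
function pickFormat(jsonText, toonText, threshold) {
  const jsonTokens = countTokens(jsonText);
  const toonTokens = countTokens(toonText);
  const savings = (jsonTokens - toonTokens) / jsonTokens; // fraction saved
  // Convert only when projected savings clear the cutoff (0.2 = 20%).
  return savings >= threshold ? 'toon-text' : 'json';
}
```

With `threshold: 0.2`, a payload whose TOON form saves only a few percent stays JSON, which is what the `fallback: 'json'` option expresses.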
## CLI

```bash
node dist/cli/index.cjs analyze ./tests/test2.json --provider openai --model gpt-4o-mini --summary
node dist/cli/index.cjs convert ./tests/test2.json --output ./results/test2.toon
```
## Examples (measured)

## When TOON helps (and when it doesn’t)

- Helps: arrays of objects with consistent keys (tabular) at modest depth (≤3)
- Avoid: deeply nested structures (depth >3), heterogeneous row keys, large unstructured text
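The depth and key-consistency heuristics above can be sketched in a few lines. This is our own illustration of the idea, not the package's actual analysis code:

```javascript
// Nesting depth of a JSON value: scalars are 0, each object/array level adds 1.
function maxDepth(value) {
  if (value === null || typeof value !== 'object') return 0;
  const children = Array.isArray(value) ? value : Object.values(value);
  return 1 + Math.max(0, ...children.map(maxDepth));
}

// Fraction of rows whose key set matches the first row's (1 = fully tabular).
function keyConsistency(rows) {
  if (rows.length === 0) return 1;
  const ref = JSON.stringify(Object.keys(rows[0]).sort());
  const matches = rows.filter(
    (r) => JSON.stringify(Object.keys(r).sort()) === ref
  ).length;
  return matches / rows.length;
}
```

A depth above 3 or a consistency well below 1 are the signals that push the recommendation back toward plain JSON.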
## Recommendation details

`analyzeSavings()` returns:

- `tokens`: `{ jsonTokens, toonTokens, tokenDiff, diffPct }`
- `cost`: `{ perRequest, per1K, per100K, per1M, normalized }`
- `structure`: `{ maxDepth, arrayCount, rowCount, keyConsistency, depthThreshold, exceededBy, verdict }`
- `recommendation`: `{ recommended, reason, suggestions, alternatives }`
- `summary` (optional): one-line CI-friendly string when `conciseSummary: true`
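The multi-scale cost fields are straightforward multiples of the per-request cost. A sketch of the arithmetic, assuming pricing quoted in dollars per million input tokens (the $0.15 rate in the usage note below is a made-up example, and `costScales` is our own name; the package derives real rates from the configured provider/model):

```javascript
// Relate the per-request cost to the per-1K/100K/1M scales.
// pricePer1M: provider's input price in $ per 1,000,000 tokens (assumed unit).
function costScales(tokens, pricePer1M) {
  const perRequest = (tokens / 1_000_000) * pricePer1M;
  return {
    perRequest,
    per1K: perRequest * 1_000,      // cost of 1,000 identical requests
    per100K: perRequest * 100_000,
    per1M: perRequest * 1_000_000,
  };
}
```

For example, 1,000 tokens at a hypothetical $0.15 per 1M tokens costs $0.00015 per request, so $150 across a million requests, which is the scale at which modest percentage savings become meaningful.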
## Acknowledgements

- Built on the official TOON library `@toon-format/toon` (MIT). Thanks to the maintainers for providing the canonical encoder/decoder.

## License

MIT