toon-optimizer · npm · version 0.1.0 · 1 maintainer

toon-optimizer

Smart TOON optimization with cost analysis and integrations — built on the official @toon-format/toon encoder.

Why

Reduce LLM token costs by converting tabular JSON payloads to TOON text when it helps, and keep JSON when it hurts (deep nesting). This package adds:

  • Token counting and multi-scale cost estimates (per-request, per1K, per100K, per1M)
  • Smart recommendations with structure analysis and actionable suggestions
  • Textual TOON output compatible with the official spec
  • Streaming, middleware, and CLI for easy adoption

Install

npm install toon-optimizer

Quick Start

import { toTOON, analyzeSavings, smartConvert } from 'toon-optimizer';

// Convert to textual TOON
const toonText = toTOON(data);

// Analyze token and cost impact (OpenAI example)
const analysis = analyzeSavings(data, {
  cost: { provider: 'openai', model: 'gpt-4o-mini' },
  conciseSummary: true
});
console.log(analysis.summary);

// Smart conversion with automatic fallback
const result = smartConvert(data, { autoDetect: true, threshold: 0.2, fallback: 'json' });
if (result.format === 'toon-text') {
  // Send toon text to your LLM/pipeline
}

CLI

# Analyze (includes summary line with --summary)
node dist/cli/index.cjs analyze ./tests/test2.json --provider openai --model gpt-4o-mini --summary

# Convert to textual TOON
node dist/cli/index.cjs convert ./tests/test2.json --output ./results/test2.toon

Examples (measured)

  • Deeply nested JSON (test2.json)

    • Tokens: JSON 920 vs TOON 1401 (+52.3%)
    • Cost (OpenAI input at $0.15 per 1M tokens): an extra $72.15 per 1M requests
    • Verdict: keep JSON
    • Reason: Deep nesting (depth 12 > threshold 3) adds whitespace/overhead
  • Flat(ish) users example (users.flat.json)

    • Tokens: JSON 53 vs TOON 33 (−37.7%)
    • Cost: saves ~$3.00 per 1M requests
    • Note: Best savings are achieved when your top-level data is an array of objects with consistent keys. Nesting under another object may limit the savings shown here.
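The cost figures above follow directly from the token diffs and the quoted $0.15-per-1M-input-tokens rate. A quick sanity check (plain arithmetic, not library output):

```javascript
// Cost delta per 1M requests = token diff per request × price per token × 1M requests,
// which simplifies to tokenDiff × (price per 1M tokens).
const PRICE_PER_1M_INPUT_TOKENS = 0.15; // OpenAI gpt-4o-mini input rate quoted above

function costDeltaPer1M(jsonTokens, toonTokens) {
  const tokenDiff = toonTokens - jsonTokens; // positive means TOON costs more
  return tokenDiff * PRICE_PER_1M_INPUT_TOKENS;
}

// Deeply nested test2.json: 1401 - 920 = 481 extra tokens per request
console.log(costDeltaPer1M(920, 1401).toFixed(2)); // "72.15" extra per 1M requests

// Flat users example: 33 - 53 = -20 tokens per request
console.log(costDeltaPer1M(53, 33).toFixed(2)); // "-3.00", i.e. $3.00 saved per 1M requests
```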

When TOON helps (and when it doesn’t)

  • Helps: Arrays of objects with consistent keys (tabular data) at modest depth (≤ 3)
  • Avoid: Deeply nested structures (depth > 3), heterogeneous row keys, large unstructured text
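To see why the tabular case wins, here is a toy illustration; this is not the official encoder and not this package's API (use @toon-format/toon for spec-compliant output). With consistent keys, field names can be written once in a header instead of once per row:

```javascript
// Toy tabular encoding in the spirit of TOON; illustration only.
function toyTabular(name, rows) {
  const keys = Object.keys(rows[0]);
  // The "Helps" case assumes every row has the same keys.
  const consistent = rows.every(
    r => Object.keys(r).length === keys.length && keys.every(k => k in r)
  );
  if (!consistent) throw new Error('inconsistent row keys; keep JSON');
  const header = `${name}[${rows.length}]{${keys.join(',')}}:`;
  const body = rows.map(r => '  ' + keys.map(k => String(r[k])).join(','));
  return [header, ...body].join('\n');
}

const users = [
  { id: 1, name: 'Alice', role: 'admin' },
  { id: 2, name: 'Bob', role: 'user' },
];
console.log(toyTabular('users', users));
// Keys appear once in the header, so the text shrinks as row count grows:
console.log(JSON.stringify(users).length, toyTabular('users', users).length);
```

Conversely, deeply nested objects have no repeated key rows to collapse, so the indentation-based layout only adds overhead, which is what the +52.3% result above shows.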

Recommendation details

analyzeSavings() returns:

  • tokens: { jsonTokens, toonTokens, tokenDiff, diffPct }
  • cost: { perRequest, per1K, per100K, per1M, normalized }
  • structure: { maxDepth, arrayCount, rowCount, keyConsistency, depthThreshold, exceededBy, verdict }
  • recommendation: { recommended, reason, suggestions, alternatives }
  • summary (optional): one-line CI-friendly string when conciseSummary: true
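As a sketch of consuming that shape in CI, the snippet below gates a build on the recommendation field. The `analysis` object is hand-built from the fields listed above, using the measured flat-users numbers; it is not real library output, and `ciGate` is a hypothetical helper:

```javascript
// Hand-built example of the documented analyzeSavings() return shape
// (illustrative values taken from the flat users example above).
const analysis = {
  tokens: { jsonTokens: 53, toonTokens: 33, tokenDiff: -20, diffPct: -37.7 },
  cost: { perRequest: -0.000003, per1K: -0.003, per100K: -0.3, per1M: -3.0 },
  structure: { maxDepth: 2, keyConsistency: 1, verdict: 'toon' },
  recommendation: { recommended: 'toon-text', reason: 'tabular, shallow', suggestions: [] },
};

// Hypothetical CI gate: flag the build when the payload format in use
// does not match what the analysis recommends.
function ciGate(analysis, currentFormat) {
  const { recommended, reason } = analysis.recommendation;
  if (recommended !== currentFormat) {
    return `format mismatch: using ${currentFormat}, recommended ${recommended} (${reason})`;
  }
  return null; // no action needed
}

console.log(ciGate(analysis, 'json'));      // mismatch message
console.log(ciGate(analysis, 'toon-text')); // null
```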

Acknowledgements

  • Built on the official TOON library: @toon-format/toon (MIT). Thanks to the maintainers for providing the canonical encoder/decoder.

License

MIT

Keywords

TOON

Package last updated on 11 Nov 2025