Package: better-ccusage (npm) — latest version 1.2.9 — Enhanced usage analysis tool for Claude Code with multi-provider support

better-ccusage


Analyze your Claude Code or Droid token usage and costs from local JSONL files with multi-provider support — incredibly fast and informative!

About better-ccusage

better-ccusage is a fork of the original ccusage project that addresses a key limitation: while ccusage focuses exclusively on Claude Code usage with Anthropic models, better-ccusage adds support for external providers that route through Claude Code — such as Zai, Dashscope, and Moonshot — and their models, including the GLM series from Zai, kat-coder from Kwaipilot, Kimi from Moonshot, MiniMax, and Qwen-Max, alongside Anthropic models like sonnet-4 and sonnet-4.5.

Why the Fork?

The original ccusage project is designed specifically for Anthropic's Claude Code and doesn't account for:

  • Providers such as Zai that use Claude Code infrastructure with their own models
  • Models from other AI providers, such as the GLM family (including GLM-5-Turbo), kat-coder, MiniMax, and Moonshot models
  • Multi-provider environments where organizations use different AI services through Claude Code

better-ccusage maintains full compatibility with ccusage while adding comprehensive support for these additional providers and models.

better-ccusage Family

📊 better-ccusage - Enhanced Claude Code/Droid Usage Analyzer with Multi-Provider Support

The main CLI tool for analyzing Claude Code/Droid usage from local JSONL files, with support for multiple AI providers including Anthropic, Zai (all GLM models, including GLM-5-Turbo), Moonshot, and kat-coder. Track daily, monthly, and session-based usage with beautiful tables and live monitoring.

🤖 @better-ccusage/codex - OpenAI Codex Usage Analyzer

Companion tool for analyzing OpenAI Codex usage. Same powerful features as better-ccusage but tailored for Codex users, including GPT-5 support and 1M token context windows.

🔌 @better-ccusage/mcp - MCP Server Integration

Model Context Protocol server that exposes better-ccusage data to Claude Desktop and other MCP-compatible tools. Enable real-time usage tracking directly in your AI workflows.

Installation

Thanks to better-ccusage's incredibly small bundle size, you can run it directly without installation:

# Recommended - always include @latest to ensure you get the newest version
npx better-ccusage@latest
bunx better-ccusage@latest

# Alternative package runners
pnpm dlx better-ccusage
pnpx better-ccusage

# Using deno (with security flags)
deno run -E -R=$HOME/.claude/projects/ -S=homedir -N='raw.githubusercontent.com:443' npm:better-ccusage@latest

💡 Important: We strongly recommend using @latest suffix with npx (e.g., npx better-ccusage@latest) to ensure you're running the most recent version with the latest features and bug fixes.

Codex CLI

Analyze OpenAI Codex usage with our companion tool @better-ccusage/codex:

# Recommended - always include @latest
npx @better-ccusage/codex@latest
bunx @better-ccusage/codex@latest  # ⚠️ MUST include @latest with bunx

# Alternative package runners
pnpm dlx @better-ccusage/codex
pnpx @better-ccusage/codex

# Using deno (with security flags)
deno run -E -R=$HOME/.codex/ -S=homedir -N='raw.githubusercontent.com:443' npm:@better-ccusage/codex@latest

⚠️ Critical for bunx users: Bun 1.2.x's bunx prioritizes binaries matching the package name suffix when given a scoped package. For @better-ccusage/codex, it looks for a codex binary in PATH first. If you have an existing codex command installed (e.g., GitHub Copilot's codex), that will be executed instead. Always use bunx @better-ccusage/codex@latest with the version tag to force bunx to fetch and run the correct package.
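To see whether the PATH-shadowing described above affects your machine, you can check for a pre-existing codex binary before running the scoped package (a hedged sketch; the message text is illustrative):

```shell
# If this prints a path, `bunx @better-ccusage/codex` (without @latest) may
# run that binary instead of the npm package.
out=$(command -v codex || echo "no conflicting codex binary on PATH")
echo "$out"
```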

MCP Server

Integrate better-ccusage with Claude Desktop using @better-ccusage/mcp:

# Start MCP server for Claude Desktop integration
npx @better-ccusage/mcp@latest --type http --port 8080

This enables real-time usage tracking and analysis directly within Claude Desktop conversations.
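For stdio-based setups, Claude Desktop discovers MCP servers through its claude_desktop_config.json. A hedged sketch of such an entry — this assumes the server speaks stdio when --type http is omitted, which you should verify against the package docs:

```json
{
  "mcpServers": {
    "better-ccusage": {
      "command": "npx",
      "args": ["@better-ccusage/mcp@latest"]
    }
  }
}
```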

Usage

# Basic usage
npx better-ccusage          # Show daily report (default)
npx better-ccusage daily    # Daily token usage and costs
npx better-ccusage monthly  # Monthly aggregated report
npx better-ccusage session  # Usage by conversation session
npx better-ccusage blocks   # 5-hour billing windows
npx better-ccusage statusline  # Compact status line for hooks (Beta)

# Live monitoring
npx better-ccusage blocks --live  # Real-time usage dashboard

# Filters and options
npx better-ccusage daily --since 20250525 --until 20250530
npx better-ccusage daily --json  # JSON output
npx better-ccusage daily --breakdown  # Per-model cost breakdown
npx better-ccusage daily --timezone UTC  # Use UTC timezone
npx better-ccusage daily --locale ja-JP  # Use Japanese locale for date/time formatting

# Project analysis
npx better-ccusage daily --instances  # Group by project/instance
npx better-ccusage daily --project myproject  # Filter to specific project
npx better-ccusage daily --instances --project myproject --json  # Combined usage

# Compact mode for screenshots/sharing
npx better-ccusage --compact  # Force compact table mode
npx better-ccusage monthly --compact  # Compact monthly report

Multi-Provider Support

better-ccusage extends the original ccusage functionality with automatic support for multiple AI providers:

🔄 Automatic Provider Detection

  • Zero Configuration Required: New providers work automatically without code changes
  • Intelligent Model Resolution: Finds models with or without provider prefixes
  • Fallback Matching: Three-tier matching (exact → suffix → fuzzy) ensures models are always found

How It Works:

  • Direct match: "kimi-for-coding"
  • Provider prefix match: "moonshot/kimi-for-coding"
  • Automatic fallback prevents $0.00 costs from unfound models
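The three tiers above can be sketched in shell. This is an illustrative toy, not the tool's actual implementation: the real resolver matches against its internal pricing table, and PRICING_KEYS here is a made-up stand-in for that table's keys.

```shell
# Toy three-tier model resolver: exact -> suffix -> fuzzy (substring).
resolve_model() {
  query="$1"
  # Tier 1: exact match
  for key in $PRICING_KEYS; do
    [ "$key" = "$query" ] && { echo "$key"; return; }
  done
  # Tier 2: suffix match ("moonshot/kimi-for-coding" -> "kimi-for-coding")
  for key in $PRICING_KEYS; do
    case "$query" in */"$key") echo "$key"; return ;; esac
  done
  # Tier 3: fuzzy (substring) match as a last resort
  for key in $PRICING_KEYS; do
    case "$query" in *"$key"*) echo "$key"; return ;; esac
  done
  return 1
}

PRICING_KEYS="kimi-for-coding glm-5 claude-sonnet-4-20250514"
resolve_model "moonshot/kimi-for-coding"   # suffix match -> kimi-for-coding
```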

🚀 Supported AI Providers & Models

Moonshot AI (kimi-* models):

  • kimi-k2-0905-preview, kimi-k2-0711-preview, kimi-k2-turbo-preview
  • kimi-k2-thinking, kimi-k2-thinking-turbo, kimi-for-coding

MiniMax:

  • MiniMax-M2

All GLM Models

Anthropic (Claude models):

  • All Claude models including claude-sonnet-4-20250514, claude-sonnet-4-5-20250929, etc.

Zai Provider:

  • All Zai-specific model variants like glm-5

And More:

  • kat-coder, deepseek, dashscope, streamlake, etc.

🌐 Provider-Aware Analytics

  • Automatic provider detection from usage data
  • Separate reporting and aggregation by provider
  • Unified interface for multi-provider environments
  • Accurate cost calculation for each provider's pricing structure
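Per-provider cost calculation boils down to multiplying each provider's token counts by that provider's per-million-token rate and summing. A hedged awk sketch — the prices and token counts below are illustrative placeholders, not real provider rates:

```shell
# cost = tokens / 1,000,000 * price-per-1M-tokens, aggregated per provider.
result=$(printf 'anthropic 1000000\nmoonshot 2000000\n' |
  awk 'BEGIN {
         price["anthropic"] = 3.00  # USD per 1M tokens (placeholder)
         price["moonshot"]  = 0.60  # placeholder
       }
       { cost[$1] += $2 / 1e6 * price[$1] }
       END { printf "anthropic %.4f\nmoonshot %.4f", cost["anthropic"], cost["moonshot"] }')
echo "$result"
```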

Features

  • 📊 Daily Report: View token usage and costs aggregated by date
  • 📅 Monthly Report: View token usage and costs aggregated by month
  • 💬 Session Report: View usage grouped by conversation sessions
  • 5-Hour Blocks Report: Track usage within Claude's billing windows with active block monitoring
  • 📈 Live Monitoring: Real-time dashboard showing active session progress, token burn rate, and cost projections with blocks --live
  • 🚀 Statusline Integration: Compact usage display for Claude Code status bar hooks (Beta)
  • 🤖 Multi-Provider Model Tracking: Track models from Anthropic, Zai, Dashscope and other providers
  • 📊 Model Breakdown: View per-model cost breakdown with --breakdown flag
  • 📅 Date Filtering: Filter reports by date range using --since and --until
  • 📁 Custom Path: Support for custom Claude data directory locations
  • 🎨 Beautiful Output: Colorful table-formatted display with automatic responsive layout
  • 📱 Smart Tables: Automatic compact mode for narrow terminals (< 100 characters) with essential columns
  • 📸 Compact Mode: Use --compact flag to force compact table layout, perfect for screenshots and sharing
  • 📋 Enhanced Model Display: Model names shown as bulleted lists for better readability
  • 📄 JSON Output: Export data in structured JSON format with --json
  • 💰 Cost Tracking: Shows costs in USD for each day/month/session
  • 🔄 Cache Token Support: Tracks and displays cache creation and cache read tokens separately
  • 🔌 MCP Integration: Built-in Model Context Protocol server for integration with other tools
  • 🏗️ Multi-Instance Support: Group usage by project with --instances flag and filter by specific projects
  • 🌍 Timezone Support: Configure timezone for date grouping with --timezone option
  • 🌐 Locale Support: Customize date/time formatting with --locale option (e.g., en-US, ja-JP, de-DE)
  • ⚙️ Configuration Files: Set defaults with JSON configuration files, complete with IDE autocomplete and validation
  • 🚀 Ultra-Small Bundle: Unlike other CLI tools, we pay extreme attention to bundle size - incredibly small even without minification!
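The statusline subcommand is designed to be wired into Claude Code's status-bar hook. A hedged sketch of what that wiring might look like in Claude Code's settings.json — the statusLine shape follows Claude Code's documented settings format, but verify it against your Claude Code version:

```json
{
  "statusLine": {
    "type": "command",
    "command": "npx better-ccusage@latest statusline"
  }
}
```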

Comparison with ccusage

Feature                        ccusage   better-ccusage
Anthropic Models                  ✅          ✅
Moonshot (kimi) Models            ❌          ✅
MiniMax Models                    ❌          ✅
GLM* Models                       ❌          ✅
Zai Provider                      ❌          ✅
kat-coder                         ❌          ✅
Automatic Provider Detection      ❌          ✅
Multi-Provider Support            ❌          ✅
Cost Calculation by Provider      ❌          ✅
Original ccusage Features         ✅          ✅
Show prompt usage for Coding      ❌          ✅
Droid usage                       ❌          ✅


License

MIT © @cobra91

Package last updated on 05 Apr 2026
