Cursor Usage Tracker

Open-source cost monitoring and optimization for Cursor Enterprise teams. Track AI spend per developer, spot unnecessary expensive model usage, detect anomalies automatically, and get Slack alerts before the invoice surprises you. Self-host with Docker or let us run it for you.

Website — Hosted & Managed · Quick Start · Features

CI CodeQL OpenSSF Scorecard OpenSSF Best Practices License: MIT TypeScript Docker

AI Spend Is a Blind Spot

Engineering costs used to be two things: headcount and cloud infrastructure. You had tools for both. Then AI coding assistants showed up, and suddenly there's a third cost center that nobody has good tooling for.

A single developer on Cursor can burn through hundreds of dollars a day just by switching to an expensive model or letting an agent loop run wild. Developers often don't know which models cost more: one of our team members used opus-max for weeks thinking it was the cheaper option. Now multiply that confusion by 50, 100, or 500 developers. The bill gets big fast, and there's nothing like Datadog or CloudHealth for this category yet.

Cursor's admin dashboard shows you the raw numbers, but it won't tell you when something is off. No anomaly detection. No alerts. No incident tracking. You find out about cost spikes when the invoice lands, weeks after the damage is done.

I built cursor-usage-tracker to fix that. It sits on top of Cursor's Enterprise APIs and gives engineering managers, finance, and platform teams actual visibility into AI spend before it becomes a surprise.

What This Dashboard Answers

  • Cost monitoring - Are we spending too much? Who's driving it? Why?
  • Cost optimization - Who's using expensive models when cheaper ones would do? How much would switching save?
  • Adoption tracking - Is everyone using the tool we're paying for?
  • Usage understanding - How is each person working with AI?

What It Does

Your company has 50+ developers on Cursor. Do you know who's spending $200/day on Claude Opus while everyone else uses Sonnet?

You're about to find out.

Watch the full demo (90 seconds):

It connects to Cursor's Enterprise APIs, collects usage data, and automatically detects anomalies across three layers. When something looks off, you get a Slack message or email within the hour, not next month.

Developer uses Cursor → API collects data hourly → Engine detects anomaly → You get a Slack alert

How It Works

| What happens | Example |
| --- | --- |
| A developer exceeds the spend limit | Bob spent $82 this cycle (limit: $50) → Slack alert |
| Someone's daily spend spikes | Alice: daily spend spiked to $214 (4.2x her 7-day avg of $51) → Slack alert |
| A user's cycle spend is far above the team | Bob: cycle spend $957 is 5.1x the team median ($188) → Slack alert |
| A user is statistically far from the team | Bob: daily spend $214 is 3.2σ above team mean ($42) → Slack alert |
| Someone switches to an expensive model | Bob: cost/request spiked to $1.45 (4.2x his avg of $0.34), using opus-max → Slack alert |
| A developer uses an expensive model when others don't | Bob averaged $4.20/req on claude-opus-max (team median: $0.52 on sonnet) → Model cost comparison table |

Every alert includes who, what model, how much, and a link to their dashboard page so you can investigate immediately.

Quiz Time: Can you guess which Cursor model costs the most? 🤔

Your developers see this list every day. Can you spot the expensive one?

  • claude-4.6-opus-high
  • composer-1.5
  • claude-4.6-opus-max-thinking-fast
  • claude-4.5-opus-high
  • claude-4.6-opus-max

The answer:

| Model | Output $/1M tokens | Relative cost |
| --- | --- | --- |
| composer-1.5 | $17.50 | 1x (cheapest) |
| claude-4.5-opus-high | $25 | 1.4x |
| claude-4.6-opus-high | $25 | 1.4x |
| claude-4.6-opus-max | $25 + 20% surcharge | ~1.7x |
| claude-4.6-opus-max-thinking-fast | $150 + 20% surcharge | ~10x 🔥 |

"Fast" sounds cheap. It's the most expensive model Cursor offers.
Now imagine 200 developers picking from this dropdown without knowing.
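
The "~10x" column falls out of simple arithmetic on the prices above. A minimal sketch (the helper name and structure are ours, not Cursor's):

```typescript
// Relative output-token cost vs. the cheapest model, including the
// percentage surcharge applied to "max" models. Prices are $ per 1M
// output tokens, taken from the comparison table above.
function relativeCost(pricePer1M: number, surcharge: number, baseline: number): number {
  return (pricePer1M * (1 + surcharge)) / baseline;
}

const baseline = 17.5; // composer-1.5, the cheapest option

console.log(relativeCost(25, 0, baseline).toFixed(1));    // opus-high → "1.4"
console.log(relativeCost(25, 0.2, baseline).toFixed(1));  // opus-max → "1.7"
console.log(relativeCost(150, 0.2, baseline).toFixed(1)); // max-thinking-fast → "10.3"
```

At $150/1M plus the 20% surcharge, max-thinking-fast lands at roughly 10.3x the cost of composer-1.5 per output token.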

Features

Three-Layer Anomaly Detection

| Layer | Method | What it catches |
| --- | --- | --- |
| Thresholds | Static limits | Optional hard caps on spend, requests, or tokens (disabled by default) |
| Z-Score | Statistical | User daily spend 2.5+ standard deviations above team mean (active users only) |
| Trends | Spend-based | Daily spend spikes vs personal average, cycle spend outliers vs team median |
| Expensive Model | Cost/request | User's $/request jumps vs their own history (catches model switches like max-thinking) |
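
The Z-Score layer can be sketched as a pure function. This is an illustrative sketch of the statistical check described above, not the project's actual detection module; the type and function names are ours:

```typescript
// Flag users whose daily spend is more than `multiplier` standard
// deviations above the team mean (the Z-Score layer, default 2.5).
interface DailySpend {
  email: string;
  spend: number; // dollars spent today
}

function zScoreAnomalies(team: DailySpend[], multiplier = 2.5): DailySpend[] {
  const spends = team.map((u) => u.spend);
  const mean = spends.reduce((a, b) => a + b, 0) / spends.length;
  const variance = spends.reduce((a, b) => a + (b - mean) ** 2, 0) / spends.length;
  const stddev = Math.sqrt(variance);
  if (stddev === 0) return []; // everyone spent the same: nothing to flag
  return team.filter((u) => (u.spend - mean) / stddev > multiplier);
}
```

With nine developers at $10/day and one at $200/day, only the outlier comes back flagged.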

Incident Lifecycle (MTTD / MTTI / MTTR)

Every anomaly becomes a tracked incident with full lifecycle metrics:

Anomaly Detected ──→ Alert Sent ──→ Acknowledged ──→ Resolved
       │                  │               │              │
       └──── MTTD ────────┘               │              │
                                          └── MTTI ──────┘
       └──────────────────── MTTR ──────────────────────────┘
  • MTTD (Mean Time to Detect): how fast the system catches it
  • MTTI (Mean Time to Identify): how fast a human acknowledges it
  • MTTR (Mean Time to Resolve): how fast it gets fixed
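
One plausible mapping of those timestamps to the three metrics, as a sketch (the `Incident` shape and field names are illustrative, not the project's real schema):

```typescript
// Mean lifecycle metrics over a batch of incidents.
// Timestamps are milliseconds since the epoch.
interface Incident {
  detectedAt: number;     // engine flagged the anomaly
  alertSentAt: number;    // Slack/email alert went out
  acknowledgedAt: number; // a human acknowledged it
  resolvedAt: number;     // incident marked resolved
}

const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;

function lifecycleMetrics(incidents: Incident[]) {
  return {
    mttd: mean(incidents.map((i) => i.alertSentAt - i.detectedAt)),
    mtti: mean(incidents.map((i) => i.acknowledgedAt - i.alertSentAt)),
    mttr: mean(incidents.map((i) => i.resolvedAt - i.detectedAt)),
  };
}
```

An incident detected at t=0, alerted at 1 minute, acknowledged at 3 minutes, and resolved at 10 minutes contributes 1 min to MTTD, 2 min to MTTI, and 10 min to MTTR.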

Proactive Slack Notifications

You don't need to remember to check the dashboard. The system comes to you.

| Notification | When it fires | What you learn |
| --- | --- | --- |
| Anomaly alerts | Within the hour | "Alice's daily spend spiked to $214 (4.2x her 7-day avg)" |
| Plan exhaustion | Daily, when users exceed plan | "65/151 active users have exceeded their included plan this cycle" |
| Cycle summary | 3 days before billing cycle ends | Total spend, unused seats, top spenders, adoption breakdown, cycle-over-cycle trend |

Anomaly alerts include severity, user, model, value vs threshold, and a direct link to the user's dashboard page. Cycle summaries tell you how many seats are going unused and who's driving cost, so you can act before the invoice lands.

Also supports email alerts via Resend (one API key, no SMTP config).

Web Dashboard

| Page | What you see |
| --- | --- |
| Team Overview | Stat cards, spend by user, daily spend trend, spend breakdown, members table with search/sort, group filter dropdown, billing cycle progress, time range picker |
| Insights | DAU chart, model adoption trends, model efficiency rankings (cost/precision), repository code volume (lines, % of total, AI %), MCP tool usage, file extensions, client versions |
| User Drilldown | Per-user token timeline, model breakdown, feature usage, repository breakdown, activity profile, anomaly history |
| Anomalies | Open incidents, MTTD/MTTI/MTTR metrics, full anomaly timeline |
| Settings | Detection thresholds, expensive model alerts, billing group management, HiBob CSV import, group export/import |

For a detailed breakdown of every section, metric, badge, and chart, see FEATURES.md.

Deploy

One-click deploy

Deploy your own instance in minutes. You'll need a Cursor Enterprise plan and an Admin API key.

Deploy to Render

Railway and Docker options below. Want help setting this up for your team? Deployment, threshold tuning, first spend analysis, ongoing support. Let's talk.

Quick Start

Prerequisites

| What | Where to get it |
| --- | --- |
| Cursor Enterprise plan | Required for API access |
| Admin API key | Cursor dashboard → Settings → Advanced → Admin API Keys |
| Node.js 18+ | nodejs.org |

1. Set up

Option A: One command

npx cursor-usage-tracker my-tracker
cd my-tracker

Option B: Manual clone

git clone https://github.com/ofershap/cursor-usage-tracker.git
cd cursor-usage-tracker
npm install

2. Configure

cp .env.example .env

Edit .env:

# Required
CURSOR_ADMIN_API_KEY=your_admin_api_key

# Alerting — Slack (at least one alerting channel recommended)
SLACK_BOT_TOKEN=xoxb-your-bot-token          # bot token with chat:write scope
SLACK_CHANNEL_ID=C0123456789                  # channel to post alerts to

# Dashboard URL (used in alert links)
DASHBOARD_URL=http://localhost:3000

# Optional
CRON_SECRET=your_secret_here                  # protects the cron endpoint

# Email alerts via Resend (optional)
RESEND_API_KEY=re_xxxxxxxxxxxx
ALERT_EMAIL_TO=team-lead@company.com

3. Start the dashboard

npm run dev
# Open http://localhost:3000

4. Collect your first data

npm run collect

You should see:

[collect] Done in 4.2s
  Members: 87
  Daily usage: 30
  Spending: 87
  Usage events: 12,847

5. Run anomaly detection

After collecting data, run detection separately:

npm run detect

This runs the stored data through all three detection layers and sends alerts for anything it finds.

npm run collect only fetches data. npm run detect only runs detection. The cron endpoint (POST /api/cron) does both in one call.

6. Set up recurring collection

Trigger the cron endpoint hourly (via crontab, GitHub Actions, or any scheduler):

curl -X POST http://localhost:3000/api/cron -H "x-cron-secret: YOUR_SECRET"

This collects data, runs anomaly detection, and sends alerts in one call.
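
If your scheduler runs Node rather than shell, the same trigger can be scripted. A sketch mirroring the curl command above (the helper names are ours; only the endpoint, method, and `x-cron-secret` header come from the project):

```typescript
// Build and fire the hourly cron trigger against a running instance.
function buildCronRequest(baseUrl: string, secret: string) {
  return {
    url: new URL("/api/cron", baseUrl).toString(),
    init: { method: "POST", headers: { "x-cron-secret": secret } },
  };
}

async function triggerCron(baseUrl: string, secret: string): Promise<number> {
  const { url, init } = buildCronRequest(baseUrl, secret);
  const res = await fetch(url, init); // global fetch is available on Node 18+
  return res.status; // let your scheduler alert on non-200s
}
```

Drop `triggerCron(process.env.DASHBOARD_URL!, process.env.CRON_SECRET!)` into any hourly job runner.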

Production Deployment

Docker (self-hosted)

cp .env.example .env   # configure your keys
docker compose up -d
# Dashboard at http://localhost:3000

The Docker image uses multi-stage builds for a minimal production image. Data persists in a Docker volume.

Docker Compose details
services:
  tracker:
    build: .
    ports:
      - "3000:3000"
    env_file: .env
    volumes:
      - tracker-data:/app/data
volumes:
  tracker-data:
Fly.io

fly launch --copy-config          # creates the app from fly.toml
fly volumes create tracker_data --region ams --size 1
fly secrets set CURSOR_ADMIN_API_KEY=your_key CRON_SECRET=your_secret
fly deploy
# Dashboard at https://your-app.fly.dev

Set up hourly collection by adding DASHBOARD_URL and CRON_SECRET as GitHub Actions secrets. The included .github/workflows/cron.yml workflow triggers /api/cron every hour.

Other cloud platforms

Any platform that supports Docker + persistent volumes works:

  • Render - use the deploy button above, or render.yaml in this repo
  • Railway - create a project from this repo, attach a volume at /app/data

Serverless platforms (Vercel, AWS Lambda, etc.) require replacing SQLite with an external database. The data layer is abstracted behind src/lib/data/. Swap the implementation to use Postgres, Supabase, PlanetScale, or any other database. See Architecture for details.

Architecture

flowchart TB
    APIs["Cursor Enterprise APIs\n/teams/members · /teams/spend · /teams/daily-usage-data\n/teams/filtered-usage-events · /teams/groups · /analytics/team/*"]
    C["Collector (hourly)"]
    DB[("Database\n(SQLite default, swappable)")]
    D["Detection Engine, 3 layers"]
    AL["Alerts: Slack / Email"]
    DA["Dashboard: Next.js"]

    APIs --> C --> DB --> D
    DB --> DA
    D --> AL

The data layer is abstracted behind src/lib/data/. SQLite is the default (zero-config), but you can swap the implementation for Postgres, Supabase, or any database that fits your infrastructure.
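
A swap boils down to implementing the same contract against a different backend. The interface below is a hypothetical sketch of what such a contract looks like, not the actual types in src/lib/data/:

```typescript
// Illustrative data-layer contract. Any backend (SQLite, Postgres,
// Supabase) just needs to satisfy the same interface.
interface SpendRecord {
  email: string;
  date: string; // YYYY-MM-DD
  spendCents: number;
}

interface DataStore {
  upsertSpend(rec: SpendRecord): void;
  spendByUser(email: string): SpendRecord[];
}

// Minimal in-memory implementation, handy for unit tests.
class MemoryStore implements DataStore {
  private rows = new Map<string, SpendRecord>();
  upsertSpend(rec: SpendRecord): void {
    // one row per user per day; later writes overwrite earlier ones
    this.rows.set(`${rec.email}:${rec.date}`, rec);
  }
  spendByUser(email: string): SpendRecord[] {
    return [...this.rows.values()].filter((r) => r.email === email);
  }
}
```

The collector and detection engine would only ever talk to `DataStore`, which is what makes the backend swappable.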

Configuration

All detection thresholds are configurable via the Settings page or the API:

| Setting | Default | What it does |
| --- | --- | --- |
| Max spend per cycle | 0 (off) | Alert when a user exceeds this in a billing cycle |
| Max requests per day | 0 (off) | Alert on excessive daily request count |
| Max tokens per day | 0 (off) | Alert on excessive daily token consumption |
| Z-score multiplier | 2.5 | How many standard deviations above mean to flag (spend + reqs) |
| Z-score window | 7 days | Historical window for statistical comparison |
| Spend spike multiplier | 5.0x | Alert when today's spend > N× user's personal daily average |
| Spend spike lookback | 7 days | How many days of history to compare against |
| Cycle outlier multiplier | 10.0x | Alert when cycle spend > N× team median (active users only) |
| Cost/req spike multiplier | 3.0x | Alert when today's $/request > N× user's historical average |
| Cost/req min daily spend | $20 | Skip cost/req alerts for users below this daily spend |

Settings

The Settings page (/settings) is where you configure detection behavior and manage your team structure. Everything is persisted locally and takes effect on the next detection run.

Detection Thresholds

All anomaly detection parameters listed in Configuration above are editable from the Settings page. Static thresholds, z-score sensitivity, spend spike multipliers, the expensive model detector. Set any value to 0 to disable that check.
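
The "0 disables it" convention for static thresholds is simple enough to sketch (function name is ours, for illustration):

```typescript
// Static threshold check with "0 (or negative) means off" semantics,
// as used by the spend/request/token caps described above.
function exceedsThreshold(value: number, limit: number): boolean {
  if (limit <= 0) return false; // check disabled
  return value > limit;
}
```

So a $50 cycle cap flags an $82 spend, while leaving the cap at its default of 0 flags nothing no matter how high spend goes.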

Billing Groups

Billing groups let you organize team members by department, team, or any structure that fits your org. The Team Overview page has a group filter dropdown. Select a group to scope all stats, charts, and the members table to that subset.

From the Settings page you can:

  • View all groups with member counts and per-group spend
  • Rename groups to match your org structure (displayed as Parent > Team)
  • Reassign individual members between groups
  • Create new groups manually
  • Search across all members to find and reassign anyone
  • Export your current group mapping as a CSV backup
  • Import a previously exported CSV to restore or transfer mappings between environments

HiBob Import

For teams using HiBob as their HR platform, the Settings page includes a dedicated Import from HiBob flow:

  • Export a CSV from HiBob's People Directory (include Email, Department, Group, and Team columns)
  • Upload it to the import modal
  • Review the preview: which members move, which groups get created, who wasn't matched
  • Selectively approve or reject individual changes before applying

The import builds a Group > Team hierarchy automatically. Small teams (fewer than 3 members) are merged into their parent group. Members not found in the CSV keep their current assignment.
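
The small-team merge rule can be sketched as a pure function. This is an illustrative reconstruction of the behavior described above, with made-up type names rather than the importer's real code:

```typescript
// Build each member's effective group: "Group > Team" normally, but teams
// with fewer than `minTeamSize` members collapse into their parent group.
interface Assignment {
  email: string;
  group: string;
  team: string;
}

function effectiveGroups(rows: Assignment[], minTeamSize = 3): Map<string, string> {
  const sizes = new Map<string, number>();
  for (const r of rows) {
    const key = `${r.group} > ${r.team}`;
    sizes.set(key, (sizes.get(key) ?? 0) + 1);
  }
  const out = new Map<string, string>();
  for (const r of rows) {
    const key = `${r.group} > ${r.team}`;
    out.set(r.email, (sizes.get(key) ?? 0) >= minTeamSize ? key : r.group);
  }
  return out;
}
```

A three-person "Eng > Platform" team keeps its own group, while a lone "Eng > Tools" member is folded into "Eng".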

The HiBob import updates your local billing groups only. It does not push changes back to HiBob or to Cursor's billing API.

Authentication

Authentication is fully optional. When no auth environment variables are set, the dashboard is open (the default behavior). Setting AUTH_SECRET enables Google OAuth sign-in.

Setup

  • Create a Google OAuth app with redirect URI:

    • Local: http://localhost:3000/api/auth/callback/google
    • Production: https://your-domain.com/api/auth/callback/google
  • Add to your .env:

AUTH_SECRET=$(openssl rand -base64 32)       # encryption key for sessions
AUTH_GOOGLE_ID=your-client-id.apps.google... # Google OAuth client ID
AUTH_GOOGLE_SECRET=GOCSPX-...               # Google OAuth client secret
AUTH_TRUST_HOST=true                         # required behind a reverse proxy
AUTH_URL=https://your-domain.com             # public URL (auto-detected locally)
  • Optionally restrict access by domain or specific emails:
AUTH_ALLOWED_DOMAIN=yourcompany.com          # only @yourcompany.com emails
AUTH_ALLOWED_EMAILS=admin@example.com,cto@example.com  # or specific emails

When both are set, either match grants access. When neither is set, any Google account can sign in.
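
That access rule can be expressed in a few lines. A sketch of the logic as described above (the function is ours; the two env variables are the project's):

```typescript
// Allowlist check: a domain match OR an explicit email match grants access.
// With neither AUTH_ALLOWED_DOMAIN nor AUTH_ALLOWED_EMAILS set, any
// signed-in Google account is allowed.
function isAllowed(email: string, allowedDomain?: string, allowedEmails?: string[]): boolean {
  const addr = email.toLowerCase();
  if (!allowedDomain && (!allowedEmails || allowedEmails.length === 0)) return true;
  if (allowedDomain && addr.endsWith(`@${allowedDomain.toLowerCase()}`)) return true;
  if (allowedEmails?.some((e) => e.toLowerCase() === addr)) return true;
  return false;
}
```

So an outside contractor on a personal address can be granted access via the email list even when a company domain restriction is in place.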

How It Works

  • Sessions use encrypted JWT cookies, no database tables needed
  • The /api/cron endpoint is excluded from auth (it uses its own CRON_SECRET)
  • Sign-in page appears automatically when auth is enabled
  • User avatar and sign-out menu appear in the nav bar

API Endpoints

| Endpoint | Method | Description |
| --- | --- | --- |
| /api/cron | POST | Collect + detect + alert (use with scheduler) |
| /api/stats | GET | Dashboard statistics (?days=7) |
| /api/analytics | GET | Analytics data: DAU, models, MCP, etc. (?days=30) |
| /api/team-spend | GET | Daily team spend breakdown |
| /api/model-costs | GET | Model cost breakdown by users and spend |
| /api/groups | GET | Billing groups with members and spend |
| /api/groups | PATCH | Rename group, assign member, or create group |
| /api/groups/import | POST | HiBob CSV import (preview + apply) |
| /api/anomalies | GET | Anomaly timeline (?days=30) |
| /api/users/[email] | GET | Per-user statistics (?days=30) |
| /api/incidents/[id] | PATCH | Acknowledge or resolve incident |
| /api/settings | GET/PUT | Detection configuration |

Tech Stack

| Component | Technology |
| --- | --- |
| Framework | Next.js App Router |
| Language | TypeScript strict mode |
| Database | SQLite via better-sqlite3 (swappable) |
| Charts | Recharts |
| Styling | Tailwind CSS |
| Testing | Vitest |
| Deployment | Docker multi-stage build |

Development

npm run dev          # Start dev server
npm run collect      # Manual data collection
npm run detect       # Manual anomaly detection + alerting
npm run typecheck    # Type checking
npm test             # Run tests
npm run lint         # Lint + format check

Cursor API Requirements

Requires a Cursor Enterprise plan. The tool uses these endpoints:

| Endpoint | Auth | What it provides |
| --- | --- | --- |
| GET /teams/members | Admin API key | Team member list |
| POST /teams/spend | Admin API key | Per-user spending data |
| POST /teams/daily-usage-data | Admin API key | Daily usage metrics |
| POST /teams/filtered-usage-events | Admin API key | Detailed usage events with model/token info |
| POST /teams/groups | Admin API key | Billing groups + cycle dates |
| GET /analytics/team/* | Analytics API key | DAU, model usage, MCP, tabs, etc. (optional) |

Rate limit: 20 requests/minute (Admin API), 100 requests/minute (Analytics API). The collector handles rate limiting with automatic retry.
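
One way to stay under a per-minute cap is sliding-window pacing. This is a generic sketch of the idea, not the collector's actual implementation (which also retries on rate-limit responses):

```typescript
// Given the send times (ms) of recent requests, compute how long to wait
// before the next request keeps us under `limit` requests per minute.
function msUntilNextSlot(sent: number[], limit: number, now: number): number {
  const windowStart = now - 60_000;
  const recent = sent.filter((t) => t > windowStart).sort((a, b) => a - b);
  if (recent.length < limit) return 0; // a slot is free right now
  // Wait until the oldest in-window request ages out of the window.
  return recent[recent.length - limit] + 60_000 - now;
}
```

With the Admin API's 20 req/min limit, a collector that has already fired 20 requests this minute sleeps for whatever `msUntilNextSlot` returns before the next call.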

Security

This project handles sensitive usage and spending data, so security matters here more than most.

  • Vulnerability reporting: See SECURITY.md for the disclosure policy. Report vulnerabilities privately via GitHub Security Advisories, not public issues.
  • Automated scanning: Every push and PR goes through CodeQL (SQL injection, XSS, CSRF, etc.) and Dependabot for dependency vulnerabilities.
  • OpenSSF Scorecard: Continuously evaluated against OpenSSF Scorecard security benchmarks.
  • OpenSSF Best Practices: Passing badge earned.
  • Data stays yours: Everything is stored in your own infrastructure. No external services, no telemetry, no data leaving your network.
  • Small dependency tree: Fewer dependencies = smaller attack surface.
  • Signed releases: Automated via semantic-release with GitHub-verified provenance.

Contributing

See CONTRIBUTING.md for setup and guidelines. Bug reports, feature requests, docs improvements, and code are all welcome. Use conventional commits and make sure CI is green before opening a PR.

Code of Conduct

This project uses the Contributor Covenant Code of Conduct.

Author

Made by ofershap

LinkedIn GitHub

License

MIT © Ofer Shapira
