
bunqueue

High-performance job queue for Bun & AI agents. SQLite persistence, cron scheduling, priorities, retries, DLQ, webhooks, native MCP server. Zero external dependencies.

Source: npm
Version: 2.6.113 (latest)
Weekly downloads: 3.1K (-52.07%)
Maintainers: 1

bunqueue


High-performance job queue for Bun. Built for AI agents and automation.
Zero external dependencies. MCP-native. TypeScript-first.

Documentation · Benchmarks · npm

bunqueue Dashboard
A visual interface for managing queues, jobs, and workers, with real-time monitoring. Currently in beta.

https://github.com/user-attachments/assets/e8a8d38e-b4a6-4dc8-8360-876c0f24d116

Want early access? Reach out at egeominotti@gmail.com

Why bunqueue?

Library    Requires     AI-native
BullMQ     Redis        No
Agenda     MongoDB      No
pg-boss    PostgreSQL   No
bunqueue   Nothing      Yes
  • MCP server included — 73 tools, 5 resources, 3 prompts. AI agents get full control out of the box
  • BullMQ-compatible API — Same Queue, Worker, QueueEvents
  • Zero dependencies — No Redis, no MongoDB
  • SQLite persistence — Survives restarts, WAL mode for concurrent access
  • Up to 286K ops/sec (verified benchmarks)

Built for AI Agents (MCP Server)

HTTP Handler Flow: Cron/Add Job → Queue → Embedded Worker → HTTP API → Job Result

bunqueue is the first job queue with native MCP support. AI agents get a full-featured scheduler, task queue, and monitoring system — no glue code needed.

HTTP Handlers solve a fundamental problem: an AI agent can schedule jobs and manage queues, but it cannot run a persistent worker. When the agent registers an HTTP handler, bunqueue spawns an embedded Worker that continuously pulls jobs and calls your HTTP endpoint. Responses are saved as results. Failed calls retry automatically via DLQ.
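The dispatch cycle described above can be sketched as a plain loop. This is an illustrative outline only, not bunqueue's actual implementation: `pullJob`, `saveResult`, `moveToDLQ`, `requeue`, and `MAX_ATTEMPTS` are hypothetical stand-ins for the queue's internals, and the HTTP call to your endpoint is abstracted behind `callHandler`.

```typescript
// Sketch of an embedded-worker dispatch loop (illustrative, not bunqueue's code).
type Job = { id: number; name: string; data: unknown; attempts: number };

const MAX_ATTEMPTS = 3; // hypothetical retry limit before dead-lettering

async function dispatchLoop(
  pullJob: () => Job | undefined,
  callHandler: (job: Job) => Promise<unknown>, // in bunqueue: an HTTP POST to the registered URL
  saveResult: (id: number, result: unknown) => void,
  moveToDLQ: (job: Job) => void,
  requeue: (job: Job) => void,
): Promise<void> {
  for (let job = pullJob(); job; job = pullJob()) {
    try {
      // The handler's response is stored as the job result.
      saveResult(job.id, await callHandler(job));
    } catch {
      job.attempts += 1;
      // Failed calls are retried; exhausted jobs land in the dead-letter queue.
      if (job.attempts >= MAX_ATTEMPTS) moveToDLQ(job);
      else requeue(job);
    }
  }
}
```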

What AI agents can do with bunqueue:

  • Schedule tasks — cron jobs, delayed execution, recurring workflows
  • Manage job pipelines — push jobs, monitor progress, retry failures
  • Full pull/ack/fail cycle — agents can consume and process jobs directly
  • Monitor everything — stats, memory, Prometheus metrics, logs, DLQ
  • Control flow — pause/resume queues, set rate limits, manage concurrency
  • 73 MCP tools + 5 resources + 3 prompts — complete control over every feature
  • HTTP handlers — register a URL, bunqueue auto-processes jobs via HTTP calls
# One command to connect Claude Code
claude mcp add bunqueue -- bunx bunqueue-mcp
// Claude Desktop / Cursor / Windsurf — add to MCP config
{
  "mcpServers": {
    "bunqueue": {
      "command": "bunx",
      "args": ["bunqueue-mcp"]
    }
  }
}

Example agent interactions:

  • "Schedule a cleanup job every day at 3 AM"
  • "Add 500 email jobs to the queue with priority 10"
  • "Show me all failed jobs and retry them"
  • "Set rate limit to 50/sec on the api-calls queue"
  • "What's the memory usage and queue throughput?"
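To make the pull/ack/fail cycle concrete, here is a minimal in-memory model of the state transitions involved (waiting → active → completed/failed). It is a teaching sketch under assumed semantics, not bunqueue's API or storage format; `MiniQueue` and its method names are invented for illustration.

```typescript
// In-memory sketch of the pull/ack/fail state machine (illustrative only).
type QueuedJob = { id: number; priority: number; data: unknown };

class MiniQueue {
  private waiting: QueuedJob[] = [];
  private active = new Map<number, QueuedJob>();
  readonly completed: number[] = [];
  readonly failed: QueuedJob[] = [];

  add(job: QueuedJob): void {
    this.waiting.push(job);
    // Higher priority pulls first, matching the "priority 10" example above.
    this.waiting.sort((a, b) => b.priority - a.priority);
  }

  pull(): QueuedJob | undefined {
    const job = this.waiting.shift();
    if (job) this.active.set(job.id, job); // mark in-flight
    return job;
  }

  ack(id: number): void {
    if (this.active.delete(id)) this.completed.push(id);
  }

  fail(id: number): void {
    const job = this.active.get(id);
    if (job) {
      this.active.delete(id);
      this.failed.push(job); // a real queue would retry before dead-lettering
    }
  }
}
```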

Plugin ecosystem — bunqueue ships with auto-discovery (.mcp.json), a custom Claude Code agent for bunqueue tasks, and installable skills for setup, API reference, and real-world patterns. Drop bunqueue into any project and your AI tools discover it automatically.

Supports embedded (local SQLite) and TCP (remote server) modes. Full MCP documentation →

When to use bunqueue

Great for:

  • AI agents that need a scheduler — cron jobs, delayed tasks, retries, all via MCP
  • Agentic workflows — agents push jobs, workers process, agents monitor results
  • Single-server deployments
  • Prototypes and MVPs
  • Moderate to high workloads (up to 286K ops/sec)
  • Teams that want to avoid Redis operational overhead
  • Embedded use cases (CLI tools, edge functions, serverless)

Not ideal for:

  • Multi-region distributed systems requiring HA
  • Workloads that need automatic failover today
  • Systems already running Redis with existing infrastructure

Why not just use BullMQ?

If you're already running Redis, BullMQ is great — battle-tested and feature-rich.

bunqueue is for when you don't want to run Redis. SQLite with WAL mode handles surprisingly high throughput for single-node deployments (tested up to 286K ops/sec). You get persistence, priorities, delays, retries, cron jobs, and DLQ — without the operational overhead of another service.

Install

bun add bunqueue

Requires the Bun runtime; Node.js is not supported.

Two Modes

bunqueue runs in two modes depending on your architecture:

               Embedded                                Server (TCP)
How it works   Queue runs inside your process          Standalone server; clients connect via TCP
Setup          bun add bunqueue                        docker run or bunqueue start
Performance    286K ops/sec                            149K ops/sec
Best for       Single-process apps, CLIs, serverless   Multiple workers, separate producer/consumer
Scaling        Same process only                       Multiple clients across machines

Embedded Mode

Everything runs in your process. No server, no network, no setup.

import { Queue, Worker } from 'bunqueue/client';

const queue = new Queue('emails', { embedded: true });

const worker = new Worker(
  'emails',
  async (job) => {
    console.log('Processing:', job.data);
    return { sent: true };
  },
  { embedded: true }
);

await queue.add('welcome', { to: 'user@example.com' });

Server Mode (TCP)

Run bunqueue as a standalone server. Multiple workers and producers connect via TCP.

# Start with persistent data
docker run -d -p 6789:6789 -p 6790:6790 \
  -v bunqueue-data:/app/data \
  ghcr.io/egeominotti/bunqueue:latest

Connect from your app:

import { Queue, Worker } from 'bunqueue/client';

const queue = new Queue('tasks', { connection: { host: 'localhost', port: 6789 } });

const worker = new Worker(
  'tasks',
  async (job) => {
    return { done: true };
  },
  { connection: { host: 'localhost', port: 6789 } }
);

await queue.add('process', { data: 'hello' });

Simple Mode

One object. Queue + Worker + Routes + Middleware + Cron. Zero boilerplate.

import { Bunqueue } from 'bunqueue/client';

const app = new Bunqueue('notifications', {
  embedded: true,

  // Route jobs by name
  routes: {
    'send-email': async (job) => {
      console.log(`Email to ${job.data.to}`);
      return { sent: true };
    },
    'send-sms': async (job) => {
      console.log(`SMS to ${job.data.to}`);
      return { sent: true };
    },
  },
  concurrency: 10,
});

// Middleware — wraps every job (logging, timing, error recovery)
app.use(async (job, next) => {
  const start = Date.now();
  const result = await next();
  console.log(`${job.name} took ${Date.now() - start}ms`);
  return result;
});

// Cron — scheduled jobs
await app.cron('daily-report', '0 9 * * *', { type: 'summary' });
await app.every('healthcheck', 30000, { type: 'ping' });

// Events
app.on('completed', (job, result) => console.log(result));
app.on('failed', (job, err) => console.error(err));

// Add jobs
await app.add('send-email', { to: 'alice@example.com' });
await app.add('send-sms', { to: '+1234567890' });

// Graceful shutdown
await app.close();

Works with both embedded and TCP mode. Simple Mode docs →
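The `app.use` middleware above follows the familiar onion model: each middleware receives the job plus a `next` callback that runs the rest of the chain. A generic sketch of how such a chain can be composed (this is not bunqueue's internals, just the standard pattern):

```typescript
// Generic onion-model middleware composition (illustrative, not bunqueue's code).
type Job = { name: string; data: unknown };
type Handler = (job: Job) => Promise<unknown>;
type Middleware = (job: Job, next: () => Promise<unknown>) => Promise<unknown>;

function compose(middlewares: Middleware[], handler: Handler): Handler {
  // Fold right-to-left so the first-registered middleware is outermost.
  return (job) =>
    middlewares.reduceRight<() => Promise<unknown>>(
      (next, mw) => () => mw(job, next),
      () => handler(job),
    )();
}
```

With this shape, a timing middleware like the one in the example wraps every handler call: code before `await next()` runs on the way in, code after it runs on the way out.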

Performance

SQLite handles surprisingly high throughput for single-node deployments:

Mode       Peak throughput   Use case
Embedded   286K ops/sec      Same process
TCP        149K ops/sec      Distributed workers

Run bun run bench to verify on your hardware. Full benchmark methodology →
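The general shape of an ops/sec measurement like the table above is just operations divided by elapsed time. A simplified sketch (the real `bun run bench` suite is more careful; warmup, variance, and persistence costs are ignored here, and `measureOpsPerSec` is an invented helper):

```typescript
// Rough throughput measurement: time N operations and divide (illustrative only).
function measureOpsPerSec(op: () => void, iterations = 100_000): number {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) op();
  // Clamp to avoid dividing by zero on very fast loops.
  const elapsedSec = Math.max(performance.now() - start, 0.001) / 1000;
  return Math.round(iterations / elapsedSec);
}

// Example: raw cost of pushing jobs into an in-memory array.
const buffer: object[] = [];
const opsPerSec = measureOpsPerSec(() => buffer.push({ name: "noop" }));
```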

Monitoring

# Start with Prometheus + Grafana
docker compose --profile monitoring up -d

Documentation

Read the full documentation →

License

MIT

Keywords

bun


Package last updated on 03 Apr 2026
