Workmatic
A persistent job queue for Node.js built on SQLite. Simple, reliable, and no external services to run - your jobs live in a single database file.
Why?
I love fastq - it's fast, simple, and has a great API. But it's in-memory only, so queued jobs are lost whenever the process restarts or crashes.
Workmatic combines the simplicity of fastq with SQLite persistence. No Redis, no external services - just a single file that keeps your jobs safe. Perfect for small to medium workloads where you want durability without infrastructure complexity.
Features
- Persistent: Jobs survive restarts via SQLite storage
- Concurrent: Process multiple jobs simultaneously with fastq
- Priority: Process high-priority jobs first
- Delayed: Schedule jobs to run in the future
- Retries: Automatic retries with exponential backoff
- Lease-based: Prevents double processing with lease locks
- State Persistence: Optionally persist worker state across restarts
- Dashboard: Built-in web UI for monitoring
- Type-safe: Full TypeScript support with Kysely
Installation
npm install workmatic
Quick Start
import { createDatabase, createClient, createWorker } from 'workmatic';
const db = createDatabase({ filename: './jobs.db' });
const client = createClient({ db, queue: 'emails' });
const { id } = await client.add({
to: 'user@example.com',
subject: 'Hello!'
});
console.log(`Job created: ${id}`);
const worker = createWorker({
db,
queue: 'emails',
concurrency: 4,
});
worker.process(async (job) => {
console.log(`Sending email to ${job.payload.to}`);
await sendEmail(job.payload); // your own email-sending function
});
worker.start();
API Reference
createDatabase(options)
Initialize the database connection and schema.
const db = createDatabase({
// Pass exactly one of the following:
filename: './jobs.db', // file-backed database
// filename: ':memory:', // in-memory database (lost on exit)
// db: existingSqliteInstance, // reuse an existing better-sqlite3 instance
});
createClient(options)
Create a client for adding jobs to a queue.
const client = createClient({
db,
queue: 'default',
});
client.add(payload, options?)
Add a job to the queue.
const { ok, id } = await client.add(
{ email: 'user@example.com' },
{
priority: 0, // lower values are processed first
delayMs: 5000, // job becomes available after 5 seconds
maxAttempts: 3, // attempts before the job is marked dead
}
);
client.stats()
Get job statistics for the queue.
const stats = await client.stats();
client.clear(options?)
Clear jobs from the queue, optionally filtered by status.
const deleted = await client.clear();
console.log(`Deleted ${deleted} jobs`);
const deletedDone = await client.clear({ status: 'done' });
console.log(`Deleted ${deletedDone} done jobs`);
createWorker(options)
Create a worker to process jobs from a queue.
const worker = createWorker({
db,
queue: 'default',
concurrency: 1,
leaseMs: 30000,
pollMs: 1000,
timeoutMs: 60000,
backoff: (n) => 1000 * Math.pow(2, n),
persistState: false,
autoRestore: true,
});
worker.process(fn)
Set the job processor function.
worker.process(async (job) => {
console.log(`Processing job ${job.id}`);
console.log(`Payload:`, job.payload);
console.log(`Attempt ${job.attempts + 1} of ${job.maxAttempts}`);
});
worker.start()
Start processing jobs.
worker.start();
worker.stop()
Stop processing and wait for current jobs to finish.
await worker.stop();
worker.pause() / worker.resume()
Pause and resume job processing.
worker.pause();
worker.resume();
worker.stats()
Get job statistics for the queue.
const stats = await worker.stats();
worker.clear(options?)
Clear jobs from the queue, optionally filtered by status.
const deleted = await worker.clear();
console.log(`Deleted ${deleted} jobs`);
const deletedDead = await worker.clear({ status: 'dead' });
console.log(`Deleted ${deletedDead} dead jobs`);
Worker properties
worker.isRunning;
worker.isPaused;
worker.queue;
Persist State Mode
When persistState: true, the worker's state (running/paused/stopped) is saved to the database. This is useful when you want the worker to remember its state across app restarts.
const worker = createWorker({
db,
queue: 'emails',
persistState: true,
autoRestore: true,
});
worker.process(async (job) => {
// handle job
});
worker.start();
await worker.stop();
With autoRestore: false, you can manually control when to restore:
const worker = createWorker({
db,
persistState: true,
autoRestore: false,
});
worker.process(async (job) => { });
const savedState = await worker.restoreState();
console.log(`Restored state: ${savedState}`);
createDashboard(options)
Create a standalone web dashboard server for monitoring and control.
const dashboard = createDashboard({
db,
port: 3000,
workers: [worker1],
});
console.log(`Dashboard at http://localhost:${dashboard.port}`);
await dashboard.close();
createDashboardMiddleware(options)
Create an Express-compatible middleware to mount the dashboard on an existing app.
import express from 'express';
import { createDashboardMiddleware } from 'workmatic';
const app = express();
app.use(createDashboardMiddleware({
db,
basePath: '/workmatic',
workers: [worker],
}));
app.listen(3000);
Works with any framework that supports Node.js (req, res, next) middleware:
import fastify from 'fastify';
import middie from '@fastify/middie';
const app = fastify();
await app.register(middie);
app.use(createDashboardMiddleware({ db, basePath: '/jobs' }));
import { Hono } from 'hono';
import { serve } from '@hono/node-server';
const app = new Hono();
const middleware = createDashboardMiddleware({ db, basePath: '/workmatic' });
app.use('/workmatic/*', (c) => {
// @hono/node-server exposes the raw Node req/res on c.env
return new Promise((resolve) => {
middleware(c.env.incoming, c.env.outgoing, resolve);
});
});
serve({ fetch: app.fetch, port: 3000 });
Job Object
The job object passed to processors has these properties:
interface Job<TPayload> {
id: string;
queue: string;
payload: TPayload;
status: JobStatus;
priority: number;
attempts: number;
maxAttempts: number;
createdAt: number;
lastError: string | null;
}
Durability Model
Workmatic provides at-least-once delivery:
- Jobs are persisted to SQLite before add() returns
- A job may be processed multiple times if:
  - The worker crashes during processing
  - The lease expires before completion
- Jobs are only marked done after successful processing
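The lease rule can be sketched in isolation. Below is a minimal illustration (hypothetical field names, not Workmatic's internal schema) of when a job is claimable:

```typescript
// Hypothetical sketch: a job can be claimed when it is ready, or when it is
// marked running but its lease has expired (the previous worker likely crashed).
function isClaimable(
  job: { status: string; leaseUntil: number },
  now: number
): boolean {
  if (job.status === 'ready') return true;
  return job.status === 'running' && job.leaseUntil < now;
}

console.log(isClaimable({ status: 'ready', leaseUntil: 0 }, 1000));      // true
console.log(isClaimable({ status: 'running', leaseUntil: 2000 }, 1000)); // false: lease still held
console.log(isClaimable({ status: 'running', leaseUntil: 2000 }, 5000)); // true: lease expired
```

This is why a crash can lead to a second delivery: the lease eventually expires and another worker picks the job up again.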
Idempotency Recommendation
Design your job handlers to be idempotent (safe to run multiple times):
worker.process(async (job) => {
const exists = await db.checkProcessed(job.id);
if (exists) return;
await processPayment(job.payload);
await db.markProcessed(job.id);
});
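The same pattern in a self-contained form, with an in-memory Set standing in for the hypothetical db.checkProcessed/db.markProcessed helpers above:

```typescript
// Self-contained idempotency sketch: a processed-ID set makes redelivery a no-op.
const processed = new Set<string>();
let charges = 0; // stand-in for a side effect such as processPayment()

async function handle(job: { id: string }): Promise<void> {
  if (processed.has(job.id)) return; // already handled: safe to skip
  charges += 1;
  processed.add(job.id);
}

// The handler contains no await, so both calls complete synchronously here.
handle({ id: 'job_1' });
handle({ id: 'job_1' }); // redelivery of the same job
console.log(charges); // 1: the side effect ran exactly once
```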
Job Lifecycle
┌─────────┐ add() ┌─────────┐
│ NEW │ ─────────────▶ │ READY │
└─────────┘ └────┬────┘
│
claim │
▼
┌─────────┐
│ RUNNING │
└────┬────┘
│
┌────────────────┼────────────────┐
│ │ │
success failure failure
│ (retries (max
│ left) attempts)
▼ │ │
┌─────────┐ │ ▼
│ DONE │ │ ┌─────────┐
└─────────┘ │ │ DEAD │
│ └─────────┘
│
▼
┌─────────────────┐
│ READY (retry) │
│ with backoff │
└─────────────────┘
Options Glossary
| Option | Default | Description |
| --- | --- | --- |
| queue | 'default' | Queue name for job isolation |
| concurrency | 1 | Number of jobs to process in parallel |
| leaseMs | 30000 | How long a job is "locked" during processing |
| pollMs | 1000 | How often to check for new jobs when idle |
| timeoutMs | undefined | Job execution timeout (fails the job if exceeded) |
| priority | 0 | Job priority (lower = processed first) |
| delayMs | 0 | Delay before the job becomes available |
| maxAttempts | 3 | Maximum processing attempts |
| backoff | 2^n * 1000 | Function returning the retry delay in ms |
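As a quick sanity check, the default backoff from the glossary produces these retry delays:

```typescript
// Default backoff: 2^n * 1000 ms, where n is the number of prior attempts.
const backoff = (n: number): number => 1000 * Math.pow(2, n);

console.log(backoff(0)); // 1000 ms after the first failure
console.log(backoff(1)); // 2000 ms after the second
console.log(backoff(2)); // 4000 ms after the third
```

Any function of n works here, e.g. a capped backoff like `(n) => Math.min(60000, 1000 * 2 ** n)`.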
Dashboard
The built-in dashboard provides:
- Real-time job statistics
- Job list with filtering and pagination
- Worker status and control (pause/resume)
- Auto-refresh every 2 seconds

Examples
See the examples/ directory:
basic.ts - Simple job processing
advanced.ts - Priority, delays, retries
with-dashboard.ts - Dashboard monitoring
Run examples:
npx tsx examples/basic.ts
npx tsx examples/with-dashboard.ts
Benchmarks
Run performance benchmarks:
npm run bench
npm run bench -- --file
Results Comparison
| Benchmark | In-memory | File |
| --- | --- | --- |
| Sequential Insert | 27,000/s | 13,000/s |
| Parallel Insert | 23,000/s | 12,000/s |
| Process (concurrency=1) | 1,100/s | 1,100/s |
| Process (concurrency=4) | 4,800/s | 4,800/s |
| Process (concurrency=8) | 10,000/s | 8,300/s |
| Process (concurrency=16) | 18,000/s | 5,700/s |
| Mixed Insert+Process | 7,500/s | 3,500/s |
| Claim + Process Batch | 23,600/s | 11,800/s |
Note: File-based performance degrades at high concurrency due to disk I/O. For file-based databases, concurrency=8 is often optimal. Performance varies by hardware.
CLI
Workmatic includes a command-line tool for managing jobs directly from the database file.
npx workmatic stats ./jobs.db
npx workmatic queues ./jobs.db
npx workmatic pause ./jobs.db emails
npx workmatic resume ./jobs.db emails
npx workmatic list ./jobs.db --status=dead --limit=10
npx workmatic export ./jobs.db backup.csv
npx workmatic export ./jobs.db --status=failed > failed.csv
npx workmatic import ./jobs.db backup.csv
npx workmatic purge ./jobs.db --status=done
npx workmatic retry ./jobs.db --status=dead
CLI Commands
| Command | Description |
| --- | --- |
| stats <db> | Show job counts by status and queue |
| queues <db> | List all queues with pause status |
| pause <db> <queue> | Pause a queue (workers stop claiming) |
| resume <db> <queue> | Resume a paused queue |
| list <db> | List jobs with optional filters |
| export <db> [file] | Export jobs to CSV (stdout if no file) |
| import <db> <file> | Import jobs from CSV |
| purge <db> --status=X | Delete jobs with the given status |
| retry <db> --status=X | Reset jobs to ready status |
CLI Options
| Option | Description |
| --- | --- |
| --status=<status> | Filter by status (ready/running/done/failed/dead) |
| --queue=<queue> | Filter by queue name |
| --limit=<n> | Limit results (default: 100) |
Live Pause/Resume
The pause and resume commands work on running workers in real-time. When you pause a queue:
- Running workers immediately stop claiming new jobs
- Jobs currently being processed will complete
- The queue resumes when you run resume
This allows you to manage workers without restarting your application.
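Conceptually, pause works because each worker re-checks the queue's paused flag before claiming. A simplified sketch (hypothetical names, not Workmatic's actual internals):

```typescript
// Hypothetical sketch: a worker only claims when the queue is not paused and
// it has spare concurrency; in-flight jobs are never interrupted.
function shouldClaim(paused: boolean, inFlight: number, concurrency: number): boolean {
  return !paused && inFlight < concurrency;
}

console.log(shouldClaim(false, 2, 4)); // true: capacity available
console.log(shouldClaim(true, 2, 4));  // false: queue paused, stop claiming
console.log(shouldClaim(false, 4, 4)); // false: at the concurrency limit
```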
CSV Import for AI Workflows
The CSV import feature makes Workmatic particularly useful for AI-powered automation:
public_id,queue,payload,status,priority,run_at,attempts,max_attempts,lease_until,created_at,updated_at,last_error
job_001,emails,"{""to"":""user@example.com"",""template"":""welcome""}",ready,0,1704067200000,0,3,0,1704067200000,1704067200000,
job_002,emails,"{""to"":""other@example.com"",""template"":""reminder""}",ready,5,1704067200000,0,3,0,1704067200000,1704067200000,
Use cases:
- AI Agents: Tools like Claude, GPT, or custom agents can generate CSV files with batch jobs. Simply ask an AI to "create 100 email jobs for these users" and import the result.
- Spreadsheet workflows: Edit jobs in Excel/Google Sheets, export to CSV, and import into the queue.
- Migration: Move jobs between environments or recover from backups.
- Testing: Generate test datasets with specific job configurations.
- Bulk operations: Create thousands of jobs without writing code.
npx workmatic import ./jobs.db jobs.csv
cat ai-generated-jobs.csv | npx workmatic import ./jobs.db /dev/stdin
The simple CSV format means any tool that can output text can create jobs for your queue.
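As a sketch of that idea, the snippet below (a hypothetical helper, matching the column order shown above) builds ready-to-import rows, escaping the JSON payload's double quotes per CSV rules:

```typescript
// Sketch: generate CSV rows in the column order shown above.
// Double quotes inside the JSON payload are escaped as "" per CSV conventions.
const header = 'public_id,queue,payload,status,priority,run_at,attempts,max_attempts,lease_until,created_at,updated_at,last_error';

function toRow(id: string, queue: string, payload: object, now: number): string {
  const json = JSON.stringify(payload).replace(/"/g, '""');
  return `${id},${queue},"${json}",ready,0,${now},0,3,0,${now},${now},`;
}

const now = 1704067200000;
const csv = [
  header,
  toRow('job_001', 'emails', { to: 'user@example.com', template: 'welcome' }, now),
].join('\n');
console.log(csv);
```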
Architecture
┌──────────────────────────────────────────────────────────┐
│ Your App │
├────────────────────┬─────────────────────────────────────┤
│ Client │ Worker │
│ ┌─────────────┐ │ ┌─────────────┐ ┌─────────────┐ │
│ │ add() │ │ │ pump() │ │ fastq │ │
│ │ stats() │ │ │ claim() │ │ pool │ │
│ └─────────────┘ │ └─────────────┘ └─────────────┘ │
├────────────────────┴─────────────────────────────────────┤
│ Kysely (Query Builder) │
├──────────────────────────────────────────────────────────┤
│ better-sqlite3 (SQLite) │
├──────────────────────────────────────────────────────────┤
│ jobs.db (File) │
└──────────────────────────────────────────────────────────┘
License
MIT