Vercel Queues
A TypeScript client library for interacting with the Vercel Queue Service API, designed for seamless integration with Vercel deployments.
Features
- Simple API: send and handleCallback are all you need for push-based workflows
- Automatic Triggering on Vercel: Vercel invokes your route handlers when messages are ready
- Works Anywhere: send and receive work in any Node.js environment, including self-hosted and non-Vercel platforms
- Type Safety: Full TypeScript generics support
- Customizable Serialization: Built-in JSON, Buffer, and Stream transports
- Local Dev Mode: Messages sent locally trigger your handlers automatically
Installation
npm install @vercel/queue
Quick Start
1. Link your Vercel project and pull credentials:
The SDK authenticates via OIDC. Link your project if you haven't already, then pull to get fresh tokens:
npm i -g vercel
vc link
vc env pull
2. Send a message anywhere in your app:
import { send } from "@vercel/queue";
await send("my-topic", { message: "Hello world" });
3. Handle incoming messages with a route handler:
import { handleCallback } from "@vercel/queue";
export const POST = handleCallback(async (message, metadata) => {
console.log("Processing:", message);
});
4. Configure vercel.json:
{
"functions": {
"app/api/queue/my-topic/route.ts": {
"experimentalTriggers": [{ "type": "queue/v2beta", "topic": "my-topic" }]
}
}
}
That's it. The top-level send and handleCallback use an auto-configured default client. The region is auto-detected from VERCEL_REGION (set automatically on Vercel). If the region can't be detected (e.g. local dev), it falls back to iad1.
To target a specific region for sending with the top-level send, pass the region option:
await send("my-topic", payload, { region: "sfo1" });
Note: The region option is only available on the top-level send() convenience export. When using a QueueClient instance, the region is set once in the constructor via new QueueClient({ region: "sfo1" }) and applies to all operations on that client.
Local Development
Queues just work locally. When you send() messages in development mode, the library sends them to the real Vercel Queue Service, then invokes your registered handleCallback handlers directly in-process using the same code path as production. Your handlers are called with the same lifecycle (receive, visibility extension, ack) as in production. If a handler throws, the message is re-delivered after the configured retry delay (from retryAfterSeconds in vercel.json or the retry callback's afterSeconds), with an incrementing deliveryCount, matching production retry semantics.
Works with Next.js (Turbopack and webpack), Nuxt, SvelteKit, and any framework that runs server-side JavaScript. The SDK automatically discovers handlers from your vercel.json configuration and loads route modules on demand — no manual setup required beyond vercel.json.
Note: Local dev mode is enabled when NODE_ENV=development. Most frameworks set this automatically during npm run dev.
Publishing Messages
import { send } from "@vercel/queue";
await send("my-topic", { message: "Hello world" });
await send(
"my-topic",
{ message: "Hello world" },
{
idempotencyKey: "unique-key",
retentionSeconds: 3600,
delaySeconds: 60,
region: "sfo1",
},
);
Example usage in an API route:
import { send } from "@vercel/queue";
export async function POST(request: Request) {
const body = await request.json();
const { messageId } = await send("my-topic", { message: body.message });
return Response.json({ messageId });
}
Note: messageId is null when the server accepts the message for deferred processing (e.g. during a server-side outage). The message will still be delivered.
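Because messageId can be null, an API route may want to distinguish "sent now" from "accepted for deferred processing". A minimal sketch; statusFor is a hypothetical helper, not part of the SDK:

```typescript
// Hypothetical helper: map the send() result to an HTTP status.
// messageId is null when the message was accepted for deferred
// processing, so 202 Accepted is a better fit than 200 OK.
function statusFor(messageId: string | null): number {
  return messageId === null ? 202 : 200;
}

// Sketch of the route above using it (assumes send() from the Quick Start):
//   const { messageId } = await send("my-topic", { message: body.message });
//   return Response.json({ messageId }, { status: statusFor(messageId) });
```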
Consuming Messages
On Vercel
On Vercel, messages are consumed using API route handlers that Vercel automatically invokes when messages are available. Use handleCallback or handleNodeCallback to create these route handlers.
Web API — handleCallback
Returns (Request) => Promise<Response>. For frameworks that export Web API route handlers (Next.js App Router, Hono, etc.).
Next.js App Router:
import { handleCallback } from "@vercel/queue";
export const POST = handleCallback(
async (message, metadata) => {
await processMessage(message);
},
{
visibilityTimeoutSeconds: 600,
},
);
Nuxt:
import { handleCallback } from "@vercel/queue";
const handler = handleCallback(async (message, metadata) => {
await processMessage(message);
});
export default defineEventHandler(async (event) => {
return handler(toWebRequest(event));
});
SvelteKit:
import { handleCallback } from "@vercel/queue";
export const POST = handleCallback(async (message, metadata) => {
await processMessage(message);
});
Hono:
import { Hono } from "hono";
import { handleCallback } from "@vercel/queue";
const app = new Hono();
app.post(
"/api/queue",
handleCallback(async (message, metadata) => {
await processMessage(message);
}),
);
export default app;
Connect-style — handleNodeCallback
Returns (req, res) => Promise<void>. For frameworks that export Connect-style handlers (Vercel Node.js functions, Express, Next.js Pages Router, etc.). handleNodeCallback is not a top-level export — it is only available via a QueueClient instance:
import { QueueClient } from "@vercel/queue";
const queue = new QueueClient();
export const { handleNodeCallback } = queue;
Vercel Node.js Functions (plain api/ directory):
import { handleNodeCallback } from "./lib/queue";
export default handleNodeCallback(async (message, metadata) => {
await processMessage(message);
});
Next.js Pages Router:
import { handleNodeCallback } from "@/lib/queue";
export default handleNodeCallback(async (message, metadata) => {
await processMessage(message);
});
Express:
import express from "express";
import { handleNodeCallback } from "@/lib/queue";
const app = express();
app.use(express.json());
app.post(
"/api/queue/my-topic",
handleNodeCallback(async (message, metadata) => {
await processMessage(message);
}),
);
export default app;
Configure vercel.json
Tell Vercel which routes handle which topics:
{
"functions": {
"app/api/queue/my-topic/route.ts": {
"experimentalTriggers": [
{
"type": "queue/v2beta",
"topic": "my-topic",
"retryAfterSeconds": 60,
"initialDelaySeconds": 0
}
]
},
"app/api/queue/orders/fulfillment/route.ts": {
"experimentalTriggers": [
{ "type": "queue/v2beta", "topic": "order-events" }
]
},
"app/api/queue/orders/analytics/route.ts": {
"experimentalTriggers": [
{
"type": "queue/v2beta",
"topic": "order-events",
"retryAfterSeconds": 300
}
]
}
}
}
Multiple route files for the same topic create separate consumer groups — each receives a copy of every message.
Retry and Backoff
When a handler throws, the message is not acknowledged and becomes available for redelivery after the retryAfterSeconds interval configured in vercel.json. Retries continue until the handler succeeds or the message expires (default: 24 hours, max: 7 days).
For finer control over retry timing, pass a retry option. You can also set visibilityTimeoutSeconds to control how long the message is locked during processing (default: 300):
import { handleCallback } from "@vercel/queue";
export const POST = handleCallback(
async (message, metadata) => {
await processMessage(message);
},
{
visibilityTimeoutSeconds: 600,
retry: (error, metadata) => {
if (error instanceof RateLimitError) return { afterSeconds: 60 };
},
},
);
When retry returns { afterSeconds: N }, the message is rescheduled for redelivery after N seconds. When it returns { acknowledge: true }, the message is acknowledged and never retried. When it returns undefined, the error propagates normally and the message is retried at the default interval.
Exponential backoff uses metadata.deliveryCount (starts at 1, increments each delivery):
import { handleCallback } from "@vercel/queue";
export const POST = handleCallback(
async (message, metadata) => {
await processMessage(message);
},
{
retry: (error, metadata) => {
const delay = Math.min(300, 2 ** metadata.deliveryCount * 5);
return { afterSeconds: delay };
},
},
);
Conditional retry — only retry transient errors:
import { handleCallback } from "@vercel/queue";
export const POST = handleCallback(
async (message, metadata) => {
await processMessage(message);
},
{
retry: (error, metadata) => {
if (error instanceof RateLimitError) return { afterSeconds: 60 };
if (error instanceof TemporaryError) return { afterSeconds: 30 };
},
},
);
Acknowledging poison messages — stop retrying messages that can never succeed:
import { handleCallback } from "@vercel/queue";
export const POST = handleCallback(
async (message, metadata) => {
await processMessage(message);
},
{
retry: (error, metadata) => {
if (error instanceof ValidationError) return { acknowledge: true };
if (metadata.deliveryCount > 5) return { acknowledge: true };
return { afterSeconds: Math.min(300, 2 ** metadata.deliveryCount * 5) };
},
},
);
The retry option is available on handleCallback, handleNodeCallback, and receive.
Custom Client Configuration
For most use cases, the top-level send and handleCallback functions are all you need. For advanced configuration (custom transports, explicit tokens, deployment pinning), create a QueueClient instance directly.
QueueClient (push mode)
For push-based workflows where Vercel delivers messages to your route handlers:
import { QueueClient, BufferTransport } from "@vercel/queue";
const queue = new QueueClient({
region: "iad1",
token: "my-token",
transport: new BufferTransport(),
headers: { "X-Custom": "header" },
deploymentId: null,
});
export const { send, handleCallback, handleNodeCallback } = queue;
PollingQueueClient (poll mode)
For manual polling workflows where you call receive to poll for messages. Works anywhere — on Vercel, self-hosted, or any Node.js environment:
import { PollingQueueClient, BufferTransport } from "@vercel/queue";
const queue = new PollingQueueClient({
region: "iad1",
token: "my-token",
transport: new BufferTransport(),
headers: { "X-Custom": "header" },
deploymentId: null,
});
export const { send, receive } = queue;
Both clients send requests to https://${region}.vercel-queue.com. When handleCallback receives a message, it reads the ce-vqsregion header and routes follow-up API calls to the correct regional endpoint.
To customize the URL scheme, provide a resolveBaseUrl that returns a URL:
const queue = new QueueClient({
resolveBaseUrl: (region) => new URL(`https://${region}.my-proxy.example`),
});
const queue = new QueueClient({
resolveBaseUrl: (region) =>
new URL(`https://my-proxy.example/queues/${region}`),
});
The SDK always appends its own API path (/api/v3/…) to the returned URL.
Transports
The transport controls how message payloads are serialized and deserialized.
| Use case | Transport | Overhead | Notes |
| --- | --- | --- | --- |
| Structured data | JsonTransport | Low | Default, JSON encoding |
| Binary data | BufferTransport | Medium | Raw bytes |
| Large payloads | StreamTransport | Very Low | No buffering, streaming |
import {
QueueClient,
JsonTransport,
BufferTransport,
StreamTransport,
} from "@vercel/queue";
const queue = new QueueClient({
transport: new JsonTransport({
replacer: (key, value) => (key === "password" ? undefined : value),
reviver: (key, value) => (key === "date" ? new Date(value) : value),
}),
});
const binQueue = new QueueClient({
transport: new BufferTransport(),
});
await binQueue.send("binary-topic", myBuffer);
const streamQueue = new QueueClient({
transport: new StreamTransport(),
});
await streamQueue.send("large-file", myReadableStream);
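Assuming StreamTransport hands your handler the payload as a web ReadableStream (an assumption; check the SDK's types for the exact shape), draining it might look like this:

```typescript
// Drain a web ReadableStream of byte chunks and return the total byte count.
// Useful inside a handler when the payload was sent via StreamTransport.
async function byteLength(stream: ReadableStream<Uint8Array>): Promise<number> {
  const reader = stream.getReader();
  let total = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) return total;
    total += value?.byteLength ?? 0;
  }
}

// Sketch of a handler using it (streamQueue from the example above):
//   export const POST = streamQueue.handleCallback(async (stream) => {
//     console.log("received", await byteLength(stream), "bytes");
//   });
```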
Manual Receive
Use PollingQueueClient to poll for and process messages directly. This is an advanced alternative to handleCallback that works in any Node.js environment, both on and off Vercel.
Region considerations
Messages can only be received from the region they were sent to. When using receive, use a fixed region (e.g. "iad1") for both sending and receiving — do not use VERCEL_REGION, because Vercel may route requests to different regions due to failover or load balancing, distributing your messages across regions unpredictably.
A single region is still highly available — Vercel deploys across 3+ availability zones within each region. If you need multi-region availability, you are responsible for designing your own HA strategy (e.g. sending to multiple regions and receiving from each).
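One way to sketch such a strategy is to fan each publish out with the top-level send's region option. The send function is injected here so the sketch stays self-contained and testable; in practice you would pass send from @vercel/queue:

```typescript
// Shape of the top-level send(), narrowed to what this sketch needs.
type SendFn = (
  topic: string,
  payload: unknown,
  options: { region: string },
) => Promise<{ messageId: string | null }>;

// Fan a message out to several regions; a consumer in each region then
// drains its own copy. Failures are collected per region rather than
// thrown, so one unreachable region does not fail the whole publish.
async function sendToRegions(
  sendFn: SendFn,
  regions: string[],
  topic: string,
  payload: unknown,
): Promise<{ region: string; ok: boolean }[]> {
  return Promise.all(
    regions.map(async (region) => {
      try {
        await sendFn(topic, payload, { region });
        return { region, ok: true };
      } catch {
        return { region, ok: false };
      }
    }),
  );
}
```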
For most use cases on Vercel, handleCallback via QueueClient is the recommended approach — the platform handles region routing automatically and the SDK routes follow-up calls to the correct region via the ce-vqsregion header.
Usage
import { PollingQueueClient } from "@vercel/queue";
const { send, receive } = new PollingQueueClient({ region: "iad1" });
await send("my-topic", { message: "Hello world" });
const result = await receive(
"my-topic",
"my-group",
async (message, metadata) => {
console.log("Processing:", message);
},
);
if (!result.ok) {
console.log("Queue was empty:", result.reason);
}
await receive("my-topic", "my-group", handler, { limit: 10 });
await receive("my-topic", "my-group", handler, { messageId: "msg-123" });
Note: limit and messageId are mutually exclusive options. The handler is never called when the queue is empty — check result.ok instead.
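A simple worker loop on top of receive might look like the sketch below. The receive call is injected so the sketch runs without queue infrastructure; in practice you would wrap the receive bound to your PollingQueueClient:

```typescript
// Shape of a single receive attempt, narrowed to the discriminated
// result described above.
type ReceiveResult = { ok: true } | { ok: false; reason: string };
type ReceiveFn = () => Promise<ReceiveResult>;

// Keep polling until the queue reports empty; return how many receive
// calls succeeded. A real worker would sleep or back off between empty
// polls instead of returning.
async function drainOnce(receiveFn: ReceiveFn): Promise<number> {
  let processed = 0;
  for (;;) {
    const result = await receiveFn();
    if (!result.ok) return processed;
    processed += 1;
  }
}
```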
Error Handling
import {
BadRequestError,
DuplicateMessageError,
ForbiddenError,
InternalServerError,
UnauthorizedError,
send,
} from "@vercel/queue";
try {
await send("my-topic", payload);
} catch (error) {
if (error instanceof UnauthorizedError) {
console.log("Invalid token - refresh authentication");
} else if (error instanceof ForbiddenError) {
console.log("Environment mismatch - check configuration");
} else if (error instanceof BadRequestError) {
console.log("Invalid parameters:", error.message);
} else if (error instanceof DuplicateMessageError) {
console.log("Duplicate message:", error.idempotencyKey);
} else if (error instanceof InternalServerError) {
console.log("Server error - retry with backoff");
}
}
All error types:
| Error | Description |
| --- | --- |
| BadRequestError | Invalid request parameters |
| UnauthorizedError | Authentication failed (invalid/missing token) |
| ForbiddenError | Access denied (wrong environment/project) |
| DuplicateMessageError | Idempotency key already used |
| ConsumerDiscoveryError | Could not reach consumer deployment |
| ConsumerRegistryNotConfiguredError | Project not configured for queues |
| InternalServerError | Unexpected server error |
| InvalidLimitError | Batch limit outside valid range (1-10) |
| MessageNotFoundError | Message doesn't exist or expired |
| MessageNotAvailableError | Message exists but cannot be claimed |
| MessageAlreadyProcessedError | Message already successfully processed |
| MessageLockedError | Message being processed by another consumer |
| MessageCorruptedError | Message data could not be parsed |
| QueueEmptyError | No messages available in queue |
Environment Variables
| Variable | Description | Default |
| --- | --- | --- |
| VERCEL_REGION | Current region (auto-set by Vercel) | - |
| VERCEL_QUEUE_DEBUG | Enable debug logging (1 or true) | - |
| VERCEL_DEPLOYMENT_ID | Deployment ID (auto-set by Vercel) | - |
Service Limits & Constraints
Throughput & Storage
| Limit | Value | Notes |
| --- | --- | --- |
| Message throughput | 10,000+ msg/sec/topic | Scales horizontally |
| Payload size | 100 MB | Smaller messages have lower latency |
| Number of topics | Unlimited | No hard limit |
| Consumer groups per message | ~4,000 | Per-message limit |
| Messages per queue | Unlimited | No hard limit |
Parameter Constraints
Publishing Messages
| Parameter | Default | Min | Max | Notes |
| --- | --- | --- | --- | --- |
| retentionSeconds | 86,400 (24h) | 60 | 604,800 (7d) | Message TTL |
| delaySeconds | 0 | 0 | 604,800 (7d) | Cannot exceed retention |
| idempotencyKey | — | — | — | Dedup window: min(retention, 24h) |
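The idempotencyKey deduplication window noted above ("min(retention, 24h)") can be stated as code:

```typescript
// Deduplication window for an idempotency key: the smaller of the
// message's retention and 24 hours, per the parameter table above.
const DAY_SECONDS = 86_400;

function dedupWindowSeconds(retentionSeconds: number): number {
  return Math.min(retentionSeconds, DAY_SECONDS);
}
```

So a message sent with retentionSeconds: 3600 is deduplicated for one hour, while one with the maximum 7-day retention is still only deduplicated for 24 hours.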
Receiving Messages
| Parameter | Default | Min | Max | Notes |
| --- | --- | --- | --- | --- |
| visibilityTimeoutSeconds | 300 | 30 | 3,600 | Lock duration during processing |
| limit | 1 | 1 | 10 | Messages per request |
Identifier Formats
| Identifier | Format | Example / Notes |
| --- | --- | --- |
| Topic name | [A-Za-z0-9_-]+ | my-queue, task_queue_v2 |
| Consumer group | [A-Za-z0-9_-]+ | worker-1, analytics_consumer |
| Message ID | Opaque string | 0-1, 3-7K9mNpQrS |
| Receipt handle | Opaque string | Used for acknowledge/visibility ops |
Wildcard Topics
{
"functions": {
"app/api/queue/route.ts": {
"experimentalTriggers": [{ "type": "queue/v2beta", "topic": "user-*" }]
}
}
}
Rules:
- * may only appear once in the pattern
- * must be at the end of the topic name
- Valid: user-*, orders-*
- Invalid: *-events, user-*-data
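As an illustration of these rules (not the SDK's actual matcher), a pattern validator and matcher might look like this. Inside a wildcard handler, metadata.topicName tells you which concrete topic a message came from:

```typescript
// Validate a topic pattern per the rules above: "*" may appear at most
// once, and only at the very end of the pattern.
function isValidPattern(pattern: string): boolean {
  const star = pattern.indexOf("*");
  if (star === -1) return true; // plain topic name, no wildcard
  return star === pattern.length - 1;
}

// Check whether a concrete topic matches a valid pattern.
function matchesPattern(pattern: string, topic: string): boolean {
  if (!pattern.endsWith("*")) return pattern === topic;
  return topic.startsWith(pattern.slice(0, -1));
}
```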
API Reference
Top-level send(topicName, payload, options?)
The simplest way to send a message. Uses an auto-configured default client that detects the region from VERCEL_REGION, falling back to "iad1" with a console warning on first use.
import { send } from "@vercel/queue";
const { messageId } = await send("my-topic", payload, {
idempotencyKey: "unique-key",
retentionSeconds: 3600,
delaySeconds: 60,
headers: { "X-Custom": "val" },
region: "sfo1",
});
The region option is exclusive to the top-level send(). It creates a one-off client targeting the given region. When using QueueClient, the region is set once in the constructor and applies to all operations.
Returns { messageId: string | null }. messageId is null when the server accepted the message for deferred processing (e.g. during a server-side outage).
Top-level handleCallback(handler, options?)
The simplest way to handle incoming queue messages. Uses the same auto-configured default client as send.
import { handleCallback } from "@vercel/queue";
export const POST = handleCallback(
async (message, metadata) => {
await processMessage(message);
},
{
visibilityTimeoutSeconds: 300,
retry: (error, metadata) => {
// return { afterSeconds: n } to reschedule, { acknowledge: true } to drop,
// or undefined to fall back to the default retry interval
},
},
);
Returns (request: Request) => Promise<Response> — for frameworks that export Web API route handlers. Vercel only. The region for follow-up API calls is determined automatically from the ce-vqsregion header in the incoming event.
QueueClient
Push-based client for workflows where Vercel delivers messages to your route handlers. Use this when you need custom configuration (transport, token, headers, deployment pinning). Region is auto-detected from VERCEL_REGION (set automatically on Vercel), falling back to "iad1" with a console warning.
import { QueueClient } from "@vercel/queue";
const queue = new QueueClient({
region: "iad1",
resolveBaseUrl: (r) => new URL(`https://${r}.vercel-queue.com`),
token: "my-token",
headers: { "X-Custom": "value" },
transport: new JsonTransport(),
deploymentId: undefined,
});
const { send, handleCallback, handleNodeCallback } = queue;
PollingQueueClient
Poll-based client for manually receiving messages. Works in any Node.js environment, including self-hosted and non-Vercel platforms. Region is required to ensure send and receive target the same endpoint.
import { PollingQueueClient } from "@vercel/queue";
const queue = new PollingQueueClient({
region: "iad1",
});
const { send, receive } = queue;
receive(topicName, consumerGroup, handler, options?)
Available on PollingQueueClient only.
Returns a discriminated result: { ok: true } on success, or { ok: false, reason } when no message was processed. The handler is never called when the queue is empty.
For receive-by-id, operational errors are returned instead of thrown:
const result = await receive("my-topic", "my-group", handler, {
messageId: "msg-123",
});
if (!result.ok) {
console.log(result.reason, result.messageId);
}
const result = await receive("my-topic", "my-group", handler, {
limit: 10,
visibilityTimeoutSeconds: 60,
});
handleNodeCallback(handler, options?)
Available on QueueClient instances only (not a top-level export). Vercel only.
Returns (req, res) => Promise<void> — for frameworks that export Connect-style handlers.
import { QueueClient } from "@vercel/queue";
const queue = new QueueClient();
const { handleNodeCallback } = queue;
export default handleNodeCallback(
async (message, metadata) => {
await processMessage(message);
},
{
retry: (error, metadata) => ({ afterSeconds: 60 }),
},
);
Handler Signature
type MessageHandler<T> = (
message: T,
metadata: MessageMetadata,
) => Promise<void> | void;
interface MessageMetadata {
messageId: string;
deliveryCount: number;
createdAt: Date;
expiresAt: Date;
topicName: string;
consumerGroup: string;
region: string;
}
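Because these are plain TypeScript types, a generically typed handler can be written and unit-tested with no queue infrastructure at all. The types are restated locally so the sketch is self-contained, and OrderEvent is a hypothetical payload type:

```typescript
// Restated from the handler signature above so this sketch stands alone.
type MessageHandler<T> = (
  message: T,
  metadata: MessageMetadata,
) => Promise<void> | void;

interface MessageMetadata {
  messageId: string;
  deliveryCount: number;
  createdAt: Date;
  expiresAt: Date;
  topicName: string;
  consumerGroup: string;
  region: string;
}

// Hypothetical payload type for an orders topic.
interface OrderEvent {
  orderId: string;
  amountCents: number;
}

const seen: string[] = [];

// The generic parameter gives `message` a concrete type inside the handler;
// the same handler could be passed to handleCallback<OrderEvent>.
const onOrder: MessageHandler<OrderEvent> = (message, metadata) => {
  seen.push(`${metadata.topicName}:${message.orderId}`);
};
```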
License
MIT