# Upstash QStash SDK
> [!NOTE]
> **This project is in GA Stage.**
> This project is fully covered by Upstash Professional Support. It receives regular updates and bug fixes, and the Upstash team is committed to maintaining and improving its functionality.
QStash is an HTTP-based messaging and scheduling solution for serverless and edge runtimes.

It is 100% built on stateless HTTP requests and designed for:
- Serverless functions (AWS Lambda ...)
- Cloudflare Workers (see the example)
- Fastly Compute@Edge
- Next.js, including edge
- Deno
- Client side web/mobile applications
- WebAssembly
- and other environments where HTTP is preferred over TCP.
## How does QStash work?
QStash is the message broker between your serverless apps. You send an HTTP request to QStash that includes a destination, a payload, and optional settings. We durably store your message and deliver it to the destination API via HTTP. If the destination is not ready to receive the message, we retry later to guarantee at-least-once delivery.
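Under the hood, publishing is a single stateless HTTP call. As a minimal sketch (the destination URL below is a placeholder, and the optional `Upstash-Delay` header is just one example of the available settings), the raw request the SDK wraps for you looks roughly like this:

```ts
// A rough sketch of publishing without the SDK, using QStash's v2 REST API.
// The destination ("https://my-api.example.com") is a placeholder.
const res = await fetch(
  "https://qstash.upstash.io/v2/publish/https://my-api.example.com",
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.QSTASH_TOKEN}`,
      "Content-Type": "application/json",
      // Optional settings travel as `Upstash-*` headers, e.g. a delivery delay:
      "Upstash-Delay": "60s",
    },
    body: JSON.stringify({ hello: "world" }),
  },
);
console.log(await res.json()); // e.g. { "messageId": "..." }
```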
## Quick Start
### Install
#### npm

```sh
npm install @upstash/qstash
```
### Get your authorization token
Go to the Upstash Console and copy the `QSTASH_TOKEN`.
### Basic Usage

#### Publishing a message
```ts
import { Client } from "@upstash/qstash";
import "isomorphic-fetch"; // fetch polyfill for runtimes without a global fetch

const client = new Client({
  token: "<QSTASH_TOKEN>",
});

const res = await client.publishJSON({
  url: "https://my-api...",
  body: {
    hello: "world",
  },
});
console.log(res);
```
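`publishJSON` also accepts the optional delivery settings mentioned above. A minimal sketch, assuming the `delay` (in seconds) and `retries` options of the SDK:

```ts
import { Client } from "@upstash/qstash";

const client = new Client({ token: "<QSTASH_TOKEN>" });

const res = await client.publishJSON({
  url: "https://my-api...",
  body: { hello: "world" },
  // Deliver the message 60 seconds from now.
  delay: 60,
  // Retry up to 3 times if the destination is not ready.
  retries: 3,
});
console.log(res.messageId);
```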
#### Receiving a message
How you receive a message depends on your HTTP server. Call the `Receiver.verify` method as the first step in your handler function.
```ts
import { Receiver } from "@upstash/qstash";

const r = new Receiver({
  currentSigningKey: "..",
  nextSigningKey: "..",
});

const isValid = await r.verify({
  signature: "string",
  body: "string",
});
```
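For example, here is a minimal sketch of wiring `Receiver.verify` into a fetch-style handler (as used by Cloudflare Workers or Next.js edge routes). The environment variable names are assumptions for illustration; QStash sends the signature in the `Upstash-Signature` header:

```ts
import { Receiver } from "@upstash/qstash";

// The env var names below are illustrative; read from wherever you store keys.
const receiver = new Receiver({
  currentSigningKey: process.env.QSTASH_CURRENT_SIGNING_KEY!,
  nextSigningKey: process.env.QSTASH_NEXT_SIGNING_KEY!,
});

export default async function handler(req: Request): Promise<Response> {
  // QStash sends the signature in the `Upstash-Signature` header.
  const signature = req.headers.get("upstash-signature") ?? "";
  // Verify against the raw body, exactly as it was received.
  const body = await req.text();

  const isValid = await receiver.verify({ signature, body });
  if (!isValid) {
    return new Response("invalid signature", { status: 401 });
  }

  // Safe to process the message now.
  return new Response("ok", { status: 200 });
}
```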
#### Publishing a message to an LLM provider
No complicated setup is needed for your LLM requests: QStash calls the LLM provider for you and delivers the result to your serverless app.
```ts
import { Client, openai } from "@upstash/qstash";

const client = new Client({
  token: "<QSTASH_TOKEN>",
});

const result = await client.publishJSON({
  api: { name: "llm", provider: openai({ token: process.env.OPENAI_API_KEY! }) },
  body: {
    model: "gpt-3.5-turbo",
    messages: [
      {
        role: "user",
        content: "Where is the capital of Turkey?",
      },
    ],
  },
  callback: "https://oz.requestcatcher.com/",
});
```
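The `callback` URL is where the LLM response is delivered: QStash performs the provider call itself and POSTs the completion to your callback, so your own function does not need to stay alive for the duration of the request.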
## Docs
See the documentation for details.
## Contributing

### Setup
This project requires Bun to be installed. Please see the Bun installation documentation for further instructions.

Once you have cloned the project, install the dependencies and build it:

```sh
bun install
bun run build
```
### Testing
To run the tests, you first need to set up environment variables. Create a `.env` file in the root of the project; `.env.template` can be used as a template. Your values can be found in the QStash Console.

```sh
bun run test
```
### Committing
This project uses commitlint. When committing, please ensure your commit message includes an appropriate type prefix.

Examples:

```
fix: typescript bug
feat: use new logger
perf: refactor cache
```
For a full list of options, please see the `commitlint.config.js` file.