# OpenPipe Node API Library
This library wraps TypeScript and JavaScript OpenAI API calls and logs additional data to the configured `OPENPIPE_BASE_URL` for further processing.

It is fully compatible with OpenAI's SDK and logs both streaming and non-streaming requests and responses.
## Installation

```sh
npm install --save openpipe
# or
yarn add openpipe
```
## Usage

- Create a project at https://app.openpipe.ai
- Find your project's API key at https://app.openpipe.ai/project/settings
- Configure the OpenPipe client as shown below.
```typescript
import OpenAI from "openpipe/openai";

const openai = new OpenAI({
  apiKey: "my api key",
  openpipe: {
    apiKey: "my api key",
    baseUrl: "my url",
  },
});

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [{ role: "user", content: "Say this is a test" }],
    model: "gpt-3.5-turbo",
    openpipe: {
      tags: {
        prompt_id: "getCompletion",
        any_key: "any_value",
      },
      logRequest: true,
    },
  });

  console.log(completion.choices);
}

main();
```
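Since the library also logs streaming calls, the same pattern works with `stream: true`. A minimal sketch, assuming the client configuration shown above; the API key, prompt, and tag values here are placeholders:

```typescript
import OpenAI from "openpipe/openai";

// Placeholder configuration; see the configuration example above.
const openai = new OpenAI({
  apiKey: "my api key",
  openpipe: {
    apiKey: "my api key",
  },
});

async function main() {
  const stream = await openai.chat.completions.create({
    messages: [{ role: "user", content: "Count to five" }],
    model: "gpt-3.5-turbo",
    stream: true, // streaming responses are logged as well
    openpipe: {
      tags: { prompt_id: "getCompletionStream" }, // illustrative tag
    },
  });

  // Print each streamed chunk as it arrives.
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
  }
}

main();
```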
## FAQ

**How do I report calls to my self-hosted instance?**
Start an instance by following the instructions in Running Locally. Once it's running, point your `OPENPIPE_BASE_URL` to your self-hosted instance.
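One way to do this is via environment variables before starting your app. A sketch, assuming your instance listens on `localhost:3000`; the port, path, and the `OPENPIPE_API_KEY` variable name (mirroring the client's `apiKey` option) are assumptions for your own deployment:

```shell
# Point the SDK at a self-hosted OpenPipe instance.
# Host, port, and key below are placeholders for your deployment.
export OPENPIPE_BASE_URL="http://localhost:3000/api/v1"
export OPENPIPE_API_KEY="your self-hosted project's API key"
```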
**What if my `OPENPIPE_BASE_URL` is misconfigured or my instance goes down? Will my OpenAI calls stop working?**

Your OpenAI calls will continue to function as expected no matter what. The SDK handles logging errors gracefully without affecting OpenAI inference.
See the GitHub repo for more details.