# DEPRECATED: @imaginary-dev/openai

**This package has been deprecated.** Please migrate to [@libretto/openai](https://www.npmjs.com/package/@libretto/openai).
A TypeScript wrapper around the `openai` library that sends events to Templatest.
## Installation

```sh
npm install @imaginary-dev/openai
```
## Usage

To use this library, you need to patch the `openai` library. The patch times calls to OpenAI and reports them to Templatest.

You'll need an API key from Templatest. Set it in the `PROMPT_API_KEY` environment variable or pass it directly to the `patch()` call. You'll also probably want to name the template you are using.
```ts
import { patch, objectTemplate } from "@imaginary-dev/openai";
import OpenAI from "openai";

async function main() {
  patch({
    apiKey: "XXX",
    OpenAI,
  });
  const openai = new OpenAI({
    apiKey: "YYY",
  });
  const completion = await openai.chat.completions.create({
    messages: objectTemplate([
      { role: "user", content: "Give a hearty welcome to our new user {name}" },
    ]) as any,
    model: "gpt-3.5-turbo",
    ip_prompt_template_name: "ts-client-test-chat",
    ip_template_params: { name: "John" },
  });
  console.log(completion.choices);
}

main();
```
## Advanced Usage

You can "unpatch" the library by calling `unpatch()`. This restores the original `create` method on the `chat.completions` object.
```ts
import { patch, objectTemplate } from "@imaginary-dev/openai";
import OpenAI from "openai";

const unpatch = patch({ OpenAI });
const openai = new OpenAI();

try {
  const completion = await openai.chat.completions.create({...});
} finally {
  unpatch();
}
```
## Configuration

The following options may be passed to `patch()`:

- `promptTemplateName`: A default name to associate with prompts. If provided, this is the name that will be associated with any `create` call made without an `ip_prompt_template_name` parameter.
- `allowUnnamedPrompts`: When set to `true`, every prompt is sent to Templatest even if no prompt template name has been provided (either via the `promptTemplateName` option on `patch` or via the `ip_prompt_template_name` parameter added to `create`).
- `redactPii`: When `true`, the library attempts to redact certain personally identifiable information (PII) before sending events to the Templatest backend. See the `pii` package for details about the types of PII detected and redacted. `false` by default.
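Taken together, a `patch()` call using all three options might look like the sketch below. The template name here is a placeholder of our choosing, not something defined by the library:

```ts
import { patch } from "@imaginary-dev/openai";
import OpenAI from "openai";

patch({
  OpenAI,
  apiKey: process.env.PROMPT_API_KEY,
  promptTemplateName: "article-titles", // hypothetical default name for create calls
  allowUnnamedPrompts: false,           // only report prompts that end up with a name
  redactPii: true,                      // attempt PII redaction before sending events
});
```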
## Additional Parameters

The following parameters are added to the `create` call:

- `ip_template_params`: The parameters to use for template strings. This is a dictionary of key-value pairs.
- `ip_chat_id`: The id of a "chat session". If the chat API is being used in a conversational context, the same chat id can be provided so that the events are grouped together, in order. If not provided, this is left blank.
- `ip_template_chat`: The chat template to record for chat requests. This is a list of dictionaries with the following keys:
  - `role`: The role of the speaker: `"system"`, `"user"`, or `"ai"`.
  - `content`: The content of the message. This can be a plain string or a template string with `{}` placeholders.
- `ip_template_text`: The text template to record for non-chat completion requests. This is a plain string or a template string with `{}` placeholders.
- `ip_parent_event_id`: The UUID of the parent event. All calls with the same parent id are grouped as a "Run Group".
- `ip_feedback_key`: The optional key used to send feedback on the prompt, for use with `sendFeedback()` later. This is normally auto-generated, and the value is returned in the OpenAI response.
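To make the relationship between `{}` placeholders and `ip_template_params` concrete, here is a standalone sketch of the substitution. This helper is purely illustrative and is not part of the library (the actual recording and substitution happen in the library and the Templatest backend):

```ts
// Illustration only: how `{}` placeholders in an ip_template_chat-style
// template line up with the key-value pairs in ip_template_params.
type ChatTemplateMessage = { role: "system" | "user" | "ai"; content: string };

function renderTemplate(
  template: ChatTemplateMessage[],
  params: Record<string, string>
): ChatTemplateMessage[] {
  return template.map((msg) => ({
    ...msg,
    // Replace each {key} with params[key]; leave unknown keys untouched.
    content: msg.content.replace(/\{(\w+)\}/g, (match, key) => params[key] ?? match),
  }));
}

const rendered = renderTemplate(
  [{ role: "user", content: "Give a hearty welcome to our new user {name}" }],
  { name: "John" }
);
console.log(rendered[0].content);
```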
## Sending Feedback

Sometimes the answer provided by the LLM is not ideal, and your users may be able to help you find better responses. There are a few common cases:

- You might use the LLM to suggest the title of a news article, but let the user edit it. If they change the title, you can send feedback to Templatest that the answer was not ideal.
- You might provide a chatbot that answers questions, and the user can rate the answers with a thumbs up (good) or thumbs down (bad).

You can send this feedback to Templatest by calling `sendFeedback()`. This sends a feedback event to Templatest about a prompt that was previously called, and lets you review this feedback in the Templatest dashboard. You can use this feedback to develop new tests and improve your prompts.
```ts
import { patch, sendFeedback } from "@imaginary-dev/openai";
import OpenAI from "openai";

const apiKey = process.env.PROMPT_API_KEY;

async function main() {
  patch({ apiKey, OpenAI });
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({...});

  // Chat completion choices carry the text in `message.content`.
  const originalText = completion.choices[0].message.content;
  // askUserForBetterResult is your own UI code.
  const betterAnswer = await askUserForBetterResult(originalText);
  if (betterAnswer !== originalText) {
    const feedbackKey = (completion as any).ip_feedback_key;
    await sendFeedback({
      apiKey,
      feedbackKey,
      betterResponse: betterAnswer,
      rating: 0.2,
    });
  }
}

main();
```
Note that feedback can include `rating`, `betterResponse`, or both.

Parameters:

- `rating`: A value from 0 (the result was completely wrong) to 1 (the result was correct).
- `betterResponse`: The better response from the user.
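For the thumbs up/down case above, one way to map the UI control onto the 0-to-1 rating scale is sketched below. The scale endpoints come from the parameter description; the mapping function itself is our illustration, not something prescribed by the library:

```ts
// Map a thumbs-up/down control onto the 0..1 rating scale used by
// sendFeedback(): 1 = correct, 0 = completely wrong.
function thumbsToRating(thumbs: "up" | "down"): number {
  return thumbs === "up" ? 1 : 0;
}

console.log(thumbsToRating("up"), thumbsToRating("down"));
```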