# @libretto/anthropic

A drop-in replacement for the official Anthropic client that also sends events to Libretto.

## Installation

```shell
npm install @libretto/anthropic
```
## Get Started

To send events to Libretto, you'll need to create a project. From the project you'll need two things:

- **API key** (`apiKey`): This is generated for the project and is used to identify the project and the environment (dev, staging, prod) that the event is coming from.
- **Template name** (`promptTemplateName`): This uniquely identifies a particular prompt that you are using and allows projects to have multiple prompts. It can be in any format, but we recommend a dash-separated format, e.g. `my-prompt-name`.

Note: Prompt template names can be auto-generated if the `allowUnnamedPrompts` configuration option is set (see below). However, if you rely on auto-generated names, new revisions of the same prompt will show up as different prompt templates in Libretto.
## Usage

You can use the `Anthropic` client provided by `@libretto/anthropic` anywhere that you're currently using the official client.

When instantiating the client, provide any of the standard `Anthropic` parameters in the constructor. Libretto-specific configuration goes in an additional `libretto` argument (see below).

To allow our tools to separate the "prompt" from the "prompt parameters", use the included `objectTemplate` helper and pass the parameters separately as follows:
```typescript
import { Anthropic, objectTemplate } from "@libretto/anthropic";

async function main() {
  const anthropic = new Anthropic({
    apiKey: "<Anthropic API Key>",
    libretto: {
      apiKey: "<Libretto API Key>",
    },
  });

  const completion = await anthropic.messages.create({
    messages: objectTemplate([
      { role: "user", content: "Give a hearty welcome to our new user {name}" },
    ]),
    model: "claude-3-haiku-20240307",
    max_tokens: 1024,
    libretto: {
      promptTemplateName: "ts-client-test-chat",
      templateParams: { name: "John" },
      context: { someKey: "somevalue" },
    },
  });

  console.log(completion.content);
}

main();
```
## Configuration

The following options may be set in the `libretto` object that has been added to the Anthropic client constructor:

- `promptTemplateName`: A default name to associate with prompts. If provided, this is the name that will be associated with any `create` call that's made without a `libretto.promptTemplateName` parameter.
- `allowUnnamedPrompts`: When set to `true`, every prompt will be sent to Libretto even if no prompt template name has been provided (either via the `promptTemplateName` option here or via the `libretto.promptTemplateName` parameter added to `create`).
- `redactPii`: When `true`, the client attempts to redact certain personally identifiable information (PII) before events are sent to the Libretto backend. See the `pii` package for details about the types of PII that are detected and redacted. Defaults to `false`.
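Putting these options together, a client that supplies a default template name, allows unnamed prompts, and enables PII redaction might be constructed like this (a sketch using the options documented above; the option values are illustrative):

```typescript
import { Anthropic } from "@libretto/anthropic";

const anthropic = new Anthropic({
  apiKey: "<Anthropic API Key>",
  libretto: {
    apiKey: "<Libretto API Key>",
    // Used for any create() call that doesn't pass its own
    // libretto.promptTemplateName:
    promptTemplateName: "my-default-prompt",
    // Send events even when no template name is available:
    allowUnnamedPrompts: true,
    // Attempt to redact PII before events reach the Libretto backend:
    redactPii: true,
  },
});
```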
## Additional Parameters

The following parameters can be specified in the `libretto` object that has been added to the base Anthropic `create` call interface:

- `templateParams`: The parameters to use for template strings. This is a dictionary of key-value pairs.
- `chatId`: The ID of a "chat session". If the chat API is being used in a conversational context, the same chat ID can be provided so that the events are grouped together, in order. If not provided, this will be left blank.
- `chainId`: A UUID that groups related events together in a chain. For example, if you have a multi-step workflow where one LLM call's output feeds into another LLM call, you can use the same `chainId` to track the full sequence of events.
- `feedbackKey`: The optional key used to send feedback on the prompt, for use with `sendFeedback()` later. This is normally auto-generated, and the value is returned in the Anthropic response.
- `context`: This optional key/value map lets you send additional information along with your request, such as internal tracing IDs, user IDs, etc.
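A per-request `libretto` object combining these parameters might look like the following sketch (all values are illustrative, not real IDs):

```typescript
// A sketch of the per-request `libretto` options documented above.
const librettoOptions = {
  promptTemplateName: "article-summary",
  templateParams: { articleTitle: "Hello World" },
  // Groups the turns of one conversation, in order:
  chatId: "session-1234",
  // Groups the steps of one multi-call workflow:
  chainId: "7f0c2f1e-9b1d-4c7e-8a2b-3d4e5f6a7b8c",
  // Extra information for your own tracing/debugging:
  context: { traceId: "internal-trace-42" },
};
```

This object would be passed as the `libretto` property of a `create` call, alongside the normal Anthropic parameters.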
## Maintaining Chat History

In the context of a chat dialog, you will want to maintain the history of the chat across each turn-based request. Libretto has a reserved role that marks where the full chat history should be inserted. In the messages provided to `objectTemplate`, simply insert the following:

```typescript
{
  role: "chat_history",
  content: "{chat_history}"
}
```
Finally, you control what gets inserted there in your request sent to Anthropic. In your `templateParams`, provide the aforementioned messages:

```typescript
templateParams: {
  chat_history: [
    {
      role: "user",
      content: "First user message",
    },
    {
      role: "assistant",
      content: "First response from model",
    },
  ],
}
```
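The turn-by-turn bookkeeping can be sketched as a small helper (hypothetical, not part of `@libretto/anthropic`) that extends the history array after each completed exchange:

```typescript
// Hypothetical helper: accumulate the messages that will be substituted
// for the {chat_history} placeholder on the next request.
type ChatMessage = { role: "user" | "assistant"; content: string };

function appendTurn(
  history: ChatMessage[],
  userContent: string,
  assistantContent: string,
): ChatMessage[] {
  // Each completed turn adds the user's message and the model's reply.
  return [
    ...history,
    { role: "user", content: userContent },
    { role: "assistant", content: assistantContent },
  ];
}

let chatHistory: ChatMessage[] = [];
chatHistory = appendTurn(
  chatHistory,
  "First user message",
  "First response from model",
);
// On the next request: templateParams: { chat_history: chatHistory }
```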
## Sending Feedback

Sometimes the answer provided by the LLM is not ideal, and your users may be able to help you find better responses. There are a few common cases:

- You might use the LLM to suggest the title of a news article, but let the user edit it. If they change the title, you can send feedback to Libretto that the answer was not ideal.
- You might provide a chatbot that answers questions, and the user can rate the answers with a thumbs up (good) or thumbs down (bad).

You can send this feedback to Libretto by calling `sendFeedback()`. This sends feedback about a prompt that was previously called and lets you review the feedback in the Libretto dashboard. You can use the feedback to develop new tests and improve your prompts.
```typescript
import { Anthropic, sendFeedback } from "@libretto/anthropic";

async function main() {
  const anthropic = new Anthropic();

  const completion = await anthropic.messages.create({
    // ... your messages.create() parameters ...
  });

  const originalAnswer = completion.content[0].text;
  const betterAnswer = await askUserForBetterResult(originalAnswer);

  if (betterAnswer !== originalAnswer) {
    const feedbackKey = completion.libretto?.feedbackKey;
    await sendFeedback({
      apiKey: "<Libretto API Key>",
      feedbackKey,
      betterResponse: betterAnswer,
      rating: 0.2,
    });
  }
}

main();
```
Note that feedback can include `rating`, `betterResponse`, or both.

Parameters:

- `rating`: A value from 0 (the result was completely wrong) to 1 (the result was correct).
- `betterResponse`: The better response from the user.
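For the thumbs up/down case above, mapping the UI signal onto this 0-to-1 scale could look like this (a hypothetical helper, not part of the library):

```typescript
// Hypothetical helper: map a thumbs-up/down UI signal onto the 0..1
// rating scale expected by sendFeedback().
function thumbsToRating(thumbsUp: boolean): number {
  return thumbsUp ? 1 : 0;
}
```

Intermediate values (like the `0.2` in the example above) let you express that an answer was partially right but needed editing.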