Langtail SDK
TypeScript SDK for Langtail.

Install
npm i langtail
Usage
OpenAI chat completion
A basic completion without any prompt. This just wraps the OpenAI API and adds a few extra parameters you can use to control how the request gets logged in Langtail.
import OpenAI from "openai"
import { createOpenAIProxy } from "langtail/openai"
const openai = new OpenAI({
  apiKey: "<LANGTAIL_API_KEY>",
})
const lt = createOpenAIProxy(openai)
const rawCompletion = await lt.chat.completions.create({
  messages: [{ role: "system", content: "You are a helpful assistant." }],
  model: "gpt-3.5-turbo",
  prompt: "<prompt-slug>",
  doNotRecord: false,
  metadata: {
    "custom-field": "1",
  },
})
Deployed prompts
Completion from a deployed prompt can be called with lt.prompts.invoke:
const deployedPromptCompletion = await lt.prompts.invoke({
  prompt: "<PROMPT_SLUG>",
  environment: "staging",
  variables: {
    about: "cowboy Bebop",
  },
})
Of course, this assumes that you have already deployed your prompt to the staging environment. If not, an error is thrown: Error: Failed to fetch prompt: 404 {"error":"Prompt deployment not found"}
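If you want to handle a missing deployment gracefully, you can wrap the call in a try/catch. A minimal sketch (only the error message format above is documented; anything else about the error shape is an assumption):

try {
  await lt.prompts.invoke({
    prompt: "<PROMPT_SLUG>",
    environment: "staging",
  })
} catch (error) {
  // e.g. fall back to a default prompt or report the missing deployment
  console.error("Failed to invoke prompt:", error)
}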
LangtailPrompts
In case you only need deployed prompts, you can import just LangtailPrompts like this:
import { LangtailPrompts } from "langtail"

const lt = new LangtailPrompts({
  apiKey: "<LANGTAIL_API_KEY>",
})
const deployedPromptCompletion = await lt.invoke({
  prompt: "<PROMPT_SLUG>",
  environment: "staging",
  variables: {
    about: "cowboy Bebop",
  },
})
You can initialize the client with workspace and project slugs like so:
import { Langtail } from "langtail"
const lt = new Langtail({
  apiKey: "<LANGTAIL_API_KEY>",
  workspace: "<WORKSPACE_SLUG>",
  project: "<PROJECT_SLUG>",
})
This is necessary if your API key is workspace-wide; for a project API key it can be omitted.
Streaming responses
Both chat.completions.create and prompts.invoke support streaming responses. All you need to enable it is the { stream: true } flag, like this:
const deployedPromptCompletion = await lt.prompts.invoke({
  prompt: "<PROMPT_SLUG>",
  environment: "staging",
  stream: true,
})
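With stream: true, the result is a stream of OpenAI-style chunks. A minimal consumption sketch, assuming the returned stream is async-iterable like the OpenAI SDK's streams:

for await (const chunk of deployedPromptCompletion) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "")
}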
Full API reference is in API.md
We support the same runtimes as OpenAI.
Proxyless usage
You can avoid the Langtail API altogether by constructing your prompt locally and calling a provider such as OpenAI directly.
Let's suppose you have a prompt called joke-teller deployed to staging in Langtail. You can get its template and all the playground config by calling the get method like this:
import { LangtailPrompts } from "langtail"
const lt = new LangtailPrompts({
  apiKey: "<LANGTAIL_API_KEY>",
})

const playgroundState = await lt.get({
  prompt: "<PROMPT_SLUG>",
  environment: "preview",
  version: "<PROMPT_VERSION>",
})
get returns something like this, depending on how your prompt was configured when it was deployed:
{
  "chatInput": {
    "optionalExtra": "",
  },
  "state": {
    "args": {
      "frequency_penalty": 0,
      "jsonmode": false,
      "max_tokens": 800,
      "model": "gpt-3.5-turbo",
      "presence_penalty": 0,
      "stop": [],
      "stream": true,
      "temperature": 0.5,
      "top_p": 1,
    },
    "functions": [],
    "template": [
      {
        "content": "I want you to tell me a joke. Topic of the joke: {{topic}}",
        "role": "system",
      },
    ],
    "tools": [],
    "type": "chat",
  },
}
getOpenAIBody renders your template and builds the final OpenAI-compatible payload:
import { getOpenAIBody } from "langtail/getOpenAIBody"
const openAiBody = getOpenAIBody(playgroundState, {
  stream: true,
  variables: {
    topic: "iron man",
  },
})
openAiBody now contains this object:
{
  "frequency_penalty": 0,
  "max_tokens": 800,
  "messages": [
    {
      "content": "I want you to tell me a joke. Topic of the joke: iron man",
      "role": "system",
    },
  ],
  "model": "gpt-3.5-turbo",
  "presence_penalty": 0,
  "temperature": 0.5,
  "top_p": 1,
}
Notice that the {{topic}} variable in your Langtail template was replaced with the value you passed in. You can call the OpenAI SDK directly with this object:
import OpenAI from "openai"
const openai = new OpenAI()
const joke = await openai.chat.completions.create(openAiBody)
This way you are still using Langtail prompts without exposing potentially sensitive data in your variables.
Typed inputs
You can override input types to improve IntelliSense for the prompt, environment, version and variables when calling a prompt. Use the command npx langtail generate-types.
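Once the types are generated, calls like the one below get autocompletion and type checking. An illustrative sketch (the joke-teller slug and topic variable are hypothetical; the actual names come from your own project):

const completion = await lt.prompts.invoke({
  prompt: "joke-teller", // slug is autocompleted and validated
  environment: "staging",
  variables: {
    topic: "typescript", // variable names are checked against the prompt template
  },
})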
Vercel AI provider
You can use Langtail with Vercel AI SDK.
Import langtail from langtail/vercel-ai and provide your prompt slug as an argument.
import { generateText } from 'ai'
import { langtail } from 'langtail/vercel-ai'
async function main() {
  const result = await generateText({
    model: langtail('stock-simple', {
      variables: { 'ticker': 'TSLA' },
      environment: "production",
      version: "2",
      doNotRecord: false,
      metadata: {},
    }),
    prompt: 'show me the price',
    temperature: 0,
  })

  console.log(result.text)
}
main().catch(console.error);
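The provider should also work with the AI SDK's streaming helper. A minimal sketch using streamText from the ai package (whether the response actually streams depends on your prompt's deployment settings):

import { streamText } from 'ai'
import { langtail } from 'langtail/vercel-ai'

const { textStream } = await streamText({
  model: langtail('stock-simple', {
    variables: { 'ticker': 'TSLA' },
  }),
  prompt: 'show me the price',
})

for await (const delta of textStream) {
  process.stdout.write(delta)
}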
You can also use aiBridge from langtail/vercel-ai to reuse an already existing Langtail instance:
import { generateText } from 'ai'
import { Langtail } from 'langtail'
import { aiBridge } from 'langtail/vercel-ai'

const langtail = new Langtail({ apiKey })
const lt = aiBridge(langtail)

const result = await generateText({
  model: lt('stock-simple', {
    variables: { 'ticker': 'TSLA' },
  }),
  prompt: 'show me the price',
})
Using tools from Langtail
If your prompts in Langtail contain tools, you can generate a file containing tool parameters for every prompt deployment in your project. Run npx langtail generate-tools --out [output_filepath] to generate the file. For the typings of the tools helper to work correctly, you also need to generate types (see Typed inputs above).
After the file is generated, you can provide the Langtail tools to AI SDK like this:
import { generateText } from 'ai'
import { langtail } from 'langtail/vercel-ai'
import tools from './langtailTools';

const ltModel = langtail('stock-simple', {
  environment: "production",
  version: "3",
});

const result = await generateText({
  model: ltModel,
  prompt: 'Show me the current price!',
  tools: tools(ltModel),
});
You can also define custom execute functions for your tools as follows:
tools(ltModel, {
  get_current_stock_price: {
    execute: async ({ ticker }) => {
      return {
        ticker,
        price: 200 + Math.floor(Math.random() * 50),
      };
    },
  },
})
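To let the model actually execute a tool call and then produce a final text answer, you can allow multiple generation steps. Note that maxSteps is an option of the AI SDK's generateText, not of Langtail; a sketch:

const result = await generateText({
  model: ltModel,
  prompt: 'Show me the current price!',
  tools: tools(ltModel, {
    get_current_stock_price: {
      execute: async ({ ticker }) => ({
        ticker,
        price: 200 + Math.floor(Math.random() * 50),
      }),
    },
  }),
  maxSteps: 2, // one step for the tool call, one for the final answer
})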
Stream helpers
The AI streams are delivered as JSON objects split into chunks, which can pose a challenge because a single JSON object may be distributed across multiple chunks. We provide helper functions to manage these JSON streams more effectively.
Here's an example:
import {
  chatStreamToRunner,
  type ChatCompletionStream,
} from "langtail/stream"
// ChatCompletionChunk is a type from the OpenAI SDK
import type { ChatCompletionChunk } from "openai/resources/chat/completions"
const stream = await fetch(`/api/langtail`, {
  method: "POST",
  body: JSON.stringify({ messages: localMessages }),
  headers: {
    "Content-Type": "application/json",
  },
}).then((res) => res.body)
const runner = chatStreamToRunner(stream)
runner.on("message", (messageDelta: string) => {
console.log(messageDelta)
})
runner.on("chunk", (chunk: ChatCompletionChunk) => {
})
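The /api/langtail endpoint above is a route you implement yourself. One possible sketch, assuming the streamed invoke result exposes the OpenAI SDK's toReadableStream() method and that invoke accepts a messages parameter (both assumptions; verify against your SDK version):

import { LangtailPrompts } from "langtail"

const lt = new LangtailPrompts({ apiKey: "<LANGTAIL_API_KEY>" })

export async function POST(req: Request) {
  const { messages } = await req.json()
  const result = await lt.invoke({
    prompt: "<PROMPT_SLUG>",
    environment: "production",
    messages, // hypothetical parameter: forwarding the client's messages
    stream: true,
  })
  // Forward the upstream stream so chatStreamToRunner can consume it on the client
  return new Response(result.toReadableStream())
}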