Prompts that are pushed to Phoenix are versioned and can be tagged.
## Pulling a Prompt from Phoenix
The `getPrompt` function pulls a prompt from Phoenix by name, tag, or version ID and returns it as the Phoenix SDK `Prompt` type.
```typescript
import { getPrompt } from "@arizeai/phoenix-client/prompts";

const prompt = await getPrompt({ name: "my-prompt" });
// ^ you now have a strongly-typed prompt object, in the Phoenix SDK Prompt type

const promptByTag = await getPrompt({ tag: "production", name: "my-prompt" });
// ^ you can optionally specify a tag to filter by

const promptByVersionId = await getPrompt({
  versionId: "1234567890",
});
// ^ you can optionally specify a prompt version ID to filter by
```
## Using a Phoenix Prompt with an LLM Provider SDK
The `toSDK` helper function converts a Phoenix Prompt into the format expected by an LLM provider SDK. You can then use that SDK as normal, with your prompt. If your prompt is saved in Phoenix as `openai`, you can use `toSDK` to convert it to the format expected by OpenAI, or even by Anthropic and the Vercel AI SDK. We make a best-effort conversion to your LLM provider SDK of choice.
```typescript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { getPrompt, toSDK } from "@arizeai/phoenix-client/prompts";

const prompt = await getPrompt({ name: "my-prompt" });

const promptAsAI = toSDK({
  sdk: "ai",
  // ^ the SDK you want to convert the prompt to; supported SDKs are listed above
  variables: {
    "my-variable": "my-value",
  },
  // ^ you can format the prompt with variables, if the prompt has any variables in its template
  //   the format (Mustache, F-string, etc.) is specified in the Prompt itself
  prompt,
});
// ^ promptAsAI is now in the format expected by the Vercel AI SDK generateText function

const response = await generateText({
  model: openai(prompt.model_name),
  // ^ the model adapter provided by the Vercel AI SDK can be swapped out for any other model
  //   adapter supported by the Vercel AI SDK. Take care to use the correct model name for the
  //   LLM provider you are using.
  ...promptAsAI,
});
```
## REST Endpoints
The client provides typed access to all REST endpoints defined in the Phoenix OpenAPI spec. Endpoints are addressed via strongly-typed string literals, with TypeScript auto-completion inside the client object.
```typescript
import { createClient } from "@arizeai/phoenix-client";

const phoenix = createClient();

// Get all datasets
const datasets = await phoenix.GET("/v1/datasets");

// Get specific prompt
const prompt = await phoenix.GET("/v1/prompts/{prompt_identifier}/latest", {
  params: {
    path: {
      prompt_identifier: "my-prompt",
    },
  },
});
```
A comprehensive overview of the available endpoints and their parameters is available in the OpenAPI viewer within Phoenix, or in the Phoenix OpenAPI spec.
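Because the client's types are generated from the OpenAPI spec, responses typically arrive in an `openapi-fetch`-style `{ data, error }` shape, so it can help to unwrap them consistently. A minimal sketch under that assumption (the `unwrap` helper is our own illustration, not a package export):

```typescript
// Hypothetical helper assuming openapi-fetch style { data, error } results.
type Result<T> = { data?: T; error?: unknown };

function unwrap<T>(result: Result<T>, context: string): T {
  if (result.error !== undefined || result.data === undefined) {
    throw new Error(`${context} failed: ${JSON.stringify(result.error)}`);
  }
  return result.data;
}

// Usage with the client would look like:
// const datasets = unwrap(await phoenix.GET("/v1/datasets"), "list datasets");
```

Centralizing this check keeps endpoint call sites short while still surfacing server errors loudly.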
## Datasets
The @arizeai/phoenix-client package allows you to create and manage datasets, which are collections of examples used for experiments and evaluation.
### Creating a Dataset
You can create a dataset by providing a name, description, and an array of examples (each with input, output, and optional metadata).
```typescript
import { createDataset } from "@arizeai/phoenix-client/datasets";

const { datasetId } = await createDataset({
  name: "questions",
  description: "a simple dataset of questions",
  examples: [
    {
      input: { question: "What is the capital of France" },
      output: { answer: "Paris" },
      metadata: {},
    },
    {
      input: { question: "What is the capital of the USA" },
      output: { answer: "Washington D.C." },
      metadata: {},
    },
  ],
});
// You can now use datasetId to run experiments or add more examples
```
## Experiments
The @arizeai/phoenix-client package provides an experiments API for running and evaluating tasks on datasets. This is useful for benchmarking models, evaluating outputs, and tracking experiment results in Phoenix.
### Running an Experiment
To run an experiment, you typically:

1. Create a dataset (or use an existing one)
2. Define a task function to run on each example
3. Define one or more evaluators to score or label the outputs
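The exact experiments API surface is not shown above, so the following only sketches the *shape* of a task and an evaluator under our own assumed types; `Example`, `task`, and `matchesExpected` are illustrative names, not package exports:

```typescript
// Hypothetical shapes: a task maps a dataset example to an output,
// and an evaluator scores that output against the expected answer.
type Example = { input: Record<string, unknown>; output?: Record<string, unknown> };

const task = async (example: Example): Promise<string> => {
  // In a real experiment you would call your model here;
  // we echo the question for illustration.
  return `You asked: ${example.input.question}`;
};

const matchesExpected = {
  name: "matches-expected",
  evaluate: ({
    output,
    expected,
  }: {
    output: string;
    expected?: Record<string, unknown>;
  }) => ({
    // score 1 if the expected answer appears in the output, else 0
    score: expected && output.includes(String(expected.answer)) ? 1 : 0,
  }),
};
```

With a dataset, a task, and evaluators in hand, the experiments API runs the task over each example and records the evaluator scores in Phoenix.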
Hint: Tasks and evaluators are instrumented using OpenTelemetry. You can view detailed traces of experiment runs and evaluations directly in the Phoenix UI for debugging and performance analysis.
## Examples
To run examples, install dependencies using pnpm and run:
```shell
pnpm install
pnpx tsx examples/list_datasets.ts
# change the file name to run other examples
```
## Compatibility
This package uses openapi-ts to generate its types from the Phoenix OpenAPI spec. As a result, it only works with arize-phoenix server 8.0.0 and above.
Compatibility Table:

| Phoenix Client Version | Phoenix Server Version |
| ---------------------- | ---------------------- |
| ^2.0.0                 | ^9.0.0                 |
| ^1.0.0                 | ^8.0.0                 |
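As a quick illustration of the table above, a hedged sketch mapping a server version to the client major it pairs with (`requiredClientMajor` is our own helper, not part of the package; check the table for releases newer than those listed):

```typescript
// Pick the client major version required for a given server version,
// based on the compatibility table above.
function requiredClientMajor(serverVersion: string): number | undefined {
  const major = Number(serverVersion.split(".")[0]);
  if (major >= 9) return 2; // ^2.0.0 client pairs with ^9.0.0 server
  if (major === 8) return 1; // ^1.0.0 client pairs with ^8.0.0 server
  return undefined; // servers below 8.0.0 are unsupported
}
```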
## Community

Join our community to connect with thousands of AI builders.