cross-llm
Use LLM and Vector Embedding APIs on the web platform. Uses standard fetch()
and thus runs everywhere, including in Service Workers.
🌟 Features
The simplest API for using LLMs: it can hardly get easier than a single function call 😉
And what's best? The same API works across all supported providers.
AI providers and models currently supported:
- ✅ OpenAI: Any OpenAI LLM, including GPT-4 and newer models.
  - ✅ Promise-based
  - ✅ Streaming
  - ✅ Single message system prompt (instruct)
  - ✅ Multi-message prompt (chat)
  - ✅ Cost model
  - ✅ Text Embedding
- ✅ Anthropic: The whole Claude model series, including Opus.
  - ✅ Promise-based
  - ✅ Streaming
  - ✅ Single message system prompt (instruct)
  - ✅ Multi-message prompt (chat)
  - ✅ Cost model
  - 〰️ Text Embedding (Anthropic doesn't provide embedding endpoints)
- ✅ Perplexity: All models supported.
  - ✅ Promise-based
  - ✅ Streaming
  - ✅ Single message system prompt (instruct)
  - ✅ Multi-message prompt (chat)
  - ✅ Cost model (including flat fee)
  - 〰️ Text Embedding (Perplexity doesn't provide embedding endpoints)
- ✅ VoyageAI: Text Embedding models
- ✅ Mixedbread AI: Text Embedding models, specifically for German
AI providers and models to be supported soon:
- ❌ Google: The whole Gemini model series, including 1.5 Pro and Advanced.
- ❌ Cohere: The whole Command model series, including Command R+.
- ❌ Ollama: All Ollama LLMs, including Llama 3.
- ❌ HuggingFace: All HuggingFace LLMs.
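Since the API is provider-agnostic, switching providers only means changing the provider identifier and the model options. A minimal sketch using the `systemPrompt` call from the Usage section below (model names as listed above; the same options shape is assumed to apply to both providers):

```ts
import { systemPrompt } from "cross-llm";

// the same one-liner, pointed at two different providers: only the
// provider id and the provider-specific model options change
const viaAnthropic = await systemPrompt("Say hi!", "anthropic", {
  model: "claude-3-haiku-20240307",
}, { apiKey: import.meta.env[`anthropic_api_key`] });

const viaOpenAI = await systemPrompt("Say hi!", "openai", {
  model: "gpt-4-turbo",
}, { apiKey: import.meta.env[`openai_api_key`] });
```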
📚 Usage
- 🔨 First install the library:

```bash
npm/pnpm/yarn/bun install cross-llm
```
- 💡 Take a look at the super-simple code examples.
Single System Prompt
```ts
import { systemPrompt } from "cross-llm";

const promptResponse = await systemPrompt("Respond with JSON: { works: true }", "anthropic", {
  model: "claude-3-haiku-20240307",
  temperature: 0.7,
  max_tokens: 4096
}, { apiKey: import.meta.env[`anthropic_api_key`] });
```
Text Embedding
```ts
import { embed } from "cross-llm";

const textEmbedding = await embed(["Let's have fun with JSON, shall we?"], "voyageai", {
  model: "voyage-large-2-instruct",
}, { apiKey: import.meta.env[`voyageai_api_key`] });
```
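Embedding vectors are usually compared via cosine similarity. A small sketch, assuming the embedding vectors come back as plain `number[]` arrays (the exact return shape of `embed` may differ per provider):

```ts
// cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1];
// higher means the two texts are semantically closer
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```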
Multi-Message Prompt, Streaming
```ts
import { promptStreaming, type PromptFinishReason, type Usage, type Price } from "cross-llm";

await promptStreaming(
  [
    {
      role: "user",
      content: "Let's have fun with JSON, shall we?",
    },
    {
      role: "assistant",
      content: "Yeah. Let's have fun with JSON.",
    },
    {
      role: "user",
      content: "Respond with JSON: { works: true }",
    },
  ],
  "openai",
  // called for every streamed chunk of text
  async (partialText: string, elapsedMs: number) => {
    process.stdout.write(partialText);
  },
  // called once, when the stream has finished
  async (
    fullText: string,
    elapsedMs: number,
    usage: Usage,
    finishReason: PromptFinishReason,
    price: Price,
  ) => {
    console.log("");
    console.log("parsed JSON", JSON.parse(fullText));
    console.log("finishReason", finishReason);
    console.log("elapsedMs", elapsedMs);
    console.log("usage", usage);
    console.log("price", price);
  },
  // called on transport or provider errors
  async (error: unknown, elapsedMs: number) => {
    console.log("error", error, elapsedMs, "ms elapsed");
  },
  {
    model: "gpt-4-turbo",
    temperature: 0.7,
    response_format: {
      type: "json_object",
    },
  },
  {
    apiKey: import.meta.env[`openai_api_key`],
  },
);
```
- 📋 Copy & Paste -> enjoy! 🎉
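Because cross-llm only relies on the standard fetch() API, the same calls also run inside a Service Worker. A minimal sketch; the route and the API-key injection are illustrative, not part of the library:

```ts
import { systemPrompt } from "cross-llm";

// hypothetical: however your build pipeline injects secrets into the worker
declare const ANTHROPIC_API_KEY: string;

self.addEventListener("fetch", (event: any) => {
  const { request } = event;
  if (new URL(request.url).pathname !== "/prompt") return;

  event.respondWith((async () => {
    // forward the posted text to the LLM provider and return its reply
    const { text } = await request.json();
    const response = await systemPrompt(text, "anthropic", {
      model: "claude-3-haiku-20240307",
    }, { apiKey: ANTHROPIC_API_KEY });
    return new Response(JSON.stringify(response), {
      headers: { "content-type": "application/json" },
    });
  })());
});
```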
🔥 Contributing
Simply create an issue, or fork this repository, clone it, and open a Pull Request (PR).
I'm only implementing the features, AI providers, and cost-model mappings that I need myself,
but feel free to add your own models or implement new AI providers.
Every contribution is very welcome! 🤗
List/verify supported models
Please verify that your model/provider has been added correctly in ./src/models.

```bash
npm run print-models
```
Write and verify example code
Please add example code when you implement a new AI provider in ./examples.

```bash
npm run example openai.ts
```

or

```bash
npm run example voyageai-embedding.ts
```
Write tests for new AI providers
Please write and run unit/integration/e2e tests using Jest by creating ./src/*.spec.ts test suites:

```bash
npm run test
```
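A provider spec could look roughly like this; the file name, the import path (assuming the library's entry point is ./src/index.ts), and the loose assertion are illustrative, since the exact return shape depends on the provider implementation:

```ts
// ./src/anthropic.spec.ts -- illustrative integration test sketch
import { systemPrompt } from "./index";

describe("anthropic provider", () => {
  it("answers a simple system prompt", async () => {
    const response = await systemPrompt(
      "Respond with JSON: { works: true }",
      "anthropic",
      { model: "claude-3-haiku-20240307" },
      { apiKey: process.env.anthropic_api_key! },
    );
    expect(response).toBeDefined();
  });
});
```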
Build a release
Run the following command to update the ./dist files:

```bash
npm run build
```

Create a new NPM release build:

```bash
npm pack
```

Check the package contents for integrity, then publish:

```bash
npm publish
```