
@synstack/llm
Immutable, chainable, and type-safe wrapper of Vercel's AI SDK.
pnpm add @synstack/llm ai zod
yarn add @synstack/llm ai zod
npm install @synstack/llm ai zod
To add models, you need to install the appropriate provider package:
pnpm add @ai-sdk/openai # or @ai-sdk/[provider-name]
yarn add @ai-sdk/openai # or @ai-sdk/[provider-name]
npm install @ai-sdk/openai # or @ai-sdk/[provider-name]
The completion builder provides a type-safe API to configure LLM completions:
import { completion } from "@synstack/llm"; // or @synstack/synscript/llm
import { openai } from "@ai-sdk/openai";
const baseCompletion = completion
.model(openai("gpt-4"))
.maxTokens(20)
.temperature(0.8);
import { assistantMsg, systemMsg, userMsg, filePart } from "@synstack/llm";

const imageToLanguagePrompt = (imagePath: string) => [
systemMsg`
You are a helpful assistant that can identify the language of the text in the image.
`,
userMsg`
Here is the image: ${filePart.fromPath(imagePath)}
`,
assistantMsg`
The language of the text in the image is
`,
];
const imageToLanguageAgent = (imagePath: string) =>
baseCompletion.prompt(imageToLanguagePrompt(imagePath)).generateText();
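The agent can then be awaited like any async function. A minimal sketch, assuming the result mirrors the return shape of the AI SDK's generateText(), which this package wraps:
// Illustrative call; the result shape (a text field) is assumed from the AI SDK
const result = await imageToLanguageAgent("./sample.png");
console.log(result.text); // e.g. "French"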
- model(): Set the language model
- maxTokens(): Set maximum tokens to generate
- temperature(): Set temperature (0-1)
- topP(), topK(): Configure sampling parameters
- frequencyPenalty(), presencePenalty(): Adjust output diversity
- seed(): Set random seed for deterministic results
- maxSteps(): Maximum number of sequential LLM calls
- maxRetries(): Number of retry attempts
- stopSequences(): Define sequences that stop generation
- abortSignal(): Cancel ongoing completions
- generateText(): Generate a text completion
- streamText(): Stream a text completion
- generateObject(): Generate a structured object
- streamObject(): Stream a structured object
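As an illustration, several of these options can be chained onto the base completion from the earlier example; the values below are arbitrary:
// Illustrative chaining of the options listed above (values are arbitrary)
const controller = new AbortController();

const tunedCompletion = baseCompletion
  .seed(42) // deterministic sampling across runs
  .maxRetries(3) // retry transient failures
  .stopSequences(["\n\n"]) // stop generating at the first blank line
  .abortSignal(controller.signal); // controller.abort() cancels an in-flight call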
Messages can be built using template strings with various features. Template-based builders are provided for each role:
// System messages
systemMsg`
You are a helpful assistant.
`;
// User messages with support for text, images and files
userMsg`
Here is the image: ${filePart.fromPath("./image.png")}
`;
// Assistant messages with support for text and tool calls
assistantMsg`
The language of the text in the image is
`;
Messages can also be customized with provider-specific settings:
// User message with cache control
const cachedUserMsg = userMsg.cached`
Here is the image: ${filePart.fromPath("./image.png")}
`;
// Custom provider options for user messages
const customUserMsg = userMsgWithOptions({
providerOptions: { anthropic: { cacheControl: { type: "ephemeral" } } },
})`Hello World`;
// Custom provider options for assistant messages
const customAssistantMsg = assistantMsgWithOptions({
providerOptions: { openai: { cacheControl: { type: "ephemeral" } } },
})`Hello World`;
// Custom provider options for system messages
const customSystemMsg = systemMsgWithOptions({
providerOptions: { anthropic: { system_prompt_behavior: "default" } },
})`Hello World`;
The filePart utility provides methods to handle files and images, and supports automatic mime-type detection:
// Load from file system path
filePart.fromPath(path, mimeType?)
// Load from base64 string
filePart.fromBase64(base64, mimeType?)
// Load from URL
filePart.fromUrl(url, mimeType?)
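For instance, loaded parts can be interpolated into a user message; the paths and URL below are placeholders:
// Placeholder paths/URLs; the mime type is inferred when omitted
const diagram = filePart.fromPath("./diagram.png");
const spec = filePart.fromUrl("https://example.com/spec.pdf", "application/pdf");

const compareMsg = userMsg`
  Compare the diagram ${diagram} against the spec ${spec}.
`;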
Tools can be configured in completions for function calling with type safety:
import { z } from "zod";

const completion = baseCompletion
.tools({
search: {
description: "Search for information",
parameters: z.object({
query: z.string(),
}),
},
})
.activeTools(["search"])
.toolChoice("auto"); // or 'none', 'required', or { type: 'tool', toolName: 'search' }
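A sketch of a full tool-call round trip, assuming tool definitions accept the AI SDK's optional execute handler (not confirmed by this README):
// Sketch only: the execute handler and its automatic invocation are
// assumptions carried over from Vercel's AI SDK tool shape
const answer = await baseCompletion
  .tools({
    search: {
      description: "Search for information",
      parameters: z.object({ query: z.string() }),
      execute: async ({ query }) => `Top result for "${query}"`, // hypothetical handler
    },
  })
  .toolChoice("auto")
  .maxSteps(2) // one tool call, then a final answer
  .prompt([userMsg`Search for the tallest mountain.`])
  .generateText();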
The library provides middleware utilities to enhance model behavior:
import { includeAssistantMessage, cacheCalls } from "@synstack/llm/middleware";
import { fsCache } from "@synstack/fs-cache";
// Assumption: fsCache takes a directory where cached responses are stored
const cache = fsCache(".cache/llm");

// Apply middlewares to completion
const completion = baseCompletion
.middlewares([includeAssistantMessage]) // Include last assistant message in output
.prependMiddlewares([cacheCalls(cache)]); // Cache model responses
// Apply middlewares directly to the model
const modelWithAssistant = includeAssistantMessage(baseModel);
const modelWithCache = cacheCalls(cache)(baseModel);
- middlewares(): Replace the middlewares
- prependMiddlewares(): Add middlewares to the beginning of the chain, to be executed first
- appendMiddlewares(): Add middlewares to the end of the chain, to be executed last
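Because the examples above apply includeAssistantMessage directly to a model, a middleware appears to be a plain model-to-model wrapper; under that assumption (not a documented contract), a custom middleware could look like this:
// Sketch only: assumes a middleware is a (model) => model function,
// as suggested by includeAssistantMessage(baseModel) above
const withLogging = <T>(model: T): T => {
  console.log("Model call configured:", model); // inspect the wrapped model
  return model;
};

const loggedCompletion = baseCompletion.appendMiddlewares([withLogging]);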
For more details on the available options, please refer to Vercel's AI SDK documentation.