
@built-in-ai/core
Browser Built-in AI API provider for Vercel AI SDK v5+ (Chrome & Edge)
A TypeScript library that provides access to browser-based AI capabilities, with seamless fallback to server-side models via the Vercel AI SDK. It lets you use Chrome's and Edge's built-in AI features (the Prompt API) with the AI SDK.
> [!IMPORTANT]
> This package is under constant development as the Prompt API matures, and may contain errors and incompatible changes.
```bash
npm i @built-in-ai/core
```
The `@built-in-ai/core` package is the AI SDK provider for the built-in AI models in Chrome and Edge. It provides seamless access to both language models and text embeddings through browser-native APIs.
> [!IMPORTANT]
> The Prompt API is currently experimental and might change as it matures. The enablement steps below may also change in the future.
You need Chrome (v. 128 or higher) or Edge Dev/Canary (v. 138.0.3309.2 or higher).

Enable these experimental flags:

- Open chrome://flags/, search for "Prompt API for Gemini Nano with Multimodal Input" and set it to Enabled.
- Open chrome://components and click Check for Update on Optimization Guide On Device Model.
- In Edge, open edge://flags/#prompt-api-for-phi-mini and set it to Enabled.

For more information, check out this guide.
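Once the flags are enabled, you can quickly verify that the API is exposed with the package's `doesBrowserSupportBuiltInAI()` helper (documented in the API reference below):

```ts
import { doesBrowserSupportBuiltInAI } from "@built-in-ai/core";

// Logs true once the browser exposes the Prompt API
console.log(doesBrowserSupportBuiltInAI());
```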
```ts
import { streamText } from "ai";
import { builtInAI } from "@built-in-ai/core";

const result = streamText({
  // or generateText
  model: builtInAI(),
  messages: [{ role: "user", content: "Hello, how are you?" }],
});

for await (const chunk of result.textStream) {
  console.log(chunk);
}
```
```ts
import { generateText } from "ai";
import { builtInAI } from "@built-in-ai/core";

const model = builtInAI();

const result = await generateText({
  model,
  messages: [{ role: "user", content: "Write a short poem about AI" }],
});
```
```ts
import { embed, embedMany } from "ai";
import { builtInAI } from "@built-in-ai/core";

// Single embedding
const result = await embed({
  model: builtInAI.textEmbedding("embedding"),
  value: "Hello, world!",
});
console.log(result.embedding); // [0.1, 0.2, 0.3, ...]

// Multiple embeddings
const results = await embedMany({
  model: builtInAI.textEmbedding("embedding"),
  values: ["Hello", "World", "AI"],
});
console.log(results.embeddings); // [[...], [...], [...]]
```
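The resulting vectors can be compared with the AI SDK's `cosineSimilarity` helper, for example to rank texts by semantic similarity (a small sketch):

```ts
import { cosineSimilarity, embedMany } from "ai";
import { builtInAI } from "@built-in-ai/core";

const { embeddings } = await embedMany({
  model: builtInAI.textEmbedding("embedding"),
  values: ["Hello", "World"],
});

// Higher values mean more semantically similar texts
console.log(cosineSimilarity(embeddings[0], embeddings[1]));
```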
The first time the built-in AI models are used in Chrome or Edge, the model has to be downloaded. You'll probably want to show download progress in your application to improve UX.
```ts
import { streamText } from "ai";
import { builtInAI } from "@built-in-ai/core";

const model = builtInAI();
const availability = await model.availability();

if (availability === "unavailable") {
  console.log("Browser doesn't support built-in AI");
} else {
  if (availability === "downloadable") {
    // Download the model and report progress
    await model.createSessionWithProgress((progress) => {
      console.log(`Download progress: ${Math.round(progress * 100)}%`);
    });
  }

  // Model is ready
  const result = streamText({
    model,
    messages: [{ role: "user", content: "Hello!" }],
  });
}
```
When using this library with the useChat hook, you'll need to create a custom transport implementation to handle client-side AI with download progress. To do this, import `BuiltInAIUIMessage` from `@built-in-ai/core`, an extension of `UIMessage` that includes data parts such as download progress.

See the complete working example in /examples/next-hybrid/app/(core)/util/client-side-chat-transport.ts and the /examples/next-hybrid/app/page.tsx components.
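A rough sketch of how the wiring can look (the transport class name and import path here are illustrative; the real implementation lives in the files linked above):

```tsx
import { useChat } from "@ai-sdk/react";
import type { BuiltInAIUIMessage } from "@built-in-ai/core";
// Illustrative import: the actual transport class is in the linked example
import { ClientSideChatTransport } from "./util/client-side-chat-transport";

export function Chat() {
  // Typing useChat with BuiltInAIUIMessage exposes the custom data parts
  // (download progress, notifications) on each message
  const { messages, sendMessage } = useChat<BuiltInAIUIMessage>({
    transport: new ClientSideChatTransport(),
  });

  // ...render messages and a form that calls sendMessage
  return null;
}
```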
The linked example includes, among other things, full integration with the useChat hook.

The Prompt API supports both images and audio files:
```ts
import { streamText } from "ai";
import { builtInAI } from "@built-in-ai/core";

// base64ImageData and audioData are placeholders for your own file payloads
const result = streamText({
  model: builtInAI(),
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What's in this image?" },
        { type: "file", mediaType: "image/png", data: base64ImageData },
      ],
    },
    {
      role: "user",
      content: [{ type: "file", mediaType: "audio/mp3", data: audioData }],
    },
  ],
});

for await (const chunk of result.textStream) {
  console.log(chunk);
}
```
The `builtInAI` model also works with the AI SDK's `generateObject` and `streamObject`:
```ts
import { streamObject } from "ai";
import { z } from "zod";
import { builtInAI } from "@built-in-ai/core";

const { object } = streamObject({
  model: builtInAI(),
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(z.object({ name: z.string(), amount: z.string() })),
      steps: z.array(z.string()),
    }),
  }),
  prompt: "Generate a lasagna recipe.",
});

console.log(await object); // resolves once the object has fully streamed
```
```ts
import { generateObject } from "ai";
import { z } from "zod";
import { builtInAI } from "@built-in-ai/core";

const { object } = await generateObject({
  model: builtInAI(),
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(z.object({ name: z.string(), amount: z.string() })),
      steps: z.array(z.string()),
    }),
  }),
  prompt: "Generate a lasagna recipe.",
});
```
Supported AI SDK functions:

- `generateText()`
- `streamText()`
- `generateObject()` / `streamObject()`

*Multimodal functionality is currently only available in Chrome's Prompt API implementation.
builtInAI(modelId?, settings?)
Creates a browser AI model instance for chat or embeddings.
For Chat Models:
- `modelId` (optional): The model identifier, defaults to `'text'`
- `settings` (optional): Configuration options for the chat model
  - `temperature?: number` - Controls randomness (0-1)
  - `topK?: number` - Limits vocabulary selection
- Returns: `BuiltInAIChatLanguageModel` instance
For Embedding Models:
- `modelId`: Must be `'embedding'`
- `settings` (optional): Configuration options for the embedding model
  - `wasmLoaderPath?: string` - Path to WASM loader (default: CDN hosted)
  - `wasmBinaryPath?: string` - Path to WASM binary (default: CDN hosted)
  - `modelAssetPath?: string` - Path to model asset file (default: CDN hosted)
  - `l2Normalize?: boolean` - Whether to normalize with L2 norm (default: false)
  - `quantize?: boolean` - Whether to quantize embeddings to bytes (default: false)
  - `delegate?: 'CPU' | 'GPU'` - Backend to use for inference
- Returns: `BuiltInAIEmbeddingModel` instance
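For illustration, settings can be passed like this (the values are arbitrary; the embedding examples earlier use the `builtInAI.textEmbedding` helper instead):

```ts
import { builtInAI } from "@built-in-ai/core";

// Chat model with custom sampling settings
const chatModel = builtInAI("text", { temperature: 0.7, topK: 40 });

// Embedding model with L2-normalized output on the GPU backend
const embeddingModel = builtInAI("embedding", {
  l2Normalize: true,
  delegate: "GPU",
});
```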
doesBrowserSupportBuiltInAI(): boolean
Quick check if the browser supports the built-in AI API. Useful for component-level decisions and feature flags.
Returns: `boolean` - `true` if the browser supports the Prompt API, `false` otherwise.
Example:
```ts
import { doesBrowserSupportBuiltInAI } from "@built-in-ai/core";

if (doesBrowserSupportBuiltInAI()) {
  // Show built-in AI option in UI
} else {
  // Show server-side option only
}
```
BuiltInAIUIMessage
Extended UI message type for use with the useChat
hook that includes custom data parts for built-in AI functionality.
Type Definition:
```ts
type BuiltInAIUIMessage = UIMessage<
  never,
  {
    modelDownloadProgress: {
      status: "downloading" | "complete" | "error";
      progress?: number;
      message: string;
    };
    notification: {
      message: string;
      level: "info" | "warning" | "error";
    };
  }
>;
```
Data Parts:

- `modelDownloadProgress` - Tracks browser AI model download status and progress
- `notification` - Displays temporary messages and alerts to users
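In a React component, these parts might be rendered like this (a sketch assuming AI SDK v5's `data-*` part naming):

```tsx
import type { BuiltInAIUIMessage } from "@built-in-ai/core";

// Renders only the custom data parts of a built-in AI message
function MessageExtras({ message }: { message: BuiltInAIUIMessage }) {
  return (
    <>
      {message.parts.map((part, i) => {
        if (part.type === "data-modelDownloadProgress") {
          return <p key={i}>{part.data.message}</p>;
        }
        if (part.type === "data-notification") {
          return <p key={i}>[{part.data.level}] {part.data.message}</p>;
        }
        return null;
      })}
    </>
  );
}
```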
`BuiltInAIChatLanguageModel.createSessionWithProgress(onDownloadProgress?)`

Creates a language model session with optional download progress monitoring.
Parameters:

- `onDownloadProgress?: (progress: number) => void` - Optional callback that receives progress values from 0 to 1 during model download

Returns: `Promise<LanguageModel>` - The configured language model session
Example:
```ts
const model = builtInAI();

await model.createSessionWithProgress((progress) => {
  console.log(`Download: ${Math.round(progress * 100)}%`);
});
```
BuiltInAIChatLanguageModel.availability()
Checks the current availability status of the built-in AI model.
Returns: `Promise<"unavailable" | "downloadable" | "downloading" | "available">`

- `"unavailable"` - Model is not supported in the browser
- `"downloadable"` - Model is supported but needs to be downloaded first
- `"downloading"` - Model is currently being downloaded
- `"available"` - Model is ready to use

2025 © Jakob Hoeg Mørk