# Vercel AI SDK - Google Generative AI Provider

The Google provider for the Vercel AI SDK contains language model support for the Google Generative AI APIs. It creates language model objects that can be used with the `generateText`, `streamText`, `generateObject`, and `streamObject` AI functions.
## Setup

The Google provider is available in the `@ai-sdk/google` module. You can install it with:

```bash
npm i @ai-sdk/google
```
## Provider Instance

You can import the default provider instance `google` from `@ai-sdk/google`:

```ts
import { google } from '@ai-sdk/google';
```

If you need a customized setup, you can import `createGoogleGenerativeAI` from `@ai-sdk/google` and create a provider instance with your settings:

```ts
import { createGoogleGenerativeAI } from '@ai-sdk/google';

const google = createGoogleGenerativeAI({
  // custom settings
});
```
You can use the following optional settings to customize the Google Generative AI provider instance:

- **baseURL** _string_

  Use a different URL prefix for API calls, e.g. to use proxy servers.
  The default prefix is `https://generativelanguage.googleapis.com/v1beta`.

- **apiKey** _string_

  API key that is sent using the `x-goog-api-key` header.
  It defaults to the `GOOGLE_GENERATIVE_AI_API_KEY` environment variable.

- **headers** _Record<string,string>_

  Custom headers to include in the requests.
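As a concrete sketch, the settings above can be combined in a single customized provider instance. The base URL, environment variable name, and header values below are illustrative placeholders, not recommendations:

```ts
import { createGoogleGenerativeAI } from '@ai-sdk/google';

// All values below are hypothetical, for illustration only.
const google = createGoogleGenerativeAI({
  // route API calls through a proxy instead of the default endpoint
  baseURL: 'https://my-proxy.example.com/v1beta',
  // explicit key instead of the GOOGLE_GENERATIVE_AI_API_KEY env variable
  apiKey: process.env.MY_GOOGLE_API_KEY,
  // custom headers included in every request
  headers: { 'x-request-source': 'docs-example' },
});
```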
## Models

You can create models that call the Google Generative AI API using the provider instance.
The first argument is the model id, e.g. `models/gemini-pro`.
The models support tool calls, and some have multi-modal capabilities.

```ts
const model = google('models/gemini-pro');
```
Google Generative AI models also support some model-specific settings that are not part of the standard call settings.
You can pass them as an options argument:

```ts
const model = google('models/gemini-pro', {
  topK: 40,
});
```
The following optional settings are available for Google Generative AI models:

- **topK** _number_

  Optional. The maximum number of tokens to consider when sampling.
  Models use nucleus sampling or combined top-k and nucleus sampling.
  Top-k sampling considers the set of the `topK` most probable tokens.
  Models running with nucleus sampling don't allow a `topK` setting.
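To tie the pieces together, here is a minimal sketch of calling a Google model with the `generateText` function from the `ai` package. It assumes `GOOGLE_GENERATIVE_AI_API_KEY` is set in the environment; the prompt is purely illustrative:

```ts
import { google } from '@ai-sdk/google';
import { generateText } from 'ai';

// Requires GOOGLE_GENERATIVE_AI_API_KEY in the environment,
// since no explicit apiKey is configured here.
const { text } = await generateText({
  model: google('models/gemini-pro'),
  prompt: 'Write a haiku about type-safe SDKs.', // illustrative prompt
});

console.log(text);
```

The same model object can be passed unchanged to `streamText`, `generateObject`, or `streamObject`.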