# @ai-sdk/openai
The [OpenAI](https://platform.openai.com/) provider for the [Vercel AI SDK](https://sdk.vercel.ai/docs) contains language model support for the OpenAI chat and completion APIs. It creates language model objects that can be used with the `generateText`, `streamText`, `generateObject`, and `streamObject` AI functions.
## Setup

The OpenAI provider is available in the `@ai-sdk/openai` module. You can install it with:

```bash
npm i @ai-sdk/openai
```
## Provider Instance

You can import the default provider instance `openai` from `@ai-sdk/openai`:

```ts
import { openai } from '@ai-sdk/openai';
```
If you need a customized setup, you can import `createOpenAI` from `@ai-sdk/openai` and create a provider instance with your settings:

```ts
import { createOpenAI } from '@ai-sdk/openai';

const openai = createOpenAI({
  // custom settings
});
```
You can use the following optional settings to customize the OpenAI provider instance:

- **baseURL** `string`

  Use a different URL prefix for API calls, e.g. to use proxy servers. The default prefix is `https://api.openai.com/v1`.

- **apiKey** `string`

  API key that is sent using the `Authorization` header. It defaults to the `OPENAI_API_KEY` environment variable.

- **organization** `string`

  OpenAI organization.

- **project** `string`

  OpenAI project.

- **headers** `Record<string, string>`

  Custom headers to include in the requests.
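For illustration, these settings could be combined as follows. This is a minimal sketch; every value in it (the proxy URL, environment variable name, organization id, and header) is a hypothetical placeholder, not a required value:

```ts
import { createOpenAI } from '@ai-sdk/openai';

// All values below are hypothetical examples -- substitute your own.
const openai = createOpenAI({
  baseURL: 'https://my-proxy.example.com/v1', // hypothetical proxy endpoint
  apiKey: process.env.MY_OPENAI_KEY, // overrides the OPENAI_API_KEY default
  organization: 'org-example', // hypothetical organization id
  headers: {
    'x-request-source': 'docs-example', // hypothetical custom header
  },
});
```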
## Language Models

The OpenAI provider instance is a function that you can invoke to create a model:

```ts
const model = openai('gpt-3.5-turbo');
```

It automatically selects the correct API based on the model id. You can also pass additional settings in the second argument:

```ts
const model = openai('gpt-3.5-turbo', {
  // additional settings
});
```
The available options depend on the API that's automatically chosen for the model (see below). If you want to explicitly select a specific model API, you can use `.chat` or `.completion`.
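For context, a model created this way plugs into the AI SDK functions listed above. Here is a minimal sketch using `generateText`, assuming the core `ai` package is installed alongside the provider (the prompt is a hypothetical placeholder):

```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai('gpt-3.5-turbo'),
  prompt: 'Write a haiku about package registries.', // hypothetical prompt
});

console.log(text);
```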
### Chat Models

You can create models that call the OpenAI chat API using the `.chat()` factory method. The first argument is the model id, e.g. `gpt-4`. The OpenAI chat models support tool calls, and some have multi-modal capabilities.

```ts
const model = openai.chat('gpt-3.5-turbo');
```
OpenAI chat models also support some model-specific settings that are not part of the standard call settings. You can pass them as an options argument:
```ts
const model = openai.chat('gpt-3.5-turbo', {
  logitBias: {
    // optional likelihood for specific tokens
    '50256': -100,
  },
  user: 'test-user', // optional unique user identifier
});
```
The following optional settings are available for OpenAI chat models:

- **logitBias** `Record<number, number>`

  Modifies the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use a tokenizer tool to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect varies per model, but values between -1 and 1 should decrease or increase the likelihood of selection, while values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass `{"50256": -100}` to prevent the `<|endoftext|>` token from being generated.

- **logProbs** `boolean | number`

  Return the log probabilities of the tokens. Including log probabilities increases the response size and can slow down response times, but it can be useful to better understand how the model is behaving. Setting it to `true` returns the log probabilities of the generated tokens; setting it to a number returns the log probabilities of the top *n* generated tokens.

- **user** `string`

  A unique identifier representing your end-user, which can help OpenAI monitor and detect abuse.
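As a usage sketch, these chat-specific settings slot into an ordinary AI SDK call. The example below uses `streamText` from the `ai` package; the prompt and user id are hypothetical placeholders:

```ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { textStream } = await streamText({
  model: openai.chat('gpt-3.5-turbo', {
    logitBias: { '50256': -100 }, // effectively bans the <|endoftext|> token
    user: 'user-1234', // hypothetical end-user identifier
  }),
  prompt: 'Explain logit bias in one sentence.', // hypothetical prompt
});

// Print the response incrementally as chunks arrive.
for await (const chunk of textStream) {
  process.stdout.write(chunk);
}
```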
### Completion Models

You can create models that call the OpenAI completions API using the `.completion()` factory method. The first argument is the model id. Currently only `gpt-3.5-turbo-instruct` is supported.

```ts
const model = openai.completion('gpt-3.5-turbo-instruct');
```
OpenAI completion models also support some model-specific settings that are not part of the standard call settings. You can pass them as an options argument:
```ts
const model = openai.completion('gpt-3.5-turbo-instruct', {
  echo: true, // optional, echo the prompt in addition to the completion
  logitBias: {
    // optional likelihood for specific tokens
    '50256': -100,
  },
  suffix: 'some text', // optional suffix that comes after a completion of inserted text
  user: 'test-user', // optional unique user identifier
});
```
The following optional settings are available for OpenAI completion models:

- **echo** `boolean`

  Echo back the prompt in addition to the completion.

- **logitBias** `Record<number, number>`

  Modifies the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use a tokenizer tool to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect varies per model, but values between -1 and 1 should decrease or increase the likelihood of selection, while values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass `{"50256": -100}` to prevent the `<|endoftext|>` token from being generated.

- **logProbs** `boolean | number`

  Return the log probabilities of the tokens. Including log probabilities increases the response size and can slow down response times, but it can be useful to better understand how the model is behaving. Setting it to `true` returns the log probabilities of the generated tokens; setting it to a number returns the log probabilities of the top *n* generated tokens.

- **suffix** `string`

  The suffix that comes after a completion of inserted text.

- **user** `string`

  A unique identifier representing your end-user, which can help OpenAI monitor and detect abuse.
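Here is a minimal sketch of calling a completion model through `generateText`; the prompt and suffix values are illustrative placeholders only:

```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai.completion('gpt-3.5-turbo-instruct', {
    echo: false, // do not repeat the prompt in the output
    suffix: ' That is the whole story.', // hypothetical insertion suffix
  }),
  prompt: 'Once upon a time,', // hypothetical prompt
});

console.log(text);
```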