Security News
RubyGems.org Adds New Maintainer Role
RubyGems.org has added a new "maintainer" role that allows for publishing new versions of gems. This new permission type is aimed at improving security for gem owners and the service overall.
The openai npm package is a Node.js client library for the OpenAI API, which provides access to AI models such as GPT-3 for natural language processing tasks including text generation, translation, and summarization. The package lets developers easily integrate OpenAI's AI capabilities into their Node.js applications.
Text Completion
Generates text completions for a given prompt using the GPT-3 model.
const { Configuration, OpenAIApi } = require('openai');

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

openai.createCompletion({
  model: 'text-davinci-003',
  prompt: 'Translate the following English text to French: Hello, how are you?',
  max_tokens: 60,
}).then((response) => {
  console.log(response.data.choices[0].text);
});
Text Classification
Classifies a piece of text into one of the specified categories.
const { Configuration, OpenAIApi } = require('openai');

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

openai.createClassification({
  model: 'text-davinci-003',
  examples: [
    ['A movie about space wars and intergalactic politics', 'Science Fiction'],
    ['A film focusing on the love life of a New York City woman', 'Romance'],
  ],
  query: 'A story about a boy who learns he is a wizard and attends a magical school',
  labels: ['Science Fiction', 'Romance', 'Fantasy'],
}).then((response) => {
  console.log(response.data);
});
Text Summarization
Summarizes a longer piece of text into a concise version.
const { Configuration, OpenAIApi } = require('openai');

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

openai.createCompletion({
  model: 'text-davinci-003',
  prompt: 'Summarize the following text: ...',
  max_tokens: 60,
  temperature: 0.7,
}).then((response) => {
  console.log(response.data.choices[0].text);
});
This package provides access to IBM Watson's AI services, which include natural language processing, speech to text, text to speech, and language translation. It is similar to openai in providing AI-powered language services, but it uses IBM's Watson AI instead of OpenAI's models.
The Google Cloud npm package allows developers to interact with Google Cloud services, including its AI and machine learning services like the Natural Language API and the Translation API. It offers functionalities similar to openai but is integrated with Google's cloud ecosystem.
This package is part of Microsoft's Azure Cognitive Services and provides capabilities for speech recognition, text-to-speech, and speech translation. It offers different services compared to openai, focusing more on speech technologies rather than text-based AI models.
The OpenAI Node library provides convenient access to the OpenAI REST API from applications written in server-side JavaScript. It includes TypeScript definitions for all request params and response fields.
⚠️ Important note: this library is meant for server-side usage only; using it in client-side browser code will expose your secret API key.
To learn how to use the OpenAI API, check out our API Reference and Documentation.
npm install --save openai
# or
yarn add openai
import OpenAI from 'openai';

const openAI = new OpenAI({
  apiKey: 'my api key', // defaults to process.env["OPENAI_API_KEY"]
});

async function main() {
  const completion = await openAI.completions.create({
    model: 'text-davinci-002',
    prompt: 'Say this is a test',
    max_tokens: 6,
    temperature: 0,
  });

  console.log(completion.choices);
}

main().catch(console.error);
We provide support for streaming responses using Server-Sent Events (SSE).
import OpenAI from 'openai';

const client = new OpenAI();

const stream = await client.completions.create({
  prompt: 'Say this is a test',
  model: 'text-davinci-003',
  stream: true,
});

for await (const part of stream) {
  process.stdout.write(part.choices[0]?.text || '');
}
If you need to cancel a stream, you can break from the loop or call stream.controller.abort().
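Under the hood, each SSE frame is a data: line carrying a JSON chunk, and OpenAI terminates the stream with a [DONE] sentinel. A minimal sketch of that wire format (parseSSE is a hypothetical helper for illustration, not part of this library):

```javascript
// Sketch: parse raw Server-Sent Events text into completion chunks.
// Frames are separated by blank lines; "data: [DONE]" marks the end.
function parseSSE(raw) {
  const events = [];
  for (const frame of raw.split('\n\n')) {
    const line = frame.trim();
    if (!line.startsWith('data:')) continue;
    const data = line.slice(5).trim();
    if (data === '[DONE]') break; // end-of-stream sentinel
    events.push(JSON.parse(data));
  }
  return events;
}

const raw =
  'data: {"choices":[{"text":"Hello"}]}\n\n' +
  'data: {"choices":[{"text":" world"}]}\n\n' +
  'data: [DONE]\n\n';

const text = parseSSE(raw).map((e) => e.choices[0].text).join('');
```

The library's async iterator handles this framing for you; the sketch only shows why each `part` in the loop above arrives as a small JSON chunk.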
Importing, instantiating, and interacting with the library are the same as above. If you like, you may reference our types directly:
import OpenAI from 'openai';

const openAI = new OpenAI({
  apiKey: 'my api key', // defaults to process.env["OPENAI_API_KEY"]
});

async function main() {
  const params: OpenAI.CompletionCreateParams = {
    model: 'text-davinci-002',
    prompt: 'Say this is a test',
    max_tokens: 6,
    temperature: 0,
  };
  const completion: OpenAI.Completion = await openAI.completions.create(params);
}

main().catch(console.error);
Documentation for each method, request param, and response field is available in docstrings and will appear on hover in most modern editors.
Request parameters that correspond to file uploads can be passed as either a FormData.Blob or a FormData.File instance. We provide a fileFromPath helper function to easily create FormData.File instances from a given file path.
import OpenAI, { fileFromPath } from 'openai';
const openAI = new OpenAI();
const file = await fileFromPath('input.jsonl');
await openAI.files.create({ file: file, purpose: 'fine-tune' });
When the library is unable to connect to the API, or if the API returns a non-success status code (i.e., a 4xx or 5xx response), a subclass of APIError will be thrown:
async function main() {
  const fineTune = await openAI.fineTunes
    .create({ training_file: 'file-XGinujblHPwGLSztz8cPS8XY' })
    .catch((err) => {
      if (err instanceof OpenAI.APIError) {
        console.log(err.status); // 400
        console.log(err.name); // BadRequestError
        console.log(err.headers); // {server: 'nginx', ...}
      }
    });
}

main().catch(console.error);
Error codes are as follows:
| Status Code | Error Type               |
| ----------- | ------------------------ |
| 400         | BadRequestError          |
| 401         | AuthenticationError      |
| 403         | PermissionDeniedError    |
| 404         | NotFoundError            |
| 422         | UnprocessableEntityError |
| 429         | RateLimitError           |
| >=500       | InternalServerError      |
| N/A         | APIConnectionError       |
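The table above can be read as a simple lookup from HTTP status to error class name. The function below is an illustrative sketch of that mapping only; the real dispatch lives inside the library:

```javascript
// Illustrative mapping from HTTP status to the APIError subclass name
// listed in the table above; not the library's actual implementation.
function errorNameFor(status) {
  const byStatus = {
    400: 'BadRequestError',
    401: 'AuthenticationError',
    403: 'PermissionDeniedError',
    404: 'NotFoundError',
    422: 'UnprocessableEntityError',
    429: 'RateLimitError',
  };
  if (status === undefined) return 'APIConnectionError'; // no response at all
  if (status >= 500) return 'InternalServerError';
  return byStatus[status] ?? 'APIError';
}
```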
Certain errors will be automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 409 Conflict, 429 Rate Limit, and >=500 Internal errors will all be retried by default.
You can use the maxRetries option to configure or disable this:
// Configure the default for all requests:
const openAI = new OpenAI({
  maxRetries: 0, // default is 2
});

// Or, configure per-request:
openAI.embeddings.create(
  { model: 'text-similarity-babbage-001', input: 'The food was delicious and the waiter...' },
  { maxRetries: 5 },
);
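A "short exponential backoff" like the one described above can be sketched as follows; the base delay and cap here are illustrative assumptions, not the library's actual values:

```javascript
// Sketch: compute an exponential backoff schedule with a cap.
// baseMs and capMs are illustrative; the library's real delays differ.
function backoffDelays(maxRetries, baseMs = 500, capMs = 8000) {
  const delays = [];
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    // Each retry waits twice as long as the previous one, up to the cap.
    delays.push(Math.min(capMs, baseMs * 2 ** attempt));
  }
  return delays;
}

const schedule = backoffDelays(2); // the default of 2 retries
```

Capping the delay keeps a long retry chain from stalling a request indefinitely while still easing pressure on a rate-limited or overloaded server.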
Requests time out after 60 seconds by default. You can configure this with a timeout option:
// Configure the default for all requests:
const openAI = new OpenAI({
  timeout: 20 * 1000, // 20 seconds (default is 60s)
});

// Override per-request:
openAI.edits.create(
  { model: 'text-davinci-edit-001', input: 'What day of the wek is it?', instruction: 'Fix the spelling mistakes' },
  { timeout: 5 * 1000 },
);
On timeout, an APIConnectionTimeoutError is thrown. Note that requests which time out will be retried twice by default.
By default, this library uses a stable agent for all http/https requests to reuse TCP connections, eliminating many TCP & TLS handshakes and shaving around 100ms off most requests.
If you would like to disable or customize this behavior, for example to use the API behind a proxy, you can pass an httpAgent which is used for all requests (be they http or https), for example:
import http from 'http';
import HttpsProxyAgent from 'https-proxy-agent';

// Configure the default for all requests:
const openAI = new OpenAI({
  httpAgent: new HttpsProxyAgent(process.env.PROXY_URL),
});

// Override per-request:
openAI.models.list({
  baseURL: 'http://localhost:8080/test-api',
  httpAgent: new http.Agent({ keepAlive: false }),
});
This package is in beta. Its internals and interfaces are not stable and subject to change without a major semver bump; please reach out if you rely on any undocumented behavior.
We are keen for your feedback; please open an issue with questions, bugs, or suggestions.
The following runtimes are supported, including Deno, where the library can be imported directly:

import OpenAI from "npm:openai";

If you are interested in other runtime environments, please open or upvote an issue on GitHub.
FAQs
The official TypeScript library for the OpenAI API
The npm package openai receives a total of 979,439 weekly downloads. As such, openai popularity was classified as popular.
We found that openai demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 5 open source maintainers collaborating on the project.