
@baseai/core
The AI framework for building declarative and composable AI-powered LLM products.
First, install the @baseai/core package using npm, pnpm, or yarn:
npm install @baseai/core
or
pnpm add @baseai/core
or
yarn add @baseai/core
To use the generate function from the @baseai/core package, follow these steps:
Import the generate function:
import {generate} from '@baseai/core';
Set up environment variables:
Ensure you have the following environment variables set in your .env file:
OPENAI_API_KEY=your_openai_api_key
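If your runtime does not load .env files automatically, a minimal sketch using the dotenv package works; dotenv is an assumption here and is not a dependency of @baseai/core itself:
// Sketch: load .env into process.env before calling generate (assumes dotenv is installed).
import 'dotenv/config';
import {generate} from '@baseai/core';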
Generate a response using a prompt:
import {generate} from '@baseai/core';

async function exampleWithPrompt() {
  const response = await generate({
    model: 'gpt-3.5-turbo-0125',
    prompt: '1+1',
  });
  console.log(response); // Output: '2'
}

exampleWithPrompt();
Generate a response using a messages array:
import {generate} from '@baseai/core';

async function exampleWithMessages() {
  const response = await generate({
    model: 'gpt-3.5-turbo-0125',
    messages: [
      {role: 'system', content: 'You are a helpful assistant.'},
      {role: 'user', content: 'Give me 5 title ideas'},
      {role: 'assistant', content: 'Sure, here you go … …'},
    ],
  });
  console.log(response);
}

exampleWithMessages();
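Because generate is async, failures such as a missing API key or a network error surface as rejected promises. The following is a minimal sketch of guarding a call; the exact error shape thrown by the package is an assumption, not documented here:
import {generate} from '@baseai/core';

async function exampleWithErrorHandling() {
  try {
    const response = await generate({
      model: 'gpt-3.5-turbo-0125',
      prompt: '1+1',
    });
    console.log(response);
  } catch (error) {
    // The error type is not documented here; log and handle as appropriate.
    console.error('generate failed:', error);
  }
}

exampleWithErrorHandling();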
generate
Generates a response using the specified model, prompt, or messages array.
async function generate(params: GenerateParams): Promise<string>;
params: An object containing the following properties:
model (string): The model to use for generating the response.
prompt (optional string): The prompt to use for generating the response. Either prompt or messages must be provided.
messages (optional Message[]): An array of message objects. Each message object should contain role and content properties. Either prompt or messages must be provided.
import {generate} from '@baseai/core';

const responseFromPrompt = await generate({
  model: 'gpt-3.5-turbo-0125',
  prompt: '1+1',
});

console.log(responseFromPrompt);

const responseFromMessages = await generate({
  model: 'gpt-3.5-turbo-0125',
  messages: [
    {role: 'system', content: 'You are a helpful assistant.'},
    {role: 'user', content: 'Give me 5 title ideas'},
    {role: 'assistant', content: 'Sure, here you go … …'},
  ],
});

console.log(responseFromMessages);
validateInput
Validates the input parameters and environment variables.
function validateInput(params: GenerateParams): ValidatedParams;
params: An object containing the following properties:
model (string): The model to use for generating the response.
prompt (optional string): The prompt to use for generating the response.
messages (optional Message[]): An array of message objects.
const validatedParams = validateInput({
  model: 'gpt-3.5-turbo-0125',
  prompt: 'Hi',
});
buildMessages
Constructs the messages array using the provided prompt or messages array.
function buildMessages({
  prompt,
  messages,
}: {
  prompt?: string;
  messages?: Message[];
}): Message[];
prompt (optional string): The prompt to use for generating the response.
messages (optional Message[]): An array of message objects.
const messages = buildMessages({prompt: 'Hi'});
buildHeaders
Constructs the headers for the API request using the provided API key.
function buildHeaders(API_KEY: string): Record<string, string>;
API_KEY (string): The API key to use for the request.
const headers = buildHeaders('your-api-key');
handleResponse
Processes the API response and extracts the generated message content.
async function handleResponse(response: Response): Promise<string>;
response (Response): The response object from the API request.
const content = await handleResponse(response);
GenerateParams
Type definition for the parameters of the generate function.
interface GenerateParams {
  model: string;
  prompt?: string;
  messages?: Message[];
}
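For illustration, both accepted shapes of GenerateParams are shown below; note that the type itself does not encode the "either prompt or messages" rule, which is enforced at runtime (presumably by validateInput). This assumes GenerateParams is exported by the package, which is not confirmed in these docs.
// Sketch: the two accepted shapes of GenerateParams (assumes the type is exported).
const promptParams: GenerateParams = {
  model: 'gpt-3.5-turbo-0125',
  prompt: '1+1',
};

const messageParams: GenerateParams = {
  model: 'gpt-3.5-turbo-0125',
  messages: [{role: 'user', content: 'Give me 5 title ideas'}],
};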
Message
Type definition for a message object.
interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}
ValidatedEnv
Type definition for the validated environment variables.
interface ValidatedEnv {
  API_KEY: string;
  API_URL_CHAT: string;
}
OPEN_AI_API_KEY
The API key for authenticating requests to the OpenAI API.
OPEN_AI_API_URL_CHAT
The URL for the OpenAI API chat endpoint.
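As a rough sketch of how these variables relate to the ValidatedEnv type above: the actual validation logic is not shown in these docs, and the getting-started section uses OPENAI_API_KEY, so verify the exact variable names expected by your version of the package.
// Hypothetical sketch: assembling a ValidatedEnv from process.env (names assumed from this reference).
function readEnv(): ValidatedEnv {
  const API_KEY = process.env.OPEN_AI_API_KEY;
  const API_URL_CHAT = process.env.OPEN_AI_API_URL_CHAT;
  if (!API_KEY || !API_URL_CHAT) {
    throw new Error('Missing OPEN_AI_API_KEY or OPEN_AI_API_URL_CHAT');
  }
  return {API_KEY, API_URL_CHAT};
}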
This documentation provides a comprehensive guide for getting started with the @baseai/core package, as well as a detailed API reference for the generate function and its related components.