@embedapi/core

A general-purpose embedding solution for APIs, providing tools for easy integration of AI models.

npm · Version 1.0.3 · Maintainers: 1 · Weekly downloads: 3

EmbedAPIClient - Documentation

Overview

EmbedAPIClient is a Node.js client library for the EmbedAPI service. It provides methods to generate AI responses, list available models, and test API connectivity, making it easy to work with AI providers such as OpenAI, Anthropic, VertexAI, and others.

Installation

To install the EmbedAPIClient in your project, use the following command:

npm install @embedapi/core

Usage

Import the Client

You can import the EmbedAPIClient class using CommonJS syntax:

const EmbedAPIClient = require('@embedapi/core');

Initialize the Client

To initialize the client, you need to provide your API key. The API key is required to authenticate requests to the EmbedAPI service.

const apiKey = 'your-api-key-here';
const client = new EmbedAPIClient(apiKey);

Methods

1. generate({ service, model, messages, ...options })

Generates text using the specified AI service and model.

  • Parameters:

    • service (string): The name of the AI service (e.g., 'openai').
    • model (string): The model to use (e.g., 'gpt-4').
    • messages (array): An array of message objects containing conversation history.
    • maxTokens (number, optional): The maximum number of tokens to generate.
    • temperature (number, optional): Sampling temperature.
    • topP (number, optional): Top-p sampling parameter.
    • frequencyPenalty (number, optional): Frequency penalty parameter.
    • presencePenalty (number, optional): Presence penalty parameter.
    • stopSequences (array, optional): Stop sequences for controlling response generation.
  • Usage Example:

const response = await client.generate({
    service: 'openai',
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Hello' }],
    maxTokens: 150,
    temperature: 0.7
});
console.log(response);

2. listModels()

Lists all models available through the EmbedAPI service.

  • Usage Example:
const models = await client.listModels();
console.log(models);

3. testAPIConnection()

Tests the connection to the API to verify that the API key is valid.

  • Usage Example:
const isConnected = await client.testAPIConnection();
console.log('API Connection Successful:', isConnected);

Error Handling

The client methods throw errors if the API request fails. Make sure to use try...catch blocks when calling these methods to handle potential errors gracefully.

try {
    const response = await client.generate({
        service: 'openai',
        model: 'gpt-4',
        messages: [{ role: 'user', content: 'Hello' }]
    });
    console.log(response);
} catch (error) {
    console.error('Error generating text:', error);
}
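For transient failures (network errors, rate limits), a small retry wrapper can be layered on top of the try...catch pattern above. The helper below is a generic sketch, not part of @embedapi/core, and makes no assumptions about the shape of the errors the client throws:

```javascript
// Generic retry helper (not part of @embedapi/core): retries an async
// operation with exponential backoff before rethrowing the last error.
async function withRetry(operation, { attempts = 3, baseDelayMs = 500 } = {}) {
    let lastError;
    for (let i = 0; i < attempts; i++) {
        try {
            return await operation();
        } catch (error) {
            lastError = error;
            // Wait before the next attempt, doubling the delay each time.
            if (i < attempts - 1) {
                await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
            }
        }
    }
    throw lastError;
}

// Usage with any of the client methods, e.g.:
// const response = await withRetry(() => client.generate({
//     service: 'openai',
//     model: 'gpt-4',
//     messages: [{ role: 'user', content: 'Hello' }]
// }));
```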

API Reference

Base URL

The base URL for all API requests is:

https://embedapi.com/api/v1

Endpoints

  • POST /generate: Generates AI responses based on the provided input.
  • GET /models: Lists all available models.
  • GET /test: Tests the API connection.
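For environments that cannot use the client library, the endpoints above can be called directly over HTTPS. The sketch below builds a request for POST /generate; note that the Bearer authorization scheme and the JSON body shape are assumptions modeled on the client's parameters, not confirmed details of the raw HTTP API:

```javascript
const BASE_URL = 'https://embedapi.com/api/v1';

// Build the URL and fetch options for POST /generate.
// ASSUMPTION: Bearer auth and a JSON body mirroring the client library's
// generate() parameters; verify against the official EmbedAPI docs.
function buildGenerateRequest(apiKey, { service, model, messages, ...options }) {
    return {
        url: `${BASE_URL}/generate`,
        init: {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
                'Authorization': `Bearer ${apiKey}`
            },
            body: JSON.stringify({ service, model, messages, ...options })
        }
    };
}

// Usage (Node 18+ provides a global fetch):
// const { url, init } = buildGenerateRequest('your-api-key-here', {
//     service: 'openai',
//     model: 'gpt-4',
//     messages: [{ role: 'user', content: 'Hello' }]
// });
// const response = await fetch(url, init);
// console.log(await response.json());
```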

Example Project

Here is a quick example of using EmbedAPIClient in a Node.js project:

const EmbedAPIClient = require('@embedapi/core');

const apiKey = 'your-api-key-here';
const client = new EmbedAPIClient(apiKey);

async function main() {
    try {
        // Test API connection
        const isConnected = await client.testAPIConnection();
        console.log('API Connection Successful:', isConnected);

        // List available models
        const models = await client.listModels();
        console.log('Available Models:', models);

        // Generate text
        const response = await client.generate({
            service: 'openai',
            model: 'gpt-4',
            messages: [{ role: 'user', content: 'Hello' }],
            maxTokens: 100
        });
        console.log('Generated Response:', response);
    } catch (error) {
        console.error('An error occurred:', error);
    }
}

main();

Contributing

We welcome contributions! Please feel free to submit a pull request or open an issue if you find a bug or have suggestions for improvements.

License

This project is licensed under the MIT License. See the LICENSE file for details.

Keywords: embed
Package last updated on 07 Nov 2024
