@embedapi/core

🔥 ONE API KEY TO RULE THEM ALL! Access ANY AI model instantly through our game-changing unified API. Build AI apps in minutes, not months! The ultimate all-in-one AI agent solution you've been waiting for! 🚀

npm

Version: 1.0.7
Weekly downloads: 2 (-95.12%)
Maintainers: 1
EmbedAPIClient


EmbedAPI is a Node.js client library that gives you access to a wide range of AI models through a single, unified API. Stop juggling multiple API keys and implementations; with EmbedAPI you can build sophisticated AI applications in minutes, not months.

Installation

npm install @embedapi/client

Initialization

const EmbedAPIClient = require('@embedapi/client');

// Regular API client
const client = new EmbedAPIClient('your-api-key');

// Agent mode client
const agentClient = new EmbedAPIClient('your-agent-id', { isAgent: true });

// Debug mode client
const debugClient = new EmbedAPIClient('your-api-key', { debug: true });

// Agent and debug mode client
const debugAgentClient = new EmbedAPIClient('your-agent-id', { isAgent: true, debug: true });

Constructor Parameters

  • apiKey (string): Your API key for regular mode, or agent ID for agent mode
  • options (object, optional): Configuration options
    • isAgent (boolean, optional): Set to true to use agent mode. Defaults to false
    • debug (boolean, optional): Set to true to enable debug logging. Defaults to false

Methods

1. generate({ service, model, messages, ...options })

Generates text using the specified AI service and model.

Parameters

  • service (string): The name of the AI service (e.g., 'openai')
  • model (string): The model to use (e.g., 'gpt-4o')
  • messages (array): An array of message objects containing conversation history
  • maxTokens (number, optional): Maximum number of tokens to generate
  • temperature (number, optional): Sampling temperature
  • topP (number, optional): Top-p sampling parameter
  • frequencyPenalty (number, optional): Frequency penalty parameter
  • presencePenalty (number, optional): Presence penalty parameter
  • stopSequences (array, optional): Stop sequences for controlling response generation
  • tools (array, optional): Array of function definitions for tool use
  • toolChoice (string|object, optional): Tool selection preferences
  • enabledTools (array, optional): List of enabled tool names
  • userId (string, optional): User identifier for request tracking in agent mode

Usage Example

// Regular mode
const response = await client.generate({
    service: 'openai',
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello' }]
});

// Agent mode
const agentResponse = await agentClient.generate({
    service: 'openai',
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello' }]
});
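The parameter list above also accepts tools and toolChoice. The exact tool definition format is not documented here; the sketch below assumes OpenAI-style function definitions, so check the EmbedAPI docs for the actual shape before relying on it:

```javascript
// Sketch: a generate() request that enables tool use.
// The tool definition shape is an assumption (OpenAI-style
// function definitions), not confirmed by this README.
const weatherTool = {
    type: 'function',
    function: {
        name: 'get_weather',
        description: 'Get the current weather for a city',
        parameters: {
            type: 'object',
            properties: {
                city: { type: 'string', description: 'City name' }
            },
            required: ['city']
        }
    }
};

const request = {
    service: 'openai',
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'What is the weather in Paris?' }],
    tools: [weatherTool],
    toolChoice: 'auto',
    maxTokens: 256
};

// const response = await client.generate(request);
```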

2. stream({ service, model, messages, ...options })

Streams text generation using the specified AI service and model.

Parameters

Same as generate(), plus:

  • streamOptions (object, optional): Stream-specific configuration options

Response Format

The stream emits Server-Sent Events (SSE) with two types of messages:

  • Content Chunks:
{
    "content": "Generated text chunk",
    "role": "assistant"
}
  • Final Statistics:
{
    "type": "done",
    "tokenUsage": 17,
    "cost": 0.000612
}
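Assuming the two message shapes above, a small helper (a sketch, not part of the library) can fold a sequence of already-parsed SSE events into the complete text plus the final usage stats:

```javascript
// Sketch: collect parsed SSE events into the full response.
// Assumes the two payload shapes documented above:
// content chunks ({ content, role }) and a final { type: 'done' } event.
function collectStream(events) {
    let text = '';
    let stats = null;
    for (const event of events) {
        if (event.type === 'done') {
            stats = { tokenUsage: event.tokenUsage, cost: event.cost };
        } else if (typeof event.content === 'string') {
            text += event.content;
        }
    }
    return { text, stats };
}

// Example with the payload shapes shown above:
const { text, stats } = collectStream([
    { content: 'Hello', role: 'assistant' },
    { content: ' world', role: 'assistant' },
    { type: 'done', tokenUsage: 17, cost: 0.000612 }
]);
// text === 'Hello world', stats.tokenUsage === 17
```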

Usage Example

// Regular mode
const streamResponse = await client.stream({
    service: 'openai',
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello' }]
});

// Agent mode
const agentStreamResponse = await agentClient.stream({
    service: 'openai',
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello' }]
});

// Process the stream
const reader = streamResponse.body.getReader();
const decoder = new TextDecoder();
let buffer = '';

while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    // Decode with stream: true so multi-byte characters split
    // across chunks are handled correctly
    buffer += decoder.decode(value, { stream: true });

    // Keep any incomplete trailing line in the buffer until the
    // next read completes it
    const lines = buffer.split('\n');
    buffer = lines.pop();

    for (const line of lines) {
        if (!line.startsWith('data: ')) continue;
        const data = JSON.parse(line.slice(6));
        if (data.type === 'done') {
            console.log('Stream stats:', {
                tokenUsage: data.tokenUsage,
                cost: data.cost
            });
        } else {
            console.log('Content:', data.content);
        }
    }
}

3. listModels()

Lists all available models.

const models = await client.listModels();

4. testAPIConnection()

Tests the connection to the API.

const isConnected = await client.testAPIConnection();

Error Handling

All methods throw errors if the API request fails:

try {
    const response = await client.generate({
        service: 'openai',
        model: 'gpt-4o',
        messages: [{ role: 'user', content: 'Hello' }]
    });
} catch (error) {
    console.error('Error:', error.message);
}
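Transient failures such as rate limits or network hiccups can often be resolved by retrying. The wrapper below is a generic sketch, not part of @embedapi/client, and the backoff numbers are arbitrary:

```javascript
// Sketch: retry an async call with exponential backoff.
// Generic helper, not part of the @embedapi/client API.
async function withRetry(fn, { retries = 3, baseDelayMs = 500 } = {}) {
    let lastError;
    for (let attempt = 0; attempt <= retries; attempt++) {
        try {
            return await fn();
        } catch (error) {
            lastError = error;
            if (attempt === retries) break;
            // Wait 500ms, 1000ms, 2000ms, ... between attempts
            await new Promise(resolve =>
                setTimeout(resolve, baseDelayMs * 2 ** attempt));
        }
    }
    throw lastError;
}

// Usage:
// const response = await withRetry(() => client.generate({
//     service: 'openai',
//     model: 'gpt-4o',
//     messages: [{ role: 'user', content: 'Hello' }]
// }));
```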

Authentication

The client supports two authentication modes:

  • Regular Mode (default)

    • Uses API key in request headers
    • Initialize with: new EmbedAPIClient('your-api-key')
  • Agent Mode

    • Uses agent ID in request body
    • Initialize with: new EmbedAPIClient('your-agent-id', { isAgent: true })
    • Optional userId parameter available for request tracking

License

MIT

Keywords

embed

Package last updated on 03 Dec 2024