llm-interface

Version 2.0.4 · License: MIT · Built with Node.js

Introduction

The LLM Interface project is a versatile wrapper for interacting with multiple Large Language Model (LLM) APIs. It simplifies integrating various LLM providers, including OpenAI, AI21 Studio, Anthropic, Cloudflare AI, Cohere, Fireworks AI, Google Gemini, Goose AI, Groq, Hugging Face, Mistral AI, Perplexity, Reka AI, and LLaMA.cpp, into your applications. The goal is a single, unified interface for sending messages and receiving responses across LLM services, so developers can work with multiple LLMs without handling the specific intricacies of each API.

Features

  • Unified Interface: LLMInterfaceSendMessage is a single, consistent interface for interacting with fourteen different LLM APIs.
  • Dynamic Module Loading: Automatically loads and manages the interface module for each LLM provider.
  • Error Handling: Robust error handling mechanisms to ensure reliable API interactions.
  • Extensible: Easily extendable to support additional LLM providers as needed.
  • Response Caching: Efficiently caches LLM responses to reduce costs and enhance performance.
  • Graceful Retries: Automatically retries failed prompts with increasing delays to ensure successful responses.
  • JSON Output: Native JSON output for OpenAI, Fireworks AI, and Google Gemini responses.
  • JSON Repair: Detects and repairs invalid JSON responses (see the sketch after this list).
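
Caching, retries, and JSON repair are all controlled through the interfaceOptions object referenced in the v2.0.2 notes below. Here is a minimal sketch, assuming interfaceOptions is passed as a fifth argument after the options object; the cacheTimeoutSeconds and retryAttempts option names are illustrative assumptions, while attemptJsonRepair is documented below.

const { LLMInterfaceSendMessage } = require('llm-interface');

LLMInterfaceSendMessage(
  'openai',
  process.env.OPENAI_API_KEY,
  'List three benefits of low latency LLMs as a JSON array.',
  { max_tokens: 150 },
  {
    cacheTimeoutSeconds: 3600, // assumed option name: cache responses for one hour
    retryAttempts: 3, // assumed option name: retry failed prompts with increasing delays
    attemptJsonRepair: true, // documented option: repair invalid JSON responses
  },
)
  .then((response) => console.log(response.results))
  .catch((error) => console.error(error));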

Updates

v2.0.3

  • New Provider Utility Functions: LLMInterface.getAllModelNames() and LLMInterface.getModelConfigValue(provider, configValueKey) (sketched below).
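
A minimal sketch of these helpers, assuming LLMInterface is exported from the package alongside LLMInterfaceSendMessage; the 'url' config key and the sample output are illustrative assumptions.

const { LLMInterface } = require('llm-interface');

// List every supported provider/model name.
const names = LLMInterface.getAllModelNames();
console.log(names); // e.g. ['openai', 'anthropic', 'gemini', ...] (assumed output)

// Look up a single configuration value for a provider.
const endpoint = LLMInterface.getModelConfigValue('openai', 'url'); // 'url' is an assumed key
console.log(endpoint);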

v2.0.2

  • New LLM Providers: Added support for Cloudflare AI and Fireworks AI.
  • JSON Consistency: A breaking change has been introduced: all responses now return as valid JSON objects.
  • JSON Repair: Use interfaceOptions.attemptJsonRepair to repair invalid JSON responses when they occur.
  • Improved Hugging Face Interface: Refactored interface to support the undocumented chat completion endpoint.
  • Interface Name Changes: reka becomes rekaai, goose becomes gooseai, mistral becomes mistralai.
  • Deprecated: handlers has been removed.
  • Updated LLM Model Definitions: Revised small models for various providers.

Dependencies

The project relies on several npm packages and APIs. Here are the primary dependencies:

  • axios: For making HTTP requests (used for the providers accessed through plain HTTP APIs).
  • @anthropic-ai/sdk: SDK for interacting with the Anthropic API.
  • @google/generative-ai: SDK for interacting with the Google Gemini API.
  • groq-sdk: SDK for interacting with the Groq API.
  • openai: SDK for interacting with the OpenAI API.
  • dotenv: For managing environment variables. Used by test cases.
  • flat-cache: For caching API responses to improve performance and reduce redundant requests.
  • jsonrepair: Used to repair invalid JSON responses.
  • jest: For running test cases.

Installation

To install the llm-interface package, you can use npm:

npm install llm-interface

Usage

Example

First, import LLMInterfaceSendMessage. You can do this using either the CommonJS require syntax:

const { LLMInterfaceSendMessage } = require('llm-interface');

or the ES6 import syntax:

import { LLMInterfaceSendMessage } from 'llm-interface';

Then send your prompt to the LLM provider of your choice:

const message = {
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain the importance of low latency LLMs.' },
  ],
};

LLMInterfaceSendMessage('openai', process.env.OPENAI_API_KEY, message, {
  max_tokens: 150,
})
  .then((response) => {
    console.log(response.results);
  })
  .catch((error) => {
    console.error(error);
  });

Or, if you want to keep things simple, you can pass your prompt as a plain string:

LLMInterfaceSendMessage(
  'openai',
  process.env.OPENAI_API_KEY,
  'Explain the importance of low latency LLMs.',
)
  .then((response) => {
    console.log(response.results);
  })
  .catch((error) => {
    console.error(error);
  });
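
Because LLMInterfaceSendMessage returns a promise, the same call also works with async/await:

async function main() {
  try {
    const response = await LLMInterfaceSendMessage(
      'openai',
      process.env.OPENAI_API_KEY,
      'Explain the importance of low latency LLMs.',
    );
    console.log(response.results);
  } catch (error) {
    console.error(error);
  }
}

main();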

If you need API Keys, use this starting point. Additional usage examples and an API reference are available. You may also wish to review the test cases for further examples.

Running Tests

The project includes tests for each LLM handler. To run the tests, use the following command:

npm test

Test Results (v2.0.0)

Test Suites: 43 passed, 43 total
Tests:       172 passed, 172 total

Contribute

Contributions to this project are welcome. Please fork the repository and submit a pull request with your changes or improvements.

License

This project is licensed under the MIT License - see the LICENSE file for details.
