inference

Wraps a bunch of different inference providers' models and rate limits them, while also getting them to support TypeScript more natively.

My specific application may send many parallel requests to inference models, and I need to rate limit those requests per provider across the whole application. This package effectively solves that problem.

This is a major WIP, so a bunch of things are left unimplemented for the time being. However, the basic functionality should be there.

Supported providers:

  • OpenAI (for chat, audio, image, embedding)
  • Together (for chat)
  • Mistral (for chat)
  • Whisper.cpp (for audio)

WIP Stuff:

  • consistent JSON mode
  • error handling
  • more rate limiting options
  • more providers (llama.cpp for chat, image and embedding)
  • move to config file & code gen for better typing?

Usage

Check out test/index.test.ts for usage examples.

Generally speaking:

  • Instantiate a provider
const oai = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY!,
});
  • Create a rate limiter based on your own usage (this is in requests per second)
const oaiLimiter = createRateLimiter(2);
  • Define what models you want to use and their aliases
const CHAT_MODELS: Record<string, ChatModel> = {
  "gpt-3.5": {
    provider: oai,
    name: "gpt-3.5",
    providerModel: "gpt-3.5-turbo-0125",
    rateLimiter: oaiLimiter,
  },
  "gpt-4": {
    provider: oai,
    name: "gpt-4",
    providerModel: "gpt-4-0125-preview",
    rateLimiter: oaiLimiter,
  }
}
  • Create an Inference instance with the models you want
const inference = new Inference({chatModels: CHAT_MODELS});
  • Call inference with the model you want to use
const result = await inference.chat({model: "gpt-3.5", prompt: "Hello, world!"});
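
Putting the steps together, here is a minimal end-to-end sketch that also shows the rate limiter doing its job. It assumes Inference, OpenAIProvider, createRateLimiter, and the ChatModel type are all exported from @cjpais/inference (check the package exports for the exact names), and that chat resolves once the provider responds.

import {
  Inference,
  OpenAIProvider,
  createRateLimiter,
  type ChatModel,
} from "@cjpais/inference";

const oai = new OpenAIProvider({
  apiKey: process.env.OPENAI_API_KEY!,
});

// One limiter per provider: 2 requests per second, shared by every
// model entry that points at this provider.
const oaiLimiter = createRateLimiter(2);

const CHAT_MODELS: Record<string, ChatModel> = {
  "gpt-3.5": {
    provider: oai,
    name: "gpt-3.5",
    providerModel: "gpt-3.5-turbo-0125",
    rateLimiter: oaiLimiter,
  },
};

const inference = new Inference({ chatModels: CHAT_MODELS });

// Fire six requests in parallel. With the limiter at 2 req/s they should
// drain at roughly two per second rather than all hitting OpenAI at once.
const started = Date.now();
const results = await Promise.all(
  Array.from({ length: 6 }, (_, i) =>
    inference.chat({ model: "gpt-3.5", prompt: `Say the number ${i}.` })
  )
);
console.log(results);
console.log(`elapsed: ${Date.now() - started}ms`);

Because the limiter is attached to each model entry but shared across the provider, adding a second provider (say Together or Mistral) with its own limiter keeps each provider's quota independent.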

To install dependencies:

bun install

To run:

bun run index.ts
