šŸš… LiteLLM


Call all LLM APIs using the OpenAI format [Bedrock, Huggingface, VertexAI, TogetherAI, Azure, OpenAI, etc.]

OpenAI Proxy Server | Hosted Proxy (Preview) | Enterprise Tier

LiteLLM manages:

  • Translating inputs to the provider's completion, embedding, and image_generation endpoints
  • Consistent output: text responses are always available at ['choices'][0]['message']['content']
  • Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - Router (see the sketch after this list)
  • Setting budgets & rate limits per project, API key, and model - OpenAI Proxy Server
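
For example, the Router can retry and fall back across deployments that serve the same model group. A minimal sketch, assuming hypothetical Azure/OpenAI credentials and deployment names:

from litellm import Router

# two deployments behind one model group; the Router load balances,
# retries, and falls back between them
router = Router(model_list=[
    {
        "model_name": "gpt-3.5-turbo",  # model group name used by callers
        "litellm_params": {
            "model": "azure/my-azure-deployment",  # hypothetical deployment name
            "api_key": "your-azure-key",
            "api_base": "https://my-endpoint.openai.azure.com",
        },
    },
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "your-openai-key"},
    },
])

response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)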

Jump to OpenAI Proxy Docs
Jump to Supported LLM Providers

šŸšØ Stable Release: Use docker images with the -stable tag. These have undergone 12-hour load tests before being published.

Support for more providers: missing a provider or LLM platform? Raise a feature request.

Usage (Docs)

[!IMPORTANT] LiteLLM v1.0.0 now requires openai>=1.0.0. Migration guide here
LiteLLM v1.40.14+ now requires pydantic>=2.0.0. No changes required.

Open In Colab
pip install litellm
from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["COHERE_API_KEY"] = "your-cohere-key"

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
print(response)

Call any model supported by a provider, with model=<provider_name>/<model_name>. There might be provider-specific details here, so refer to provider docs for more information
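
For example, the Hugging Face-hosted starcoder model used in the proxy quick start below can be called directly with the provider prefix. A minimal sketch, assuming a valid Hugging Face key:

# <provider_name>/<model_name> routes the call to the Hugging Face provider
os.environ["HUGGINGFACE_API_KEY"] = "your-huggingface-key"

response = completion(
    model="huggingface/bigcode/starcoder",
    messages=[{"role": "user", "content": "def fib(n):"}],
)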

Async (Docs)

from litellm import acompletion
import asyncio

async def test_get_response():
    user_message = "Hello, how are you?"
    messages = [{"content": user_message, "role": "user"}]
    response = await acompletion(model="gpt-3.5-turbo", messages=messages)
    return response

response = asyncio.run(test_get_response())
print(response)

Streaming (Docs)

LiteLLM supports streaming the model response back; pass stream=True to get a streaming iterator in the response.
Streaming is supported for all models (Bedrock, Huggingface, TogetherAI, Azure, OpenAI, etc.)

from litellm import completion
response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for part in response:
    print(part.choices[0].delta.content or "")

# claude 2
response = completion(model='claude-2', messages=messages, stream=True)
for part in response:
    print(part.choices[0].delta.content or "")
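
Streaming also works with the async client. A minimal sketch, reusing the messages list from the usage example above:

import asyncio
from litellm import acompletion

async def stream_response():
    # acompletion with stream=True returns an async iterator of chunks
    response = await acompletion(model="gpt-3.5-turbo", messages=messages, stream=True)
    async for part in response:
        print(part.choices[0].delta.content or "", end="")

asyncio.run(stream_response())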

Logging Observability (Docs)

LiteLLM exposes pre-defined callbacks to send data to Lunary, Langfuse, DynamoDB, S3 buckets, Helicone, Promptlayer, Traceloop, Athina, and Slack.

import os
import litellm
from litellm import completion

## set env variables for logging tools
os.environ["LUNARY_PUBLIC_KEY"] = "your-lunary-public-key"
os.environ["LANGFUSE_PUBLIC_KEY"] = ""
os.environ["LANGFUSE_SECRET_KEY"] = ""
os.environ["ATHINA_API_KEY"] = "your-athina-api-key"

os.environ["OPENAI_API_KEY"]

# set callbacks
litellm.success_callback = ["lunary", "langfuse", "athina"] # log input/output to lunary, langfuse, athina

#openai call
response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi šŸ‘‹ - i'm openai"}])

OpenAI Proxy - (Docs)

Track spend + Load Balance across multiple projects

Hosted Proxy (Preview)

The proxy provides:

  1. Hooks for auth
  2. Hooks for logging
  3. Cost tracking
  4. Rate Limiting

šŸ“– Proxy Endpoints - Swagger Docs

Quick Start Proxy - CLI

pip install 'litellm[proxy]'

Step 1: Start litellm proxy

$ litellm --model huggingface/bigcode/starcoder

#INFO: Proxy running on http://0.0.0.0:4000

Step 2: Make ChatCompletions Request to Proxy

import openai # openai v1.0.0+
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000") # set proxy to base_url
# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(model="gpt-3.5-turbo", messages = [
    {
        "role": "user",
        "content": "this is a test request, write a short poem"
    }
])

print(response)
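
Because the proxy is OpenAI-compatible, streaming works through the same client. A minimal sketch reusing the client above:

# stream the response through the proxy
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "this is a test request, write a short poem"}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")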

Proxy Key Management (Docs)

Connect the proxy with a Postgres DB to create proxy keys

# Get the code
git clone https://github.com/BerriAI/litellm

# Go to folder
cd litellm

# Add the master key
echo 'LITELLM_MASTER_KEY="sk-1234"' > .env
source .env

# Start
docker-compose up

The UI is available at /ui on your proxy server.

Set budgets and rate limits across multiple projects with POST /key/generate.

Request

curl 'http://0.0.0.0:4000/key/generate' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data-raw '{"models": ["gpt-3.5-turbo", "gpt-4", "claude-2"], "duration": "20m","metadata": {"user": "ishaan@berri.ai", "team": "core-infra"}}'

Expected Response

{
    "key": "sk-kdEXbIqZRwEeEiHwdg7sFA", # Bearer token
    "expires": "2023-11-19T01:38:25.838000+00:00" # datetime object
}
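
The same request from Python, as a sketch (assumes the proxy and master key from the steps above, plus the requests package):

import requests

# generate a scoped proxy key; mirrors the curl request above
resp = requests.post(
    "http://0.0.0.0:4000/key/generate",
    headers={"Authorization": "Bearer sk-1234"},
    json={
        "models": ["gpt-3.5-turbo", "gpt-4", "claude-2"],
        "duration": "20m",
        "metadata": {"user": "ishaan@berri.ai", "team": "core-infra"},
    },
)
print(resp.json()["key"])  # use this as api_key in the OpenAI client above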

Supported Providers (Docs)

| Provider | Completion | Streaming | Async Completion | Async Streaming | Async Embedding | Async Image Generation |
|---|---|---|---|---|---|---|
| openai | āœ… | āœ… | āœ… | āœ… | āœ… | āœ… |
| azure | āœ… | āœ… | āœ… | āœ… | āœ… | āœ… |
| aws - sagemaker | āœ… | āœ… | āœ… | āœ… | āœ… | |
| aws - bedrock | āœ… | āœ… | āœ… | āœ… | āœ… | |
| google - vertex_ai | āœ… | āœ… | āœ… | āœ… | āœ… | āœ… |
| google - palm | āœ… | āœ… | āœ… | āœ… | | |
| google AI Studio - gemini | āœ… | āœ… | āœ… | āœ… | | |
| mistral ai api | āœ… | āœ… | āœ… | āœ… | āœ… | |
| cloudflare AI Workers | āœ… | āœ… | āœ… | āœ… | | |
| cohere | āœ… | āœ… | āœ… | āœ… | āœ… | |
| anthropic | āœ… | āœ… | āœ… | āœ… | | |
| huggingface | āœ… | āœ… | āœ… | āœ… | āœ… | |
| replicate | āœ… | āœ… | āœ… | āœ… | | |
| together_ai | āœ… | āœ… | āœ… | āœ… | | |
| openrouter | āœ… | āœ… | āœ… | āœ… | | |
| ai21 | āœ… | āœ… | āœ… | āœ… | | |
| baseten | āœ… | āœ… | āœ… | āœ… | | |
| vllm | āœ… | āœ… | āœ… | āœ… | | |
| nlp_cloud | āœ… | āœ… | āœ… | āœ… | | |
| aleph alpha | āœ… | āœ… | āœ… | āœ… | | |
| petals | āœ… | āœ… | āœ… | āœ… | | |
| ollama | āœ… | āœ… | āœ… | āœ… | āœ… | |
| deepinfra | āœ… | āœ… | āœ… | āœ… | | |
| perplexity-ai | āœ… | āœ… | āœ… | āœ… | | |
| Groq AI | āœ… | āœ… | āœ… | āœ… | | |
| Deepseek | āœ… | āœ… | āœ… | āœ… | | |
| anyscale | āœ… | āœ… | āœ… | āœ… | | |
| IBM - watsonx.ai | āœ… | āœ… | āœ… | āœ… | āœ… | |
| voyage ai | | | | | āœ… | |
| xinference [Xorbits Inference] | | | | | āœ… | |
| FriendliAI | āœ… | āœ… | āœ… | āœ… | | |

Read the Docs

Contributing

To contribute: Clone the repo locally -> Make a change -> Submit a PR with the change.

Here's how to modify the repo locally:

Step 1: Clone the repo

git clone https://github.com/BerriAI/litellm.git

Step 2: Navigate into the project, and install dependencies:

cd litellm
poetry install -E extra_proxy -E proxy

Step 3: Test your change:

cd litellm/tests # pwd: Documents/litellm/litellm/tests
poetry run flake8
poetry run pytest .

Step 4: Submit a PR with your changes! šŸš€

  • Push your fork to your GitHub repo
  • Submit a PR from there

Enterprise

For companies that need better security, user management, and professional support.

Talk to founders

This covers features under the LiteLLM Commercial License:
  • āœ… Feature Prioritization
  • āœ… Custom Integrations
  • āœ… Professional Support - Dedicated discord + slack
  • āœ… Custom SLAs
  • āœ… Secure access with Single Sign-On

Support / talk with founders

Why did we build this

  • Need for simplicity: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI and Cohere.

Contributors

FAQs

