LlamaIndex LLMs Integration: LiteLLM

Installation

  1. In a notebook, install the required Python packages (a plain-terminal variant follows below):

    %pip install llama-index-llms-litellm
    %pip install llama-index
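
From a plain terminal, the same packages install without the notebook prefix:

    pip install llama-index-llms-litellm llama-index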

Usage

Import Required Libraries

import os
from llama_index.llms.litellm import LiteLLM
from llama_index.core.llms import ChatMessage

Set Up Environment Variables

Set your API keys as environment variables. LiteLLM routes each request by model name, so only the key for the provider you actually call needs to be set:

os.environ["OPENAI_API_KEY"] = "your-api-key"
os.environ["COHERE_API_KEY"] = "your-api-key"

Example: OpenAI Call

To chat with an OpenAI model:

message = ChatMessage(role="user", content="Hey! how's it going?")
llm = LiteLLM("gpt-3.5-turbo")
chat_response = llm.chat([message])
print(chat_response)
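
The same object also supports single-prompt text completion through the standard LlamaIndex complete() method:

resp = llm.complete("Paul Graham is ")
print(resp)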

Example: Cohere Call

To call a Cohere model instead, only the model name changes (the message from the previous example is reused):

llm = LiteLLM("command-nightly")
chat_response = llm.chat([message])
print(chat_response)

Example: Chat with System Message

To include a system message in the conversation:

messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]
resp = LiteLLM("gpt-3.5-turbo").chat(messages)
print(resp)
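
Generation settings can be tuned when the client is constructed; temperature and max_tokens are the usual constructor parameters across LlamaIndex LLM integrations. A minimal sketch, assuming this integration follows that convention:

# Lower temperature for steadier output; cap the response length.
llm = LiteLLM("gpt-3.5-turbo", temperature=0.1, max_tokens=256)
print(llm.chat(messages))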

Streaming Responses

To stream a text completion token by token with stream_complete:

llm = LiteLLM("gpt-3.5-turbo")
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
    # each chunk exposes the newly generated text in .delta
    print(r.delta, end="")

Streaming Chat Example

To stream a chat response, pass the same messages to stream_chat:

llm = LiteLLM("gpt-3.5-turbo")
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")

Asynchronous Example

For asynchronous calls, use acomplete. The await expression must run inside an async function or an environment with a running event loop, such as a Jupyter notebook:

llm = LiteLLM("gpt-3.5-turbo")
resp = await llm.acomplete("Paul Graham is ")
print(resp)
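
Async streaming mirrors the synchronous API. A short sketch using the standard LlamaIndex astream_complete method, which yields chunks inside an async for loop:

resp = await llm.astream_complete("Paul Graham is ")
async for r in resp:
    print(r.delta, end="")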

LLM Implementation Example

https://docs.llamaindex.ai/en/stable/examples/llm/litellm/
