🦾 OpenLLM: Self-Hosting LLMs Made Easy

License: Apache-2.0

OpenLLM allows developers to run any open-source LLMs (Llama 3.3, Qwen2.5, Phi3 and more) or custom models as OpenAI-compatible APIs with a single command. It features a built-in chat UI, state-of-the-art inference backends, and a simplified workflow for creating enterprise-grade cloud deployments with Docker, Kubernetes, and BentoCloud.

Understand the design philosophy of OpenLLM.

Get Started

Run the following commands to install OpenLLM and explore it interactively.

pip install openllm  # or pip3 install openllm
openllm hello


Supported models

OpenLLM supports a wide range of state-of-the-art open-source LLMs. You can also add a model repository to run custom models with OpenLLM.

| Model | Parameters | Required GPU | Start a Server |
| --- | --- | --- | --- |
| deepseek-r1 | 671B | 80Gx16 | openllm serve deepseek-r1:671b-fc3d |
| deepseek-r1-distill | 14B | 80G | openllm serve deepseek-r1-distill:qwen2.5-14b-98a9 |
| deepseek-v3 | 671B | 80Gx16 | openllm serve deepseek-v3:671b-instruct-d7ec |
| gemma2 | 2B | 12G | openllm serve gemma2:2b-instruct-747d |
| llama3.1 | 8B | 24G | openllm serve llama3.1:8b-instruct-3c0c |
| llama3.2 | 1B | 24G | openllm serve llama3.2:1b-instruct-f041 |
| llama3.3 | 70B | 80Gx2 | openllm serve llama3.3:70b-instruct-b850 |
| mistral | 8B | 24G | openllm serve mistral:8b-instruct-50e8 |
| mistral-large | 123B | 80Gx4 | openllm serve mistral-large:123b-instruct-1022 |
| mistralai | 24B | 80G | openllm serve mistralai:24b-small-instruct-2501-0e69 |
| mixtral | 7B | 80Gx2 | openllm serve mixtral:8x7b-instruct-v0.1-b752 |
| phi4 | 14B | 80G | openllm serve phi4:14b-c12d |
| pixtral | 12B | 80G | openllm serve pixtral:12b-240910-c344 |
| qwen2.5 | 7B | 24G | openllm serve qwen2.5:7b-instruct-3260 |
| qwen2.5-coder | 7B | 24G | openllm serve qwen2.5-coder:7b-instruct-e75d |
| qwen2.5vl | 3B | 24G | openllm serve qwen2.5vl:3b-instruct-4686 |

...

For the full model list, see the OpenLLM models repository.

Start an LLM server

To start an LLM server locally, use the openllm serve command and specify the model version.

[!NOTE] OpenLLM does not store model weights. A Hugging Face token (HF_TOKEN) is required for gated models.

  1. Create your Hugging Face token here.
  2. Request access to the gated model, such as meta-llama/Llama-3.2-1B-Instruct.
  3. Set your token as an environment variable by running:
    export HF_TOKEN=<your token>
    
openllm serve llama3.2:1b-instruct-f041

The server will be accessible at http://localhost:3000, providing OpenAI-compatible APIs for interaction. You can call the endpoints with different frameworks and tools that support OpenAI-compatible APIs. Typically, you may need to specify the following:

  • The API host address: By default, the LLM is hosted at http://localhost:3000.
  • The model name: The name can be different depending on the tool you use.
  • The API key: The API key used for client authentication. This is optional.

Here are some examples:

OpenAI Python client
from openai import OpenAI

client = OpenAI(base_url='http://localhost:3000/v1', api_key='na')

# Uncomment the following lines to list the available models
# model_list = client.models.list()
# print(model_list)

chat_completion = client.chat.completions.create(
    model="meta-llama/Llama-3.2-1B-Instruct",
    messages=[
        {
            "role": "user",
            "content": "Explain superconductors like I'm five years old"
        }
    ],
    stream=True,
)
for chunk in chat_completion:
    print(chunk.choices[0].delta.content or "", end="")
LlamaIndex
from llama_index.llms.openai import OpenAI

llm = OpenAI(api_base="http://localhost:3000/v1", model="meta-llama/Llama-3.2-1B-Instruct", api_key="dummy")
...
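
Plain HTTP with requests

Because the server exposes OpenAI-compatible endpoints, any HTTP client can call it directly. The snippet below is a minimal sketch using the requests library; it assumes the standard OpenAI-compatible /v1/chat/completions route and reuses the model name from the examples above, so adjust both for your deployment.

import requests

# Assumes the OpenAI-compatible chat completions route exposed by `openllm serve`.
# The model name below matches the examples above; replace it with whatever your
# server reports via GET /v1/models.
response = requests.post(
    "http://localhost:3000/v1/chat/completions",
    headers={"Authorization": "Bearer na"},  # placeholder key; client auth is optional
    json={
        "model": "meta-llama/Llama-3.2-1B-Instruct",
        "messages": [
            {"role": "user", "content": "Explain superconductors like I'm five years old"}
        ],
        "stream": False,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])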

Chat UI

OpenLLM provides a chat UI for the launched LLM server at the /chat endpoint: http://localhost:3000/chat.


Chat with a model in the CLI

To start a chat conversation in the CLI, use the openllm run command and specify the model version.

openllm run llama3:8b

Model repository

A model repository in OpenLLM represents a catalog of available LLMs that you can run. OpenLLM provides a default model repository that includes the latest open-source LLMs like Llama 3, Mistral, and Qwen2, hosted at this GitHub repository. To see all available models from the default and any added repository, use:

openllm model list

To ensure your local list of models is synchronized with the latest updates from all connected repositories, run:

openllm repo update

To review a model’s information, run:

openllm model get llama3.2:1b-instruct-f041

Add a model to the default model repository

You can contribute to the default model repository by adding new models that others can use. This involves creating and submitting a Bento of the LLM. For more information, check out this example pull request.

Set up a custom repository

You can add your own repository to OpenLLM with custom models. To do so, follow the format in the default OpenLLM model repository with a bentos directory to store custom LLMs. You need to build your Bentos with BentoML and submit them to your model repository.

First, prepare your custom models in a bentos directory following the guidelines provided by BentoML to build Bentos. Check out the default model repository for an example and read the Developer Guide for details.
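
For orientation only, here is a heavily simplified sketch of what a BentoML (>=1.2) service definition looks like. The class name, method, and resource settings are hypothetical; the real Bentos in the default repository wrap a full inference backend, so follow the Developer Guide for the actual structure.

import bentoml

# Hypothetical, minimal BentoML service skeleton, for illustration only.
# The actual Bentos in the default OpenLLM repository are considerably
# more involved; see the Developer Guide for the real structure.
@bentoml.service(resources={"gpu": 1})
class MyCustomLLM:
    @bentoml.api
    def generate(self, prompt: str) -> str:
        # Replace with a call into your model or inference engine.
        return f"echo: {prompt}"

After building the service with bentoml build, place the resulting Bento under your repository's bentos directory so OpenLLM can discover it.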

Then, register your custom model repository with OpenLLM:

openllm repo add <repo-name> <repo-url>

Note: Currently, OpenLLM only supports adding public repositories.

Deploy to BentoCloud

OpenLLM supports LLM cloud deployment via BentoML, the unified model serving framework, and BentoCloud, an AI inference platform for enterprise AI teams. BentoCloud provides fully managed infrastructure optimized for LLM inference with autoscaling, model orchestration, observability, and more, allowing you to run any AI model in the cloud.

Sign up for BentoCloud for free and log in. Then, run openllm deploy to deploy a model to BentoCloud:

openllm deploy llama3.2:1b-instruct-f041

[!NOTE] If you are deploying a gated model, make sure to set HF_TOKEN as an environment variable.

Once the deployment is complete, you can run model inference on the BentoCloud console:


Community

OpenLLM is actively maintained by the BentoML team. Feel free to reach out and join us in our pursuit to make LLMs more accessible and easy to use 👉 Join our Slack community!

Contributing

As an open-source project, we welcome contributions of all kinds, such as new features, bug fixes, and documentation.

Acknowledgements

This project builds on a number of open-source projects. We are grateful to their developers and contributors for their hard work and dedication.
