local-llm

Local-LLM is a llama.cpp server in Docker with OpenAI Style Endpoints.

  • 0.1.1
  • PyPI

Maintainers: 1

Local-LLM

GitHub Dockerhub

Local-LLM is a simple llama.cpp server that exposes a list of local language models you can choose from and run on your own computer. It is designed to make getting started with local models as easy as possible: it automatically handles downloading the model of your choice and configures the server based on your CPU, RAM, and GPU. It also includes OpenAI Style endpoints for easy integration with other applications. Additional functionality is built in for voice-cloning text-to-speech and voice-to-text, enabling voice communication entirely offline after the initial setup.

Prerequisites

Additional Linux Prerequisites

Installation

git clone https://github.com/Josh-XT/Local-LLM
cd Local-LLM

Environment Setup

Expand Environment Setup below if you would like to modify the default environment variables; otherwise, skip to Usage. All environment variables are optional and have useful defaults. To change the default model that starts with Local-LLM, set DEFAULT_MODEL in your .env file.

Environment Setup (Optional)

None of the values need to be modified in order to run the server. If you are using an NVIDIA GPU, I would recommend setting the GPU_LAYERS and MAIN_GPU environment variables. If you plan to expose the server to the internet, I would recommend setting the LOCAL_LLM_API_KEY environment variable for security. THREADS is set to your CPU thread count minus 2 by default; if this causes significant performance issues, consider manually setting THREADS to a lower number.

Modify the .env file to your desired settings; defaults will be assumed for any values you leave unset. A sample .env sketch is shown after the list below.

  • LOCAL_LLM_API_KEY - The API key to use for the server. If not set, the server will not require an API key when accepting requests.
  • DEFAULT_MODEL - The default model to use when no model is specified. Default is phi-2-dpo.
  • WHISPER_MODEL - The model to use for speech-to-text. Default is base.en.
  • AUTO_UPDATE - Whether or not to automatically update Local-LLM. Default is true.
  • THREADS - The number of CPU threads Local-LLM is allowed to use. Default is your CPU thread count minus 2.
  • GPU_LAYERS (Only applicable to NVIDIA GPU) - The number of layers to offload to the GPU. Default is 0. If this is left at 0 and you have an NVIDIA GPU, Local-LLM will automatically determine the optimal number of layers based on your GPU's memory.
  • MAIN_GPU (Only applicable to NVIDIA GPU) - The GPU to use for the main model. Default is 0.
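
For reference, here is a minimal .env sketch built only from the variables listed above. The values are illustrative assumptions rather than required settings; any variable you omit simply falls back to its default.

LOCAL_LLM_API_KEY=
DEFAULT_MODEL=phi-2-dpo
WHISPER_MODEL=base.en
AUTO_UPDATE=true
THREADS=10
GPU_LAYERS=0
MAIN_GPU=0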

Usage

./start.ps1

For examples on how to use the server to communicate with the models, see the Examples Jupyter Notebook.

OpenAI Style Endpoint Usage

OpenAI Style endpoints are available at http://<YOUR LOCAL IP ADDRESS>:8091/v1/ by default. Documentation can be accessed at http://localhost:8091 when the server is running.
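
As a rough sketch of how another application might call these endpoints, the snippet below uses the openai Python package (1.x client style) pointed at the local server. The model name, the chat completions route, and the placeholder API key are assumptions based on the defaults above and on typical OpenAI Style servers; see the Examples Jupyter Notebook for the authoritative usage.

import openai

# Point the client at the local Local-LLM server instead of api.openai.com.
client = openai.OpenAI(
    base_url="http://localhost:8091/v1/",
    api_key="none",  # replace with your LOCAL_LLM_API_KEY value if you set one
)

# Request a chat completion from the default model.
response = client.chat.completions.create(
    model="phi-2-dpo",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)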
