Llama Stack

Quick Start | Documentation | Colab Notebook | Discord
🚀 One-Line Installer 🚀
To try Llama Stack locally, run:

```bash
curl -LsSf https://github.com/llamastack/llama-stack/raw/main/scripts/install.sh | bash
```
Overview
Llama Stack defines and standardizes the core building blocks that simplify AI application development. It provides a unified set of APIs with implementations from leading service providers. More specifically, it provides:
- Unified API layer for Inference, RAG, Agents, Tools, Safety, and Evals (see the client sketch after this list).
- Plugin architecture to support the rich ecosystem of different API implementations in various environments, including local development, on-premises, cloud, and mobile.
- Prepackaged verified distributions which offer a one-stop solution for developers to get started quickly and reliably in any environment.
- Multiple developer interfaces like CLI and SDKs for Python, TypeScript, iOS, and Android.
- Standalone applications as examples for how to build production-grade AI applications with Llama Stack.
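For example, the unified Inference API looks the same regardless of which provider serves the model. Below is a minimal sketch using the Python client SDK; it assumes `pip install llama-stack-client`, a Llama Stack server already running on the default local port (8321), and a model identifier that your distribution actually serves. Exact method names and response fields can vary between SDK versions.

```python
# Minimal sketch, not the canonical quickstart: assumes a server at
# http://localhost:8321 and a model id available in your distribution.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# The same chat_completion call works whether inference is backed by Ollama,
# Fireworks, vLLM, or any other configured provider.
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",  # assumed; use a model your stack serves
    messages=[{"role": "user", "content": "Write a haiku about unified APIs."}],
)
print(response.completion_message.content)
```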
Llama Stack Benefits
- Flexibility: Developers can choose their preferred infrastructure without changing APIs and enjoy flexible deployment choices.
- Consistent Experience: With its unified APIs, Llama Stack makes it easier to build, test, and deploy AI applications with consistent application behavior.
- Robust Ecosystem: Llama Stack is integrated with distribution partners (cloud providers, hardware vendors, and AI-focused companies) that offer tailored infrastructure, software, and services for deploying Llama models.
For more information, see the Benefits of Llama Stack documentation.
API Providers
Here is a list of API providers and the environments they support, to help developers get started easily with Llama Stack. Please check out the documentation for the full list of providers.
| API Provider Builder | Environments | Agents | Inference | VectorIO | Safety | Post Training | Eval | DatasetIO |
|:---|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Meta Reference | Single Node | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| SambaNova | Hosted | | ✅ | | ✅ | | | |
| Cerebras | Hosted | | ✅ | | | | | |
| Fireworks | Hosted | ✅ | ✅ | ✅ | | | | |
| AWS Bedrock | Hosted | | ✅ | | ✅ | | | |
| Together | Hosted | ✅ | ✅ | | ✅ | | | |
| Groq | Hosted | | ✅ | | | | | |
| Ollama | Single Node | | ✅ | | | | | |
| TGI | Hosted/Single Node | | ✅ | | | | | |
| NVIDIA NIM | Hosted/Single Node | | ✅ | | ✅ | | | |
| ChromaDB | Hosted/Single Node | | | ✅ | | | | |
| Milvus | Hosted/Single Node | | | ✅ | | | | |
| Qdrant | Hosted/Single Node | | | ✅ | | | | |
| Weaviate | Hosted/Single Node | | | ✅ | | | | |
| SQLite-vec | Single Node | | | ✅ | | | | |
| PG Vector | Single Node | | | ✅ | | | | |
| PyTorch ExecuTorch | On-device iOS | ✅ | ✅ | | | | | |
| vLLM | Single Node | | ✅ | | | | | |
| OpenAI | Hosted | | ✅ | | | | | |
| Anthropic | Hosted | | ✅ | | | | | |
| Gemini | Hosted | | ✅ | | | | | |
| WatsonX | Hosted | | ✅ | | | | | |
| HuggingFace | Single Node | | | | | ✅ | | ✅ |
| TorchTune | Single Node | | | | | ✅ | | |
| NVIDIA NEMO | Hosted | | ✅ | ✅ | | ✅ | ✅ | ✅ |
| NVIDIA | Hosted | | | | | ✅ | ✅ | ✅ |
Note: Additional providers are available through external packages. See External Providers documentation.
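Rather than consulting the table above, you can also ask a running stack which providers it was configured with. A hedged sketch with the Python SDK, assuming a local server on the default port; the exact response fields may differ by client version.

```python
# Sketch: list the providers configured in a running Llama Stack server.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")  # assumed local server

for provider in client.providers.list():
    # Each entry reports which API it implements (e.g. inference, vector_io, safety).
    print(provider.api, provider.provider_id, provider.provider_type)
```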
Distributions
A Llama Stack Distribution (or "distro") is a pre-configured bundle of provider implementations for each API component. Distributions make it easy to get started with a specific deployment scenario. For example, you can begin with a local setup using Ollama and seamlessly transition to production with Fireworks, without changing your application code.
For the list of distributions we support and full documentation on each, see the Distributions Overview page.
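Because every distribution exposes the same APIs, application code can discover what a given distro serves at runtime instead of hard-coding provider details. A minimal sketch, again assuming the Python SDK and a server reachable at a local or remote address; the `LLAMA_STACK_URL` environment variable is hypothetical and used only for illustration.

```python
# Sketch: the same discovery code works against any distribution (Ollama-backed
# locally, Fireworks-backed in production, etc.); only the base_url changes.
import os
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(
    base_url=os.environ.get("LLAMA_STACK_URL", "http://localhost:8321")
)

# List the models the running distribution serves and which provider backs each one.
for model in client.models.list():
    print(model.identifier, model.provider_id)
```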
Documentation
Please check out our Documentation page for more details.
Llama Stack Client SDKs
Check out our client SDKs for connecting to a Llama Stack server in your preferred language.
You can find more example scripts that use the client SDKs to talk to a Llama Stack server in our llama-stack-apps repo.
🌟 GitHub Star History
Star History

✨ Contributors
Thanks to all of our amazing contributors!