πŸš€ Big News: Socket Acquires Coana to Bring Reachability Analysis to Every Appsec Team.Learn more β†’
Socket
Sign inDemoInstall
Socket

llama-stack

Package Overview
Dependencies
Maintainers
5
Alerts
File Explorer

Advanced tools

Socket logo

Install Socket

Detect and block malicious and high-risk dependencies

Install

llama-stack

Llama Stack

0.2.7
PyPI
Maintainers
5

Llama Stack


Quick Start | Documentation | Colab Notebook | Discord

βœ¨πŸŽ‰ Llama 4 Support πŸŽ‰βœ¨

We released version 0.2.0 with support for Meta's Llama 4 herd of models.

πŸ‘‹ Click here to see how to run Llama 4 models on Llama Stack


Note: you need an 8xH100 GPU host to run these models.

pip install -U llama_stack

MODEL="Llama-4-Scout-17B-16E-Instruct"
# get meta url from llama.com
llama model download --source meta --model-id $MODEL --meta-url <META_URL>

# start a llama stack server
INFERENCE_MODEL=meta-llama/$MODEL llama stack build --run --template meta-reference-gpu

# install client to interact with the server
pip install llama-stack-client

CLI

# Run a chat completion
llama-stack-client --endpoint http://localhost:8321 \
inference chat-completion \
--model-id meta-llama/$MODEL \
--message "write a haiku for meta's llama 4 models"

ChatCompletionResponse(
    completion_message=CompletionMessage(content="Whispers in code born\nLlama's gentle, wise heartbeat\nFuture's soft unfold", role='assistant', stop_reason='end_of_turn', tool_calls=[]),
    logprobs=None,
    metrics=[Metric(metric='prompt_tokens', value=21.0, unit=None), Metric(metric='completion_tokens', value=28.0, unit=None), Metric(metric='total_tokens', value=49.0, unit=None)]
)

Python SDK

from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"
prompt = "Write a haiku about coding"

print(f"User> {prompt}")
response = client.inference.chat_completion(
    model_id=model_id,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt},
    ],
)
print(f"Assistant> {response.completion_message.content}")

As more providers start supporting Llama 4, you can use them in Llama Stack as well. We are adding to the list. Stay tuned!

πŸš€ One-Line Installer πŸš€

To try Llama Stack locally, run:

curl -LsSf https://github.com/meta-llama/llama-stack/raw/main/install.sh | sh

Overview

Llama Stack standardizes the core building blocks that simplify AI application development and codifies best practices across the Llama ecosystem. More specifically, it provides:

  • Unified API layer for Inference, RAG, Agents, Tools, Safety, Evals, and Telemetry (a small sketch follows this list).
  • Plugin architecture to support the rich ecosystem of different API implementations in various environments, including local development, on-premises, cloud, and mobile.
  • Prepackaged verified distributions which offer a one-stop solution for developers to get started quickly and reliably in any environment.
  • Multiple developer interfaces like CLI and SDKs for Python, TypeScript, iOS, and Android.
  • Standalone applications as examples for how to build production-grade AI applications with Llama Stack.
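
As a concrete taste of the unified API layer, the sketch below asks one running server what it can serve. It assumes a local server on port 8321 and that `client.models.list()` behaves as in recent llama-stack-client releases; the exact output depends on which distribution you started:

from llama_stack_client import LlamaStackClient

# One client fronts every API; which providers answer is decided by the
# distribution's configuration, not by application code.
client = LlamaStackClient(base_url="http://localhost:8321")

for model in client.models.list():
    # identifiers look like "meta-llama/Llama-4-Scout-17B-16E-Instruct"
    print(model.identifier)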
[Diagram: Llama Stack architecture]

Llama Stack Benefits

  • Flexible Options: Developers can choose their preferred infrastructure without changing APIs and enjoy flexible deployment choices.
  • Consistent Experience: With its unified APIs, Llama Stack makes it easier to build, test, and deploy AI applications with consistent application behavior.
  • Robust Ecosystem: Llama Stack is already integrated with distribution partners (cloud providers, hardware vendors, and AI-focused companies) that offer tailored infrastructure, software, and services for deploying Llama models.

By reducing friction and complexity, Llama Stack empowers developers to focus on what they do best: building transformative generative AI applications.

API Providers

Here is a list of the various API providers and available distributions that can help developers get started easily with Llama Stack.

| API Provider Builder | Environments | Agents | Inference | Memory | Safety | Telemetry |
|---|---|---|---|---|---|---|
| Meta Reference | Single Node | ✅ | ✅ | ✅ | ✅ | ✅ |
| SambaNova | Hosted | | ✅ | | | |
| Cerebras | Hosted | | ✅ | | | |
| Fireworks | Hosted | ✅ | ✅ | ✅ | | |
| AWS Bedrock | Hosted | | ✅ | | ✅ | |
| Together | Hosted | ✅ | ✅ | | ✅ | |
| Groq | Hosted | | ✅ | | | |
| Ollama | Single Node | | ✅ | | | |
| TGI | Hosted and Single Node | | ✅ | | | |
| NVIDIA NIM | Hosted and Single Node | | ✅ | | | |
| Chroma | Single Node | | | ✅ | | |
| PG Vector | Single Node | | | ✅ | | |
| PyTorch ExecuTorch | On-device iOS | ✅ | ✅ | | | |
| vLLM | Hosted and Single Node | | ✅ | | | |
| OpenAI | Hosted | | ✅ | | | |
| Anthropic | Hosted | | ✅ | | | |
| Gemini | Hosted | | ✅ | | | |
| watsonx | Hosted | | ✅ | | | |

Distributions

A Llama Stack Distribution (or "distro") is a pre-configured bundle of provider implementations for each API component. Distributions make it easy to get started with a specific deployment scenario: you can begin with a local development setup (e.g., Ollama) and seamlessly transition to production (e.g., Fireworks) without changing your application code (see the sketch after the table below). Here are some of the distributions we support:

| Distribution | Llama Stack Docker | Start This Distribution |
|---|---|---|
| Meta Reference | llamastack/distribution-meta-reference-gpu | Guide |
| SambaNova | llamastack/distribution-sambanova | Guide |
| Cerebras | llamastack/distribution-cerebras | Guide |
| Ollama | llamastack/distribution-ollama | Guide |
| TGI | llamastack/distribution-tgi | Guide |
| Together | llamastack/distribution-together | Guide |
| Fireworks | llamastack/distribution-fireworks | Guide |
| vLLM | llamastack/distribution-remote-vllm | Guide |
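
Because every distro serves the same APIs, moving between them is, in the simplest case, just a matter of pointing the client at a different endpoint. A minimal sketch; the environment-variable names and the fallback model ID are illustrative, not part of Llama Stack itself:

import os

from llama_stack_client import LlamaStackClient

# Dev: a local ollama distro; prod: e.g. a hosted fireworks distro.
# Only the endpoint changes; the application code stays identical.
client = LlamaStackClient(
    base_url=os.environ.get("LLAMA_STACK_URL", "http://localhost:8321")
)

response = client.inference.chat_completion(
    model_id=os.environ.get("INFERENCE_MODEL", "meta-llama/Llama-4-Scout-17B-16E-Instruct"),
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.completion_message.content)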

Documentation

Please check out our Documentation page for more details.

Llama Stack Client SDKs

| Language | Client SDK | Package |
|---|---|---|
| Python | llama-stack-client-python | PyPI |
| Swift | llama-stack-client-swift | Swift Package Index |
| TypeScript | llama-stack-client-typescript | npm |
| Kotlin | llama-stack-client-kotlin | Maven |

Check out our client SDKs for connecting to a Llama Stack server in your preferred language: you can choose from Python, TypeScript, Swift, and Kotlin to quickly build your applications.

You can find more example scripts with client SDKs to talk with the Llama Stack server in our llama-stack-apps repo.
