
Llama Stack
Quick Start | Documentation | Colab Notebook | Discord
We released Version 0.2.0 with support for the Llama 4 herd of models released by Meta.
Note that you need an 8xH100 GPU host to run these models.
```bash
pip install -U llama_stack

MODEL="Llama-4-Scout-17B-16E-Instruct"
# get meta url from llama.com
llama model download --source meta --model-id $MODEL --meta-url <META_URL>

# start a llama stack server
INFERENCE_MODEL=meta-llama/$MODEL llama stack build --run --template meta-reference-gpu

# install client to interact with the server
pip install llama-stack-client

# run a chat completion
llama-stack-client --endpoint http://localhost:8321 \
  inference chat-completion \
  --model-id meta-llama/$MODEL \
  --message "write a haiku for meta's llama 4 models"
```
```python
ChatCompletionResponse(
    completion_message=CompletionMessage(content="Whispers in code born\nLlama's gentle, wise heartbeat\nFuture's soft unfold", role='assistant', stop_reason='end_of_turn', tool_calls=[]),
    logprobs=None,
    metrics=[Metric(metric='prompt_tokens', value=21.0, unit=None), Metric(metric='completion_tokens', value=28.0, unit=None), Metric(metric='total_tokens', value=49.0, unit=None)]
)
```
The same chat completion can be issued from Python with the `llama-stack-client` SDK:

```python
from llama_stack_client import LlamaStackClient

# Connect to the Llama Stack server started above.
client = LlamaStackClient(base_url="http://localhost:8321")

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"
prompt = "Write a haiku about coding"
print(f"User> {prompt}")

response = client.inference.chat_completion(
    model_id=model_id,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt},
    ],
)
print(f"Assistant> {response.completion_message.content}")
As more providers start supporting Llama 4, you can use them in Llama Stack as well. We are adding to the list. Stay tuned!
To try Llama Stack locally, run:

```bash
curl -LsSf https://github.com/meta-llama/llama-stack/raw/main/install.sh | sh
```
Llama Stack standardizes the core building blocks that simplify AI application development. It codifies best practices across the Llama ecosystem. More specifically, it provides:

- a unified API layer for inference, RAG, agents, tools, safety, evals, and telemetry;
- a plugin architecture to support a rich ecosystem of implementations for each API in different environments;
- prepackaged, verified distributions that let developers get started quickly and reliably;
- multiple developer interfaces, including a CLI and SDKs for Python, TypeScript, Swift, and Kotlin.
By reducing friction and complexity, Llama Stack empowers developers to focus on what they do best: building transformative generative AI applications.
Here is a list of the various API providers and available distributions that can help developers get started easily with Llama Stack.
API Provider Builder | Environments | Agents | Inference | Memory | Safety | Telemetry | Post Training |
---|---|---|---|---|---|---|---|
Meta Reference | Single Node | ✅ | ✅ | ✅ | ✅ | ✅ | |
SambaNova | Hosted | | ✅ | | ✅ | | |
Cerebras | Hosted | | ✅ | | | | |
Fireworks | Hosted | ✅ | ✅ | ✅ | | | |
AWS Bedrock | Hosted | | ✅ | | ✅ | | |
Together | Hosted | ✅ | ✅ | | ✅ | | |
Groq | Hosted | | ✅ | | | | |
Ollama | Single Node | | ✅ | | | | |
TGI | Hosted and Single Node | | ✅ | | | | |
NVIDIA NIM | Hosted and Single Node | | ✅ | | | | |
Chroma | Single Node | | | ✅ | | | |
PG Vector | Single Node | | | ✅ | | | |
PyTorch ExecuTorch | On-device iOS | ✅ | ✅ | | | | |
vLLM | Hosted and Single Node | | ✅ | | | | |
OpenAI | Hosted | | ✅ | | | | |
Anthropic | Hosted | | ✅ | | | | |
Gemini | Hosted | | ✅ | | | | |
watsonx | Hosted | | ✅ | | | | |
HuggingFace | Single Node | | | | | | ✅ |
TorchTune | Single Node | | | | | | ✅ |
NVIDIA NEMO | Hosted | | | | | | ✅ |
A Llama Stack Distribution (or "distro") is a pre-configured bundle of provider implementations for each API component. Distributions make it easy to get started with a specific deployment scenario: you can begin with a local development setup (e.g., Ollama) and seamlessly transition to production (e.g., Fireworks) without changing your application code (see the sketch after the table). Here are some of the distributions we support:
Distribution | Llama Stack Docker | Start This Distribution |
---|---|---|
Meta Reference | llamastack/distribution-meta-reference-gpu | Guide |
SambaNova | llamastack/distribution-sambanova | Guide |
Cerebras | llamastack/distribution-cerebras | Guide |
Ollama | llamastack/distribution-ollama | Guide |
TGI | llamastack/distribution-tgi | Guide |
Together | llamastack/distribution-together | Guide |
Fireworks | llamastack/distribution-fireworks | Guide |
vLLM | llamastack/distribution-remote-vllm | Guide |
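To make the "no application-code changes" point concrete, here is a minimal sketch: the only thing that differs between a local distro and a hosted one is the server URL the client points at. The `LLAMA_STACK_ENDPOINT` variable name below is an illustrative convention, not an official one:

```python
import os

from llama_stack_client import LlamaStackClient

# Point the client at whichever distribution is running:
# a local distro (e.g., http://localhost:8321 backed by Ollama)
# or a hosted one -- the application code stays identical.
base_url = os.environ.get("LLAMA_STACK_ENDPOINT", "http://localhost:8321")
client = LlamaStackClient(base_url=base_url)
```

Switching environments then becomes a deployment concern (set the variable in your shell or container) rather than a code change.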
Please check out our Documentation page for more details.
Llama Stack ships two CLIs:

- `llama` — work with Llama models (download, study prompts) and build/start a Llama Stack distribution.
- `llama-stack-client` — query information about a running distribution.

Client SDKs are available in four languages:

Language | Client SDK | Package |
---|---|---|
Python | llama-stack-client-python | |
Swift | llama-stack-client-swift | |
Typescript | llama-stack-client-typescript | |
Kotlin | llama-stack-client-kotlin | |
Check out our client SDKs for connecting to a Llama Stack server in your preferred language: you can choose from Python, TypeScript, Swift, and Kotlin to quickly build your applications.
You can find more example scripts that use the client SDKs to talk to a Llama Stack server in our llama-stack-apps repo.
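For example, here is a short sketch that asks a running distribution which models it serves, assuming the Python SDK exposes a `client.models.list()` call mirroring the server's models API (verify the method and field names against your SDK version):

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# List the models registered with this distribution; each entry
# carries an identifier you can pass as model_id in inference calls.
for model in client.models.list():
    print(model.identifier)
```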