llama-models 0.0.61 (PyPI) · Maintainers: 5
🤗 Models on Hugging Face  | Blog  | Website  | Get Started 


Llama Models

Llama is an accessible, open large language model (LLM) designed for developers, researchers, and businesses to build, experiment, and responsibly scale their generative AI ideas. Part of a foundational system, it serves as a bedrock for innovation in the global community. A few key aspects:

  1. Open access: Easy access to cutting-edge large language models, fostering collaboration and advancement among developers, researchers, and organizations.
  2. Broad ecosystem: Llama models have been downloaded hundreds of millions of times, thousands of community projects have been built on Llama, and platform support is broad, from cloud providers to startups. The world is building with Llama!
  3. Trust & safety: Llama models are part of a comprehensive approach to trust and safety. We release models and tools designed to enable community collaboration and to encourage standardization in the development and use of trust and safety tools for generative AI.

Our mission is to empower individuals and industry through this opportunity while fostering an environment of discovery and ethical AI advancements. The model weights are licensed for researchers and commercial entities, upholding the principles of openness.


| Model | Launch date | Model sizes | Context length | Tokenizer | Acceptable use policy | License | Model card |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Llama 2 | 7/18/2023 | 7B, 13B, 70B | 4K | SentencePiece | Use Policy | License | Model Card |
| Llama 3 | 4/18/2024 | 8B, 70B | 8K | TikToken-based | Use Policy | License | Model Card |
| Llama 3.1 | 7/23/2024 | 8B, 70B, 405B | 128K | TikToken-based | Use Policy | License | Model Card |
| Llama 3.2 | 9/25/2024 | 1B, 3B | 128K | TikToken-based | Use Policy | License | Model Card |
| Llama 3.2-Vision | 9/25/2024 | 11B, 90B | 128K | TikToken-based | Use Policy | License | Model Card |

Download

To download the model weights and tokenizer:

  1. Visit the Meta Llama website.

  2. Read and accept the license.

  3. Once your request is approved you will receive a signed URL via email.

  4. Install the Llama CLI: pip install llama-stack. (Start here if you have already received an email.)

  5. Run llama model list to show the latest available models and determine the model ID you wish to download. NOTE: If you want older versions of models, run llama model list --show-all to show all the available Llama models.

  6. Run: llama download --source meta --model-id CHOSEN_MODEL_ID

  7. Pass the URL provided when prompted to start the download.

Remember that the links expire after 24 hours and after a certain number of downloads. You can always re-request a link if you start seeing errors such as 403: Forbidden.
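The CLI steps above can be sketched as a single shell session. The model ID below is illustrative; pick the one you want from the output of llama model list:

```shell
#!/bin/bash
# Install the Llama CLI (step 4).
pip install llama-stack

# List the latest available models and note the ID you want (step 5).
llama model list

# Download the chosen model (step 6); the CLI prompts for the signed URL
# from your approval email (step 7). The model ID here is illustrative.
llama download --source meta --model-id Llama3.1-8B-Instruct
```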

Running the models

You need to install the following dependencies (in addition to the requirements.txt in the root directory of this repository) to run the models:

pip install torch fairscale fire blobfile

After installing the dependencies, you can run the example scripts (in the llama_models/scripts/ subdirectory) as follows:

#!/bin/bash

CHECKPOINT_DIR=~/.llama/checkpoints/Meta-Llama3.1-8B-Instruct
PYTHONPATH=$(git rev-parse --show-toplevel) torchrun llama_models/scripts/example_chat_completion.py $CHECKPOINT_DIR

The above script should be used with an Instruct (Chat) model. For a Base model, use the script llama_models/scripts/example_text_completion.py. Note that these scripts work with both the Llama 3 and Llama 3.1 series of models.
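A Base-model run might look like the following; the checkpoint path is illustrative, so use whatever directory llama download created for your model:

```shell
#!/bin/bash
# Base (non-Instruct) checkpoint; path is illustrative.
CHECKPOINT_DIR=~/.llama/checkpoints/Meta-Llama3.1-8B

# Same invocation pattern as the chat example, but with the
# text-completion script for Base models.
PYTHONPATH=$(git rev-parse --show-toplevel) torchrun \
  llama_models/scripts/example_text_completion.py $CHECKPOINT_DIR
```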

For running larger models with tensor parallelism, modify the invocation as follows:

#!/bin/bash

NGPUS=8
PYTHONPATH=$(git rev-parse --show-toplevel) torchrun \
  --nproc_per_node=$NGPUS \
  llama_models/scripts/example_chat_completion.py $CHECKPOINT_DIR \
  --model_parallel_size $NGPUS

For more flexibility in running inference (including running FP8 inference), please see the Llama Stack repository.

Access to Hugging Face

We also provide downloads on Hugging Face, in both transformers and native llama3 formats. To download the weights from Hugging Face, please follow these steps:

  • Visit one of the repos, for example meta-llama/Meta-Llama-3.1-8B-Instruct.
  • Read and accept the license. Once your request is approved, you'll be granted access to all Llama 3.1 models as well as previous versions. Requests can take up to one hour to be processed.
  • To download the original native weights to use with this repo, click on the "Files and versions" tab and download the contents of the original folder. You can also download them from the command line after installing the Hugging Face CLI (pip install huggingface-hub):
huggingface-cli download meta-llama/Meta-Llama-3.1-8B-Instruct --include "original/*" --local-dir meta-llama/Meta-Llama-3.1-8B-Instruct

NOTE: The original native weights of meta-llama/Meta-Llama-3.1-405B are not available through this Hugging Face repo.

  • To use with transformers, the following pipeline snippet will download and cache the weights:

    import transformers
    import torch

    model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"

    pipeline = transformers.pipeline(
      "text-generation",
      model=model_id,
      model_kwargs={"torch_dtype": torch.bfloat16},
      device="cuda",
    )
    pipeline("Hey, how are you doing today?")

Installation

You can install this repository as a package with:

pip install llama-models

Responsible Use

Llama models are a new technology that carries potential risks with use. Testing conducted to date has not — and could not — cover all scenarios. To help developers address these risks, we have created the Responsible Use Guide.

Issues

Please report any software bugs or other problems with the models through the project's issue trackers.

Questions

For common questions, the FAQ can be found here, which will be updated over time as new questions arise.
