Your go-to toolkit for lightning-fast, flexible LLM evaluation, from Hugging Face's Leaderboard and Evals Team.
Documentation: Lighteval's docs
Lighteval is your all-in-one toolkit for evaluating LLMs with ease across multiple backends: transformers, tgi, vllm, or nanotron. Dive deep into your model's performance by saving and exploring detailed, sample-by-sample results, so you can debug and see how your models stack up.
Customization at your fingertips: browse all our existing tasks and metrics, or effortlessly create your own, tailored to your needs.
Seamlessly experiment, benchmark, and store your results on the Hugging Face Hub, S3, or locally.
```bash
pip install lighteval
```
Lighteval supports many optional extras at install time; see the documentation for the complete list.
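For example, you can install the dependencies for a specific backend by naming the matching extra. A minimal sketch (the `vllm` extra name is an assumption; check the documentation for the exact extras available in your version):

```bash
# Hypothetical example: install lighteval together with the vllm backend deps.
# Quoting the argument keeps zsh from treating the brackets as a glob.
pip install "lighteval[vllm]"
```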
If you want to push results to the Hugging Face Hub, log in with your access token:

```bash
huggingface-cli login
```
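In non-interactive environments (CI, containers), you can instead expose the token through the `HF_TOKEN` environment variable, which the `huggingface_hub` library reads automatically:

```bash
# Alternative to the interactive login: export the token directly.
# The value is a placeholder; create a token at https://huggingface.co/settings/tokens
export HF_TOKEN=<your-access-token>
```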
Lighteval offers several entry points for model evaluation:

- `lighteval accelerate`: evaluate models on CPU or one or more GPUs using 🤗 Accelerate
- `lighteval nanotron`: evaluate models in distributed settings using ⚡️ Nanotron
- `lighteval vllm`: evaluate models on one or more GPUs using 🚀 VLLM
- `lighteval endpoint`
  - `inference-endpoint`: evaluate models on one or more GPUs using 🔗 Inference Endpoints
  - `tgi`: evaluate models on one or more GPUs using 🔗 Text Generation Inference
  - `openai`: evaluate models on one or more GPUs using 🔗 OpenAI API

Here's a quick command to evaluate using the Accelerate backend:
```bash
lighteval accelerate \
    "pretrained=gpt2" \
    "leaderboard|truthfulqa:mc|0|0"
```
Lighteval started as an extension of the fantastic Eleuther AI Harness (which powers the Open LLM Leaderboard) and draws inspiration from the amazing HELM framework.
While evolving Lighteval into its own standalone tool, we are grateful to the Harness and HELM teams for their pioneering work on LLM evaluations.
Got ideas? Found a bug? Want to add a task or metric? Contributions are warmly welcomed!
If you're adding a new feature, please open an issue first.
If you open a PR, don't forget to run the styling!
```bash
pip install -e .[dev]
pre-commit install
pre-commit run --all-files
```
```bibtex
@misc{lighteval,
  author = {Fourrier, Clémentine and Habib, Nathan and Wolf, Thomas and Tunstall, Lewis},
  title = {LightEval: A lightweight framework for LLM evaluation},
  year = {2023},
  version = {0.5.0},
  url = {https://github.com/huggingface/lighteval}
}
```