Star us, and you will receive all release notifications from GitHub without any delay ~ ⭐️
🧭 Welcome to OpenCompass!
Just like a compass guides us on our journey, OpenCompass will guide you through the complex landscape of evaluating large language models. With its powerful algorithms and intuitive interface, OpenCompass makes it easy to assess the quality and effectiveness of your NLP models.
🚩🚩🚩 Explore opportunities at OpenCompass! We're currently hiring full-time researchers/engineers and interns. If you're passionate about LLM and OpenCompass, don't hesitate to reach out to us via email. We'd love to hear from you!
🔥🔥🔥 We are delighted to announce that OpenCompass has been recommended by Meta AI. Click "Get Started" on the Llama page for more information.
Attention
Breaking Change Notice: In version 0.4.0, we are consolidating all AMOTIC configuration files (previously located in ./configs/datasets, ./configs/models, and ./configs/summarizers) into the opencompass package. Users are advised to update their configuration references to reflect this structural change.
🚀 What's New
[2024.11.14] OpenCompass now offers support for a sophisticated benchmark designed to evaluate complex reasoning skills — MuSR. Check out the demo and give it a spin! 🔥🔥🔥
[2024.11.14] OpenCompass now supports the brand new long-context language model evaluation benchmark — BABILong. Have a look at the demo and give it a try! 🔥🔥🔥
[2024.10.14] We now support the OpenAI multilingual QA dataset MMMLU. Feel free to give it a try! 🔥🔥🔥
[2024.09.19] We now support Qwen2.5 (0.5B to 72B) with multiple backends (HuggingFace/vLLM/LMDeploy). Feel free to give them a try! 🔥🔥🔥
[2024.09.17] We now support OpenAI o1 (o1-mini-2024-09-12 and o1-preview-2024-09-12). Feel free to give them a try! 🔥🔥🔥
[2024.09.05] We now support answer extraction through model post-processing to provide a more accurate representation of the model's capabilities. As part of this update, we have integrated XFinder as our first post-processing model. For more detailed information, please refer to the documentation, and give it a try! 🔥🔥🔥
[2024.08.20] OpenCompass now supports SciCode: A Research Coding Benchmark Curated by Scientists. 🔥🔥🔥
[2024.08.16] OpenCompass now supports the brand new long-context language model evaluation benchmark — RULER. RULER provides an evaluation of long-context including retrieval, multi-hop tracing, aggregation, and question answering through flexible configurations. Check out the RULER evaluation config now! 🔥🔥🔥
[2024.08.09] We have released the example data and configuration for CompassBench-202408; see CompassBench for more details. 🔥🔥🔥
[2024.08.01] We now support the Gemma2 models. Feel free to give them a try! 🔥🔥🔥
[2024.07.23] We now support ModelScope datasets; you can load them on demand without downloading all the data to your local disk. Feel free to give them a try! 🔥🔥🔥
[2024.07.17] We are excited to announce the release of NeedleBench's technical report. We invite you to visit our support documentation for detailed evaluation guidelines. 🔥🔥🔥
[2024.07.04] OpenCompass now supports InternLM2.5, which has outstanding reasoning capability, a 1M context window, and stronger tool use. You can try the models in OpenCompass Config and InternLM. 🔥🔥🔥
[2024.06.20] OpenCompass now supports one-click switching between inference acceleration backends, enhancing the efficiency of the evaluation process. In addition to the default HuggingFace inference backend, it now also supports the popular backends LMDeploy and vLLM. This feature is available via a simple command-line switch and through deployment APIs. For detailed usage, see the documentation. 🔥🔥🔥
We provide OpenCompass Leaderboard for the community to rank all public models and API models. If you would like to join the evaluation, please provide the model repository URL or a standard API interface to the email address opencompass@pjlab.org.cn.
```bash
pip install -U opencompass

## Full installation (with support for more datasets)
# pip install "opencompass[full]"

## Environment with model acceleration frameworks
## Manage different acceleration frameworks using virtual environments
## since they usually have dependency conflicts with each other.
# pip install "opencompass[lmdeploy]"
# pip install "opencompass[vllm]"

## API evaluation (e.g. OpenAI, Qwen)
# pip install "opencompass[api]"
```
Install OpenCompass from source
If you want to use OpenCompass's latest features, or develop new features, you can also build it from source:
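A minimal sketch of a source install, assuming the standard GitHub repository:

```bash
# Clone the repository and install in editable mode
git clone https://github.com/open-compass/opencompass opencompass
cd opencompass
pip install -e .
```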
You can choose one of the following methods to prepare the datasets.
Offline Preparation
You can download and extract the datasets with the following commands:
```bash
# Download dataset to data/ folder
wget https://github.com/open-compass/opencompass/releases/download/0.2.2.rc1/OpenCompassData-core-20240207.zip
unzip OpenCompassData-core-20240207.zip
```
Automatic Download from OpenCompass
We support automatic download of datasets from the OpenCompass storage server. You can run the evaluation with the extra --dry-run flag to download these datasets.
Currently, the supported datasets are listed here. More datasets will be uploaded soon.
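For example, a minimal sketch of the automatic download, reusing the demo configs referenced later in this README (the model config name is an assumption based on the bundled demos):

```bash
# --dry-run downloads the required datasets without running inference
opencompass --models hf_internlm2_5_1_8b_chat --datasets demo_gsm8k_chat_gen --dry-run
```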
(Optional) Automatic Download with ModelScope
Alternatively, you can use ModelScope to load the datasets on demand.
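A minimal sketch, assuming OpenCompass's documented ModelScope integration via the DATASET_SOURCE environment variable:

```bash
pip install modelscope

# Switch the dataset download source to ModelScope;
# datasets are then fetched on demand during evaluation.
export DATASET_SOURCE=ModelScope
```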
Some third-party features, like HumanEval and Llama, may require additional steps to work properly. For detailed steps, please refer to the Installation Guide.
After ensuring that OpenCompass is installed correctly according to the steps above and the datasets are prepared, you can start your first evaluation using OpenCompass!
Your first evaluation with OpenCompass!
OpenCompass supports setting your configurations via the CLI or a Python script. For simple evaluation settings we recommend the CLI; for more complex evaluations, the script approach is suggested. You can find more example scripts under the configs folder.
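As a first run, a minimal CLI sketch using the demo configs bundled with OpenCompass (the model config name hf_internlm2_5_1_8b_chat is an assumption based on the bundled demos; demo_gsm8k_chat_gen also appears in the API example below):

```bash
# Evaluate a small chat model on the GSM8K demo subset;
# --debug prints logs to the console for easier troubleshooting.
opencompass --models hf_internlm2_5_1_8b_chat --datasets demo_gsm8k_chat_gen --debug
```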
API evaluation
OpenCompass, by design, does not really discriminate between open-source models and API models. You can evaluate both model types in the same way, or even in one setting.
```bash
export OPENAI_API_KEY="YOUR_OPEN_API_KEY"

# CLI
opencompass --models gpt_4o_2024_05_13 --datasets demo_gsm8k_chat_gen

# Python scripts
opencompass ./configs/eval_api_demo.py

# You can use o1_mini_2024_09_12/o1_preview_2024_09_12 for o1 models;
# we set max_completion_tokens=8192 as default.
```
Accelerated Evaluation
Additionally, if you want to use an inference backend other than HuggingFace for accelerated evaluation, such as LMDeploy or vLLM, you can do so with the command below. Please ensure that you have installed the necessary packages for the chosen backend and that your model supports accelerated inference with it. For more information, see the documentation on inference acceleration backends here. Below is an example using LMDeploy:
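A minimal sketch, reusing the demo configs from above (treat the exact flag spelling of -a as an assumption if your version differs):

```bash
# Same evaluation as before, but with LMDeploy as the inference backend
opencompass --models hf_internlm2_5_1_8b_chat --datasets demo_gsm8k_chat_gen -a lmdeploy --debug
```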
OpenCompass has predefined configurations for many models and datasets. You can list all available model and dataset configurations using the tools.
```bash
# List all configurations
python tools/list_configs.py

# List all configurations related to llama and mmlu
python tools/list_configs.py llama mmlu
```
If the model is not on the list but is supported by the HuggingFace AutoModel class, you can still evaluate it with OpenCompass. You are welcome to contribute to the maintenance of the OpenCompass-supported model and dataset lists.
--hf-num-gpus is used for model parallelism (HuggingFace format), while --max-num-worker is used for data parallelism.
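For instance, a sketch combining the two flags (the --hf-type and --hf-path values are illustrative assumptions):

```bash
# Shard the model across 2 GPUs (model parallel) and
# run 4 evaluation workers over the dataset (data parallel)
opencompass --datasets demo_gsm8k_chat_gen \
    --hf-type chat --hf-path internlm/internlm2_5-1_8b-chat \
    --hf-num-gpus 2 --max-num-worker 4
```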
> [!TIP]
> Configurations with `_ppl` are typically designed for base models.
> Configurations with `_gen` can be used for both base models and chat models.
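For example, a sketch contrasting the two suffixes (mmlu_ppl and mmlu_gen are assumed config names following this naming convention):

```bash
# Perplexity-based config, typically for base models
opencompass --models hf_internlm2_5_1_8b --datasets mmlu_ppl

# Generation-based config, usable for base and chat models
opencompass --models hf_internlm2_5_1_8b_chat --datasets mmlu_gen
```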
Through the command line or configuration files, OpenCompass also supports evaluating APIs or custom models, as well as more diversified evaluation strategies. Please read the Quick Start to learn how to run an evaluation task.
We are thrilled to introduce OpenCompass 2.0, an advanced suite featuring three key components: CompassKit, CompassHub, and CompassRank.
CompassRank has been significantly enhanced into leaderboards that now incorporate both open-source and proprietary benchmarks. This upgrade allows for a more comprehensive evaluation of models across the industry.
CompassHub presents a pioneering benchmark browser interface, designed to simplify and expedite the exploration and utilization of an extensive array of benchmarks for researchers and practitioners alike. To enhance the visibility of your own benchmark within the community, we warmly invite you to contribute it to CompassHub. You may initiate the submission process by clicking here.
CompassKit is a powerful collection of evaluation toolkits specifically tailored for Large Language Models and Large Vision-language Models. It provides an extensive set of tools to assess and measure the performance of these complex models effectively. You are welcome to try our toolkits in your research and products.
✨ Introduction
OpenCompass is a one-stop platform for large model evaluation, aiming to provide a fair, open, and reproducible benchmark for large model evaluation. Its main features include:
Comprehensive support for models and datasets: Pre-support for 20+ HuggingFace and API models, a model evaluation scheme of 70+ datasets with about 400,000 questions, comprehensively evaluating the capabilities of the models in five dimensions.
Efficient distributed evaluation: One line command to implement task division and distributed evaluation, completing the full evaluation of billion-scale models in just a few hours.
Diversified evaluation paradigms: Support for zero-shot, few-shot, and chain-of-thought evaluations, combined with standard or dialogue-type prompt templates, to easily stimulate the maximum performance of various models.
Modular design with high extensibility: Want to add new models or datasets, customize an advanced task division strategy, or even support a new cluster management system? Everything about OpenCompass can be easily expanded!
Experiment management and reporting mechanism: Use config files to fully record each experiment, and support real-time reporting of results.
📖 Dataset Support
**Language**
- Word Definition: WiC, SummEdits
- Idiom Learning: CHID
- Semantic Similarity: AFQMC, BUSTM
- Coreference Resolution: CLUEWSC, WSC, WinoGrande
- Translation: Flores, IWSLT2017
- Multi-language Question Answering: TyDi-QA, XCOPA
- Multi-language Summary: XLSum

**Knowledge**
- Knowledge Question Answering: BoolQ, CommonSenseQA, NaturalQuestions, TriviaQA

**Reasoning**
- Textual Entailment: CMNLI, OCNLI, OCNLI_FC, AX-b, AX-g, CB, RTE, ANLI
- Commonsense Reasoning: StoryCloze, COPA, ReCoRD, HellaSwag, PIQA, SIQA
- Mathematical Reasoning: MATH, GSM8K
- Theorem Application: TheoremQA, StrategyQA, SciBench
- Comprehensive Reasoning: BBH

**Examination**
- Junior High, High School, University, Professional Examinations