
The official SWE-bench package - a benchmark for evaluating LMs on software engineering
Code and data for SWE-bench and SWE-bench Multimodal (see the citations below).
SWE-bench is a benchmark for evaluating large language models on real-world software issues collected from GitHub. Given a codebase and an issue, a language model is tasked with generating a patch that resolves the described problem.
To access SWE-bench, copy and run the following code:
from datasets import load_dataset
swebench = load_dataset('princeton-nlp/SWE-bench', split='test')
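To get a feel for the data, you can inspect a single instance. The lines below are a minimal sketch; the field names (instance_id, problem_statement, patch) follow the published dataset schema, so treat them as assumptions if the schema has changed:
# assumes `swebench` from the load_dataset call above
instance = swebench[0]
print(instance['instance_id'])               # e.g. a repo__repo-issue identifier
print(instance['problem_statement'][:300])   # the GitHub issue text
print(instance['patch'][:300])               # the gold patch that resolves it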
SWE-bench uses Docker for reproducible evaluations. Follow the instructions in the Docker setup guide to install Docker on your machine. If you're setting up on Linux, we also recommend following the post-installation steps.
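Before running the harness, it's worth confirming that Docker is installed and the daemon is reachable. A minimal sanity check (an illustrative sketch, not part of the SWE-bench package):
import shutil
import subprocess

# Confirm the Docker CLI is on PATH and the daemon responds.
assert shutil.which('docker') is not None, 'Docker CLI not found on PATH'
subprocess.run(['docker', 'info'], check=True)  # raises if the daemon is unreachable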
Finally, to build SWE-bench from source, follow these steps:
git clone git@github.com:princeton-nlp/SWE-bench.git
cd SWE-bench
pip install -e .
Test your installation by running:
python -m swebench.harness.run_evaluation \
--predictions_path gold \
--max_workers 1 \
--instance_ids sympy__sympy-20590 \
--run_id validate-gold
Evaluate patch predictions on SWE-bench Lite with the following command:
python -m swebench.harness.run_evaluation \
--dataset_name princeton-nlp/SWE-bench_Lite \
--predictions_path <path_to_predictions> \
--max_workers <num_workers> \
--run_id <run_id>
# use --predictions_path 'gold' to verify the gold patches
# use --run_id to name the evaluation run
This command will generate Docker build logs (logs/build_images) and evaluation logs (logs/run_evaluation) in the current directory. The final evaluation results will be stored in the evaluation_results directory.
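For reference, a predictions file is a list of JSON records, one per benchmark instance. The sketch below writes a minimal JSONL file; the model name, filename, and patch text are hypothetical, and the key names (instance_id, model_name_or_path, model_patch) follow the prediction format documented for the harness:
import json

# Hypothetical prediction for a single instance; model_patch holds a unified diff.
predictions = [
    {
        'instance_id': 'sympy__sympy-20590',
        'model_name_or_path': 'my-model',                      # hypothetical model name
        'model_patch': 'diff --git a/path b/path\n...',        # placeholder diff text
    },
]

with open('all_preds.jsonl', 'w') as f:
    for pred in predictions:
        f.write(json.dumps(pred) + '\n')
You would then pass --predictions_path all_preds.jsonl (a hypothetical filename) to the evaluation command above.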
[!WARNING] SWE-bench evaluation can be resource intensive. We recommend running on an x86_64 machine with at least 120GB of free storage, 16GB of RAM, and 8 CPU cores. We recommend using fewer than min(0.75 * os.cpu_count(), 24) workers for --max_workers. If running with Docker Desktop, make sure to increase your virtual disk space to have ~120GB free, and set --max_workers consistently with the above for the CPUs available to Docker. Support for arm64 machines is experimental.
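To turn that rule of thumb into a concrete number, you can compute it directly (a minimal sketch of the formula above):
import os

# Conservative worker count: at most 75% of available cores, capped at 24.
max_workers = min(int(0.75 * (os.cpu_count() or 1)), 24)
print(f'Suggested --max_workers: {max_workers}')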
To see the full list of arguments for the evaluation harness, run:
python -m swebench.harness.run_evaluation --help
See the evaluation tutorial for the full rundown on datasets you can evaluate. If you're looking for non-local, cloud-based evaluations, check out...
Additionally, you can:
We would love to hear from the broader NLP, Machine Learning, and Software Engineering research communities, and we welcome any contributions! To get involved, file a new pull request or issue and fill in the corresponding template. We'll be sure to follow up shortly!
Contact person: Carlos E. Jimenez and John Yang (Email: carlosej@princeton.edu, johnby@stanford.edu).
If you find our work helpful, please use the following citations.
@inproceedings{
jimenez2024swebench,
title={{SWE}-bench: Can Language Models Resolve Real-world Github Issues?},
author={Carlos E Jimenez and John Yang and Alexander Wettig and Shunyu Yao and Kexin Pei and Ofir Press and Karthik R Narasimhan},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=VTF8yNQM66}
}
@inproceedings{
yang2024swebenchmultimodal,
title={{SWE}-bench Multimodal: Do AI Systems Generalize to Visual Software Domains?},
author={John Yang and Carlos E. Jimenez and Alex L. Zhang and Kilian Lieret and Joyce Yang and Xindi Wu and Ori Press and Niklas Muennighoff and Gabriel Synnaeve and Karthik R. Narasimhan and Diyi Yang and Sida I. Wang and Ofir Press},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=riTiq3i21b}
}
MIT. Check LICENSE.md.