
👋 Hi, everyone!
We are the ByteDance Seed team.

You can get to know us better through the following channels👇


🚀 Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving


We are extremely delighted to release Multi-SWE-bench! Multi-SWE-bench addresses the lack of multilingual benchmarks for evaluating LLMs on real-world code issue resolution. Unlike existing Python-centric benchmarks (e.g., SWE-bench), our framework spans 7 languages (Java, TypeScript, JavaScript, Go, Rust, C, and C++) with 1,632 high-quality instances, curated from 2,456 candidates by 68 expert annotators for reliability.

We aim to accelerate progress in automated issue resolution and RL, bridging the gap toward AGI. Join the Multi-SWE-RL community to help expand datasets, tools, and research collaboration!

📢 News

[2025/04/15] 🔥 We released Multi-SWE-bench mini! A lightweight version of the full benchmark with 400 instances in total, covering 8 languages, designed to reduce compute cost and make evaluation faster and easier.

[2025/04/03] 🔥 We released Multi-SWE-bench and Multi-SWE-RL.

⚡ Features

  • Comprehensive Evaluation: Evaluating nine powerful models (GPT-4o, OpenAI-o1, OpenAI-o3-mini-high, Claude-3.5-Sonnet, Claude-3.7-Sonnet, DeepSeek-V3, DeepSeek-R1, Qwen2.5-72B-Instruct, and Doubao-1.5-Pro) across three agent frameworks (Agentless, SWE-agent, OpenHands), yielding several valuable insights.
  • Multi-SWE-RL Community: An open-source initiative for building large-scale RL datasets. The initial release includes 4,723 instances to advance RL research.
  • Fully Open Source Data, Code, and Environment: All data, code, and container images are publicly released, along with detailed tutorials, to foster community contributions and enable scalable extension.

📊 Evaluation

Run Evaluation

To run the evaluation, you need to prepare the following:

  • Patch Files: One or more patch files in JSONL format, where each item contains the following fields (see the sketch after this list for how to generate such a file):

    • org: Organization Name
    • repo: Repository Name
    • number: Pull Request Number
    • fix_patch: Fix Patch Content

    Example:

    {
        "org": "zeromicro",
        "repo": "go-zero",
        "number": "2787",
        "fix_patch": "diff --git ...."
    }
    
  • Dataset Files: Dataset files in JSONL format available on Hugging Face, such as Multi-SWE-bench or Multi-SWE-RL

  • (Optional) Docker Images: You can download required Docker images using scripts/download_images.ps1 (for Windows) or scripts/download_images.sh (for Linux/macOS) with either verified images or RL images:

    # For Windows
    .\scripts\download_images.ps1 scripts\images_verified.txt  # For verified images
    .\scripts\download_images.ps1 scripts\images_rl.txt        # For RL images
    
    # For Linux/macOS
    bash scripts/download_images.sh scripts/images_verified.txt  # For verified images
    bash scripts/download_images.sh scripts/images_rl.txt        # For RL images
    

    This step is optional. If images don't exist locally, they will be built during evaluation.
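As a concrete illustration of the patch-file format above, the following minimal Python sketch (not part of the official tooling) writes predictions to a JSONL file the harness can consume. The field names (org, repo, number, fix_patch) follow the format described above; the output path and the diff content are placeholders.

import json

# Hypothetical predictions produced by your agent; the field names follow the
# patch-file format documented above. The diff string is only a placeholder.
predictions = [
    {
        "org": "zeromicro",
        "repo": "go-zero",
        "number": "2787",
        "fix_patch": "diff --git a/file.go b/file.go\n...",
    },
]

# Write one JSON object per line (JSONL), as expected by the evaluation harness.
with open("my_patches.jsonl", "w", encoding="utf-8") as f:
    for item in predictions:
        f.write(json.dumps(item, ensure_ascii=False) + "\n")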

Then you can run the evaluation using the following command:

python -m multi_swe_bench.harness.run_evaluation --config /path/to/your/config.json

The evaluation process will generate a final_report.json file in your specified output_dir, which provides a summary of results including resolved_instances, unresolved_instances, and other metrics. For detailed information about failed instances and specific error reasons, you can check the log files in the log_dir directory.
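For a quick look at the outcome without opening the report by hand, a small sketch along these lines can read final_report.json. The exact schema may differ (the fields may hold lists of instance identifiers or plain counts), so treat this as illustrative and check your own report.

import json
from pathlib import Path

# Assumes output_dir is ./data/dataset, as in the example config below.
report = json.loads(Path("./data/dataset/final_report.json").read_text(encoding="utf-8"))

def as_count(value):
    # The report fields may be lists of instance IDs or plain counts.
    return len(value) if isinstance(value, (list, dict)) else value

resolved = as_count(report.get("resolved_instances", []))
unresolved = as_count(report.get("unresolved_instances", []))
print(f"resolved: {resolved}, unresolved: {unresolved}")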

Configuration File Example

{
    "mode": "evaluation",
    "workdir": "./data/workdir",
    "patch_files": [
        "./data/patches/<your_patch_file>.jsonl"
    ],
    "dataset_files": [
        "./data/patches/<to_evaluate_dataset_file>.jsonl"
    ],
    "force_build": false,
    "output_dir": "./data/dataset",
    "specifics": [],
    "skips": [],
    "repo_dir": "./data/repos",
    "need_clone": false,
    "global_env": [],
    "clear_env": true,
    "stop_on_error": true,
    "max_workers": 8,
    "max_workers_build_image": 8,
    "max_workers_run_instance": 8,
    "log_dir": "./data/logs",
    "log_level": "DEBUG"
}

Configuration Parameters

  • mode: Execution mode for the script. Options: "evaluation", "instance", "instance_only", "image". Default: "evaluation".
  • workdir: Working directory path for evaluation operations.
  • patch_files: List of patch file paths in JSONL format (supports glob patterns).
  • dataset_files: List of dataset file paths in JSONL format (supports glob patterns).
  • force_build: Whether to force rebuilding Docker images even if they already exist.
  • output_dir: Directory path for output results.
  • specifics: List of specific PR IDs to evaluate (empty = all).
  • skips: List of PR IDs to skip during evaluation.
  • repo_dir: Directory containing cloned repositories.
  • need_clone: Whether repositories should be cloned if not present.
  • global_env: Global environment variables to pass to Docker containers (format: "KEY=VALUE").
  • clear_env: Whether to clear environment variables in Docker containers.
  • stop_on_error: Whether to stop execution when an error occurs.
  • max_workers: Maximum number of concurrent worker threads for general tasks.
  • max_workers_build_image: Maximum number of concurrent worker threads for building Docker images.
  • max_workers_run_instance: Maximum number of concurrent worker threads for running instances.
  • log_dir: Directory for log files.
  • log_level: Logging level. Options: "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL".
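If you prefer to generate the configuration programmatically rather than editing JSON by hand, a sketch along these lines builds a config from the parameters listed above and then launches the harness. The key names mirror the list above; the paths, file names, and worker counts are illustrative and should be adjusted to your setup.

import json
import subprocess
import sys

# Build an evaluation config using the parameters documented above.
config = {
    "mode": "evaluation",
    "workdir": "./data/workdir",
    "patch_files": ["./data/patches/my_patches.jsonl"],
    "dataset_files": ["./data/datasets/*.jsonl"],  # glob patterns are supported
    "force_build": False,
    "output_dir": "./data/dataset",
    "specifics": [],  # empty list = evaluate all instances
    "skips": [],
    "repo_dir": "./data/repos",
    "need_clone": False,
    "global_env": [],
    "clear_env": True,
    "stop_on_error": True,
    "max_workers": 8,
    "max_workers_build_image": 8,
    "max_workers_run_instance": 8,
    "log_dir": "./data/logs",
    "log_level": "INFO",
}

with open("config.json", "w", encoding="utf-8") as f:
    json.dump(config, f, indent=4)

# Equivalent to: python -m multi_swe_bench.harness.run_evaluation --config config.json
subprocess.run(
    [sys.executable, "-m", "multi_swe_bench.harness.run_evaluation", "--config", "config.json"],
    check=True,
)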

✅ Integration Checklist

We are working to unify instances from prior benchmarks and training datasets into our framework for consistent comparison and reuse.

  • Integrate 500 Python instances from SWE-bench Verified
  • Integrate 78 Java instances from SWE-bench-java
  • Support customizing run.sh, test-run.sh, and fix-run.sh commands via configuration file using "run_cmd", "test_patch_run_cmd", and "fix_patch_run_cmd"
  • Publish as a pip package for easier installation and reuse
  • Integrate 2,438 Python instances from SWE-gym
  • Integrate instances from R2E-Gym

🏆 Multi-SWE-RL Community


The Multi-SWE-RL Community is an open-source initiative focused on collaborative dataset creation for software engineering and reinforcement learning research. To foster active participation and recognize contributors, we introduce this Contribution Incentive Plan. By contributing high-quality data, you directly support advancements in AI research and earn recognition within the community.

Incentive tiers and full details are described in the Contribution Incentive Plan.

Get Started in 2 Steps:

Join our Discord to take part in Multi-SWE-RL and Multi-SWE-bench related discussions!


🙏 Acknowledgements

We express our deepest gratitude to the creators of the SWE-bench dataset. This project references their repository and builds upon their work.

📖 Citation

If you find Multi-SWE-bench useful for your research and applications, feel free to give us a star ⭐ or cite us using:

@misc{zan2025multiswebench,
      title={Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving}, 
      author={Daoguang Zan and Zhirong Huang and Wei Liu and Hanwu Chen and Linhao Zhang and Shulin Xin and Lu Chen and Qi Liu and Xiaojian Zhong and Aoyan Li and Siyao Liu and Yongsheng Xiao and Liangqiang Chen and Yuyu Zhang and Jing Su and Tianyu Liu and Rui Long and Kai Shen and Liang Xiang},
      year={2025},
      eprint={2504.02605},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2504.02605}, 
}

📜 License

This project is licensed under the Apache License 2.0. See the LICENSE file for details.

🏢 About ByteDance Seed Team

Founded in 2023, ByteDance Seed Team is dedicated to crafting the industry's most advanced AI foundation models. The team aspires to become a world-class research team and make significant contributions to the advancement of science and society.
