SWE-bench +

โ– SWE-bench 2025Aug 5

SWE-bench is a benchmark for evaluating large language models on real-world software issues collected from GitHub. Given a codebase and an issue, a language model is tasked with generating a patch that resolves the described problem. SWE-bench Verified is a human-validated subset that more reliably evaluates AI models’ ability to solve issues. The International Olympiad in Informatics (IOI) column reports scores on IOI competition problems, which feature standardized, automated grading.


Model | SWE-bench | 🏆 IOI | Organization | License | Date | Agent
Grok 4 | 72.6 | 26.2 | xAI | Proprietary | 2025-07-09 | OpenHands
GPT-5 (high) | 72.5 | 20 | OpenAI | Proprietary | 2025-08-07 | OpenHands
Claude Opus 4.1 | 72.1 | 15.2 | Anthropic | Proprietary | 2025-08-05 | OpenHands
Claude Sonnet 4 | 68 | 6.5 | Anthropic | Proprietary | 2025-05-14 | OpenHands
Claude Opus 4 | 67.6 | – | Anthropic | Proprietary | 2025-05-14 | mini-SWE-agent
Qwen3-Coder-480B-A35B-Instruct | 67 | – | Alibaba | Apache 2.0 | 2025-07-22 | OpenHands
Kimi K2 | 65.4 | 1.3 | Moonshot | Modified MIT | 2025-07-11 | OpenHands
GPT-5 (medium) | 65 | – | OpenAI | Proprietary | 2025-08-07 | mini-SWE-agent
GLM-4.5 | 64.2 | – | Z.ai | MIT | 2025-07-28 | OpenHands
GPT-5 mini | 59.8 | – | OpenAI | Proprietary | 2025-08-07 | mini-SWE-agent
o3 | 58.4 | – | OpenAI | Proprietary | 2025-04-16 | mini-SWE-agent
GLM-4.5-Air | 57.6 | – | Z.ai | MIT | 2025-07-28 | OpenHands
Gemini 2.5 Pro | 53.6 | 17.1 | Google | Proprietary | 2025-05-06 | mini-SWE-agent
Claude 3.7 Sonnet | 52.8 | – | Anthropic | Proprietary | 2025-02-19 | mini-SWE-agent
Qwen3-Coder-30B-A3B-Instruct | 51.6 | – | Alibaba | Apache 2.0 | 2025-07-30 | OpenHands
GPT-4.1 | 48.6 | – | OpenAI | Proprietary | 2025-04-14 | OpenHands
o4-mini | 45 | 5.3 | OpenAI | Proprietary | 2025-04-16 | mini-SWE-agent
DeepSeek-R1-0528 | 41.4 | – | DeepSeek | MIT | 2025-05-28 | OpenHands
DeepSeek-V3-0324 | 38.8 | 1.7 | DeepSeek | MIT | 2025-03-24 | OpenHands
GPT-5 nano | 34.8 | – | OpenAI | Proprietary | 2025-08-07 | mini-SWE-agent
Gemini 2.5 Flash | 28.7 | 33.9 | Google | Proprietary | 2025-04-17 | mini-SWE-agent
GPT-4.1 mini | 23.9 | 4 | OpenAI | Proprietary | 2025-04-14 | mini-SWE-agent
GPT-4o | 21.6 | 2 | OpenAI | Proprietary | 2024-11-20 | mini-SWE-agent
Llama 4 Maverick Instruct | 21.0 | 4 | Meta | Llama 4 | 2025-04-05 | mini-SWE-agent
Gemini 2.0 Flash | 13.5 | 2 | Google | Proprietary | 2025-02-05 | mini-SWE-agent
Llama 4 Scout Instruct | 9.0 | 6 | Meta | Llama 4 | 2025-04-05 | mini-SWE-agent
Qwen2.5-Coder-32B-Instruct | 9 | – | Alibaba | Apache 2.0 | 2024-11-12 | mini-SWE-agent

SWE-bench column: SWE-bench Verified (100 turns).
IOI column: IOI Benchmark (2024 and 2025 exams).

👋 Overview

SWE-bench tests AI systems’ ability to solve GitHub issues.

We collect 2,294 task instances by crawling Pull Requests and Issues from 12 popular Python repositories. Each instance is based on a pull request that (1) is associated with an issue, and (2) modified at least one test-related file.

Per instance, we construct an execution environment (a Docker image) with the repository successfully installed at the commit that the Pull Request is based on. Without the Pull Request’s changes, one or more tests fail; after the Pull Request is merged, those same tests pass. These “Fail-to-Pass” tests are the primary signal for evaluation.

SWE-bench evaluation works as follows. Per task instance, an AI system is given the issue text. The AI system should then modify the codebase in order to resolve the described issue. When the AI system is finished, we run the aforementioned Fail-to-Pass tests to check whether the issue was successfully resolved.
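To make this concrete, here is a minimal sketch of how you might inspect a single task instance. It assumes the Hugging Face datasets package and uses field names (problem_statement, FAIL_TO_PASS) as listed on the SWE-bench dataset cards; it is illustrative only, not part of the official workflow.

python - <<'EOF'
# Peek at one SWE-bench Verified task instance (requires: pip install datasets).
# Field names below are assumed from the dataset card and may change.
from datasets import load_dataset

ds = load_dataset("princeton-nlp/SWE-bench_Verified", split="test")
ex = ds[0]
print(ex["instance_id"])              # task identifier, e.g. repo__repo-12345
print(ex["problem_statement"][:300])  # the GitHub issue text given to the AI system
print(ex["FAIL_TO_PASS"])             # tests that must go from failing to passing
EOF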

This repository contains the code and data for SWE-bench and SWE-bench Multimodal (see the ✍️ Citation section below).

📰 News

  • [Jan. 13, 2025]: We’ve integrated SWE-bench Multimodal (paper, dataset) into this repository! Unlike SWE-bench, we’ve kept evaluation for the test split private. Submit to the leaderboard using sb-cli, our new cloud-based evaluation tool.
  • [Jan. 11, 2025]: Thanks to Modal, you can now run evaluations entirely on the cloud! See here for more details.
  • [Aug. 13, 2024]: Introducing SWE-bench Verified! Part 2 of our collaboration with OpenAI Preparedness. A subset of 500 problems that real software engineers have confirmed are solvable. Check out more in the report!
  • [Jun. 27, 2024]: We have an exciting update for SWE-bench - with support from OpenAI’s Preparedness team: We’re moving to a fully containerized evaluation harness using Docker for more reproducible evaluations! Read more in our report.
  • [Apr. 2, 2024]: We have released SWE-agent, which sets the state-of-the-art on the full SWE-bench test set! (Tweet 🔗)
  • [Jan. 16, 2024]: SWE-bench has been accepted to ICLR 2024 as an oral presentation! (OpenReview 🔗)

🚀 Set Up

SWE-bench uses Docker for reproducible evaluations. Follow the instructions in the Docker setup guide to install Docker on your machine. If you’re setting up on Linux, we recommend following the post-installation steps as well.
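To confirm Docker is working before you run the harness, you can use Docker’s standard smoke test (not specific to SWE-bench):

docker run hello-world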

Finally, to build SWE-bench from source, follow these steps:

git clone git@github.com:princeton-nlp/SWE-bench.git
cd SWE-bench
pip install -e .
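If you only need the evaluation harness rather than the latest source, installing the released package from PyPI should also work (assuming the published package name swebench):

pip install swebench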

Test your installation by running:

python -m swebench.harness.run_evaluation \
    --predictions_path gold \
    --max_workers 1 \
    --instance_ids sympy__sympy-20590 \
    --run_id validate-gold

โ„น๏ธ Note

If you are using macOS on Apple Silicon (M-series) or another ARM-based system, add --namespace '' to the above command. By default, the evaluation script pulls prebuilt images (built for Linux x86_64) from Docker Hub; adding --namespace '' causes the evaluation images to be built locally instead.
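For example, the installation check above becomes:

python -m swebench.harness.run_evaluation \
    --predictions_path gold \
    --max_workers 1 \
    --instance_ids sympy__sympy-20590 \
    --run_id validate-gold \
    --namespace ''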

💽 Usage

Evaluate patch predictions on SWE-bench Lite with the following command:

python -m swebench.harness.run_evaluation \
    --dataset_name princeton-nlp/SWE-bench_Lite \
    --predictions_path <path_to_predictions> \
    --max_workers <num_workers> \
    --run_id <run_id>
    # use --predictions_path 'gold' to verify the gold patches
    # use --run_id to name the evaluation run
    # use --modal true to run on Modal

This command will generate docker build logs (logs/build_images) and evaluation logs (logs/run_evaluation) in the current directory.

The final evaluation results will be stored in the evaluation_results directory.
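As a sketch, a predictions file is a JSON list with one entry per task instance. The field names instance_id, model_name_or_path, and model_patch follow the harness’s convention for gold predictions, but treat the exact schema (and the placeholder patch below) as assumptions to verify against --help.

# Hypothetical predictions file with a single entry; the patch string is only a placeholder.
cat > predictions.json <<'EOF'
[
  {
    "instance_id": "sympy__sympy-20590",
    "model_name_or_path": "my-model",
    "model_patch": "diff --git a/... (a unified diff produced by your system)"
  }
]
EOF

python -m swebench.harness.run_evaluation \
    --dataset_name princeton-nlp/SWE-bench_Lite \
    --predictions_path predictions.json \
    --max_workers 4 \
    --run_id my-model-lite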

โš ๏ธ Warning

SWE-bench evaluation can be resource intensive. We recommend running on an x86_64 machine with at least 120GB of free storage, 16GB of RAM, and 8 CPU cores. We also recommend setting --max_workers to fewer than min(0.75 * os.cpu_count(), 24).

If running with Docker Desktop, make sure to increase your virtual disk space so that ~120GB is free, and set --max_workers consistently with the guidance above based on the CPUs available to Docker.
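To turn that guidance into a concrete number for your machine, you can evaluate the same formula directly:

# Prints an upper bound for --max_workers based on this machine's CPU count.
python -c "import os; print(min(int(0.75 * os.cpu_count()), 24))"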

Support for arm64 machines is experimental.

To see the full list of arguments for the evaluation harness, run:

python -m swebench.harness.run_evaluation --help

โœ๏ธ Citation

If you find our work helpful, please use the following citations.

@inproceedings{
    jimenez2024swebench,
    title={{SWE}-bench: Can Language Models Resolve Real-world Github Issues?},
    author={Carlos E Jimenez and John Yang and Alexander Wettig and Shunyu Yao and Kexin Pei and Ofir Press and Karthik R Narasimhan},
    booktitle={The Twelfth International Conference on Learning Representations},
    year={2024},
    url={https://openreview.net/forum?id=VTF8yNQM66}
}

@inproceedings{
    yang2024swebenchmultimodal,
    title={{SWE}-bench Multimodal: Do AI Systems Generalize to Visual Software Domains?},
    author={John Yang and Carlos E. Jimenez and Alex L. Zhang and Kilian Lieret and Joyce Yang and Xindi Wu and Ori Press and Niklas Muennighoff and Gabriel Synnaeve and Karthik R. Narasimhan and Diyi Yang and Sida I. Wang and Ofir Press},
    booktitle={The Thirteenth International Conference on Learning Representations},
    year={2025},
    url={https://openreview.net/forum?id=riTiq3i21b}
}