---
language:
- en
license: apache-2.0
size_categories:
- 1K<n<10K
task_categories:
- text-generation
pretty_name: MEnvData-SWE
tags:
- code
- software-engineering
- docker
- environment
- multi-language
- swe-bench
---

# MEnvData-SWE: Polyglot Software Engineering Dataset with Executable Environments
## Dataset Description
MEnvData-SWE is the largest open-source polyglot dataset of realistic, verifiable Docker environments, comprising 3,005 task instances from 942 repositories across 10 programming languages. Each instance includes a fully executable Docker environment with pre-verified test cases, environment setup scripts, and an evaluation script.
### Key Features
- **Multi-Language Support**: 10 programming languages (Python, Java, TypeScript, JavaScript, Rust, Go, C++, Ruby, PHP, C)
- **3,005 Docker Images**: Pre-built environment images with verified dependencies
- **Verified Quality**: Each instance includes Fail2Pass-validated test cases
- **Complete Setup Scripts**: Both incremental and full environment configuration scripts
- **Evaluation Scripts**: Ready-to-use verification scripts with exit-code validation
- **Efficient Verification**: Exit-code-based validation (no complex log parsing required)
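The exit-code contract can be illustrated with a minimal sketch: a harness only needs to run an instance's `eval_script` and inspect the return code. The `run_eval` helper below is illustrative, not part of the dataset tooling.

```python
import subprocess


def run_eval(script_text: str) -> bool:
    # Hypothetical harness helper: eval scripts signal a resolved
    # instance purely through exit code 0, so no log parsing is needed.
    result = subprocess.run(["bash", "-c", script_text])
    return result.returncode == 0


# Toy stand-in scripts; a real harness would pass an instance's eval_script
assert run_eval("exit 0") is True   # tests passed
assert run_eval("exit 1") is False  # tests failed
```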
### Dataset Statistics

| Statistic | Value |
|---|---|
| Total Instances | 3,005 |
| Repositories | 942 |
| Languages | 10 |
| Docker Images | 3,005 (one per instance) |
## Docker Image Registry
All pre-built Docker images are publicly available on Docker Hub:
**Docker Hub Repository**: https://hub.docker.com/u/mcatwj
Each instance's `image_name` field references a specific image that can be pulled directly:

```bash
# Example: pull a specific instance image
docker pull mcatwj/swe-images-c:systemd-systemd-pr-24645

# Or use the image_name from the dataset
docker pull mcatwj/<image_name_from_dataset>
```
All images are fully configured with:
- Project dependencies installed
- Repository cloned at `/testbed`
- Ready for immediate test execution
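Since every image lives under the same Docker Hub user, the pull command can be built programmatically from an instance's `image_name`. `docker_pull_cmd` below is a hypothetical helper, not part of any published tooling:

```python
def docker_pull_cmd(image_name: str, registry_user: str = "mcatwj") -> list:
    # Build the argv for pulling an instance image; image_name comes
    # straight from the dataset's image_name field (hypothetical helper).
    return ["docker", "pull", "{}/{}".format(registry_user, image_name)]


cmd = docker_pull_cmd("swe-images-c:systemd-systemd-pr-24645")
assert cmd == ["docker", "pull", "mcatwj/swe-images-c:systemd-systemd-pr-24645"]
```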
## Dataset Structure
MEnvData-SWE extends the standard SWE-Bench schema with executable environment configurations. Each instance contains the following fields:
| Field | Type | Description |
|---|---|---|
| `repo` | `str` | The full GitHub repository name (e.g., `"home-assistant/core"`). |
| `pull_number` | `int` | The pull request number associated with the fix. |
| `instance_id` | `str` | A unique identifier for the task instance. |
| `issue_numbers` | `list` | A list of linked issue numbers. |
| `base_commit` | `str` | The commit SHA of the repository prior to the fix. |
| `version` | `str` | The repository version associated with the instance. |
| `patch` | `str` | The ground-truth patch (git diff) that resolves the issue. |
| `test_patch` | `str` | The test patch (git diff) containing new tests that reproduce the issue. |
| `problem_statement` | `str` | The natural-language description of the issue. |
| `hints_text` | `str` | Hints extracted from the issue discussion. |
| `all_hints_text` | `str` | Comprehensive context including all comments and reviews. |
| `commit_urls` | `list` | A list of URLs pointing to the relevant commits. |
| `created_at` | `str` | The creation timestamp (e.g., `"2015-12-27T19:33:55Z"`). |
| `language` | `str` | The programming language (e.g., `"Python"`). |
| `env_setup_script` | `str` | Incremental bash commands used to configure the environment (for reuse scenarios). |
| `original_env_setup_script` | `str` | The foundational setup commands: the reused base image's setup, or the full build script if built from scratch. |
| `eval_script` | `str` | The complete verification script that applies the `test_patch` and executes the test commands. |
| `image_name` | `str` | The specific Docker image name/tag available for this instance. |
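Taken together, these fields support the Fail2Pass check: the tests added by `test_patch` should fail at `base_commit` and pass once the ground-truth `patch` is applied. A minimal, self-contained sketch of that logic, with toy callables standing in for real patching and test execution:

```python
def fail2pass(run_tests, apply_patch) -> bool:
    # Fail2Pass validation: the new tests must fail before the fix
    # and pass after the ground-truth patch is applied.
    failed_before = not run_tests()
    apply_patch()
    return failed_before and run_tests()


# Toy stand-ins: a flag flips from failing to passing once "patched"
state = {"patched": False}
assert fail2pass(lambda: state["patched"],
                 lambda: state.update(patched=True)) is True
```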
## Usage
### Loading the Dataset

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("ernie-research/MEnvData-SWE")

# Access a single instance
instance = dataset['train'][0]
print(f"Repository: {instance['repo']}")
print(f"Language: {instance['language']}")
print(f"Docker Image: {instance['image_name']}")
```
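Each instance is a plain mapping of the fields documented above, so standard Python suffices for slicing the corpus. The toy in-memory sample below stands in for `dataset['train']` (repository names are illustrative):

```python
from collections import Counter

# Toy stand-in for dataset['train']; real instances carry the same fields
sample = [
    {"repo": "systemd/systemd", "language": "C"},
    {"repo": "home-assistant/core", "language": "Python"},
    {"repo": "home-assistant/core", "language": "Python"},
]

# Count instances per language, e.g. to balance a training mix
by_language = Counter(inst["language"] for inst in sample)
assert by_language["Python"] == 2
assert by_language["C"] == 1
```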
### Example Instance

```json
{
  "repo": "systemd/systemd",
  "instance_id": "systemd__systemd-24645",
  "language": "C",
  "image_name": "swe-images-c:systemd-systemd-pr-24645",
  "base_commit": "6d64cb0625691e2b9eda8babe07ac8281f9467ee",
  "env_setup_script": "#!/usr/bin/env bash\nset -e\ncd /testbed\ngit reset --hard 6d64cb0625...",
  "eval_script": "#!/usr/bin/env bash\nset -uxo pipefail\nexport LANG=en_US.UTF-8...",
  ...
}
```
## Use Cases

- **Agent Training**: Train code generation and debugging agents on realistic software engineering tasks
- **Benchmark Evaluation**: Evaluate model performance on multi-language software issues
## Related Datasets

- **MEnvData-SWE-Trajectory**: Extended version with 3,872 agent execution trajectories
## Citation

If MEnvData-SWE helps your research, please cite:

```bibtex
@misc{guo2026menvagent,
  title={MEnvAgent: Scalable Polyglot Environment Construction for Verifiable Software Engineering},
  author={Chuanzheng Guo and Jingjing Wu and Sijun He and Yang Chen and Zhaoqi Kuang and Shilong Fan and Bingjin Chen and Siqi Bao and Jing Liu and Hua Wu and Qingfu Zhu and Wanxiang Che and Haifeng Wang},
  year={2026},
  url={https://arxiv.org/abs/2601.22859},
}
```
## Contact

For questions or issues:

- GitHub: MEnvAgent Repository
- Email: czguo@ir.hit.edu.cn
## Acknowledgements

We thank all open-source maintainers whose projects contributed to this dataset.