# Code-Regression

A unified regression dataset collated from three sources (APPS, KBSS, CDSS), together with our own latency and memory profiling, for training and evaluating regression models that map a code string to a target metric.
## Schema

- **identifier** *(string)*: Source key for the example, e.g. `APPS_0`, `KBSS_1`, `CDSS_42`.
- **space** *(string)*: Logical dataset split/source (`APPS`, `KBSS`, or `CDSS`).
- **input** *(string)*: The input code string (`shortest_onnx`).
- **target_metric** *(string)*: Always `val_accuracy`.
- **val_accuracy** *(number | null)*: The regression target.
- **metric_type** *(string)*: Auxiliary metric family for this row:
  - `memory_bytes` for APPS and CDSS
  - `latency_ms` for KBSS
- **metadata** *(string)*: A Python-dict-like string with source-specific information:
  - APPS: `problem_metainformation` cast to a string.
  - KBSS: `{'stddev_ms': <value>}`.
  - CDSS: a subset of the fields `{s_id, p_id, u_id, date, language, original_language, filename_ext, status, cpu_time, memory, code_size}`.

> Tip: turn `metadata` back into a dict with:
> ```python
> from ast import literal_eval
> meta = literal_eval(row["metadata"])
> ```
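Because the contents of `metadata` differ by source, it can help to dispatch on `space` when reading it. The helpers below are a minimal sketch (names like `parse_metadata` and `measurement_noise_ms` are our own illustrations, not fields or functions shipped with the dataset), assuming only the schema documented above:

```python
from ast import literal_eval


def parse_metadata(row: dict) -> dict:
    """Best-effort parse of the stringified `metadata` field.

    Returns an empty dict when the string is missing or is not a
    Python dict literal (APPS metadata, for example, is simply
    `problem_metainformation` cast to a string and may not parse).
    """
    raw = row.get("metadata")
    if not raw:
        return {}
    try:
        parsed = literal_eval(raw)
    except (ValueError, SyntaxError):
        return {}
    return parsed if isinstance(parsed, dict) else {}


def measurement_noise_ms(row: dict) -> float | None:
    """For KBSS rows, return the reported latency standard deviation (ms)."""
    if row["space"] != "KBSS":
        return None
    return parse_metadata(row).get("stddev_ms")
```

`literal_eval` is used rather than `eval` because the metadata strings are plain Python literals and should never be executed.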
## How to load with 🤗 Datasets

```python
from datasets import load_dataset

# After you upload this folder to a dataset repo, e.g. your-username/Code-Regression
ds = load_dataset("your-username/Code-Regression")

# Or from a local clone:
# ds = load_dataset("json", data_files="Code-Regression/data.jsonl", split="train")
```
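With the dataset loaded as above, the schema fields can be turned directly into training pairs. A minimal sketch, assuming the repository exposes a single `train` split and the column names documented in the schema:

```python
# `ds` is the DatasetDict loaded in the snippet above.

# Keep only rows that actually have a regression target.
train = ds["train"].filter(lambda ex: ex["val_accuracy"] is not None)

# Optionally restrict to one source, e.g. the KBSS (latency-profiled) subset.
kbss = train.filter(lambda ex: ex["space"] == "KBSS")

# Build (code string, target) pairs for a text-to-metric regressor.
pairs = [(ex["input"], ex["val_accuracy"]) for ex in kbss]
print(len(pairs))
```

Filtering by `space` is optional; it is shown only because evaluating each source separately is a common setup.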
## Credits

This dataset was collated from several sources, along with our own latency and memory profiling. We thank the authors for their efforts.

APPS:
Hendrycks, D., Basart, S., Kadavath, S., Mazeika, M., Arora, A., Guo, E., Burns, C., Puranik, S., He, H., Song, D., & Steinhardt, J. (2021). Measuring Coding Challenge Competence With APPS. NeurIPS.

CDSS (CodeNet):
Puri, R., Kung, D. S., Janssen, G., Zhang, W., Domeniconi, G., Zolotov, V., Dolby, J., Chen, J., Choudhury, M., Decker, L., et al. (2021). CodeNet: A large-scale AI for code dataset for learning a diversity of coding tasks. arXiv preprint arXiv:2105.12655.

KBSS (KernelBook):
Paliskara, S., & Saroufim, M. (2025). KernelBook. https://huggingface.co/datasets/GPUMODE/KernelBook
## Citations

If you find this dataset useful for your research, please cite the original sources above as well as:

```bibtex
@article{akhauri2025performance,
  title={Performance Prediction for Large Systems via Text-to-Text Regression},
  author={Akhauri, Yash and Lewandowski, Bryan and Lin, Cheng-Hsi and Reyes, Adrian N and Forbes, Grant C and Wongpanich, Arissa and Yang, Bangding and Abdelfattah, Mohamed S and Perel, Sagi and Song, Xingyou},
  journal={arXiv preprint arXiv:2506.21718},
  year={2025}
}
```

(Original Paper Coming Soon!)