---
license: mit
tags:
- regression
- latency
- triton
- leetcode
- kernel
- text regression
task_categories:
- text-generation
---

# Code-Regression

[Paper](https://huggingface.co/papers/2509.26476) | [GitHub Repository](https://github.com/google-deepmind/regress-lm/tree/main) | [Project Page](https://research.google/blog/simulating-large-systems-with-regression-language-models/)

A unified regression dataset collated from three sources (APPS, KBSS, CDSS), along with our own custom profiling, for training and evaluating regression models that map code strings to a target metric. This dataset supports "code-to-metric regression": predicting numeric outcomes of code execution with Regression Language Models (RLMs), as described in the linked paper.

**Link for Graph-Regression dataset**: https://huggingface.co/datasets/akhauriyash/GraphArch-Regression

**Link for Base Gemma-Adapted RLM model**: https://huggingface.co/akhauriyash/RLM-GemmaS-Code-v0

## Schema
- **identifier** *(string)*: Source key for the example, e.g. `APPS_0`, `KBSS_1`, `CDSS_42`.
- **space** *(string)*: Logical dataset split/source (`APPS`, `KBSS`, or `CDSS`).
- **input** *(string)*: The code string to regress on.
- **target_metric** *(string)*: Always `val_accuracy`.
- **val_accuracy** *(number | null)*: The regression target.
- **metric_type** *(string)*: Auxiliary metric family for this row:
  - `memory_bytes` for APPS and CDSS
  - `latency_ms` for KBSS
- **metadata** *(string)*: A Python-dict-like string with source-specific information:
  - APPS: `problem_metainformation` cast to string.
  - KBSS: `{'stddev_ms': <value>}`.
  - CDSS: subset of fields `{s_id, p_id, u_id, date, language, original_language, filename_ext, status, cpu_time, memory, code_size}`.

This dataset has 7,502,559 rows:
  - APPS: 98,932
  - CDSS (CodeNet): 7,391,012
  - KBSS (Triton Kernels): 12,615
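
For orientation, a KBSS row might look like the sketch below (all field values here are invented for illustration):

```python
# Hypothetical example row (values made up; see the Schema section above).
row = {
    "identifier": "KBSS_1",
    "space": "KBSS",
    "input": "import triton\n...",        # the code string to regress on
    "target_metric": "val_accuracy",
    "val_accuracy": 0.42,                  # the regression target
    "metric_type": "latency_ms",           # auxiliary metric family for KBSS
    "metadata": "{'stddev_ms': 0.05}",
}
```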

> Tip: turn `metadata` back into a dict with:
> ```python
> from ast import literal_eval
> meta = literal_eval(row["metadata"])
> ```

## How to load with 🤗 Datasets
```python
from datasets import load_dataset
ds = load_dataset("akhauriyash/Code-Regression")
```
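
Given the roughly 7.5M rows, you may prefer streaming so the full split is not materialized locally; a minimal sketch:

```python
from datasets import load_dataset

# Stream rows lazily instead of downloading the full split up front.
ds = load_dataset("akhauriyash/Code-Regression", split="train", streaming=True)

# For example, iterate over only the Triton-kernel (KBSS) rows.
for row in ds.filter(lambda r: r["space"] == "KBSS"):
    print(row["identifier"], row["val_accuracy"])
    break
```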

## Sample Usage with `RegressLM`

The `regress_lm` library provides the `RegressLM` class for decoding floating-point predictions from a given input and fine-tuning against new data. Below is an example of how to instantiate `RegressLM` and use it for inference.

```python
from regress_lm import core
from regress_lm import rlm

# Create RegressLM from scratch. Optionally, use `from_t5gemma_encoder`.
reg_lm = rlm.RegressLM.from_scratch(max_input_len=2048)

# Example (x,y) pairs, which can be fine-tuned against.
examples = [core.Example(x='hello', y=0.3), core.Example(x='world', y=-0.3)]
reg_lm.fine_tune(examples)

# Query inputs.
query1, query2 = core.ExampleInput(x='hi'), core.ExampleInput(x='bye')
samples1, samples2 = reg_lm.sample([query1, query2], num_samples=128)
```
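
To turn the sampled floats into point predictions, one common choice is the median over samples (the evaluation script below does the same):

```python
import numpy as np

# Collapse the 128 samples per query into a single point estimate.
pred1, pred2 = float(np.median(samples1)), float(np.median(samples2))
```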

## Testing Code-Regression with a basic Gemma RLM model

Use the code below as a reference for evaluating a basic RegressLM model (better, more models to come! :))

```python
import torch
import numpy as np
from ast import literal_eval
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from scipy.stats import spearmanr
from tqdm import tqdm

REPO_ID = "akhauriyash/RLM-GemmaS-Code-v0"
DATASET = "akhauriyash/Code-Regression"
dataset = load_dataset(DATASET, split="train")
tok = AutoTokenizer.from_pretrained(REPO_ID, trust_remote_code=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModelForSeq2SeqLM.from_pretrained(REPO_ID, trust_remote_code=True).to(device).eval()
MAX_ITEMS, BATCH_SIZE, spaces, results = 512, 16, ["KBSS", "CDSS", "APPS"], {}
language = None  # Optionally restrict CDSS to a single language, e.g. "python"
# Number of decoder tokens that encode one float prediction.
n_out_tokens = getattr(model.config, "num_tokens_per_obj", 8) * getattr(model.config, "max_num_objs", 1)

for SPACE in spaces:
    inputs, targets = [], []
    for row in tqdm(dataset, desc=f"Processing {SPACE} till {MAX_ITEMS} items"):
        if row.get("space") == SPACE and row.get("input") and row.get("val_accuracy") is not None:
            try:
                lang = literal_eval(row["metadata"]).get("language") if SPACE == "CDSS" else None
                if SPACE != "CDSS" or language is None or lang == language:
                    targets.append(float(row["val_accuracy"]))
                    # Prefix the source space (and language for CDSS) onto the code string.
                    if SPACE == "CDSS":
                        inputs.append(f"# {SPACE}\n# Language: {lang}\n{row['input']}")
                    else:
                        inputs.append(f"{SPACE}\n{row['input']}")
            except Exception:
                continue
            if len(inputs) >= MAX_ITEMS:
                break
    preds = []
    for i in tqdm(range(0, len(inputs), BATCH_SIZE)):
        enc = tok(inputs[i:i+BATCH_SIZE], return_tensors="pt", truncation=True, padding=True, max_length=2048).to(device)
        batch_preds = []
        for _ in range(8):  # draw 8 samples per input; per-input median taken below
            out = model.generate(**enc, max_new_tokens=n_out_tokens, min_new_tokens=n_out_tokens, do_sample=True, top_p=0.95, temperature=1.0)
            decoded = [tok.token_ids_to_floats(seq.tolist()) for seq in out]
            decoded = [d[0] if isinstance(d, list) and d else float("nan") for d in decoded]
            batch_preds.append(decoded)
        preds.extend(torch.tensor(batch_preds).median(dim=0).values.tolist())
    spear, _ = spearmanr(np.array(targets), np.array(preds))
    results[SPACE] = spear
    print(f"Spearman ρ for {SPACE}: {spear:.3f}")

print("Spearman ρ | KBSS | CDSS | APPS")
print(f"{REPO_ID} | " + " | ".join(f"{results[s]:.3f}" for s in spaces))

```


We got the following results when testing on a random subset of the Code-Regression dataset.

```
Model ID                                 | KBSS  | CDSS  | APPS
akhauriyash/RegressLM-gemma-s-RLM-table3 | 0.527 | 0.787 | 0.926 
```

## Credits

This dataset was collated from several sources, along with our own latency and memory profiling. We thank the authors for their efforts.

APPS:
Hendrycks, D., Basart, S., Kadavath, S., Mazeika, M., Arora, A., Guo, E., Burns, C., Puranik, S., He, H., Song, D., & Steinhardt, J. (2021). Measuring Coding Challenge Competence With APPS. NeurIPS.

CDSS (CodeNet):
Puri, R., Kung, D. S., Janssen, G., Zhang, W., Domeniconi, G., Zolotov, V., Dolby, J., Chen, J., Choudhury, M., Decker, L., et al. (2021). CodeNet: A Large-Scale AI for Code Dataset for Learning a Diversity of Coding Tasks.

KBSS (KernelBook):
Paliskara, S., & Saroufim, M. (2025). KernelBook. https://huggingface.co/datasets/GPUMODE/KernelBook

## Citations

If you found this dataset useful for your research, please cite the original sources above as well as:

```bibtex
@article{akhauri2025regressionlanguagemodelscode,
  title={Regression Language Models for Code},
  author={Yash Akhauri and Xingyou Song and Arissa Wongpanich and Bryan Lewandowski and Mohamed S. Abdelfattah},
  journal={arXiv preprint arXiv:2509.26476},
  year={2025}
}

@article{akhauri2025performance,
  title={Performance Prediction for Large Systems via Text-to-Text Regression},
  author={Akhauri, Yash and Lewandowski, Bryan and Lin, Cheng-Hsi and Reyes, Adrian N and Forbes, Grant C and Wongpanich, Arissa and Yang, Bangding and Abdelfattah, Mohamed S and Perel, Sagi and Song, Xingyou},
  journal={arXiv preprint arXiv:2506.21718},
  year={2025}
}
```