---
license: apache-2.0
---

# TROLL Data

This repo contains the datasets used in

**TROLL: Trust Regions improve Reinforcement Learning for Large Language Models** (Philipp Becker\*, Niklas Freymuth\*, Serge Thilges, Fabian Otto, Gerhard Neumann; \* shared first authorship)

## Datasets

### GSM8k

Taken from https://huggingface.co/datasets/openai/gsm8k.

### DAPO

We build DAPO Train and DAPO Eval on the version of the DAPO-Math dataset provided by Cui et al. (2025) (https://github.com/PRIME-RL/Entropy-Mechanism-of-RL). From their original training set, we set aside 1,024 samples as an in-domain validation set (DAPO Eval), leaving 16,893 samples for DAPO Train. For broader out-of-distribution evaluation, we again follow Cui et al. (2025) and use a benchmark suite we refer to as Math-Eval, consisting of MATH500, AMC, AIME 2024, AIME 2025, OMNI-MATH, OlympiadBench, and Minerva. We again build on the data provided by Cui et al. (2025) and follow their protocol by computing the mean over 32 responses for the small but hard AMC, AIME 2024, and AIME 2025 datasets, while considering only a single response for the other sets. Finally, we ensure all three datasets share the same system preprompt and include correct, identical instructions for answer formatting.
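The scoring protocol above (mean over 32 responses for AMC and AIME, a single response elsewhere) can be sketched as follows. This is a minimal illustration, not the paper's evaluation code: the dataset names and the 0/1 correctness lists are toy stand-ins for actual model generations and answer checking.

```python
from statistics import mean

# Sets scored as the mean over 32 sampled responses per question (mean@32);
# all other Math-Eval sets use a single response per question.
MULTI_SAMPLE_SETS = {"AMC", "AIME2024", "AIME2025"}
N_SAMPLES = 32

def score_dataset(name, correctness_per_question):
    """correctness_per_question: one list of 0/1 outcomes per question."""
    if name in MULTI_SAMPLE_SETS:
        # average over the 32 responses of each question, then over questions
        per_question = [mean(c[:N_SAMPLES]) for c in correctness_per_question]
    else:
        # only the first (single) response counts
        per_question = [c[0] for c in correctness_per_question]
    return mean(per_question)

# Toy example: two questions, 32 responses each (half correct / all correct).
toy = [[1, 0] * 16, [1] * 32]
print(score_dataset("AIME2024", toy))  # mean@32 -> 0.75
print(score_dataset("MATH500", toy))   # single response -> 1.0
```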

Preprompt:

> Your task is to follow a systematic, thorough reasoning process before providing the final solution.
> This involves analyzing, summarizing, exploring, reassessing, and refining your thought process through multiple iterations.
> Structure your response into two sections: Thought and Solution.
> In the Thought section, present your reasoning using the format: "<think> {thoughts} </think>".

(Ganqu Cui, Yuchen Zhang, Jiacheng Chen, Lifan Yuan, Zhi Wang, Yuxin Zuo, Haozhan Li, Yuchen Fan, Huayu Chen, Weize Chen, et al. The entropy mechanism of reinforcement learning for reasoning language models. 2025)

### EURUS

We use the train and validation sets of the Eurus-2-RL dataset (https://huggingface.co/datasets/PRIME-RL/Eurus-2-RL-Data), filtered for math questions, resulting in 455,261 training and 1,024 evaluation questions.
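The math-only filtering step can be sketched as below. The `ability` field name and its values are assumptions about the Eurus-2-RL-Data schema, and the in-memory rows are toy stand-ins for records loaded from the Hub with the `datasets` library.

```python
# Toy rows mimicking the assumed Eurus-2-RL-Data schema; the real data
# would be loaded from the Hugging Face Hub.
toy_rows = [
    {"prompt": "Compute 2 + 2.", "ability": "math"},
    {"prompt": "Reverse a linked list.", "ability": "coding"},
    {"prompt": "Solve x^2 = 9.", "ability": "math"},
]

def filter_math(rows):
    """Keep only rows tagged as math questions (assumed 'ability' field)."""
    return [r for r in rows if r["ability"] == "math"]

math_rows = filter_math(toy_rows)
print(len(math_rows))  # -> 2
```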

## Citation

```bibtex
@article{becker2025troll,
  title={TROLL: Trust Regions improve Reinforcement Learning for Large Language Models},
  author={Becker, Philipp and Freymuth, Niklas and Thilges, Serge and Otto, Fabian and Neumann, Gerhard},
  journal={arXiv preprint arXiv:2510.03817},
  year={2025}
}
```