DataChef-32B
HF Models | HF Demo | Paper | GitHub
DataChef-32B is a specialized large language model designed for automated data recipe generation. It was introduced in the paper DataChef: Cooking Up Optimal Data Recipes for LLM Adaptation via Reinforcement Learning.
DataChef-32B facilitates LLM adaptation by generating executable data processing pipelines (data recipes) that transform raw data sources into high-quality training corpora targeted at specific benchmarks.
Model Description
DataChef-32B addresses the manual, labor-intensive process of designing data processing pipelines. It was trained using online reinforcement learning with a proxy reward system that predicts downstream performance for candidate recipes. Given a target benchmark and available data sources, the model outputs a complete data recipe to adapt a base LLM.
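To make "data recipe" concrete, the sketch below hand-writes a tiny pipeline in the spirit of a recipe using the Hugging Face datasets library: filter raw documents toward a target domain, lightly clean them, and cap the corpus size. It is purely illustrative; the source dataset, filter heuristic, and step structure are assumptions chosen for this example, and the actual recipe format produced by DataChef-32B is the one defined in the paper and repository.

from datasets import load_dataset

# Illustrative only: a hand-written pipeline in the spirit of a data recipe.
# Recipes emitted by DataChef-32B follow the format defined in the DataChef
# repository, not this sketch.

# Hypothetical raw source; any large text dataset with a "text" column works.
raw = load_dataset("allenai/c4", "en", split="train", streaming=True)

def looks_mathy(example):
    # Toy domain/quality filter: keep longer documents that mention math terms.
    text = example["text"]
    return len(text) > 200 and any(t in text for t in ("theorem", "equation", "integral"))

recipe_output = (
    raw.filter(looks_mathy)                           # step 1: domain filtering
       .map(lambda ex: {"text": ex["text"].strip()})  # step 2: light cleaning
       .take(100_000)                                 # step 3: cap the corpus size
)

for doc in recipe_output.take(1):
    print(doc["text"][:200])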
Performance Highlights
Across diverse tasks, DataChef-32B produces practical recipes whose downstream performance is comparable to that of recipes curated by human experts. Notably, a recipe generated by DataChef-32B was used to adapt Qwen3-1.7B-Base to the math domain, reaching a score of 66.7 on AIME'25 and surpassing the standard Qwen3-1.7B.
Installation
To use the DataChef framework for generating your own data recipes, clone the GitHub repository and follow its installation steps:
conda create -n datachef python=3.12
conda activate datachef
pip install -e .  # run from the root of the cloned repository
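The DataChef-32B weights can also be queried directly from the Hub with transformers, independently of the framework installation above. The sketch below is a minimal example that assumes the model ships with a chat template (as its Qwen3-32B base does); the prompt wording and the way the target benchmark and data sources are described are illustrative assumptions, not the official template from the paper.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yichengchen24/DataChef-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Hypothetical request: describe the target benchmark and the available data
# sources, then ask for a recipe (the real prompt template may differ).
prompt = (
    "Target benchmark: AIME'25 (competition mathematics).\n"
    "Available data sources: a web-crawled math corpus and a general SFT dataset.\n"
    "Generate a data recipe for adapting a base LLM to this benchmark."
)
messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=1024)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))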
Citation
If you find this work helpful, please consider citing:
@article{chen2026datachef,
  title={DataChef: Cooking Up Optimal Data Recipes for LLM Adaptation via Reinforcement Learning},
  author={Chen, Yicheng and Ma, Zerun and Xie, Xinchen and Li, Yining and Chen, Kai},
  journal={arXiv preprint arXiv:2602.11089},
  year={2026}
}
Model tree for yichengchen24/DataChef-32B
Base model: Qwen/Qwen3-32B