---
language:
- en
- zh
license: apache-2.0
size_categories:
- 100B<n<1T
task_categories:
- text-generation
pretty_name: UltraData-Math
arxiv: '2602.09003'
tags:
- llm
- pretraining
- math
- data-synthesis
- data-filtering
- high-quality
- mathematical-reasoning
configs:
- config_name: UltraData-Math-L3-Conversation-Synthetic
data_files: data/UltraData-Math-L3/Conversation-Synthetic/*.parquet
- config_name: UltraData-Math-L3-Multi-Style-Synthetic
data_files: data/UltraData-Math-L3/Multi-Style-Synthetic/*.parquet
- config_name: UltraData-Math-L3-QA-Synthetic
data_files: data/UltraData-Math-L3/QA-Synthetic/*.parquet
- config_name: UltraData-Math-L3-Textbook-Exercise-Synthetic
data_files: data/UltraData-Math-L3/Textbook-Exercise-Synthetic/*.parquet
- config_name: UltraData-Math-L2-preview
data_files: data/UltraData-Math-L2-preview/**/*.parquet
- config_name: UltraData-Math-L1
data_files: data/UltraData-Math-L1/**/*.parquet
default_config_name: UltraData-Math-L3-Conversation-Synthetic
---
# UltraData-Math
<div align="center">
<img src="https://huggingface.co/datasets/openbmb/UltraData-Math/resolve/main/assets/ultradata-math-logo.png" width="600"/>
</div>
<p align="center">
<a href="https://huggingface.co/datasets/openbmb/UltraData-Math">🤗 Dataset</a> | <a href="https://huggingface.co/papers/2602.09003">📄 Paper</a> | <a href="https://ultradata.openbmb.cn">🌐 Project Page</a> | <a href="https://github.com/UltraData-OpenBMB/UltraData-Math">💻 Source Code</a> | <a href="https://huggingface.co/datasets/openbmb/UltraData-Math/blob/main/README_ZH.md">🇨🇳 中文 README</a>
</p>
***UltraData-Math*** is a large-scale, high-quality mathematical pre-training dataset totaling **290B+ tokens** across three progressive tiers: **L1** (a 170.5B-token web corpus), **L2** (33.7B tokens of quality-selected data), and **L3** (88B tokens of multi-format refined data). It is designed to systematically enhance mathematical reasoning in LLMs, was introduced in the paper [Data Science and Technology Towards AGI Part I: Tiered Data Management](https://huggingface.co/papers/2602.09003), and has been applied to the mathematical pre-training of the [MiniCPM Series](https://huggingface.co/collections/openbmb/minicpm4) models.
## 🆕 What's New
- **[2026.02.09]**: **UltraData-Math**, a large-scale high-quality mathematical pre-training dataset with 290B+ tokens across three progressive tiers (L1/L2-preview/L3), is now available on Hugging Face. Released as part of the [UltraData](https://ultradata.openbmb.cn/) ecosystem. 🔥🔥🔥
- **[2026.02.10]**: **UltraData-Math** tops the Hugging Face Datasets Trending list, reaching the #1 spot! ⭐️⭐️⭐️
## 📚 Introduction
High-quality pre-training data is crucial for enhancing the mathematical reasoning capabilities of large language models (LLMs). However, existing mathematical pre-training data construction schemes often encounter issues with HTML parsing, data quality, and diversity.
To address these issues, we propose ***UltraData-Math***—a large-scale high-quality pre-training dataset for mathematical reasoning tasks. This dataset is developed based on the [UltraData](https://ultradata.openbmb.cn/blog/position-paper) L0-L4 Tiered Data Management Framework, containing four progressive levels:
- **L0 Raw Data**: Develops a mathematical parser based on *magic-html*, combined with *w3m* layout-preserving rendering and multi-level fallback strategies, standardizing MathML, KaTeX, and AsciiMath into LaTeX format.
- **L1 Filtered Data**: Cleans noise through heuristic rules and performs document-level deduplication.
- **L2 Selected Data**: Uses proprietary large models to annotate seed data and distills the annotations into a lightweight embedding classifier for efficient quality grading of the full corpus.
- **L3 Refined Data**: Produces structured content with clear reasoning through rewriting, synthetic generation, and refinement in various formats such as Q&A, multi-turn dialogues, multi-style rewriting, and knowledge-grounded textbooks.
Experiments show that on the MiniCPM-1.2B architecture, ***UltraData-Math*** reaches **37.02** on the MATH500 benchmark, an improvement of **+3.62pp** over Nemotron-CC 4plus, and **61.79** on GSM8K, an improvement of **+3.34pp**, while maintaining code generation and general knowledge capabilities.
***UltraData-Math*** has been applied to the mathematical pre-training of the [MiniCPM Series](https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b) models. The dataset is released in three tiers:
- **[UltraData-Math-L1](https://huggingface.co/datasets/openbmb/UltraData-Math)**: Large-scale, high-quality mathematical pre-training dataset, containing 170.5B tokens of mathematical web corpus.
- **[UltraData-Math-L2](https://huggingface.co/datasets/openbmb/UltraData-Math-L2)**: High-quality mathematical pre-training dataset selected by the quality model, containing 33.7B tokens of high-quality web mathematical corpus.
- **[UltraData-Math-L3](https://huggingface.co/datasets/openbmb/UltraData-Math-L3)**: High-quality refined mathematical dataset, containing 88B tokens of multi-format refined data (Q&A, multi-turn dialogues, knowledge textbooks, etc.).
## 🏗️ Data Processing Pipeline
To break through the limitations of existing mathematical datasets in quality and diversity, we established a refined grading standard centered on "mathematical content integrity" and "information density". ***UltraData-Math*** adopts the **L0-L4 Tiered Data Management Framework** proposed by the [UltraData](https://huggingface.co/papers/2602.09003) paper.
<div align="center">
<img src="https://huggingface.co/datasets/openbmb/UltraData-Math/resolve/main/assets/ultradata-math-pipeline.png" width="900"/>
</div>
### L0: Raw Data Parsing and Standardization
The L0 phase mainly processes raw web data obtained from sources such as Common Crawl. Given the particular structure of mathematical web pages, we developed specialized parsing strategies in the [UltraData-Math-Parser](https://huggingface.co/spaces/openbmb/UltraData-Math-L0-Parser).
- **Unified Parsing Mode**: Automatically identifies page types to ensure complete content extraction.
- **Multi-level Fallback Strategy**: Implements a multi-level fallback mechanism so that text content is still captured even if structured parsing fails.
- **Mathematical Formula Standardization**: Unifies the different mathematical expressions found in web pages into standard LaTeX format (see the sketch below).
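As a concrete illustration of the formula standardization step, the minimal sketch below (not the released parser) normalizes KaTeX/MathJax-style delimiters into plain LaTeX; real MathML and AsciiMath inputs would need dedicated converters on top of this.
```python
import re

def normalize_math_delimiters(text: str) -> str:
    """Toy normalization: rewrite inline and display math delimiters into $ / $$ form."""
    # Inline math: \( ... \)  ->  $ ... $
    text = re.sub(r"\\\((.+?)\\\)", lambda m: f"${m.group(1).strip()}$", text, flags=re.S)
    # Display math: \[ ... \]  ->  $$ ... $$
    text = re.sub(r"\\\[(.+?)\\\]", lambda m: f"$${m.group(1).strip()}$$", text, flags=re.S)
    return text

print(normalize_math_delimiters(r"The root is \(x = \frac{-b}{2a}\)."))
# The root is $x = \frac{-b}{2a}$.
```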
### L1: Heuristic Cleaning and Filtering
The L1 phase cleans noise through heuristic rules (a minimal sketch follows the list):
- **Format Repair**: Cleans invisible characters, garbled text, and unnatural consecutive line breaks.
- **Content Filtering**: Length filtering, language identification, and document-level deduplication.
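For readers who want a feel for what such rules look like, here is a toy sketch with assumed thresholds; the production rules (and the deduplication in particular) are more sophisticated.
```python
import re
from hashlib import md5

def repair_format(text: str) -> str:
    """Remove invisible/garbled characters and collapse excessive line breaks."""
    text = text.replace("\u200b", "").replace("\ufffd", "")
    return re.sub(r"\n{3,}", "\n\n", text).strip()

def keep_document(text: str, min_chars: int = 200) -> bool:
    """Hypothetical length gate; the real pipeline also runs language identification."""
    return len(text) >= min_chars

seen = set()
def is_duplicate(text: str) -> bool:
    """Exact document-level dedup by content hash (a stand-in for fuzzier dedup)."""
    h = md5(text.encode("utf-8")).hexdigest()
    if h in seen:
        return True
    seen.add(h)
    return False

docs = ["Solve \\(x^2 = 4\\).\n\n\n\n\nAnswer: $x = \\pm 2$."]
kept = [d for d in map(repair_format, docs) if keep_document(d, min_chars=10) and not is_duplicate(d)]
print(kept)
```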
### L2: Selection Based on Quality Models
The L2 phase introduces a model-based quality assessment system:
- **Seed Data Annotation**: Use proprietary large models to score seed data.
- **Classifier Training and Distillation**: Train lightweight embedding classifiers based on annotated data.
- **Full-scale Inference**: Use the trained classifier to score and filter the full L1 corpus (see the sketch below).
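The sketch below illustrates the annotate-distill-infer loop under placeholder data and model names; it is not the actual classifier used for UltraData-Math.
```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Step 1 (assumed done already): an LLM has labeled a small seed set.
seed_texts = ["Proof that sqrt(2) is irrational: suppose ...", "Buy cheap calculators online now!!!"]
seed_labels = [1, 0]  # 1 = high-quality math, 0 = low quality

# Step 2: distill the labels into a lightweight embedding classifier.
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # any small embedding model works here
clf = LogisticRegression(max_iter=1000).fit(embedder.encode(seed_texts), seed_labels)

# Step 3: score the full L1 corpus cheaply and keep documents above a threshold.
l1_docs = ["Let $f(x) = x^2$; then $f'(x) = 2x$ ...", "Click here to win a prize"]
scores = clf.predict_proba(embedder.encode(l1_docs))[:, 1]
selected = [d for d, s in zip(l1_docs, scores) if s > 0.5]  # assumed threshold
```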
### L3: Refined Data
The L3 phase produces structured content with clear reasoning through the [UltraData-Math-Generator](https://huggingface.co/spaces/openbmb/UltraData-Math-L3-Generator):
- **Q&A Pair Generation**: Rewrite declarative documents into "Question-Answer" pairs (see the sketch after this list).
- **Multi-turn Dialogue Synthesis**: Simulate "Teacher-Student" tutoring scenarios.
- **Multi-style Rewriting**: Rewrite single-source data into multiple styles.
- **Knowledge Point Textbook Generation**: Generate systematic, textbook-style content organized around specific knowledge points.
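The following hypothetical sketch shows the shape of the Q&A rewriting step; the prompt and the `generate` callable are illustrative stand-ins for the actual prompts and the large models listed in the Acknowledgements.
```python
QA_REWRITE_PROMPT = """You are a mathematics teaching assistant.
Rewrite the following document into self-contained question-answer pairs.
Every answer must show complete step-by-step reasoning, with formulas in LaTeX.

Document:
{document}
"""

def rewrite_to_qa(document: str, generate) -> str:
    """`generate(prompt) -> str` is any LLM chat/text-generation callable."""
    return generate(QA_REWRITE_PROMPT.format(document=document))

# Usage, with some chat model behind `my_llm_call`:
# qa_text = rewrite_to_qa(open("doc.txt").read(), generate=my_llm_call)
```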
## 🚀 Quick Start
You can load the dataset directly from Hugging Face:
```python
from datasets import load_dataset

# Load UltraData-Math-L1
ds_l1 = load_dataset("openbmb/UltraData-Math", "UltraData-Math-L1")

# Load UltraData-Math-L2-preview
ds_l2 = load_dataset("openbmb/UltraData-Math", "UltraData-Math-L2-preview")

# Load UltraData-Math-L3 (default config: Conversation-Synthetic)
ds_l3 = load_dataset("openbmb/UltraData-Math", "UltraData-Math-L3-Conversation-Synthetic")
```
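Since the full corpus runs to hundreds of gigabytes, you may prefer to stream it rather than download it up front. This uses the standard `datasets` streaming mode (split name assumed to be the default `train`):
```python
from datasets import load_dataset

# Iterate over UltraData-Math-L1 without downloading the whole tier first.
ds = load_dataset("openbmb/UltraData-Math", "UltraData-Math-L1", split="train", streaming=True)
print(next(iter(ds)))
```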
## 📈 Experimental Results
We evaluated data quality using the **Decay Verification** method, continuing pre-training of a **MiniCPM-1.2B** base model on **~100B tokens**.
### Pipeline Effectiveness (L1 vs L2 vs L3)
Results demonstrate that higher-tier data (L3) significantly boosts mathematical reasoning (MATH500, GSM8K) and general capabilities.
<div align="center">
<img src="https://huggingface.co/datasets/openbmb/UltraData-Math/resolve/main/assets/ultradata-math-l1l2l3-comparison.png" width="700"/>
</div>
## ❤️ Acknowledgements
- **L0 Parsing Layer**: [magic-html](https://github.com/opendatalab/magic-html), [w3m](http://w3m.sourceforge.net/), [trafilatura](https://github.com/adbar/trafilatura)
- **L3 Synthesis Layer**: [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct), [Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B), [GLM-4.5](https://huggingface.co/zai-org/GLM-4.5)
- **Seed Data**: [Nemotron-CC-Math](https://huggingface.co/datasets/nvidia/Nemotron-CC-Math-v1), [MegaMath](https://huggingface.co/datasets/LLM360/MegaMath), [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath)
## 📖 Citation
If you find **UltraData-Math** useful in your research, please consider citing:
```bibtex
@article{wang2026tiered,
title={Data Science and Technology Towards AGI Part I: Tiered Data Management},
author={Yudong Wang and Zixuan Fu and Hengyu Zhao and Chen Zhao and Chuyue Zhou and Xinle Lin and Hongya Lyu and Shuaikang Xue and Yi Yi and Yingjiao Wang and Zhi Zheng and Yuzhou Zhang and Jie Zhou and Chaojun Xiao and Xu Han and Zhiyuan Liu and Maosong Sun},
journal={arXiv preprint arXiv:2602.09003},
year={2026}
}
@misc{ultradata-math,
title={UltraData-Math},
author={UltraData Team},
year={2026},
url={https://huggingface.co/datasets/openbmb/UltraData-Math},
publisher={Hugging Face}
}
```
## 📜 License
This project is licensed under the [Apache 2.0](./LICENSE) license.