---
language:
  - en
  - zh
license: apache-2.0
size_categories:
  - 100B<n<1T
task_categories:
  - text-generation
pretty_name: UltraData-Math
arxiv: '2602.09003'
tags:
  - llm
  - pretraining
  - math
  - data-synthesis
  - data-filtering
  - high-quality
  - mathematical-reasoning
configs:
  - config_name: UltraData-Math-L3-Conversation-Synthetic
    data_files: data/UltraData-Math-L3/Conversation-Synthetic/*.parquet
  - config_name: UltraData-Math-L3-Multi-Style-Synthetic
    data_files: data/UltraData-Math-L3/Multi-Style-Synthetic/*.parquet
  - config_name: UltraData-Math-L3-QA-Synthetic
    data_files: data/UltraData-Math-L3/QA-Synthetic/*.parquet
  - config_name: UltraData-Math-L3-Textbook-Exercise-Synthetic
    data_files: data/UltraData-Math-L3/Textbook-Exercise-Synthetic/*.parquet
  - config_name: UltraData-Math-L2-preview
    data_files: data/UltraData-Math-L2-preview/**/*.parquet
  - config_name: UltraData-Math-L1
    data_files: data/UltraData-Math-L1/**/*.parquet
default_config_name: UltraData-Math-L3-Conversation-Synthetic
---

# UltraData-Math

🤗 Dataset | 📄 Paper | 🌐 Project Page | 💻 Source Code | 🇨🇳 中文 README

UltraData-Math is a large-scale, high-quality mathematical pre-training dataset totaling 290B+ tokens across three progressive tiers: L1 (a 170.5B-token web corpus), L2 (a 33.7B-token quality-selected subset), and L3 (88B tokens of multi-format refined data). It is designed to systematically enhance mathematical reasoning in LLMs. The dataset was introduced in the paper Data Science and Technology Towards AGI Part I: Tiered Data Management and has been applied to the mathematical pre-training of the MiniCPM series models.

## 🆕 What's New

  • [2026.02.09]: UltraData-Math, a large-scale high-quality mathematical pre-training dataset with 290B+ tokens across three progressive tiers (L1/L2-preview/L3), is now available on Hugging Face. Released as part of the UltraData ecosystem. πŸ”₯πŸ”₯πŸ”₯
  • [2026.02.10]: UltraData-Math tops the Hugging Face Datasets Trending list, reaching the #1 spot! ⭐️⭐️⭐️

## 📚 Introduction

High-quality pre-training data is crucial for enhancing the mathematical reasoning capabilities of large language models (LLMs). However, existing schemes for constructing mathematical pre-training data often suffer from lossy HTML parsing, uneven data quality, and limited diversity.

To address these issues, we propose UltraData-Math, a large-scale, high-quality pre-training dataset for mathematical reasoning. The dataset is built on the UltraData L0-L4 Tiered Data Management Framework and is organized into four progressive levels:

  • L0 Raw Data: Develops a mathematical parser based on magic-html, combined with w3m layout preservation rendering and multi-level fallback strategies, standardizing MathML, KaTeX, and AsciiMath into LaTeX format.
  • L1 Filtered Data: Cleans noise through heuristic rules and performs document-level deduplication.
  • L2 Selected Data: Uses proprietary large models to annotate seed data and distills it into a lightweight embedding classifier to achieve efficient quality grading of the full corpus.
  • L3 Refined Data: Produces structured content with clear reasoning through rewriting, synthetic generation, and refinement in various formats such as Q&A, multi-turn dialogues, multi-style rewriting, and knowledge-grounded textbooks.

Experiments show that, on the MiniCPM-1.2B architecture, UltraData-Math reaches 37.02 on the MATH500 benchmark (+3.62pp over Nemotron-CC 4plus) and 61.79 on GSM8K (+3.34pp), while preserving code generation and general-knowledge capabilities.

UltraData-Math has been applied to the mathematical pre-training of the MiniCPM Series models.

  • UltraData-Math-L1: Large-scale high-quality mathematical pre-training dataset, containing 170.5B tokens of web mathematical corpus.
  • UltraData-Math-L2: High-quality mathematical pre-training dataset selected by the quality model, containing 33.7B tokens of high-quality web mathematical corpus.
  • UltraData-Math-L3: High-quality refined mathematical dataset, containing 88B tokens of multi-format refined data (Q&A, multi-turn dialogues, knowledge textbooks, etc.).

πŸ—οΈ Data Processing Pipeline

To overcome the quality and diversity limitations of existing mathematical datasets, we established a fine-grained grading standard centered on "mathematical content integrity" and "information density". UltraData-Math adopts the L0-L4 Tiered Data Management Framework proposed in the UltraData paper.

### L0: Raw Data Parsing and Standardization

The L0 phase processes raw web data obtained from sources such as Common Crawl. Because mathematical web pages have particular markup characteristics, we developed specialized parsing strategies in the UltraData-Math-Parser:

  • Unified Parsing Mode: Automatically identifies page types to ensure complete content extraction.
  • Multi-level Fallback Strategy: Implementation of a multi-level fallback mechanism to ensure text content is captured even if structured parsing fails.
  • Mathematical Formula Standardization: Unification of different mathematical expressions in web pages into standard LaTeX format.

### L1: Heuristic Cleaning and Filtering

The L1 phase cleans noise with heuristic rules (a minimal sketch follows the list):

  • Format Repair: Clean invisible characters, garbled text, and unnatural continuous line breaks.
  • Content Filtering: Length filtering, language identification, and document-level deduplication.

### L2: Selection Based on Quality Models

The L2 phase introduces a model-based quality assessment system:

  • Seed Data Annotation: Use proprietary large models to score seed data.
  • Classifier Training and Distillation: Train lightweight embedding classifiers based on annotated data.
  • Full-scale Inference: Use the trained classifier to score and screen L1 data.

### L3: Refined Data

The L3 phase produces structured content with clear reasoning via the UltraData-Math-Generator:

  • Q&A Pair Generation: Rewrite declarative documents into "Question-Answer" pairs.
  • Multi-turn Dialogue Synthesis: Simulate "Teacher-Student" tutoring scenarios.
  • Multi-style Rewriting: Rewrite single-source data into multiple styles.
  • Knowledge Point Textbook Generation: Systematic textbook-like content based on specific knowledge points.

## 🚀 Quick Start

You can load the dataset directly from Hugging Face:

```python
from datasets import load_dataset

# Load UltraData-Math-L1
ds = load_dataset("openbmb/UltraData-Math", "UltraData-Math-L1")

# Load UltraData-Math-L2-preview
ds = load_dataset("openbmb/UltraData-Math", "UltraData-Math-L2-preview")

# Load UltraData-Math-L3 (default config: Conversation-Synthetic)
ds = load_dataset("openbmb/UltraData-Math", "UltraData-Math-L3-Conversation-Synthetic")
```
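
At this scale you may prefer to stream a config rather than download it in full. A minimal example, assuming the usual `train` split name:

```python
from datasets import load_dataset

# Stream one config and peek at a single record; "train" is an assumed split name.
ds = load_dataset(
    "openbmb/UltraData-Math",
    "UltraData-Math-L3-QA-Synthetic",
    split="train",
    streaming=True,
)
print(next(iter(ds)))
```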

## 📈 Experimental Results

We evaluated data quality with the Decay Verification method, continuing pre-training of a MiniCPM-1.2B base model on ~100B tokens of the candidate data.

### Pipeline Effectiveness (L1 vs. L2 vs. L3)

Results demonstrate that higher-tier data (L3) significantly boosts mathematical reasoning (MATH500, GSM8K) and general capabilities.

## ❤️ Acknowledgements

## 📖 Citation

If you find UltraData-Math useful in your research, please consider citing:

```bibtex
@article{wang2026tiered,
  title={Data Science and Technology Towards AGI Part I: Tiered Data Management},
  author={Yudong Wang and Zixuan Fu and Hengyu Zhao and Chen Zhao and Chuyue Zhou and Xinle Lin and Hongya Lyu and Shuaikang Xue and Yi Yi and Yingjiao Wang and Zhi Zheng and Yuzhou Zhang and Jie Zhou and Chaojun Xiao and Xu Han and Zhiyuan Liu and Maosong Sun},
  journal={arXiv preprint arXiv:2602.09003},
  year={2026}
}

@misc{ultradata-math,
  title={UltraData-Math},
  author={UltraData Team},
  year={2026},
  url={https://huggingface.co/datasets/openbmb/UltraData-Math},
  publisher={Hugging Face}
}
```

## 📜 License

This project is licensed under the Apache 2.0 license.