Datasets: ZhouChuYue committed
Commit · cf58121 · 1 Parent(s): ffe04bc
add English README

Browse files:
- README.md +9 -2
- README_EN.md +156 -0
README.md CHANGED

@@ -39,7 +39,14 @@ tags:
 - **Data Quality**: Existing datasets generally lack a systematic quality grading mechanism, so high-value mathematical content is mixed with low-quality noise.
 - **Data Diversity**: Mainstream datasets mostly come from textbooks or competition question banks and lack the mathematical discussions and application scenarios found on real web pages; synthetic data comes in a single format and cannot cover diverse needs such as multi-turn dialogue and multi-style expression.
 
 To address these issues, we propose ***UltraData-Math***, a large-scale, high-quality pre-training dataset for mathematical reasoning tasks. It is built on the [Ultra-Data](xxx) L0-L4 hierarchical data processing framework and contains four progressive levels:
+
+- **L0 Raw Data Layer**: a mathematical parser built on *magic-html*, combined with *w3m* layout-preserving rendering and a multi-level fallback strategy, standardizes MathML, KaTeX, and AsciiMath into LaTeX
+- **L1 Filtered Data Layer**: heuristic rules clean noise and deduplicate at the document level
+- **L2 Selected Data Layer**: closed-source large models annotate seed data and are distilled into a lightweight embedding classifier for efficient quality grading of the full corpus
+- **L3 Synthetic Data Layer**: a multi-model ensemble generates synthetic data in multiple formats, including Q&A, multi-turn dialogue, multi-style rewriting, and knowledge-grounded textbooks
+
+Experiments show that on the MiniCPM-1B architecture, ***UltraData-Math*** reaches **37.02** on the MATH benchmark, **+3.62** over Nemotron-CC 4plus, and **61.79** on GSM8K, a gain of **+3.34**, while preserving code generation and general knowledge capabilities.
 
 ***UltraData-Math*** has been applied to the mathematical pre-training of the [MiniCPM Series](https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b) models.
 
@@ -48,7 +55,7 @@ tags:
 
 ## 🏗️ Data Processing Pipeline
 
-To overcome the limitations of existing mathematical datasets in quality and diversity, we established a refined grading standard centered on "mathematical content integrity" and "information density". ***UltraData-Math*** adopts
+To overcome the limitations of existing mathematical datasets in quality and diversity, we established a refined grading standard centered on "mathematical content integrity" and "information density". ***UltraData-Math*** adopts the **L0-L4 Data Grading System** proposed in the UltraData position paper; its standardized level definitions enable orderly management and efficient flow of mathematical data assets. Each level represents higher data purity and mathematical value, and corresponds to a more refined degree of processing.
 
 <div align="center">
 <img src="assets/ultradata-math-pipeline.png" width="900"/>
README_EN.md ADDED

@@ -0,0 +1,156 @@
---
language:
- en
- zh
license: apache-2.0
size_categories:
- 100B<n<1T
task_categories:
- text-generation
pretty_name: UltraData-Math
arxiv: xxxx.xxxxx
tags:
- llm
- pretraining
- math
- data-synthesis
- data-filtering
- high-quality
- mathematical-reasoning
---

# UltraData-Math

<div align="center">
<img src="assets/ultradata-math-logo.png" width="600"/>
</div>

<div align="center">

[📜 Technical Report](https://arxiv.org/abs/xxxx.xxxxx) | [📄 MiniCPM Paper](https://huggingface.co/papers/2506.07900) | [💻 Code Repository](https://github.com/openbmb/UltraData-Math) | [🌐 Project Page](https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b) | [🇨🇳 中文 README](README.md)

</div>

## 📚 Introduction

High-quality pre-training data is crucial for strengthening the mathematical reasoning capabilities of Large Language Models (LLMs). However, existing pipelines for constructing mathematical pre-training data fall short on three levels:

- **HTML Parsing**: General-purpose extractors (such as trafilatura and readability) are designed mainly for news and article pages. They lack specialized handling of mathematical formulas, often destroying or dropping formula structure, and struggle to extract the mathematical discussions on forum-style pages in full.
- **Data Quality**: Existing datasets generally lack a systematic quality grading mechanism, so high-value mathematical content is mixed with low-quality noise.
- **Data Diversity**: Mainstream datasets mostly come from textbooks or competition question banks and lack the mathematical discussions and application scenarios found on real web pages; synthetic data comes in a single format and cannot cover diverse needs such as multi-turn dialogue and multi-style expression.

To address these issues, we propose ***UltraData-Math***, a large-scale, high-quality pre-training dataset for mathematical reasoning tasks. It is built on the [Ultra-Data](xxx) L0-L4 hierarchical data processing framework and contains four progressive levels:

- **L0 Raw Data Layer**: A mathematical parser built on *magic-html*, combined with *w3m* layout-preserving rendering and a multi-level fallback strategy, standardizes MathML, KaTeX, and AsciiMath into LaTeX.
- **L1 Filtered Data Layer**: Heuristic rules clean noise and deduplicate at the document level.
- **L2 Selected Data Layer**: Closed-source large models annotate seed data and are distilled into a lightweight embedding classifier for efficient quality grading of the full corpus.
- **L3 Synthetic Data Layer**: A multi-model ensemble generates synthetic data in multiple formats, including Q&A, multi-turn dialogue, multi-style rewriting, and knowledge-grounded textbooks.

Experiments show that on the MiniCPM-1B architecture, ***UltraData-Math*** reaches **37.02** on the MATH benchmark, **+3.62** over Nemotron-CC 4plus, and **61.79** on GSM8K, a gain of **+3.34**, while preserving code generation and general knowledge capabilities.

***UltraData-Math*** has been applied to the mathematical pre-training of the [MiniCPM Series](https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b) models.

- **[UltraData-Math-L1](https://huggingface.co/datasets/openbmb/UltraData-Math-L1)**: Large-scale, high-quality mathematical pre-training dataset containing 159.4B tokens of web mathematical corpus. (**<-- you are here**)
- **[UltraData-Math-L3](https://huggingface.co/datasets/openbmb/UltraData-Math-L3)**: High-quality synthetic mathematical dataset containing 37.1B tokens of multi-format synthetic data (Q&A, multi-turn dialogues, knowledge textbooks, etc.).

## 🏗️ Data Processing Pipeline

To overcome the limitations of existing mathematical datasets in quality and diversity, we established a refined grading standard centered on "mathematical content integrity" and "information density". ***UltraData-Math*** adopts the **L0-L4 Data Grading System** proposed in the UltraData position paper; its standardized level definitions enable orderly management and efficient flow of mathematical data assets. Each level represents higher data purity and mathematical value, and corresponds to a more refined degree of processing.

<div align="center">
<img src="assets/ultradata-math-pipeline.png" width="900"/>
</div>

### L0: Raw Data Parsing and Standardization

**Goal**: Address the poor support of general HTML parsers for mathematical formulas and preserve as much of a page's mathematical semantics as possible.

The L0 phase processes raw web data obtained from sources such as Common Crawl. Given the peculiarities of mathematical web pages, we developed specialized parsing strategies instead of directly using general-purpose ones like trafilatura or readability.

- **Unified Parsing Mode**: Automatically identifies page types to extract content as completely as possible.
- **Multi-level Fallback Strategy**: To prevent data loss from parsing failures, a multi-level fallback mechanism ensures text content is still captured even when structured parsing fails.
- **Mathematical Formula Standardization**: Different mathematical notations found on web pages are unified into standard LaTeX, normalizing the data format for model learning.
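A minimal sketch of such a fallback chain is shown below. The real extractor builds on magic-html and w3m; the parser functions here are toy stand-ins (a MathML `<annotation>` rewrite plus regex tag stripping), not the production implementation:

```python
import re

def parse_math_page(html: str) -> str:
    """Primary parser (stand-in for the magic-html based math parser):
    convert MathML LaTeX annotations to inline LaTeX, then strip tags."""
    if "<math" not in html:
        raise ValueError("no MathML found; defer to the next parser")
    html = re.sub(
        r"<math[^>]*>.*?<annotation[^>]*>(.*?)</annotation>.*?</math>",
        lambda m: "$" + m.group(1).strip() + "$",
        html,
        flags=re.S,
    )
    return re.sub(r"<[^>]+>", "", html)

def parse_layout(html: str) -> str:
    """Secondary parser (stand-in for w3m layout-preserving rendering)."""
    return re.sub(r"<[^>]+>", " ", html).strip()

def parse_plain(html: str) -> str:
    """Last resort: bare tag stripping, so no document is dropped outright."""
    return re.sub(r"<[^>]+>", "", html)

def extract(html: str) -> str:
    """Multi-level fallback: try parsers in decreasing order of fidelity."""
    for parser in (parse_math_page, parse_layout, parse_plain):
        try:
            return parser(html)
        except Exception:
            continue
    return ""
```

The key property is the loop in `extract`: a failure in the high-fidelity parser degrades gracefully instead of losing the page.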
### L1: Heuristic Cleaning and Filtering

**Goal**: Remove format noise and improve data readability and consistency.

After obtaining text with complete mathematical formulas, we clean the L0 data with a series of heuristic rules:

- **Format Repair**:
  - Remove invisible characters, mojibake, and unnatural runs of line breaks.
  - Strip irrelevant web noise such as navigation bars, footers, ad pop-ups, and "read more" links.
- **Content Filtering**:
  - *Length Filtering*: Drop overly short fragments, which usually lack context and cannot support effective mathematical reasoning training.
  - *Language Identification*: Keep the dataset composed mainly of high-quality English and Chinese mathematical content.
  - *Document Deduplication*: Deduplicate at the document level so that repeated content does not bias model training.
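The L1 pass can be sketched as follows. The length threshold, the language heuristic, and exact-hash deduplication are illustrative simplifications, not the production rules:

```python
import hashlib

MIN_CHARS = 200  # illustrative threshold, not the production value

def looks_english_or_chinese(text: str) -> bool:
    """Crude language check: share of ASCII letters plus CJK characters."""
    if not text:
        return False
    relevant = sum(
        1 for c in text
        if (c.isascii() and c.isalpha()) or "\u4e00" <= c <= "\u9fff"
    )
    return relevant / len(text) > 0.5

def repair_format(text: str) -> str:
    """Format repair: drop invisible characters, collapse blank-line runs."""
    text = text.replace("\u200b", "").replace("\ufeff", "")
    out, blanks = [], 0
    for line in text.splitlines():
        line = line.rstrip()
        blanks = blanks + 1 if not line else 0
        if blanks <= 1:
            out.append(line)
    return "\n".join(out).strip()

def l1_filter(docs):
    """Length filter, language ID, and document-level deduplication."""
    seen = set()
    for doc in docs:
        doc = repair_format(doc)
        if len(doc) < MIN_CHARS or not looks_english_or_chinese(doc):
            continue
        digest = hashlib.sha256(doc.encode()).hexdigest()
        if digest in seen:  # exact dedup; real pipelines typically add
            continue        # near-duplicate detection as well
        seen.add(digest)
        yield doc
```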
### L2: Selection Based on Quality Models

**Goal**: Identify high-value core corpora within the massive data.

Although L1 data is clean in format, its content quality varies widely. The L2 phase introduces a model-based quality assessment system:

- **Seed Data Annotation**: Closed-source large models score a portion of seed data across multiple dimensions.
- **Classifier Training and Distillation**: Lightweight embedding classifiers are trained on the annotated data so they can identify high-value mathematical content.
- **Full-scale Inference**: The trained classifier scores and screens the entire L1 dataset.
  - *Retained*: Content with detailed problem-solving steps, explanations of mathematical concepts, and high-level academic discussion.
  - *Excluded*: Bare stacks of nouns, meaningless lists of numbers, overly simplistic content, and noise from non-mathematical domains.
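The distillation step can be sketched as below. Random vectors stand in for real document embeddings, and a synthetic rule stands in for the teacher model's scores; the actual embedding model, scoring dimensions, and classifier are not specified here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: document "embeddings" and the 0-5 quality scores a
# closed-source teacher model would have assigned to the seed set.
# For illustration only, dimension 0 of the embedding encodes quality.
seed_emb = rng.normal(size=(512, 16))
teacher_score = np.where(seed_emb[:, 0] > 0, 5.0, 1.0)

# Distillation: fit a tiny logistic "quality head" on the seed annotations.
y = (teacher_score >= 4).astype(float)  # binarize: high-value vs. the rest
w, b = np.zeros(16), 0.0
for _ in range(300):                    # plain batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(seed_emb @ w + b)))
    w -= 0.5 * (seed_emb.T @ (p - y)) / len(y)
    b -= 0.5 * float(np.mean(p - y))

# Full-scale inference: score the whole corpus with the cheap classifier
# instead of calling the expensive teacher on every document.
corpus_emb = rng.normal(size=(10_000, 16))
corpus_scores = 1.0 / (1.0 + np.exp(-(corpus_emb @ w + b)))
kept = corpus_emb[corpus_scores > 0.5]
```

The point of the design is the cost asymmetry: the teacher scores only the seed set, while the distilled classifier, which is orders of magnitude cheaper, handles the full corpus.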
### L3: Synthetic and Augmented Data

**Goal**: Use synthetic data to compensate for the uniformity of natural corpora in format and scenario, enhancing the model's Chain-of-Thought (CoT) capabilities.

Natural web data is mostly declarative text. To strengthen instruction following and multi-turn interaction, we built the L3 synthetic data layer:

- **Q&A Pair Generation**: High-performance models rewrite declarative documents into question-answer pairs, constructing QA-style data.
- **Multi-turn Dialogue Synthesis**: Simulated teacher-student tutoring scenarios generate multi-turn dialogues with follow-up questions, corrections, and guidance.
- **Multi-style Rewriting**: Single-source data is rewritten into multiple styles (rigorous textbook, competition problem-solving, intuitive popular science) to improve generalization.
- **Knowledge-Point Textbook Generation**: Systematic textbook-like content is generated around specific knowledge points so the model masters core mathematical concepts.
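One way to picture the fan-out is as prompt construction: each source document becomes several synthesis requests, one per target format. The prompt wording below is hypothetical rather than the production prompts, and the calls to the generator ensemble are omitted:

```python
STYLES = ["rigorous textbook", "competition solution", "intuitive popular science"]

def qa_prompt(doc: str) -> str:
    # Q&A pair generation: declarative text -> question-answer pair.
    return ("Rewrite the following passage as a question-answer pair. "
            "Keep every formula in LaTeX.\n\n" + doc)

def dialogue_prompt(doc: str) -> str:
    # Multi-turn dialogue synthesis: teacher-student tutoring scenario.
    return ("Turn the passage below into a multi-turn teacher-student dialogue "
            "with follow-up questions, corrections, and guidance.\n\n" + doc)

def style_prompts(doc: str) -> list[str]:
    # Multi-style rewriting: one source document, several target styles.
    return [f"Rewrite the passage below in a {s} style.\n\n{doc}" for s in STYLES]

def synthesis_jobs(doc: str) -> list[str]:
    """Fan one source document out into requests for the generator ensemble."""
    return [qa_prompt(doc), dialogue_prompt(doc), *style_prompts(doc)]
```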
| Dataset | # Tokens | # Documents |
|:---|:---:|:---:|
| UltraData-Math-L1 | 159.4B | 85.56M |
| UltraData-Math-L3 | 37.1B | 31.87M |

## 📈 Experimental Results

We used the **MiniCPM-1.2B** model architecture and the **MiniCPM3-4B** tokenizer for experimental verification. Each experiment trained on **100 billion tokens**, enough to verify data performance comprehensively at a parameter scale with manageable compute. We evaluated models with the Lighteval library; all metrics are reported in the **zero-shot** setting. Evaluation benchmarks include:

- **Mathematical Reasoning:** GSM8K, MATH, R-Bench, Math-Bench
- **Code Generation:** HumanEval, MBPP
- **Comprehensive Knowledge:** MMLU, MMLU-STEM

### L0 Parser Ablation Study

Starting from the same source data, we extracted with different parsers and trained independently on each result to compare parsing strategies directly:

| Parser | Average | MMLU | GSM8K | HumanEval | MATH | MBPP | MMLU-STEM |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| **UltraData-Math-L0-parser (Ours)** | **43.44** | 51.41 | 54.97 | **31.71** | **28.72** | 47.10 | 46.76 |
| trafilatura + w3m | 42.33 | 50.95 | 54.51 | 27.44 | 27.64 | 47.93 | 45.52 |
| trafilatura | 42.44 | 51.42 | 56.03 | 26.83 | 28.08 | 45.64 | 46.62 |
| MegaMath | 42.32 | 51.46 | 54.06 | 29.88 | 26.04 | 45.64 | 46.81 |
| magic-html + w3m | 41.29 | 51.23 | 51.63 | 26.83 | 26.58 | 45.02 | 46.45 |

### Full Evaluation Results

We trained independently on each single dataset to compare data sources directly:

| Model | Average | MMLU | GSM8K | HumanEval | MATH | MBPP | MMLU-STEM | R-Bench | Math-Bench |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| **UltraData-Math (Ours)** | **43.79** | 51.67 | **61.79** | 32.93 | **37.02** | **49.27** | 45.93 | 23.38 | **48.33** |
| Nemotron-CC 4plus mind | 43.45 | 52.09 | 59.97 | 34.76 | 35.96 | 48.03 | 45.99 | 23.51 | 47.25 |
| Nemotron-CC 4plus | 42.62 | 51.96 | 58.45 | 35.37 | 33.40 | 46.47 | 45.67 | 22.74 | 46.92 |
| MegaMath-Web-Pro | 41.38 | 53.16 | 56.71 | 31.71 | 32.12 | 47.10 | 47.15 | 21.23 | 41.83 |
| FineMath-4+ | 40.51 | 50.90 | 56.25 | 29.88 | 29.84 | 48.96 | 44.98 | 18.93 | 44.33 |

## ❤️ Acknowledgements

- **L0 Parsing Layer**: [magic-html](https://github.com/opendatalab/magic-html), [w3m](http://w3m.sourceforge.net/), [trafilatura](https://github.com/adbar/trafilatura)
- **L3 Synthesis Layer**: [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct), [Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B), [GLM-4.5](https://huggingface.co/zai-org/GLM-4.5)
- **Seed Data**: [Nemotron-CC-Math](https://huggingface.co/datasets/nvidia/Nemotron-CC-Math-v1), [MegaMath](https://huggingface.co/datasets/LLM360/MegaMath), [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath)

## 📜 License

This project is licensed under the [Apache 2.0](./LICENSE) license.