ZhouChuYue committed · Commit 02b7903 · 1 Parent(s): c3a9d6a
Update README: Align terminology with paper (MATH500, OpenCompass, Refined Data, etc.)
README.md
CHANGED

@@ -52,14 +52,14 @@ High-quality pre-training data is crucial for enhancing the mathematical reasoni

- **Data Quality Level**: Existing datasets generally lack a systematic quality grading mechanism, leaving high-value mathematical content mixed with low-quality noise.
- **Data Diversity Level**: Mainstream datasets mostly originate from textbooks or competition question banks and lack the mathematical discussions and application scenarios found on real web pages; synthetic data comes in a narrow range of formats, making it difficult to cover diverse needs such as multi-turn dialogues and multi-style expressions.

To address these issues, we propose ***UltraData-Math***, a large-scale, high-quality pre-training dataset for mathematical reasoning tasks. The dataset is built on the [UltraData](xxx) L0-L4 Tiered Data Management Framework and contains four progressive levels:

- **L0 Raw Data Layer**: Developed a mathematical parser based on *magic-html*, combined with *w3m* layout-preserving rendering and multi-level fallback strategies, standardizing MathML, KaTeX, and AsciiMath into LaTeX format (see the parser sketch after this list).
- **L1 Filtered Data Layer**: Cleans noise through heuristic rules and performs document-level deduplication (see the filtering sketch after this list).
- **L2 Selected Data Layer**: Uses closed-source large models to annotate seed data and distills the annotations into a lightweight embedding classifier for efficient quality grading of the full corpus.
- **L3 Refined Data Layer**: Produces structured content with clear reasoning through rewriting, synthetic generation, and refinement in various formats such as Q&A, multi-turn dialogues, multi-style rewriting, and knowledge-grounded textbooks.
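
To make the L0 bullet concrete, here is a minimal sketch of the multi-level fallback idea: try a structured magic-html extraction first, fall back to a `w3m -dump` text rendering, and finally to a bare tag strip so no page is silently lost. It only illustrates the control flow; the `magic_html` call signature and return structure are assumptions, and the released parser additionally normalizes MathML, KaTeX, and AsciiMath into LaTeX downstream.

```python
# Illustrative sketch of a multi-level fallback extraction chain (not the released L0 parser).
import re
import subprocess


def extract_with_magic_html(html: str, url: str = "") -> str:
    """Primary path: structured extraction via magic-html (import and API are assumed)."""
    from magic_html import GeneralExtractor  # assumed package/API
    result = GeneralExtractor().extract(html, base_url=url)  # return structure assumed
    return result.get("html", "") if isinstance(result, dict) else str(result)


def extract_with_w3m(html: str) -> str:
    """Fallback: layout-preserving text rendering via the w3m CLI (`w3m -dump`)."""
    proc = subprocess.run(
        ["w3m", "-dump", "-T", "text/html"],
        input=html, capture_output=True, text=True, check=True,
    )
    return proc.stdout


def extract_plain_text(html: str) -> str:
    """Last resort: strip tags so no document is dropped outright."""
    return re.sub(r"<[^>]+>", " ", html)


def parse_document(html: str, url: str = "") -> str:
    """Multi-level fallback: try the richest extractor first, degrade gracefully."""
    extractors = (lambda h: extract_with_magic_html(h, url), extract_with_w3m, extract_plain_text)
    for extract in extractors:
        try:
            text = extract(html)
            if text.strip():
                return text  # math markup would then be normalized to LaTeX downstream
        except Exception:
            continue  # fall through to the next, more permissive extractor
    return ""
```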
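
Similarly, the L1 bullet can be read as two cheap passes over every document: heuristic noise filters followed by document-level deduplication. The sketch below uses simple length/math-density rules and a hash of normalized text for near-exact dedup; the actual L1 operators and thresholds are not published here, so treat every rule as an illustrative assumption.

```python
# Minimal sketch of L1-style cleaning: heuristic filters + document-level dedup.
# Thresholds and rules are illustrative assumptions, not the production operators.
import hashlib
import re


def looks_like_math_document(text: str, min_chars: int = 200) -> bool:
    """Cheap heuristics: minimum length plus some math signal (digits, operators, LaTeX commands)."""
    if len(text) < min_chars:
        return False
    math_signal = len(re.findall(r"[=+\-*/^]|\\\w+|\d", text))
    return math_signal / max(len(text), 1) > 0.01


def doc_fingerprint(text: str) -> str:
    """Document-level fingerprint on whitespace/case-normalized text for near-exact dedup."""
    normalized = re.sub(r"\s+", " ", text).strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()


def filter_and_dedup(docs):
    seen, kept = set(), []
    for doc in docs:
        if not looks_like_math_document(doc):
            continue  # drop noisy / non-mathematical pages
        fp = doc_fingerprint(doc)
        if fp in seen:
            continue  # drop duplicates at the document level
        seen.add(fp)
        kept.append(doc)
    return kept
```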

Experiments show that on the MiniCPM-1.2B architecture, ***UltraData-Math*** achieves a score of **37.02** on the MATH500 benchmark, an improvement of **+3.62** compared to Nemotron-CC 4plus; it achieves **61.79** on GSM8K, an improvement of **+3.34**, while maintaining code generation and general knowledge capabilities.

***UltraData-Math*** has been applied to the mathematical pre-training of the [MiniCPM Series](https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b) models.

@@ -69,7 +69,7 @@ Experiments show that on the MiniCPM-1B architecture, ***UltraData-Math*** achie

## 🏗️ Data Processing Pipeline

To break through the limitations of existing mathematical datasets in quality and diversity, we established a refined grading standard centered on "mathematical content integrity" and "information density". ***UltraData-Math*** adopts the **L0-L4 Tiered Data Management Framework** proposed in the [UltraData](xxx) paper. Through standardized level definitions, it enables orderly management and efficient flow of mathematical data assets: each level represents higher data purity and mathematical value and corresponds to a more refined degree of processing.

<div align="center">
<img src="assets/ultradata-math-pipeline.png" width="900"/>

@@ -111,9 +111,9 @@ Although L1 data has a clean format, the content quality varies. The L2 phase in

- *Retention*: Content containing detailed problem-solving steps, mathematical concept explanations, and high-level academic discussions.
- *Exclusion*: Simple stacking of nouns, meaningless lists of numbers, juvenile content, or noise from non-mathematical fields.
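
The retention and exclusion criteria above are what the distilled classifier has to enforce at corpus scale: a closed-source LLM labels a seed set, a small embedding model featurizes documents, and a lightweight classifier scores the full corpus. Below is a minimal sketch, assuming a generic sentence-embedding model and a logistic-regression head; the real seed prompts, label scheme, and model choices are not specified here.

```python
# Sketch of distilling LLM quality annotations into a lightweight embedding classifier.
# The embedding model, labels, and threshold are illustrative assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Seed documents annotated by a closed-source LLM (labels are illustrative: 1 = keep, 0 = drop).
seed_texts = [
    "We solve x^2 - 5x + 6 = 0 by factoring: (x - 2)(x - 3) = 0, so x = 2 or x = 3.",
    "best cheap deals click now limited offer!!!",
]
seed_labels = [1, 0]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed lightweight embedding model
classifier = LogisticRegression(max_iter=1000)
classifier.fit(encoder.encode(seed_texts), seed_labels)


def quality_scores(texts):
    """Probability that a document is high-quality mathematical content."""
    return classifier.predict_proba(encoder.encode(texts))[:, 1]


corpus = ["We prove the claim by induction on n.", "win a free phone today"]
keep_mask = quality_scores(corpus) > 0.5  # threshold is an assumption, tuned in practice
```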

### L3: Refined Data

**Goal**: Compensate for the homogeneity of natural corpora in format and scenarios through rewriting, synthetic generation, and refinement, enhancing the model's Chain-of-Thought (CoT) capabilities.
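
As one hypothetical illustration of this goal, a declarative web passage can be rewritten into a multi-turn dialogue with explicit reasoning by prompting a strong LLM. The prompt wording, model name, and OpenAI client below are assumptions for illustration only, not the released L3 recipe.

```python
# Hypothetical illustration of L3-style refinement: rewriting a declarative passage
# into a multi-turn Q&A dialogue with explicit reasoning steps.
from openai import OpenAI

client = OpenAI()  # any capable LLM client would do; this choice is an assumption

REWRITE_PROMPT = """You are given a passage about mathematics.
Rewrite it as a multi-turn dialogue between a student and a teacher.
Each teacher reply must show the reasoning steps and keep all formulas in LaTeX.

Passage:
{passage}
"""


def refine_to_dialogue(passage: str, model: str = "gpt-4o-mini") -> str:
    """Return a multi-turn dialogue version of the passage (model name is an assumption)."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": REWRITE_PROMPT.format(passage=passage)}],
    )
    return response.choices[0].message.content
```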
Natural web data is mostly declarative text. To enhance the model's instruction following and multi-turn interaction capabilities, we built the L3 synthetic data layer:

@@ -130,9 +130,9 @@ Natural web data is mostly declarative text. To enhance the model's instruction

## 📈 Experimental Results

We used the **MiniCPM-1.2B** model architecture and the **MiniCPM3-4B** tokenizer for experimental verification. Each experiment was conducted with a training volume of **100 billion tokens**, using the **Decay Verification** method (annealing from a 1.3T base model). We adopted [OpenCompass](https://github.com/open-compass/opencompass) as our evaluation framework. Evaluation benchmarks include:

- **Mathematical Reasoning:** GSM8K, MATH500, Math-Bench, R-Bench-Math
- **Code Generation:** HumanEval, MBPP
- **Comprehensive Knowledge:** MMLU, MMLU-STEM

@@ -144,9 +144,9 @@ We evaluated data quality using the **Decay Verification** method: continuing pr

To fairly compare different parsing strategies, we conducted experiments on a data subset sampled from the **2023-2024** distribution. We re-parsed the raw HTML from this source using different parsers and **applied the same L1 cleaning operators to all baselines**. This comparison demonstrates the **overall benefit of our L0 Parser + L1 Filtering pipeline** against other parsers under identical cleaning conditions.

| Parser | Average | MMLU | MMLU-STEM | MATH500 | GSM8K | MBPP | HumanEval |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| **UltraData-Math-Parser (Ours)** | **43.44** | 51.41 | 46.76 | **28.72** | 54.97 | 47.10 | **31.71** |
| trafilatura + w3m | 42.33 | 50.95 | 45.52 | 27.64 | 54.51 | **47.93** | 27.44 |
| trafilatura | 42.44 | 51.42 | 46.62 | 28.08 | **56.03** | 45.64 | 26.83 |
| Megamath | 42.32 | **51.46** | **46.81** | 26.04 | 54.06 | 45.64 | 29.88 |

@@ -157,19 +157,19 @@ To fairly compare different parsing strategies, we conducted experiments on a da

To validate the effectiveness of our L0-L3 hierarchical framework, we conducted ablation studies comparing models trained on different tiers of UltraData-Math. Unlike the L0 parser comparison above (which used a 2023-2024 subset), these results are based on the **full dataset**.

| Dataset | Average | MMLU | MMLU-STEM | MATH500 | GSM8K | MBPP | HumanEval |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **UltraData-Math-L1** | 42.31 | 51.41 | 45.44 | 27.78 | 54.66 | 44.71 | 29.88 |
| **UltraData-Math-L2** | 42.57 | 50.93 | 45.52 | 29.20 | 52.92 | 44.50 | 32.32 |
| **UltraData-Math-L3** | **46.44** | **51.67** | **45.93** | **37.02** | **61.79** | **49.27** | **32.93** |

*Note: Results demonstrate that higher-tier data (L3) significantly boosts mathematical reasoning (MATH500, GSM8K) and general capabilities.*
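
One practical note for reading these tables: the Average column is consistent with the unweighted mean of the six benchmark scores in each row. The quick check below copies the numbers from the ablation table above; the averaging rule itself is an observation from those numbers, not something stated in the README.

```python
# Reading the ablation table: the Average column appears to be the unweighted mean
# of the six benchmark scores (an observation from the reported numbers, not a documented rule).
rows = {
    "UltraData-Math-L1": [51.41, 45.44, 27.78, 54.66, 44.71, 29.88],
    "UltraData-Math-L2": [50.93, 45.52, 29.20, 52.92, 44.50, 32.32],
    "UltraData-Math-L3": [51.67, 45.93, 37.02, 61.79, 49.27, 32.93],
}
for name, scores in rows.items():
    print(f"{name}: {sum(scores) / len(scores):.3f}")
# Prints roughly 42.313 / 42.565 / 46.435, i.e. the reported Average column up to rounding.
```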

### Full Evaluation Results
To compare against existing public mathematical pre-training datasets, we trained models independently on each dataset using the same model architecture and training budget (~100B tokens). The baselines include [Nemotron-CC-Math](https://huggingface.co/datasets/nvidia/Nemotron-CC-Math-v1), [MegaMath-Web-Pro](https://huggingface.co/datasets/LLM360/MegaMath), and [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath). All models are evaluated under identical conditions for a fair comparison:

| Model | Average | MMLU | MMLU-STEM | MATH500 | GSM8K | MBPP | HumanEval | R-Bench-Math | Math-Bench |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| **UltraData-Math (Ours)** | **43.79** | 51.67 | 45.93 | **37.02** | **61.79** | **49.27** | 32.93 | 23.38 | **48.33** |
| Nemotron-cc 4plus mind | 43.45 | 52.09 | 45.99 | 35.96 | 59.97 | 48.03 | 34.76 | **23.51** | 47.25 |