ZhouChuYue committed · commit bffeb3e · parent 6f63dcd

Update README: add What's New section, unify pp notation for scores

Files changed: README.md (+5 -1) · README_ZH.md (+1 -1)
README.md CHANGED

@@ -45,6 +45,10 @@ default_config_name: UltraData-Math-L3-Conversation-Synthetic

 ***UltraData-Math*** is a large-scale, high-quality mathematical pre-training dataset totaling **290B+ tokens** across three progressive tiers—**L1** (170.5B tokens web corpus), **L2** (33.7B tokens quality-selected), and **L3** (88B tokens multi-format refined)—designed to systematically enhance mathematical reasoning in LLMs. It has been applied to the mathematical pre-training of the [MiniCPM Series](https://huggingface.co/collections/openbmb/minicpm4) models.

+## 🆕 What's New
+
+- **2026.02.09**: Released UltraData-Math (290B+ tokens), a large-scale high-quality mathematical pre-training dataset with three progressive tiers (L1/L2/L3).
+
 ## 📚 Introduction

 High-quality pre-training data is crucial for enhancing the mathematical reasoning capabilities of large language models (LLMs). However, existing mathematical pre-training data construction schemes have the following shortcomings:

@@ -60,7 +64,7 @@ To address these issues, we propose ***UltraData-Math***—a large-scale high-qu
 - **L2 Selected Data**: Uses proprietary large models to annotate seed data and distills it into a lightweight embedding classifier to achieve efficient quality grading of the full corpus.
 - **L3 Refined Data**: Produces structured content with clear reasoning through rewriting, synthetic generation, and refinement in various formats such as Q&A, multi-turn dialogues, multi-style rewriting, and knowledge-grounded textbooks.

-Experiments show that on the MiniCPM-1.2B architecture, ***UltraData-Math*** achieves a score of **37.
+Experiments show that on the MiniCPM-1.2B architecture, ***UltraData-Math*** achieves a score of **37.02pp** on the MATH500 benchmark, an improvement of **+3.62pp** compared to Nemotron-CC 4plus; it achieves **61.79pp** on GSM8K, an improvement of **+3.34pp**, while maintaining code generation and general knowledge capabilities.

 ***UltraData-Math*** has been applied to the mathematical pre-training of the [MiniCPM Series](https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b) models.
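The **L2 Selected Data** step described in the README diff above annotates a small seed set with a proprietary large model and distills those labels into a lightweight embedding classifier that grades the full corpus. The dataset card does not name the embedding model, label scale, or classifier head, so the following Python sketch only illustrates that general recipe under assumed choices (a sentence-transformers encoder, a 0-3 quality scale, a logistic-regression head):

```python
# Minimal sketch of L2-style quality grading: LLM quality labels on a seed set are
# distilled into a lightweight classifier over text embeddings, which then grades
# the full corpus cheaply. All concrete choices below are illustrative assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Seed documents with quality labels (assumed 0 = junk ... 3 = high-quality math)
# produced by the proprietary annotator model.
seed_docs = [
    "We prove that sqrt(2) is irrational. Suppose sqrt(2) = p/q in lowest terms ...",
    "BUY CHEAP CALCULATORS NOW!!! Best prices, click here ...",
]
seed_labels = [3, 0]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed lightweight encoder
clf = LogisticRegression(max_iter=1000).fit(embedder.encode(seed_docs), seed_labels)

def grade_corpus(docs, batch_size=256):
    """Assign a distilled quality grade to every document in the full corpus."""
    grades = []
    for i in range(0, len(docs), batch_size):
        grades.extend(clf.predict(embedder.encode(docs[i:i + batch_size])).tolist())
    return grades

print(grade_corpus(["Let x^2 - 5x + 6 = 0; factoring gives (x - 2)(x - 3) = 0, so x = 2 or x = 3."]))
```

In a real pipeline the seed set, label granularity, and acceptance threshold for L2 would be tuned against held-out annotations; the sketch above only shows the distillation shape.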
README_ZH.md CHANGED

@@ -25,7 +25,7 @@
 - **L2 Selected Data Tier**: Uses closed-source large models to annotate seed data and distills the annotations into a lightweight embedding classifier for efficient quality grading of the full corpus.
 - **L3 Refined Data Tier**: Generates structured content with clear reasoning chains through rewriting, synthetic generation, and refinement, covering formats such as Q&A, multi-turn dialogues, multi-style rewriting, and knowledge-grounded textbooks.

-Experiments show that on the MiniCPM-1.2B architecture, ***UltraData-Math*** reaches **37.
+Experiments show that on the MiniCPM-1.2B architecture, ***UltraData-Math*** reaches **37.02pp** on the MATH500 benchmark, a **+3.62pp** improvement over Nemotron-CC 4plus; on GSM8K it reaches **61.79pp**, a **+3.34pp** improvement, while maintaining code generation and general knowledge capabilities.

 ***UltraData-Math*** has been applied to the mathematical pre-training of the [MiniCPM Series](https://huggingface.co/collections/openbmb/minicpm-4-6841ab29d180257e940baa9b) models.
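The **L3 Refined Data** tier, described in both README diffs, rewrites and synthesizes source text into structured formats such as Q&A and multi-turn dialogues. The prompt wording, turn markers, and record schema below are hypothetical; this is only a sketch of how a rewriting-based refinement step of that kind might be wired up, not the pipeline actually used for UltraData-Math:

```python
# Hypothetical L3-style refinement step: a raw math passage is rewritten by a
# generator model into a multi-turn dialogue, then parsed into a conversation
# record. No API call is made here; a canned response stands in for the model.
def build_rewrite_prompt(passage):
    """Wrap a source passage in a rewriting instruction for the generator model."""
    return (
        "Rewrite the following mathematical text as a dialogue between a student "
        "and a teacher, keeping every reasoning step explicit. Format each turn "
        "as 'Student:' or 'Teacher:' on its own line.\n\n"
        f"Text:\n{passage}"
    )

def parse_dialogue(response):
    """Parse the generator output into role/content turns for a conversation record."""
    turns = []
    for line in response.splitlines():
        if line.startswith("Student:"):
            turns.append({"role": "user", "content": line[len("Student:"):].strip()})
        elif line.startswith("Teacher:"):
            turns.append({"role": "assistant", "content": line[len("Teacher:"):].strip()})
    return turns

fake_response = (
    "Student: Why is the sum of two even numbers even?\n"
    "Teacher: Write them as 2a and 2b; their sum is 2(a + b), which is even."
)
print(parse_dialogue(fake_response))
```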
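Finally, a loading sketch for the dataset described above. The config name comes from the `default_config_name` visible in the README front-matter hunk; the repository id and split are assumptions and should be replaced with the actual values from the dataset page:

```python
# Stream one record from an assumed L3 conversation config of UltraData-Math.
from datasets import load_dataset

ds = load_dataset(
    "openbmb/UltraData-Math",                          # assumed repository id
    name="UltraData-Math-L3-Conversation-Synthetic",   # default config per the front matter
    split="train",                                     # assumed split name
    streaming=True,                                    # avoid downloading the full 290B+ token corpus
)
print(next(iter(ds)))
```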