---
license: apache-2.0
datasets:
- MathGenie/MathCode-Pile
language:
- en
metrics:
- accuracy
base_model:
- codellama/CodeLlama-7b-hf
pipeline_tag: text-generation
tags:
- math
---

# MathCoder2

### Introduction

The MathCoder2 models are created by continued pretraining on [MathCode-Pile](https://huggingface.co/datasets/MathGenie/MathCode-Pile). They are introduced in the paper [MathCoder2: Better Math Reasoning from Continued Pretraining on Model-translated Mathematical Code](https://arxiv.org/abs/2410.08196).

The mathematical pretraining dataset includes mathematical code accompanied by natural-language reasoning steps, making it a superior resource for models aimed at advanced mathematical reasoning tasks.
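
The models can be loaded with the standard `transformers` text-generation API. The snippet below is a minimal usage sketch rather than an official example from the authors: the repository id and the plain-text prompt format are assumptions, so adjust them to the MathCoder2 checkpoint you actually use.

```python
# Minimal usage sketch. Assumptions: the checkpoint id below and the
# plain-text prompt format are illustrative, not confirmed by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MathGenie/MathCoder2-CodeLlama-7b"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps a 7B model in roughly 14 GB
    device_map="auto",           # requires the accelerate package
)

prompt = "Find the sum of the first 100 positive integers."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, not the prompt.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```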

### Evaluation
| |  |