Add paper link and task categories
This PR improves the dataset card by:
- Adding the relevant task categories (`image-text-to-text`, `audio-text-to-text`, `text-generation`) based on the multimodal nature of the benchmark.
- Linking the dataset to the original research paper: [Multiplication in Multimodal LLMs: Computation with Text, Image, and Audio Inputs](https://huggingface.co/papers/2604.18203).
- Adding descriptive tags and language metadata to improve discoverability.
README.md
CHANGED

@@ -1,103 +1,90 @@

Notes removed from the previous version:

- `expected_error_type` records the characteristic failure pattern the trap is intended to expose.
- In the checked-in v2 export, `design_family`, `canonical_target_heuristic`, and `target_heuristic` align by construction.
- Modality and Evaluation Notes: "The current Qwen experiments in this repository evaluate text and image, not audio."
- Release Notes: "The Hugging Face dataset repo is currently configured with the `gpl-2.0` license."
---
license: gpl-2.0
task_categories:
- image-text-to-text
- audio-text-to-text
- text-generation
language:
- en
tags:
- mathematics
- arithmetic
- multimodal
---

# MultimodalMathBenchmarks

This repository contains the datasets for the paper [Multiplication in Multimodal LLMs: Computation with Text, Image, and Audio Inputs](https://huggingface.co/papers/2604.18203).

It covers the public benchmark datasets and their modality assets (text, images, and audio) used to evaluate the arithmetic capabilities of multimodal LLMs.

## Canonical Upload Manifest

| HF path | Local source | Count | Purpose |
| --- | --- | ---: | --- |
| `SharedMultimodalGrid.csv` | `SavedData/SharedMultimodalGrid.csv` | 10,000 rows | Canonical metadata table for the paired multimodal multiplication benchmark |
| `TextFiles/` | `SavedData/TextFiles/` | 10,000 `.txt` files | Text renderings for `mm_*` benchmark items |
| `Images/` | `SavedData/Images/` | 10,000 `.png` files | Image renderings for `mm_*` benchmark items |
| `AudioFiles/` | `SavedData/AudioFiles/` | 10,000 `.mp3` files | Audio renderings for `mm_*` benchmark items |
| `HDSv2.csv` | `SavedData/HDSv2.csv` | 1,000 rows | Canonical heuristic-disagreement probe set |
| `HDSv2Images/` | `SavedData/HDSv2Images/` | 144 `.png` files | Image renderings for the `HDSv2` test split only |
| `Trapsv2.csv` | `SavedData/Trapsv2.csv` | 30 rows | Canonical adversarial trap set |
| `Trapsv2Images/` | `SavedData/Trapsv2Images/` | 30 `.png` files | Image renderings for every trap item |
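
A local download can be sanity-checked against this manifest by counting files and CSV rows. A minimal sketch, assuming the files sit under a single root directory laid out as in the `HF path` column (the function name and return shape are illustrative):

```python
import csv
from pathlib import Path

# Expected asset counts from the Canonical Upload Manifest.
EXPECTED_DIRS = {
    "TextFiles": ("*.txt", 10_000),
    "Images": ("*.png", 10_000),
    "AudioFiles": ("*.mp3", 10_000),
    "HDSv2Images": ("*.png", 144),
    "Trapsv2Images": ("*.png", 30),
}
EXPECTED_TABLES = {
    "SharedMultimodalGrid.csv": 10_000,
    "HDSv2.csv": 1_000,
    "Trapsv2.csv": 30,
}

def check_manifest(root: Path) -> list:
    """Return a list of (name, expected, actual) mismatches; empty means OK."""
    problems = []
    for folder, (pattern, expected) in EXPECTED_DIRS.items():
        actual = len(list((root / folder).glob(pattern)))
        if actual != expected:
            problems.append((folder, expected, actual))
    for table, expected in EXPECTED_TABLES.items():
        with open(root / table, newline="") as f:
            actual = sum(1 for _ in csv.DictReader(f))  # data rows, header excluded
        if actual != expected:
            problems.append((table, expected, actual))
    return problems
```

An empty return value means every directory and table matches the counts above.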

## Dataset Summary

The release contains three benchmark families:

1. `SharedMultimodalGrid.csv`
   - 10,000 shared multiplication problems paired across text, image, and audio.
   - Split counts: `train=7026`, `val=1416`, `test=1558`.
   - Each row uses an `mm_XXXXX` ID that maps directly to:
     - `TextFiles/mm_XXXXX.txt`
     - `Images/mm_XXXXX.png`
     - `AudioFiles/mm_XXXXX.mp3`

2. `HDSv2.csv`
   - 1,000 heuristic-disagreement problems for fingerprinting and probe-style evaluation.
   - Split counts: `train=701`, `val=155`, `test=144`.
   - `HDSv2Images/` contains exactly the 144 `test`-split item IDs from `HDSv2.csv`.

3. `Trapsv2.csv`
   - 30 adversarial trap problems designed to target heuristic-specific failures.
   - No split column; all rows are held-out trap items.
   - `Trapsv2Images/` contains one PNG for every row in `Trapsv2.csv`.
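
Because every `mm_XXXXX` ID maps directly to one file per modality, resolving a row's paired assets is a simple path mapping. A minimal sketch (the sample rows and the root default are illustrative; column names follow the `SharedMultimodalGrid.csv` schema):

```python
import csv
import io
from pathlib import Path

def asset_paths(item_id: str, root: Path = Path(".")) -> dict:
    """Map an mm_XXXXX benchmark ID to its paired text/image/audio files."""
    return {
        "text": root / "TextFiles" / f"{item_id}.txt",
        "image": root / "Images" / f"{item_id}.png",
        "audio": root / "AudioFiles" / f"{item_id}.mp3",
    }

# Illustrative two-row sample in the SharedMultimodalGrid.csv schema.
sample = io.StringIO(
    "id,a,b,a_times_b,split\n"
    "mm_00001,12,34,408,train\n"
    "mm_00002,56,78,4368,test\n"
)
for row in csv.DictReader(sample):
    paths = asset_paths(row["id"])
    print(row["id"], row["split"], paths["image"])
```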

## Dataset Structure

### Shared Multimodal Grid

Canonical table: `SharedMultimodalGrid.csv`

Columns:
`id, a, b, a_times_b, template_a, template_b, digits_a, digits_b, nonzero_a, nonzero_b, digit_total, nonzero_total, complexity_c, stratum_id, split`

Notes:
- `id` is the stable benchmark ID, e.g. `mm_00001`.
- `a`, `b`, and `a_times_b` are the multiplication operands and exact product.
- `digit_total`, `nonzero_total`, and `complexity_c` support difficulty analyses.
- `split` is the deterministic train/val/test assignment used by the pipeline.
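
The difficulty columns can be recomputed from the operands alone. A minimal sketch, assuming `complexity_c` is the arithmetic load described in the Modality and Evaluation Notes, i.e. `digit_total * nonzero_total` (that formula, and the function name, are assumptions for illustration):

```python
def difficulty_stats(a: int, b: int) -> tuple:
    """Recompute digit_total, nonzero_total, and (assumed) complexity_c for one row."""
    digits = f"{a}{b}"
    digit_total = len(digits)                      # total digits across both operands
    nonzero_total = sum(d != "0" for d in digits)  # digits that are not zero
    complexity_c = digit_total * nonzero_total     # assumed arithmetic-load definition
    return digit_total, nonzero_total, complexity_c

print(difficulty_stats(307, 45))  # "30745" -> (5, 4, 20)
```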

### HDSv2

Canonical table: `HDSv2.csv`

Columns:
`id, a, b, product, design_family, canonical_target_heuristic, canonical_target_margin, ot_cost, dd_cost, rc_cost, heuristic_definition_version, target_heuristic, ot_score, dd_score, rc_score, category, notes, digit_total, nonzero_total, complexity_c, split`

### Trapsv2

Canonical table: `Trapsv2.csv`

Columns:
`id, a, b, product, trap_type, design_family, canonical_target_heuristic, heuristic_definition_version, target_heuristic, expected_error_type, notes, digit_total, nonzero_total, complexity_c`

## Modality and Evaluation Notes

- The paired multimodal benchmark includes text, image, and audio assets.
- `HDSv2Images/` is intentionally a test-only image release for probe evaluation.
- `Trapsv2Images/` covers the full trap set.
- Studies on this dataset show that accuracy falls sharply as arithmetic load $C$ (the product of total and non-zero digit counts) grows.