nielsr HF Staff committed on
Commit 13c5bdf · verified · 1 Parent(s): bb1304e

Add paper link and task categories


This PR improves the dataset card by:
- Adding the relevant task categories (`image-text-to-text`, `audio-text-to-text`, `text-generation`) based on the multimodal nature of the benchmark.
- Linking the dataset to the original research paper: [Multiplication in Multimodal LLMs: Computation with Text, Image, and Audio Inputs](https://huggingface.co/papers/2604.18203).
- Adding descriptive tags and language metadata to improve discoverability.
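For reviewers, a minimal Python sketch of consuming the schema documented in the card: it maps an `mm_XXXXX` ID to its paired modality files and checks the product and `complexity_c` columns for internal consistency. The sample rows are illustrative only, not taken from the actual release.

```python
import csv
import io

# Illustrative rows mimicking the SharedMultimodalGrid.csv schema
# (operands and counts are made up; only the column layout follows the card).
sample = io.StringIO(
    "id,a,b,a_times_b,digit_total,nonzero_total,complexity_c,split\n"
    "mm_00001,23,47,1081,4,4,16,train\n"
    "mm_00002,105,9,945,4,3,12,test\n"
)

def asset_paths(item_id: str) -> dict:
    """Map an mm_XXXXX benchmark ID to its three modality renderings."""
    return {
        "text": f"TextFiles/{item_id}.txt",
        "image": f"Images/{item_id}.png",
        "audio": f"AudioFiles/{item_id}.mp3",
    }

rows = list(csv.DictReader(sample))
for row in rows:
    # Consistency checks implied by the card: exact product, and
    # complexity_c = total digit count x non-zero digit count.
    assert int(row["a"]) * int(row["b"]) == int(row["a_times_b"])
    assert int(row["digit_total"]) * int(row["nonzero_total"]) == int(row["complexity_c"])

print(asset_paths(rows[0]["id"])["image"])  # Images/mm_00001.png
```

The file-naming convention follows the ID mapping documented in the card; the repository's own loading pipeline may differ.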

Files changed (1)
  1. README.md +90 -103
README.md CHANGED
@@ -1,103 +1,90 @@
- ---
- license: gpl-2.0
- ---
-
- # MultimodalMathBenchmarks
-
- This file describes the canonical dataset release to publish from this
- repository to `cjerzak/MultimodalMathBenchmarks`. It covers the public benchmark datasets and their modality assets.
-
- ## Canonical Upload Manifest
-
- Upload the following local sources from `SavedData/` to the dataset repo root on Hugging Face:
-
- | HF path | Local source | Count | Purpose |
- | --- | --- | ---: | --- |
- | `SharedMultimodalGrid.csv` | `SavedData/SharedMultimodalGrid.csv` | 10,000 rows | Canonical metadata table for the paired multimodal multiplication benchmark |
- | `TextFiles/` | `SavedData/TextFiles/` | 10,000 `.txt` files | Text renderings for `mm_*` benchmark items |
- | `Images/` | `SavedData/Images/` | 10,000 `.png` files | Image renderings for `mm_*` benchmark items |
- | `AudioFiles/` | `SavedData/AudioFiles/` | 10,000 `.mp3` files | Audio renderings for `mm_*` benchmark items |
- | `HDSv2.csv` | `SavedData/HDSv2.csv` | 1,000 rows | Canonical heuristic-disagreement probe set |
- | `HDSv2Images/` | `SavedData/HDSv2Images/` | 144 `.png` files | Image renderings for the `HDSv2` test split only |
- | `Trapsv2.csv` | `SavedData/Trapsv2.csv` | 30 rows | Canonical adversarial trap set |
- | `Trapsv2Images/` | `SavedData/Trapsv2Images/` | 30 `.png` files | Image renderings for every trap item |
-
- ## Dataset Summary
-
- The release contains three benchmark families:
-
- 1. `SharedMultimodalGrid.csv`
-    - 10,000 shared multiplication problems paired across text, image, and audio.
-    - Split counts: `train=7026`, `val=1416`, `test=1558`.
-    - Each row uses an `mm_XXXXX` ID that maps directly to:
-      - `TextFiles/mm_XXXXX.txt`
-      - `Images/mm_XXXXX.png`
-      - `AudioFiles/mm_XXXXX.mp3`
-
- 2. `HDSv2.csv`
-    - 1,000 heuristic-disagreement problems for fingerprinting and probe-style evaluation.
-    - Split counts: `train=701`, `val=155`, `test=144`.
-    - `HDSv2Images/` contains exactly the 144 `test`-split item IDs from `HDSv2.csv`.
-
- 3. `Trapsv2.csv`
-    - 30 adversarial trap problems designed to target heuristic-specific failures.
-    - No split column; all rows are held-out trap items.
-    - `Trapsv2Images/` contains one PNG for every row in `Trapsv2.csv`.
-
- ## Dataset Structure
-
- ### Shared Multimodal Grid
-
- Canonical table: `SharedMultimodalGrid.csv`
-
- Columns:
-
- `id, a, b, a_times_b, template_a, template_b, digits_a, digits_b, nonzero_a, nonzero_b, digit_total, nonzero_total, complexity_c, stratum_id, split`
-
- Notes:
-
- - `id` is the stable benchmark ID, e.g. `mm_00001`.
- - `a`, `b`, and `a_times_b` are the multiplication operands and exact product.
- - `template_a` and `template_b` record the rendering template choices used for the paired benchmark export.
- - `digit_total`, `nonzero_total`, and `complexity_c` support difficulty analyses.
- - `split` is the deterministic train/val/test assignment used by the pipeline.
-
- ### HDSv2
-
- Canonical table: `HDSv2.csv`
-
- Columns:
-
- `id, a, b, product, design_family, canonical_target_heuristic, canonical_target_margin, ot_cost, dd_cost, rc_cost, heuristic_definition_version, target_heuristic, ot_score, dd_score, rc_score, category, notes, digit_total, nonzero_total, complexity_c, split`
-
- Notes:
-
- - `design_family` is the construction family used to generate the item.
- - `canonical_target_heuristic` is the v2 cost-model winner.
- - `target_heuristic`, `ot_score`, `dd_score`, and `rc_score` are retained as legacy compatibility fields.
- - `heuristic_definition_version` identifies the versioned heuristic labeling scheme.
-
- ### Trapsv2
-
- Canonical table: `Trapsv2.csv`
-
- Columns:
-
- `id, a, b, product, trap_type, design_family, canonical_target_heuristic, heuristic_definition_version, target_heuristic, expected_error_type, notes, digit_total, nonzero_total, complexity_c`
-
- Notes:
-
- - `trap_type` names the adversarial construction family.
- - `expected_error_type` records the characteristic failure pattern the trap is intended to expose.
- - In the checked-in v2 export, `design_family`, `canonical_target_heuristic`, and `target_heuristic` align by construction.
-
- ## Modality and Evaluation Notes
-
- - The paired multimodal benchmark includes text, image, and audio assets.
- - The current Qwen experiments in this repository evaluate text and image, not audio.
- - `HDSv2Images/` is intentionally a test-only image release for probe evaluation.
- - `Trapsv2Images/` covers the full trap set.
-
- ## Release Notes
-
- - The Hugging Face dataset repo is currently configured with the `gpl-2.0` license.
 
+ ---
+ license: gpl-2.0
+ task_categories:
+ - image-text-to-text
+ - audio-text-to-text
+ - text-generation
+ language:
+ - en
+ tags:
+ - mathematics
+ - arithmetic
+ - multimodal
+ ---
+
+ # MultimodalMathBenchmarks
+
+ This repository contains the datasets for the paper [Multiplication in Multimodal LLMs: Computation with Text, Image, and Audio Inputs](https://huggingface.co/papers/2604.18203).
+
+ It covers the public benchmark datasets and their modality assets (text, images, and audio) used to evaluate the arithmetic capabilities of multimodal LLMs.
+
+ ## Canonical Upload Manifest
+
+ | HF path | Local source | Count | Purpose |
+ | --- | --- | ---: | --- |
+ | `SharedMultimodalGrid.csv` | `SavedData/SharedMultimodalGrid.csv` | 10,000 rows | Canonical metadata table for the paired multimodal multiplication benchmark |
+ | `TextFiles/` | `SavedData/TextFiles/` | 10,000 `.txt` files | Text renderings for `mm_*` benchmark items |
+ | `Images/` | `SavedData/Images/` | 10,000 `.png` files | Image renderings for `mm_*` benchmark items |
+ | `AudioFiles/` | `SavedData/AudioFiles/` | 10,000 `.mp3` files | Audio renderings for `mm_*` benchmark items |
+ | `HDSv2.csv` | `SavedData/HDSv2.csv` | 1,000 rows | Canonical heuristic-disagreement probe set |
+ | `HDSv2Images/` | `SavedData/HDSv2Images/` | 144 `.png` files | Image renderings for the `HDSv2` test split only |
+ | `Trapsv2.csv` | `SavedData/Trapsv2.csv` | 30 rows | Canonical adversarial trap set |
+ | `Trapsv2Images/` | `SavedData/Trapsv2Images/` | 30 `.png` files | Image renderings for every trap item |
+
+ ## Dataset Summary
+
+ The release contains three benchmark families:
+
+ 1. `SharedMultimodalGrid.csv`
+    - 10,000 shared multiplication problems paired across text, image, and audio.
+    - Split counts: `train=7026`, `val=1416`, `test=1558`.
+    - Each row uses an `mm_XXXXX` ID that maps directly to:
+      - `TextFiles/mm_XXXXX.txt`
+      - `Images/mm_XXXXX.png`
+      - `AudioFiles/mm_XXXXX.mp3`
+
+ 2. `HDSv2.csv`
+    - 1,000 heuristic-disagreement problems for fingerprinting and probe-style evaluation.
+    - Split counts: `train=701`, `val=155`, `test=144`.
+    - `HDSv2Images/` contains exactly the 144 `test`-split item IDs from `HDSv2.csv`.
+
+ 3. `Trapsv2.csv`
+    - 30 adversarial trap problems designed to target heuristic-specific failures.
+    - No split column; all rows are held-out trap items.
+    - `Trapsv2Images/` contains one PNG for every row in `Trapsv2.csv`.
+
+ ## Dataset Structure
+
+ ### Shared Multimodal Grid
+
+ Canonical table: `SharedMultimodalGrid.csv`
+
+ Columns:
+ `id, a, b, a_times_b, template_a, template_b, digits_a, digits_b, nonzero_a, nonzero_b, digit_total, nonzero_total, complexity_c, stratum_id, split`
+
+ Notes:
+ - `id` is the stable benchmark ID, e.g. `mm_00001`.
+ - `a`, `b`, and `a_times_b` are the multiplication operands and exact product.
+ - `digit_total`, `nonzero_total`, and `complexity_c` support difficulty analyses.
+ - `split` is the deterministic train/val/test assignment used by the pipeline.
+
+ ### HDSv2
+
+ Canonical table: `HDSv2.csv`
+
+ Columns:
+ `id, a, b, product, design_family, canonical_target_heuristic, canonical_target_margin, ot_cost, dd_cost, rc_cost, heuristic_definition_version, target_heuristic, ot_score, dd_score, rc_score, category, notes, digit_total, nonzero_total, complexity_c, split`
+
+ ### Trapsv2
+
+ Canonical table: `Trapsv2.csv`
+
+ Columns:
+ `id, a, b, product, trap_type, design_family, canonical_target_heuristic, heuristic_definition_version, target_heuristic, expected_error_type, notes, digit_total, nonzero_total, complexity_c`
+
+ ## Modality and Evaluation Notes
+
+ - The paired multimodal benchmark includes text, image, and audio assets.
+ - `HDSv2Images/` is intentionally a test-only image release for probe evaluation.
+ - `Trapsv2Images/` covers the full trap set.
+ - Studies on this dataset show that accuracy falls sharply as the arithmetic load $C$ (the product of the total and non-zero digit counts) grows.
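As a reviewer-side illustration of the arithmetic-load metric in the diff's final bullet, here is a minimal sketch. It assumes `digit_total` and `nonzero_total` count decimal digits across both operands, which matches the column names but is an inference from the card, not verified against the repository's pipeline:

```python
def digit_stats(a: int, b: int) -> tuple:
    """Digit-count difficulty features, inferred from the card's definitions."""
    digits = str(a) + str(b)
    digit_total = len(digits)                        # total decimal digits in a and b
    nonzero_total = sum(ch != "0" for ch in digits)  # non-zero digits only
    complexity_c = digit_total * nonzero_total       # arithmetic load C per the card
    return digit_total, nonzero_total, complexity_c

# The load grows quickly with operand length, consistent with the
# reported accuracy drop as C increases.
for a, b in [(7, 8), (23, 47), (4321, 8765)]:
    print((a, b), digit_stats(a, b))
```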