---
license: gpl-2.0
task_categories:
- image-text-to-text
- audio-text-to-text
- text-generation
language:
- en
tags:
- mathematics
- arithmetic
- multimodal
---

# MultimodalMathBenchmarks

This repository contains the datasets for the paper [Multiplication in Multimodal LLMs: Computation with Text, Image, and Audio Inputs](https://huggingface.co/papers/2604.18203).

It covers the public benchmark datasets and their modality assets (text, images, and audio) used to evaluate the arithmetic capabilities of multimodal LLMs.

## Canonical Upload Manifest

| HF path | Local source | Count | Purpose |
| --- | --- | ---: | --- |
| `SharedMultimodalGrid.csv` | `SavedData/SharedMultimodalGrid.csv` | 10,000 rows | Canonical metadata table for the paired multimodal multiplication benchmark |
| `TextFiles/` | `SavedData/TextFiles/` | 10,000 `.txt` files | Text renderings for `mm_*` benchmark items |
| `Images/` | `SavedData/Images/` | 10,000 `.png` files | Image renderings for `mm_*` benchmark items |
| `AudioFiles/` | `SavedData/AudioFiles/` | 10,000 `.mp3` files | Audio renderings for `mm_*` benchmark items |
| `HDSv2.csv` | `SavedData/HDSv2.csv` | 1,000 rows | Canonical heuristic-disagreement probe set |
| `HDSv2Images/` | `SavedData/HDSv2Images/` | 144 `.png` files | Image renderings for the `HDSv2` test split only |
| `Trapsv2.csv` | `SavedData/Trapsv2.csv` | 30 rows | Canonical adversarial trap set |
| `Trapsv2Images/` | `SavedData/Trapsv2Images/` | 30 `.png` files | Image renderings for every trap item |

## Dataset Summary

The release contains three benchmark families:

1. `SharedMultimodalGrid.csv`
   - 10,000 shared multiplication problems paired across text, image, and audio.
   - Split counts: `train=7026`, `val=1416`, `test=1558`.
   - Each row uses an `mm_XXXXX` ID that maps directly to:
     - `TextFiles/mm_XXXXX.txt`
     - `Images/mm_XXXXX.png`
     - `AudioFiles/mm_XXXXX.mp3`

2. `HDSv2.csv`
   - 1,000 heuristic-disagreement problems for fingerprinting and probe-style evaluation.
   - Split counts: `train=701`, `val=155`, `test=144`.
   - `HDSv2Images/` contains exactly the 144 `test`-split item IDs from `HDSv2.csv`.

3. `Trapsv2.csv`
   - 30 adversarial trap problems designed to target heuristic-specific failures.
   - No split column; all rows are held-out trap items.
   - `Trapsv2Images/` contains one PNG for every row in `Trapsv2.csv`.
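Because the `mm_*` IDs map directly onto the three asset folders, pairing the modalities is a pure path construction. A minimal sketch of that mapping; the `root` parameter and the `asset_paths` helper name are illustrative assumptions (only the folder and file naming comes from the manifest above):

```python
from pathlib import Path

def asset_paths(item_id: str, root: str = ".") -> dict:
    """Map a SharedMultimodalGrid ID (e.g. 'mm_00001') to its three
    modality files, using the folder layout from the upload manifest.
    `root` is wherever the repository was downloaded locally."""
    base = Path(root)
    return {
        "text": base / "TextFiles" / f"{item_id}.txt",
        "image": base / "Images" / f"{item_id}.png",
        "audio": base / "AudioFiles" / f"{item_id}.mp3",
    }

paths = asset_paths("mm_00001")
print(paths["image"])  # Images/mm_00001.png
```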

## Dataset Structure

### Shared Multimodal Grid

Canonical table: `SharedMultimodalGrid.csv`

Columns:
`id, a, b, a_times_b, template_a, template_b, digits_a, digits_b, nonzero_a, nonzero_b, digit_total, nonzero_total, complexity_c, stratum_id, split`

Notes:
- `id` is the stable benchmark ID, e.g. `mm_00001`.
- `a`, `b`, and `a_times_b` are the multiplication operands and exact product.
- `digit_total`, `nonzero_total`, and `complexity_c` support difficulty analyses.
- `split` is the deterministic train/val/test assignment used by the pipeline.
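Since the partition is precomputed in the `split` column, reproducing it is a simple row filter. A sketch over inline stand-in rows that mimic the schema; the row values here are illustrative, not taken from the dataset:

```python
# Illustrative rows following the SharedMultimodalGrid.csv schema
# (only a subset of columns is shown; values are made up).
rows = [
    {"id": "mm_00001", "a": 12, "b": 34, "a_times_b": 408, "split": "train"},
    {"id": "mm_00002", "a": 56, "b": 78, "a_times_b": 4368, "split": "test"},
    {"id": "mm_00003", "a": 90, "b": 11, "a_times_b": 990, "split": "val"},
]

# Deterministic partition: select rows by their precomputed split label.
test_rows = [r for r in rows if r["split"] == "test"]
print([r["id"] for r in test_rows])  # ['mm_00002']
```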

### HDSv2

Canonical table: `HDSv2.csv`

Columns:
`id, a, b, product, design_family, canonical_target_heuristic, canonical_target_margin, ot_cost, dd_cost, rc_cost, heuristic_definition_version, target_heuristic, ot_score, dd_score, rc_score, category, notes, digit_total, nonzero_total, complexity_c, split`

### Trapsv2

Canonical table: `Trapsv2.csv`

Columns:
`id, a, b, product, trap_type, design_family, canonical_target_heuristic, heuristic_definition_version, target_heuristic, expected_error_type, notes, digit_total, nonzero_total, complexity_c`

## Modality and Evaluation Notes

- The paired multimodal benchmark includes text, image, and audio assets.
- `HDSv2Images/` is intentionally a test-only image release for probe evaluation.
- `Trapsv2Images/` covers the full trap set.
- Studies on this dataset show that accuracy falls sharply as the arithmetic load $C$ (the product of the total digit count and the non-zero digit count across both operands) grows.
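The arithmetic-load measure can be recomputed directly from the operands. A sketch under the assumption that the digit counts are taken over both operands' decimal digits, matching the definition above; the function name is illustrative:

```python
def arithmetic_load(a: int, b: int) -> int:
    """Compute C = (total digit count) * (non-zero digit count),
    counted over the decimal digits of both operands a and b."""
    digits = str(a) + str(b)
    digit_total = len(digits)
    nonzero_total = sum(1 for d in digits if d != "0")
    return digit_total * nonzero_total

print(arithmetic_load(123, 456))  # 6 digits, 6 non-zero -> 36
print(arithmetic_load(100, 200))  # 6 digits, 2 non-zero -> 12
```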