---
configs:
- config_name: add_6digit
  data_files:
  - path: add_6digit/train.parquet
    split: train
  - path: add_6digit/val.parquet
    split: validation
  - path: add_6digit/eval_stratified.parquet
    split: test
- config_name: add_sub_6digit
  data_files:
  - path: add_sub_6digit/train.parquet
    split: train
  - path: add_sub_6digit/val.parquet
    split: validation
  - path: add_sub_6digit/eval_stratified.parquet
    split: test
- config_name: add_handcrafted
  data_files:
  - path: add_handcrafted/test.parquet
    split: test
- config_name: sub_handcrafted
  data_files:
  - path: sub_handcrafted/test.parquet
    split: test
language:
- en
license: apache-2.0
size_categories:
- 1M<n<10M
---

## Per-Digit Labels

### Addition

| Label | Name | Condition | Role |
|---|---|---|---|
| SA | Simple Add | `Dn + D'n < 10` | Simplest case |
| SC | Make Carry | `Dn + D'n >= 10` | Generates carry |
| SS | Sum is 9 | `Dn + D'n == 9` | Propagates carry if one arrives |
| UC | Use Carry | carry_in=1, sum != 9 | Consumes incoming carry |
| US | Use Sum-9 | carry_in=1, sum == 9 | Cascade: hardest case |

### Subtraction (x >= y)

| Label | Name | Condition | Role |
|---|---|---|---|
| MD | Base Diff | `Dn > D'n`, no borrow | Simplest case |
| MB | Make Borrow | `Dn < D'n` | Generates borrow |
| ME | Equal digits | `Dn == D'n` | Propagates borrow if one arrives |
| UB | Use Borrow | borrow_in=1, `Dn != D'n` | Consumes incoming borrow |
| UD | Use Equal | borrow_in=1, `Dn == D'n` | Cascade: hardest case |

## Complexity Classification ([Quirke Table 8](https://arxiv.org/abs/2402.02619))

Complexity = length of the longest carry/borrow cascade chain. Example: `555555 + 444448 = 1000003` is **S6**: the carry generated at D0 cascades through 5 consecutive sum-9 positions.

```
S0: no carries        ~10%
S1: isolated carries  ~50%
S2: cascade of 2      ~26%
S3: cascade of 3       ~9%
S4: cascade of 4       ~3%
S5: cascade of 5       ~1%
S6: cascade of 6     <0.5%
```

## Data Enrichment

**Addition:** Following Quirke et al., 60% of batches have 40% of digit positions forced to sum to 9, increasing carry-cascade frequency so the model sees enough S4-S6 cases.
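The cascade depth that defines the S0-S6 classes (and the statistic this enrichment boosts) can be computed directly from the digit pairs. A minimal sketch of the definition given above; the `cascade_depth` helper is illustrative and not part of the released dataset code:

```python
def cascade_depth(x: int, y: int, n_digits: int = 6) -> int:
    """Length of the longest carry cascade in x + y.

    A cascade starts at a digit position that generates a carry
    (base sum >= 10) and runs through consecutive higher positions
    whose base sum is exactly 9, since those positions propagate
    the carry. Sketch of the card's definition, not the dataset code.
    """
    xd = [int(c) for c in f"{x:0{n_digits}d}"][::-1]  # xd[0] = D0 (least significant)
    yd = [int(c) for c in f"{y:0{n_digits}d}"][::-1]
    best = 0
    for i in range(n_digits):
        if xd[i] + yd[i] >= 10:  # this position generates a carry
            length = 1
            j = i + 1
            # sum-9 positions extend the chain by propagating the carry
            while j < n_digits and xd[j] + yd[j] == 9:
                length += 1
                j += 1
            best = max(best, length)
    return best

print(cascade_depth(555555, 444448))  # 6 -> class S6, the example above
print(cascade_depth(111111, 222222))  # 0 -> class S0, no carries
```

Forcing digit positions to sum to 9 lengthens these chains, which is why enriched batches contain far more S4-S6 examples than uniformly sampled operands.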
**Subtraction:** 40% of digit positions are forced equal (`Dn == D'n`), creating borrow-propagation cascades (ME/UD). Without this, M3-M5 borrow cascades are extremely rare (M3 = 0.7%, M4 = 0.04% in unmodified data); with enrichment, M3 = 3.0% and M4 = 0.8%.

## Usage

```python
from datasets import load_dataset

ds = load_dataset("thoughtworks/arithmetic-sorl-data", data_dir="add_6digit")
print(ds["train"][0])
# {'tokens': [...], 'labels': ['SA', 'UC', 'US', ...],
#  'complexity': 'S3', 'cascade_depth': 3, ...}

# Stratified eval
eval_ds = load_dataset("thoughtworks/arithmetic-sorl-data",
                       data_dir="add_6digit",
                       data_files="eval_stratified.parquet")

# Addition + subtraction
ds_mixed = load_dataset("thoughtworks/arithmetic-sorl-data", data_dir="add_sub_6digit")
```

## Related

- **Reference paper:** Quirke et al., ["Understanding Addition and Subtraction in Transformers"](https://arxiv.org/abs/2402.02619) (2024)
- **Model checkpoints:** [thoughtworks/arithmetic-sorl](https://huggingface.co/thoughtworks/arithmetic-sorl)
- **Dashboard:** [thoughtworks/arithmetic-sorl-dashboard](https://huggingface.co/spaces/thoughtworks/arithmetic-sorl-dashboard)
- **Code:** [mod_gpt/arithmetic/](https://github.com/fangyuan-ksgk/mod_gpt/tree/amir/arithmetic/arithmetic)