Datasets:
Update dataset card: add configs, fix languages, improve documentation (#1)
(commit 3cced8a21dad75c64525eb5d40290aa0769a10aa)
README.md
CHANGED
@@ -1,128 +1,146 @@
---
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
language:
- en
- ur
- am
- zh
tags:
- code
- multilingual
- legesher
- transpilation
- tiny-aya-expedition
- language-decoded
pretty_name: Language Decoded Data
size_categories:
- 10K<n<100K
dataset_info:
  config_name: condition-1-en
  features: …
  splits:
  - name: train
    num_bytes: 516073703
    num_examples: 49500
  - name: validation
    num_bytes: 57341522
    num_examples: 5500
  download_size: 221522346
  dataset_size: 573415225
configs:
- config_name: condition-1-en
  data_files:
  - split: train
    path: data/condition-1-en/train-*
  - split: validation
    path: data/condition-1-en/validation-*
---

# Language Decoded | Multilingual Code Dataset

## Research Question

> Does fine-tuning on non-English code (Python with translated keywords) improve multilingual reasoning as much as English code does?

Prior work ([Aryabumi et al., 2024](https://arxiv.org/abs/2408.10914)) …

## Dataset

This …

| `multilingual-code-am/` | Condition 3b | Python transpiled to Amharic keywords via Legesher |
| `multilingual-code-zh/` | Condition 3c | Python transpiled to Chinese keywords via Legesher |
| `multilingual-text/` | Condition 4 | Non-code multilingual text (control) |

```python
from datasets import load_dataset
# …
```

## …

```python
# …
如果 元素 > 5:
    打印(元素)
```

## …

- **Transpilation tool**: [Legesher v0.6.0+](https://github.com/Legesher/legesher)

## …

## Citation

@@ -132,10 +150,16 @@ Models fine-tuned on these conditions are evaluated on:

```bibtex
…
author={Madison Edgar and Saad Bazaz and Rafay Mustafa and Sarah Jawaid and Rashik Shahjahan and Khojasteh Mirza and Sohaib Bazaz},
year={2026},
publisher={Hugging Face},
url={https://huggingface.co/datasets/
}
```

## License

Apache 2.0
---
language:
- en
- zh
- es
- ur
license: apache-2.0
task_categories:
- text-generation
tags:
- code
- multilingual
- legesher
- transpilation
- tiny-aya-expedition
- language-decoded
pretty_name: Language Decoded Data
size_categories:
- 10K<n<100K
configs:
- config_name: condition-1-en
  data_files:
  - split: train
    path: data/condition-1-en/train-*.parquet
  - split: validation
    path: data/condition-1-en/validation-*.parquet
- config_name: condition-2-ur
  data_files:
  - split: train
    path: data/condition-2-ur/train-*.parquet
  - split: validation
    path: data/condition-2-ur/validation-*.parquet
- config_name: condition-2-zh
  data_files:
  - split: train
    path: data/condition-2-zh/train-*.parquet
  - split: validation
    path: data/condition-2-zh/validation-*.parquet
- config_name: condition-2-es
  data_files:
  - split: train
    path: data/condition-2-es/train-*.parquet
  - split: validation
    path: data/condition-2-es/validation-*.parquet
dataset_info:
  features:
  - name: code
    dtype: string
  - name: code_en
    dtype: string
  - name: language
    dtype: string
  - name: file_path
    dtype: string
  - name: license
    dtype: string
  - name: token_count
    dtype: int64
---

# Language Decoded | Multilingual Code Dataset

Multilingual Python code datasets for the **Language Decoded** project (part of [Cohere's Tiny Aya Expedition](https://aya.for.ai)), investigating whether code's reasoning benefit for language models is **language-dependent** or **structure-dependent**.

## Research Question

> Does fine-tuning on non-English code (Python with translated keywords) improve multilingual reasoning as much as English code does?

Prior work ([Aryabumi et al., 2024, "To Code or Not to Code"](https://arxiv.org/abs/2408.10914)) demonstrated that including English code in pre-training data improves downstream reasoning performance by approximately 8%. However, that study tested only English code. This dataset enables the natural follow-up: does the reasoning benefit come from the _structure_ of code, or from the _language_ of its keywords?

## Dataset Description

This dataset provides filtered, quality-controlled Python source code in four configurations: the original English and three keyword-swapped variants (Chinese, Spanish, Urdu). The source data is drawn from [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup) (Python subset), filtered for quality using the following criteria:

- AST-valid Python only (must parse without errors)
- Permissive licenses only (MIT, Apache-2.0, BSD, etc.)
- 10 to 1,000 lines of code
- Minimum 21 GitHub stars
- No autogenerated files
- SHA-256 deduplication
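The filtering criteria above can be sketched as a single predicate. This is illustrative only: the function name, the license set, and the `stars` field are our assumptions (star counts are not part of the released schema), not the project's actual pipeline code.

```python
import ast
import hashlib

MIN_LINES, MAX_LINES = 10, 1000   # 10 to 1,000 lines of code
MIN_STARS = 21                    # minimum GitHub stars
PERMISSIVE = {"mit", "apache-2.0", "bsd-2-clause", "bsd-3-clause"}

seen_hashes = set()  # exact-content SHA-256 deduplication


def keep(example: dict) -> bool:
    """Return True if a candidate file passes every filter."""
    code = example["code"]
    # AST-valid Python only: must parse without errors
    try:
        ast.parse(code)
    except SyntaxError:
        return False
    # Line-count window
    if not MIN_LINES <= len(code.splitlines()) <= MAX_LINES:
        return False
    # Permissive license and star threshold ("stars" is a hypothetical field)
    if example["license"].lower() not in PERMISSIVE:
        return False
    if example.get("stars", 0) < MIN_STARS:
        return False
    # SHA-256 deduplication: drop files whose exact content was seen before
    digest = hashlib.sha256(code.encode("utf-8")).hexdigest()
    if digest in seen_hashes:
        return False
    seen_hashes.add(digest)
    return True
```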

Keyword-swapped variants are produced using [Legesher](https://github.com/legesher/legesher) v0.7.3, which translates Python reserved words (37 keywords, 72 builtins, 66 exceptions) into the target language while preserving code structure and semantics.
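As a rough illustration of what keyword swapping does (not Legesher's implementation; the tiny mapping below is our example, while Legesher's real tables cover 37 keywords, 72 builtins, and 66 exceptions per language):

```python
import io
import tokenize

# Illustrative English-to-Chinese mapping for a handful of reserved words
EN_TO_ZH = {"for": "对于", "in": "在", "if": "如果", "print": "打印"}


def swap_keywords(source: str, table: dict) -> str:
    """Swap NAME tokens found in `table`, leaving code structure untouched."""
    result = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME and tok.string in table:
            result.append((tok.type, table[tok.string]))
        else:
            result.append((tok.type, tok.string))
    # untokenize's compatibility mode rebuilds source from (type, string) pairs
    return tokenize.untokenize(result)


swapped = swap_keywords("for x in data:\n    if x > 5:\n        print(x)\n", EN_TO_ZH)
```

Because only NAME tokens are rewritten, identifiers, literals, operators, and indentation survive unchanged, which is the sense in which the transpilation preserves structure and semantics.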

## Available Configs

| Config | Condition | Language | Description |
| --- | --- | --- | --- |
| `condition-1-en` | Condition 1 (control) | English | Unmodified filtered Python from The Stack Dedup |
| `condition-2-ur` | Condition 2 | Urdu | Keyword-swapped Python: 37 keywords, 72 builtins, 66 exceptions translated via Legesher v0.7.3 |
| `condition-2-zh` | Condition 2 | Chinese | Keyword-swapped Python, same transpilation method |
| `condition-2-es` | Condition 2 | Spanish | Keyword-swapped Python, same transpilation method |

## Schema

| Column | Type | Description |
| --- | --- | --- |
| `code` | string | Python source code. For condition-2 configs this is the transpiled (keyword-swapped) version; for condition-1 it is the original English source. |
| `code_en` | string | Original English Python source code. Identical to `code` for `condition-1-en`. |
| `language` | string | ISO 639-1 language code: `en`, `ur`, `zh`, or `es`. |
| `file_path` | string | Original file path in The Stack Dedup. |
| `license` | string | SPDX license identifier for the source file. |
| `token_count` | int64 | Token count computed using the CohereLabs/tiny-aya-base tokenizer. |

## Experimental Conditions

The Language Decoded experiment uses a ladder of six conditions to isolate the mechanism behind code's reasoning benefit. This dataset currently provides data for conditions 1 and 2:

| Condition | Name | Purpose |
| --- | --- | --- |
| Baseline | No fine-tuning | Establishes the performance floor |
| Condition 1 | English code | Tests whether code fine-tuning helps at all (replicates Aryabumi et al.) |
| Condition 2 | Keyword-swapped code | Tests whether the _language_ of keywords matters for the reasoning benefit |
| Conditions 3 to 6 | (planned) | Additional controls not yet included in this dataset |

## Usage

```python
from datasets import load_dataset

# Load English code (control)
ds = load_dataset("legesher/language-decoded-data", "condition-1-en")

# Load a keyword-swapped variant
ds = load_dataset("legesher/language-decoded-data", "condition-2-ur")
ds = load_dataset("legesher/language-decoded-data", "condition-2-zh")
ds = load_dataset("legesher/language-decoded-data", "condition-2-es")

# Access splits
train = ds["train"]
val = ds["validation"]
```

## Technical Details

| Parameter | Value |
| --- | --- |
| Source dataset | [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup) (Python subset) |
| Transpilation tool | [Legesher](https://github.com/legesher/legesher) v0.7.3 (legesher-core, legesher-i18n) |
| Tokenizer | CohereLabs/tiny-aya-base |
| Base model | [CohereLabs/tiny-aya-base](https://huggingface.co/CohereLabs/tiny-aya-base) (3.35B params) |
| Train/validation split | 90% / 10% (seed 42) |
| File format | Parquet (snappy compression) |
| Filtering criteria | AST-valid, permissive licenses, 10 to 1,000 lines, min 21 GitHub stars, no autogenerated files, SHA-256 deduplication |

## Citation

```bibtex
…
  author={Madison Edgar and Saad Bazaz and Rafay Mustafa and Sarah Jawaid and Rashik Shahjahan and Khojasteh Mirza and Sohaib Bazaz},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/legesher/language-decoded-data}
}
```

## Links

- [Legesher on GitHub](https://github.com/legesher/legesher)
- [Tiny Aya Expedition](https://aya.for.ai)
- [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup)

## License

Apache 2.0