user committed on Commit 29789ce · 1 Parent(s): b32f471
README.md ADDED
@@ -0,0 +1,123 @@
+ ---
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: main.jsonl.zst
+ - config_name: nvidia_domain
+   data_files:
+   - split: train
+     path: nvidia_domain/train.jsonl.zst
+   - split: validation
+     path: nvidia_domain/validation.jsonl.zst
+   - split: test
+     path: nvidia_domain/test.jsonl.zst
+ - config_name: doc_type_v1_primary
+   data_files:
+   - split: train
+     path: doc_type_v1_primary/train.jsonl.zst
+   - split: validation
+     path: doc_type_v1_primary/validation.jsonl.zst
+   - split: test
+     path: doc_type_v1_primary/test.jsonl.zst
+ - config_name: doc_type_v2_primary
+   data_files:
+   - split: train
+     path: doc_type_v2_primary/train.jsonl.zst
+   - split: validation
+     path: doc_type_v2_primary/validation.jsonl.zst
+   - split: test
+     path: doc_type_v2_primary/test.jsonl.zst
+ ---
+ # Multilingual Document Classification Dataset
+
+ This dataset contains **100,000 text passages** across **100 non-English languages** sourced from the [`agentlans/HuggingFaceFW-finetranslations-100-languages-sample`](https://huggingface.co/datasets/agentlans/HuggingFaceFW-finetranslations-100-languages-sample) collection.
+
+ Each original text passage is paired with its English translation and has been programmatically annotated with domain, writing genre, and educational classifications to facilitate cross-lingual classification and domain adaptation tasks.
+
+ ## Dataset Overview
+
+ - **Size:** 100,000 original text passages + 100,000 English translations.
+ - **Languages:** 100 non-English languages (original text) paired with English translations.
+ - **Primary Use Case:** Multilingual document classification, cross-lingual domain adaptation, and translation-based text evaluation.
+ - **Splits:** All subsets are split into **80% train**, **10% validation**, and **10% test** sets. The splits are stratified by the target labels to ensure identical class distributions across splits.
+
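The 80/10/10 stratified split described above can be sketched in plain Python. This is a minimal illustration under stated assumptions, not the script actually used to build the dataset; `stratified_split` and the toy `rows` are hypothetical names:

```python
import random
from collections import defaultdict

def stratified_split(rows, label_key, seed=0):
    """Split rows 80/10/10 per label so class distributions
    match across train/validation/test."""
    by_label = defaultdict(list)
    for row in rows:
        by_label[row[label_key]].append(row)
    rng = random.Random(seed)
    train, validation, test = [], [], []
    for group in by_label.values():
        rng.shuffle(group)
        n_train = int(len(group) * 0.8)
        n_val = int(len(group) * 0.1)
        train.extend(group[:n_train])
        validation.extend(group[n_train:n_train + n_val])
        test.extend(group[n_train + n_val:])
    return train, validation, test

# Toy data: 50 "News" rows and 50 "Food" rows.
rows = [{"text": f"t{i}", "label": "News" if i % 2 else "Food"}
        for i in range(100)]
train, val, test = stratified_split(rows, "label")
```

Because each label group is split independently, every split ends up with the same 50/50 label ratio as the full toy dataset.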
+ ### Subset Config Structure
+
+ The dataset contains subset configurations tailored for specific training objectives.
+ * In the **`main` config**, each original text is stored alongside its English translation within a single row.
+ * In **subset configs** (configurations filtered by a specific schema label), the original texts and their English translations are stored as separate rows, so models can be trained directly on either the target language or the translation.
+
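As a rough sketch of this row-layout difference, one `main`-config row can be expanded into two subset-config rows. The `to_subset_rows` helper and the `eng_Latn` code for translations are assumptions for illustration, not part of the dataset's documented tooling:

```python
def to_subset_rows(main_row, label_column):
    """Expand one main-config row into two subset-config rows:
    one for the original text and one for its English translation."""
    label = main_row[label_column]
    return [
        {"text": main_row["original"],
         "language": main_row["language"],
         "label": label},
        # Assumption: translations are tagged "eng_Latn".
        {"text": main_row["translated"],
         "language": "eng_Latn",
         "label": label},
    ]

row = {
    "original": "મધ્યપ્રદેશમાં ખેડૂતો પર અત્યાચારનો ગુજરાતમાં પણ વિરોધ થયો છે.",
    "translated": "In Gujarat too, there was opposition to the atrocities "
                  "against farmers in Madhya Pradesh.",
    "language": "guj_Gujr",
    "nvidia_domain": "News",
}
subset_rows = to_subset_rows(row, "nvidia_domain")
```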
+ ## Annotation & Classification Details
+
+ To generate granular metadata for domain, genre, and cognitive level, two text classifiers were applied to the **English translations**:
+
+ 1. **[`nvidia/domain-classifier`](https://huggingface.co/nvidia/domain-classifier)** – Extracts high-level topical domains.
+ 2. **[`EssentialAI/eai-distill-0.5b`](https://huggingface.co/EssentialAI/eai-distill-0.5b)** – Extracts genre, cognitive depth, and educational level. See the [EAI Taxonomy Schema](https://github.com/Essential-AI/eai-taxonomy#dataset-schema-documentation) for detailed definitions.
+
+ ### Key Classification Fields
+
+ | Column Name | Source Model | Description / Purpose |
+ | :--- | :--- | :--- |
+ | `nvidia_domain` | NVIDIA Domain Classifier | General topical categorization (e.g., News, Food & Drink). |
+ | `doc_type_v1_primary` | EAI Distill 0.5B | High-level document genre classification (V1). |
+ | `doc_type_v2_primary` | EAI Distill 0.5B | Refined, granular document type classification (V2). |
+
+ The columns `nvidia_domain`, `doc_type_v1_primary`, and `doc_type_v2_primary` are used as the target labels for creating subset configs.
+
+ ## Dataset Schema & Examples
+
+ ### 1. `main` Configuration Example
+
+ The `main` configuration contains both the original and translated text, as well as the complete suite of metadata extracted by the classifiers.
+
+ ```json
+ {
+   "id": "<urn:uuid:8f0799fb-7964-44e1-af9d-6565a1f85937>",
+   "translated": "In Gujarat too, there was opposition to the atrocities against farmers in Madhya Pradesh. Anger has spread in Gujarat over the incident in Madhya Pradesh. A protest demonstration was held by the Pradesh Congress in Ahmedabad...",
+   "original": "મધ્યપ્રદેશમાં ખેડૂતો પર અત્યાચારનો ગુજરાતમાં પણ વિરોધ થયો છે. ગુજરાતમાં પણ મધ્યપ્રદેશની ઘટનાને લઈ નારાજગી પ્રસરી છે...",
+   "language": "guj_Gujr",
+   "nvidia_domain": "News",
+   "bloom_cognitive_primary": "Understand",
+   "bloom_cognitive_secondary": "Evaluate",
+   "bloom_knowledge_primary": "Factual",
+   "bloom_knowledge_secondary": "Conceptual",
+   "doc_type_v1_primary": "News/Editorial",
+   "doc_type_v2_primary": "News Article",
+   "doc_type_v2_secondary": "Knowledge Article",
+   "educational_level_primary": "General",
+   "educational_level_secondary": "High School",
+   "extraction_artifacts_primary": "No Artifacts",
+   "fdc_primary": "320.954",
+   "fdc_secondary": "338.954",
+   "missing_content_primary": "No Missing Content",
+   "reasoning_depth_primary": "No Reasoning",
+   "reasoning_depth_secondary": "Basic",
+   "technical_correctness_primary": "N/A",
+   "technical_correctness_secondary": "Highly Correct"
+ }
+ ```
+
+ ### 2. Subset Configuration Example
+
+ The subset configurations are stripped down to the target `text`, its `language` identifier, and the specific classification `label` for the subset.
+
+ ```json
+ {
+   "text": "Oh sweet potato; kuinka ihana oletkaan!\nJa vielä kaunis väriltäsi...",
+   "language": "fin_Latn",
+   "label": "Food_and_Drink"
+ }
+ ```
+
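The `label` values appear to be filesystem-safe versions of the display labels shown earlier (e.g., `Food & Drink` becomes `Food_and_Drink`). A plausible normalization, inferred from this single example rather than documented by the dataset, is:

```python
import re

def normalize_label(label):
    """Turn a display label like 'Food & Drink' into a label-safe
    token like 'Food_and_Drink' (inferred, not documented)."""
    label = label.replace("&", "and")
    return re.sub(r"\s+", "_", label.strip())

normalize_label("Food & Drink")  # → "Food_and_Drink"
```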
+ ## Limitations
+
+ Users should keep the following limitations in mind when utilizing this dataset:
+
+ * **Source Translation Quality:** Since the source texts are derived from `HuggingFaceFW-finetranslations`, any artifacts, vocabulary choices, or grammatical inaccuracies in the underlying translations will carry over.
+ * **Language Distribution:** The dataset contains a uniform number of samples per language. As a result, high-resource languages (e.g., Mandarin Chinese) have the same number of rows as lower-resource languages (e.g., Assamese).
+ * **Class Imbalance:** Certain topical domains and document types are heavily over-represented compared to others. For instance, there are far more promotional news articles than niche categories such as culinary recipes.
+
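One common way to counter this class imbalance is inverse-frequency class weighting in the training loss. A minimal sketch, where the `class_weights` helper and the toy label counts are illustrative rather than dataset statistics:

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights: over-represented classes get
    weight < 1, rare classes get weight > 1."""
    counts = Counter(labels)
    total = len(labels)
    return {label: total / (len(counts) * n)
            for label, n in counts.items()}

# Toy imbalance: 90 "News" rows vs. 10 "Food_and_Drink" rows.
labels = ["News"] * 90 + ["Food_and_Drink"] * 10
weights = class_weights(labels)
```

These weights can be passed to most loss functions (e.g., a per-class weight tensor for cross-entropy) so the rare classes contribute proportionally more per example.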
+ ## License
+
+ This dataset is released under the **Open Data Commons Attribution License (ODC-BY)**, matching the terms of the source datasets.
doc_type_v1_primary/test.jsonl.zst ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9ac51c0fc5d45e9f826ed462afa5cefc1fe7c73ee4612b7c13a554de52fc2842
+ size 11080166
doc_type_v1_primary/train.jsonl.zst ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:919a15a8b8c1abcbd6d451a3849698e09fcb12d818d53338521b58070d269bf5
+ size 88711665
doc_type_v1_primary/validation.jsonl.zst ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:798818ca6b597140edf6b60908f4a5c9fa9dee787e0257c96612429e60bc3c1b
+ size 11069093
doc_type_v2_primary/test.jsonl.zst ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c278b72aab5ae9334cb582d23cb04beed6af73a61b901995ce534f05e6dc8a78
+ size 11273528
doc_type_v2_primary/train.jsonl.zst ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6a908a4db1c543f4e94bdabceeee644b739b365c3fef35e7a6b1eebad62ab475
+ size 88573402
doc_type_v2_primary/validation.jsonl.zst ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab6c538b6da697b5bc27f6ce5074a8ea791004a9ebd11a6b68e1da28a251719f
+ size 11085282
main.jsonl.zst ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a69287b31b35e23b9fe42cced82badaba6bfee328f8b425e41ae5b6ab4b48a73
+ size 114204764
nvidia_domain/test.jsonl.zst ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:77e52f8baa44013f42103c4ce099a50aed638913741d549140346880273b3866
+ size 11148329
nvidia_domain/train.jsonl.zst ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f2dafaffb753dbb8f83210f09454c42711f5887bb774ad5bbfea27399f2967de
+ size 88912992
nvidia_domain/validation.jsonl.zst ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ac22c7d34a479b45f050762ec5f28028ad775580dff05428687fb19dd431883f
+ size 10986701