---
language:
  - en
  - ar
pretty_name: DeepLatent bilingual tokenizer-data
size_categories:
  - 1M<n<10M
task_categories:
  - text-generation
tags:
  - arabic
  - english
  - bilingual
  - tokenizer-training
---

# DeepLatent bilingual tokenizer-data

Balanced English/Arabic corpus for tokenizer training. The two languages carry essentially the same number of Unicode codepoints, so a BPE/WordPiece tokenizer trained on this corpus sees equal representation of both languages by character content.
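As a minimal sketch of what "balanced by character content" means, the check below sums Unicode codepoints per language label over rows shaped like this dataset's schema (`text`, `language`). In Python, `len()` on a `str` counts codepoints, which is the unit used for the balance figures in this card; the toy rows here are illustrative, not from the corpus.

```python
from collections import Counter

def codepoints_per_language(rows):
    """Sum Unicode codepoints of `text` grouped by `language`."""
    totals = Counter()
    for row in rows:
        totals[row["language"]] += len(row["text"])
    return totals

# Toy rows; the real corpus balances ~12.38B chars per language.
rows = [
    {"text": "hello world", "language": "English"},
    {"text": "مرحبا بالعالم", "language": "Arabic"},
]
totals = codepoints_per_language(rows)
```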

## Composition

| Slice | Source | Filter | Rows | Chars |
|---|---|---|---|---|
| English | almaghrabima/deeplatent-hq-merged-dedup-token-counts | GlotLID `language == "English"` | 3,980,035 | 12,377,830,142 |
| Arabic | AdaMLLab/AraMix-HQ | `mmbert_score >= 0.2784` and `language != "English"` | 2,380,570 | 12,380,271,596 |
| **Total** | | | 6,360,605 | 24,758,101,738 |

The Arabic threshold `mmbert_score >= 0.2784` was chosen so that the Arabic character count matches the English character count, balancing the two languages. This keeps roughly the top ~7% highest-scoring Arabic content in AraMix-HQ.
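One way to pick such a threshold can be sketched as follows: sort candidate rows by score descending and accumulate character counts until the target (here, the English total) is reached; the last score admitted is the threshold. This is a hypothetical helper under that assumption, not the release's actual selection code.

```python
def balancing_threshold(rows, target_chars):
    """Return (threshold, kept_chars) such that rows with
    score >= threshold accumulate roughly `target_chars` characters.

    rows: iterable of (score, n_chars); higher score = higher quality.
    """
    kept = 0
    threshold = None
    for score, n_chars in sorted(rows, key=lambda r: r[0], reverse=True):
        if kept >= target_chars:
            break
        kept += n_chars
        threshold = score
    return threshold, kept

# Toy scores/char counts; the real target was ~12.38B English chars.
rows = [(0.9, 400), (0.5, 300), (0.27, 300), (0.1, 500)]
threshold, kept = balancing_threshold(rows, target_chars=700)
```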

Language labels come from GlotLID (cis-lmu/glotlid), run on the first 2,000 characters of each document. The HQ corpus is fully labeled; the AraMix-HQ source was only partially labeled (~38.5% of shards), so unlabeled Arabic rows in this merged release default to "Arabic". This is a reasonable fallback because the labeled AraMix-HQ shards were 96.4% Arabic.
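The labeling rule above (classify the first 2,000 characters, fall back to "Arabic" when no label is available) can be sketched as below. The `classify` callable is a stand-in for the real GlotLID model; the stub here simply returns `None` for non-ASCII input to simulate an unlabeled row.

```python
def label_document(text, classify, default="Arabic"):
    """Label a document by classifying its first 2000 characters;
    fall back to `default` when the classifier returns no label."""
    label = classify(text[:2000])
    return label if label is not None else default

# Toy stub: "recognizes" only ASCII text, returning None otherwise
# to simulate rows the real labeling pass left unlabeled.
def stub_classify(snippet):
    return "English" if snippet.isascii() else None

en_label = label_document("plain English text", stub_classify)
ar_label = label_document("نص عربي", stub_classify)
```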

## Schema

| Column | Type | Description |
|---|---|---|
| `text` | string | Document text |
| `source` | string | Original sub-source (e.g. `ar`, `en`, `lightonai/ArabicWeb24`) |
| `language` | string | GlotLID label: `"English"`, `"Arabic"`, or raw `lang_Script` |
| `origin` | string | `"hq"` or `"aramix"` |

## File layout

  - `en_00000.parquet` through `en_00222.parquet`: 223 English shards
  - `ar_00000.parquet` through `ar_00178.parquet`: 179 Arabic shards

Each file is zstd-compressed parquet.
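The full shard filename lists implied by the layout above can be generated directly, assuming the zero-padded five-digit indices shown:

```python
# Enumerate all shard filenames for each language slice.
en_shards = [f"en_{i:05d}.parquet" for i in range(223)]
ar_shards = [f"ar_{i:05d}.parquet" for i in range(179)]
```

These files can be read with any parquet reader that supports zstd compression; pyarrow handles it out of the box.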