Datasets: Natooka/parameter-golf-sp-tokenizers

Languages: English
Size: 10B<n<100B
Tags: sentencepiece, tokenizer, fineweb, parameter-golf, bpe, tokenized-corpus
License:
parameter-golf-sp-tokenizers
26.6 GB
  • 1 contributor
History: 6 commits
Natooka
README: document tokenized shards (1 val + 133 train = 13.3B tokens) alongside .model
e9d696d verified 8 days ago
  • shards
    add SP16384 tokenized shards (13.3B tokens, uint16: 1 val + 133 train) 8 days ago
  • .gitattributes
    2.5 kB
    initial commit 8 days ago
  • README.md
    3.21 kB
    README: document tokenized shards (1 val + 133 train = 13.3B tokens) alongside .model 8 days ago
  • fineweb_16384_bpe.model
    455 kB
    add fineweb_16384_bpe.model (SP16384 BPE, docs_selected rev 9bb295dd, sp_seed=1337, 5M train docs, SP 0.2.1, shuffle_input_sentence=False) 8 days ago
  • fineweb_16384_bpe.vocab
    185 kB
    add fineweb_16384_bpe.vocab (SP16384 BPE, docs_selected rev 9bb295dd, sp_seed=1337, 5M train docs, SP 0.2.1, shuffle_input_sentence=False) 8 days ago
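A minimal sketch of reading one of these shards, under stated assumptions: the commit messages say the shards store uint16 token IDs (a 16384-entry vocab fits in 16 bits, and 13.3B tokens at 2 bytes each is 26.6 GB, matching the listed repository size), but the shard filenames and the flat-binary layout assumed here are guesses — inspect a real file under `shards/` before relying on this.

```python
import numpy as np
import os
import tempfile

def read_shard(path: str) -> np.ndarray:
    # Memory-map the raw uint16 token stream so large shards are not
    # pulled into RAM all at once. Layout (no header) is an assumption.
    return np.memmap(path, dtype=np.uint16, mode="r")

# Round-trip demo with a tiny synthetic shard standing in for a real one.
tokens = np.array([0, 1, 16383, 42], dtype=np.uint16)
with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as f:
    f.write(tokens.tobytes())
    path = f.name

shard = read_shard(path)
ids = np.array(shard)          # copy out so the mmap can be released
print(ids.dtype, ids.shape)    # uint16 (4,)
del shard
os.unlink(path)
```

The companion `fineweb_16384_bpe.model` file is a standard SentencePiece model, loadable with `sentencepiece.SentencePieceProcessor(model_file="fineweb_16384_bpe.model")`; its `decode()` method would turn IDs like these back into text.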