Training data: HuggingFaceFW/fineweb-edu
A 768M-parameter Hierarchical Recurrent Memory (HRM) language model trained on high-quality web text from FineWeb-Edu. It replaces traditional attention with Mamba2 state-space layers, enabling efficient long-range sequence modeling.
CMBA (Causal Mamba-based Architecture) implements a hierarchical processing structure (a hedged sketch of this layout follows the Mamba2 settings below):
Model Dimensions:
- d_model: 768
- n_heads: 12 (for compatibility, not used in Mamba)
- d_ff: 3072
- H_layers: 12 (high-level hierarchy)
- L_layers: 12 (low-level processing)
Mamba2 Settings:
- d_state: 128
- expand: 2
- headdim: 64
- d_conv: 4
- ngroups: 1
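Read together, the hierarchy and the Mamba2 settings map onto two stacks of state-space blocks. Below is a minimal sketch using the `mamba-ssm` package's `Mamba2` module with the hyperparameters listed above; the class name `HierarchicalMambaSketch`, the residual wiring, and the final norm are illustrative assumptions, not the repository's actual `HRMText1` implementation (which also adds halting; see Training below).

```python
# Minimal sketch, NOT the repo's HRMText1: two residual stacks of Mamba2
# blocks using the hyperparameters listed above. Requires the mamba-ssm
# package and a CUDA device.
import torch
import torch.nn as nn
from mamba_ssm import Mamba2

class HierarchicalMambaSketch(nn.Module):  # hypothetical name
    def __init__(self, d_model=768, h_layers=12, l_layers=12):
        super().__init__()
        ssm_kwargs = dict(d_state=128, d_conv=4, expand=2, headdim=64, ngroups=1)
        # L-level stack: fine-grained, per-token processing.
        self.low = nn.ModuleList(Mamba2(d_model, **ssm_kwargs) for _ in range(l_layers))
        # H-level stack: the higher level of the hierarchy.
        self.high = nn.ModuleList(Mamba2(d_model, **ssm_kwargs) for _ in range(h_layers))
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):  # x: (batch, seq_len, d_model)
        for block in self.low:
            x = x + block(x)  # residual connection around each Mamba2 block
        for block in self.high:
            x = x + block(x)
        return self.norm(x)
```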
Training:
- Max halt steps: 8
- Block size: 1024
- Batch size: 32 (effective)
- Learning rate: 2e-4 → 1e-6 (decay shape sketched after this list)
- Weight decay: 0.1
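The card gives only the endpoints of the learning-rate schedule (2e-4 down to 1e-6), the weight decay, and the effective batch size. In the sketch below, the optimizer choice (AdamW), the cosine decay shape, and the total step count are all assumptions chosen as common defaults:

```python
# Hedged sketch of the LR decay 2e-4 -> 1e-6. AdamW, cosine annealing,
# and total_steps are assumptions; the card states only the endpoints
# and weight decay 0.1.
import torch

params = [torch.nn.Parameter(torch.zeros(768))]  # stand-in for model params
optimizer = torch.optim.AdamW(params, lr=2e-4, weight_decay=0.1)
total_steps = 100_000  # assumption: not stated on the card
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=total_steps, eta_min=1e-6
)
for _ in range(total_steps):
    optimizer.step()   # forward/backward would precede this in real training
    scheduler.step()
```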
Tokenizer: t5-small (T5 SentencePiece)

```python
from transformers import T5Tokenizer
from hrm_text1_modeling import HRMText1  # custom modeling file shipped with the repo

# Load the T5 tokenizer and the pretrained CMBA weights
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = HRMText1.from_pretrained("Viharikvs/CMBA-768M-FineWeb")

# Generate text from a prompt
input_ids = tokenizer("Once upon a time", return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
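The snippet above uses greedy decoding; assuming `HRMText1` inherits from `PreTrainedModel` (as `from_pretrained` and `generate` imply), the standard sampling arguments such as `do_sample=True`, `temperature`, and `top_p` should also work with `generate`.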
If you use this model, please cite:
```bibtex
@misc{cmba-768m-fineweb,
  author    = {Vihari},
  title     = {CMBA-768M-FineWeb: Hierarchical Mamba-based Language Model},
  year      = {2025},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/Viharikvs/CMBA-768M-FineWeb}
}
```
License: Apache 2.0
Base model: google-t5/t5-small