Distilling 100B+ Models 40x Faster with TRL 📝: TRL distillation for 100B+ teachers, 40x faster
Everything You Need to Know about Knowledge Distillation (article by Kseniase, Mar 6, 2025)
FineWeb: decanting the web for the finest text data at scale 🍷: Explore and download the FineWeb web-text dataset
The Ultra-Scale Playbook 🌌: The ultimate guide to training LLMs on large GPU clusters