unsloth/Qwen3-Coder-Next-GGUF · Text Generation · 80B · Updated about 20 hours ago · 281k downloads · 300 likes
Unsloth Dynamic 2.0 Quants · Collection · New 2.0 version of our Dynamic GGUF + Quants. Dynamic 2.0 achieves superior accuracy & SOTA quantization performance. · 71 items · Updated 1 day ago · 349 upvotes
Post: We collaborated with Hugging Face to enable you to train MoE models 12× faster with 35% less VRAM via our new Triton kernels (no accuracy loss). Train gpt-oss locally on 12.8GB VRAM with our free notebooks: https://unsloth.ai/docs/new/faster-moe
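The 12.8GB VRAM figure in the post above corresponds to 4-bit (QLoRA-style) fine-tuning of gpt-oss through Unsloth. Below is a minimal sketch of what such a run looks like, assuming the standard `FastLanguageModel` API from the `unsloth` package together with TRL's `SFTTrainer`; the checkpoint name, dataset, and hyperparameters are illustrative assumptions, and the linked notebooks remain the authoritative reference.

```python
# Minimal sketch: 4-bit LoRA fine-tuning of gpt-oss with Unsloth.
# Checkpoint name, dataset, and hyperparameters are illustrative assumptions,
# not taken from the linked notebooks.
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

# Load the model in 4-bit so weights fit in a small VRAM budget.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gpt-oss-20b",  # assumed checkpoint name
    max_seq_length=1024,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these small matrices receive gradients.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Any chat-formatted dataset works; this one is only an example.
dataset = load_dataset("HuggingFaceH4/no_robots", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```

The small per-device batch size with gradient accumulation is what keeps peak memory low; the MoE speedups described in the post come from Unsloth's Triton kernels and apply transparently once the model is loaded through `FastLanguageModel`.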