Space: Low-bit Quantized Open LLM Leaderboard 🏆 — track, rank, and evaluate open LLMs and chatbots.
Article: GGML and llama.cpp join HF to ensure the long-term progress of Local AI (5 days ago)
Post: You can now run MiniMax-2.5 locally! 🚀
At 230B parameters, MiniMax-2.5 is the strongest LLM under 700B params, delivering SOTA agentic coding and chat. Run the Dynamic 3/4-bit quant on a 128GB Mac for 20 tokens/s.
Guide: https://unsloth.ai/docs/models/minimax-2.5
GGUF: unsloth/MiniMax-M2.5-GGUF