Open4bits/MiniMax-M2-GGUF
Tags: Text Generation · GGUF · open4bits · imatrix · conversational
License: modified-mit
README.md exists but content is empty.
Downloads last month: 60

GGUF
Model size: 229B params
Architecture: minimax-m2
Chat template: included
Hardware compatibility
2-bit: Q2_K (83.3 GB)
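The listed size is a useful sanity check on the quantization: dividing the Q2_K file size by the 229B parameter count gives the effective bits per weight. The sketch below assumes the page's "GB" means decimal gigabytes (10^9 bytes) and that the rounded parameter count is close to exact; both are approximations.

```python
# Effective bits per weight of the Q2_K file (rough estimate).
params = 229e9        # parameter count as listed on the page
file_bytes = 83.3e9   # Q2_K file size, assuming GB = 10^9 bytes
bits_per_weight = file_bytes * 8 / params
print(f"{bits_per_weight:.2f} bits/weight")  # prints "2.91 bits/weight"
```

The result lands near 2.9 rather than exactly 2.0 because "2-bit" K-quants in llama.cpp are mixed-precision: some tensors (e.g. embeddings and output layers) are kept at higher precision, and the per-block scales add overhead on top of the nominal bit width.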
Inference Providers
This model isn't deployed by any Inference Provider.
Model tree for Open4bits/MiniMax-M2-GGUF
Base model: MiniMaxAI/MiniMax-M2
Quantized versions: 45 (this model is one of them)