Instructions for using amd/MiniMax-M2.1-MXFP4 with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use amd/MiniMax-M2.1-MXFP4 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="amd/MiniMax-M2.1-MXFP4", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("amd/MiniMax-M2.1-MXFP4", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("amd/MiniMax-M2.1-MXFP4", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use amd/MiniMax-M2.1-MXFP4 with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "amd/MiniMax-M2.1-MXFP4"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "amd/MiniMax-M2.1-MXFP4",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
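Since the vLLM server exposes an OpenAI-compatible API, it can also be called from Python. A minimal sketch using the openai client package (an extra dependency, not required by vLLM itself), assuming the server above is running on localhost:8000:

from openai import OpenAI

# Point the client at the local vLLM server; no real API key is needed.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="amd/MiniMax-M2.1-MXFP4",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)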
- SGLang
How to use amd/MiniMax-M2.1-MXFP4 with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "amd/MiniMax-M2.1-MXFP4" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "amd/MiniMax-M2.1-MXFP4",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'

Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "amd/MiniMax-M2.1-MXFP4" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "amd/MiniMax-M2.1-MXFP4",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
- Docker Model Runner
How to use amd/MiniMax-M2.1-MXFP4 with Docker Model Runner:
docker model run hf.co/amd/MiniMax-M2.1-MXFP4
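Docker Model Runner also accepts a one-shot prompt on the command line (omit the prompt to get an interactive chat session); for example:

docker model run hf.co/amd/MiniMax-M2.1-MXFP4 "What is the capital of France?"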
Model Overview
- Model Architecture: MiniMaxM2ForCausalLM
- Input: Text
- Output: Text
- Supported Hardware Microarchitecture: AMD MI300, MI350/MI355
- ROCm: 7.0
- PyTorch: 2.8.0
- Transformers: 4.57.1
- Operating System(s): Linux
- Inference Engine: SGLang/vLLM
- Model Optimizer: AMD-Quark (v0.11)
- Weight quantization: OCP MXFP4, Static
- Activation quantization: OCP MXFP4, Dynamic
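To confirm your environment matches the stack above, a quick check from Python (on ROCm builds, torch.version.hip reports the ROCm/HIP build string):

import torch
import transformers

print(torch.__version__)         # expected: 2.8.0
print(torch.version.hip)         # ROCm/HIP build string on ROCm builds of PyTorch
print(transformers.__version__)  # expected: 4.57.1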
Model Quantization
The model was quantized from QuixiAI/MiniMax-M2.1-bf16 using AMD-Quark. Both weights and activations are quantized to OCP MXFP4 (weights statically, activations dynamically).
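MXFP4 is the OCP Microscaling FP4 format: tensors are split into blocks of 32 elements, each block shares one power-of-two (E8M0) scale, and each element is stored as a 4-bit FP4 (E2M1) value with representable magnitudes {0, 0.5, 1, 1.5, 2, 3, 4, 6}. A toy numpy sketch of the per-block round-to-nearest idea (illustrative only; AMD-Quark's actual kernels and rounding differ):

import numpy as np

# Magnitudes representable in FP4 E2M1, the MXFP4 element format.
E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def fake_quantize_mxfp4_block(x):
    """Quantize-dequantize one 32-element MXFP4 block:
    a shared power-of-two scale plus a nearest E2M1 value per element."""
    assert x.size == 32, "MXFP4 uses blocks of 32 elements"
    amax = np.abs(x).max()
    if amax == 0.0:
        return np.zeros_like(x)
    # Shared E8M0 scale: 2^(floor(log2(max|x|)) - emax), where emax = 2
    # is E2M1's largest exponent (its max value is 1.5 * 2^2 = 6).
    scale = 2.0 ** (np.floor(np.log2(amax)) - 2)
    scaled = x / scale
    # Round each magnitude to the nearest representable E2M1 value
    # (magnitudes above 6 clamp to 6).
    nearest = E2M1[np.abs(np.abs(scaled)[:, None] - E2M1[None, :]).argmin(axis=1)]
    return np.sign(scaled) * nearest * scale

block = np.random.randn(32)
print(np.abs(block - fake_quantize_mxfp4_block(block)).max())  # worst-case block error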
Quantization scripts:
cd Quark/examples/torch/language_modeling/llm_ptq/
export exclude_layers="lm_head *block_sparse_moe.gate* *self_attn*"
python3 quantize_quark.py --model_dir $MODEL_DIR \
--quant_scheme mxfp4 \
--num_calib_data 128 \
--exclude_layers $exclude_layers \
--skip_evaluation \
--multi_gpu \
--trust_remote_code \
--model_export hf_format \
--output_dir $output_dir
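As the exclude_layers pattern above indicates, the lm_head, the MoE router gates (*block_sparse_moe.gate*), and the self-attention layers (*self_attn*) are left unquantized in their original precision.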
For further details or issues, please refer to the AMD-Quark documentation or contact the respective developers.
Evaluation
The model was evaluated on the GSM8K benchmark using the vLLM framework.
Accuracy
| Benchmark | QuixiAI/MiniMax-M2.1-bf16 | amd/MiniMax-M2.1-MXFP4 (this model) | Recovery |
|---|---|---|---|
| gsm8k (flexible-extract) | 0.9356 | 0.9348 | 99.91% |
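Recovery is the quantized score as a fraction of the BF16 baseline: 0.9348 / 0.9356 ≈ 0.9991, i.e. 99.91%.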
Reproduction
The GSM8K results were obtained with the vLLM framework, using the rocm/vllm-dev:nightly Docker image; vLLM is installed inside the container.
Preparation inside the container
# Clone the vLLM repo (provides the GSM8K evaluation script used below)
git clone https://github.com/vllm-project/vllm.git
cd vllm
git checkout v0.13.0
cd ..
Launching server
VLLM_ROCM_USE_AITER=1 \
VLLM_DISABLE_COMPILE_CACHE=1 \
vllm serve "$MODEL" \
--tensor-parallel-size 4 \
--trust-remote-code \
--max-model-len 32768 \
--port 8899
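Before running the evaluation, you can optionally verify the server is up by listing the served models through the OpenAI-compatible endpoint:

curl http://127.0.0.1:8899/v1/models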
Evaluating the model in a new terminal
python vllm/tests/evals/gsm8k/gsm8k_eval.py --host http://127.0.0.1 --port 8899 --num-questions 1000 --save-results logs
License
Modifications Copyright (c) 2026 Advanced Micro Devices, Inc. All rights reserved.