# Voxtral-Mini-3B-2507-RotorQuant-MLX-8bit
An 8-bit MLX weight-quantized build of mistralai/Voxtral-Mini-3B-2507 with a RotorQuant KV-cache profile. This is the best-fidelity MLX variant, using rotational cache re-basis for robust streaming audio.
## Hardware compatibility
| Device | Approx. peak memory | Recommendation |
|---|---|---|
| Apple M4 Max 128 GB | ~3.9 GB | recommended; headroom for long context |
| Apple M3 Max 64 GB | ~3.9 GB | comfortable |
| Apple M2 Max 32 GB | ~3.6 GB | fits |
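As a quick pre-flight check, you can ask MLX how much unified memory the machine reports before loading the model. A minimal sketch, assuming the Metal backend; the 8 GB threshold is an illustrative cut-off based on the peak-memory figures above, not a hard requirement:

```python
import mlx.core as mx

# Query device info from MLX's Metal backend.
info = mx.metal.device_info()
total_gb = info["memory_size"] / 1024**3  # total unified memory in GiB

# Illustrative threshold only: this build itself peaks at roughly 3.6-3.9 GB.
if total_gb >= 8:
    print(f"{total_gb:.0f} GB unified memory: comfortable for this 8-bit build.")
else:
    print(f"{total_gb:.0f} GB unified memory: expect tight headroom on long audio.")
```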
## Overview
- Base: `mistralai/Voxtral-Mini-3B-2507`, a 3B speech-understanding model
- Capabilities: transcription, speech translation, audio QA
- Weight precision: 8-bit (group-wise)
- KV-cache profile: RotorQuant (rotational online re-basis)
- Approx. on-disk size: ~3 GB
- Runtime: MLX on Apple Silicon
## Quickstart
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("majentik/Voxtral-Mini-3B-2507-RotorQuant-MLX-8bit")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": [{"type": "audio", "path": "meeting.wav"},
                                  {"type": "text", "text": "Transcribe and diarize."}]}],
    add_generation_prompt=True,
)

print(generate(model, tokenizer, prompt=prompt, max_tokens=512))
```
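Because this profile is aimed at streaming audio, you may want tokens as they are produced rather than a single blocking call. A sketch using mlx_lm's `stream_generate`, reusing `model`, `tokenizer`, and `prompt` from above; in recent mlx-lm releases each yielded item exposes the newly generated text as `.text`:

```python
from mlx_lm import stream_generate

# Print the transcript incrementally as tokens are decoded.
for response in stream_generate(model, tokenizer, prompt=prompt, max_tokens=512):
    print(response.text, end="", flush=True)
print()
```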
## Model specs
| Field | Value |
|---|---|
| Parameters | 3B |
| Weight bits | 8 |
| Group size | 64 |
| Cache profile | RotorQuant |
| Size on disk | ~3 GB |
| Target hardware | Apple Silicon (M1/M2/M3/M4) |
| License | Apache 2.0 |
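For reference, "8-bit, group size 64" means each row of a weight matrix is split into groups of 64 values, and each group stores its own scale and offset alongside the 8-bit codes. A minimal NumPy sketch of that affine scheme; mlx_lm's kernels pack the result differently, so this is illustrative only:

```python
import numpy as np

def quantize_groupwise(w: np.ndarray, bits: int = 8, group_size: int = 64):
    """Affine per-group quantization: each group of 64 values gets its own scale/offset."""
    levels = 2**bits - 1
    groups = w.reshape(-1, group_size)
    w_min = groups.min(axis=1, keepdims=True)
    w_max = groups.max(axis=1, keepdims=True)
    scale = (w_max - w_min) / levels
    scale = np.where(scale == 0, 1.0, scale)   # guard groups with constant values
    q = np.round((groups - w_min) / scale).astype(np.uint8)
    return q, scale, w_min

def dequantize_groupwise(q, scale, w_min, shape):
    return (q.astype(np.float32) * scale + w_min).reshape(shape)

w = np.random.randn(256, 128).astype(np.float32)
q, scale, offset = quantize_groupwise(w)
w_hat = dequantize_groupwise(q, scale, offset, w.shape)
print("max abs reconstruction error:", np.abs(w - w_hat).max())
```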
## RotorQuant vs TurboQuant
| | RotorQuant | TurboQuant |
|---|---|---|
| Strategy | Rotational online re-basis | Per-head static calibration |
| Memory reduction | ~4x on KV-cache | ~3.5x on KV-cache |
| Best for | Streaming, code-switching | Batch transcription |
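The exact RotorQuant recipe is not spelled out here; the general idea behind "rotational online re-basis" is to multiply cached keys/values by an orthogonal matrix before low-bit quantization so outlier channels are spread across dimensions, then undo the rotation on dequantization. The sketch below illustrates that idea only; the random QR rotation, function names, and per-token symmetric scaling are assumptions, not the shipped implementation:

```python
import numpy as np

def random_rotation(dim: int, seed: int = 0) -> np.ndarray:
    """Orthogonal matrix from a QR decomposition; stands in for the (unspecified) RotorQuant basis."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    return q.astype(np.float32)

def rebase_and_quantize(kv: np.ndarray, rot: np.ndarray, bits: int = 8):
    """Rotate the head dimension, then quantize per token with a symmetric scale."""
    rotated = kv @ rot                                    # spread outliers across channels
    scale = np.abs(rotated).max(axis=-1, keepdims=True) / (2**(bits - 1) - 1)
    scale = np.where(scale == 0, 1.0, scale)
    q = np.round(rotated / scale).astype(np.int8)
    return q, scale

def dequantize_and_unbase(q, scale, rot):
    return (q.astype(np.float32) * scale) @ rot.T         # undo the rotation with its transpose

# Toy cache: 32 cached tokens, head dimension 64.
keys = np.random.randn(32, 64).astype(np.float32)
rot = random_rotation(64)
q, scale = rebase_and_quantize(keys, rot)
print("reconstruction error:", np.abs(dequantize_and_unbase(q, scale, rot) - keys).max())
```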
## See also
- Base model: [mistralai/Voxtral-Mini-3B-2507](https://huggingface.co/mistralai/Voxtral-Mini-3B-2507)