# Voxtral-Mini-3B-2507-RotorQuant

RotorQuant KV cache compression for mistralai/Voxtral-Mini-3B-2507.

This is a documentation repository that explains how to combine Voxtral-Mini-3B-2507's weights with RotorQuant inference-time KV cache compression. No weights are stored here: use the base model directly and apply RotorQuant via the Python package or the llama.cpp fork.
## Hardware compatibility
| Device | VRAM / RAM | Recommendation |
|---|---|---|
| Any host that runs the base model | baseline + runtime savings | RotorQuant/TurboQuant is a KV-cache runtime modifier; pair with any weight variant |
## What is this?

KV cache compression reduces the memory used by the attention cache during inference. Unlike weight quantization (which is baked into the GGUF/MLX file), KV cache compression is applied at runtime, so the same base weights can be used with or without compression.
| Technique | Where it's applied | Savings |
|---|---|---|
| Weight quantization (GGUF/MLX/AWQ) | Baked into model file | Reduces disk + weight memory |
| RotorQuant KV cache | At inference time | Reduces attention memory (critical for long context) |
Both can be combined for maximum efficiency.
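To see why the KV cache matters at long context, here is a back-of-envelope sizing sketch. The layer/head/dimension numbers below are illustrative placeholders for a small Mistral-style model, not the actual Voxtral-Mini-3B-2507 config:

```python
# KV cache bytes = 2 (K and V) * layers * kv_heads * head_dim
#                  * seq_len * bits / 8.
# layers/kv_heads/head_dim here are placeholder values for illustration.
def kv_cache_bytes(seq_len, layers=32, kv_heads=8, head_dim=128, bits=16):
    return 2 * layers * kv_heads * head_dim * seq_len * bits // 8

ctx = 32_768  # full 32K context
print(f"bf16 cache:  {kv_cache_bytes(ctx, bits=16) / 2**30:.1f} GiB")  # → 4.0 GiB
print(f"4-bit cache: {kv_cache_bytes(ctx, bits=4) / 2**30:.1f} GiB")   # → 1.0 GiB
```

Under these placeholder numbers, a full-precision cache at 32K context rivals the size of the weights themselves, which is why runtime cache compression stacks usefully on top of weight quantization.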
## Quickstart

### Option A: Python / transformers

Install the `rotorquant` package:

```bash
pip install rotorquant
```

Then use it with the base model:
```python
import torch
from transformers import VoxtralForConditionalGeneration, AutoTokenizer
from rotorquant import IsoQuantCache

tokenizer = AutoTokenizer.from_pretrained("mistralai/Voxtral-Mini-3B-2507", trust_remote_code=True)
model = VoxtralForConditionalGeneration.from_pretrained(
    "mistralai/Voxtral-Mini-3B-2507",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Apply RotorQuant to the KV cache
cache = IsoQuantCache(bits=4)  # or bits=2 for more aggressive compression

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    past_key_values=cache,
    use_cache=True,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
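For intuition about what a 4-bit cache does to the stored tensors, here is a minimal sketch of group-wise absmax quantization. This is a generic illustration of the quantize/dequantize round trip such a cache performs, not the actual `rotorquant` kernels:

```python
import torch

torch.manual_seed(0)

# Illustrative group-wise absmax quantization: each group of 32 values shares
# one scale, and each value is rounded to a 4-bit integer in [-8, 7].
def quantize_q4(x, group=32):
    xg = x.reshape(-1, group)
    scale = xg.abs().amax(dim=1, keepdim=True) / 7.0  # map group absmax to int4 max
    q = torch.clamp(torch.round(xg / scale), -8, 7)
    return q.to(torch.int8), scale

def dequantize_q4(q, scale, shape):
    return (q.float() * scale).reshape(shape)

k = torch.randn(2, 8, 128)             # stand-in for one layer's key tensor
q, scale = quantize_q4(k)
k_hat = dequantize_q4(q, scale, k.shape)
print((k - k_hat).abs().max().item())  # small per-element reconstruction error
```

The rounding error is bounded by half a quantization step per element, which is why 4-bit caches typically cost only a small perplexity increase while quartering cache memory.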
## Model Specifications
| Property | Value |
|---|---|
| Base Model | mistralai/Voxtral-Mini-3B-2507 |
| Architecture | Mistral-based audio understanding (speech + text) |
| Parameters | 3B (audio + text) |
| Context Length | 32K |
| BF16 Size | ~6 GB |
| Modalities | Audio + Text |
| License | apache-2.0 |
## What is RotorQuant?

RotorQuant is a KV cache compression method based on Clifford algebra (Cl(3,0)) rotors, positioned as a faster, more parameter-efficient alternative to Google's TurboQuant. It uses lightweight block-diagonal rotations (independent 2D/4D rotations per pair/quartet), achieving O(d) complexity instead of O(d log d), and is fully parallelisable with no inter-element dependencies.
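The block-diagonal idea can be illustrated with plain 2×2 rotations over consecutive channel pairs. The pairing and angles below are illustrative placeholders, not RotorQuant's actual rotor parameterization:

```python
import torch

# Block-diagonal rotation: an independent 2x2 rotation on each consecutive
# pair of channels. One angle per pair, so the transform is O(d) and has no
# inter-element dependencies (each pair can be processed in parallel).
def rotate_pairs(x, angles):
    # x: (..., d) with d even; angles: (d // 2,)
    x0, x1 = x[..., 0::2], x[..., 1::2]
    c, s = torch.cos(angles), torch.sin(angles)
    y = torch.empty_like(x)
    y[..., 0::2] = c * x0 - s * x1
    y[..., 1::2] = s * x0 + c * x1
    return y

torch.manual_seed(0)
x = torch.randn(4, 128)
theta = torch.rand(64) * 3.14159
y = rotate_pairs(x, theta)
# Rotations are orthogonal, so norms are preserved exactly.
print(torch.allclose(x.norm(dim=-1), y.norm(dim=-1), atol=1e-5))  # → True
```

Because the transform is orthogonal, it reshapes the value distribution to be friendlier to quantization without losing information: applying `rotate_pairs` with `-theta` inverts it exactly.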
Benchmarks (from the RotorQuant repository, Llama 3.1 8B on an RTX 5090; results vary by model and hardware):
- Prefill: 3,822 tok/s (vs TurboQuant 722 tok/s)
- Decode: 119 tok/s (vs TurboQuant 93 tok/s)
- Perplexity: 6.91 (vs TurboQuant 7.07)
- Parameters: 4 per rotor (vs TurboQuant 16,384)
Performance on Voxtral-Mini-3B-2507 will differ; please open a discussion if you have independent results.
## Current Ecosystem Support

| Runtime | RotorQuant Support | Notes |
|---|---|---|
| Python `transformers` + `rotorquant` | ✅ Full | Drop-in cache class |
| llama.cpp upstream | ❌ Not merged | Use fork below |
| llama-cpp-turboquant fork | ✅ `planar3`, `iso3` | GitHub |
| LM Studio | ❌ Requested | Use `q8_0` KV cache as alternative |
| Ollama | ❌ Not supported | Use `OLLAMA_KV_CACHE_TYPE=q8_0` |
| vLLM | ❌ Not supported | - |
| koboldcpp | ❌ Not supported | - |
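For Ollama, the `q8_0` fallback from the table is enabled via environment variables rather than a cache class; a sketch based on Ollama's documented settings (flash attention must be on for quantized cache types):

```shell
# Enable Ollama's built-in 8-bit KV cache as a RotorQuant alternative.
export OLLAMA_FLASH_ATTENTION=1
export OLLAMA_KV_CACHE_TYPE=q8_0
ollama serve
```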
## Pre-quantized weight variants
If you want combined weight + KV cache compression, majentik hosts pre-quantized versions:
## See Also

- Base model: mistralai/Voxtral-Mini-3B-2507