Qwen3.5-397B-A17B-RotorQuant

RotorQuant KV cache compression for Qwen/Qwen3.5-397B-A17B.

This is a documentation repository that explains how to combine Qwen3.5-397B-A17B's weights with RotorQuant inference-time KV cache compression. No weights are stored here; use the base model directly and apply RotorQuant via the Python package or the llama.cpp fork.

Hardware compatibility

| Device | VRAM / RAM | Recommendation |
| --- | --- | --- |
| Any host that runs the base model | Baseline + runtime savings | RotorQuant/TurboQuant is a KV-cache runtime modifier; pair with any weight variant |

What is this?

KV cache compression reduces the memory used by the attention cache during inference. Unlike weight quantization (which is baked into the GGUF/MLX file), KV cache compression is applied at runtime, so the same base weights can be used with or without compression.

| Technique | Where it's applied | Savings |
| --- | --- | --- |
| Weight quantization (GGUF/MLX/AWQ) | Baked into model file | Reduces disk + weight memory |
| RotorQuant KV cache | At inference time | Reduces attention memory (critical for long context) |

Both can be combined for maximum efficiency.
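A back-of-the-envelope calculation shows why attention memory dominates at long context. The dimensions below are illustrative placeholders, not the actual Qwen3.5-397B-A17B configuration (which is not spelled out here):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bits_per_elem):
    # K and V each store seq_len x n_kv_heads x head_dim values per layer,
    # hence the factor of 2.
    return 2 * n_layers * seq_len * n_kv_heads * head_dim * bits_per_elem // 8

# Hypothetical dimensions for illustration only.
bf16 = kv_cache_bytes(n_layers=60, n_kv_heads=8, head_dim=128,
                      seq_len=262_144, bits_per_elem=16)
int4 = kv_cache_bytes(n_layers=60, n_kv_heads=8, head_dim=128,
                      seq_len=262_144, bits_per_elem=4)

print(f"BF16 cache:  {bf16 / 2**30:.1f} GiB")
print(f"4-bit cache: {int4 / 2**30:.1f} GiB")
```

Even with these modest placeholder dimensions, a full 256K-token BF16 cache runs to tens of GiB; dropping from 16 to 4 bits per element is an exact 4x reduction, independent of the model's real dimensions.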

Quickstart

Option A: Python / transformers

Install the rotorquant package:

pip install rotorquant

Then use it with the base model:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from rotorquant import IsoQuantCache

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3.5-397B-A17B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3.5-397B-A17B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Apply RotorQuant to the KV cache
cache = IsoQuantCache(bits=4)  # or bits=2 for more aggressive compression

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    past_key_values=cache,
    use_cache=True,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))

Option B: llama.cpp / LM Studio / Ollama (with fork)

RotorQuant KV cache types (iso3) are not in upstream llama.cpp; they require building the llama-cpp-turboquant fork (see the ecosystem table below).

Once built:

llama-cli -m Qwen3.5-397B-A17B.gguf \
  --cache-type-k iso3 --cache-type-v iso3 \
  -ngl 99 -fa \
  -p "Hello"

For standard runtimes (LM Studio, Ollama, upstream llama.cpp), use conventional KV cache types (q8_0, q4_0). You lose the RotorQuant-specific benefits but keep GGUF weight quantization.
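For Ollama specifically, the q8_0 fallback is configured through environment variables on the server process. A sketch (in current Ollama builds, quantized KV cache also requires flash attention to be enabled):

```shell
# Enable a conventional quantized KV cache in stock Ollama (no RotorQuant).
OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 ollama serve
```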

Model Specifications

| Property | Value |
| --- | --- |
| Base Model | Qwen/Qwen3.5-397B-A17B |
| Architecture | Sparse MoE (17B active per token) |
| Parameters | 397B total, 17B active (MoE) |
| Context Length | 256K |
| BF16 Size | ~794 GB |
| Modalities | Text + Image (image-text-to-text) |
| License | apache-2.0 |

What is RotorQuant?

RotorQuant is a KV cache compression method based on Clifford algebra (Cl(3,0)) rotors, positioned as a faster, more parameter-efficient alternative to Google's TurboQuant. It uses lightweight block-diagonal rotations (independent 2D/4D rotations per pair/quartet of channels), achieving O(d) complexity instead of O(d log d) while remaining fully parallelisable with no inter-element dependencies.
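A minimal numpy sketch of the block-diagonal rotation idea described above. Everything here is illustrative: the one-angle-per-pair parameterisation and the symmetric round-to-nearest 4-bit quantizer are assumptions, not RotorQuant's actual training or encoding scheme.

```python
import numpy as np

def rotate_pairs(x, angles):
    # Independent 2D rotation of each (even, odd) channel pair: O(d) work,
    # no cross-pair dependencies, so every pair can run in parallel.
    x = x.reshape(-1, x.shape[-1] // 2, 2)
    c, s = np.cos(angles), np.sin(angles)
    out = np.empty_like(x)
    out[..., 0] = c * x[..., 0] - s * x[..., 1]
    out[..., 1] = s * x[..., 0] + c * x[..., 1]
    return out.reshape(-1, angles.size * 2)

def quantize(x, bits=4):
    # Symmetric round-to-nearest quantization per row (illustrative only).
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(x).max(axis=-1, keepdims=True) / levels
    q = np.clip(np.round(x / scale), -levels, levels)
    return q, scale

rng = np.random.default_rng(0)
d = 8
kv = rng.standard_normal((4, d)).astype(np.float32)
angles = rng.uniform(0, 2 * np.pi, d // 2)  # one angle per pair: d/2 params

rotated = rotate_pairs(kv, angles)
q, scale = quantize(rotated, bits=4)
# Decode: dequantize, then invert the rotation by negating the angles.
restored = rotate_pairs(q * scale, -angles)
err = np.abs(restored - kv).max()
```

The rotation is exactly invertible (rotating by `-angles` undoes it), so the only loss comes from the quantizer; the rotation's job in the real method is to reshape the value distribution so that loss is small.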

Benchmarks (from the RotorQuant repository, Llama 3.1 8B on RTX 5090; results vary by model and hardware):

  • Prefill: 3,822 tok/s (vs TurboQuant 722 tok/s)
  • Decode: 119 tok/s (vs TurboQuant 93 tok/s)
  • Perplexity: 6.91 (vs TurboQuant 7.07)
  • Parameters: 4 per rotor (vs TurboQuant 16,384)

Performance on Qwen3.5-397B-A17B will differ. Please open a discussion if you have independent results.

Current Ecosystem Support

| Runtime | RotorQuant Support | Notes |
| --- | --- | --- |
| Python transformers + rotorquant | ✅ Full | Drop-in cache class |
| llama.cpp upstream | ❌ Not merged | Use fork below |
| llama-cpp-turboquant fork | ✅ planar3, iso3 | GitHub |
| LM Studio | ❌ Requested | Use q8_0 as alternative |
| Ollama | ❌ Not supported | Use OLLAMA_KV_CACHE_TYPE=q8_0 |
| vLLM | ❌ Not supported | - |
| koboldcpp | ❌ Not supported | - |

Pre-quantized weight variants

If you want combined weight + KV cache compression, majentik hosts pre-quantized versions:
