# gemma-4-e2b-it-OptiQ-4bit

Optimized for Apple Silicon with mlx-optiq — sensitivity-aware mixed-precision quantization, reusable at inference, fine-tuning, and serving time.

This is a mixed-precision quantized version of google/gemma-4-e2b-it in MLX format. Instead of quantizing every layer uniformly to 4 bits, OptiQ measures each layer's sensitivity via KL divergence on calibration data and assigns per-layer bit-widths (sensitive layers at 8-bit, the rest at 4-bit) while hitting the same average bits-per-weight. Same size, higher quality.
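OptiQ's exact selection procedure isn't documented here, but the idea can be sketched as a greedy promotion by sensitivity: start every layer at 4-bit and promote the most KL-sensitive layers to 8-bit until the average-bits budget is spent. A minimal, hypothetical illustration (assumes equal-sized layers; not mlx-optiq's actual code):

```python
def assign_bits(sensitivities, target_avg_bits, low=4, high=8):
    """Greedy mixed-precision assignment (hypothetical sketch):
    everything starts at `low` bits; the most KL-sensitive layers are
    promoted to `high` bits until the average-bits budget is exhausted.
    Assumes all layers are the same size."""
    n = len(sensitivities)
    # How many layers the budget lets us promote to `high` bits.
    budget = int(n * (target_avg_bits - low) / (high - low))
    # Most sensitive layers first.
    order = sorted(range(n), key=lambda i: sensitivities[i], reverse=True)
    bits = [low] * n
    for i in order[:budget]:
        bits[i] = high
    return bits

# Toy example: 8 layers, target 4.5 average bits -> one layer promoted.
kl = [0.02, 0.31, 0.05, 0.11, 0.01, 0.09, 0.04, 0.03]
print(assign_bits(kl, 4.5))  # → [4, 8, 4, 4, 4, 4, 4, 4]
```

With real layers of different sizes the budget would be tracked in parameters rather than layer counts, which is why the 8-bit/4-bit split below doesn't have to match a simple per-layer average.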

The optiq_metadata.json sidecar ships in the repo; it's what mlx-optiq reads to drive sensitivity-aware LoRA fine-tuning, mixed-precision KV serving, and hot-swap adapter routing.

## Quantization Details

| Property | Value |
|---|---|
| Target BPW | 4.5 |
| Achieved BPW | 4.50 |
| Layers at 8-bit (sensitive) | 112 |
| Layers at 4-bit (robust) | 414 |
| Total quantized layers | 526 |
| Group size | 64 |
| `model_type` | `gemma4_text` |
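Note that "Achieved BPW" is weighted by parameter count, not layer count: 112 of 526 layers at 8-bit would average well above 4.5 bits if all layers were the same size, so the 8-bit layers here must be smaller than average. A sketch of the weighted calculation (toy numbers, not this model's actual layer sizes):

```python
def achieved_bpw(layers):
    """Parameter-weighted average bits-per-weight over quantized layers.
    `layers` is a list of (num_params, bits) pairs."""
    total_bits = sum(p * b for p, b in layers)
    total_params = sum(p for p, _ in layers)
    return total_bits / total_params

# Toy example: small sensitive layers at 8-bit, large robust layers at 4-bit.
layers = [(1_000, 8)] * 2 + [(10_000, 4)] * 3
print(achieved_bpw(layers))  # → 4.25
```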

## Usage

### Basic (works with stock mlx-lm)

```shell
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/gemma-4-e2b-it-OptiQ-4bit")
response = generate(
    model, tokenizer,
    prompt="Explain quantum computing in simple terms.",
    max_tokens=200,
)
print(response)
```

### Unlock the full stack with mlx-optiq

Installing mlx-optiq turns this model from a static checkpoint into a deployment-ready base:

```shell
pip install mlx-optiq
```

**Mixed-precision KV-cache serving** — +40–62% decode speedup at 64k context on Qwen3.5 2B/4B/9B vs fp16 KV cache on an M3 Max:

```shell
# One-time per-layer KV sensitivity pass
optiq kv-cache mlx-community/gemma-4-e2b-it-OptiQ-4bit --target-bits 4.5 -o ./kv_cache

# OpenAI-compatible server on :8080
optiq serve \
    --kv-config ./kv_cache/kv_config.json \
    --model mlx-community/gemma-4-e2b-it-OptiQ-4bit \
    --max-tokens 32768 --temp 0.6 --top-p 0.95
```
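The CLI's internals aren't shown here, but the underlying technique — group-wise low-bit quantization of cached keys/values — can be sketched in NumPy. This is a generic illustration, not mlx-optiq's implementation; the group size of 64 is borrowed from the weight-quantization table above:

```python
import numpy as np

def quantize_kv(x, bits=4, group_size=64):
    """Symmetric group-wise quantization of a flat KV tensor slice.
    Each group of `group_size` values shares one scale."""
    qmax = 2 ** (bits - 1) - 1                       # e.g. 7 for 4-bit
    groups = x.reshape(-1, group_size)
    scales = np.abs(groups).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0                        # avoid divide-by-zero
    q = np.clip(np.round(groups / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def dequantize_kv(q, scales):
    return (q.astype(np.float32) * scales).reshape(-1)

rng = np.random.default_rng(0)
x = rng.standard_normal(256).astype(np.float32)
q, s = quantize_kv(x, bits=4)
err = np.abs(dequantize_kv(q, s) - x).max()
print(err)  # coarse 4-bit reconstruction, but error bounded by half a scale step
```

A mixed-precision KV scheme would simply pick `bits` per layer from a sensitivity pass, exactly as the weights do.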

**Sensitivity-aware LoRA fine-tuning** — layers OptiQ kept at 8-bit (more sensitive) get 2× the adapter rank of layers kept at 4-bit, within the same overall adapter budget:

```shell
optiq lora train mlx-community/gemma-4-e2b-it-OptiQ-4bit \
    --data ./my_data \
    --rank 8 --rank-scaling by_bits \
    --iters 1000 -o ./my_adapter
```
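The `by_bits` rank scaling described above could look like this internally — a hypothetical sketch (function and layer names are illustrative, not mlx-optiq's API):

```python
def lora_rank_for_layer(bits, base_rank=8):
    """Sensitivity-aware rank scaling (hypothetical): layers the quantizer
    kept at 8-bit (more sensitive) get double the LoRA rank of 4-bit layers."""
    return base_rank * 2 if bits == 8 else base_rank

# Illustrative per-layer bit-widths, e.g. as read from a quantization sidecar.
layer_bits = {"layers.0.attn.q_proj": 8, "layers.0.mlp.gate_proj": 4}
ranks = {name: lora_rank_for_layer(b) for name, b in layer_bits.items()}
print(ranks)  # → {'layers.0.attn.q_proj': 16, 'layers.0.mlp.gate_proj': 8}
```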

**Hot-swap adapters** — mount N adapters on one base model and switch per request without reloading it (adapters are referenced by HF repo id or local path, and auto-downloaded when remote):

```shell
optiq serve \
    --model mlx-community/gemma-4-e2b-it-OptiQ-4bit \
    --adapter ./my_adapter
```

Full documentation: mlx-optiq.pages.dev

## Benchmarks

GSM8K (200 samples, 3-shot chain-of-thought):

| Model | GSM8K Accuracy |
|---|---|
| This model (OptiQ mixed, 4.5 BPW) | see Results page |
| Uniform 4-bit baseline | see Results page |

See mlx-optiq.pages.dev/results for full methodology and per-model numbers.


## License

Apache 2.0 (inherits from base model).
