## Overview
RuvLTRA Medium strikes a sweet spot between capability and resource usage. It is ideal for desktop applications, development workstations, and moderate-scale deployments.
## Model Card
| Property | Value |
|---|---|
| Parameters | 1.1 Billion |
| Quantization | Q4_K_M |
| Context | 8,192 tokens |
| Size | ~669 MB |
| Min RAM | 2 GB |
| Recommended RAM | 4 GB |
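As a rough sanity check on the table above, the file size follows from the parameter count and the average bit width of the quantization format. The 4.85 bits/weight figure below is an approximate average for Q4_K_M's mixed-precision layout, not a number from this card:

```python
# Rough size estimate for a 1.1B-parameter model in Q4_K_M.
# 4.85 bits/weight is an approximate average for this mixed-precision
# format; the exact figure depends on the per-layer type mix.
params = 1.1e9
bits_per_weight = 4.85  # assumption, not from the model card

size_mb = params * bits_per_weight / 8 / 1e6
print(f"~{size_mb:.0f} MB")  # ~667 MB, close to the ~669 MB listed above
```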
## 🚀 Quick Start
```bash
# Download
wget https://huggingface.co/ruv/ruvltra-medium/resolve/main/ruvltra-1.1b-q4_k_m.gguf

# Run inference
./llama-cli -m ruvltra-1.1b-q4_k_m.gguf \
  -p "Explain quantum computing in simple terms:" \
  -n 512 -c 8192
```
## 💡 Use Cases
- Development: Code assistance and generation
- Writing: Content creation and editing
- Analysis: Document summarization
- Chat: Conversational AI applications
## 🔧 Integration
### Rust
```rust
use ruvllm::hub::ModelDownloader;

let path = ModelDownloader::new()
    .download("ruv/ruvltra-medium", None)
    .await?;
```
### Python
```python
from llama_cpp import Llama
from huggingface_hub import hf_hub_download

model_path = hf_hub_download("ruv/ruvltra-medium", "ruvltra-1.1b-q4_k_m.gguf")
llm = Llama(model_path=model_path, n_ctx=8192)
```
### OpenAI-Compatible Server
```bash
python -m llama_cpp.server \
  --model ruvltra-1.1b-q4_k_m.gguf \
  --host 0.0.0.0 --port 8000
```
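Once the server is running, any OpenAI-style client can talk to it. A minimal stdlib-only sketch of the request shape, assuming the server exposes the standard `/v1/chat/completions` route (the `model` field and prompt are illustrative):

```python
import json
import urllib.request

# Build an OpenAI-style chat completion request for the local server.
payload = {
    "model": "ruvltra-1.1b-q4_k_m.gguf",
    "messages": [
        {"role": "user", "content": "Explain quantum computing in simple terms."}
    ],
    "max_tokens": 256,
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# With the server up, urllib.request.urlopen(req) returns the completion JSON.
```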
## Performance
| Platform | Tokens/sec |
|---|---|
| M2 Pro (Metal) | 65 |
| RTX 4080 (CUDA) | 95 |
| i9-13900K (CPU) | 25 |
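For intuition, the throughput figures above translate directly into wall-clock time for the 512-token generation from the Quick Start:

```python
# Seconds to generate the Quick Start's 512 tokens at each measured rate.
throughput = {"M2 Pro (Metal)": 65, "RTX 4080 (CUDA)": 95, "i9-13900K (CPU)": 25}
n_tokens = 512

for platform, tok_s in throughput.items():
    print(f"{platform}: {n_tokens / tok_s:.1f}s")
# M2 Pro ~7.9s, RTX 4080 ~5.4s, i9-13900K ~20.5s
```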
License: Apache 2.0 | GitHub: ruvnet/ruvector
## ⚡ TurboQuant KV-Cache Compression
RuvLTRA models are fully compatible with TurboQuant, a 2-4 bit KV-cache quantization scheme that cuts KV-cache memory by 8-32x with under 0.5% quality loss at 4 bits.
| Quantization | Compression | Quality Loss | Best For |
|---|---|---|---|
| 3-bit | 10.7x | <1% | Recommended: best balance |
| 4-bit | 8x | <0.5% | High quality, long context |
| 2-bit | 32x | ~2% | Edge devices, max savings |
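To see what these ratios mean in bytes, here is a back-of-the-envelope KV-cache estimate at the model's full 8,192-token context. The layer/head dimensions are assumptions for a TinyLlama-class 1.1B architecture (22 layers, 4 KV heads of dim 64), not specs from this card:

```python
# Back-of-the-envelope KV-cache size at full 8,192-token context.
# Architecture numbers are assumptions for a TinyLlama-class 1.1B model.
n_layers, n_kv_heads, head_dim = 22, 4, 64
ctx = 8192

# K and V per token per layer, 2 bytes per fp16 value.
fp16_bytes = 2 * n_layers * n_kv_heads * head_dim * 2 * ctx
turbo_3bit = fp16_bytes / 10.7  # 3-bit TurboQuant ratio from the table above

print(f"fp16 KV cache:    {fp16_bytes / 1e6:.0f} MB")   # ~185 MB
print(f"3-bit TurboQuant: {turbo_3bit / 1e6:.0f} MB")   # ~17 MB
```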
### Usage with RuvLLM
```bash
cargo add ruvllm              # Rust
npm install @ruvector/ruvllm  # Node.js
```
```rust
use ruvllm::quantize::turbo_quant::{TurboQuantCompressor, TurboQuantConfig, TurboQuantBits};

let config = TurboQuantConfig {
    bits: TurboQuantBits::Bit3_5, // 10.7x compression
    use_qjl: true,
    ..Default::default()
};

let compressor = TurboQuantCompressor::new(config)?;
let compressed = compressor.compress_batch(&kv_vectors)?;
let scores = compressor.inner_product_batch_optimized(&query, &compressed)?;
```
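The Rust API above handles the real codec; the core idea of low-bit KV quantization can be sketched in a few lines. This toy symmetric per-vector quantizer is illustrative only, not TurboQuant's actual algorithm (which additionally uses QJL-style sketching), and its per-element error is much larger than the end-to-end quality loss the table reports:

```python
import math
import random

def quantize(vec, bits=3):
    # Signed b-bit integer range, e.g. -4..3 for 3 bits.
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    scale = max(abs(x) for x in vec) / abs(lo)
    q = [min(hi, max(lo, round(x / scale))) for x in vec]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

random.seed(0)
kv = [random.uniform(-1, 1) for _ in range(256)]  # stand-in KV vector
q, scale = quantize(kv, bits=3)
rec = dequantize(q, scale)

rms = math.sqrt(sum((a - b) ** 2 for a, b in zip(kv, rec)) / len(kv))
rms_signal = math.sqrt(sum(a * a for a in kv) / len(kv))
print(f"relative RMS error at 3 bits: {rms / rms_signal:.1%}")
```

Each element is stored as a 3-bit integer plus one shared scale, which is where the ~10x memory reduction comes from; attention scores are then computed against the dequantized (or directly quantized) cache.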
## v2.1.0 Ecosystem
- Hybrid Search: Sparse + dense vectors with RRF fusion (20-49% better retrieval)
- Graph RAG: Knowledge graph + community detection for multi-hop queries
- DiskANN: Billion-scale SSD-backed ANN with <10ms latency
- FlashAttention-3: IO-aware tiled attention, O(N) memory
- MLA: Multi-Head Latent Attention (~93% KV-cache compression)
- Mamba SSM: Linear-time selective state space models
- Speculative Decoding: 2-3x generation speedup
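As one concrete example from the list above, the 2-3x speculative-decoding speedup follows from a simple expectation: with draft length k and per-token acceptance rate α, each target-model pass yields on average (1 − α^(k+1)) / (1 − α) accepted tokens. The α and k values below are illustrative assumptions, not measurements from this card:

```python
# Expected tokens produced per target-model forward pass in
# speculative decoding, given draft length k and acceptance rate alpha.
def expected_tokens(alpha, k):
    return (1 - alpha ** (k + 1)) / (1 - alpha)

# Illustrative values: a draft of 4 tokens, 80% per-token acceptance.
alpha, k = 0.8, 4
print(f"{expected_tokens(alpha, k):.2f} tokens per pass")  # 3.36
```

In that regime the target model runs roughly 3.4x fewer forward passes, which is where speedups in the 2-3x range come from once draft-model overhead is subtracted.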
RuVector GitHub | ruvllm crate | @ruvector/ruvllm npm
## Benchmarks (NVIDIA L4 GPU, 24 GB VRAM)
| Metric | Result |
|---|---|
| Inference Speed | 62.6 tok/s |
| Model Load Time | 1.1s |
| Parameters | 3B |
| TurboQuant KV (3-bit) | 10.7x compression, <1% PPL loss |
| TurboQuant KV (4-bit) | 8x compression, <0.5% PPL loss |
Benchmarked on a Google Cloud L4 GPU via the ruvltra-calibration Cloud Run job (2026-03-28).