TorchSight Beam q8_0

A cybersecurity document classifier: a LoRA fine-tune of Qwen 3.5 27B, quantized to q8_0 (28GB GGUF).

The higher-quality quantization of the Beam family (92.7% category accuracy; 51.3% subcategory accuracy, matching f16). Requires a 48GB+ GPU or a 64GB Mac.

Benchmark Results (1000 samples)

Model                          Category Acc   Subcategory Acc
Beam q4_K_M                    95.1%          48.5%
Beam f16                       93.0%          51.3%
Beam q8_0                      92.7%          51.3%
Claude Opus 4                  79.9%          22.5%
Gemini 2.5 Pro                 75.4%          21.0%
Qwen 3.5 27B (no fine-tune)    43.3%          4.3%

Usage with Ollama

ollama pull torchsight/beam:q8_0

Or with the GGUF file:

# Modelfile
FROM ./beam-1.0-q8_0.gguf

TEMPLATE "{{ .Prompt }}"
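The Modelfile above can be extended and registered under a local name. A sketch, assuming default context settings; the PARAMETER value and the model name `beam` are illustrative, not settings shipped with the model:

```
# Modelfile
FROM ./beam-1.0-q8_0.gguf

TEMPLATE "{{ .Prompt }}"

# Deterministic decoding suits a classifier (illustrative choice).
PARAMETER temperature 0

# Build and run the local model:
#   ollama create beam -f Modelfile
#   ollama run beam "Scan this document: ..."
```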

Output Format

[
  {
    "category": "credentials",
    "subcategory": "credentials.api_key",
    "severity": "critical",
    "explanation": "AWS access key found: AKIA****VIW..."
  }
]

Categories: pii, credentials, financial, medical, confidential, malicious, safe
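Since the model emits a JSON array of findings, downstream code should validate it before acting on it. A minimal sketch, assuming the four keys shown above are always present and subcategories are namespaced under their parent category (e.g. `credentials.api_key`); the `validate_findings` helper is hypothetical, not part of the model's tooling:

```python
import json

# Categories the model is documented to emit.
CATEGORIES = {"pii", "credentials", "financial", "medical",
              "confidential", "malicious", "safe"}
REQUIRED_KEYS = {"category", "subcategory", "severity", "explanation"}

def validate_findings(raw: str) -> list[dict]:
    """Parse the model's JSON output and sanity-check each finding."""
    findings = json.loads(raw)
    for f in findings:
        missing = REQUIRED_KEYS - f.keys()
        if missing:
            raise ValueError(f"missing keys: {missing}")
        if f["category"] not in CATEGORIES:
            raise ValueError(f"unknown category: {f['category']}")
        # Subcategories are namespaced under their parent category.
        if not f["subcategory"].startswith(f["category"] + "."):
            raise ValueError(f"subcategory mismatch: {f['subcategory']}")
    return findings

sample = '''[
  {"category": "credentials",
   "subcategory": "credentials.api_key",
   "severity": "critical",
   "explanation": "AWS access key found"}
]'''
findings = validate_findings(sample)
```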

Training

  • Base: Qwen 3.5 27B (dense)
  • Method: LoRA (r=128, alpha=256)
  • Data: 74K balanced samples from 18+ sources
  • Epochs: 5
  • GPU: H100 80GB PCIe
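For a sense of scale, a back-of-envelope count of the trainable parameters LoRA adds at r=128: each adapted weight matrix gains two low-rank factors, A (r × d_in) and B (d_out × r), and the update is scaled by alpha / r. The hidden size, layer count, and target projections below are illustrative assumptions, not Qwen 3.5 27B's actual configuration:

```python
# Back-of-envelope LoRA trainable-parameter estimate.
r = 128          # LoRA rank (from the training setup above)
alpha = 256      # LoRA alpha (from the training setup above)
hidden = 4096    # ASSUMED hidden size
layers = 48      # ASSUMED transformer layer count
targets = 4      # ASSUMED adapted projections per layer (q, k, v, o)

# Each square target matrix gains A (r x hidden) and B (hidden x r).
params_per_matrix = 2 * r * hidden
total = params_per_matrix * targets * layers
scaling = alpha / r   # effective update is scaled by alpha / r
```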

License

Apache 2.0
