# Qwen2.5-0.5B-linux-aggressive

🎯 LINUX-optimized | πŸ“¦ Aggressive pruning | ⚑ 10% weights pruned

This model is an aggressively pruned version of Qwen/Qwen2.5-0.5B.

## Performance Comparison

| Category  | Original | Pruned | Change   |
|-----------|----------|--------|----------|
| Python    | 0.0%     | 0.0%   | →        |
| HTML      | 0.0%     | 0.0%   | →        |
| Trivia    | 100.0%   | 83.3%  | ↓ 16.7%  |
| Math      | 66.7%    | 66.7%  | →        |
| Reasoning | 66.7%    | 66.7%  | →        |
| Medical   | 66.7%    | 50.0%  | ↓ 16.7%  |
| Linux ⭐   | 33.3%    | 16.7%  | ↓ 16.7%  |
| Writing   | 33.3%    | 33.3%  | →        |

⭐ = target specialization category.

Average: 45.8% → 39.6% (−6.2 percentage points)

Linux Retention: 50.0%
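The averages and retention figure above follow directly from the per-category scores; a quick sanity check (scores copied from the table on this card):

```python
# Per-category accuracy before and after pruning, from the table above.
original = {"python": 0.0, "html": 0.0, "trivia": 100.0, "math": 66.7,
            "reasoning": 66.7, "medical": 66.7, "linux": 33.3, "writing": 33.3}
pruned = {"python": 0.0, "html": 0.0, "trivia": 83.3, "math": 66.7,
          "reasoning": 66.7, "medical": 50.0, "linux": 16.7, "writing": 33.3}

avg_original = sum(original.values()) / len(original)  # unweighted mean over 8 categories
avg_pruned = sum(pruned.values()) / len(pruned)
linux_retention = pruned["linux"] / original["linux"]  # fraction of Linux score retained

print(round(avg_original, 1), round(avg_pruned, 1), round(linux_retention, 2))
```

This reproduces 45.8% → 39.6% and the 50% Linux retention reported above.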


## Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the pruned model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("CompactAI/Qwen2.5-0.5B-linux-aggressive")
tokenizer = AutoTokenizer.from_pretrained("CompactAI/Qwen2.5-0.5B-linux-aggressive")

# Tokenize a prompt, generate up to 100 new tokens, and decode the result
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Technical Details

| Property         | Value                  |
|------------------|------------------------|
| Base Model       | Qwen/Qwen2.5-0.5B      |
| Specialization   | Linux                  |
| Prune Mode       | Aggressive             |
| Weight Reduction | 10% of weights pruned  |
| Model Size       | 0.5B params            |
| Tensor Type      | F16                    |
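The card does not document the exact pruning method, so as an illustration only, here is a minimal sketch of unstructured magnitude pruning, one common way to remove 10% of a layer's weights (the `magnitude_prune` helper is hypothetical, not part of this model's pipeline):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, amount: float = 0.10) -> np.ndarray:
    """Zero out the `amount` fraction of weights with the smallest magnitude.

    Illustrative only: the actual technique used to produce this
    checkpoint is not documented in the model card.
    """
    flat = np.abs(weights).ravel()
    k = int(flat.size * amount)  # number of weights to zero out
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask
```

In a real pipeline this would typically be applied per layer (e.g. via `torch.nn.utils.prune` in PyTorch), often followed by fine-tuning to recover accuracy.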

## License

This model inherits the license from the base model.
