---
license: apache-2.0
tags:
- pruned
- html
- optimized
- wanda
base_model: LiquidAI/LFM2.5-1.2B-Base
pipeline_tag: text-generation
---
# LFM2.5-1.2B-Base-html-aggressive
> 🎯 **HTML-optimized** | 📦 **Aggressive** pruning | ⚡ **25% of weights pruned**

This model is an **aggressively pruned** version of [LiquidAI/LFM2.5-1.2B-Base](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Base), specialized for HTML generation.
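## Pruning Method

The `wanda` tag indicates Wanda-style pruning, which scores each weight by its magnitude times the L2 norm of the corresponding input activation over a calibration set, then zeroes the lowest-scoring weights in each output row. The sketch below is illustrative only, assuming per-row unstructured pruning; the exact calibration data and pruning granularity used for this model are not documented here.

```python
import torch

def wanda_prune(weight: torch.Tensor, act_norm: torch.Tensor, sparsity: float = 0.25) -> torch.Tensor:
    """Illustrative Wanda-style pruning of one linear layer.

    weight:   (out_features, in_features) weight matrix
    act_norm: (in_features,) L2 norm of each input feature over calibration tokens
    sparsity: fraction of weights to zero (0.25 matches "25% of weights pruned")
    """
    # Wanda importance score: |W_ij| * ||X_j||_2
    score = weight.abs() * act_norm.unsqueeze(0)
    k = int(weight.shape[1] * sparsity)  # number of weights to drop per output row
    pruned = weight.clone()
    if k > 0:
        # Zero the k lowest-scoring weights in each row
        _, idx = torch.topk(score, k, dim=1, largest=False)
        pruned.scatter_(1, idx, 0.0)
    return pruned
```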
## Performance Comparison
| Category | Original | Pruned | Change |
|----------|----------|--------|--------|
| Python | 0.0% | 0.0% | → |
| **HTML** | 16.7% | 41.7% ⭐ | ↑ 25.0% |
| Trivia | 91.7% | 75.0% | ↓ 16.7% |
| Math | 75.0% | 58.3% | ↓ 16.7% |
| Reasoning | 41.7% | 50.0% | ↑ 8.3% |
| Medical | 66.7% | 50.0% | ↓ 16.7% |
| Linux | 16.7% | 8.3% | ↓ 8.3% |
| Writing | 33.3% | 58.3% | ↑ 25.0% |

**Average**: 42.7% → 42.7% (+0.0%)

**HTML Retention**: 250.0% (pruned HTML score ÷ original HTML score)

## Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CompactAI/LFM2.5-1.2B-Base-html-aggressive"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
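For GPU inference, the model can usually be loaded in half precision with device placement handled by `accelerate`. The `bfloat16` dtype and the HTML prompt below are assumptions for illustration, not settings stated by this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CompactAI/LFM2.5-1.2B-Base-html-aggressive"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: half precision works; use float32 on CPU
    device_map="auto",           # requires the `accelerate` package
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Write a minimal HTML page with a centered heading."  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```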
## Technical Details
| Property | Value |
|----------|-------|
| Base Model | [LiquidAI/LFM2.5-1.2B-Base](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Base) |
| Specialization | HTML |
| Prune Mode | Aggressive |
| Weight Reduction | 25% of weights pruned |
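
If the pruned weights are stored as explicit zeros in the checkpoint (an assumption; the storage format is not stated here), the effective sparsity of the weight matrices can be checked directly:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("CompactAI/LFM2.5-1.2B-Base-html-aggressive")

zeros, total = 0, 0
for name, param in model.named_parameters():
    if param.dim() == 2:  # restrict to 2D weight matrices (linear projections)
        zeros += (param == 0).sum().item()
        total += param.numel()

print(f"Zero fraction across 2D weights: {zeros / total:.1%}")  # expected near 25%
```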
## License
This model inherits the license from the base model.