---
license: apache-2.0
tags:
- pruned
- math
- optimized
- wanda
base_model: LiquidAI/LFM2.5-1.2B-Instruct
pipeline_tag: text-generation
---

# LFM2.5-1.2B-Instruct-math-aggressive

> **MATH-optimized** | **Aggressive** pruning | **35% of weights pruned**

This model is an **aggressively pruned** version of [LiquidAI/LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct).

> **Note:** The overall quality drop is small (-3.1 points on average) and math performance is fully retained; the largest regressions are in the code-generation categories (Python, HTML). The Wanda pruning algorithm identifies and removes less important weights while preserving most model capability.

## Performance Comparison

| Category | Original | Pruned | Change |
|----------|----------|--------|--------|
| Python | 5.0% | 0.0% | ↓ 5.0% |
| HTML | 15.0% | 0.0% | ↓ 15.0% |
| Trivia | 90.0% | 90.0% | → |
| **Math** | 55.0% | 55.0% ⭐ | → |
| Reasoning | 45.0% | 40.0% | ↓ 5.0% |
| Medical | 80.0% | 80.0% | → |
| Linux | 50.0% | 50.0% | → |
| Writing | 15.0% | 15.0% | → |

**Average**: 44.4% → 41.2% (-3.1 points)

**Math retention**: 100.0%

![Comparison Graph](comparison_graph.png)

## Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("CompactAI/LFM2.5-1.2B-Instruct-math-aggressive")
tokenizer = AutoTokenizer.from_pretrained("CompactAI/LFM2.5-1.2B-Instruct-math-aggressive")

inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Technical Details

| Property | Value |
|----------|-------|
| Base Model | [LiquidAI/LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct) |
| Specialization | Math |
| Prune Mode | Aggressive |
| Weight Reduction | 35% of weights pruned |

## License

This model inherits the Apache-2.0 license from the base model.
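## Appendix: How Wanda Scores Weights (Sketch)

As background for the note above: Wanda ranks each weight of a linear layer by the product of its magnitude and the L2 norm of the input activation it multiplies, then zeroes the lowest-scoring fraction of weights in each output row. The sketch below is illustrative only, assuming per-layer activation norms collected from a small calibration set; the function `wanda_prune_` and its inputs are hypothetical names for this example, not the script used to build this model.

```python
import torch

def wanda_prune_(weight: torch.Tensor, act_norm: torch.Tensor, sparsity: float = 0.35) -> None:
    """Zero the lowest-importance weights of a linear layer, in place.

    weight:   (out_features, in_features) weight matrix
    act_norm: (in_features,) L2 norm of each input feature over a
              calibration set (hypothetical; gathered via forward hooks)
    """
    # Wanda importance: |W_ij| * ||X_j||_2 -- weight magnitude scaled by
    # the norm of the activation that weight actually multiplies.
    importance = weight.abs() * act_norm

    k = int(weight.shape[1] * sparsity)  # weights to drop per output row
    if k == 0:
        return

    # Find the k least important weights in each row and zero them.
    _, drop_idx = torch.topk(importance, k, dim=1, largest=False)
    weight.data.scatter_(1, drop_idx, 0.0)
```

With `sparsity=0.35` this matches the 35% figure above. Scoring per output row (rather than globally) is what lets Wanda prune without any retraining: every output neuron keeps its strongest inputs.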