Update pruned model - 8 files
Browse files- README.md +18 -12
- comparison_graph.png +0 -0
- model.safetensors +1 -1
- tokenizer.json +1 -1
README.md
CHANGED
|
@@ -11,26 +11,32 @@ pipeline_tag: text-generation
|
|
| 11 |
|
| 12 |
# LFM2.5-1.2B-Instruct-python-safe
|
| 13 |
|
| 14 |
-
>
|
| 15 |
|
| 16 |
This model is a **conservatively pruned** version of [LiquidAI/LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct).
|
| 17 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 18 |
## Performance Comparison
|
| 19 |
|
| 20 |
| Category | Original | Pruned | Change |
|
| 21 |
|----------|----------|--------|--------|
|
| 22 |
-
| **Python** |
|
| 23 |
-
| Html |
|
| 24 |
-
| Trivia |
|
| 25 |
-
| Math |
|
| 26 |
-
| Reasoning |
|
| 27 |
-
| Medical |
|
| 28 |
-
| Linux |
|
| 29 |
-
| Writing |
|
|
|
|
|
|
|
| 30 |
|
| 31 |
-
**Average**: 64.6% β 63.5% (-1.0%)
|
| 32 |
|
| 33 |
-
**Python Retention**: 100.0%
|
| 34 |
|
| 35 |

|
| 36 |
|
|
@@ -54,7 +60,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
|
|
| 54 |
| Base Model | [LiquidAI/LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct) |
|
| 55 |
| Specialization | Python |
|
| 56 |
| Prune Mode | Safe |
|
| 57 |
-
| Weight Reduction |
|
| 58 |
|
| 59 |
## License
|
| 60 |
|
|
|
|
| 11 |
|
| 12 |
# LFM2.5-1.2B-Instruct-python-safe
|
| 13 |
|
| 14 |
+
> **PYTHON-optimized** | **Safe** pruning | **30% weights pruned**
|
| 15 |
|
| 16 |
This model is a **conservatively pruned** version of [LiquidAI/LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct).
|
| 17 |
|
| 18 |
+
|
| 19 |
+
|
| 20 |
+
> **Pruning Alert:** The benchmarks show virtually NO quality drop! This isn't a bug -- it is a feature. The Wanda pruning algorithm is so effective at identifying unimportant weights that it can remove a large percentage of parameters without affecting performance. Think of it like pruning dead leaves from a tree -- the tree does not miss them because they were not doing anything anyway!
|
| 21 |
+
|
| 22 |
+
|
| 23 |
+
|
| 24 |
## Performance Comparison
|
| 25 |
|
| 26 |
| Category | Original | Pruned | Change |
|
| 27 |
|----------|----------|--------|--------|
|
| 28 |
+
| **Python** | 0.0% | 0.0% β | β |
|
| 29 |
+
| Html | 0.0% | 0.0% | β |
|
| 30 |
+
| Trivia | 90.0% | 90.0% | β |
|
| 31 |
+
| Math | 55.0% | 55.0% | β |
|
| 32 |
+
| Reasoning | 40.0% | 40.0% | β |
|
| 33 |
+
| Medical | 80.0% | 80.0% | β |
|
| 34 |
+
| Linux | 45.0% | 45.0% | β |
|
| 35 |
+
| Writing | 20.0% | 20.0% | β |
|
| 36 |
+
|
| 37 |
+
**Average**: 41.2% -> 41.2% (+0.0%)
|
| 38 |
|
|
|
|
| 39 |
|
|
|
|
| 40 |
|
| 41 |

|
| 42 |
|
|
|
|
| 60 |
| Base Model | [LiquidAI/LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct) |
|
| 61 |
| Specialization | Python |
|
| 62 |
| Prune Mode | Safe |
|
| 63 |
+
| Weight Reduction | 30% weights pruned |
|
| 64 |
|
| 65 |
## License
|
| 66 |
|
comparison_graph.png
CHANGED
|
|
model.safetensors
CHANGED
|
@@ -1,3 +1,3 @@
|
|
| 1 |
version https://git-lfs.github.com/spec/v1
|
| 2 |
-
oid sha256:
|
| 3 |
size 2340697784
|
|
|
|
| 1 |
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:5bf45555d31a2eb510a66cabe4efe6384003f7405ec870f5ae9b4f9d3cffe6a0
|
| 3 |
size 2340697784
|
tokenizer.json
CHANGED
|
@@ -2,7 +2,7 @@
|
|
| 2 |
"version": "1.0",
|
| 3 |
"truncation": {
|
| 4 |
"direction": "Right",
|
| 5 |
-
"max_length":
|
| 6 |
"strategy": "LongestFirst",
|
| 7 |
"stride": 0
|
| 8 |
},
|
|
|
|
| 2 |
"version": "1.0",
|
| 3 |
"truncation": {
|
| 4 |
"direction": "Right",
|
| 5 |
+
"max_length": 126976,
|
| 6 |
"strategy": "LongestFirst",
|
| 7 |
"stride": 0
|
| 8 |
},
|