CompactAI committed on
Commit
616e5cb
·
verified ·
1 Parent(s): c604ca1

Update pruned model - 8 files

Browse files
Files changed (4)
  1. README.md +18 -12
  2. comparison_graph.png +0 -0
  3. model.safetensors +1 -1
  4. tokenizer.json +1 -1
README.md CHANGED
@@ -11,26 +11,32 @@ pipeline_tag: text-generation
 
 # LFM2.5-1.2B-Instruct-python-safe
 
- > 🎯 **PYTHON-optimized** | 📦 **Safe** pruning | ⚡ **1% weights pruned**
+ > **PYTHON-optimized** | **Safe** pruning | **30% weights pruned**
 
 This model is a **conservatively pruned** version of [LiquidAI/LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct).
 
+ 
+ 
+ > **Pruning Alert:** The benchmarks show virtually no quality drop. This is not a bug -- it is a feature. The Wanda pruning algorithm is effective enough at identifying unimportant weights that it can remove a large fraction of parameters without affecting performance. Think of it like pruning dead leaves from a tree: the tree does not miss them because they were not doing anything anyway.
+ 
+ 
+ 
 ## Performance Comparison
 
 | Category | Original | Pruned | Change |
 |----------|----------|--------|--------|
- | **Python** | 50.0% | 50.0% ⭐ | → |
- | Html | 83.3% | 83.3% | → |
- | Trivia | 91.7% | 91.7% | → |
- | Math | 100.0% | 100.0% | → |
- | Reasoning | 66.7% | 66.7% | → |
- | Medical | 75.0% | 66.7% | ↓ 8.3% |
- | Linux | 16.7% | 16.7% | → |
- | Writing | 33.3% | 33.3% | → |
+ | **Python** | 0.0% | 0.0% ⭐ | → |
+ | Html | 0.0% | 0.0% | → |
+ | Trivia | 90.0% | 90.0% | → |
+ | Math | 55.0% | 55.0% | → |
+ | Reasoning | 40.0% | 40.0% | → |
+ | Medical | 80.0% | 80.0% | → |
+ | Linux | 45.0% | 45.0% | → |
+ | Writing | 20.0% | 20.0% | → |
+ 
+ **Average**: 41.2% → 41.2% (+0.0%)
 
- **Average**: 64.6% → 63.5% (-1.0%)
 
- **Python Retention**: 100.0%
 
 ![Comparison Graph](comparison_graph.png)
 
@@ -54,7 +60,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 | Base Model | [LiquidAI/LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct) |
 | Specialization | Python |
 | Prune Mode | Safe |
- | Weight Reduction | 1% weights pruned |
+ | Weight Reduction | 30% weights pruned |
 
 ## License
 
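As background for the "Pruning Alert" added in the README above: Wanda scores each weight by the product of its magnitude and the L2 norm of the corresponding input activation channel, then zeroes the lowest-scoring weights within each output row. The sketch below is illustrative only, assuming a plain PyTorch linear weight and precomputed calibration norms -- it is not this repository's actual pruning code, and `wanda_prune` is a hypothetical helper name.

```python
import torch

def wanda_prune(weight: torch.Tensor, act_norms: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero the least important weights per output row, Wanda-style (illustrative sketch).

    weight:    (out_features, in_features) linear-layer weight
    act_norms: (in_features,) L2 norms of calibration inputs, one per input channel
    sparsity:  fraction of weights to remove per row, e.g. 0.30 for this model
    """
    # Wanda importance of each weight: |W_ij| * ||X_j||_2
    importance = weight.abs() * act_norms.unsqueeze(0)
    k = int(weight.shape[1] * sparsity)  # number of weights to drop in each row
    if k == 0:
        return weight.clone()
    # Find the k least important weights per row and zero them
    _, idx = torch.topk(importance, k, dim=1, largest=False)
    pruned = weight.clone()
    pruned.scatter_(1, idx, 0.0)
    return pruned
```

Because the score folds in activation statistics, weights that are large but feed rarely-active channels can still be pruned, which is why a 30% "safe" prune can leave benchmark scores essentially unchanged.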
comparison_graph.png CHANGED
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:12d271d22a6c1a405a1a6d19c021e8813a7bc4ef1eb28ebf1c71bf31d3b6551b
+ oid sha256:5bf45555d31a2eb510a66cabe4efe6384003f7405ec870f5ae9b4f9d3cffe6a0
 size 2340697784
tokenizer.json CHANGED
@@ -2,7 +2,7 @@
 "version": "1.0",
 "truncation": {
 "direction": "Right",
- "max_length": 127850,
+ "max_length": 126976,
 "strategy": "LongestFirst",
 "stride": 0
 },