CompactAI committed
Commit b34dfd5 · verified · 1 Parent(s): fe0c001

Upload folder using huggingface_hub

README.md CHANGED
@@ -5,44 +5,36 @@ tags:
  - math
  - optimized
  - wanda
- - activation-pruning
  base_model: Qwen/Qwen3-1.7B
  pipeline_tag: text-generation
  ---

  # Qwen3-1.7B-math-aggressive

- > 🎯 **MATH-optimized** | 📦 **Aggressive** pruning | ⚡ **20% weights pruned**
+ > 🎯 **MATH-optimized** | 📦 **Aggressive** pruning | ⚡ **35% weights pruned**

- This model is an **aggressively pruned** version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B), specialized for **MATH** tasks using activation-aware weight pruning (Wanda-style).
+ This model is an **aggressively pruned** version of [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B).

- ## ✨ Key Features
-
- - **Specialization**: Optimized for Math tasks
- - **Pruning Method**: Wanda-style (|W| × |activation|) importance scoring
- - **Size Reduction**: 20% weights pruned
- - **Use Case**: Maximum compression for edge deployment
-
- ## 📊 Performance Comparison
+ ## Performance Comparison

  | Category | Original | Pruned | Change |
  |----------|----------|--------|--------|
- | Python | 13.3% | 13.3% | → |
+ | Python | 0.0% | 0.0% | → |
  | Html | 0.0% | 0.0% | → |
- | Trivia | 91.1% | 42.2% | ↓ 48.9% |
- | **Math** | 91.1% | 86.7% ⭐ | ↓ 4.4% |
- | Reasoning | 28.9% | 22.2% | ↓ 6.7% |
- | Medical | 91.1% | 35.6% | ↓ 55.6% |
- | Linux | 93.3% | 75.6% | ↓ 17.8% |
- | Writing | 71.1% | 31.1% | ↓ 40.0% |
+ | Trivia | 57.1% | 50.0% | ↓ 7.1% |
+ | **Math** | 66.7% | 73.3% ⭐ | ↑ 6.7% |
+ | Reasoning | 20.0% | 0.0% | ↓ 20.0% |
+ | Medical | 50.0% | 66.7% | ↑ 16.7% |
+ | Linux | 20.0% | 0.0% | ↓ 20.0% |
+ | Writing | 16.7% | 0.0% | ↓ 16.7% |

- **Average**: 60.0% → 38.3% (-21.7%)
+ **Average**: 28.8% → 23.8% (-5.1%)

- **Math Retention**: 95.1% of original performance
+ **Math Retention**: 110.0% of original performance

  ![Comparison Graph](comparison_graph.png)

- ## 🚀 Quick Start
+ ## Quick Start

  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
@@ -50,31 +42,20 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
  model = AutoModelForCausalLM.from_pretrained("CompactAI/Qwen3-1.7B-math-aggressive")
  tokenizer = AutoTokenizer.from_pretrained("CompactAI/Qwen3-1.7B-math-aggressive")

- # Example usage
  inputs = tokenizer("Your prompt here", return_tensors="pt")
  outputs = model.generate(**inputs, max_new_tokens=100)
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  ```
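Since the card advertises a concrete pruning ratio (20% before this commit, 35% after), a reader can sanity-check a downloaded checkpoint by measuring the fraction of exactly-zero weights. A minimal sketch, assuming the pruned weights are stored as zeros in the dense tensors rather than in a compressed sparse format:

```python
import torch
from transformers import AutoModelForCausalLM

# Same checkpoint as the Quick Start snippet above.
model = AutoModelForCausalLM.from_pretrained("CompactAI/Qwen3-1.7B-math-aggressive")

# Count exactly-zero entries in the 2-D weight matrices,
# which is where Wanda-style pruning operates.
total = zeros = 0
for name, param in model.named_parameters():
    if param.dim() == 2 and name.endswith("weight"):
        total += param.numel()
        zeros += (param == 0).sum().item()

print(f"Global weight sparsity: {zeros / total:.1%}")
```

Note that this also counts 2-D embedding matrices, which pruning typically leaves dense, so the measured figure can come in somewhat below the advertised ratio.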

- ## 📋 Technical Details
+ ## Technical Details

  | Property | Value |
  |----------|-------|
  | Base Model | [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) |
  | Specialization | Math |
  | Prune Mode | Aggressive |
- | Pruning Method | Activation-based weight pruning (Wanda) |
- | Weight Reduction | 20% weights pruned |
-
- ## 🔗 Related Models
+ | Weight Reduction | 35% weights pruned |

- This model is part of the **Qwen3-1.7B** pruned model collection. Variants:
- - **Safe** - Conservative pruning (~10-20%), high accuracy retention
- - **Aggressive** - Maximum compression (~40-50%), best for edge deployment
+ ## License

- ## 📜 License
-
- This model inherits the license from the base model [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B).
-
- ---
- *Generated by ZANNPS [Zeto Automatic Neural Network Pruning System]*
+ This model inherits the license from the base model.
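The Key Features bullets removed by this commit describe the method as "Wanda-style (|W| × |activation|) importance scoring": each weight is scored by its magnitude times the ℓ2 norm of the activations arriving at its input channel, and the lowest-scoring weights within each output row are zeroed. A minimal per-layer sketch of that rule, not the actual ZANNPS implementation (the function name, calibration batch, and the 35% sparsity target are illustrative assumptions):

```python
import torch

def wanda_prune_linear(weight: torch.Tensor,
                       calib_acts: torch.Tensor,
                       sparsity: float = 0.35) -> torch.Tensor:
    """Sketch of Wanda-style pruning for a single linear layer.

    weight:     (out_features, in_features) weight matrix
    calib_acts: (n_samples, in_features) activations observed at the
                layer input on calibration data
    sparsity:   fraction of weights to zero within each output row
    """
    # Per-input-channel activation norm ||X_j||_2.
    act_norm = calib_acts.norm(p=2, dim=0)          # (in_features,)

    # Wanda importance score: |W_ij| * ||X_j||_2.
    scores = weight.abs() * act_norm.unsqueeze(0)   # (out, in)

    # Zero the lowest-scoring weights, compared within each row.
    n_prune = int(weight.shape[1] * sparsity)
    pruned = weight.clone()
    if n_prune > 0:
        drop = scores.topk(n_prune, dim=1, largest=False).indices
        pruned.scatter_(1, drop, 0.0)
    return pruned

# Toy check on a random 4x8 layer with 16 calibration samples:
W, X = torch.randn(4, 8), torch.randn(16, 8)
print((wanda_prune_linear(W, X) == 0).float().mean())  # 0.25 (2 of 8 per row)
```

In practice the calibration activations come from forward passes over a small set of task-specific prompts (math, for this variant), which is what makes the pruning activation-aware rather than purely magnitude-based.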
 
comparison_graph.png CHANGED
model-00001-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d80e81d8b167f2e7d9cd17653e7bcb4ca188ceaec6abab686020521fd601acd0
+ oid sha256:ce58e31d1bfff711a8f1af4d99bc55d4c12e9755b8ef4d4654f3297e07330dca
  size 3988008024
model-00002-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3d62d1a7ec82ae37c01d5c82ca45a1cbd9bed5694fbb9ff2ab4435d0a06dbf37
+ oid sha256:6c8d8dd602bba248513248162e0f47be004f71f8b6393c931ce75176f32cd47f
  size 75507296
tokenizer.json CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:fea4f89c198c65a418ebfd87d7480db83fe21f31c7f56cd2ecea1110b1dff53e
+ oid sha256:bb5aa816bcb7fb495b5269f933d2710c6170d4dd410f5010a21bbfdd8c41f963
  size 11422917