HyzeAI jianchen0311 committed
Commit bb46b83 · 0 Parent(s)

Duplicate from z-lab/Qwen3.5-27B-DFlash

Co-authored-by: Jian Chen <jianchen0311@users.noreply.huggingface.co>
Files changed (7)
  1. .gitattributes +36 -0
  2. README.md +171 -0
  3. assets/dflash_system.png +3 -0
  4. assets/speedup.png +0 -0
  5. config.json +52 -0
  6. dflash.py +188 -0
  7. model.safetensors +3 -0
.gitattributes ADDED
@@ -0,0 +1,36 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ assets/dflash_system.png filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,171 @@
+ ---
+ license: mit
+ library_name: transformers
+ pipeline_tag: text-generation
+ tags:
+ - dflash
+ - speculative-decoding
+ - diffusion
+ - efficiency
+ - flash-decoding
+ - qwen
+ - diffusion-language-model
+ ---
+
+ # Qwen3.5-27B-DFlash
+ [**Paper**](https://arxiv.org/abs/2602.06036) | [**GitHub**](https://github.com/z-lab/dflash) | [**Blog**](https://z-lab.ai/projects/dflash/)
+
+ **DFlash** is a speculative decoding method that uses a lightweight **block diffusion** model for drafting. Because the drafter denoises an entire block of tokens in parallel rather than generating them one at a time, it produces efficient, high-quality drafts that substantially accelerate inference.
+
+ This model is the **drafter** component only; it must be used together with the target model `Qwen/Qwen3.5-27B`. It was trained with a context length of 4096 tokens.
+
+ <div align="center">
+ <img src="assets/dflash_system.png" alt="DFlash Architecture" width="100%">
+ </div>
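+
+ At serving time, the engine runs a draft-and-verify loop: the drafter proposes a block of tokens conditioned on the target model's hidden states, the target verifies the whole block in one forward pass, and the longest agreeing prefix is accepted. The sketch below is a toy illustration of that accept/verify accounting only, with stand-in models and greedy acceptance; it is not the DFlash drafter itself, which lives in `dflash.py`:
+
+ ```python
+ import random
+
+ random.seed(0)
+ K = 16  # tokens drafted per step; this drafter uses block_size = 16
+
+ def target_next(prefix):
+     # Stand-in for one greedy decoding step of the target model.
+     return (sum(prefix) * 31 + len(prefix)) % 100
+
+ def draft_block(prefix, k):
+     # Stand-in for the block-diffusion drafter: DFlash denoises all k
+     # positions in parallel; here we just emulate a drafter that agrees
+     # with the target most of the time.
+     out = []
+     for _ in range(k):
+         tok = target_next(prefix + out)
+         if random.random() < 0.1:  # occasional draft mistake
+             tok = (tok + 1) % 100
+         out.append(tok)
+     return out
+
+ def generate(prompt, n_new):
+     seq, accepted = list(prompt), []
+     while len(seq) - len(prompt) < n_new:
+         block = draft_block(seq, K)
+         # One target pass verifies the whole block: accept the longest
+         # prefix matching the target's greedy choices, then commit one
+         # corrected token from the target itself.
+         n_ok = 0
+         for i, tok in enumerate(block):
+             if tok != target_next(seq + block[:i]):
+                 break
+             n_ok += 1
+         seq = seq + block[:n_ok] + [target_next(seq + block[:n_ok])]
+         accepted.append(n_ok + 1)
+     return seq, sum(accepted) / len(accepted)
+
+ _, acc = generate([1, 2, 3], 256)
+ print(f"average acceptance length: {acc:.2f} tokens per target pass")
+ ```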
+
+ ## Quick Start
+
+ ### Installation
+
+ vLLM:
+ ```bash
+ uv pip install vllm
+ uv pip install -U vllm --torch-backend=auto --extra-index-url https://wheels.vllm.ai/nightly
+ ```
+
+ SGLang:
+ ```bash
+ uv pip install "git+https://github.com/sgl-project/sglang.git@refs/pull/20547/head#subdirectory=python"
+ ```
+
+ ### Launch Server
+
+ vLLM:
+ ```bash
+ vllm serve Qwen/Qwen3.5-27B \
+     --speculative-config '{"method": "dflash", "model": "z-lab/Qwen3.5-27B-DFlash", "num_speculative_tokens": 15}' \
+     --attention-backend flash_attn \
+     --max-num-batched-tokens 32768
+ ```
+
+ SGLang:
+ ```bash
+ # Optional: enable schedule overlapping (experimental, may not be stable)
+ # export SGLANG_ENABLE_SPEC_V2=1
+ # export SGLANG_ENABLE_DFLASH_SPEC_V2=1
+ # export SGLANG_ENABLE_OVERLAP_PLAN_STREAM=1
+
+ python -m sglang.launch_server \
+     --model-path Qwen/Qwen3.5-27B \
+     --speculative-algorithm DFLASH \
+     --speculative-draft-model-path z-lab/Qwen3.5-27B-DFlash \
+     --speculative-num-draft-tokens 16 \
+     --tp-size 1 \
+     --attention-backend fa3 \
+     --mem-fraction-static 0.75 \
+     --mamba-scheduler-strategy extra_buffer \
+     --trust-remote-code
+ ```
+ > **Tip:** For long-context or agentic workloads, add `--speculative-dflash-draft-window-size WINDOW_SIZE` to enable sliding-window attention for the drafter.
+
+ ### Usage
+
+ The server exposes an OpenAI-compatible API (SGLang listens on port 30000 by default; vLLM on 8000):
+
+ ```python
+ from openai import OpenAI
+
+ client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
+
+ response = client.chat.completions.create(
+     model="Qwen/Qwen3.5-27B",
+     messages=[{"role": "user", "content": "Write a quicksort in Python."}],
+     max_tokens=4096,
+     temperature=0.0
+ )
+ print(response.choices[0].message.content)
+ ```
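+
+ The endpoint is OpenAI-compatible, so standard streaming works unchanged; nothing DFlash-specific is required:
+
+ ```python
+ stream = client.chat.completions.create(
+     model="Qwen/Qwen3.5-27B",
+     messages=[{"role": "user", "content": "Write a quicksort in Python."}],
+     max_tokens=4096,
+     temperature=0.0,
+     stream=True,  # receive tokens incrementally as they are committed
+ )
+ for chunk in stream:
+     delta = chunk.choices[0].delta.content
+     if delta:
+         print(delta, end="", flush=True)
+ ```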
+
+ ## Benchmark Results
+
+ **Setup:** Single NVIDIA B200, SGLang, thinking enabled, max output length 4096. We report end-to-end throughput, including prefill time. See our [GitHub repository](https://github.com/z-lab/dflash) for reproduction scripts.
+
+ ### Throughput and Speedup
+
+ _Tokens/sec (speedup vs. autoregressive baseline)_
+
+ **Block Size = 16**
+
+ | Task | Concurrency | AR | MTP | **DFlash** |
+ |---|---:|---:|---:|---:|
+ | Math500 | 1 | 84 | 243 (2.9x) | **397 (4.7x)** |
+ | | 8 | 625 | 1457 (2.3x) | **2270 (3.6x)** |
+ | | 16 | 1121 | 2224 (2.0x) | **3135 (2.8x)** |
+ | | 32 | 1949 | 2504 (1.3x) | **3712 (1.9x)** |
+ | GSM8K | 1 | 83 | 215 (2.6x) | **330 (4.0x)** |
+ | | 8 | 625 | 1303 (2.1x) | **1868 (3.0x)** |
+ | | 16 | 1109 | 1773 (1.6x) | **2589 (2.3x)** |
+ | | 32 | 1914 | 2170 (1.1x) | **3152 (1.6x)** |
+ | HumanEval | 1 | 83 | 236 (2.9x) | **427 (5.2x)** |
+ | | 8 | 602 | 1345 (2.2x) | **2079 (3.5x)** |
+ | | 16 | 1031 | 1921 (1.9x) | **2748 (2.7x)** |
+ | | 32 | 1720 | 2234 (1.3x) | **3198 (1.9x)** |
+ | MBPP | 1 | 84 | 200 (2.4x) | **347 (4.2x)** |
+ | | 8 | 627 | 1049 (1.7x) | **1826 (2.9x)** |
+ | | 16 | 1075 | 1729 (1.6x) | **2479 (2.3x)** |
+ | | 32 | 1832 | 1933 (1.1x) | **2808 (1.5x)** |
+ | MT-Bench | 1 | 84 | 169 (2.0x) | **255 (3.0x)** |
+ | | 8 | 622 | 1035 (1.7x) | **1444 (2.3x)** |
+ | | 16 | 1113 | 1550 (1.4x) | **1984 (1.8x)** |
+ | | 32 | 1900 | 1772 (0.9x) | **2391 (1.3x)** |
+
+ **Block Size = 8**
+
+ | Task | Concurrency | AR | MTP | **DFlash** |
+ |---|---:|---:|---:|---:|
+ | Math500 | 1 | 84 | 273 (3.2x) | **335 (4.0x)** |
+ | | 8 | 625 | 1673 (2.7x) | **2020 (3.2x)** |
+ | | 16 | 1121 | 2731 (2.4x) | **3646 (3.3x)** |
+ | | 32 | 1949 | 3739 (1.9x) | **4288 (2.2x)** |
+ | GSM8K | 1 | 83 | 243 (2.9x) | **301 (3.6x)** |
+ | | 8 | 625 | 1539 (2.5x) | **1814 (2.9x)** |
+ | | 16 | 1109 | 2472 (2.2x) | **2896 (2.6x)** |
+ | | 32 | 1914 | 3431 (1.8x) | **3822 (2.0x)** |
+ | HumanEval | 1 | 83 | 258 (3.1x) | **350 (4.2x)** |
+ | | 8 | 602 | 1486 (2.5x) | **1856 (3.1x)** |
+ | | 16 | 1031 | 2302 (2.2x) | **2749 (2.7x)** |
+ | | 32 | 1720 | 2477 (1.4x) | **3412 (2.0x)** |
+ | MBPP | 1 | 84 | 234 (2.8x) | **311 (3.7x)** |
+ | | 8 | 627 | 1375 (2.2x) | **1757 (2.8x)** |
+ | | 16 | 1075 | 2159 (2.0x) | **2661 (2.5x)** |
+ | | 32 | 1832 | 2885 (1.6x) | **3309 (1.8x)** |
+ | MT-Bench | 1 | 84 | 210 (2.5x) | **250 (3.0x)** |
+ | | 8 | 622 | 1300 (2.1x) | **1495 (2.4x)** |
+ | | 16 | 1113 | 2105 (1.9x) | **2403 (2.2x)** |
+ | | 32 | 1900 | 2873 (1.5x) | **3256 (1.7x)** |
+
+ ### Acceptance Length
+
+ _Format: MTP / DFlash (averaged across concurrency levels)_
+
+ | Task | Block Size = 8 | Block Size = 16 |
+ |---|---:|---:|
+ | Math500 | 5.73 / **5.90** | 7.14 / **7.93** |
+ | GSM8K | 5.54 / **5.57** | 6.84 / **7.22** |
+ | HumanEval | 5.81 / **6.34** | 7.38 / **9.18** |
+ | MBPP | 5.10 / **5.60** | 5.94 / **7.27** |
+ | MT-Bench | **4.60** / 4.54 | 5.30 / **5.47** |
+
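+ Acceptance length is the average number of tokens committed per target forward pass, so it caps the achievable speedup; the gap to the measured speedups above is drafting and scheduling overhead. A rough back-of-the-envelope check (our simplification, not the paper's cost model):
+
+ ```python
+ # HumanEval, block size 16, concurrency 1 (values from the tables above).
+ acceptance_length = 9.18  # tokens committed per target pass
+ measured_speedup = 5.2    # end-to-end, includes prefill
+ # If speedup = acceptance_length / (1 + overhead), the implied per-step
+ # overhead relative to one target forward pass is:
+ overhead = acceptance_length / measured_speedup - 1
+ print(f"implied overhead: {overhead:.0%} of a target pass")  # ~77%
+ ```
+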
+ ## Acknowledgements
+
+ Special thanks to [David Wang](https://davidwa.ng/) for his outstanding engineering support on this project. We are also grateful to [Modal](https://modal.com/), [InnoMatrix](https://innomatrix.ai), and [Yotta Labs](https://www.yottalabs.ai/) for providing the compute resources used to train this draft model.
+
+ ## Citation
+
+ If you find DFlash useful, please cite our work. To share feedback on DFlash or to request support for new models, please fill out this form: [DFlash Feedback](https://forms.gle/4YNwfqb4nJdqn6hq9).
+
+ ```bibtex
+ @article{chen2026dflash,
+     title = {{DFlash: Block Diffusion for Flash Speculative Decoding}},
+     author = {Chen, Jian and Liang, Yesheng and Liu, Zhijian},
+     journal = {arXiv preprint arXiv:2602.06036},
+     year = {2026}
+ }
+ ```
assets/dflash_system.png ADDED
Git LFS Details
  • SHA256: bea1f82796909c1e4f7261ee3c08af743ec3c25057b83fca918808b76af4a7dc
  • Pointer size: 131 bytes
  • Size of remote file: 338 kB
assets/speedup.png ADDED
config.json ADDED
@@ -0,0 +1,52 @@
+ {
+   "architectures": [
+     "DFlashDraftModel"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "auto_map": {
+     "AutoModel": "dflash.DFlashDraftModel"
+   },
+   "block_size": 16,
+   "dflash_config": {
+     "mask_token_id": 248070,
+     "target_layer_ids": [
+       1,
+       16,
+       31,
+       46,
+       61
+     ]
+   },
+   "dtype": "bfloat16",
+   "eos_token_id": 248044,
+   "head_dim": 128,
+   "hidden_act": "silu",
+   "hidden_size": 5120,
+   "initializer_range": 0.02,
+   "intermediate_size": 17408,
+   "layer_types": [
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention",
+     "full_attention"
+   ],
+   "max_position_embeddings": 262144,
+   "max_window_layers": 5,
+   "model_type": "qwen3",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 5,
+   "num_key_value_heads": 8,
+   "num_target_layers": 64,
+   "pad_token_id": 248044,
+   "rms_norm_eps": 1e-06,
+   "rope_scaling": null,
+   "rope_theta": 10000000,
+   "sliding_window": null,
+   "tie_word_embeddings": false,
+   "transformers_version": "4.57.1",
+   "use_cache": true,
+   "use_sliding_window": false,
+   "vocab_size": 248320
+ }
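
The `dflash_config` block couples the drafter to its target: the 5-layer drafter consumes hidden states tapped from 5 of the target's 64 layers (`target_layer_ids`), and `dflash.py` fuses them with its `fc` projection. A shape-level sketch with illustrative random tensors:

```python
import torch

hidden_size = 5120
target_layer_ids = [1, 16, 31, 46, 61]  # tapped layers of the 64-layer target
bsz, ctx_len = 1, 128                   # illustrative context length

# One hidden-state tensor per tapped target layer, concatenated along the
# feature dimension before being handed to the drafter.
taps = [torch.randn(bsz, ctx_len, hidden_size) for _ in target_layer_ids]
target_hidden = torch.cat(taps, dim=-1)  # (1, 128, 5 * 5120)

# Mirrors dflash.py: fc projects the fused taps back down to hidden_size.
fc = torch.nn.Linear(len(target_layer_ids) * hidden_size, hidden_size, bias=False)
print(fc(target_hidden).shape)  # torch.Size([1, 128, 5120])
```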
dflash.py ADDED
@@ -0,0 +1,188 @@
+ from typing import Callable, Optional, Tuple
+
+ import torch
+ from torch import nn
+ from typing_extensions import Unpack
+
+ from transformers.cache_utils import Cache
+ from transformers.models.qwen3.modeling_qwen3 import (
+     ALL_ATTENTION_FUNCTIONS,
+     FlashAttentionKwargs,
+     GradientCheckpointingLayer,
+     Qwen3Config,
+     Qwen3MLP,
+     Qwen3PreTrainedModel,
+     Qwen3RMSNorm,
+     Qwen3RotaryEmbedding,
+     eager_attention_forward,
+     rotate_half,
+ )
+
+ def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
+     cos = cos.unsqueeze(unsqueeze_dim)
+     sin = sin.unsqueeze(unsqueeze_dim)
+     # q covers only the draft (noise) block, while k covers context + block,
+     # so q takes the last q_len rotary positions.
+     q_len = q.size(-2)
+     q_embed = (q * cos[..., -q_len:, :]) + (rotate_half(q) * sin[..., -q_len:, :])
+     k_embed = (k * cos) + (rotate_half(k) * sin)
+     return q_embed, k_embed
+
+ class Qwen3DFlashAttention(nn.Module):
+     """Attention for the DFlash drafter.
+
+     Queries come from the noise block being denoised; keys and values span
+     the projected target-model hidden states (context) plus the noise block.
+     """
+
+     def __init__(self, config: Qwen3Config, layer_idx: int):
+         super().__init__()
+         self.config = config
+         self.layer_idx = layer_idx
+         self.head_dim = getattr(config, "head_dim", config.hidden_size // config.num_attention_heads)
+         self.num_key_value_groups = config.num_attention_heads // config.num_key_value_heads
+         self.scaling = self.head_dim**-0.5
+         self.attention_dropout = config.attention_dropout
+         self.is_causal = False
+         self.q_proj = nn.Linear(
+             config.hidden_size, config.num_attention_heads * self.head_dim, bias=config.attention_bias
+         )
+         self.k_proj = nn.Linear(
+             config.hidden_size, config.num_key_value_heads * self.head_dim, bias=config.attention_bias
+         )
+         self.v_proj = nn.Linear(
+             config.hidden_size, config.num_key_value_heads * self.head_dim, bias=config.attention_bias
+         )
+         self.o_proj = nn.Linear(
+             config.num_attention_heads * self.head_dim, config.hidden_size, bias=config.attention_bias
+         )
+         self.q_norm = Qwen3RMSNorm(self.head_dim, eps=config.rms_norm_eps)
+         self.k_norm = Qwen3RMSNorm(self.head_dim, eps=config.rms_norm_eps)
+         self.sliding_window = config.sliding_window if config.layer_types[layer_idx] == "sliding_attention" else None
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         target_hidden: torch.Tensor,
+         position_embeddings: tuple[torch.Tensor, torch.Tensor],
+         attention_mask: Optional[torch.Tensor],
+         past_key_values: Optional[Cache] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+         **kwargs: Unpack[FlashAttentionKwargs],
+     ) -> tuple[torch.Tensor, Optional[torch.Tensor]]:
+         bsz, q_len = hidden_states.shape[:-1]
+         ctx_len = target_hidden.shape[1]
+         q = self.q_proj(hidden_states)
+         q = q.view(bsz, q_len, -1, self.head_dim)
+         q = self.q_norm(q).transpose(1, 2)
+         # Keys/values are computed over [context; noise block] so every draft
+         # position can attend to the full target context.
+         k_ctx = self.k_proj(target_hidden)
+         k_noise = self.k_proj(hidden_states)
+         v_ctx = self.v_proj(target_hidden)
+         v_noise = self.v_proj(hidden_states)
+         k = torch.cat([k_ctx, k_noise], dim=1).view(bsz, ctx_len + q_len, -1, self.head_dim)
+         v = torch.cat([v_ctx, v_noise], dim=1).view(bsz, ctx_len + q_len, -1, self.head_dim)
+         k = self.k_norm(k).transpose(1, 2)
+         v = v.transpose(1, 2)
+         cos, sin = position_embeddings
+         q, k = apply_rotary_pos_emb(q, k, cos, sin)
+         if past_key_values is not None:
+             cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
+             k, v = past_key_values.update(k, v, self.layer_idx, cache_kwargs)
+         attn_fn: Callable = eager_attention_forward
+         if self.config._attn_implementation != "eager":
+             attn_fn = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
+         attn_output, attn_weights = attn_fn(
+             self,
+             q,
+             k,
+             v,
+             attention_mask,
+             dropout=0.0 if not self.training else self.attention_dropout,
+             scaling=self.scaling,
+             sliding_window=self.sliding_window,
+             **kwargs,
+         )
+         attn_output = attn_output.reshape(bsz, q_len, -1)
+         attn_output = self.o_proj(attn_output)
+         return attn_output, attn_weights
+
+ class Qwen3DFlashDecoderLayer(GradientCheckpointingLayer):
+     def __init__(self, config: Qwen3Config, layer_idx: int):
+         super().__init__()
+         self.hidden_size = config.hidden_size
+         self.self_attn = Qwen3DFlashAttention(config=config, layer_idx=layer_idx)
+         self.mlp = Qwen3MLP(config)
+         self.input_layernorm = Qwen3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+         self.post_attention_layernorm = Qwen3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+
+     def forward(
+         self,
+         target_hidden: Optional[torch.Tensor] = None,
+         hidden_states: Optional[torch.Tensor] = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_value: Optional[Cache] = None,
+         output_attentions: Optional[bool] = False,
+         use_cache: Optional[bool] = False,
+         cache_position: Optional[torch.LongTensor] = None,
+         position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None,  # necessary, but kept here for BC
+         **kwargs: Unpack[FlashAttentionKwargs],
+     ) -> torch.FloatTensor:
+         # Standard pre-norm transformer block, with attention over the
+         # projected target context folded into self-attention.
+         residual = hidden_states
+         hidden_states = self.input_layernorm(hidden_states)
+         hidden_states = self.self_attn(
+             hidden_states=hidden_states,
+             target_hidden=target_hidden,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_value,
+             output_attentions=output_attentions,
+             use_cache=use_cache,
+             cache_position=cache_position,
+             position_embeddings=position_embeddings,
+             **kwargs,
+         )[0]
+         hidden_states = residual + hidden_states
+         residual = hidden_states
+         hidden_states = self.post_attention_layernorm(hidden_states)
+         hidden_states = self.mlp(hidden_states)
+         hidden_states = residual + hidden_states
+         return hidden_states
+
+ class DFlashDraftModel(Qwen3PreTrainedModel):
+     config_class = Qwen3Config
+     _no_split_modules = ["Qwen3DFlashDecoderLayer"]
+
+     def __init__(self, config) -> None:
+         super().__init__(config)
+         self.config = config
+         self.layers = nn.ModuleList(
+             [Qwen3DFlashDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
+         )
+         self.target_layer_ids = self.config.dflash_config.get("target_layer_ids", None)
+         self.norm = Qwen3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+         self.rotary_emb = Qwen3RotaryEmbedding(config)
+         # Fuses the hidden states tapped from len(target_layer_ids) target
+         # layers into a single hidden_size-wide context representation.
+         self.fc = nn.Linear(len(self.target_layer_ids) * config.hidden_size, config.hidden_size, bias=False)
+         self.hidden_norm = Qwen3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+         self.block_size = config.block_size
+         self.mask_token_id = self.config.dflash_config.get("mask_token_id", None)
+         self.post_init()
+
+     def forward(
+         self,
+         position_ids: torch.LongTensor,
+         attention_mask: Optional[torch.Tensor] = None,
+         noise_embedding: Optional[torch.Tensor] = None,
+         target_hidden: Optional[torch.Tensor] = None,
+         past_key_values: Optional[Cache] = None,
+         use_cache: bool = False,
+         **kwargs,
+     ) -> torch.Tensor:
+         # noise_embedding: embeddings of the masked draft block to denoise.
+         # target_hidden: concatenated hidden states from the tapped target layers.
+         hidden_states = noise_embedding
+         target_hidden = self.hidden_norm(self.fc(target_hidden))
+         position_embeddings = self.rotary_emb(hidden_states, position_ids)
+         for layer in self.layers:
+             hidden_states = layer(
+                 hidden_states=hidden_states,
+                 target_hidden=target_hidden,
+                 attention_mask=attention_mask,
+                 position_ids=position_ids,
+                 past_key_value=past_key_values,
+                 use_cache=use_cache,
+                 position_embeddings=position_embeddings,
+                 **kwargs,
+             )
+         return self.norm(hidden_states)
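
For orientation, here is a minimal smoke test of the drafter's forward signature, run from the repository root so `dflash.py` is importable. It uses toy dimensions and random tensors only, assumes the `transformers` version pinned in `config.json` (4.57.1), and forces eager attention to stay backend-agnostic:

```python
import torch
from transformers import Qwen3Config

from dflash import DFlashDraftModel

# Toy config mirroring the structure of config.json at a tiny scale.
config = Qwen3Config(
    hidden_size=64,
    intermediate_size=128,
    num_attention_heads=4,
    num_key_value_heads=2,
    head_dim=16,
    num_hidden_layers=2,
    layer_types=["full_attention", "full_attention"],
    block_size=4,
    dflash_config={"mask_token_id": 0, "target_layer_ids": [0, 1]},
)
config._attn_implementation = "eager"

model = DFlashDraftModel(config).eval()

bsz, ctx_len, block = 1, 8, config.block_size
# Embeddings of the masked draft block, plus the concatenated taps from
# the target layers listed in dflash_config["target_layer_ids"].
noise_embedding = torch.randn(bsz, block, config.hidden_size)
target_hidden = torch.randn(bsz, ctx_len, 2 * config.hidden_size)
# Rotary positions span context + draft block, since keys cover both.
position_ids = torch.arange(ctx_len + block).unsqueeze(0)

with torch.no_grad():
    out = model(
        position_ids=position_ids,
        noise_embedding=noise_embedding,
        target_hidden=target_hidden,
    )
print(out.shape)  # torch.Size([1, 4, 64])
```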
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1e44cae31ebda1940da56318c129509df468a1a6508a8f816a03e0c5c8661b77
+ size 3460432504