This is a speculator model designed for use with Qwen/Qwen3-235B-A22B, based on the EAGLE-3 speculative decoding algorithm.
It was trained using the speculators library on a combination of the Magpie-Align/Magpie-Llama-3.1-Pro-300K-Filtered dataset and the train_sft split of the HuggingFaceH4/ultrachat_200k dataset.
This model should be used with the Qwen/Qwen3-235B-A22B chat template, specifically through the /chat/completions endpoint. It was trained with thinking mode on.
```bash
vllm serve Qwen/Qwen3-235B-A22B \
    -tp 8 \
    --speculative-config '{
        "model": "RedHatAI/Qwen3-235B-A22B-speculator.eagle3",
        "num_speculative_tokens": 3,
        "method": "eagle3"
    }'
```
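Once the server is running, requests go through vLLM's OpenAI-compatible chat completions API. A minimal sketch with curl, assuming the default port 8000 and the served model name from the command above (the prompt is only an illustration):

```bash
# Example request against vLLM's OpenAI-compatible /v1/chat/completions endpoint.
# Port and model name assume the serve command above; the prompt is illustrative.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen/Qwen3-235B-A22B",
        "messages": [
          {"role": "user", "content": "Write a Python function that reverses a string."}
        ],
        "max_tokens": 256
      }'
```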
The following datasets from RedHatAI/speculator_benchmarks were used for benchmarking:

| Use Case | Dataset | Number of Samples |
|---|---|---|
| Coding | HumanEval | 168 |
| Math Reasoning | gsm8k | 80 |
| Text Summarization | CNN/Daily Mail | 80 |
Observed speedups for k = 1 to 5 speculative tokens:

| Use Case | k=1 | k=2 | k=3 | k=4 | k=5 |
|---|---|---|---|---|---|
| Coding | 1.77 | 2.19 | 2.51 | 2.72 | 2.83 |
| Math Reasoning | 1.77 | 2.33 | 2.73 | 3.03 | 3.24 |
| Text Summarization | 1.63 | 2.00 | 2.22 | 2.34 | 2.40 |
<details>
<summary>Command</summary>

```bash
GUIDELLM__PREFERRED_ROUTE="chat_completions" \
guidellm benchmark \
    --target "http://localhost:8000/v1" \
    --data "RedHatAI/speculator_benchmarks" \
    --data-args '{"data_files": "HumanEval.jsonl"}' \
    --rate-type sweep \
    --max-seconds 600 \
    --output-path "Qwen235B-HumanEval.json"
```
</details>
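To reproduce the full table, the same benchmark can be repeated for the other datasets. A sketch is below; only HumanEval.jsonl is confirmed above, so the gsm8k and CNN/Daily Mail file names are assumptions and should be checked against the RedHatAI/speculator_benchmarks repository:

```bash
# Sketch: run the benchmark above for each dataset file.
# Only HumanEval.jsonl is confirmed; gsm8k.jsonl and cnn_dailymail.jsonl are
# assumed names and may differ in the RedHatAI/speculator_benchmarks repo.
for DATASET in HumanEval gsm8k cnn_dailymail; do
  GUIDELLM__PREFERRED_ROUTE="chat_completions" \
  guidellm benchmark \
    --target "http://localhost:8000/v1" \
    --data "RedHatAI/speculator_benchmarks" \
    --data-args "{\"data_files\": \"${DATASET}.jsonl\"}" \
    --rate-type sweep \
    --max-seconds 600 \
    --output-path "Qwen235B-${DATASET}.json"
done
```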