# Model Deployment Guide

> [!NOTE]
> This guide offers a selection of deployment command examples for JoyAI-LLM Flash; these are not necessarily the optimal configurations. Given how rapidly inference engines evolve, we recommend consulting their official documentation for the latest updates to ensure peak performance.

> Support for JoyAI-LLM Flash's dense MTP architecture is currently being integrated into vLLM and SGLang. Until these PRs are merged into a stable release, please use the nightly Docker image to access these features.

## SGLang Deployment

Here is an example of running the model on a single GPU via SGLang:

1. Pull the Docker image.
```bash
docker pull jdopensource/joyai-llm-sglang:v0.5.8-joyai_llm_flash
```
2. Launch the JoyAI-LLM Flash model with dense MTP.

```bash
python3 -m sglang.launch_server --model-path jdopensource/JoyAI-LLM-Flash-Block-INT8 --tp-size 1 --trust-remote-code \
  --tool-call-parser qwen3_coder \
  --speculative-algorithm EAGLE \
  --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4
```
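The launch command in step 2 is typically run inside the container pulled in step 1. The guide does not show how to start the container, so the following is only a sketch; the GPU, shared-memory, and port flags are assumptions to adjust for your environment (30000 is SGLang's default server port).

```shell
# Hypothetical invocation, not taken from the guide: start an interactive
# shell in the pulled image with GPU access, then run the launch command
# from step 2 inside it.
docker run --gpus all --shm-size 32g -p 30000:30000 -it \
  jdopensource/joyai-llm-sglang:v0.5.8-joyai_llm_flash bash
```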
**Key notes:**
- `--tool-call-parser qwen3_coder`: Required when enabling tool usage.
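Once the server is up, it exposes an OpenAI-compatible HTTP API (by default on port 30000). As a minimal sketch, assuming the default host and port, a chat-completion request could be built like this; the prompt and `max_tokens` value are illustrative only:

```python
import json
import urllib.request

# Hypothetical client example (not from the guide): SGLang serves an
# OpenAI-compatible API, assumed here at http://localhost:30000.
payload = {
    "model": "jdopensource/JoyAI-LLM-Flash-Block-INT8",
    "messages": [{"role": "user", "content": "Write hello world in Python."}],
    "max_tokens": 128,
}

def build_request(url="http://localhost:30000/v1/chat/completions"):
    """Build (but do not send) the chat-completion HTTP POST request."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )

req = build_request()
# To send it once the server from step 2 is running:
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```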