RangiLyu committed
Commit 7554694 · verified · 1 Parent(s): de4f18d

update readme

Files changed (5):
  1. .gitattributes +1 -0
  2. README.md +109 -3
  3. deployment_guide.md +116 -0
  4. figs/efficiency.jpg +2 -2
  5. figs/performance.png +3 -0
.gitattributes CHANGED
@@ -36,3 +36,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  tokenizer.json filter=lfs diff=lfs merge=lfs -text
  figs/efficiency.jpg filter=lfs diff=lfs merge=lfs -text
  figs/title.png filter=lfs diff=lfs merge=lfs -text
+ figs/performance.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -24,7 +24,7 @@ pipeline_tag: image-text-to-text
 
  ## Introduction
 
- We introduce **Intern-S2-Preview**, an efficient 35B scientific multimodal foundation model. Beyond conventional parameter and data scaling, Intern-S2-Preview explores **task scaling**: increasing the difficulty, diversity, and coverage of scientific tasks to further unlock model capabilities.
+ We introduce **Intern-S2-Preview**, an efficient 35B scientific multimodal foundation model continually pre-trained from Qwen3.5. Beyond conventional parameter and data scaling, Intern-S2-Preview explores **task scaling**: increasing the difficulty, diversity, and coverage of scientific tasks to further unlock model capabilities.
 
  By extending professional scientific tasks into a full-chain training pipeline from pre-training to reinforcement learning, Intern-S2-Preview achieves performance comparable to the trillion-scale Intern-S1-Pro on multiple core professional scientific tasks, while using only 35B parameters. At the same time, it maintains strong general reasoning, multimodal understanding, coding, and agent capabilities.
 
@@ -45,12 +45,12 @@ By extending professional scientific tasks into a full-chain training pipeline f
 
  We evaluate Intern-S2-Preview on various benchmarks, including general and scientific datasets. We report the performance comparison with recent VLMs and LLMs below.
 
- ![performance](./figs/performance.jpeg)
+ ![performance](./figs/performance.png)
 
  > **Note**: <u>Underline</u> means the best performance among open-source models; **Bold** indicates the best performance among all models.
 
- We use the [OpenCompass](https://github.com/open-compass/OpenCompass/) and [VLMEvalKit](https://github.com/open-compass/vlmevalkit) to evaluate all models.
+ We use [OpenCompass](https://github.com/open-compass/OpenCompass/) and [VLMEvalKit](https://github.com/open-compass/vlmevalkit) to evaluate all models. For text reasoning benchmarks, Intern-S2-Preview is evaluated with a maximum inference length of 128K tokens; for multimodal benchmarks, the maximum inference length is 64K tokens.
 
  ## Quick Start
@@ -288,3 +288,109 @@ print(json.dumps(response.model_dump(), indent=2, ensure_ascii=False))
  ```
 
  > Note: We do not recommend disabling thinking mode for agentic tasks.
+
+
+ ## Agent Integration
+
+ Intern-S2-Preview can be plugged into agent frameworks in two ways: connecting to a **self-hosted deployment**, or calling the **official Intern API**. Below we cover both, with examples for agent frameworks (OpenClaw, Hermes, etc.) and for Claude Code.
+
+ ### 1. Self-hosted Deployment (LMDeploy as an example)
+
+ First, serve the model with LMDeploy following the [Model Deployment Guide](./deployment_guide.md). The examples below assume the server is running at `http://0.0.0.0:23333`.
+
+ #### Connecting Agent Frameworks
+
+ Most agent frameworks (OpenClaw, Hermes, etc.) accept an OpenAI-compatible endpoint. Point them at the LMDeploy server base URL `http://0.0.0.0:23333/v1`.
+
+ You can check the connection with the following command:
+
+ ```bash
+ curl http://0.0.0.0:23333/v1/chat/completions \
+   -H "Content-Type: application/json" \
+   -H "Authorization: Bearer EMPTY" \
+   -d '{
+     "model": "internlm/Intern-S2-Preview",
+     "messages": [
+       {"role": "user", "content": "Hello"}
+     ],
+     "temperature": 0.8,
+     "top_p": 0.95
+   }'
+ ```
+
+ Alternatively, configure your agent framework with the following environment variables:
+
+ ```bash
+ export OPENAI_API_KEY=EMPTY
+ export OPENAI_BASE_URL=http://0.0.0.0:23333/v1
+ export OPENAI_MODEL=internlm/Intern-S2-Preview
+ ```
+
+ Remember to launch LMDeploy with `--tool-call-parser interns2-preview` so tool calls are parsed correctly.
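With the parser enabled, any OpenAI-compatible client can drive tool calling. Below is a minimal Python sketch using the `openai` package; the `get_weather` tool and its schema are hypothetical, made up here purely to illustrate the round trip.

```python
# Minimal tool-calling sketch against the self-hosted endpoint.
# Assumes `pip install openai`; the get_weather tool is hypothetical.
from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:23333/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="internlm/Intern-S2-Preview",
    messages=[{"role": "user", "content": "What is the weather in Shanghai?"}],
    tools=tools,
    temperature=0.8,
    top_p=0.95,
)

# With --tool-call-parser enabled, tool invocations arrive as structured
# tool_calls instead of raw text in the message content.
print(response.choices[0].message.tool_calls)
```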
+
+ #### Connecting Claude Code
+
+ LMDeploy exposes an Anthropic-compatible `/v1/messages` endpoint that Claude Code can talk to directly. Add the following to `~/.claude/settings.json`:
+
+ ```json
+ {
+   "env": {
+     "ANTHROPIC_BASE_URL": "http://127.0.0.1:23333",
+     "ANTHROPIC_AUTH_TOKEN": "dummy",
+     "ANTHROPIC_MODEL": "internlm/Intern-S2-Preview",
+     "ANTHROPIC_CUSTOM_MODEL_OPTION": "internlm/Intern-S2-Preview"
+   }
+ }
+ ```
+
+ For a full walkthrough (curl verification, model routing, troubleshooting), see [LMDeploy × Claude Code](https://lmdeploy.readthedocs.io/en/latest/intergration/claude_code.html).
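To verify the `/v1/messages` route independently of Claude Code, a sketch with the `anthropic` Python SDK also works, assuming `pip install anthropic` and that the endpoint follows the standard Anthropic Messages schema:

```python
# Sanity-check the Anthropic-compatible endpoint exposed by LMDeploy.
# Assumes `pip install anthropic`.
import anthropic

client = anthropic.Anthropic(
    base_url="http://127.0.0.1:23333",  # no /v1 suffix; the SDK appends /v1/messages
    api_key="dummy",                    # the self-hosted server accepts any token
)

message = client.messages.create(
    model="internlm/Intern-S2-Preview",
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello"}],
)
print(message.content)
```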
+
+ ### 2. Official Intern API
+
+ If you do not want to self-host, you can use the official Intern API. Register at [internlm.intern-ai.org.cn](https://internlm.intern-ai.org.cn/) and create an API token (`sk-xxxxxxxx`).
+
+ #### Connecting Agent Frameworks
+
+ The service is OpenAI-compatible, so any agent framework works. Set the base URL to `https://chat.intern-ai.org.cn/api/v1` and the model name to `intern-s2-preview` in the CLI or config file.
+
+ You can check the connection with the following command:
+
+ ```bash
+ curl https://chat.intern-ai.org.cn/api/v1/chat/completions \
+   -H "Content-Type: application/json" \
+   -H "Authorization: Bearer sk-xxxxxxxx" \
+   -d '{
+     "model": "intern-s2-preview",
+     "messages": [
+       {"role": "user", "content": "Hello"}
+     ],
+     "temperature": 0.8,
+     "top_p": 0.95
+   }'
+ ```
+
+ Refer to the [Intern API documentation](https://internlm.intern-ai.org.cn/api/document?lang=en) for the current endpoint, available model names, rate limits, and advanced parameters.
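For reference, the same endpoint also works from the `openai` Python client, here with streaming enabled (a minimal sketch; replace `sk-xxxxxxxx` with your own token):

```python
# Minimal streaming sketch against the official Intern API.
# Assumes `pip install openai` and a valid API token.
from openai import OpenAI

client = OpenAI(
    base_url="https://chat.intern-ai.org.cn/api/v1",
    api_key="sk-xxxxxxxx",  # replace with your token
)

stream = client.chat.completions.create(
    model="intern-s2-preview",
    messages=[{"role": "user", "content": "Hello"}],
    temperature=0.8,
    top_p=0.95,
    stream=True,
)

# Print tokens as they arrive.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```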
+
+ #### Connecting Claude Code
+
+ Claude Code can route to the official Intern API by pointing `ANTHROPIC_BASE_URL` at the Intern Anthropic-compatible gateway:
+
+ ```json
+ {
+   "env": {
+     "ANTHROPIC_BASE_URL": "http://chat.staging.intern-ai.org.cn",
+     "ANTHROPIC_AUTH_TOKEN": "your-api-token",
+     "ANTHROPIC_MODEL": "intern-s2-preview",
+     "ANTHROPIC_SMALL_FAST_MODEL": "intern-s2-preview"
+   }
+ }
+ ```
+
+ Then start Claude Code with the following command:
+
+ ```bash
+ claude --model intern-s2-preview
+ ```
+
+ For step-by-step setup, see [Intern API × Claude Code Integration](https://internlm.intern-ai.org.cn/api/document?lang=en).
+
deployment_guide.md ADDED
@@ -0,0 +1,116 @@
+ # Intern-S2-Preview Deployment Guide
+
+ The Intern-S2-Preview release is a 35B-A3B model stored in bfloat16 weight format. This guide provides deployment examples for the following configurations:
+
+ - MTP speculative decoding (Recommended)
+ - Basic serving without MTP
+ - Long-context inference with YaRN RoPE configuration
+
+ > NOTE: The commands below are reference configurations. Inference frameworks are under active development, so use the latest framework documentation and your local validation results when tuning production deployments.
+
+ ## LMDeploy
+
+ Use the latest LMDeploy (>=0.13.0) with Intern-S2-Preview support.
+
+ - Serving With MTP (Recommended)
+
+ ```bash
+ lmdeploy serve api_server \
+   internlm/Intern-S2-Preview \
+   --trust-remote-code \
+   --backend pytorch \
+   --tp 2 \
+   --reasoning-parser default \
+   --tool-call-parser interns2-preview \
+   --speculative-algorithm qwen3_5_mtp \
+   --speculative-num-draft-tokens 4 \
+   --max-batch-size 256
+ ```
+
+ - Basic Serving Without MTP
+
+ ```bash
+ lmdeploy serve api_server \
+   internlm/Intern-S2-Preview \
+   --trust-remote-code \
+   --backend pytorch \
+   --tp 2 \
+   --reasoning-parser default \
+   --tool-call-parser interns2-preview
+ ```
+
+ - Long-Context Serving
+
+ For long-context inference, configure both `--session-len` and the YaRN RoPE parameters. The following example uses a 512k context length:
+
+ ```bash
+ lmdeploy serve api_server \
+   internlm/Intern-S2-Preview \
+   --trust-remote-code \
+   --tp 2 \
+   --backend pytorch \
+   --reasoning-parser default \
+   --tool-call-parser interns2-preview \
+   --session-len 512000 \
+   --max-batch-size 64 \
+   --hf-overrides '{"text_config": {"rope_parameters": {"mrope_interleaved": true, "mrope_section": [11, 11, 10], "rope_type": "yarn", "rope_theta": 10000000, "partial_rotary_factor": 0.25, "factor": 4.0, "original_max_position_embeddings": 262144}}}'
+ ```
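As a quick consistency check on those numbers (assuming the usual YaRN convention where the scaled context is `original_max_position_embeddings * factor`), the override above supports contexts well beyond the requested session length:

```python
# Sanity check for the YaRN override above, assuming the usual convention
# that the extended context is original_max_position_embeddings * factor.
original_max = 262144   # original_max_position_embeddings from the override
factor = 4.0            # YaRN scaling factor from the override
session_len = 512000    # --session-len passed to the server

extended_context = int(original_max * factor)
print(f"extended context: {extended_context}")  # 1048576
assert session_len <= extended_context, "session length exceeds scaled context"
```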
+
+ ## vLLM
+
+ Use the latest vLLM Docker image or source build with Intern-S2-Preview support.
+
+ - Serving With MTP (Recommended)
+
+ ```bash
+ vllm serve internlm/Intern-S2-Preview \
+   --trust-remote-code \
+   --tensor-parallel-size 2 \
+   --reasoning-parser qwen3 \
+   --enable-auto-tool-choice \
+   --tool-call-parser qwen3_coder \
+   --speculative-config '{"method":"mtp","num_speculative_tokens":4}'
+ ```
+
+ - Basic Serving Without MTP
+
+ ```bash
+ vllm serve internlm/Intern-S2-Preview \
+   --trust-remote-code \
+   --tensor-parallel-size 2 \
+   --reasoning-parser qwen3 \
+   --enable-auto-tool-choice \
+   --tool-call-parser qwen3_coder
+ ```
+
+ ## SGLang
+
+ Use the latest SGLang Docker image or source build with Intern-S2-Preview support.
+
+ - Serving With MTP (Recommended)
+
+ ```bash
+ SGLANG_ENABLE_SPEC_V2=1 \
+ python3 -m sglang.launch_server \
+   --model-path internlm/Intern-S2-Preview \
+   --trust-remote-code \
+   --tp-size 2 \
+   --reasoning-parser qwen3 \
+   --tool-call-parser qwen3_coder \
+   --mamba-scheduler-strategy extra_buffer \
+   --speculative-algo NEXTN \
+   --speculative-eagle-topk 1 \
+   --speculative-num-steps 3 \
+   --speculative-num-draft-tokens 4
+ ```
+
+ - Basic Serving Without MTP
+
+ ```bash
+ python3 -m sglang.launch_server \
+   --model-path internlm/Intern-S2-Preview \
+   --trust-remote-code \
+   --tp-size 2 \
+   --reasoning-parser qwen3 \
+   --tool-call-parser qwen3_coder
+ ```
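Whichever backend you choose, the server exposes an OpenAI-compatible API, so a quick smoke test looks the same everywhere. A minimal sketch, assuming `pip install openai` and LMDeploy's default port 23333 (vLLM defaults to 8000, SGLang to 30000):

```python
# Post-deployment smoke test against any of the three OpenAI-compatible
# servers above; adjust the port for vLLM (8000) or SGLang (30000).
from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:23333/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="internlm/Intern-S2-Preview",
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
    temperature=0.8,
    top_p=0.95,
)
print(response.choices[0].message.content)
```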
figs/efficiency.jpg CHANGED

Git LFS Details (old)

  • SHA256: 2d7b1336523b6fe067a513fab92964c30c7a28a682a0debed4402041092bd8de
  • Pointer size: 131 Bytes
  • Size of remote file: 182 kB

Git LFS Details (new)

  • SHA256: 39b53166ece4ceda370e99c9d864f8150b98159747cd84c3d538588e3934c859
  • Pointer size: 131 Bytes
  • Size of remote file: 346 kB
figs/performance.png ADDED

Git LFS Details

  • SHA256: 85ec61e9af588fb1f03774c79517b6052e93e63727e92b24bb5d868d8e420d03
  • Pointer size: 132 Bytes
  • Size of remote file: 1.1 MB