3amthoughts committed on
Commit 0429ec1 · verified · 1 Parent(s): 1e9f9f5

Update README.md

Files changed (1): README.md +6 -4
README.md CHANGED
@@ -69,9 +69,10 @@ To measure exactly 4 liters, follow these steps:
 You now have exactly 4 liters of water remaining in the 5-liter jug.
 ```
 
+
 💻 Prompt Format (ChatML)
 DeepLink-R1 strictly uses the ChatML prompt format.
-code
+``` code
 Text
 <|im_start|>system
 You are a logical architect. Think step-by-step.<|im_end|>
@@ -82,11 +83,11 @@ How many 'r's are in the word strawberry?<|im_end|>
 ...
 </think>
 ...<|im_end|>
-
+```
 
 🚀 Usage
 Using transformers (Python)
-code
+```code
 Python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 import torch
@@ -107,4 +108,5 @@ messages = [
 
 inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to("cuda")
 outputs = model.generate(inputs, max_new_tokens=1024, temperature=0.6)
-print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+```
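For reference, the ChatML layout the diff fences off can be sketched by assembling the prompt string by hand. This is a minimal illustration only — `build_chatml` is a hypothetical helper, and in real use the README's own `tokenizer.apply_chat_template(..., add_generation_prompt=True)` call should produce the prompt instead:

```python
def build_chatml(messages):
    """Render a list of {role, content} dicts into a ChatML prompt string.

    Mirrors the format shown in the README: each turn is wrapped in
    <|im_start|>{role} ... <|im_end|> tags, and an opening assistant tag
    is appended so the model continues as the assistant (the effect of
    add_generation_prompt=True in apply_chat_template).
    """
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml([
    {"role": "system", "content": "You are a logical architect. Think step-by-step."},
    {"role": "user", "content": "How many 'r's are in the word strawberry?"},
])
print(prompt)
```

The exact special tokens and whitespace are tokenizer-specific, which is why the chat template, not hand-built strings, is the safe path in practice.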