davanstrien HF Staff committed on
Commit
a2ed5f0
·
1 Parent(s): 06504ae

Add transformers continuous batching response generation script


Uses transformers native CB for GPU inference - no vLLM dependency.
Tested with Qwen3-4B (L4) and Qwen3-8B (A10G).

Files changed (2)
  1. README.md +105 -0
  2. generate-responses.py +567 -0
README.md ADDED
@@ -0,0 +1,105 @@
+ ---
+ viewer: false
+ tags:
+ - uv-script
+ - transformers
+ - continuous-batching
+ - gpu
+ - inference
+ ---
+
+ # Transformers Continuous Batching Scripts
+
+ GPU inference scripts using transformers' native continuous batching (CB). No vLLM dependency required.
+
+ ## Why transformers CB?
+
+ - **Instant new model support** - works with any model supported by transformers, including newly released architectures. No waiting for vLLM to add support.
+ - **No dependency headaches** - no vLLM, flashinfer, or custom wheel indexes. Just `transformers` + `accelerate`.
+ - **Simple HF Jobs setup** - no Docker image needed. Just `hf jobs uv run`.
+ - **~95% of vLLM throughput** - uses PagedAttention and continuous scheduling for near-vLLM performance.
+
+ ## Available Scripts
+
+ ### generate-responses.py
+
+ Generate responses for prompts in a dataset. Supports chat messages and plain text prompts.
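
With `--prompt-column`, each plain text prompt is wrapped as a single user turn before the model's chat template is applied (the helper name below is illustrative, not part of the script's API):

```python
def to_messages(prompt: str) -> list:
    """Wrap a plain text prompt as a one-turn chat, mirroring how the
    script handles --prompt-column inputs (illustrative helper)."""
    return [{"role": "user", "content": prompt}]

# The resulting structure is what tokenizer.apply_chat_template expects.
msgs = to_messages("What is continuous batching?")
```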
+
+ #### Quick Start
+
+ ```bash
+ # Local (requires GPU)
+ uv run generate-responses.py \
+   username/input-dataset \
+   username/output-dataset \
+   --prompt-column question
+
+ # HF Jobs (single GPU)
+ hf jobs uv run --flavor l4x1 -s HF_TOKEN \
+   https://huggingface.co/datasets/uv-scripts/transformers-inference/raw/main/generate-responses.py \
+   username/input-dataset \
+   username/output-dataset \
+   --prompt-column question \
+   --max-tokens 1024
+
+ # HF Jobs (larger model on a single big GPU - CB currently uses one GPU only)
+ hf jobs uv run --flavor a100-large -s HF_TOKEN \
+   https://huggingface.co/datasets/uv-scripts/transformers-inference/raw/main/generate-responses.py \
+   username/input-dataset \
+   username/output-dataset \
+   --model-id Qwen/Qwen3-30B-A3B-Instruct-2507 \
+   --messages-column messages \
+   --max-batch-tokens 2048 \
+   --max-tokens 4096
+ ```
+
+ #### Parameters
+
+ | Parameter | Default | Description |
+ |-----------|---------|-------------|
+ | `--model-id` | `Qwen/Qwen3-4B-Instruct-2507` | Any HF causal LM model |
+ | `--messages-column` | `messages` | Column with chat messages |
+ | `--prompt-column` | - | Column with plain text prompts (alternative to messages) |
+ | `--output-column` | `response` | Name for the generated response column |
+ | `--temperature` | `0.7` | Sampling temperature |
+ | `--top-p` | `0.8` | Top-p (nucleus) sampling |
+ | `--top-k` | `20` | Top-k sampling |
+ | `--max-tokens` | `4096` | Maximum tokens to generate per response |
+ | `--repetition-penalty` | `1.0` | Repetition penalty |
+ | `--max-batch-tokens` | `512` | Token budget per scheduling step (see below) |
+ | `--dtype` | `bfloat16` | Model precision (`bfloat16`, `float16`, `float32`) |
+ | `--attn-implementation` | `paged\|sdpa` | Attention backend (`paged\|sdpa` or `paged\|flash_attention_2`) |
+ | `--max-samples` | all | Limit to N samples (useful for testing) |
+ | `--hf-token` | - | HF token (or use `HF_TOKEN` env var) |
+ | `--skip-long-prompts` | `True` | Skip prompts exceeding context length |
+
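The `--skip-long-prompts` option reduces to a simple token-count filter: prompts whose encoded length exceeds the model's context window are set aside and end up with empty responses. A simplified sketch of that logic (function and variable names are illustrative):

```python
def filter_by_length(token_lists, max_len):
    """Split tokenized prompts into (valid inputs, their original indices,
    skipped (index, length) pairs) - a simplified version of the script's
    --skip-long-prompts filtering."""
    valid, indices, skipped = [], [], []
    for i, ids in enumerate(token_lists):
        if len(ids) <= max_len:
            valid.append(ids)
            indices.append(i)
        else:
            # record original position and token count for logging
            skipped.append((i, len(ids)))
    return valid, indices, skipped

valid, indices, skipped = filter_by_length([[1, 2], [1, 2, 3, 4], [5]], max_len=3)
```

The original indices are kept so generated responses can be written back to the right rows while skipped rows stay empty.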
+ #### Tuning `--max-batch-tokens`
+
+ This is the key performance parameter. It controls how many tokens the continuous batching scheduler processes per step:
+
+ - **Too low** (e.g., 128): GPU underutilized, slow throughput
+ - **Too high** (e.g., 8192): May cause out-of-memory errors
+ - **Default 512**: Conservative, works on most GPUs
+ - **Recommended for A100/H100**: 2048-4096
+ - **Recommended for L4**: 512-1024
+
+ If you hit OOM errors, reduce this value, use a smaller model, or pick a larger GPU flavor.
+
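The guidance above can be condensed into a rough starting-point heuristic keyed on GPU memory (a hypothetical helper, not part of the script; treat the result as a starting value and tune from there):

```python
def suggest_max_batch_tokens(gpu_mem_gb: float) -> int:
    """Rough starting point for --max-batch-tokens based on GPU memory,
    following the recommendations above (hypothetical heuristic)."""
    if gpu_mem_gb >= 40:  # A100/H100 class
        return 2048
    if gpu_mem_gb >= 24:  # L4/A10G class
        return 1024
    return 512            # conservative default otherwise

print(suggest_max_batch_tokens(24))  # L4: falls in the 512-1024 range
```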
+ ## Current Limitations
+
+ - **Single GPU only** - `device_map="auto"` (pipeline parallelism) doesn't work with CB's PagedAttention cache. Transformers does have tensor parallelism (`tp_plan="auto"`) for supported models, but it requires `torchrun` and is undocumented with CB. For now, use a model that fits on one GPU (e.g., 8B in bf16 on A10G/L4 with 24GB).
+ - **Text-only** - no vision-language model support yet.
+
+ ## When to use this vs vLLM
+
+ | | Transformers CB | vLLM |
+ |---|---|---|
+ | **Best for** | New/niche models, simple setup, avoiding dependency issues | Maximum throughput, production serving |
+ | **Model support** | Any transformers model, immediately | Popular models, may lag on new architectures |
+ | **Dependencies** | `transformers` + `accelerate` | `vllm` + `flashinfer` + custom indexes |
+ | **Docker image** | Not needed | `vllm/vllm-openai` recommended |
+ | **Multi-GPU** | Single GPU only (for now) | Tensor parallelism |
+ | **Performance** | ~95% of vLLM for text generation | Fastest for supported models |
+ | **VLM support** | Not yet | Yes |
+
+ **Rule of thumb**: Use transformers CB when you want simplicity and broad model support. Use vLLM when you need maximum throughput with well-supported models.
generate-responses.py ADDED
@@ -0,0 +1,567 @@
+ # /// script
+ # requires-python = ">=3.10"
+ # dependencies = [
+ #     "accelerate",
+ #     "datasets",
+ #     "huggingface-hub",
+ #     "hf-transfer",
+ #     "hf-xet",
+ #     "torch",
+ #     "tqdm",
+ #     "transformers>=5.1",
+ # ]
+ # ///
+ """
+ Generate responses for prompts in a dataset using transformers continuous batching.
+
+ Uses transformers' native continuous batching (CB) for efficient GPU inference.
+ No vLLM dependency required - works with any model supported by transformers,
+ including newly released architectures.
+
+ Example usage:
+     # Local execution
+     uv run generate-responses.py \\
+         username/input-dataset \\
+         username/output-dataset \\
+         --prompt-column question
+
+     # With custom model and sampling parameters
+     uv run generate-responses.py \\
+         username/input-dataset \\
+         username/output-dataset \\
+         --model-id meta-llama/Llama-3.1-8B-Instruct \\
+         --messages-column messages \\
+         --temperature 0.9 \\
+         --max-tokens 2048
+
+     # HF Jobs execution (see script output for full command)
+     hf jobs uv run --flavor l4x1 ...
+ """
+
+ import argparse
+ import logging
+ import os
+ import sys
+ from datetime import datetime
+ from typing import Optional
+
+ import torch
+ from datasets import load_dataset
+ from huggingface_hub import DatasetCard, get_token, login
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from transformers.generation import GenerationConfig
+
+ # Enable HF Transfer for faster downloads (hf-transfer is declared in the deps above)
+ os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
+
+ logging.basicConfig(
+     level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s"
+ )
+ logger = logging.getLogger(__name__)
+
+
+ def check_gpu_availability() -> int:
+     """Check if CUDA is available and return the number of GPUs."""
+     if not torch.cuda.is_available():
+         logger.error("CUDA is not available. This script requires a GPU.")
+         logger.error(
+             "Please run on a machine with NVIDIA GPU or use HF Jobs with GPU flavor."
+         )
+         sys.exit(1)
+
+     num_gpus = torch.cuda.device_count()
+     for i in range(num_gpus):
+         gpu_name = torch.cuda.get_device_name(i)
+         gpu_memory = torch.cuda.get_device_properties(i).total_memory / 1024**3
+         logger.info(f"GPU {i}: {gpu_name} with {gpu_memory:.1f} GB memory")
+
+     return num_gpus
+
+
+ def create_dataset_card(
+     source_dataset: str,
+     model_id: str,
+     messages_column: str,
+     prompt_column: Optional[str],
+     generation_config: GenerationConfig,
+     num_gpus: int,
+     num_examples: int,
+     generation_time: str,
+     num_skipped: int = 0,
+     attn_implementation: str = "paged|sdpa",
+ ) -> str:
+     """Create a dataset card documenting the generation process."""
+     filtering_section = ""
+     if num_skipped > 0:
+         skip_percentage = (num_skipped / num_examples) * 100
+         processed = num_examples - num_skipped
+         filtering_section = f"""
+
+ ### Filtering Statistics
+
+ - **Total Examples**: {num_examples:,}
+ - **Processed**: {processed:,} ({100 - skip_percentage:.1f}%)
+ - **Skipped (too long)**: {num_skipped:,} ({skip_percentage:.1f}%)
+
+ Note: Prompts exceeding the model's maximum context length were skipped and have empty responses."""
+
+     input_col = prompt_column if prompt_column else messages_column
+     input_type = "plain text prompts" if prompt_column else "chat messages"
+
+     return f"""---
+ tags:
+ - generated
+ - transformers
+ - continuous-batching
+ - uv-script
+ ---
+
+ # Generated Responses Dataset
+
+ This dataset contains generated responses for prompts from [{source_dataset}](https://huggingface.co/datasets/{source_dataset}).
+
+ ## Generation Details
+
+ - **Source Dataset**: [{source_dataset}](https://huggingface.co/datasets/{source_dataset})
+ - **Input Column**: `{input_col}` ({input_type})
+ - **Model**: [{model_id}](https://huggingface.co/{model_id})
+ - **Backend**: transformers continuous batching
+ - **Number of Examples**: {num_examples:,}
+ - **Generation Date**: {generation_time}{filtering_section}
+
+ ### Generation Parameters
+
+ - **Temperature**: {generation_config.temperature}
+ - **Top P**: {generation_config.top_p}
+ - **Top K**: {generation_config.top_k}
+ - **Max New Tokens**: {generation_config.max_new_tokens}
+ - **Max Batch Tokens**: {generation_config.max_batch_tokens}
+ - **Repetition Penalty**: {generation_config.repetition_penalty}
+
+ ### Hardware Configuration
+
+ - **GPUs**: {num_gpus}
+ - **Attention Implementation**: {attn_implementation}
+
+ ## Dataset Structure
+
+ The dataset contains all columns from the source dataset plus:
+ - `response`: The generated response from the model
+
+ ## Generation Script
+
+ Generated using the transformers continuous batching script from [uv-scripts/transformers-inference](https://huggingface.co/datasets/uv-scripts/transformers-inference).
+
+ To reproduce this generation:
+
+ ```bash
+ uv run https://huggingface.co/datasets/uv-scripts/transformers-inference/raw/main/generate-responses.py \\
+     {source_dataset} \\
+     <output-dataset> \\
+     --model-id {model_id} \\
+     {"--prompt-column " + prompt_column if prompt_column else "--messages-column " + messages_column} \\
+     --temperature {generation_config.temperature} \\
+     --top-p {generation_config.top_p} \\
+     --top-k {generation_config.top_k} \\
+     --max-tokens {generation_config.max_new_tokens}
+ ```
+ """
+
+
+ def main(
+     src_dataset_hub_id: str,
+     output_dataset_hub_id: str,
+     model_id: str = "Qwen/Qwen3-4B-Instruct-2507",
+     messages_column: str = "messages",
+     prompt_column: Optional[str] = None,
+     output_column: str = "response",
+     temperature: float = 0.7,
+     top_p: float = 0.8,
+     top_k: int = 20,
+     max_tokens: int = 4096,
+     repetition_penalty: float = 1.0,
+     max_batch_tokens: int = 512,
+     dtype: str = "bfloat16",
+     attn_implementation: str = "paged|sdpa",
+     skip_long_prompts: bool = True,
+     max_samples: Optional[int] = None,
+     hf_token: Optional[str] = None,
+ ):
+     generation_start_time = datetime.now().isoformat()
+
+     # GPU check
+     num_gpus = check_gpu_availability()
+
+     # Authentication
+     HF_TOKEN = hf_token or os.environ.get("HF_TOKEN") or get_token()
+     if not HF_TOKEN:
+         logger.error("No HuggingFace token found. Please provide token via:")
+         logger.error("  1. --hf-token argument")
+         logger.error("  2. HF_TOKEN environment variable")
+         logger.error("  3. Run 'huggingface-cli login' or use login() in Python")
+         sys.exit(1)
+
+     logger.info("HuggingFace token found, authenticating...")
+     login(token=HF_TOKEN)
+
+     # Resolve dtype
+     torch_dtype = getattr(torch, dtype, None)
+     if torch_dtype is None:
+         logger.error(f"Unknown dtype: {dtype}. Use 'bfloat16', 'float16', or 'float32'.")
+         sys.exit(1)
+
+     # Load model with continuous batching support
+     # Note: CB currently requires single-GPU. Multi-GPU device_map="auto" causes
+     # device mismatch errors with PagedAttention cache. Use a model that fits on one GPU.
+     if num_gpus > 1:
+         logger.warning(
+             "Multiple GPUs detected but transformers CB currently works on single GPU only. "
+             "Using cuda:0. Choose a model that fits on one GPU."
+         )
+     device_map = "cuda"
+     logger.info(
+         f"Loading model: {model_id} (dtype={dtype}, attn={attn_implementation}, device_map={device_map})"
+     )
+
+     model = AutoModelForCausalLM.from_pretrained(
+         model_id,
+         attn_implementation=attn_implementation,
+         device_map=device_map,
+         dtype=torch_dtype,
+     )
+
+     # Load tokenizer
+     logger.info("Loading tokenizer...")
+     tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")
+
+     # Create generation config
+     # Greedy decoding when temperature is 0 or all sampling knobs are neutral
+     do_sample = temperature > 0 and (temperature != 1.0 or top_p != 1.0 or top_k != 0)
+     generation_config = GenerationConfig(
+         max_new_tokens=max_tokens,
+         max_batch_tokens=max_batch_tokens,
+         do_sample=do_sample,
+         temperature=temperature,
+         top_p=top_p,
+         top_k=top_k,
+         repetition_penalty=repetition_penalty,
+         eos_token_id=tokenizer.eos_token_id,
+         pad_token_id=tokenizer.pad_token_id,
+     )
+
+     # Load dataset
+     logger.info(f"Loading dataset: {src_dataset_hub_id}")
+     dataset = load_dataset(src_dataset_hub_id, split="train")
+
+     if max_samples is not None and max_samples < len(dataset):
+         logger.info(f"Limiting dataset to {max_samples} samples")
+         dataset = dataset.select(range(max_samples))
+
+     total_examples = len(dataset)
+     logger.info(f"Dataset loaded with {total_examples:,} examples")
+
+     # Validate column
+     if prompt_column:
+         if prompt_column not in dataset.column_names:
+             logger.error(
+                 f"Column '{prompt_column}' not found. Available columns: {dataset.column_names}"
+             )
+             sys.exit(1)
+         logger.info(f"Using prompt column: '{prompt_column}'")
+         use_messages = False
+     else:
+         if messages_column not in dataset.column_names:
+             logger.error(
+                 f"Column '{messages_column}' not found. Available columns: {dataset.column_names}"
+             )
+             sys.exit(1)
+         logger.info(f"Using messages column: '{messages_column}'")
+         use_messages = True
+
+     # Get model's max context length for filtering (fall back to tokenizer limit
+     # for configs without max_position_embeddings)
+     effective_max_len = getattr(
+         model.config, "max_position_embeddings", tokenizer.model_max_length
+     )
+     logger.info(f"Model max context length: {effective_max_len}")
+
+     # Prepare prompts and tokenize
+     logger.info("Preparing and tokenizing prompts...")
+     valid_input_ids = []
+     valid_indices = []
+     skipped_info = []
+
+     for i, example in enumerate(dataset):
+         if use_messages:
+             messages = example[messages_column]
+             prompt = tokenizer.apply_chat_template(
+                 messages, tokenize=False, add_generation_prompt=True
+             )
+         else:
+             user_prompt = example[prompt_column]
+             messages = [{"role": "user", "content": user_prompt}]
+             prompt = tokenizer.apply_chat_template(
+                 messages, tokenize=False, add_generation_prompt=True
+             )
+
+         input_ids = tokenizer.encode(prompt)
+
+         if skip_long_prompts:
+             if len(input_ids) <= effective_max_len:
+                 valid_input_ids.append(input_ids)
+                 valid_indices.append(i)
+             else:
+                 skipped_info.append((i, len(input_ids)))
+         else:
+             valid_input_ids.append(input_ids)
+             valid_indices.append(i)
+
+     # Log filtering results
+     if skip_long_prompts and skipped_info:
+         logger.warning(
+             f"Skipped {len(skipped_info)} prompts exceeding context length ({effective_max_len} tokens)"
+         )
+         for prompt_idx, token_count in skipped_info[:10]:
+             logger.info(
+                 f"  - Example {prompt_idx}: {token_count} tokens (exceeds by {token_count - effective_max_len})"
+             )
+         if len(skipped_info) > 10:
+             logger.info(f"  ... and {len(skipped_info) - 10} more")
+
+         skip_percentage = (len(skipped_info) / total_examples) * 100
+         if skip_percentage > 10:
+             logger.warning(f"WARNING: {skip_percentage:.1f}% of prompts were skipped!")
+
+     if not valid_input_ids:
+         logger.error("No valid prompts to process after filtering!")
+         sys.exit(1)
+
+     # Generate responses using continuous batching
+     logger.info(
+         f"Starting generation for {len(valid_input_ids):,} prompts using continuous batching..."
+     )
+     logger.info(f"max_batch_tokens={max_batch_tokens}, max_new_tokens={max_tokens}")
+
+     batch_outputs = model.generate_batch(
+         inputs=valid_input_ids,
+         generation_config=generation_config,
+         progress_bar=True,
+     )
+
+     # Extract generated text
+     logger.info("Extracting generated responses...")
+     responses = [""] * total_examples
+
+     for request_id, output in batch_outputs.items():
+         # request_id is formatted as "req_0", "req_1", etc.
+         idx = int(request_id.split("_", 1)[1])
+         original_idx = valid_indices[idx]
+         generated_text = tokenizer.decode(
+             output.generated_tokens, skip_special_tokens=True
+         )
+         responses[original_idx] = generated_text.strip()
+
+     # Count non-empty responses
+     non_empty = sum(1 for r in responses if r)
+     logger.info(f"Generated {non_empty:,} non-empty responses out of {total_examples:,} total")
+
+     # Add responses to dataset
+     logger.info("Adding responses to dataset...")
+     dataset = dataset.add_column(output_column, responses)
+
+     # Create dataset card
+     logger.info("Creating dataset card...")
+     card_content = create_dataset_card(
+         source_dataset=src_dataset_hub_id,
+         model_id=model_id,
+         messages_column=messages_column,
+         prompt_column=prompt_column,
+         generation_config=generation_config,
+         num_gpus=num_gpus,
+         num_examples=total_examples,
+         generation_time=generation_start_time,
+         num_skipped=len(skipped_info) if skip_long_prompts else 0,
+         attn_implementation=attn_implementation,
+     )
+
+     # Push to Hub
+     logger.info(f"Pushing dataset to: {output_dataset_hub_id}")
+     dataset.push_to_hub(output_dataset_hub_id, token=HF_TOKEN)
+
+     card = DatasetCard(card_content)
+     card.push_to_hub(output_dataset_hub_id, token=HF_TOKEN)
+
+     logger.info("Generation complete!")
+     logger.info(
+         f"Dataset available at: https://huggingface.co/datasets/{output_dataset_hub_id}"
+     )
+
+
+ if __name__ == "__main__":
+     if len(sys.argv) > 1:
+         parser = argparse.ArgumentParser(
+             description="Generate responses using transformers continuous batching",
+             formatter_class=argparse.RawDescriptionHelpFormatter,
+             epilog="""
+ Examples:
+   # Basic usage with default Qwen model
+   uv run generate-responses.py input-dataset output-dataset \\
+       --prompt-column question
+
+   # With custom model and parameters
+   uv run generate-responses.py input-dataset output-dataset \\
+       --model-id meta-llama/Llama-3.1-8B-Instruct \\
+       --messages-column messages \\
+       --temperature 0.9 \\
+       --max-tokens 2048
+
+   # Increase batch token budget for better GPU utilization
+   uv run generate-responses.py input-dataset output-dataset \\
+       --prompt-column text \\
+       --max-batch-tokens 2048
+ """,
+         )
+
+         parser.add_argument(
+             "src_dataset_hub_id",
+             help="Input dataset on Hugging Face Hub (e.g., username/dataset-name)",
+         )
+         parser.add_argument(
+             "output_dataset_hub_id",
+             help="Output dataset name on Hugging Face Hub",
+         )
+         parser.add_argument(
+             "--model-id",
+             type=str,
+             default="Qwen/Qwen3-4B-Instruct-2507",
+             help="Model to use for generation (default: Qwen3-4B-Instruct-2507)",
+         )
+         parser.add_argument(
+             "--messages-column",
+             type=str,
+             default="messages",
+             help="Column containing chat messages (default: messages)",
+         )
+         parser.add_argument(
+             "--prompt-column",
+             type=str,
+             help="Column containing plain text prompts (alternative to --messages-column)",
+         )
+         parser.add_argument(
+             "--output-column",
+             type=str,
+             default="response",
+             help="Column name for generated responses (default: response)",
+         )
+         parser.add_argument(
+             "--max-samples",
+             type=int,
+             help="Maximum number of samples to process (default: all)",
+         )
+         parser.add_argument(
+             "--temperature",
+             type=float,
+             default=0.7,
+             help="Sampling temperature (default: 0.7)",
+         )
+         parser.add_argument(
+             "--top-p",
+             type=float,
+             default=0.8,
+             help="Top-p sampling parameter (default: 0.8)",
+         )
+         parser.add_argument(
+             "--top-k",
+             type=int,
+             default=20,
+             help="Top-k sampling parameter (default: 20)",
+         )
+         parser.add_argument(
+             "--max-tokens",
+             type=int,
+             default=4096,
+             help="Maximum tokens to generate (default: 4096)",
+         )
+         parser.add_argument(
+             "--repetition-penalty",
+             type=float,
+             default=1.0,
+             help="Repetition penalty (default: 1.0)",
+         )
+         parser.add_argument(
+             "--max-batch-tokens",
+             type=int,
+             default=512,
+             help="Token budget per batch for continuous batching scheduler (default: 512). "
+             "Increase for better GPU utilization on large GPUs (e.g., 2048-4096 for A100/H100).",
+         )
+         parser.add_argument(
+             "--dtype",
+             type=str,
+             default="bfloat16",
+             choices=["bfloat16", "float16", "float32"],
+             help="Model dtype (default: bfloat16)",
+         )
+         parser.add_argument(
+             "--attn-implementation",
+             type=str,
+             default="paged|sdpa",
+             help="Attention implementation (default: paged|sdpa). "
+             "Use 'paged|flash_attention_2' if flash-attn is installed.",
+         )
+         parser.add_argument(
+             "--hf-token",
+             type=str,
+             help="Hugging Face token (can also use HF_TOKEN env var)",
+         )
+         parser.add_argument(
+             "--skip-long-prompts",
+             action="store_true",
+             default=True,
+             help="Skip prompts exceeding context length (default: True)",
+         )
+         parser.add_argument(
+             "--no-skip-long-prompts",
+             dest="skip_long_prompts",
+             action="store_false",
+             help="Fail on prompts that exceed context length",
+         )
+
+         args = parser.parse_args()
+
+         main(
+             src_dataset_hub_id=args.src_dataset_hub_id,
+             output_dataset_hub_id=args.output_dataset_hub_id,
+             model_id=args.model_id,
+             messages_column=args.messages_column,
+             prompt_column=args.prompt_column,
+             output_column=args.output_column,
+             temperature=args.temperature,
+             top_p=args.top_p,
+             top_k=args.top_k,
+             max_tokens=args.max_tokens,
+             repetition_penalty=args.repetition_penalty,
+             max_batch_tokens=args.max_batch_tokens,
+             dtype=args.dtype,
+             attn_implementation=args.attn_implementation,
+             skip_long_prompts=args.skip_long_prompts,
+             max_samples=args.max_samples,
+             hf_token=args.hf_token,
+         )
+     else:
+         print("""
+ Transformers Continuous Batching - Response Generation
+ ======================================================
+
+ This script requires arguments. For usage information:
+   uv run generate-responses.py --help
+
+ Why transformers CB instead of vLLM?
+ - Works with ANY model supported by transformers (new models immediately!)
+ - No vLLM/flashinfer dependency issues
+ - Simpler setup - no custom Docker images or wheel indexes needed
+ - ~95% of vLLM throughput with PagedAttention and continuous scheduling
+
+ Example HF Jobs command:
+   hf jobs uv run \\
+     --flavor l4x1 \\
+     -s HF_TOKEN \\
+     https://huggingface.co/datasets/uv-scripts/transformers-inference/raw/main/generate-responses.py \\
+     username/input-dataset \\
+     username/output-dataset \\
+     --prompt-column question \\
+     --model-id Qwen/Qwen3-4B-Instruct-2507 \\
+     --temperature 0.7 \\
+     --max-tokens 4096
+ """)