repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/lerobot | 2,543 | Different finetune loss given policy.type=pi0 / policy.path=lerobot/pi0_base. What is the difference? | Hi, I have two different configurations:
1. ` --dataset.repo_id=BBBBBBob/libero_goal_lerobot \
--dataset.root=/home/j84403411/data/libero/libero_goal_lerobot \
--policy.path=lerobot/pi0_base \
--policy.push_to_hub=false \
--policy.use_proprio=true \
--output_dir=/home/j84403411/checkpoint/libero/pi0/libero_goal_proprio \
--policy.dtype=bfloat16 \
--steps=40_000 \
--batch_size=16 \
--rename_map='{"observation.images.image":"observation.images.base_0_rgb", "observation.images.wrist_image":"observation.images.left_wrist_0_rgb"}' \ `
and
2.
` --dataset.repo_id=BBBBBBob/libero_goal_lerobot \
--dataset.root=/home/j84403411/data/libero/libero_goal_lerobot \
--policy.type=pi0 \
--policy.pretrained_path=lerobot/pi0_base \
--policy.push_to_hub=false \
--policy.use_proprio=true \
--output_dir=/home/j84403411/checkpoint/libero/pi0/libero_goal_proprio \
--policy.dtype=bfloat16 \
--steps=40_000 \
--batch_size=16 \
--policy.input_features='{"observation.state": {"type": "STATE", "shape": [8]},
"observation.images.wrist_image": {"type": "VISUAL", "shape": [3, 256, 256]},
"observation.images.image": {"type": "VISUAL", "shape": [3, 256, 256]},
}' \
--policy.output_features='{"action": {"type": "ACTION", "shape": [7]}}' \ `
The training loss from the second configuration is 10 times higher than from the first. What causes the difference? Do you know whether different checkpoints are loaded in this case? I appreciate your help! | https://github.com/huggingface/lerobot/issues/2543 | closed | [] | 2025-11-28T12:34:38Z | 2025-12-01T11:25:17Z | null | BBBBBBob |
huggingface/transformers.js | 1,467 | Missing the following inputs: input_points, input_labels (or input_boxes) | ### Question
Thanks for your excellent work!
I just wrote some test code for the SlimSAM model powered by transformers.js, referring to this example (with some improvements): https://github.com/huggingface/transformers.js-examples/blob/main/segment-anything-webgpu/index.js
my code for `decode` method:
```js
// Decode segmentation
async function decode() {
if (!imageEmbeddings || isDecoding || isEncoding) return;
if (isDecoding) {
decodePending = true;
return;
}
isDecoding = true;
try {
let input_points = null;
let input_labels = null;
let input_boxes = null;
let outputs = null;
if (promptMode == "point" && points.length > 0) {
const reshaped = imageprocessed.reshaped_input_sizes[0]; // [H, W]
const scaledPoints = points.map(p => [
p.x * reshaped[1],
p.y * reshaped[0]
]);
const labels = points.map(p => BigInt(p.label));
input_points = new Tensor("float32", scaledPoints.flat(), [1, 1, points.length, 2]);
input_labels = new Tensor("int64", labels, [1, 1, points.length]);
// Fallback: if no prompts, skip
if (!input_points) return;
// Run model with point mode
outputs = await model({
...imageEmbeddings,
input_points: input_points,
input_labels: input_labels,
input_boxes: null
});
}
if (promptMode == "box" && box) {
const reshaped = imageprocessed.reshaped_input_sizes[0];
const [x1, y1, x2, y2] = [
box.x1 * reshaped[1],
box.y1 * reshaped[0],
box.x2 * reshaped[1],
box.y2 * reshaped[0]
];
input_boxes = new Tensor("float32", [x1, y1, x2, y2], [1, 1, 4]);
// Fallback: if no prompts, skip
if (!input_boxes) return;
// Run model with box mode
outputs = await model({
...imageEmbeddings,
input_points: null,
input_labels: null,
input_boxes: input_boxes
});
}
// Post-process
const masks = await processor.post_process_masks(
outputs.pred_masks,
imageprocessed.original_sizes,
imageprocessed.reshaped_input_sizes
);
const scores = outputs.iou_scores.data;
updateMask(masks[0], scores); // masks[0] is [3, H, W]
} catch (e) {
console.error("Decode error:", e);
statusEl.textContent = "❌ Segmentation failed.";
} finally {
isDecoding = false;
if (decodePending) {
decodePending = false;
decode();
}
}
}
```
It supports 2 prompt modes, `point` & `box`, selected by the user via UI elements (HTML not provided).
But an error is printed every time the `decode` method runs (at the line calling `outputs = await model(...)`); the error message is:
with box prompt mode:
`Error: An error occurred during model execution: "Missing the following inputs: input_points, input_labels.`
with point prompt mode:
`Error: An error occurred during model execution: "Missing the following inputs: input_boxes.`
Should I pass all three parameters (input_points/input_labels/input_boxes) simultaneously, regardless of which prompt mode I'm using? How could I support point & box at the same time? I found no demo code for this on the internet. Thanks!
```
version: transformers.js 3.5.0 from https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.5.0
os: Windows 10
chrome: 142
model: Xenova/slimsam-77-uniform
``` | https://github.com/huggingface/transformers.js/issues/1467 | closed | [
"question"
] | 2025-11-28T10:01:04Z | 2025-12-01T04:04:59Z | null | sherlockchou86 |
vllm-project/vllm | 29,643 | [Usage]: Enabling Tool call in the Python SDK | ### Your current environment
Hi Team,
I am currently exploring VLLM to enable tool calling, and I need some support with this. It would be very helpful if you could provide the corresponding Python code.
What I’m trying to achieve is to configure the Python package with the same settings that I use when starting the VLLM server. The configuration I’m using is:
vllm serve DeepSeek-R1-0528-Qwen3-8B \
--served-model-name deepseek \
--gpu_memory_utilization 0.5 \
--max_num_seqs 20 \
--max_model_len 10000 \
--enable-auto-tool-choice \
--tool-call-parser deepseek_v3 \
--chat-template tool_chat_template_deepseekr1.jinja \
--port 5050 \
--max_num_batched_tokens 5000
I need to replicate this exact configuration in Python.
Your support would be greatly appreciated. Please respond at your earliest convenience.
If you want, I can also write the **Python code equivalent** for these VLLM configurations.
Best Regards
Madan
### How would you like to use vllm
I want to use vLLM to serve a model with tool-calling support enabled. Specifically, I need to run the model with the same configuration parameters that I currently use when launching the vLLM server from the command line. These settings include GPU memory utilization, maximum sequence limits, tool-calling options, a custom tool-call parser, and a custom chat template.
My goal is to reproduce the following server configuration within a Python environment using the vLLM Python API:
vllm serve DeepSeek-R1-0528-Qwen3-8B \
--served-model-name deepseek \
--gpu_memory_utilization 0.5 \
--max_num_seqs 20 \
--max_model_len 10000 \
--enable-auto-tool-choice \
--tool-call-parser deepseek_v3 \
--chat-template tool_chat_template_deepseekr1.jinja \
--port 5050 \
--max_num_batched_tokens 5000
`
In short, I need Python code that sets these exact configurations so I can run vLLM programmatically with tool calling enabled.
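As a hedged sketch (not an official vLLM API): the tool-calling flags (`--enable-auto-tool-choice`, `--tool-call-parser`) belong to the OpenAI-compatible server rather than the offline `LLM` class, so one way to reproduce the exact configuration from Python is to build the same CLI invocation and launch it as a subprocess, assuming the `vllm` executable is on PATH:
```python
import subprocess

def build_serve_command(model: str, port: int = 5050) -> list[str]:
    # Flags copied verbatim from the `vllm serve` command above.
    return [
        "vllm", "serve", model,
        "--served-model-name", "deepseek",
        "--gpu_memory_utilization", "0.5",
        "--max_num_seqs", "20",
        "--max_model_len", "10000",
        "--enable-auto-tool-choice",
        "--tool-call-parser", "deepseek_v3",
        "--chat-template", "tool_chat_template_deepseekr1.jinja",
        "--port", str(port),
        "--max_num_batched_tokens", "5000",
    ]

cmd = build_serve_command("DeepSeek-R1-0528-Qwen3-8B")
# server = subprocess.Popen(cmd)  # uncomment to actually start the server
```
Requests can then be sent to `http://localhost:5050/v1/chat/completions` with any OpenAI-compatible client.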
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29643 | open | [
"usage"
] | 2025-11-28T04:39:47Z | 2025-12-01T14:54:47Z | 2 | Madan1215 |
vllm-project/vllm | 29,641 | [Bug]: Max Tokens not being honoured in Chat Completions for GPTOSS model | ### Your current environment
It seems that in the latest version of vLLM (0.11+), Chat Completions has stopped honouring `max_tokens` with the GPT-OSS 120B model. The request payload below has stopped working with `max_tokens`; earlier, the same payload would produce output up to the `max_tokens` limit.
Interestingly, if you look at the `usage` tokens, `completion_tokens` shows 500 but the output is BLANK.
```json
{
"messages": [
{
"role": "user",
"content": "What is the role of AI in medicine?"
}
],
"model": "openai/gpt-oss-120b",
"max_tokens": 500,
"reasoning": {"effort": "low"},
"stream": false
}
```
We get BLANK output, even though `usage` shows a completion token count matching `max_tokens`:
```json
{
"id": "chatcmpl-c71e934ac0b74bd4b8f99fe9b5516ea3",
"object": "chat.completion",
"created": 1764300020,
"model": "openai/gpt-oss-120b",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": null,
"refusal": null,
"annotations": null,
"audio": null,
"function_call": null,
"tool_calls": [],
"reasoning": "Need to answer.",
"reasoning_content": "Need to answer."
},
"logprobs": null,
"finish_reason": "length",
"stop_reason": null,
"token_ids": null
}
],
"service_tier": null,
"system_fingerprint": null,
"usage": {
"prompt_tokens": 78,
"total_tokens": 578,
"completion_tokens": 500,
"prompt_tokens_details": null
},
"prompt_logprobs": null,
"prompt_token_ids": null,
"kv_transfer_params": null
}
```
When you remove `max_tokens`, we get the output, and `usage` shows `completion_tokens` of around 1600 tokens.
It seems that starting from vLLM 0.11+, auto-truncation via `max_tokens` has stopped working:
```json
{
"id": "chatcmpl-61b60144d43147e2b007158712ad4920",
"object": "chat.completion",
"created": 1764300423,
"model": "openai/gpt-oss-120b",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "**The role of AI in medicine is expanding rapidly and touches virtually every aspect of healthcare—from the way doctors diagnose patients to how hospitals run their operations.** Below is a structured overview that covers the major domains, concrete examples, benefits, challenges, and future directions.\n\n---\n\n## 1. Clinical Care\n\n| Sub‑area | What AI Does | Real‑World Examples | Benefits |\n|----------|--------------|---------------------|----------|\n| **Diagnostics** | Image analysis, pattern recognition, risk stratification | • Radiology: Google DeepMind’s AI detects lung cancer on CT scans with >95% accuracy.<br>• Dermatology: FDA‑cleared apps (e.g., SkinVision) classify skin lesions from photos.<br>• Pathology: Paige.ai assists in detecting prostate cancer in biopsy slides. | Faster, more consistent readings; can catch subtle findings that human eyes miss. |\n| **Predictive Analytics** | Forecast disease onset, complications, readmission risk | • Sepsis prediction models (e.g., Epic Sepsis Model) trigger alerts hours before clinical signs.<br>• Cardiovascular risk calculators incorporating genomics and wearables. | Enables proactive interventions, reduces morbidity and cost. |\n| **Treatment Planning** | Decision support, dose optimisation, drug selection | • IBM Watson for Oncology (clinical trial matching).<br>• Radiation oncology: AI‑driven dose‑painting to spare healthy tissue.<br>• Pharmacogenomics: AI predicts drug‑gene interactions. | Personalises therapy, improves outcomes, reduces adverse events. |\n| **Robotics & Minimally Invasive Surgery** | Real‑time image guidance, autonomous suturing, task automation | • Da Vinci Surgical System (augmented with AI for instrument tracking).<br>• VERDICT AI for autonomous suturing in animal models. | Increases precision, reduces surgeon fatigue, shortens recovery. |\n\n---\n\n## 2. 
Patient‑Facing Applications\n\n| Application | Description | Example |\n|-------------|-------------|---------|\n| **Virtual Assistants & Chatbots** | Symptom triage, medication reminders, mental‑health chat | • Babylon Health (AI‑driven triage).<br>• Woebot (CBT‑based mental‑health chatbot). |\n| **Telemedicine Enhancements** | Real‑time vitals extraction from video, automated note‑taking | • KardiaMobile ECG integration with AI‑based arrhythmia detection. |\n| **Wearables & Remote Monitoring** | Continuous data streams analysed for early alerts | • Apple Watch ECG + AI arrhythmia detection; Fitbit heart‑rate trend alerts. |\n\n---\n\n## 3. Operational & Administrative Efficiency\n\n| Domain | AI Functions | Example | | https://github.com/vllm-project/vllm/issues/29641 | closed | [
"bug"
] | 2025-11-28T03:39:34Z | 2025-12-21T02:39:32Z | 16 | soodrohit |
huggingface/transformers | 42,464 | Add SAM 3D Objects Encoder | ### Model description
## Model Description
SAM 3D Objects is Meta AI's foundation model for 3D object reconstruction from single images. I'm proposing to add the **encoder component** (DINOv2-based Vision Transformer) to Transformers.
**Scope**: Encoder only, not the full 3D generation pipeline (which includes Gaussian Splatting/Mesh decoders better suited for Diffusers).
## Open source status
- [x] The model implementation available
- [x] The model weights are available
## Provide useful links for the implementation
- **Model Card**: https://huggingface.co/facebook/sam-3d-objects
- **Paper**: https://arxiv.org/abs/2511.16624
- **Original Repository**: https://github.com/facebookresearch/sam-3d-objects
- **Blog Post**: https://ai.meta.com/blog/sam-3d/
## Implementation Progress
I have already implemented this model and it's ready for review:
✅ **Implementation Complete:**
- `Sam3DObjectsEncoderConfig` - Configuration with DINO variant support
- `Sam3DObjectsEncoder` - Main encoder model
- `Sam3DObjectsEncoderForMasks` - Variant for mask encoding
- `Sam3DObjectsImageProcessor` - Image preprocessing
- Comprehensive test suite: **28/28 tests passing**
- Full documentation
**Test Results:**
```
collected 29 items
28 passed, 1 skipped in 4.92s
```
**Example Usage:**
```python
from transformers.models.sam3d_objects import (
Sam3DObjectsEncoder,
Sam3DObjectsEncoderConfig,
Sam3DObjectsImageProcessor,
)
config = Sam3DObjectsEncoderConfig.from_dino_config("dinov2_vitl14")
model = Sam3DObjectsEncoder(config)
processor = Sam3DObjectsImageProcessor()
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
embeddings = outputs.last_hidden_state
```
## Questions
1. Is there interest in adding the SAM 3D Objects Encoder to Transformers?
2. Should this be limited to the encoder component (my recommendation)?
3. Should I submit a PR, or are there any requirements I should address first?
## Additional Context
- The encoder is based on DINOv2 and fits naturally in Transformers
- Full 3D generation pipeline would be better suited for Diffusers
- Model is gated on Hub (requires license acceptance)
- Implementation follows Transformers patterns and guidelines
I'm ready to submit a PR and address any feedback.
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
## Links
- **Model Card**: https://huggingface.co/facebook/sam-3d-objects
- **Paper**: https://arxiv.org/abs/2511.16624 (SAM 3D: 3Dfy Anything in Images)
- **Original Repository**: https://github.com/facebookresearch/sam-3d-objects
- **Blog Post**: https://ai.meta.com/blog/sam-3d/
- **Project Page**: https://ai.meta.com/sam3d/
## Authors
**SAM 3D Team** from Meta AI
For the complete author list and contributions, see:
- [ArXiv Paper](https://arxiv.org/abs/2511.16624)
- [Original Repository](https://github.com/facebookresearch/sam-3d-objects)
*Note: This is a large collaborative project with many contributors from Meta Superintelligence Labs.*
## Implementation Details
**Model Type**: Vision Encoder (DINOv2-based)
**Architecture**: Vision Transformer (ViT)
**Variants Supported**:
- ViT-S/14 (384 dim)
- ViT-B/14 (768 dim)
- ViT-L/14 (1024 dim)
- ViT-G/14 (1536 dim)
**Input**: RGB images (224x224 or 518x518)
**Output**: Visual embeddings for 3D generation tasks
**License**: SAM License (gated model on HuggingFace Hub) | https://github.com/huggingface/transformers/issues/42464 | open | [
"New model"
] | 2025-11-27T19:48:28Z | 2025-12-05T10:32:33Z | 1 | Aznix07 |
pytorch/pytorch | 169,175 | Regarding this issue, how can I upgrade or replace the cuDNN version built into my current PyTorch installation? | ### 🚀 The feature, motivation and pitch
Significant Memory Regression in F.conv3d with bfloat16 Inputs in PyTorch 2.9.0 (#166643): this release provides a workaround for this issue. If you are impacted, please install the nvidia-cudnn package version 9.15+ from PyPI (#166480) (#167111).
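Where the release note points at the PyPI cuDNN package, the upgrade can be sketched as follows — a hedged example assuming a pip-installed, CUDA 12 build of PyTorch (which pulls cuDNN from the `nvidia-cudnn-cu12` PyPI package; the exact package name depends on your CUDA version):
```shell
# Check which cuDNN version PyTorch currently loads
python -c "import torch; print(torch.backends.cudnn.version())"

# Upgrade the pip-installed cuDNN to 9.15+ (CUDA 12 builds)
pip install --upgrade "nvidia-cudnn-cu12>=9.15"

# Re-check that the new version is picked up
python -c "import torch; print(torch.backends.cudnn.version())"
```
Note: for conda or source builds, cuDNN comes from elsewhere, so this pip upgrade may not apply.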
### Alternatives
_No response_
### Additional context
_No response_ | https://github.com/pytorch/pytorch/issues/169175 | closed | [] | 2025-11-27T09:32:00Z | 2025-11-27T20:19:07Z | 2 | saberrroool |
pytorch/pytorch | 169,174 | Does torch.masked_select preserve the original order of the selected elements? | There is the following issue on this page: https://docs.pytorch.org/docs/stable/generated/torch.masked_select.html
Does torch.masked_select preserve the original order of the selected elements?
```python
import numpy as np
import torch

mask = torch.from_numpy(np.random.uniform(0, 1, 1234567) > 0.5)
idx = torch.arange(len(mask))
select = idx.masked_select(mask)
assert (select == torch.sort(select)[0]).all()
``` | https://github.com/pytorch/pytorch/issues/169174 | closed | [] | 2025-11-27T09:26:45Z | 2025-11-30T12:12:18Z | 0 | wanglin03 |
vllm-project/vllm | 29,584 | [Usage]: Can KV Cache be disabled in non-autoregressive generation tasks? | ### Your current environment
```text
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : version 3.28.3
Libc version : glibc-2.39
==============================
PyTorch Info
==============================
PyTorch version : 2.9.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform : Linux-6.8.0-87-generic-x86_64-with-glibc2.39
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : Could not collect
CUDA_MODULE_LOADING set to :
GPU models and configuration :
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version : 575.57.08
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
vLLM Info
==============================
ROCM Version : Could not collect
vLLM Version : 0.11.2
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled
GPU Topology:
GPU0 GPU1 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X SYS 0-23,48-71 0 N/A
GPU1 SYS X 24-47,72-95 1 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
```
### How would you like to use vllm
Hello vLLM team,
Currently, vLLM (v0.11.2) enables KV cache for certain LLM-based pooling and reranking models, such as the Qwen3-Embedding series, even when `--no-enable-chunked-prefill` and `--no-enable-prefix-caching` are set. This leads to unnecessary GPU memory usage.
Would it be possible to disable KV cache for pooling and reranking models under these conditions?
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29584 | open | [
"usage"
] | 2025-11-27T05:30:08Z | 2025-12-05T02:40:28Z | 5 | GitEventhandler |
vllm-project/vllm | 29,574 | [Performance]: Using vLLM to accelerate VLM models, does the vision encoding part currently support parallel processing, or is it still being processed serially? | ### Proposal to improve performance
I found that currently, images of different sizes are processed sequentially, which significantly slows down the processing speed. How can we adapt to parallel processing? Should we resize or pad all images to the same size for batch processing, or can we run multiple encoder models in parallel? Thank you.
### Report of performance regression
_No response_
### Misc discussion on performance
_No response_
### Your current environment (if you think it is necessary)
```text
The output of `python collect_env.py`
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29574 | open | [
"performance"
] | 2025-11-27T03:51:36Z | 2025-11-27T10:54:09Z | 2 | NewZxy |
pytorch/pytorch | 169,160 | Is there any way to make pinned CPU tensors released back to the OS immediately | ### 🐛 Describe the bug
The pinned CPU tensors can't be released back to the OS immediately.
```python
import torch
import gc
import ctypes
import psutil
import os
def get_memory_usage():
"""Return current process RSS memory usage in MB."""
process = psutil.Process(os.getpid())
return process.memory_info().rss / (1024 * 1024)
def trim_memory():
"""Attempt to release unused memory back to the OS using malloc_trim."""
libc = ctypes.CDLL("libc.so.6")
libc.malloc_trim(0)
# Initial memory usage
print(f"[Before allocation] Memory usage: {get_memory_usage():.2f} MB")
# Allocate 1 GiB of pinned memory on CPU
x = torch.empty(1024 * 1024 * 1024, dtype=torch.uint8, device="cpu", pin_memory=True)
print(f"[After allocation] Memory usage: {get_memory_usage():.2f} MB")
# Delete the tensor
del x
# Run garbage collection
gc.collect()
# Try to trim memory
trim_memory()
print(f"[After del + gc + malloc_trim] Memory usage: {get_memory_usage():.2f} MB")
```
### Versions
PyTorch version: 2.7.0a0+7c8ec84dab.nv25.03
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.5.0-1ubuntu1~24.04) 11.5.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.10.134-008.18.kangaroo.al8.x86_64-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H20
GPU 1: NVIDIA H20
GPU 2: NVIDIA H20
GPU 3: NVIDIA H20
GPU 4: NVIDIA H20
GPU 5: NVIDIA H20
GPU 6: NVIDIA H20
GPU 7: NVIDIA H20
Nvidia driver version: 550.54.15
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 160
On-line CPU(s) list: 0-159
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Processor
CPU family: 6
Model: 143
Thread(s) per core: 1
Core(s) per socket: 160
Socket(s): 1
Stepping: 8
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd avx512vbmi umip pku waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3.8 MiB (80 instances)
L1i cache: 2.5 MiB (80 instances)
L2 cache: 160 MiB (80 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-79
NUMA node1 CPU(s): 80-159
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] intel-openmp==2021.4.0
[pip3] mkl==2021.1. | https://github.com/pytorch/pytorch/issues/169160 | closed | [] | 2025-11-27T03:19:54Z | 2025-11-27T20:24:46Z | 1 | dashanji |
vllm-project/vllm | 29,564 | [Doc]: Make PyTorch profiler gzip and CUDA time dump configurable | ### 📚 The doc issue
We observed that enabling both use_gzip and dump_self_cuda_time_total in the vLLM torch profiler introduces significant overhead during profiling.
For example, when profiling 10 randomly generated requests (1000 input tokens, 200 output tokens) on an A100 using the Qwen3-32B model, we found that gzip compression of the profiling trace and dumping the CUDA time table take ~68 seconds, dominating the overall profiling time.
The main sources of overhead appear to be:
1. Gzip compression of the profiling trace file
2. Generation and dumping of the CUDA time summary table
After disabling these two features, the total profiling dump time is reduced to ~18 seconds.
In many profiling scenarios (e.g., quick performance checks or small-scale experiments), users may not need gzip compression or the CUDA time table. Therefore, it would be helpful to make these two behaviors individually configurable via environment variables—enabled by default for completeness, but optionally turnable off when faster profiling turnaround is preferred. Moreover, gzip compression could potentially be performed asynchronously after the trace is dumped, allowing lower-latency profiling in staging or pre-production environments.
This patch proposes adding such configurability so users can selectively disable gzip compression and/or CUDA time table generation when they want a faster and lighter profiling workflow.
### Suggest a potential alternative/fix
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29564 | closed | [
"documentation"
] | 2025-11-27T02:21:20Z | 2025-12-01T04:30:48Z | 1 | zhangruoxu |
pytorch/pytorch | 169,157 | AOTI does not support fallback kernels with parameters of types other than int and tensor. | ### 🚀 The feature, motivation and pitch
Currently, AOTI does not support fallback kernels with parameters of types other than int and tensor. https://github.com/pytorch/pytorch/blob/main/torch/_inductor/codegen/cpp_wrapper_cpu.py#L2723-L2729.
Why does AOTI restrict the parameter types?
Are there any plans to add support for fallback kernels with more complex parameters?
### Alternatives
I implemented a workaround, replacing `generate_fallback_kernel_with_runtime_lookup_aot` with `generate_fallback_kernel_with_runtime_lookup_nopython`, which worked in my experiments.
### Additional context
_No response_
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @yushangdi @benjaminglass1 @jataylo @iupaikov-amd | https://github.com/pytorch/pytorch/issues/169157 | open | [
"triaged",
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 2025-11-27T02:10:26Z | 2025-12-18T02:30:56Z | 3 | CaoE |
vllm-project/vllm | 29,562 | [Bug]: "\n\n" content between reasoning and tool_call content when tool_call and stream mode | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
https://github.com/QwenLM/Qwen3/issues/1755
When stream mode is true, the response contains the content "\n\n" between the reasoning and the tool_call; but with stream mode false, it doesn't generate the "\n\n" content.
Is something handled differently? I don't want the "\n\n" content between the reasoning and the tool_call.
<img width="974" height="533" alt="Image" src="https://github.com/user-attachments/assets/0cc36343-3c0f-4ce1-9028-30f561a55dac" />
Here is my request:
```
{
"model": "Qwen3-235B-A22B-Thinking-2507",
"tools": [
{
"type": "function",
"function": {
"name": "search_law_articles",
"parameters": {
"type": "object",
"properties": {
"level": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "搜索条件:法规类型"
},
"query": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "查询语句"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "法律标题"
},
"article": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "法律条款序号,如 第十条"
},
"content": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "法律条款及内容,如 第十条 贷款人委托支付"
},
"pub_department": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "发布部门"
},
"pub_time_after": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "搜索条件:发布时间晚于此时间,格式如2025-06-20"
},
"pub_time_before": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "搜索条件:发布时间早于此时间,格式如2025-06-20"
},
"imply_time_after": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "搜索条件:实施时间晚于此时间,格式如2025-06-20"
},
"imply_time_before": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "搜索条件:实施时间早于此时间,格式如2025-06-20"
}
}
},
"description": "此工具用于搜索法条内容, 库中是按照法律条目进行存储, 查询可选多个查询过滤条件"
}
}
],
"stream": true,
"messages": [
{
"role": "user",
"content": [
{
"text": "帮我解读下网络安全法",
"type": "text"
}
]
}
]
}
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29562 | open | [
"bug"
] | 2025-11-27T01:49:04Z | 2025-11-27T01:49:04Z | 0 | NiuBlibing |
vllm-project/vllm | 29,560 | [Doc]: Batch Invariance on Ampere Platforms | ### 📚 The doc issue
Does the batch invariance feature released in vllm 0.11.2 support the Ampere architecture? If adaptations are required, what modifications need to be made?
### Suggest a potential alternative/fix
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29560 | closed | [
"documentation"
] | 2025-11-27T01:06:49Z | 2025-11-27T14:21:30Z | 0 | luo1206 |
pytorch/tutorials | 3,666 | Feedback about What is torch.nn really? | There is the following issue on this page: https://docs.pytorch.org/tutorials/beginner/nn_tutorial.html
In the section "Neural net from scratch (without torch.nn)" there is a pre-training loss function evaluation on a batch of 64 instances,
```
yb = y_train[0:bs]
print(loss_func(preds, yb))
```
then training is performed (comments my own)
```
for epoch in range(epochs):
    for i in range((n - 1) // bs + 1):
        # set_trace()
        start_i = i * bs
        end_i = start_i + bs
        xb = x_train[start_i:end_i]  # note that xb gets redefined
        yb = y_train[start_i:end_i]  # note that yb gets redefined
        pred = model(xb)
        loss = loss_func(pred, yb)
        loss.backward()
        with torch.no_grad():
            weights -= weights.grad * lr
            bias -= bias.grad * lr
            weights.grad.zero_()
            bias.grad.zero_()
```
and the loss function is evaluated again to demonstrate a reduction in loss.
`print(loss_func(model(xb), yb), accuracy(model(xb), yb))`
The final evaluation is not applied to the same data as the first one, though. Both invoke xb and yb, but in the pre-training evaluation xb and yb are the first 64 instances from the set; during training these variables are reassigned to each subsequent batch, so the final evaluation is performed on the final batch only.
Pre and post-training evaluations should be performed on the same batch, either the original 64 instances from the first training batch, or (if the intent is to demonstrate generalization loss) the test dataset.
cc @albanD @jbschlosser | https://github.com/pytorch/tutorials/issues/3666 | open | [
"core"
] | 2025-11-26T21:16:14Z | 2025-11-26T21:35:10Z | null | bogpetre |
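The reporter's proposed fix (evaluate the pre- and post-training loss on the same batch) can be sketched with a toy scratch model in plain PyTorch. All sizes, the separable toy data, and the learning rate below are illustrative choices, not values from the tutorial:

```python
import torch

torch.manual_seed(0)
bs = 4
x_train = torch.randn(16, 3)
y_train = (x_train[:, 0] > 0).long()  # linearly separable toy labels

weights = torch.randn(3, 2, requires_grad=True)
bias = torch.zeros(2, requires_grad=True)
loss_func = torch.nn.functional.cross_entropy

def model(xb):
    return xb @ weights + bias

# Fix the reference batch ONCE, before training starts
xb_ref, yb_ref = x_train[:bs], y_train[:bs]
loss_before = loss_func(model(xb_ref), yb_ref).item()

for epoch in range(100):
    for i in range(len(x_train) // bs):
        xb = x_train[i * bs:(i + 1) * bs]
        yb = y_train[i * bs:(i + 1) * bs]
        loss = loss_func(model(xb), yb)
        loss.backward()
        with torch.no_grad():
            weights -= weights.grad * 0.1
            bias -= bias.grad * 0.1
            weights.grad.zero_()
            bias.grad.zero_()

# Evaluate on the SAME reference batch, not on whatever xb/yb last held
loss_after = loss_func(model(xb_ref), yb_ref).item()
print(loss_before, "->", loss_after)
```

With this change, loss_before and loss_after are directly comparable because both are computed on xb_ref/yb_ref, the first training batch.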
huggingface/trl | 4,582 | Does the GRPO Trainer support multi-image input for Qwen3-VL? | Does the GRPO Trainer support multi-image input for Qwen3-VL? | https://github.com/huggingface/trl/issues/4582 | open | [
"🏋 GRPO"
] | 2025-11-26T14:03:57Z | 2025-11-27T08:08:25Z | 1 | Lestoky |
huggingface/diffusers | 12,722 | How to run qwen-image successfully on a Kaggle T4 * 2 GPU setup? | ```python3
!python3 -m pip install -U diffusers peft bitsandbytes
import diffusers, torch, math
qwen = diffusers.QwenImagePipeline.from_pretrained('Qwen/Qwen-Image', torch_dtype=torch.float16, low_cpu_mem_usage=True, quantization_config=diffusers.PipelineQuantizationConfig(quant_backend='bitsandbytes_4bit', quant_kwargs={'load_in_4bit':True, 'bnb_4bit_quant_type':'nf4', 'bnb_4bit_compute_dtype':torch.float16}, components_to_quantize=['transformer', 'text_encoder']))
qwen.scheduler = diffusers.FlowMatchEulerDiscreteScheduler.from_config({'base_image_seq_len':256, 'base_shift':math.log(3), 'invert_sigmas':False, 'max_image_seq_len':8192, 'max_shift':math.log(3), 'num_train_timesteps':1000, 'shift':1, 'shift_terminal':None, 'stochastic_sampling':False, 'time_shift_type':'exponential', 'use_beta_sigmas':False, 'use_dynamic_shifting':True, 'use_exponential_sigmas':False, 'use_karras_sigmas':False})
qwen.load_lora_weights('lightx2v/Qwen-Image-Lightning', weight_name='Qwen-Image-Lightning-4steps-V2.0.safetensors', adapter_name='lightning')
qwen.set_adapters('lightning', adapter_weights=1)
qwen.enable_sequential_cpu_offload()
qwen(prompt='a beautiful girl', height=1280, width=720, num_inference_steps=4, true_cfg_scale=1).images[0].save('a.png')
```
----> 3 qwen = diffusers.QwenImagePipeline.from_pretrained('Qwen/Qwen-Image', torch_dtype=torch.float16, low_cpu_mem_usage=True, quantization_config=diffusers.PipelineQuantizationConfig(quant_backend='bitsandbytes_4bit', quant_kwargs={'load_in_4bit':True, 'bnb_4bit_quant_type':'nf4', 'bnb_4bit_compute_dtype':torch.float16}, components_to_quantize=['transformer', 'text_encoder']))
OutOfMemoryError: CUDA out of memory. Tried to allocate 34.00 MiB. GPU 0 has a total capacity of 14.74 GiB of which 4.19 MiB is free. Process 8568 has 14.73 GiB memory in use. Of the allocated memory 14.50 GiB is allocated by PyTorch, and 129.00 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
How to get more cuda memory?
@yiyixuxu @DN6 | https://github.com/huggingface/diffusers/issues/12722 | open | [] | 2025-11-26T12:53:30Z | 2025-11-28T03:54:07Z | null | chaowenguo |
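As a first mitigation for the fragmentation that the error message itself points at, the CUDA caching allocator can be switched to expandable segments before the process starts. This is only the allocator tweak the traceback suggests; it does not reduce the total VRAM the 4-bit pipeline needs, and the script name below is hypothetical:

```shell
# Must be set before the process makes any CUDA allocation
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
python run_qwen_image.py  # hypothetical script containing the pipeline code above
```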
vllm-project/vllm | 29,494 | [Doc]: Documentation inconsistency: Blog mentions append_slots() but codebase uses allocate_slots() | ### 📚 The doc issue
The Automatic Prefix Caching blog post mentions:
> "The scheduler calls kv_cache_manager.append_slots()"
However, the actual codebase uses a unified `kv_cache_manager.allocate_slots()` method that handles both prefill and decode requests.
**Location:**
- Blog: [[link to blog post](https://docs.vllm.ai/en/v0.8.5/design/v1/prefix_caching.html#operations)]
- Code: ./vllm/v1/core/kv_cache_manager.py
### Suggest a potential alternative/fix
Update the blog post to reflect the actual implementation `kv_cache_manager.allocate_slots()`
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29494 | closed | [
"documentation"
] | 2025-11-26T11:37:40Z | 2025-11-26T11:46:08Z | 1 | pradsgit |
huggingface/transformers | 42,418 | Custom nn.Parameter initialization in PreTrainedModel subclasses is overwritten by post_init()/from_pretrained() causing NaNs/Zeros | ### System Info
- `transformers` version: 4.57.1
- Platform: Linux-4.18.0-147.mt20200626.413.el8_1.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.14
- Huggingface_hub version: 0.35.3
- Safetensors version: 0.6.2
- Accelerate version: 1.11.0
- Accelerate config: not found
- DeepSpeed version: 0.18.2
- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: No
- GPU type: NVIDIA A100-SXM4-80GB
### Who can help?
@Cyrilvallez @zucchini-nlp
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
import numpy as np
import os
import random
import torch
import torch.nn as nn
from transformers import Qwen3VLForConditionalGeneration

def seed_everything(TORCH_SEED):
    random.seed(TORCH_SEED)
    os.environ["PYTHONHASHSEED"] = str(TORCH_SEED)
    np.random.seed(TORCH_SEED)
    torch.manual_seed(TORCH_SEED)
    torch.cuda.manual_seed(TORCH_SEED)
    torch.cuda.manual_seed_all(TORCH_SEED)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

seed_everything(66)

class TestModel1(Qwen3VLForConditionalGeneration):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.action_head = nn.Linear(1024, 7)
        self.positional_embedding = nn.Parameter(torch.randn(16, 1152))
        self.post_init()

class TestModel2(nn.Module):
    def __init__(self, *args, model_path, **kwargs):
        super().__init__(*args, **kwargs)
        self.model = Qwen3VLForConditionalGeneration.from_pretrained(model_path)
        self.action_head = nn.Linear(1024, 7)
        self.positional_embedding = nn.Parameter(torch.randn(16, 1152))

test_model1 = TestModel1.from_pretrained("Qwen/Qwen3-VL-4B-Instruct")
test_model2 = TestModel2(model_path="Qwen/Qwen3-VL-4B-Instruct")
print(test_model1.positional_embedding)
print(test_model1.positional_embedding.mean(), test_model1.positional_embedding.std())
print(test_model2.positional_embedding)
print(test_model2.positional_embedding.mean(), test_model2.positional_embedding.std())
```
### Expected behavior
When subclassing a model (inheriting from PreTrainedModel, e.g., Qwen3VLForConditionalGeneration, LlamaForCausalLM) to add custom learnable parameters, user-defined initialization in __init__ is often silently overwritten.
This occurs because from_pretrained (or the end of __init__) triggers self.post_init(), which recursively calls _init_weights. This mechanism re-initializes all parameters, ignoring the explicit initialization code provided by the user in __init__.
In the specific case of Qwen3-VL (and potentially others), this re-initialization results in NaNs or Zeros, rendering the model unusable without manual intervention.
Steps to reproduce: the following script demonstrates the issue. Note: I used torch.randn for the custom parameter initialization. While I understand that torch.randn samples from a standard normal distribution and does not guarantee an exact sample mean of 0 and std of 1, it should result in valid float values. The observed NaNs/Zeros confirm that this initialization is being discarded and replaced by a faulty internal initialization logic. | https://github.com/huggingface/transformers/issues/42418 | open | [
"Usage",
"Feature request",
"bug"
] | 2025-11-26T10:29:57Z | 2025-12-01T15:10:32Z | 10 | Noietch |
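A minimal, self-contained analogue of the behavior the reporter describes, using plain torch only (FakePreTrained is a toy stand-in for PreTrainedModel, not transformers code): post_init() re-applies _init_weights over every module, so custom initialization done in __init__ is discarded, while initialization moved into an overridden _init_weights survives.

```python
import torch
import torch.nn as nn

class FakePreTrained(nn.Module):
    """Toy stand-in for transformers' PreTrainedModel init machinery."""

    def post_init(self):
        # transformers' post_init() / from_pretrained() ends up applying
        # _init_weights to every submodule, clobbering manual __init__ inits.
        self.apply(self._init_weights)

    def _init_weights(self, module):
        for p in module.parameters(recurse=False):
            nn.init.zeros_(p)  # stand-in for the library's default init

class Broken(FakePreTrained):
    def __init__(self):
        super().__init__()
        self.pos = nn.Parameter(torch.randn(4, 8))  # discarded by post_init()
        self.post_init()

class Fixed(FakePreTrained):
    def __init__(self):
        super().__init__()
        self.pos = nn.Parameter(torch.empty(4, 8))
        self.post_init()

    def _init_weights(self, module):
        super()._init_weights(module)
        if module is self:
            # The custom init lives INSIDE _init_weights, so it survives
            # any later re-initialization pass.
            nn.init.normal_(self.pos)

print(Broken().pos.abs().sum().item())       # the randn init from __init__ was wiped
print(Fixed().pos.abs().sum().item() > 0.0)  # the custom init survives
```

With the real library the same idea applies: override _init_weights in the subclass and initialize the custom parameters there (guarding on which module is passed in), instead of relying on values assigned in __init__.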
huggingface/diffusers | 12,720 | how to quantize wan 2.2 vace after loading a lora? | ```python3
diffusers.WanVACEPipeline.from_pretrained('linoyts/Wan2.2-VACE-Fun-14B-diffusers', vae=diffusers.AutoencoderKLWan.from_pretrained('linoyts/Wan2.2-VACE-Fun-14B-diffusers', subfolder='vae', torch_dtype=torch.float32), torch_dtype=torch.bfloat16, quantization_config=diffusers.PipelineQuantizationConfig(quant_backend='bitsandbytes_8bit', quant_kwargs={'load_in_8bit':True}, components_to_quantize=['transformer', 'transformer_2'])).save_pretrained('wan')
```
Normally I can save the quantized model this way.
But now I want to merge the LoRA into the weights, quantize the result, and then save the model with the LoRA baked in. How?
```python3
wan = diffusers.WanVACEPipeline.from_pretrained('linoyts/Wan2.2-VACE-Fun-14B-diffusers', vae=diffusers.AutoencoderKLWan.from_pretrained('linoyts/Wan2.2-VACE-Fun-14B-diffusers', subfolder='vae', torch_dtype=torch.float32), torch_dtype=torch.bfloat16)
wan.load_lora_weights('lightx2v/Wan2.2-Lightning', weight_name='Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1/high_noise_model.safetensors', adapter_name='lightning')
wan.load_lora_weights('lightx2v/Wan2.2-Lightning', weight_name='Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1/low_noise_model.safetensors', adapter_name='lightning_2', load_into_transformer_2=True)
wan.set_adapters(['lightning', 'lightning_2'], adapter_weights=[1] * 2)
# how to quantize this pipeline and then save_pretrained it?
```
@yiyixuxu @DN6 | https://github.com/huggingface/diffusers/issues/12720 | open | [] | 2025-11-26T10:11:38Z | 2025-12-11T17:29:30Z | null | chaowenguo |
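One pattern worth trying (a sketch, not verified against this exact Wan 2.2 VACE pipeline) is to fuse the LoRA into the base weights, drop the adapter state, save the fused pipeline, and only then reload it with a quantization_config. fuse_lora, unload_lora_weights, and save_pretrained are existing diffusers pipeline methods, but the components list below is an assumption:

```python
def fuse_and_save(pipe, out_dir):
    """Bake the active LoRA weights into the base weights, then save.

    The saved directory can afterwards be reloaded with
    from_pretrained(out_dir, quantization_config=...) so the
    quantization is applied to the already-fused weights.
    """
    # Merge the LoRA deltas into both transformers (component names assumed)
    pipe.fuse_lora(components=["transformer", "transformer_2"])
    # Drop the adapter bookkeeping so only the fused weights remain
    pipe.unload_lora_weights()
    pipe.save_pretrained(out_dir)
```

Quantizing an already LoRA-loaded bitsandbytes pipeline in place is not generally supported, which is why the fuse-save-reload order matters here.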
vllm-project/vllm | 29,489 | [Usage]: Removing last generated token from output and kv cache | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : version 3.28.3
Libc version : glibc-2.39
==============================
PyTorch Info
==============================
PyTorch version : 2.9.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.13.5 | packaged by conda-forge | (main, Jun 16 2025, 08:27:50) [GCC 13.3.0] (64-bit runtime)
Python platform : Linux-6.8.0-87-generic-x86_64-with-glibc2.39
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.93
CUDA_MODULE_LOADING set to :
GPU models and configuration :
GPU 0: NVIDIA B200
GPU 1: NVIDIA B200
GPU 2: NVIDIA B200
GPU 3: NVIDIA B200
GPU 4: NVIDIA B200
GPU 5: NVIDIA B200
GPU 6: NVIDIA B200
GPU 7: NVIDIA B200
Nvidia driver version : 570.195.03
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) PLATINUM 8570
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 2
CPU(s) scaling MHz: 33%
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities ibpb_exit_to_user
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 600 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI | https://github.com/vllm-project/vllm/issues/29489 | open | [
"usage"
] | 2025-11-26T09:35:37Z | 2025-11-26T09:36:37Z | 0 | josefdra |
huggingface/diffusers | 12,719 | how to use quantization and device_map='balanced' to run qwen-image on Kaggle T4 * 2 | ```python3
!python3 -m pip install -U diffusers peft bitsandbytes protobuf
import diffusers, torch, math
qwen = diffusers.QwenImagePipeline.from_pretrained('Qwen/Qwen-Image', quantization_config=diffusers.PipelineQuantizationConfig(quant_backend='bitsandbytes_4bit', quant_kwargs={'load_in_4bit':True, 'bnb_4bit_quant_type':'nf4', 'bnb_4bit_compute_dtype':torch.float16}, components_to_quantize=['transformer', 'text_encoder']), torch_dtype=torch.float16, device_map='balanced')
print(qwen.hf_device_map)
qwen.scheduler = diffusers.FlowMatchEulerDiscreteScheduler.from_config({'base_image_seq_len':256, 'base_shift':math.log(3), 'invert_sigmas':False, 'max_image_seq_len':8192, 'max_shift':math.log(3), 'num_train_timesteps':1000, 'shift':1, 'shift_terminal':None, 'stochastic_sampling':False, 'time_shift_type':'exponential', 'use_beta_sigmas':False, 'use_dynamic_shifting':True, 'use_exponential_sigmas':False, 'use_karras_sigmas':False})
qwen.load_lora_weights('lightx2v/Qwen-Image-Lightning', weight_name='Qwen-Image-Lightning-4steps-V2.0.safetensors', adapter_name='lightning')
qwen.set_adapters('lightning', adapter_weights=1)
qwen(prompt='a beautiful girl', height=1280, width=720, num_inference_steps=4, true_cfg_scale=1).images[0].save('a.png')
```
WARNING:accelerate.big_modeling:Some parameters are on the meta device because they were offloaded to the cpu.
{'text_encoder': 'cpu', 'vae': 0}. Where is the transformer?
NotImplementedError: Cannot copy out of meta tensor; no data!
I want to ask how to make the above code work on Kaggle. Why is 16 GB * 2 of VRAM still not enough to run the 4-bit quantized qwen-image? I want to take full advantage of both GPUs. Do I need max_memory?
full error logs:
/usr/local/lib/python3.11/dist-packages/torch/utils/_contextlib.py in decorate_context(*args, **kwargs)
114 def decorate_context(*args, **kwargs):
115 with ctx_factory():
--> 116 return func(*args, **kwargs)
117
118 return decorate_context
/usr/local/lib/python3.11/dist-packages/diffusers/pipelines/qwenimage/pipeline_qwenimage.py in __call__(self, prompt, negative_prompt, true_cfg_scale, height, width, num_inference_steps, sigmas, guidance_scale, num_images_per_prompt, generator, latents, prompt_embeds, prompt_embeds_mask, negative_prompt_embeds, negative_prompt_embeds_mask, output_type, return_dict, attention_kwargs, callback_on_step_end, callback_on_step_end_tensor_inputs, max_sequence_length)
566 )
567 do_true_cfg = true_cfg_scale > 1 and has_neg_prompt
--> 568 prompt_embeds, prompt_embeds_mask = self.encode_prompt(
569 prompt=prompt,
570 prompt_embeds=prompt_embeds,
/usr/local/lib/python3.11/dist-packages/diffusers/pipelines/qwenimage/pipeline_qwenimage.py in encode_prompt(self, prompt, device, num_images_per_prompt, prompt_embeds, prompt_embeds_mask, max_sequence_length)
252
253 if prompt_embeds is None:
--> 254 prompt_embeds, prompt_embeds_mask = self._get_qwen_prompt_embeds(prompt, device)
255
256 prompt_embeds = prompt_embeds[:, :max_sequence_length]
/usr/local/lib/python3.11/dist-packages/diffusers/pipelines/qwenimage/pipeline_qwenimage.py in _get_qwen_prompt_embeds(self, prompt, device, dtype)
203 txt, max_length=self.tokenizer_max_length + drop_idx, padding=True, truncation=True, return_tensors="pt"
204 ).to(device)
--> 205 encoder_hidden_states = self.text_encoder(
206 input_ids=txt_tokens.input_ids,
207 attention_mask=txt_tokens.attention_mask,
/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py in _wrapped_call_impl(self, *args, **kwargs)
1737 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1738 else:
-> 1739 return self._call_impl(*args, **kwargs)
1740
1741 # torchrec tests the code consistency with the following code
/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py in _call_impl(self, *args, **kwargs)
1748 or _global_backward_pre_hooks or _global_backward_hooks
1749 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1750 return forward_call(*args, **kwargs)
1751
1752 result = None
/usr/local/lib/python3.11/dist-packages/accelerate/hooks.py in new_forward(module, *args, **kwargs)
173 output = module._old_forward(*args, **kwargs)
174 else:
--> 175 output = module._old_forward(*args, **kwargs)
176 return module._hf_hook.post_forward(module, output)
177
/usr/local/lib/python3.11/dist-packages/transformers/utils/generic.py in wrapper(self, *args, **kwargs)
941
942 try:
--> 943 output = func(self, *args, **kwargs)
944 if is_requested_to_return_tuple or (is_configured_to_return_tuple and is_top_level_module):
945 | https://github.com/huggingface/diffusers/issues/12719 | open | [] | 2025-11-26T08:35:46Z | 2025-11-26T09:15:54Z | null | chaowenguo |
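Regarding the max_memory question: yes, device_map='balanced' accepts an explicit per-device budget via max_memory, and on two 16 GiB T4s it is usually worth capping each GPU below its full size so activations and the CUDA context still fit. The helper below is a sketch; the exact GiB numbers are assumptions to tune, not verified values:

```python
def kaggle_max_memory(num_gpus=2, per_gpu_gib=13, cpu_gib=20):
    """Build a max_memory dict for accelerate-style device maps.

    Capping each 16 GiB T4 at ~13 GiB leaves headroom for activations;
    anything that does not fit spills over to CPU RAM. Pass the result as
    from_pretrained(..., device_map="balanced", max_memory=kaggle_max_memory()).
    """
    budget = {i: f"{per_gpu_gib}GiB" for i in range(num_gpus)}
    budget["cpu"] = f"{cpu_gib}GiB"
    return budget

print(kaggle_max_memory())
```

Without an explicit budget, accelerate may plan against the full 16 GiB per card and leave no headroom, which matches the "transformer missing from hf_device_map / meta tensor" symptoms above.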
pytorch/pytorch | 169,112 | `torch.compile(fullgraph=True, dynamic=True)` on CUDA fails when using `torch.utils.dlpack.to_dlpack` / `from_dlpack` (`torch._C._to_dlpack` skipped by Dynamo) | ### 🐛 Describe the bug
### Summary
When compiling a simple model that uses `torch.utils.dlpack.to_dlpack` / `from_dlpack` with:
backend="inductor", fullgraph=True, dynamic=True, device="cuda"
the eager CUDA execution works fine, but `torch.compile` fails during Dynamo tracing with:
> torch._dynamo.exc.Unsupported: Attempted to call function marked as skipped
Dynamo does not know how to trace the builtin torch._C._to_dlpack.
In some setups this shows up only as a warning + graph break, but with `fullgraph=True` it turns into a hard error and the script terminates.
### Minimal Repro
```python
# -*- coding: utf-8 -*-
import torch
import torch.nn as nn
class MyModel(nn.Module):
def forward(self, x):
if x.dtype == torch.bool:
# bool path: go through uint8 + dlpack roundtrip and back to bool
x_uint8 = x.to(torch.uint8)
dlpack = torch.utils.dlpack.to_dlpack(x_uint8)
converted = torch.utils.dlpack.from_dlpack(dlpack)
return converted.bool()
else:
# non-bool path: direct dlpack roundtrip
dlpack = torch.utils.dlpack.to_dlpack(x)
return torch.utils.dlpack.from_dlpack(dlpack)
def my_model_function():
return MyModel()
def GetInput():
# bool tensor, shape [2], to exercise the bool branch
return torch.rand(2).bool()
def main():
if not torch.cuda.is_available():
raise RuntimeError(
"CUDA is not available, but this repro expects device='cuda'."
)
device = torch.device("cuda")
# ---------- 1. Eager on CUDA: works ----------
model_eager = my_model_function().to(device).eval()
inp = GetInput().to(device)
with torch.no_grad():
out_eager = model_eager(inp)
print("=== Eager CUDA Output ===")
print("out_eager:", out_eager)
print("shape:", out_eager.shape)
print("dtype:", out_eager.dtype)
print("device:", out_eager.device)
# ---------- 2. torch.compile on CUDA ----------
from torch._inductor import config as inductor_config
old_max_autotune = inductor_config.max_autotune
inductor_config.max_autotune = True # emulate 'max-autotune' mode
try:
compiled_model = torch.compile(
model_eager,
backend="inductor",
fullgraph=True,
dynamic=True,
)
with torch.no_grad():
out_compiled = compiled_model(inp) # <-- fails here
print("\n=== compiled Output ===")
print("out_compiled:", out_compiled)
print("shape:", out_compiled.shape)
print("dtype:", out_compiled.dtype)
print("device:", out_compiled.device)
same = torch.equal(out_eager, out_compiled)
print("\n=== eager vs compiled elementwise equal ===", bool(same))
finally:
inductor_config.max_autotune = old_max_autotune
if __name__ == "__main__":
main()
```
### Console output (abridged):
```
=== Eager CUDA Output ===
out_eager: tensor([True, True], device='cuda:0')
shape: torch.Size([2])
dtype: torch.bool
device: cuda:0
.../torch/_dynamo/variables/functions.py:1598: UserWarning:
Dynamo does not know how to trace the builtin `torch._C._to_dlpack.` ...
torch._dynamo.utils.warn_once(explanation + "\n" + "\n".join(hints))
Traceback (most recent call last):
...
File ".../torch/_dynamo/eval_frame.py", line 841, in compile_wrapper
raise e.with_traceback(None) from e.__cause__ # User compiler error
torch._dynamo.exc.Unsupported: Attempted to call function marked as skipped
Explanation: Dynamo does not know how to trace the builtin `torch._C._to_dlpack.` ...
Hint: If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it ...
Hint: If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator
or, if it is traceable, use `torch.compiler.allow_in_graph`.
Developer debug context: module: torch._C, qualname: _to_dlpack, skip reason: <missing reason>
from user code:
File "for_test.py", line 11, in forward
dlpack = torch.utils.dlpack.to_dlpack(x_uint8)
```
### Versions
```
PyTorch: 2.9.0 (installed via pip)
CUDA: 12.x
cuDNN: 9.x
Python: 3.10.x
OS: Ubuntu 22.04 (x86_64)
GPU: NVIDIA RTX A6000 (repro uses cuda:0)
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @kadeng @amjames @Lucaskabela @jataylo | https://github.com/pytorch/pytorch/issues/169112 | open | [
"triaged",
"module: dlpack",
"oncall: pt2",
"module: dynamo"
] | 2025-11-26T08:13:38Z | 2025-12-04T02:10:01Z | 3 | tinywisdom |
pytorch/pytorch | 169,106 | Why is fusion restricted here in dynamic mode? | https://github.com/pytorch/pytorch/blob/3ab08946d5052eaeda11d683d6a58e801a032755/torch/_inductor/ir.py#L3555
I wrote a small demo myself and the numerical accuracy is perfect
```python
import torch
from torch import nn
from typing import List

# concat along a dynamic dim
class MyCatMul(nn.Module):
    def __init__(self, n: int):
        super().__init__()
        self.n = n
        self.W = nn.Parameter(torch.randn(64, 64))

    def forward(self, xs: List[torch.Tensor]):
        assert len(xs) == self.n, f"need {self.n} tensors"
        # last = torch.sigmoid(xs[-1] @ self.W)
        last = torch.sigmoid(xs[-1])
        outs = list(xs[:-1]) + [last]
        return torch.cat(outs, dim=0)

n = 15
model = MyCatMul(n).cuda()
x_list = []
for i in range(n):
    a = torch.randint(2, 120, (1,)).item()
    x_list.append(torch.randn(a, 64, device='cuda'))

from torch.export import export, Dim

dynamic_shapes = [
    {0: Dim(f"b{i}", min=1, max=2048), 1: 64}
    for i in range(n)
]

with torch.no_grad():
    out = model(x_list)
    ep = export(model, (x_list,), dynamic_shapes=[dynamic_shapes])

torch._inductor.aoti_compile_and_package(
    ep, package_path="./model.pt2",
    inductor_configs={"max_autotune": True,
                      "epilogue_fusion": True,
                      "permute_fusion": True,
                      "max_autotune_pointwise": True,
                      "max_autotune_gemm": True,
                      "freezing": True,
                      }
)
aot_model = torch._inductor.aoti_load_package("./model.pt2")

################ diff ##############
for i in range(100):
    test_input = []
    for ii in range(n):
        a = torch.randint(2, 1024, (1,)).item()
        test_input.append(torch.randn(a, 64, device='cuda'))
    out_raw = model(test_input)
    out_aot = aot_model(test_input)
    diff = torch.abs(out_raw - out_aot)
    max_val, max_idx = diff.max(), diff.argmax()
    coord = torch.unravel_index(max_idx, out_raw.shape)
    val_raw = out_raw.flatten()[max_idx]
    val_aot = out_aot.flatten()[max_idx]
    avg_err = diff.mean().item()
    if max_val > 1e-5:
        print(f"iter {i}: max_err {max_val.item():.8f} @ coord {coord} "
              f"raw={val_raw.item():.8f} aot={val_aot.item():.8f} "
              f"avg_err {avg_err:.8f}")
        raise AssertionError("eager vs AOT outputs diverge")
print("pass")
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @kadeng @muchulee8 @amjames @aakhundov @coconutruben @jataylo | https://github.com/pytorch/pytorch/issues/169106 | closed | [
"triaged",
"oncall: pt2",
"module: inductor"
] | 2025-11-26T03:56:02Z | 2025-12-10T04:43:43Z | 3 | Jin-TaoZhang |
vllm-project/vllm | 29,474 | [P/D][Metrics] Consider combined/summed metrics (e.g. ttft and e2e_request_latency) for prefill and decode instances | ### Your current environment
<details>
<summary>Env info snipped</summary>
```
Collecting environment information...
uv is set
==============================
System Info
==============================
OS : Ubuntu 24.04.1 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : version 3.28.3
Libc version : glibc-2.39
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] (64-bit runtime)
Python platform : Linux-5.15.0-152-generic-x86_64-with-glibc2.39
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.93
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA H20
GPU 1: NVIDIA H20
GPU 2: NVIDIA H20
GPU 3: NVIDIA H20
GPU 4: NVIDIA H20
GPU 5: NVIDIA H20
GPU 6: NVIDIA H20
GPU 7: NVIDIA H20
Nvidia driver version : 570.172.08
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
Model name: INTEL(R) XEON(R) PLATINUM 8562Y+
BIOS Model name: INTEL(R) XEON(R) PLATINUM 8562Y+ CPU @ 2.8GHz
BIOS CPU family: 179
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 2
CPU(s) scaling MHz: 73%
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Indirect target s | https://github.com/vllm-project/vllm/issues/29474 | open | [
"usage",
"kv-connector"
] | 2025-11-26T02:50:17Z | 2025-11-26T08:31:18Z | 1 | mgw2168-1 |
vllm-project/vllm | 29,472 | [Installation]: how to Install vllm on dell promax gb10 | ### Your current environment
I failed to install vLLM on a Dell Pro Max GB10; the messages are as follows:
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Wed_Aug_20_01:57:39_PM_PDT_2025
Cuda compilation tools, release 13.0, V13.0.88
Build cuda_13.0.r13.0/compiler.36424714_0
pip install vllm
Successfully installed torch-2.9.0 torchaudio-2.9.0 torchvision-0.24.0 vllm-0.11.2
```
(py312) dell@promaxgb10-0843:~/test/vllm/Qwen$ vllm -V
Traceback (most recent call last):
File "/home/dell/miniconda3/envs/py312/bin/vllm", line 3, in <module>
from vllm.entrypoints.cli.main import main
File "/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/entrypoints/cli/__init__.py", line 3, in <module>
from vllm.entrypoints.cli.benchmark.latency import BenchmarkLatencySubcommand
File "/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/entrypoints/cli/benchmark/latency.py", line 5, in <module>
from vllm.benchmarks.latency import add_cli_args, main
File "/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/benchmarks/latency.py", line 17, in <module>
from vllm.engine.arg_utils import EngineArgs
File "/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/engine/arg_utils.py", line 35, in <module>
from vllm.attention.backends.registry import AttentionBackendEnum
File "/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/attention/__init__.py", line 4, in <module>
from vllm.attention.backends.abstract import (
File "/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/attention/backends/abstract.py", line 9, in <module>
from vllm.model_executor.layers.linear import ColumnParallelLinear
File "/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/model_executor/__init__.py", line 4, in <module>
from vllm.model_executor.parameter import BasevLLMParameter, PackedvLLMParameter
File "/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/model_executor/parameter.py", line 11, in <module>
from vllm.distributed import (
File "/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/distributed/__init__.py", line 4, in <module>
from .communication_op import *
File "/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/distributed/communication_op.py", line 9, in <module>
from .parallel_state import get_tp_group
File "/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/distributed/parallel_state.py", line 250, in <module>
direct_register_custom_op(
File "/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/utils/torch_utils.py", line 640, in direct_register_custom_op
from vllm.platforms import current_platform
File "/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/platforms/__init__.py", line 257, in __getattr__
_current_platform = resolve_obj_by_qualname(platform_cls_qualname)()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/utils/import_utils.py", line 89, in resolve_obj_by_qualname
module = importlib.import_module(module_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dell/miniconda3/envs/py312/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/platforms/cuda.py", line 16, in <module>
import vllm._C # noqa
^^^^^^^^^^^^^^
ImportError: libtorch_cuda.so: cannot open shared object file: No such file or directory
```
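Not part of the original report — a small diagnostic sketch for this class of failure: a missing `libtorch_cuda.so` usually means the installed torch wheel is CPU-only (a common outcome when pip resolves a default aarch64 wheel without CUDA libraries). The helper below only inspects the installed wheel and makes no assumptions about vLLM itself.

```python
import importlib.util

def torch_build_summary() -> str:
    """Report whether the installed torch wheel was built with CUDA.

    On aarch64 machines, a CPU-only wheel is a common cause of
    `ImportError: libtorch_cuda.so` when importing vllm._C.
    """
    if importlib.util.find_spec("torch") is None:
        return "torch is not installed"
    import torch  # imported lazily so the check still runs without torch
    cuda = torch.version.cuda or "none (CPU-only build)"
    return f"torch {torch.__version__}, CUDA build: {cuda}"

print(torch_build_summary())
```

If this reports a CPU-only build, reinstalling torch from a CUDA-enabled wheel source before installing vLLM is the likely fix.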
### How you are installing vllm
```sh
pip install vllm
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29472 | open | [
"installation"
] | 2025-11-26T02:41:18Z | 2026-01-01T12:28:29Z | 2 | goactiongo |
vllm-project/vllm | 29,436 | [Bug]: vLLM Serve with LMCache enabled produces wrong output for GPT-OSS-20B | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
The vLLM serve command with LMCache enabled produces wrong output with GPT-OSS-20B on subsequent invocations with the same prompt.
Steps to reproduce:
Command to start the server:
```
LMCACHE_CONFIG_FILE=lmcache_cpu.yaml
vllm serve openai/gpt-oss-20b --port 8000 --kv-transfer-config '{"kv_connector":"LMCacheConnectorV1", "kv_role":"kv_both"}'
```
Invocation:
```
curl 127.0.0.1:8000/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "openai/gpt-oss-20b", "messages": [ {"role": "user", "content": "What is Amazon SageMaker?"}]}'
```
First invocation:
```
{
"id":"chatcmpl-951ca7178b1e4226b0343cb070033487",
"object":"chat.completion",
"created":1764098087,
"model":"openai/gpt-oss-20b",
"choices":[
{"index":0,"message":{"role":"assistant","content":"**Amazon SageMaker** is Amazon Web Services’ fully‑managed platform that lets you build, train, tune, and deploy machine‑learning models fast—without managing the underlying infrastructure.\n\nKey capabilities\n\n| Feature | What it does |\n|--------|--------------|\n| **SageMaker Studio** | A web‑based IDE that bundles notebooks, visual debugging, model monitoring, and collaboration tools. |\n| **Built‑in algorithms & frameworks** | Pre‑packaged models (XGBoost, Linear Learner, etc.) and support for your own TensorFlow, PyTorch, MXNet, Scikit‑learn, R, etc. |\n| **Auto‑ML & automated model tuning** | SageMaker Autopilot automatically searches model architectures and hyper‑parameters. |\n| **Managed training** | Spot, distributed, and GPU training jobs that scale to the required compute. |\n| **Model deployment** | One‑click production endpoints, batch transform, edge inference (SageMaker Edge), and real‑time or asynchronous inference. |\n| **Inference pipelines** | Compose multiple models or processing steps into a single pipeline. |\n| **Model monitoring & A/B testing** | Continuous evaluation of drift, predictions, and performance metrics. |\n| **Security & compliance** | VPC, IAM, KMS encryption, private cataloging, and audit trails. |\n\nIn short, SageMaker removes the operational burden of ML—so teams can focus on data science and business value rather than servers, networking, and scaling.","refusal":null,"annotations":null,"audio":null,"function_call":null,"tool_calls":[],"reasoning":"User asks \"What is Amazon SageMaker?\" Short answer. Provide description: fully managed ML service, environment to build, train, deploy models, etc. Should be succinct.","reasoning_content":"User asks \"What is Amazon SageMaker?\" Short answer. Provide description: fully managed ML service, environment to build, train, deploy models, etc. 
Should be succinct."},"logprobs":null,"finish_reason":"stop","stop_reason":null,"token_ids":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":75,"total_tokens":426,"completion_tokens":351,"prompt_tokens_details":null},"prompt_logprobs":null,"prompt_token_ids":null,"kv_transfer_params":null}
```
Second invocation:
```
{
"id": "chatcmpl-4ebc19fc5c2a41a7bebc01ea8d1c98b1",
"object": "chat.completion",
"created": 1764098160,
"model": "openai/gpt-oss-20b",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Sure! Here’s a basic guide to get you started with writing a cool, informative yet accessible article on **\"The Fascinating World of Quantum Computing\"** for a general audience. Feel free to adapt the structure, tone, or content to match your style and publication’s guidelines.\n\n---\n\n## 1. Hook & Context (≈150–200 words)\n\n- **Start with a vivid anecdote, surprising fact, or a relatable analogy** that introduces the “wow” moment in quantum computing.\n - *Example:* “Imagine a coin that, instead of being heads or tails, can be both at the same time… until you look at it.” \n- **Briefly state why this topic matters** to everyday life: faster drug discovery, better encryption, breakthrough materials, etc.\n\n> **Tell readers what they’ll learn**: a quick glimpse of quantum fundamentals, why it’s different from classic bits, and how it could reshape technologies.\n\n---\n\n## 2. What’s a Quantum Computer? (≈300 words)\n\n| Section | Content | Quick Tips |\n|---------|---------|------------|\n| **2.1 “Bits” vs. “Qubits”** | • Classical bits (“0” or “1”).<br>• Qubits: superposition (both 0 & 1) & entanglement. | Use visual metaphors: a spinning top (superposition) and two dancers always in sync (entanglement). |\n| **2.2 Basic Operations** | • Quantum gates (Pauli X, H, CNOT).<br>• The role of interference. | A tiny “reversible” logic of the quantum “if‑then” that flips outcomes. |\n| **2.3 Measuring As a Collapses** | • Outcome collapse on measurement.<br>• Probabilities & expectation values. | Compare to a gamble: you only learn the re | https://github.com/vllm-project/vllm/issues/29436 | open | [
"bug"
] | 2025-11-25T19:27:24Z | 2025-11-25T19:27:24Z | 0 | ksuma2109 |
pytorch/ao | 3,389 | Is it possible to export a QAT model in AWQ Format? | I'm new to torchao and QAT but I'm pretty comfortable with PTQ techniques like AWQ and GPTQ. My deployment pipeline requires AWQ format (safetensors supported by autoawq or gptqmodel's new AWQ integration, needs to be in uint32 like Int4PackingFormat.PLAIN_INT32). I want to train a model with Int4WeightOnlyConfig, but it's confusing as to how I convert the final model into AWQ format: AWQ format is supported, but is this only for PTQ? Unless I'm missing something, you can save to roughly the same format (PLAIN_INT32 but only on xpu?) AND have AWQ support, but there's no way to export to this format? If I wrap my Int4WeightOnlyConfig in an AWQConfig, will it be trainable or only able to calibrate? Could I otherwise use something along the lines of the converter defined in [this project](https://github.com/gau-nernst/gemma3-int4/blob/92517e8cac07f5caa3e3c98f26931b9046a0fa38/convert_flax.py#L232)? | https://github.com/pytorch/ao/issues/3389 | closed | [
"triaged"
] | 2025-11-25T17:30:03Z | 2025-12-12T17:27:25Z | 10 | ambroser53 |
pytorch/executorch | 15,978 | qnn_executor_runner - mismatch in the skel files ? | hi,
I'm testing qnn_executor_runner on an S25 Ultra,
a Snapdragon 8 Gen 4 processor.
It seems the QNN backend chooses libQnnHtpV79Skel.so as the backend,
but these messages seem to point to some mismatch: it tries to call hmx_v73_convf16?
I.e., shouldn't it call hmx_v79_convf16?
V b037a:4006: CDSP0:[R]: Process "/frpc/f05c4930 qnn_executor_ru" crashed in thread "nn_3e56a57b" due to TLBMISS RW occurrence
2025-11-25 16:48:18.994 2314-2320 adsprpc cdsprpcd V b037a:4006: CDSP0:[R]: Crashed Shared Object "./libQnnHtpV79Skel.so" load address : 0x01000000
2025-11-25 16:48:18.994 2314-2320 adsprpc cdsprpcd V b037a:4006: CDSP0:[R]: [<015E5C3C>] hmx_v73_convf16_NxN_stride1+0x3C53C: (./libQnnHtpV79Skel.so)
2025-11-25 16:48:18.994 2314-2320 adsprpc cdsprpcd V b037a:4006: CDSP0:[R]: [<015E5C38>] hmx_v73_convf16_NxN_stride1+0x3C538: (./libQnnHtpV79Skel.so)
2025-11-25 16:48:18.994 2314-2320 adsprpc cdsprpcd V b037a:4006: CDSP0:[R]: [<015E5D74>] hmx_v73_convf16_NxN_stride1+0x3C674: (./libQnnHtpV79Skel.so)
2025-11-25 16:48:18.994 2314-2320 adsprpc cdsprpcd V b037a:4006: CDSP0:[R]: [<01546168>] continue_execution_bkgrnd_thread+0xA8: (./libQnnHtpV79Skel.so)
2025-11-25 16:48:18.994 2314-2320 adsprpc cdsprpcd V b037a:4006: CDSP0:[R]: [<0120EC94>] _ZN5Graph18exec_bkgrnd_workerEP12HexagonNNEnvPS_N9GraphData8ListTypeEN4hnnx3OsSE+0xD4: (./libQnnHtpV79Skel.so)
2025-11-25 16:48:18.994 2314-2320 adsprpc cdsprpcd V b037a:4006: CDSP0:[R]: [<01219EA0>] _ZNK5Graph31ubwcd_get_corresponding_surfaceEPKv+0x9E0: (./libQnnHtpV79Skel.so)
and output from adb shell
#./qnn_executor_runner --model_path ./my_model_fp16.pte --input_list_path ./raw_list.txt
[INFO] [Qnn ExecuTorch]: Deserializing processed data using QnnContextCustomProtocol
[INFO] [Qnn ExecuTorch]: create QNN Logger with log_level 1
[INFO] [Qnn ExecuTorch]: Initialize Qnn backend parameters for Qnn executorch backend type 2
[INFO] [Qnn ExecuTorch]: Caching: Caching is in RESTORE MODE.
[INFO] [Qnn ExecuTorch]: QnnContextCustomProtocol expected magic number: 0x5678abcd but get: 0x2000000
[INFO] [Qnn ExecuTorch]: Running level=1 optimization.
I 00:00:00.150474 executorch:qnn_executor_runner.cpp:313] Method loaded.
E 00:00:00.156807 executorch:method.cpp:1274] Output 0 is memory planned, or is a constant. Cannot override the existing data pointer.
I 00:00:00.156838 executorch:qnn_executor_runner.cpp:373] ignoring error from set_output_data_ptr(): 0x2
E 00:00:00.157118 executorch:method.cpp:1274] Output 1 is memory planned, or is a constant. Cannot override the existing data pointer.
I 00:00:00.157144 executorch:qnn_executor_runner.cpp:373] ignoring error from set_output_data_ptr(): 0x2
E 00:00:00.158031 executorch:method.cpp:1274] Output 2 is memory planned, or is a constant. Cannot override the existing data pointer.
I 00:00:00.158057 executorch:qnn_executor_runner.cpp:373] ignoring error from set_output_data_ptr(): 0x2
I 00:00:00.158069 executorch:qnn_executor_runner.cpp:376] Inputs prepared.
I 00:00:00.158198 executorch:qnn_executor_runner.cpp:382] Number of inputs: 1
I 00:00:00.178327 executorch:qnn_executor_runner.cpp:490] Perform 0 inference for warming up
I 00:00:00.178343 executorch:qnn_executor_runner.cpp:496] Start inference (0)
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> DspTransport call failed, error 0x00000010
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> Error from rpc transport
[ERROR] [Qnn ExecuTorch]: QnnDsp <E> Graph forward failed in execution with err 1003
[ERROR] [Qnn ExecuTorch]: qnn_graph_execute failed. Error 1003
E 00:00:00.192908 executorch:QnnExecuTorchBackend.cpp:176] Fail to execute graph
E 00:00:00.192912 executorch:method.cpp:1426] CALL_DELEGATE execute failed at instruction 0: 0x1
I 00:00:00.192924 executorch:qnn_executor_runner.cpp:514] 1 inference took 14.576000 ms, avg 14.576000 ms
F 00:00:00.192943 executorch:qnn_executor_runner.cpp:519] In function main(), assert failed (status == Error::Ok): Execution of method forward failed with status 0x1
Aborted
cc @cccclai @winskuo-quic @shewu-quic @haowhsu-quic @DannyYuyang-quic @cbilgin | https://github.com/pytorch/executorch/issues/15978 | open | [
"partner: qualcomm",
"module: qnn"
] | 2025-11-25T15:14:00Z | 2025-12-19T02:26:49Z | 3 | eliyam32 |
pytorch/executorch | 15,973 | What should I do if there is no SoC entry for my processor? | ### 📚 The doc issue
Hello. I have a device with a Snapdragon 685 processor; it is not on the Qualcomm SoCs list. In this case, is converting via XNNPACK the only option left for me? And will a model converted via XNNPACK work on Android?
### Suggest a potential alternative/fix
_No response_
cc @cccclai @winskuo-quic @shewu-quic @haowhsu-quic @DannyYuyang-quic @cbilgin | https://github.com/pytorch/executorch/issues/15973 | open | [
"partner: qualcomm",
"module: qnn"
] | 2025-11-25T13:29:32Z | 2025-11-26T01:50:30Z | null | kejndan |
vllm-project/vllm | 29,409 | [Usage]: Custom Logits Processors V1 how to get tokenizer into processor | ### Problem with tokenizer
For the second day now, I've been unable to figure out how to get a tokenizer inside a custom processor. I used the processor from the documentation as an example. I examined each object in the debugger, but couldn't find where to extract the tokenizer. In v0, this was done simply at the request level, by passing an argument to the object.
How do I pass a tokenizer to the processor?
```python
import torch
from vllm.config import VllmConfig
from vllm.sampling_params import SamplingParams
from vllm.v1.sample.logits_processor import (BatchUpdate,
LogitsProcessor,
MoveDirectionality)
class DummyLogitsProcessor(LogitsProcessor):
"""Fake logit processor to support unit testing and examples"""
@classmethod
def validate_params(cls, params: SamplingParams):
target_token: int | None = params.extra_args and params.extra_args.get(
"target_token"
)
if target_token is not None and not isinstance(target_token, int):
raise ValueError(f"target_token value {target_token} is not int")
def __init__(self, vllm_config: "VllmConfig", device: torch.device,
is_pin_memory: bool):
self.req_info: dict[int, int] = {}
def is_argmax_invariant(self) -> bool:
"""Never impacts greedy sampling"""
return False
def update_state(self, batch_update: BatchUpdate | None):
if not batch_update:
return
# Process added requests.
for index, params, _, _ in batch_update.added:
assert params is not None
self.validate_params(params)
if params.extra_args and (target_token :=
params.extra_args.get("target_token")):
self.req_info[index] = target_token
else:
self.req_info.pop(index, None)
if self.req_info:
# Process removed requests.
for index in batch_update.removed:
self.req_info.pop(index, None)
# Process moved requests, unidirectional move (a->b) and swap
# (a<->b)
for adx, bdx, direct in batch_update.moved:
a_val = self.req_info.pop(adx, None)
b_val = self.req_info.pop(bdx, None)
if a_val is not None:
self.req_info[bdx] = a_val
if direct == MoveDirectionality.SWAP and b_val is not None:
self.req_info[adx] = b_val
def apply(self, logits: torch.Tensor) -> torch.Tensor:
if not self.req_info:
return logits
# Save target values before modification
cols = torch.tensor(
list(self.req_info.values()), dtype=torch.long, device=logits.device
)
rows = torch.tensor(
list(self.req_info.keys()), dtype=torch.long, device=logits.device
)
values_to_keep = logits[rows, cols].clone()
# Mask all but target tokens
logits[rows] = float('-inf')
logits[rows, cols] = values_to_keep
return logits
```
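Not from the issue — a minimal, dependency-free sketch of one workaround pattern: since `__init__` receives the full `VllmConfig`, the processor can record the tokenizer name carried by the config and build the tokenizer lazily on first use. The attribute path `vllm_config.model_config.tokenizer` and the use of `AutoTokenizer.from_pretrained` as the loader are assumptions about this vLLM version, not verified API; the loader is injected here so the pattern stands alone.

```python
class LazyTokenizerMixin:
    """Build a tokenizer once, on first use, from a name found in the config.

    `loader` is injected; in vLLM it would plausibly be something like
    transformers' AutoTokenizer.from_pretrained (an assumption, not a
    verified vLLM API).
    """

    def init_tokenizer(self, tokenizer_name, loader):
        self._tokenizer_name = tokenizer_name
        self._tokenizer_loader = loader
        self._tokenizer = None

    @property
    def tokenizer(self):
        # Lazy construction keeps __init__ cheap and avoids loading the
        # tokenizer in worker processes that never actually need it.
        if self._tokenizer is None:
            self._tokenizer = self._tokenizer_loader(self._tokenizer_name)
        return self._tokenizer
```

In `DummyLogitsProcessor.__init__`, this might look like `self.init_tokenizer(vllm_config.model_config.tokenizer, AutoTokenizer.from_pretrained)` — again, the config attribute path is an assumption to check against your vLLM version.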
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29409 | closed | [
"usage"
] | 2025-11-25T13:24:17Z | 2025-12-02T10:33:18Z | 6 | cvadim130 |
pytorch/torchtitan | 2,086 | mxfp8 MoE train is slower for DeepSeekV3 16b and Qwen models | I have tested **mxfp8** training for **Qwen** MoE models and for **DeepSeekV3 16b** on **B200**. It did not show any speedup, and even slowed down in some cases, when I use mxfp8 (quantize.grouped_mm.mx).
I found [this](https://github.com/pytorch/ao/tree/main/torchao/prototype/moe_training#low-precision-moe-training) in the torchao repo, saying that mxfp8 gives up to a 1.6x speedup for DeepSeekV3 671b. It looks like it works only for big MoE models?
I have tried benchmarking a single MoE layer as shown [here](https://github.com/pytorch/ao/tree/main/torchao/prototype/moe_training#benchmark-single-moe-layer-forward--backward-pass).
This is what I got with dims used in [DeepSeekV3 16b](https://github.com/pytorch/torchtitan/blob/7e10d6052a8029592a37d1c843dc7949a6b30043/torchtitan/models/deepseek_v3/__init__.py#L78) [dim=2048, moe_inter_dim=1408]:
```
$ python -m benchmarks.prototype.moe_training.bench_moe_layer --recipe mxfp8 --local_batch_size=16 --dim=2048 --hidden_dim=1408 --local_num_experts=8
total_M: 131072, N: 1408, K: 2048
bf16 time: 16.882 ms
mxfp8 time: 17.710 ms
speedup: 0.953x
```
I couldn't get any speedup on Qwen3 [235B-A22B](https://github.com/pytorch/torchtitan/blob/7e10d6052a8029592a37d1c843dc7949a6b30043/torchtitan/models/qwen3/__init__.py#L168) and [30B-A3B](https://github.com/pytorch/torchtitan/blob/7e10d6052a8029592a37d1c843dc7949a6b30043/torchtitan/models/qwen3/__init__.py#L145) either.
Benchmarking of a MoE layer with dims from Qwen3 235B-A22B [dim=4096, moe_inter_dim=1536] is as follows:
```
$ python -m benchmarks.prototype.moe_training.bench_moe_layer --recipe mxfp8 --local_batch_size=16 --dim=4096 --hidden_dim=1536 --local_num_experts=8
total_M: 131072, N: 1536, K: 4096
bf16 time: 34.154 ms
mxfp8 time: 34.196 ms
speedup: 0.999x
```
Is there any way I can get a speedup using mxfp8 for the above models?
| https://github.com/pytorch/torchtitan/issues/2086 | open | [] | 2025-11-25T10:33:42Z | 2025-11-26T16:44:51Z | 2 | Yerniyaz |
vllm-project/vllm | 29,389 | [Bug]: race condition in shm_broadcast.py | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
# Problem
`ShmRingBuffer` is a lock-free queue, the implementation of which https://github.com/vllm-project/vllm/blob/12c007e288bf5c0ae3bd438036fbafbad88e706b/vllm/distributed/device_communicators/shm_broadcast.py#L98-L153
relies on the fact that when a flag is written to, signalling a valid state, the associated data is also in a valid state. To illustrate the point, consider the program
```python
shm = shared_memory.SharedMemory(..., size=128)
# set shm to 0
# process 1
shm[0] = 1
shm[64] = 1
# process 2
while shm[64] != 1:
pass
print(shm[0])
```
`ShmRingBuffer` requires that `print(shm[0])` always prints `1`. **There is no guarantee this is true**. For this to be true,
1. The Python language/implementation must provide a memory model, which it doesn't. Loosely speaking, a memory model is a set of guarantees on how source code maps to hardware instructions.
2. Even if we assume the source code maps "as intended" to hardware instructions, the hardware must ensure that process 2 must observe the writes to `shm[0]` and `shm[64]` in the same order as process 1.
An example of 2 breaking down is given in [`race_condition.cpp`](https://gist.github.com/nvjullin/cc52386e291fe41218b54406ece962a0). On an ARM CPU,
```bash
$ g++ -std=c++17 race_condition.cpp
$ ./a.out
number of violations: 5
# ...
```
Unfortunately, I don't know how to demonstrate the same race condition in Python.
# What it means
`ShmRingBuffer` can end up with corrupted memory and crash vLLM sporadically. Such a crash would be nearly impossible to reproduce and debug.
# Solutions
In order of recommendation:
1. Remove `ShmRingBuffer` and always use the fallback `self.local_socket.send(serialized_obj)`. This is the simplest.
2. Use a well-tested lock-free queue implementation and don't write our own. Lock-free programming is notoriously difficult to write correctly, requires expertise to understand and is overall a maintenance nightmare.
3. Write it in C++ with proper atomics that guarantees the ordering of writes. The implementation should document extensively the proof of its correctness across different architectures. Python provides no tools for lock-free programming, making it impossible to write.
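Not part of the issue — a minimal in-process sketch in the spirit of options 1 and 2: replace the hand-rolled flag with a synchronization primitive that carries a happens-before guarantee. `threading.Event` is used here only for illustration (threads in one process, not shared memory across processes); a real fix for `ShmRingBuffer` would need a cross-process primitive.

```python
import threading

buf = bytearray(128)          # stand-in for the shared ring-buffer chunk
ready = threading.Event()     # explicit primitive instead of a raw flag byte
seen = []

def writer():
    buf[0] = 1                # data write...
    ready.set()               # ...published with a happens-before edge

def reader():
    ready.wait()              # synchronizes-with ready.set()
    seen.append(buf[0])       # guaranteed to observe the data write

t_r = threading.Thread(target=reader)
t_w = threading.Thread(target=writer)
t_r.start()
t_w.start()
t_r.join()
t_w.join()
assert seen == [1]
```

Unlike the raw-flag version, the reader here can never observe the flag without also observing the data write, because the Event provides the ordering guarantee the shared-memory flag lacks.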
CC @youkaichao @nvpohanh
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29389 | open | [
"bug"
] | 2025-11-25T09:25:52Z | 2025-11-25T09:25:52Z | 0 | nvjullin |
pytorch/pytorch | 169,050 | [Graph Partition] [Inductor] UnboundLocalError: cannot access local variable 'buf271' where it is not associated with a value | ### 🐛 Describe the bug
Using "reduce-overhead" mode and the "inductor" backend for training, with `torch._inductor.config.graph_partition = True`, I run into an Inductor codegen bug:
```
[rank0]: File "/home/tiger/.local/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 1044, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/tiger/.local/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1130, in forward
[rank0]: return compiled_fn(full_args)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/tiger/.local/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 339, in runtime_wrapper
[rank0]: all_outs = call_func_at_runtime_with_args(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/tiger/.local/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py", line 129, in call_func_at_runtime_with_args
[rank0]: out = normalize_as_list(f(args))
[rank0]: ^^^^^^^
[rank0]: File "/home/tiger/.local/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py", line 103, in g
[rank0]: return f(*args)
[rank0]: ^^^^^^^^
[rank0]: File "/home/tiger/.local/lib/python3.11/site-packages/torch/autograd/function.py", line 581, in apply
[rank0]: return super().apply(*args, **kwargs) # type: ignore[misc]
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/tiger/.local/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2118, in forward
[rank0]: fw_outs = call_func_at_runtime_with_args(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/tiger/.local/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py", line 129, in call_func_at_runtime_with_args
[rank0]: out = normalize_as_list(f(args))
[rank0]: ^^^^^^^
[rank0]: File "/home/tiger/.local/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 526, in wrapper
[rank0]: return compiled_fn(runtime_args)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/tiger/.local/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 690, in inner_fn
[rank0]: unwrapped_outs = compiled_fn(unwrapped_args)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/tiger/.local/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 724, in inner_fn
[rank0]: outs = compiled_fn(args)
[rank0]: ^^^^^^^^^^^^^^^^^
[rank0]: File "/home/tiger/.local/lib/python3.11/site-packages/torch/_inductor/output_code.py", line 613, in __call__
[rank0]: return self.current_callable(inputs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/tiger/.local/lib/python3.11/site-packages/torch/_inductor/utils.py", line 3017, in run
[rank0]: out = model(new_inputs)
[rank0]: ^^^^^^^^^^^^^^^^^
[rank0]: File "/tmp/torchinductor_tiger/tmpngii2htx/na/cnabkmabktacecyr75a7sgnkip7pjfcd672lse2ndmzilbphpxxh.py", line 5071, in call
[rank0]: partition1_args = [buf301, buf305, buf306, primals_42, buf311, buf286, primals_45, buf294, buf271, s54, u0, u1]
[rank0]: ^^^^^^
[rank0]: UnboundLocalError: cannot access local variable 'buf271' where it is not associated with a value
```
### Versions
Collecting environment information...
PyTorch version: 2.9.1+cu129
Is debug build: False
CUDA used to build PyTorch: 12.9
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14+deb12u1) 12.2.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.36
Python version: 3.11.2 (main, Apr 28 2025, 14:11:48) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-5.15.152.bsk.10-amd64-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: 12.9.86
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA H800
Nvidia driver version: 535.261.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.11.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.11.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.11.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.11.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.11.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.11.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.11.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.11.0
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byt | https://github.com/pytorch/pytorch/issues/169050 | open | [
"triaged",
"module: cuda graphs",
"oncall: pt2",
"module: inductor"
] | 2025-11-25T08:29:02Z | 2025-12-01T22:19:24Z | null | wmhst7 |
vllm-project/vllm | 29,382 | [Doc]: Expert Parallel Deployment says "Tensor parallel size (always 1 for now)" is confusing | ### 📚 The doc issue
On page https://docs.vllm.ai/en/latest/serving/expert_parallel_deployment/#single-node-deployment it says tensor parallel size can only be 1, but doesn't mention the behavior of the attention layers.
On page https://docs.vllm.ai/en/latest/serving/data_parallel_deployment/ it says "The expert layers will by default form a (DP x TP) sized tensor parallel group. To enable expert parallelism, include the --enable-expert-parallel CLI arg (on all nodes in the multi-node case).",
which is rather confusing.
### Suggest a potential alternative/fix
Point out the correct behavior of MoE models when TP and EP are both set.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29382 | closed | [
"documentation"
] | 2025-11-25T07:54:42Z | 2025-12-13T17:38:01Z | 0 | xeonliu |
huggingface/transformers | 42,375 | SAM3 single image inference with multiple text prompt | Hi
I'm trying to run inference on a single image, aiming to get the bbox of objects from several different categories (e.g. "a person" and "a car").
The only example I found for prompting with multiple categories is the "Batched Inference with Text Prompts" example, but then I need to unnecessarily duplicate my image once per category.
Is there a different, more efficient way of achieving this?
P.S.
When I try prompting with a list of several categories and a single image, I get an error.
| https://github.com/huggingface/transformers/issues/42375 | open | [] | 2025-11-25T06:20:09Z | 2026-01-05T16:16:01Z | 9 | iariav |
pytorch/pytorch | 169,035 | [Question] Why torch.ops.symm_mem.multimem_all_reduce_() don't support e4m3, e5m2, fp16? | ### 🚀 The feature, motivation and pitch
Hi PyTorch developers,
Is there any reason why torch.ops.symm_mem.multimem_all_reduce_() doesn't support e4m3, e5m2, or fp16? From the CUDA PTX doc https://docs.nvidia.com/cuda/parallel-thread-execution/#data-movement-and-conversion-instructions-multimem, those data types are supported in multimem.ld_reduce. From the latest NCCL code https://github.com/NVIDIA/nccl/blob/master/src/device/symmetric/generate.py#L54, NCCL also supports multimem.ld_reduce-based fp8 & fp16.
It seems like enabling those data types doesn't require much engineering effort. My guess is there's likely some accuracy issue the PyTorch folks have found that blocks fp16/e5m2/e4m3 integration? Can we get more info on this? Also, should we expect torch symmetric memory to support fp16 & fp8 in the near future?
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci | https://github.com/pytorch/pytorch/issues/169035 | open | [
"oncall: distributed",
"module: symm_mem"
] | 2025-11-25T02:39:22Z | 2025-11-26T15:00:34Z | 0 | XiaoSong9905 |
pytorch/pytorch | 169,033 | Pytorch CI is partially paused for the time being (updated 11/27) | ## Current Status
*ongoing*. Linux and Windows runners are re-enabled as of 12pm 11/27. Mac runners and ROCM/H100 still disabled.
## Error looks like
*No CI was running at all. No merges were processed.*
## Incident timeline (all times pacific)
*Include when the incident began, when it was detected, mitigated, root caused, and finally closed.*
## User impact
*How does this affect users of PyTorch CI?*
## Root cause
*What was the root cause of this issue?*
## Mitigation
*How did we mitigate the issue?*
## Prevention/followups
*How do we prevent issues like this in the future?*
cc @seemethere @pytorch/pytorch-dev-infra | https://github.com/pytorch/pytorch/issues/169033 | closed | [
"module: ci",
"triaged"
] | 2025-11-25T01:57:30Z | 2025-12-07T20:08:54Z | 3 | malfet |
huggingface/trl | 4,569 | [doc issue] doc on "GRPO with replay buffer" buggy | ### Reproduction
The code example in [doc for "GRPO with replay buffer"](https://huggingface.co/docs/trl/main/en/experimental#grpo-with-replay-buffer) is kind of buggy.
- It imports `GRPOWithReplayBufferTrainer` but never uses it.
- It uses `GRPOWithReplayBufferConfig` but never imports it.
- The code is apparently not executable.
Below is the code example given in the doc:
```python
from trl.experimental.grpo_with_replay_buffer import GRPOWithReplayBufferTrainer
from datasets import load_dataset
dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train")
# Guarantee that some rewards have 0 std
def custom_reward_func(completions, **kwargs):
if torch.rand(1).item() < 0.25:
return [0] * len(completions) # simulate some None rewards
else:
return torch.rand(len(completions)).tolist()
training_args = GRPOWithReplayBufferConfig(
output_dir=self.tmp_dir,
learning_rate=1e-4,
per_device_train_batch_size=4,
num_generations=4,
max_completion_length=8,
replay_buffer_size=8,
report_to="none",
)
trainer = GRPOTrainer(
model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5",
reward_funcs=[custom_reward_func],
args=training_args,
train_dataset=dataset,
)
previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()}
trainer.train()
```
### System Info
NA
### Checklist
- [x] I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue))
- [x] I have included my system information
- [x] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
- [x] Any code provided is properly formatted in code blocks, (no screenshot, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
- [x] Any traceback provided is complete | https://github.com/huggingface/trl/issues/4569 | closed | [
"🐛 bug",
"📚 documentation",
"🏋 GRPO"
] | 2025-11-25T01:30:28Z | 2025-11-25T21:28:00Z | 2 | DNXie |
pytorch/pytorch | 169,002 | Torch dynamo fails to do proper type promotion during export | ### 🐛 Describe the bug
When I tried to use torch.where with a boolean tensor, a float, and an int, torch dynamo tripped up on type promotion and gave me a really unclear error message about what was wrong. When I explicitly converted the int input to float, it worked. Can we develop proper type promotion in the tracer internally?
Error message:
```
Exporting to ONNX with dynamo=True...
W1124 11:47:57.487000 1846666 miniconda3/envs/py310/lib/python3.10/site-packages/torch/onnx/_internal/exporter/_compat.py:114] Setting ONNX exporter to use operator set version 18 because the requested opset_version 17 is a lower version than we have implementations for. Automatic version conversion will be performed, which may not be successful at converting to the requested version. If version conversion is unsuccessful, the opset version of the exported model will be kept at 18. Please consider setting opset_version >=18 to leverage latest ONNX features
[torch.onnx] Obtain model graph for `TestModel()` with `torch.export.export(..., strict=False)`...
[torch.onnx] Obtain model graph for `TestModel()` with `torch.export.export(..., strict=False)`... ✅
[torch.onnx] Run decomposition...
[torch.onnx] Run decomposition... ❌
Traceback (most recent call last):
File "/home/aboubezari/miniconda3/envs/py310/lib/python3.10/site-packages/torch/onnx/_internal/exporter/_core.py", line 1416, in export
decomposed_program = _prepare_exported_program_for_export(
File "/home/aboubezari/miniconda3/envs/py310/lib/python3.10/site-packages/torch/onnx/_internal/exporter/_core.py", line 984, in _prepare_exported_program_for_export
_fx_passes.insert_type_promotion_nodes(graph_module)
File "/home/aboubezari/miniconda3/envs/py310/lib/python3.10/site-packages/torch/onnx/_internal/exporter/_fx_passes.py", line 28, in insert_type_promotion_nodes
passes.InsertTypePromotion(module).run()
File "/home/aboubezari/miniconda3/envs/py310/lib/python3.10/site-packages/torch/onnx/_internal/fx/_pass.py", line 235, in run
return self._run(*args, **kwargs)
File "/home/aboubezari/miniconda3/envs/py310/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py", line 1666, in _run
self.interpreter.run(*fake_args)
File "/home/aboubezari/miniconda3/envs/py310/lib/python3.10/site-packages/torch/fx/interpreter.py", line 174, in run
self.env[node] = self.run_node(node)
File "/home/aboubezari/miniconda3/envs/py310/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py", line 1583, in run_node
self._maybe_promote_node(n, rule)
File "/home/aboubezari/miniconda3/envs/py310/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py", line 1564, in _maybe_promote_node
self._rerun_node_after_type_promotion(node, type_promotion_info.out_dtype)
File "/home/aboubezari/miniconda3/envs/py310/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py", line 1389, in _rerun_node_after_type_promotion
node.target = find_compatible_op_overload(target.overloadpacket, args, kwargs)
File "/home/aboubezari/miniconda3/envs/py310/lib/python3.10/site-packages/torch/onnx/_internal/fx/passes/type_promotion.py", line 1318, in find_compatible_op_overload
assert new_op_overload.overloadpacket == op, (
AssertionError: Expected same OpOverload packet, got prim.device != aten.where
```
Reproduce:
```python
import os
import torch
# Disable CUDA to match the user's environment
os.environ['CUDA_VISIBLE_DEVICES'] = ''
torch.cuda.is_available = lambda: False
class TestModel(torch.nn.Module):
"""Simple model that reproduces the torch.where type promotion issue."""
def __init__(self):
super().__init__()
def forward(self, attention_mask):
"""Forward pass that uses torch.where with scalar arguments.
"""
# This fails with Expected same OpOverload packet, got prim.device != aten.where
attention_mask = torch.where(attention_mask, 0, -1000.0)
# This works!!
# attention_mask = torch.where(attention_mask, float(0), -1000.0)
return attention_mask
"""Main function to run the reproduction."""
print("Creating model...")
model = TestModel()
model.eval()
model = model.cpu()
# Shape: [batch, num_heads, seq_len, seq_len] or similar 4D shape
print("Creating sample inputs...")
attention_mask = torch.randn(1, 1, 1505, 1505) > 0 # 4D boolean tensor
attention_mask = attention_mask.cpu()
print(f"Attention mask shape: {attention_mask.shape}")
print(f"Attention mask dtype: {attention_mask.dtype}")
# Test forward pass first
print("\nTesting forward pass...")
with torch.no_grad():
output = model(attention_mask)
print(f"Forward pass successful. Output shape: {output.shape}")
print(f"Output dtype: {output.dtype}")
# Export to ONNX with dynamo=True to trigger type promotion pass
print("\nExporting to ONNX with dynamo=True...")
onnx_path = "where_reproduce.onnx"
torch.onnx.export(
model,
(atten | https://github.com/pytorch/pytorch/issues/169002 | open | [
"oncall: pt2",
"oncall: export"
] | 2025-11-24T19:51:33Z | 2025-12-02T20:20:47Z | 1 | aboubezari |
pytorch/pytorch | 169,000 | Dr CI is temporarily not working due to API firewall |
## Current Status
ongoing
## Incident timeline (all times pacific)
Since Nov 21st, 2025
## User impact
*How does this affect users of PyTorch CI?*
The jobs and PRs that depend on Dr CI will see no updates.
## Root cause
*What was the root cause of this issue?*
We changed the configuration of our firewall; this change affected all bot jobs and can cause bots to have failed API calls.
## Mitigation
*How did we mitigate the issue?*
Currently the dev infra team is working on fixing it.
| https://github.com/pytorch/pytorch/issues/169000 | closed | [
"ci: sev"
] | 2025-11-24T19:22:26Z | 2025-12-01T22:13:09Z | 3 | yangw-dev |
pytorch/pytorch | 168,993 | [CI][B200] DGXB200-07 Is Having NVIDIA-CONTAINER-TOOLKIT Related Issues | ## Current Status
On-going
## Error looks like
Only affecting periodic jobs, not PR blocking.
Errors are like (using https://github.com/pytorch/pytorch/actions/runs/19630438757/job/56210849037 as an example):
```text
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running prestart hook #0: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: detection error: driver rpc error: timed out: unknown
```
## Incident timeline (all times pacific)
First noticed this from this job: https://github.com/pytorch/pytorch/actions/runs/19626056398/job/56197556152
Which was Nov 23rd 10pm.
## User impact
*How does this affect users of PyTorch CI?*
Commits landed in trunk may be run with the B200 periodic job, with a 1/3 chance that the job lands on the dgxb200-07 runner, which is the broken one.
## Root cause
*What was the root cause of this issue?*
Not root-caused yet. But only dgxb200-07 is affected.
## Mitigation
*How did we mitigate the issue?*
To be figured out.
## Prevention/followups
*How do we prevent issues like this in the future?*
To be figured out.
cc @ptrblck @msaroufim @eqy @jerryzh168 @tinglvv @seemethere @malfet @pytorch/pytorch-dev-infra @atalman @huydhn | https://github.com/pytorch/pytorch/issues/168993 | closed | [
"module: cuda",
"module: ci",
"triaged"
] | 2025-11-24T18:35:16Z | 2025-12-02T19:18:32Z | 2 | nWEIdia |
pytorch/pytorch | 168,965 | max_autotuned BMM produces wrong result when multiple threads are used | ### 🐛 Describe the bug
I noticed that when I use aoti_compile_and_package with max_autotune, in certain conditions the result is wrong. Specifically:
1. It's important to `set_num_threads(4)`. With 1 thread it doesn't reproduce
2. It's important to do `import cv2`; without it the bug doesn't reproduce
3. Adding `os.environ['OPENCV_FOR_OPENMP_DYNAMIC_DISABLE'] = '1'` before the import fixes the issue
My explanation of this behavior is that the code produced by max_autotune looks like this:
```cpp
void cpp_CppMicroGemmFP32Vec_threaded_mm(const float* X, const float* W, float* Y, const int64_t ks_b_index)
...
#pragma omp parallel num_threads(4)
{
const int tid = omp_get_thread_num();
const int64_t k_group_id = tid / num_Kt_blocks;
const int64_t k_slice_id = tid % num_Kt_blocks;
...
```
and the code relies on this block really being executed 4 times in parallel. But if you call `omp_set_dynamic`, OpenMP can ignore this thread hint and run the code fewer times, which leads to wrong results; this behavior is documented [here](https://www.openmp.org/spec-html/5.0/openmpsu35.html#x55-860002.6.1). Unfortunately, omp_set_dynamic is called while I'm importing the `cv2` library, specifically [here](https://github.com/opencv/opencv/blob/4.x/modules/core/src/parallel.cpp#L470), just when the shared library is loaded.
So, I think it should be fixed somehow to not depend on this kind of OMP behavior, and maybe even use at::parallel_for instead, because different parallelizing backends can be enabled, not necessarily OpenMP.
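As a pure-Python analogy of that invariant (illustrative only, not the actual Inductor kernel): each requested thread id owns exactly one K-slice of the reduction, so if the runtime dynamically launches fewer threads than requested, the unexecuted slices are silently dropped from the result:

```python
def threaded_reduce(k_slices, executed_tids, num_threads):
    """Each thread id reduces exactly one K-slice; nothing re-checks coverage."""
    assert num_threads == len(k_slices)
    total = 0.0
    for tid in executed_tids:  # thread ids the runtime actually ran
        total += k_slices[tid]
    return total

k_slices = [1.0, 2.0, 3.0, 4.0]
# All 4 requested threads run -> correct result
print(threaded_reduce(k_slices, range(4), 4))  # 10.0
# Dynamic teams let the runtime run only 2 -> silently wrong
print(threaded_reduce(k_slices, range(2), 4))  # 3.0
```

In the real kernel the dropped slices would be partial GEMM accumulations, which is consistent with a silently wrong BMM result rather than a crash.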
[This](https://colab.research.google.com/drive/1fDz0ZcDbYhluSTQ-ldPcZebS65YPP5KX?usp=sharing) notebook should reproduce the bug, but I didn't manage to do it in Colab because there max_autotune chooses a different implementation and the PyTorch version is also different.
[data.zip](https://github.com/user-attachments/files/23722728/data.zip)
On PyTorch 2.9 it doesn't reproduce, but I noticed that the generated code uses different constants. Maybe the layout of input tensors in BMM has changed, so the bug isn't triggered, but the code still relies on the invariant that the actually executed count is equal to the N in `#pragma omp parallel num_threads(N)`
### Error logs
_No response_
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.3 LTS (aarch64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-134-generic-aarch64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L40S
Nvidia driver version: 550.127.05
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: ARM
Model name: Neoverse-N1
Model: 1
Thread(s) per core: 1
Core(s) per cluster: 128
Socket(s): -
Cluster(s): 1
Stepping: r3p1
Frequency boost: disabled
CPU(s) scaling MHz: 41%
CPU max MHz: 3000.0000
CPU min MHz: 1000.0000
BogoMIPS: 50.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp
L1d cache: 8 MiB (128 instances)
L1i cache: 8 MiB (128 instances)
L2 cache: 128 MiB (128 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected | https://github.com/pytorch/pytorch/issues/168965 | open | [
"triaged",
"module: correctness (silent)",
"oncall: pt2",
"oncall: export",
"oncall: cpu inductor",
"module: aotinductor"
] | 2025-11-24T12:41:52Z | 2025-12-11T12:23:10Z | 6 | mstebelev |
vllm-project/vllm | 29,306 | [Usage]: dots.llm.inst is not running due to a type error | ### Your current environment
I'm trying to run dots llm on 4xH100
```
vllm serve \
--uvicorn-log-level=info \
rednote-hilab/dots.llm1.inst \
--dtype auto \
--api-key xxx \
--host 0.0.0.0 \
--port 8000 \
--tensor-parallel-size 4 \
--ipc=host \
--trust-remote-code
```
It failed to run, and I got the following crash:
```text
(EngineCore_DP0 pid=10684) ERROR 11-24 09:41:25 [v1/executor/multiproc_executor.py:230] Worker proc VllmWorker-1 died unexpectedly, shutting down executor.
(EngineCore_DP0 pid=10684) Process EngineCore_DP0:
(EngineCore_DP0 pid=10684) Traceback (most recent call last):
(EngineCore_DP0 pid=10684) File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
(EngineCore_DP0 pid=10684) self.run()
(EngineCore_DP0 pid=10684) File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
(EngineCore_DP0 pid=10684) self._target(*self._args, **self._kwargs)
(EngineCore_DP0 pid=10684) File "/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 846, in run_engine_core
(EngineCore_DP0 pid=10684) raise e
(EngineCore_DP0 pid=10684) File "/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 833, in run_engine_core
(EngineCore_DP0 pid=10684) engine_core = EngineCoreProc(*args, **kwargs)
(EngineCore_DP0 pid=10684) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=10684) File "/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 606, in __init__
(EngineCore_DP0 pid=10684) super().__init__(
(EngineCore_DP0 pid=10684) File "/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 109, in __init__
(EngineCore_DP0 pid=10684) num_gpu_blocks, num_cpu_blocks, kv_cache_config = self._initialize_kv_caches(
(EngineCore_DP0 pid=10684) ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=10684) File "/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 231, in _initialize_kv_caches
(EngineCore_DP0 pid=10684) available_gpu_memory = self.model_executor.determine_available_memory()
(EngineCore_DP0 pid=10684) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=10684) File "/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/executor/abstract.py", line 126, in determine_available_memory
(EngineCore_DP0 pid=10684) return self.collective_rpc("determine_available_memory")
(EngineCore_DP0 pid=10684) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=10684) File "/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 358, in collective_rpc
(EngineCore_DP0 pid=10684) return aggregate(get_response())
(EngineCore_DP0 pid=10684) ^^^^^^^^^^^^^^
(EngineCore_DP0 pid=10684) File "/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 341, in get_response
(EngineCore_DP0 pid=10684) raise RuntimeError(
(EngineCore_DP0 pid=10684) RuntimeError: Worker failed with error 'TypeError: can't multiply sequence by non-int of type 'float'
(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] EngineCore failed to start.
(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] Traceback (most recent call last):
(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] File "/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 833, in run_engine_core
(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] engine_core = EngineCoreProc(*args, **kwargs)
(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] File "/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 606, in __init__
(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] super().__init__(
(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] File "/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 109, in __init__
(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] num_gpu_blocks, num_cpu_blocks, kv_cache_config = self._initialize_kv_caches(
(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] File "/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 231, in _initialize_kv_caches
(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] available_gpu_memory = self.model_executor.determine_available_memory()
(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] | https://github.com/vllm-project/vllm/issues/29306 | closed | [
"usage"
] | 2025-11-24T09:48:08Z | 2025-11-28T23:25:27Z | 1 | rain-1 |
pytorch/torchtitan | 2,077 | Context Parallel for Qwen3 | Thanks for supporting Qwen3 models!
> CP is not supported currently because of RoPE embedding implementation details.
Any plan to support CP + EP for Qwen3 MoE models?
If there is no plan in the short term, can you help guide how I can implement it myself?
"high priority",
"triage review"
] | 2025-11-24T08:09:30Z | 2025-12-15T23:56:00Z | 8 | unavailableun |
huggingface/transformers | 42,353 | SAM3 point mode is not supported yet? | In [SAM3 official example](https://github.com/facebookresearch/sam3/blob/main/examples/sam3_for_sam1_task_example.ipynb
), they also support point mode. But it seems that Transformers has not supported it yet?
| https://github.com/huggingface/transformers/issues/42353 | closed | [] | 2025-11-24T07:16:52Z | 2025-11-26T15:16:25Z | 1 | haofanwang |
pytorch/executorch | 15,956 | [QNN] Support for in-place modification of mutable buffers (weights) within the QNN delegate? | ### 🚀 The feature, motivation and pitch
### Description
I am working on a model where certain buffers (serving as weights) are updated in-place during the `forward` pass (e.g., zero-order optimization algorithm).
I attempted to export this model and lower it to the QNN backend. My goal is to have the entire graph, including the weight update logic, executed on the QNN backend to avoid context switching between CPU and NPU.
### Current Behavior
Currently, it seems that:
1. The partitioner either rejects the node performing the mutation (fallback to CPU).
2. Or, if forced, the compiled binary does not reflect the updated weights in subsequent runs (weights are treated as static constants baked into the context binary).
### Question / Request
1. **Is there native support in the QNN backend** to handle mutable buffers that are modified inside the delegated graph?
2. If not, is the only recommended workaround to **lift the buffers to graph inputs/outputs** (managing state on the CPU)?
3. Are there any specific compiler specs or flags (e.g., `take_over_mutable_buffer` equivalent for QNN) that I should be enabling?
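To make the idea in question (2) concrete, here is a hypothetical, torch-free sketch of the lifting pattern: the mutable buffer becomes an explicit input and output of a purely functional forward, and the host keeps the state between calls (the names and the scalar math are illustrative only, not QNN API):

```python
def forward_functional(weight, x):
    """Functional version of the in-place update: return the output AND the
    updated weight instead of mutating a buffer inside the graph."""
    new_weight = [w + 0.01 for w in weight]          # was: dynamic_weight.add_(0.01)
    y = sum(w * xi for w, xi in zip(new_weight, x))  # stand-in for F.linear
    return y, new_weight

state = [1.0, 2.0]  # host-managed state between runs
y, state = forward_functional(state, [1.0, 1.0])
print(round(y, 2))  # 3.02
```

With this shape the delegated graph never mutates anything; the runtime just feeds the previous output state back in as the next input.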
### Minimal Reproducible Example (MRE)
Here is a simplified version of the logic:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from executorch.backends.qualcomm.partition.qnn_partitioner import QnnPartitioner
# ... other imports ...
class MutableModel(nn.Module):
def __init__(self):
super().__init__()
# Registering a buffer that acts like a weight
self.register_buffer("dynamic_weight", torch.empty(10, 10))
def forward(self, x):
# Update the weight in-place during inference
self.dynamic_weight.add_(0.01)
# Use the updated weight for computation
out = F.linear(x, self.dynamic_weight)
return out
# Standard export and lowering flow...
# ...
```
### Alternatives
Modify the QNN backend kernel to support weight updates
### Additional context
_No response_
### RFC (Optional)
_No response_ | https://github.com/pytorch/executorch/issues/15956 | closed | [] | 2025-11-24T06:07:43Z | 2025-11-24T08:40:16Z | 0 | qqqqqqqwy |
vllm-project/vllm | 29,297 | [Bug]: What should the image embedding input be like? I have tested with multiple cases but it all fails | ### Your current environment
```text
==============================
System Info
==============================
OS : Red Hat Enterprise Linux release 8.10 (Ootpa) (x86_64)
GCC version : (GCC) 8.5.0 20210514 (Red Hat 8.5.0-26)
Clang version : Could not collect
CMake version : Could not collect
Libc version : glibc-2.28
==============================
PyTorch Info
==============================
PyTorch version : 2.9.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.10.19 (main, Oct 21 2025, 16:43:05) [GCC 11.2.0] (64-bit runtime)
Python platform : Linux-4.18.0-553.50.1.el8_10.x86_64-x86_64-with-glibc2.28
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : Could not collect
CUDA_MODULE_LOADING set to :
GPU models and configuration :
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
Nvidia driver version : 575.51.03
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7742 64-Core Processor
Stepping: 0
CPU MHz: 2250.000
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
BogoMIPS: 4491.72
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
NUMA node2 CPU(s): 32-47
NUMA node3 CPU(s): 48-63
NUMA node4 CPU(s): 64-79
NUMA node5 CPU(s): 80-95
NUMA node6 CPU(s): 96-111
NUMA node7 CPU(s): 112-127
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
==============================
Versions of relevant libraries
==============================
[pip3] flashinfer-python==0.5.2
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cudnn-frontend==1.16.0
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-cufile-cu12==1.13.1.3
[pip3] nvidia-curand-cu12==10.3.9.90
[pip3] nvidia-cusolver-cu12==11.7.3.90
[pip3] nvidia-cusparse-cu12==12.5.8.93
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvidia-cutlass-dsl==4.3.0
[pip3] nvidia-ml-py==13.580.82
[pip3] nvidia-nccl-cu12==2.27.5
[pip3] nvidia-nvjitlink-cu12==12.8.93
[pip3] nvidia-nvshmem-cu12==3.3.20
[pip3] nvidia-nvtx-cu12==12.8.90
[pip3] pyzmq==27.1.0
[pip3] torch==2.9.0
[pip3] torchaudio==2.9.0
[pip3] torchvision==0.24.0
[pip3] transformers==4.57.1
[pip3] triton==3.5.0
[conda] flashinfer-python 0.5.2 pypi_0 pypi
[conda] numpy 2.2.6 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.8.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.8.90 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.8.93 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.8.90 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.10.2.21 pypi_0 pypi
[conda] nvidia-cudn | https://github.com/vllm-project/vllm/issues/29297 | closed | [
"usage"
] | 2025-11-24T06:02:09Z | 2025-11-26T13:00:17Z | 2 | DamonZhao-sfu |
vllm-project/vllm | 29,294 | [CPU Backend] [Doc]: Update Installation Docs for Arm CPUs | ### 📚 The doc issue
This page https://docs.vllm.ai/en/stable/getting_started/installation/cpu/#arm-aarch64 is very out-dated.
We now release Arm CPU wheels and images thanks to #26931 and #27331
We need to update that page to reflect that :)
### Suggest a potential alternative/fix
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29294 | closed | [
"documentation",
"cpu"
] | 2025-11-24T05:33:46Z | 2025-12-15T19:46:26Z | 5 | fadara01 |
pytorch/executorch | 15,954 | qnn_llama_runner on SA8295 outputs repetitive “sp” with Qwen3-1.7B after ExecuTorch export | ### 🐛 Describe the bug
Using main commit b4d72f1e271915e9c0e1d313753a1eec840fbdee.
I have tried several settings; with other settings the conversion fails, with errors such as:
- `some op has incorrect Value 68, expected >= 73`
- `[ERROR] [Qnn ExecuTorch]: fa_alloc.cc:2462::ERROR:graph requires estimated allocation of 2315388 KB, limit is 2097152 KB [ERROR] [Qnn ExecuTorch]: graph_prepare.cc:845::ERROR:error during serialize: memory usage too large`
When using default_quant_dtype = QuantDtype.use_8a8w and disabling the 16a4w_block quantization, the quantization/conversion completes successfully
```python
class Qwen3_1_7BQuantRecipe(StaticLLMQuantRecipe):
default_quant_dtype = QuantDtype.use_8a8w
def __init__(self, verbose: bool = False):
super().__init__()
self.recipe = (
QuantRecipe(
self.default_quant_dtype,
False,
act_observer=MinMaxObserver,
granularity=QuantGranularity.PER_TENSOR,
verbose=verbose,
)
.add_regex(
{
r"output\.conv",
},
QuantDtype.use_16a8w,
False,
act_observer=MinMaxObserver,
granularity=QuantGranularity.PER_CHANNEL,
)
)
self.recipe.custom_quant_annotations.append(annotate_kv_8bit)
```
However, when running qnn_llama_runner with Qwen3-1.7B converted via ExecuTorch (hybrid QNN .pte) on a Qualcomm SA8295 device, the model generates a long sequence of “sp”:
```text
<|im_start|>user
what is 1+1<|im_end|>
<|im_start|>assistant.addHandlertoHaveBeenCalled sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp sp (the "sp" token repeats for the rest of the generation)
```
I hope to get your help or suggestions. Thanks very much.
### Versions
commit b4d72f1e271915e9c0e1d313753a1eec840fbdee
cc @cccclai @winskuo-quic @shewu-quic @haowhsu-quic @DannyYuyang-quic @cbilgin | https://github.com/pytorch/executorch/issues/15954 | closed | [
"partner: qualcomm",
"module: qnn"
] | 2025-11-24T03:28:00Z | 2025-12-04T03:41:00Z | 12 | lansexinhu |
pytorch/pytorch | 168,940 | [DTensor] aten.max.dim returns wrong indices when using DTensor | ### 🐛 Describe the bug
I found that the current strategy for `aten.max.dim` may produce incorrect index output if the dim being maximized over is sharded.
Sample code:
```python
import torch
from torch.distributed.tensor import distribute_tensor, Shard
from torch.testing._internal.common_utils import run_tests
from torch.testing._internal.distributed._tensor.common_dtensor import DTensorTestBase, with_comms
class TestRegisterSharding(DTensorTestBase):
@with_comms
def test_max_dim(self):
mesh = self.build_device_mesh()
x = torch.randn(4, 4, device="cuda")
max_value, max_indices = torch.max(x, dim=1)
dist_x = distribute_tensor(x, mesh, [Shard(1)])
dist_max_value, dist_max_indices = torch.max(dist_x, dim=1)
print("x:", x)
print("max_value:", max_value)
print("max_indices:", max_indices)
print("dist_max_value:", dist_max_value.full_tensor())
print("dist_max_indices:", dist_max_indices.full_tensor())
if __name__ == "__main__":
run_tests()
```
Result:
```python
x: tensor([[-1.6165, 0.5685, -0.5102, -0.9113],
[-1.1555, -0.2262, -1.2891, 1.0654],
[-0.7167, -0.5333, 0.2078, -0.9798],
[ 0.7447, -0.2395, 0.2737, 0.0920]], device='cuda:0')
max_value: tensor([0.5685, 1.0654, 0.2078, 0.7447], device='cuda:0')
max_indices: tensor([1, 3, 2, 0], device='cuda:0')
dist_max_value: tensor([0.5685, 1.0654, 0.2078, 0.7447], device='cuda:0')
dist_max_indices: tensor([0, 0, 0, 0], device='cuda:0')
```
Each rank gets a shape (4, 1) local tensor on which to call `max.dim` in this case, and the local max-indices result is [0, 0, 0, 0]. The framework doesn't apply the shard offset to the indices, which leads to an incorrect global result when the relevant dim is sharded.
Is there a good way to implement a strategy that supports sharding the index dim?
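One possible fix direction, sketched here in plain Python rather than as actual DTensor strategy code: add each rank's shard offset to its local argmax before comparing across ranks. The function name `global_max_with_index` and the list-of-shards input layout are illustrative assumptions, not vLLM/PyTorch API:

```python
# Plain-Python sketch of offset-corrected max-with-index across shards.
# Each entry of `shards` is one rank's local slice of a single row,
# listed in rank order along the sharded (reduction) dim.

def global_max_with_index(shards):
    """Return (max_value, global_index) over concatenated shards."""
    best_val, best_idx, offset = None, None, 0
    for local in shards:
        for i, v in enumerate(local):
            # The key step: offset the local index by the start
            # position of this rank's shard in the global dim.
            if best_val is None or v > best_val:
                best_val, best_idx = v, offset + i
        offset += len(local)
    return best_val, best_idx
```

In DTensor terms this would mean the strategy has to know each rank's shard offset along the reduced dim and add it to the local indices before the cross-rank reduction picks the winner.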
### Versions
torch v2.9.0
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci @tianyu-l @XilunWu @SherlockNoMad | https://github.com/pytorch/pytorch/issues/168940 | open | [
"oncall: distributed",
"module: dtensor"
] | 2025-11-24T02:36:58Z | 2025-12-12T14:40:32Z | 11 | qqq6op |
vllm-project/vllm | 29,286 | [Performance]: cache system prompt token ids | ### Proposal to improve performance
As system prompts can be very long now, tokenizing the system prompt can be slow.
On an H20, tokenizing 5,000 tokens costs about 10 ms, as shown below:

System prompts are usually fixed and reusable, so caching the system prompt token ids can be profitable.
Specifically:
1. In the **apply_hf_chat_template** method we can separate the system prompt from the other prompts; we can use the condition **cache_system_prompt = truncate_prompt_tokens is None and not tokenize and len(conversation) > 1 and conversation[0].get("role") == "system"** to decide when we should separate the system prompt.
2. In the **_normalize_prompt_text_to_input** method we check whether the system prompt is in the reusable dict ({system prompt: token ids}); if so, we concatenate the system prompt token ids and the prompt token ids as the final input_ids.
I am willing to contribute this optimization and am looking forward to your suggestions!
### Report of performance regression
The tokenization cost shown above can be saved.
### Misc discussion on performance
_No response_
### Your current environment (if you think it is necessary)
```text
The output of `python collect_env.py`
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29286 | open | [
"performance"
] | 2025-11-24T01:55:32Z | 2025-11-28T08:57:06Z | 2 | Eviannn |
vllm-project/vllm | 29,281 | [Usage]: Removing last generated token from output and kv cache | ### Your current environment
```text
vLLM 0.11.2
```
### How would you like to use vllm
Hey guys,
I am currently working on a research project where I load an MoE-like model and I want to do routing based on the sequence state.
The goal is to let expert 0 generate until it reaches the eos token, then remove the eos token and finish generation with expert 1 until the eos token is hit a second time.
I want to do this to exploit the different strengths of both models.
My current approach is to modify GPUModelRunner and Scheduler to remove the eos token from the output, reduce num_computed_tokens by 1, and compute a static routing tensor based on the sequence state, which I pass as an additional model input to route to expert 0 or 1.
Now I am having some issues with unexpected output, especially with tensor_parallelism>1 on multiple GPUs.
I was wondering if there already is a reliable solution to remove the last generated token from output and kv cache, so that the computation leading to eos does not interfere with the second expert.
Or maybe there is even a better way to do this?
Thank you!
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29281 | closed | [
"usage"
] | 2025-11-23T22:39:16Z | 2025-11-26T09:33:53Z | 0 | josefdra |
vllm-project/vllm | 29,277 | [Usage]: Creating and accessing per request arguments inside vLLM model | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to implement token compression techniques on the output embeddings of Qwen2.5-VL, applied dynamically as the number of requests changes. Is there any way to implement this in vLLM? I see that SamplingParams seems to be the only way to pass per-request custom arguments, but I don't believe it can be accessed within the model code directly?
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29277 | open | [
"usage"
] | 2025-11-23T21:59:31Z | 2025-11-23T21:59:31Z | 0 | minlu21 |
huggingface/transformers | 42,344 | How to fine-tune SAM 3D models? | ### Model description
The recently released SAM 3D work is truly remarkable. Do you plan to integrate it into Transformers and enable fine-tuning?
https://huggingface.co/facebook/sam-3d-objects
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_ | https://github.com/huggingface/transformers/issues/42344 | open | [
"New model"
] | 2025-11-23T17:40:57Z | 2025-11-23T17:40:57Z | null | bruno686 |
vllm-project/vllm | 29,264 | [Usage]: Monkey Patching SamplingParams | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : version 3.28.3
Libc version : glibc-2.39
==============================
PyTorch Info
==============================
PyTorch version : 2.9.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.13.5 | packaged by conda-forge | (main, Jun 16 2025, 08:27:50) [GCC 13.3.0] (64-bit runtime)
Python platform : Linux-6.8.0-87-generic-x86_64-with-glibc2.39
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.93
CUDA_MODULE_LOADING set to :
GPU models and configuration :
GPU 0: NVIDIA B200
GPU 1: NVIDIA B200
GPU 2: NVIDIA B200
GPU 3: NVIDIA B200
GPU 4: NVIDIA B200
GPU 5: NVIDIA B200
GPU 6: NVIDIA B200
GPU 7: NVIDIA B200
Nvidia driver version : 570.195.03
cuDNN version : Could not collect
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) PLATINUM 8570
CPU family: 6
Model: 207
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 2
CPU(s) scaling MHz: 31%
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities ibpb_exit_to_user
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 600 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI | https://github.com/vllm-project/vllm/issues/29264 | closed | [
"usage"
] | 2025-11-23T11:45:54Z | 2025-11-24T13:03:50Z | 2 | josefdra |
vllm-project/vllm | 29,263 | [Feature]: Enable flash attention (and/or FlashMLA) for AMD GPUs | ### 🚀 The feature, motivation and pitch
In [this page from flash-attention](https://github.com/Dao-AILab/flash-attention?tab=readme-ov-file#amd-rocm-support), I checked that the upstream `flash-attention` currently has composable_kernel (for newer AMD GPUs) and WIP Triton (for older RDNA GPUs, etc.) implementations, as well as [flash MLA](https://github.com/deepseek-ai/FlashMLA?tab=readme-ov-file#amd-instinct).
Is it possible to enable `vllm.vllm_flash_attn._vllm_fa2_C` and more modules for AMD GPUs?
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29263 | closed | [
"feature request",
"rocm"
] | 2025-11-23T11:28:47Z | 2025-12-05T01:54:08Z | 4 | Inokinoki |
vllm-project/vllm | 29,245 | [Usage]: Starting qwen3-vl is extremely slow, while sglang starts quickly. What could be the cause? | ### Your current environment
Even running `python collect_env.py` is very slow; the environment was installed directly with uv.
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.2 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version : version 4.1.2
Libc version : glibc-2.39
==============================
PyTorch Info
==============================
PyTorch version : 2.9.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.3 (main, Jun 18 2025, 17:59:45) [GCC 13.3.0] (64-bit runtime)
Python platform : Linux-5.10.134-19.100.al8.x86_64-x86_64-with-glibc2.39
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.9.86
CUDA_MODULE_LOADING set to :
GPU models and configuration :
GPU 0: NVIDIA L20Y
GPU 1: NVIDIA L20Y
GPU 2: NVIDIA L20Y
GPU 3: NVIDIA L20Y
GPU 4: NVIDIA L20Y
GPU 5: NVIDIA L20Y
GPU 6: NVIDIA L20Y
GPU 7: NVIDIA L20Y
Nvidia driver version : 570.148.08
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.10.2
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.10.2
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.10.2
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.10.2
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.10.2
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.10.2
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.10.2
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.10.2
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8468V
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
CPU(s) scaling MHz: 70%
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req hfi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm uintr md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; | https://github.com/vllm-project/vllm/issues/29245 | open | [
"usage"
] | 2025-11-22T20:41:27Z | 2025-12-11T11:23:54Z | 3 | hucorz |
huggingface/candle | 3,208 | `cudarc` dynamic loading support | Currently, `candle` uses `cudarc` with the `dynamic-linking` feature, which requires the executable to find the DLLs or SOs at startup. However, it would be more convenient if `candle` also supported the `dynamic-loading` feature from `cudarc` to load DLLs or SOs at runtime.
Is it possible for `candle` to support it? | https://github.com/huggingface/candle/issues/3208 | open | [] | 2025-11-22T18:18:25Z | 2025-11-25T09:00:27Z | 7 | mayocream |
huggingface/transformers | 42,331 | SAM3 does not support custom inference resolutions | ### System Info
Note: I am running the latest git version; the system info should not be relevant to the issue
$ transformers env
Traceback (most recent call last):
File "/home/master-andreas/panopticon/test_env/bin/transformers", line 3, in <module>
from transformers.cli.transformers import main
File "/home/master-andreas/panopticon/test_env/lib/python3.12/site-packages/transformers/cli/transformers.py", line 23, in <module>
from transformers.cli.serve import Serve
File "/home/master-andreas/panopticon/test_env/lib/python3.12/site-packages/transformers/cli/serve.py", line 351, in <module>
class Serve:
File "/home/master-andreas/panopticon/test_env/lib/python3.12/site-packages/transformers/cli/serve.py", line 658, in Serve
) -> ChatCompletionChunk:
^^^^^^^^^^^^^^^^^^^
NameError: name 'ChatCompletionChunk' is not defined
### Who can help?
@yonigozlan
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```py
"""
Test script for SAM3 text prompting only.
This script demonstrates how to use SAM3 for text-based segmentation on images.
"""
import torch
from PIL import Image
import requests
from transformers import Sam3Processor, Sam3Model
import os
INFERENCE_RESOLUTION = (1008, 1008) # If run with anything else other than 1008 it fails
# INFERENCE_RESOLUTION = (1400, 1400)
def test_sam3_text_prompting():
"""Test SAM3 with text prompting on a sample image."""
# Set device
device = "cpu"
print(f"Using device: {device}")
# Load model and processor
print("Loading SAM3 model and processor...")
model = Sam3Model.from_pretrained("facebook/sam3").to(device)
processor = Sam3Processor.from_pretrained("facebook/sam3")
# Load a sample image
print("Loading sample image...")
image_url = "http://images.cocodataset.org/val2017/000000077595.jpg"
image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")
# Define text prompts to test
text_prompts = ["cat", "ear", "eye"]
for text_prompt in text_prompts:
print(f"\nTesting text prompt: '{text_prompt}'")
# Prepare inputs
inputs = processor(images=image, text=text_prompt, size=INFERENCE_RESOLUTION, return_tensors="pt").to(device)
# Run inference
with torch.no_grad():
outputs = model(**inputs)
# Post-process results
results = processor.post_process_instance_segmentation(
outputs,
threshold=0.5,
mask_threshold=0.5,
target_sizes=inputs.get("original_sizes").tolist()
)[0]
# Display results
num_objects = len(results['masks'])
print(f"Found {num_objects} objects matching '{text_prompt}'")
if num_objects > 0:
# Show scores for first few objects
scores = results['scores']
print(f"Confidence scores: {scores[:min(3, len(scores))].tolist()}")
# Show bounding boxes for first object
if 'boxes' in results and len(results['boxes']) > 0:
box = results['boxes'][0]
print(f"First object bounding box (xyxy): {box.tolist()}")
if __name__ == "__main__":
print("SAM3 Text Prompting Test Script")
print("=" * 40)
try:
test_sam3_text_prompting()
print("\n✓ All tests completed successfully!")
except Exception as e:
print(f"\n✗ Test failed with error: {e}")
raise
```
Output when INFERENCE_RESOLUTION=[1400, 1400]:
```sh
$ py test_sam3_text.py
SAM3 Text Prompting Test Script
========================================
Using device: cpu
Loading SAM3 model and processor...
Loading weights: 100%|█| 1468/1468 [00:00<00:00, 2709.52it/s, Materializing param=vision_encoder.neck.fpn
Loading sample image...
Testing text prompt: 'cat'
✗ Test failed with error: The size of tensor a (10000) must match the size of tensor b (5184) at non-singleton dimension 2
Traceback (most recent call last):
File "/home/master-andreas/panopticon/test_sam3_text.py", line 124, in <module>
test_sam3_text_prompting()
File "/home/master-andreas/panopticon/test_sam3_text.py", line 48, in test_sam3_text_prompting
outputs = model(**inputs)
^^^^^^^^^^^^^^^
File "/home/master-andreas/panopticon/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/panopticon/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/panopticon/test_env/lib/python3.12/site-packages/transformers/utils/generic.py", line 938, in wrapper
| https://github.com/huggingface/transformers/issues/42331 | closed | [
"bug"
] | 2025-11-21T22:17:08Z | 2025-12-10T22:46:39Z | 3 | Kallinteris-Andreas |
huggingface/lerobot | 2,500 | question about the gr00t policy | hi,
I see here https://huggingface.co/docs/lerobot/en/groot that gr00t is integrated into lerobot.
Is it in sync with the original repo, https://github.com/NVIDIA/Isaac-GR00T?
I see in the original repo that the dataset used for fine-tuning is a bit different from the original lerobot format, like the libero dataset (https://huggingface.co/datasets/physical-intelligence/libero) used in the pi model.
Therefore I wonder what dataset format should be used here for lerobot policy training.
Is there an example dataset that is passed to `--dataset.repo_id=$DATASET_ID`?
Is it a post-processed dataset? | https://github.com/huggingface/lerobot/issues/2500 | open | [
"question",
"policies"
] | 2025-11-21T21:45:19Z | 2025-12-03T14:03:34Z | null | yanan1116 |
vllm-project/vllm | 29,192 | Tool Calling Parsers Fail to Populate tool_calls Array for Qwen2.5-Coder Models | # Tool Calling Parsers Fail to Populate `tool_calls` Array for Qwen2.5-Coder Models
## Environment
- **vLLM Version**: v0.11.2.dev115+g56669c1f2 (Blackwell build)
- **Model**: Qwen/Qwen2.5-Coder-14B-Instruct-AWQ
- **Quantization**: AWQ
- **Python Version**: 3.x (Docker container)
- **GPU**: NVIDIA GeForce RTX 5080 (16GB, Blackwell/sm_120)
- **Platform**: WSL2, Linux 6.6.87.2-microsoft-standard-WSL2
## Description
When using tool calling with Qwen2.5-Coder models, the model correctly generates tool calls in `<tools>` XML format, but both `qwen3_xml` and `qwen3_coder` parsers fail to extract these tool calls into the `tool_calls` array in the API response. The tool call information remains in the `content` field but the `tool_calls` array stays empty.
## Steps to Reproduce
1. Start vLLM with Qwen2.5-Coder and tool calling parser:
```bash
python -m vllm.entrypoints.openai.api_server \
--model Qwen/Qwen2.5-Coder-14B-Instruct-AWQ \
--quantization awq \
--enable-auto-tool-choice \
--tool-call-parser qwen3_xml # or qwen3_coder
```
2. Send a tool calling request:
```bash
curl -s http://localhost:8002/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "qwen2.5-coder-14b-awq",
"messages": [{"role": "user", "content": "What is the weather in San Francisco?"}],
"tools": [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get the current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
}
},
"required": ["location"]
}
}
}
],
"tool_choice": "auto"
}'
```
## Actual Output
```json
{
"id": "chatcmpl-xxx",
"object": "chat.completion",
"model": "qwen2.5-coder-14b-awq",
"choices": [
{
"message": {
"role": "assistant",
"content": "<tools>\n{\n \"name\": \"get_weather\",\n \"arguments\": {\n \"location\": \"San Francisco, CA\"\n }\n}\n</tools>",
"tool_calls": []
}
}
]
}
```
## Expected Output
```json
{
"id": "chatcmpl-xxx",
"object": "chat.completion",
"model": "qwen2.5-coder-14b-awq",
"choices": [
{
"message": {
"role": "assistant",
"content": "",
"tool_calls": [
{
"type": "function",
"id": "call_0",
"function": {
"name": "get_weather",
"arguments": "{\"location\": \"San Francisco, CA\"}"
}
}
]
}
}
]
}
```
## Analysis
### Model Output (Correct)
The model correctly generates tool calls in the expected `<tools>` XML format:
```xml
<tools>
{
"name": "get_weather",
"arguments": {
"location": "San Francisco, CA"
}
}
</tools>
```
### Parser Behavior (Incorrect)
Both recommended parsers fail to extract tool calls:
- **hermes parser**: Expects `<tool_call>` tags, doesn't match `<tools>` tags
- **qwen3_xml parser**: Designed for `<tools>` tags but doesn't populate `tool_calls` array
- **qwen3_coder parser**: Also designed for Qwen but fails to populate array
### Root Cause
The parsers appear to load correctly (visible in logs as `'tool_call_parser': 'qwen3_xml'`) but the extraction logic fails to populate the OpenAI-compatible `tool_calls` array structure.
## Workaround
Manual extraction from the `content` field:
```python
import re
import json
def extract_tool_calls(response):
"""Extract tool calls from Qwen2.5-Coder <tools> tags"""
content = response['choices'][0]['message']['content']
pattern = r'<tools>\s*({.*?})\s*</tools>'
match = re.search(pattern, content, re.DOTALL)
if match:
tool_data = json.loads(match.group(1))
return [{
"type": "function",
"function": {
"name": tool_data["name"],
"arguments": json.dumps(tool_data["arguments"])
}
}]
return []
```
## Additional Context
### Multi-AI Consultation Results
Consulted with multiple AI models for parser recommendation:
- **Qwen3 Coder (480B)**: Recommended `qwen3_xml` parser
- **DeepSeek V3.1**: Ranked `qwen3_xml` (90% confidence), `qwen3_coder` (80% confidence)
- **Claude Sonnet 4.5**: Confirmed tag mismatch between Hermes and Qwen formats
All models agreed that the parser selection is correct, suggesting the issue is in the parser implementation rather than configuration.
### vLLM Configuration
```python
{
'tool_call_parser': 'qwen3_xml', # Confirmed in logs
'enable_auto_tool_choice': True,
'model': 'Qwen/Qwen2.5-Coder-14B-Instruct-AWQ',
'quantization': 'awq',
'max_model_len': 8192
}
```
## Impact
- **Severity**: High - Breaks OpenAI API compatibility for tool calling
- **Affected Models**: Likely all Qwen2.5-Coder variants
- | https://github.com/vllm-project/vllm/issues/29192 | open | [] | 2025-11-21T18:31:19Z | 2025-11-21T18:31:19Z | 0 | Platano78 |
vllm-project/vllm | 29,180 | [Bug]: Recorded `EngineCoreEventType.QUEUED` time is off | ### Your current environment
<details>
</details>
### 🐛 Describe the bug
When running benchmarking with the CLI:
- on one side the serving point `vllm serve ...`
- on the other side the benchmarking client : `vllm bench serve...`
(note that the two are running on the same machine, there is no networking delay)
I noticed that the `EngineCoreEventType.QUEUED` event recorded on the server side didn't match the time of posting the request. In my understanding these two events should be approximately equivalent. The values aren't off by just a few milliseconds; the mismatch can be pretty big, up to a few seconds.
I think the reason might be that adding a [request to the scheduler](https://github.com/vllm-project/vllm/blob/fcb1d570bb8f95f5b7ded716a52fec902c535f0e/vllm/v1/core/sched/scheduler.py#L1166) cannot be done while the engine is running a decode or a prefill step; see the [`_process_input_queue` function](https://github.com/vllm-project/vllm/blob/fcb1d570bb8f95f5b7ded716a52fec902c535f0e/vllm/v1/engine/core.py#L801), where `add_request()` ultimately gets called. This can introduce delays before the queued event gets recorded, leaving "floating" requests that are not tracked in the logs.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29180 | closed | [
"bug"
] | 2025-11-21T12:58:36Z | 2025-11-30T20:56:44Z | 4 | sducouedic |
vllm-project/vllm | 29,177 | [Usage]: vLLM + InternVL model local inference: image preprocessing / request adding becomes the bottleneck even with more CPU cores. How to accelerate? | ### Your current environment
vllm 0.11.0
### How would you like to use vllm
### current phenomenon
When doing **batched image classification** (64 images per batch) with InternVL3_5-1B, the bottleneck is clearly in the **"Adding requests"** phase (image preprocessing).
Even after increasing CPU cores and setting `OMP_NUM_THREADS=16`, the preprocessing speed stays around **50 it/s**, while the actual generation phase is extremely fast (>1500 prompts/s).
```text
Adding requests: 100%|██████████| 64/64 [00:01<00:00, 52.67it/s] ← bottleneck
Processed prompts: 100%|█| 64/64 [00:00<00:00, 1515.23it/s, est. speed input: 812805.23 tok/s]
```
This means ~95% of the total latency is spent on CPU-side image preprocessing (I have disabled dynamic resolution).
### Minimal Reproducible Example
```python
import os
from PIL import Image
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_path = "/data/code/haobang.geng/models/InternVL3_5-1B"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
llm = LLM(
model=model_path,
dtype="bfloat16",
max_model_len=4096,
gpu_memory_utilization=0.95,
limit_mm_per_prompt={"image": 1},
trust_remote_code=True,
enforce_eager=False,
)
prompt = "<image>\nYou are an image classifier. Output only one word: safe or nsfw."
sampling_params = SamplingParams(temperature=0.0, max_tokens=8)
batch_inputs = []
for i in range(64):
img = Image.open(f"/path/to/images/{i}.jpg").convert("RGB")
batch_inputs.append({
"prompt": prompt,
"multi_modal_data": {"image": img},
})
outputs = llm.generate(batch_inputs, sampling_params=sampling_params, use_tqdm=True)
```
### Expected behavior
For pure-text batches (e.g. with Qwen3-VL), "Adding requests" runs at >2000 it/s.
### Attempted solutions (all ineffective)
- Increasing CPU cores / setting `OMP_NUM_THREADS=16` → no speedup
- `mm_processor_kwargs={"max_dynamic_patch": 1, ...}` → seemingly no speedup
- Pre-resizing images to 384×384 → helps a little (~55 it/s) but still far from ideal
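One direction that might help, sketched here as an assumption rather than an existing vLLM option: overlap the CPU-side preprocessing across a worker pool before handing the batch to `llm.generate`. The `preprocess` function below is a stand-in for the real `Image.open(...).convert("RGB")` / resize work:

```python
from concurrent.futures import ThreadPoolExecutor

def preprocess(path):
    # Stand-in for the real work, e.g.:
    #   img = Image.open(path).convert("RGB").resize((384, 384))
    return {"prompt": "<image>\nclassify", "multi_modal_data": {"image": path}}

def build_batch_parallel(paths, workers=8):
    # PIL reportedly releases the GIL for much of its decode/resize work,
    # so a thread pool can overlap it; a ProcessPoolExecutor is an
    # alternative if the preprocessing is pure-Python CPU work.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(preprocess, paths))  # preserves input order
```

This only hides the per-image cost behind parallelism; it does not reduce the cost of vLLM's own multimodal processor inside "Adding requests".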
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
Thank you for the great work on vLLM! Looking forward to a simple way to solve it | https://github.com/vllm-project/vllm/issues/29177 | open | [
"usage"
] | 2025-11-21T10:56:29Z | 2025-12-01T14:08:22Z | 3 | Passenger12138 |
pytorch/torchtitan | 2,073 | Slow Dataloader should use num_workers > 1 | I am trying to use torchtitan with procedurally generated data (data augmentation). This process is CPU-intensive and I strongly do not want to store each sample beforehand. Under this setup, `torchtitan` is really slow to train and I'm seeing my MFU drop by 4-5x compared to an unbottlenecked dataloader (no data augmentation).
I have seen a related problem reported [here](https://github.com/pytorch/torchtitan/issues/1663) with some caveats on how to do a multiprocess dataloader effectively. It would be cool to have an official implementation of a multiprocess dataloader with `num_workers > 1`.
huggingface/trl | 4,554 | Better packing of data with best-fit decrease strategy | Hello,
When using packing with the bfd strategy, it looks like too much truncation is done when the seq_length is smaller than the average length of the sequences we want to pack.
For example :
```python
from datasets import Dataset
from trl import pack_dataset
examples = {
"input_ids": [[1, 2, 3, 4], [5, 6], [7, 8, 9], [10]],
"attention_mask": [[1, 1, 1, 1], [1, 0], [1, 0, 0], [1]],
}
dataset = Dataset.from_dict(examples)
packed_dataset = pack_dataset(dataset, seq_length=3, strategy="bfd")
print(packed_dataset )
```
results in:
```python
{'input_ids': [[1, 2, 3], [7, 8, 9], [5, 6, 10]],
'attention_mask': [[1, 1, 1], [1, 0, 0], [1, 0, 1]],
'seq_lengths': [[3], [3], [2, 1]]}
```
So the token '4' is missing from the training tokens.
In an extreme case:
```python
examples_2 = {
"input_ids": [[0, 0], [1, 2, 3, 4], [5, 6, 7, 8, 9], [10]],
"attention_mask": [[1, 1], [1, 1, 1, 1], [1, 1, 1, 1, 1], [1]],
}
dataset_2 = Dataset.from_dict(examples_2)
print(pack_dataset(dataset_2, seq_length=1, strategy="bfd")[:])
```
results in:
```python
{'input_ids': [[0], [1], [5], [10]],
'attention_mask': [[1], [1], [1], [1]],
'seq_lengths': [[1], [1], [1], [1]]}
```
So here we are basically applying truncation to every sequence instead of having twelve sequences of one token.
To put this in a more practical setting: when I was fine-tuning on some very long sequences with a seq_length of 4096, the majority of the tokens were discarded by the bfd packing. On my dataset, the bfd method kept only 0.2% of the total training tokens.
Is this behavior normal?
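One possible preprocessing workaround, independent of TRL internals (a sketch only; `presplit` is a hypothetical helper, and the example data mirrors the snippet above): split every example into chunks of at most `seq_length` before packing, so best-fit packing can never discard a token.

```python
def presplit(examples, seq_length):
    """Split every sequence into chunks of at most seq_length tokens
    so that best-fit packing never has to truncate."""
    out = {key: [] for key in examples}
    for ids, mask in zip(examples["input_ids"], examples["attention_mask"]):
        for start in range(0, len(ids), seq_length):
            out["input_ids"].append(ids[start:start + seq_length])
            out["attention_mask"].append(mask[start:start + seq_length])
    return out

examples = {
    "input_ids": [[1, 2, 3, 4], [5, 6], [7, 8, 9], [10]],
    "attention_mask": [[1, 1, 1, 1], [1, 0], [1, 0, 0], [1]],
}
split = presplit(examples, seq_length=3)
# token 4 now survives as its own length-1 chunk
print(split["input_ids"])  # [[1, 2, 3], [4], [5, 6], [7, 8, 9], [10]]
```

The split examples could then be handed to `pack_dataset` as usual; whether that is acceptable depends on whether splitting a sequence mid-way is tolerable for the task.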
I would find it useful to add an option that keeps tokens which would otherwise be deleted, by packing them into additional sequences, even if this is less than ideal. It would be a good compromise between the current bfd and wrapped strategies. | https://github.com/huggingface/trl/issues/4554 | closed | [
"✨ enhancement",
"❓ question"
] | 2025-11-21T07:53:55Z | 2025-12-16T20:37:02Z | 3 | ntnq4 |
pytorch/FBGEMM | 5,161 | Does anyone know how to build fbgemm_gpu from source without fbgemm | I'd like to only build fbgemm_gpu from source without building fbgemm.
It seems that
```
cd fbgemm_gpu
python setup.py install
```
is missing some arguments? | https://github.com/pytorch/FBGEMM/issues/5161 | closed | [] | 2025-11-21T07:40:18Z | 2025-11-27T08:45:52Z | null | fmo-mt |
vllm-project/vllm | 29,148 | [Usage]: Deployment of the embedding models | ### Your current environment
```text
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : version 3.22.1
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform : Linux-5.15.0-161-generic-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.8.61
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA GeForce RTX 5090
GPU 1: NVIDIA GeForce RTX 5090
GPU 2: NVIDIA GeForce RTX 5090
GPU 3: NVIDIA GeForce RTX 5090
GPU 4: NVIDIA GeForce RTX 5090
GPU 5: NVIDIA GeForce RTX 5090
GPU 6: NVIDIA GeForce RTX 5090
GPU 7: NVIDIA GeForce RTX 5090
Nvidia driver version : 570.172.08
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.0
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
```
### How would you like to use vllm
When deploying the embedding model, I found that the actual GPU memory usage included not only the model itself but also the KV cache. Is this expected behavior? In version v0.9.0, the GPU memory usage was only for the model itself.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29148 | closed | [
"usage"
] | 2025-11-21T03:57:59Z | 2025-11-21T06:17:18Z | 3 | Root970103 |
vllm-project/vllm | 29,139 | [Feature]: Optimize collectives in TP MoE case using torch.compile pass | ### 🚀 The feature, motivation and pitch
To avoid redundant work in MoE models in the TP case, sequence parallelism was added to the Deepseek model definition in #24134 and expanded to other models in #24982. However, to avoid performing surgery on the linear layer, the current approach performs more communication than necessary. With a torch.compile custom pass, we can rewrite the graph to remove the redundant computation.
### More details
Before the SP optimization, the ops in the model were:
```
- o_proj:[num_tokens, ...] -> [num_tokens, ...] (incomplete results)
- all_reduce:[num_tokens, ...] -> [num_tokens, ...]
- router:[num_tokens, ...] -> [num_tokens, ...]
- experts:[num_tokens, ...] -> [num_tokens, ...]
- ...
```
With sequence parallel enabled, this becomes:
```
- o_proj: [num_tokens, ...] -> [num_tokens, ...] (incomplete results)
- all_reduce: [num_tokens, ...] -> [num_tokens, ...]
- chunk: [num_tokens, ...] -> [num_tokens/tp, ...]
- router: [num_tokens/tp, ...] -> [num_tokens/tp, ...]
- experts: [num_tokens/tp, ...] -> [num_tokens/tp, ...]
- all_gather: [num_tokens/tp, ...] -> [num_tokens, ...]
```
Additionally, experts now properly do the dp+tp<->ep dispatch instead of just the original replicated dp<->ep dispatch.
Notice that the `all_reduce` does redundant communication as each TP rank only requires partial results. With a compile pass, we can convert the `all_reduce` -> `chunk` sequence into a `reduce_scatter`:
```
- o_proj: [num_tokens, ...] -> [num_tokens, ...] (incomplete results)
- reduce_scatter: [num_tokens, ...] -> [num_tokens/tp, ...]
- router: [num_tokens/tp, ...] -> [num_tokens/tp, ...]
- experts: [num_tokens/tp, ...] -> [num_tokens/tp, ...]
- all_gather: [num_tokens/tp, ...] -> [num_tokens, ...]
```
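The equivalence being exploited here can be checked with a toy pure-Python simulation (no vLLM or `torch.distributed`; "ranks" are just lists): chunking the element-wise all-reduce result gives each rank exactly what a reduce-scatter would have delivered, so the rewrite changes only the communication volume, not the math.

```python
TP = 2
NUM_TOKENS = 4  # assume num_tokens is padded to a multiple of TP

# Each rank holds partial o_proj results for all tokens.
partials = [
    [1.0, 2.0, 3.0, 4.0],      # rank 0
    [10.0, 20.0, 30.0, 40.0],  # rank 1
]

# all_reduce path: every rank receives the full sum, then keeps only its chunk.
summed = [sum(vals) for vals in zip(*partials)]
chunk = NUM_TOKENS // TP
via_allreduce = [summed[r * chunk:(r + 1) * chunk] for r in range(TP)]

# reduce_scatter path: each rank only ever receives its own reduced chunk.
via_reducescatter = [
    [sum(vals) for vals in zip(*(p[r * chunk:(r + 1) * chunk] for p in partials))]
    for r in range(TP)
]

assert via_allreduce == via_reducescatter  # same result, less communication
print(via_reducescatter)  # [[11.0, 22.0], [33.0, 44.0]]
```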
We should create a new `SequenceParallelismMoEPass`, controlled by a new `PassConfig.enable_sp_moe` flag (following the new naming convention in #27995) so that it can be turned on independently of regular SP. We will likely need to pad the number of tokens to a multiple of TP size, although like described in #29136, there are alternatives.
### Alternatives
Alternatively, the original optimization could be done as a compile pass as well, which would significantly clean up the MoE model definitions. However, that would mean that `VLLM_COMPILE` compilation mode would be required for this optimization and if compilation is disabled, the optimization would be disabled as well. Generally we accept lower performance in eager mode as compilation is on by default, but I know there was a reason this was done this way (don't remember why).
### Additional context
Original proposal comment: https://github.com/vllm-project/vllm/pull/24982#pullrequestreview-3259494618
cc @tlrmchlsmth @bnellnm @robertgshaw2-redhat @alexm-redhat @zou3519 @nvpohanh @youkaichao
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29139 | open | [
"help wanted",
"good first issue",
"performance",
"feature request",
"torch.compile"
] | 2025-11-21T01:36:06Z | 2025-12-07T15:39:48Z | 19 | ProExpertProg |
pytorch/pytorch | 168,291 | Remove unnecessary `ConstantVariable` wrapping in `raise_observed_exception` | ~We currently convert arguments to `ConstantVariable` before calling `raise_observed_exception` in several places. This conversion is unnecessary as the Python objects can be used directly. Doing so also improves readability of some error reports.~
Before:
```python
Observed exception
Explanation: ...
Hint: ...
Hint: ...
Developer debug context: raised exception TypeError([ConstantVariable(str: "unhashable type: <class 'torch._dynamo.variables.dicts.SetVariable'>")])
```
After:
```python
Observed exception
Explanation: ...
Hint: ...
Hint: ...
Developer debug context: raised exception TypeError(["unhashable type: <class 'torch._dynamo.variables.dicts.SetVariable'>"])
```
Example of places that needs to be changed:
https://github.com/pytorch/pytorch/blob/9396e69194e8e16801b08b1326e34708a859fa5f/torch/_dynamo/variables/functions.py#L196-L204
https://github.com/pytorch/pytorch/blob/9396e69194e8e16801b08b1326e34708a859fa5f/torch/_dynamo/variables/functions.py#L211-L219
### Versions
main
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @Lucaskabela | https://github.com/pytorch/pytorch/issues/168291 | closed | [
"good first issue",
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2025-11-20T19:28:03Z | 2025-12-03T13:48:14Z | 8 | guilhermeleobas |
pytorch/executorch | 15,923 | 1008 Genie-t2t-run on SM8850 chipset | ### 🐛 Describe the bug
./genie-t2t-run -c genie_bundle_llama3.2-1b/genie_config.json -p "<|begin_of_text|><|start_header_id|>user<|end_header_id|>"$'\n\n'$"What is France's capital?<|eot_id|><|sta>
Using libGenie.so version 1.13.0
[ERROR] "Failed to create device: 1008"
[ERROR] "Device Creation failure"
Failure to initialize model.
Failed to create the dialog.
### Versions
python version 3.11 | https://github.com/pytorch/executorch/issues/15923 | closed | [] | 2025-11-20T18:49:32Z | 2025-11-24T18:09:35Z | 3 | pbtsvinaysukhesh |
vllm-project/vllm | 29,097 | [Docs] Feedback for `/en/latest/` | ### 📚 The doc issue
no
### Suggest a potential alternative/fix
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29097 | closed | [
"documentation"
] | 2025-11-20T14:53:44Z | 2025-11-21T07:51:57Z | 2 | ch950684-svg |
pytorch/pytorch | 168,253 | nestedtensor inconsistency in `torch.masked_select` | ### 🐛 Describe the bug
Here is the code that left me with questions. I am not sure if it is a bug; if it is not, documenting this behavior would be a great addition to the docs. I would expect the padded nt and the padded nt1 to have the same shape at the end of the script, but they do not. If it is not a bug, how can I achieve what I want: create a nested tensor from a padded tensor and a mask so that it keeps the proper max_len?
```python
import torch
lengths = [5,5,6,6,6,7,7,7,7,8,8,8,8,9]
results = []
for length in lengths:
results.append(torch.ones((length,)))
results
# [tensor([1., 1., 1., 1., 1.]),
# tensor([1., 1., 1., 1., 1.]),
# tensor([1., 1., 1., 1., 1., 1.]),
# tensor([1., 1., 1., 1., 1., 1.]),
# tensor([1., 1., 1., 1., 1., 1.]),
# tensor([1., 1., 1., 1., 1., 1., 1.]),
# tensor([1., 1., 1., 1., 1., 1., 1.]),
# tensor([1., 1., 1., 1., 1., 1., 1.]),
# tensor([1., 1., 1., 1., 1., 1., 1.]),
# tensor([1., 1., 1., 1., 1., 1., 1., 1.]),
# tensor([1., 1., 1., 1., 1., 1., 1., 1.]),
# tensor([1., 1., 1., 1., 1., 1., 1., 1.]),
# tensor([1., 1., 1., 1., 1., 1., 1., 1.]),
# tensor([1., 1., 1., 1., 1., 1., 1., 1., 1.])]
nt = torch.nested.nested_tensor(results, layout=torch.jagged)
nt
# NestedTensor(size=(14, j1), offsets=tensor([ 0, 5, 10, 16, 22, 28, 35, 42, 49, 56, 64, 72, 80, 88, 97]), contiguous=True)
pt_infer = torch.nested.to_padded_tensor(nt, 0.0)
pt_infer.shape
# torch.Size([14, 9])
mask = pt_infer != 0
mask.shape
# torch.Size([14, 9])
nt1 = torch.nested.masked_select(pt_infer, mask)
nt1.shape
# torch.Size([14, j2])
nt.shape
# torch.Size([14, j1])
nt1.to_padded_tensor(0.0, ).shape
# torch.Size([14, 97])
torch.nested.to_padded_tensor(nt1, 0.0).shape
# torch.Size([14, 97])
torch.nested.to_padded_tensor(nt, 0.0).shape
# torch.Size([14, 9])
```
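One hedged way to get the intended result is to recover per-row lengths from the mask first and slice the padded rows by those lengths, instead of going through `masked_select`, so the jagged dimension stays bounded by the original max length of 9. A pure-Python sketch of the length computation follows (with real tensors this would be `mask.sum(dim=1)`, with the sliced rows then fed to `torch.nested.nested_tensor(...)`; the data here is illustrative):

```python
# Padded rows: 1.0 is real data, 0.0 is padding (mirrors the example above).
padded = [
    [1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0],
    [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0],
]
mask = [[v != 0.0 for v in row] for row in padded]

# Per-row lengths, i.e. mask.sum(dim=1) in tensor terms.
lengths = [sum(row) for row in mask]
print(lengths)  # [5, 8]

# The ragged rows that would be handed to the nested-tensor constructor.
ragged = [row[:n] for row, n in zip(padded, lengths)]
assert [len(r) for r in ragged] == lengths
```

This sketch assumes padding only appears at the end of each row, as in the example.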
### Versions
PyTorch version: 2.9.0+cu128
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Rocky Linux 9.4 (Blue Onyx) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.34
Python version: 3.12.12 | packaged by conda-forge | (main, Oct 22 2025, 23:25:55) [GCC 14.3.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.13.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 575.57.08
cuDNN version: Could not collect
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==2.3.4
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-curand-cu12==10.3.9.90
[pip3] nvidia-cusolver-cu12==11.7.3.90
[pip3] nvidia-cusparse-cu12==12.5.8.93
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvidia-nccl-cu12==2.27.5
[pip3] nvidia-nvjitlink-cu12==12.8.93
[pip3] nvidia-nvtx-cu12==12.8.90
[pip3] pytorch-lightning==2.5.6
[pip3] torch==2.9.0
[pip3] torch-dct==0.1.6
[pip3] torchaudio==2.9.0
[pip3] torchmetrics==1.8.2
[pip3] triton==3.5.0
[conda] numpy 2.3.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.8.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.8.90 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.8.93 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.8.90 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.10.2.21 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.3.83 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.9.90 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.3.90 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.8.93 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.7.1 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.27.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.8.93 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.8.90 pypi_0 pypi
[conda] pytorch-lightning 2.5.6 pypi_0 pypi
[conda] torch 2.9.0 pypi_0 pypi
[conda] torch-dct 0.1.6 pypi_0 pypi
[conda] torchaudio 2.9.0 pypi_0 pypi
[conda] torchmetrics 1.8.2 pypi_0 pypi
[conda] triton 3.5.0 pypi_0 pypi
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ | https://github.com/pytorch/pytorch/issues/168253 | open | [
"triaged",
"module: nestedtensor"
] | 2025-11-20T14:08:48Z | 2025-11-21T17:47:04Z | 2 | rustamzh |
vllm-project/vllm | 29,089 | [Performance]: Can we use CUDA graph to accelerate the Qwen2_5omniAudioEncoder in Qwen2.5-Omni-3B? | ### Proposal to improve performance
<img width="3088" height="1264" alt="Image" src="https://github.com/user-attachments/assets/535d7854-b9db-4e40-8f85-1abe08b4d35e" />
The trace graph shows that Qwen2_5omniAudioEncoder has a large number of small kernel launches, indicating significant room for optimization.
Can we use CUDA graph to accelerate the Qwen2_5omniAudioEncoder in Qwen2.5-Omni-3B?
### Report of performance regression
_No response_
### Misc discussion on performance
_No response_
### Your current environment (if you think it is necessary)
```text
The output of `python collect_env.py`
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29089 | open | [
"performance"
] | 2025-11-20T12:13:58Z | 2025-11-20T12:13:58Z | 0 | xq25478 |
pytorch/torchrec | 3,567 | how to use torch.distributed.checkpoint to save and load state dict | sparse_arch is a part of my model.
<img width="721" height="698" alt="Image" src="https://github.com/user-attachments/assets/cb35959b-418e-4ff4-8e12-4524528cbad2" />
<img width="1439" height="684" alt="Image" src="https://github.com/user-attachments/assets/d008966e-e2d2-404d-bcda-bce3e3285eed" /> | https://github.com/meta-pytorch/torchrec/issues/3567 | open | [] | 2025-11-20T09:30:47Z | 2025-11-20T09:30:47Z | 0 | haolujun |
vllm-project/vllm | 29,078 | [Performance]: Multiple instances cause excessive CPU usage | ### Your current environment
GPU: RTX4090
cuda version: cuda12.8
vllm version: 0.11.0
I launched 4 instances of the minerU2.5 model using the vllm backend of Triton Server. My server is equipped with 2 GPUs, with 1 instance running on each GPU. However, I noticed that the CPU load sometimes spikes to extremely high levels, nearly maxing out the server—which has 192 CPU cores. The vllm backend uses AsyncLLMEngine.
When running a single instance on one GPU and sending 200 small-sized text images for OCR, I achieved the highest FPS—processing up to 200 images per second—with the CPU load hovering around 40-50%. To further improve performance, I launched one instance on each of the two GPUs. But in this scenario, the CPU load reached nearly 99% (extremely high usage), and each instance only achieved around 120 FPS, with almost no performance gain.
I conducted numerous tests. Initially, I suspected the issue was with Triton Server, but after troubleshooting, I believe the problem lies in the high CPU usage during vllm inference. Even when not using Triton Server—simulating the same scenario with `vllm serve`—each vllm instance consumes 20-30% of the CPU. If this persists, adding more GPUs to the server will not improve model performance. How should I debug this?
### How would you like to use vllm
I want to run inference of [MinerU2.5-2509-1.2B](https://huggingface.co/opendatalab/MinerU2.5-2509-1.2B). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29078 | closed | [
"usage"
] | 2025-11-20T08:26:35Z | 2025-11-21T02:17:51Z | 4 | zjq1996518 |
huggingface/transformers | 42,291 | Can we disable IPython progress bar and use normal tqdm bar? | I like the normal tqdm bar much better, it is lighter, cleaner, simpler, and less stress on my eyes (no green color). I would love to have an option to use tqdm bar and not IPython bar. | https://github.com/huggingface/transformers/issues/42291 | closed | [] | 2025-11-20T01:26:11Z | 2025-12-28T08:02:45Z | 1 | weathon |
pytorch/pytorch | 168,186 | 2nd example of large numeric divergence for torch compile vs eager in bf16 | ### 🐛 Describe the bug
First example is https://github.com/pytorch/pytorch/issues/168126.
Here's another smaller example where I'm seeing a significant difference (rtol 1.0) between eager and compiled when running under bf16. Somehow the call to `torch.chunk` in `Module2` causes a numeric divergence to occur. It's likely related to inductor because the results match when I set `torch.compile(..., backend='aot_eager')`.
```python
import torch
from torch import Tensor, nn
class BaseModule(nn.Module):
def __init__(self, dim: int = 128) -> None:
super().__init__()
self.p_in = nn.Linear(dim, 2 * dim, bias=False)
self.g_in = nn.Linear(dim, 2 * dim, bias=False)
class Module1(BaseModule):
def forward(self, x: Tensor, mask: Tensor) -> Tensor:
x = self.p_in(x) * self.g_in(x)
return x
class Module2(BaseModule):
def forward(self, x: Tensor, mask: Tensor) -> Tensor:
x = self.p_in(x) * self.g_in(x)
a, b = torch.chunk(x, 2, dim=-1)
x = a + b
return x
if __name__ == "__main__":
for module_cls in [Module1, Module2]:
for dtype in [torch.float32, torch.bfloat16]:
print(f"Testing module {module_cls.__name__} with dtype: {dtype}")
with torch.autocast(device_type="cuda", dtype=dtype):
torch.manual_seed(42)
x = torch.randn(16, 128, 128, 128, device="cuda")
mask = torch.randint(0, 2, (16, 128, 128), device="cuda")
eager_layer = module_cls().cuda()
compiled_layer = torch.compile(module_cls().cuda(), fullgraph=True)
# Copy weights from reference to optimized to ensure identical parameters
with torch.no_grad():
for param, ref_param in zip(
compiled_layer.parameters(), eager_layer.parameters()
):
param.data.copy_(ref_param.data)
out_eager = eager_layer(x, mask)
out_compiled = compiled_layer(x, mask)
torch.testing.assert_close(out_eager, out_compiled)
print(f"Passed module {module_cls.__name__} with dtype: {dtype}")
```
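The mechanism behind such gaps can be illustrated without a GPU. The sketch below emulates bf16 by truncating the low 16 bits of a float32 bit pattern (real bf16 rounds to nearest, so this is only illustrative, not how PyTorch computes): rounding after every step of a chain diverges from computing in full precision and downcasting once at the end, which is the kind of reordering a fused kernel can introduce.

```python
import struct

def to_bf16(x: float) -> float:
    """Crude bf16 emulation: truncate the low 16 bits of the float32 pattern."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

print(to_bf16(0.1))  # 0.099609375 -- 0.1 is not exactly representable

values = [0.1] * 100
stepwise = 0.0
for v in values:               # round after every op, like a pure-bf16 chain
    stepwise = to_bf16(stepwise + to_bf16(v))
fused = to_bf16(sum(values))   # accumulate in high precision, downcast once

print(stepwise, fused)
assert stepwise != fused       # the two orderings disagree
```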
### Error logs
```
(repro) jamin@jamin-dev:~/deep-affinity$ python repro.py
Testing module Module1 with dtype: torch.float32
Passed module Module1 with dtype: torch.float32
Testing module Module1 with dtype: torch.bfloat16
Passed module Module1 with dtype: torch.bfloat16
Testing module Module2 with dtype: torch.float32
Passed module Module2 with dtype: torch.float32
Testing module Module2 with dtype: torch.bfloat16
Traceback (most recent call last):
File "/home/jamin/deep-affinity/repro.py", line 47, in <module>
torch.testing.assert_close(out_eager, out_compiled)
File "/home/jamin/miniconda3/envs/repro/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1589, in assert_close
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!
Mismatched elements: 753240 / 33554432 (2.2%)
Greatest absolute difference: 0.01171875 at index (1, 61, 83, 48) (up to 1e-05 allowed)
Greatest relative difference: 127.0 at index (4, 33, 47, 23) (up to 0.016 allowed)
```
With `TORCHINDUCTOR_EMULATE_PRECISION_CASTS=1`:
```
(repro) jamin@jamin-dev:~/deep-affinity$ TORCHINDUCTOR_EMULATE_PRECISION_CASTS=1 python repro.py
Testing module Module1 with dtype: torch.float32
Passed module Module1 with dtype: torch.float32
Testing module Module1 with dtype: torch.bfloat16
Passed module Module1 with dtype: torch.bfloat16
Testing module Module2 with dtype: torch.float32
Passed module Module2 with dtype: torch.float32
Testing module Module2 with dtype: torch.bfloat16
Traceback (most recent call last):
File "/home/jamin/deep-affinity/repro.py", line 47, in <module>
torch.testing.assert_close(out_eager, out_compiled)
File "/home/jamin/miniconda3/envs/repro/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1589, in assert_close
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!
Mismatched elements: 554480 / 33554432 (1.7%)
Greatest absolute difference: 0.0078125 at index (3, 87, 97, 127) (up to 1e-05 allowed)
Greatest relative difference: 1.0 at index (0, 0, 13, 20) (up to 0.016 allowed)
```
### Versions
```
PyTorch version: 2.9.1+cu130
Is debug build: False
CUDA used to build PyTorch: 13.0
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-1043-gcp-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 13.0.88
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 580.95.05
cuDNN version: Could not collect
Is XPU available: False
HIP r | https://github.com/pytorch/pytorch/issues/168186 | closed | [
"triaged",
"oncall: pt2",
"module: inductor"
] | 2025-11-19T21:19:58Z | 2025-12-01T19:20:59Z | 6 | jamin-chen |
vllm-project/vllm | 29,023 | [Feature]: Disable logging `/metrics` | ### 🚀 The feature, motivation and pitch
- IGW hits `/metrics` continuously to understand the current load on the system
- This leads to an overload of logs
- We can disable this with `--disable-uvicorn-access-log`, but lose access to all access logs
We should have `--disable-uvicorn-metrics-access-log` to avoid logging * just * metrics. Per Gemini, we can do this with something like:
```python
import logging

# Routes for which access logs should be disabled
EXCLUDE_PATHS = ["/health", "/metrics"]

class EndpointFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        # Uvicorn access-log records carry (client_addr, method, path, ...)
        # in record.args; the path is typically the third argument
        if record.args and len(record.args) >= 3:
            path = record.args[2]
            if path in EXCLUDE_PATHS:
                return False  # drop this access-log record
        return True  # keep all other records

# Attach to uvicorn's access logger so the filter actually takes effect
logging.getLogger("uvicorn.access").addFilter(EndpointFilter())
```
Create a command line arg like `--disable-uvicorn-metrics-access-log` which selectively disables logging hits to `/metrics`
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/29023 | open | [
"help wanted",
"good first issue",
"feature request"
] | 2025-11-19T18:25:48Z | 2025-11-19T21:57:34Z | 5 | robertgshaw2-redhat |
huggingface/sentence-transformers | 3,575 | How to override model's `max_seq_length`? | It seems that impossible to override model's max length from `sentence_bert_config.json`.
```python
from sentence_transformers import SentenceTransformer
m = SentenceTransformer("intfloat/e5-small", tokenizer_kwargs={"model_max_length":3})
print(m.tokenize(["hi hi hi hi hi hi hi hi hi hi hi hi hi"]))
# {'input_ids': tensor([[ 101, 7632, 7632, 7632, 7632, 7632, 7632, 7632, 7632, 7632, 7632, 7632,
# 7632, 7632, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
print(m.tokenize(["hi hi hi hi hi hi hi hi hi hi hi hi hi"], truncation=True))
# {'input_ids': tensor([[ 101, 7632, 7632, 7632, 7632, 7632, 7632, 7632, 7632, 7632, 7632, 7632,
# 7632, 7632, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
print(m[0].tokenizer(["hi hi hi hi hi hi hi hi hi hi hi hi hi"], truncation=True))
# {'input_ids': [[101, 7632, 102]], 'token_type_ids': [[0, 0, 0]], 'attention_mask': [[1, 1, 1]]}
m.max_seq_length = 3
print(m.tokenize(["hi hi hi hi hi hi hi hi hi hi hi hi hi"]))
# {'input_ids': tensor([[ 101, 7632, 102]]), 'token_type_ids': tensor([[0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1]])}
```
This is happening because, during loading, `max_seq_length` is read from `sentence_bert_config.json`, and then `Transformer` overrides `max_seq_length` only if it wasn't set in `sentence_bert_config.json` (https://github.com/huggingface/sentence-transformers/blob/ad28c0a982acc39c73abdf0019faca10f227ef28/sentence_transformers/models/Transformer.py#L101-L118), even if `model_max_length` is passed in `tokenizer_kwargs`. Afterwards, `max_seq_length` is used as `max_length` instead of the value passed in the kwargs (https://github.com/huggingface/sentence-transformers/blob/ad28c0a982acc39c73abdf0019faca10f227ef28/sentence_transformers/models/Transformer.py#L319-L327)
Probably this can be fixed by
```diff
max_seq_length = min(max_seq_length, self.tokenizer.model_max_length)
```
Source https://github.com/embeddings-benchmark/mteb/pull/3587#discussion_r2542434603
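A minimal sketch of the proposed clamping in plain Python (`resolve_max_seq_length` is a hypothetical helper, not the library's API; it only mirrors the `min(...)` idea from the diff above):

```python
def resolve_max_seq_length(config_value, tokenizer_model_max_length):
    """Never exceed the tokenizer's own limit, even when the
    sentence_bert_config value is set."""
    if config_value is None:  # fallback path when the config has no value
        return tokenizer_model_max_length
    return min(config_value, tokenizer_model_max_length)

# config says 512, but the user passed model_max_length=3 via tokenizer_kwargs
print(resolve_max_seq_length(512, 3))    # 3
print(resolve_max_seq_length(None, 3))   # 3
print(resolve_max_seq_length(128, 512))  # 128
```

With this behavior, the `tokenizer_kwargs` override in the first example would take effect without having to set `m.max_seq_length` manually afterwards.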
I think this is cause of https://github.com/huggingface/sentence-transformers/issues/3187 | https://github.com/huggingface/sentence-transformers/issues/3575 | open | [] | 2025-11-19T16:42:27Z | 2025-11-20T13:47:13Z | null | Samoed |
huggingface/trl | 4,546 | Does TRL support PipelineRL for compute efficiency? | Hi 👋,
I'm trying to understand whether TRL currently supports (or plans to support) the PipelineRL approach described here:
- Paper: [https://arxiv.org/pdf/2509.19128v2](https://arxiv.org/pdf/2509.19128v2?utm_source=chatgpt.com)
- Overview: [https://arxiv.org/html/2509.19128](https://arxiv.org/html/2509.19128?utm_source=chatgpt.com)
PipelineRL introduces an actor–learner pipeline with in-flight weight updates, where actors keep generating while the learner updates weights concurrently. This reduces policy lag and improves GPU utilization for long-context RL runs.
Does TRL currently support this kind of pipelineRL workflow, or is there a recommended way to approximate it using the existing TRL trainers (GRPO + vLLM)?
If not, I'd love suggestions or best practices for building something similar on top of TRL.
Thanks! 🙏 | https://github.com/huggingface/trl/issues/4546 | open | [
"✨ enhancement",
"❓ question"
] | 2025-11-19T12:39:29Z | 2025-11-22T12:43:54Z | 3 | harisarang |
pytorch/torchrec | 3,561 | How can I export a trained model to the Triton inference server? | How can I export a trained model to the Triton inference server?
Are there any examples of exporting models, whether using Torch-TensorRT or TorchScript? | https://github.com/meta-pytorch/torchrec/issues/3561 | open | [] | 2025-11-19T08:20:51Z | 2025-11-19T08:20:51Z | 0 | intfish123 |
pytorch/pytorch | 168,148 | BF16 activation precision mismatch between eager ATen and compiled Triton | ### 🐛 Describe the bug
I’d like to report that for activation operators such as `sigmoid` and `tanh`, when the input dtype is `bf16`, the computation precision differs between eager mode and `compile[triton]`. In eager mode, ATen computes directly in `bf16`, but the generated Triton kernel upcasts to `fp32` → applies the activation → then downcasts to `bf16`. This can lead to accuracy differences between the eager and compiled paths for the same model. Why is this the current strategy?
### Error logs
_No response_
### Versions
torch==2.7.0a0+git1169ded
triton==3.2.0
cc @ezyang @gchanan @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov @coconutruben | https://github.com/pytorch/pytorch/issues/168148 | closed | [
"high priority",
"triaged",
"oncall: pt2",
"module: inductor"
] | 2025-11-19T08:10:53Z | 2025-11-28T06:05:05Z | 6 | zhaoying9105 |
pytorch/torchrec | 3,559 | How to convert DistributedModelParallel to quantize_inference_model and use torch.jit.script to save? | I ran an example from `https://github.com/facebookresearch/dlrm/tree/main/torchrec_dlrm` and wanted to save the model with `torch.jit.script`, but it raises an error.
command:
```
export LEARNING_RATE=0.5;
torchx run -s local_cwd dist.ddp -j 1x1 --script dlrm_main.py -- --batch_size 2048 --learning_rate $LEARNING_RATE --dataset_name criteo_kaggle --num_embeddings_per_feature 40000000,39060,17295,7424,20265,3,7122,1543,63,40000000,3067956,405282,10,2209,11938,155,4,976,14,40000000,40000000,40000000,590152,12973,108,36 --embedding_dim 128 --over_arch_layer_sizes 1024,1024,512,256,1 --dense_arch_layer_sizes 512,256,128 --epochs 1 --validation_freq_within_epoch 12802
```
<img width="1511" height="855" alt="Image" src="https://github.com/user-attachments/assets/ce9dcb6e-c0ac-4e77-b67a-db9836a62fd7" />
logs:
```
torchx 2025-11-19 06:46:19 INFO Tracker configurations: {}
torchx 2025-11-19 06:46:19 INFO Log directory not set in scheduler cfg. Creating a temporary log dir that will be deleted on exit. To preserve log directory set the `log_dir` cfg option
torchx 2025-11-19 06:46:19 INFO Log directory is: /tmp/torchx_z2d00ny6
local_cwd://torchx/dlrm_main-vm9krtsx5bpnjd
torchx 2025-11-19 06:46:19 INFO Waiting for the app to finish...
dlrm_main/0 [0]:PARAMS: (lr, batch_size, warmup_steps, decay_start, decay_steps): (0.5, 2048, 0, 0, 0)
dlrm_main/0 [0]:/workspace/dlrm/.venv/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:860: UserWarning: `_get_pg_default_device` will be deprecated, it only stays for backward-compatiblity reason. If you need to find a device for object collectives, please use `_get_object_coll_device`. If you need to query the device types supported by group, please use `_device_capability(group)`.
dlrm_main/0 [0]: warnings.warn(
dlrm_main/0 [0]:
dlrm_main/0 [0]:Epoch 0: 0%| | 0/10 [00:00<?, ?it/s]dlrm_main/0 [0]:
dlrm_main/0 [0]:Epoch 0: 10%|█ | 1/10 [00:00<00:03, 3.00it/s]dlrm_main/0 [0]:
dlrm_main/0 [0]:Epoch 0: 100%|██████████| 10/10 [00:00<00:00, 25.97it/s]
dlrm_main/0 [0]:
dlrm_main/0 [0]:Evaluating val set: 0%| | 0/10 [00:00<?, ?it/s]dlrm_main/0 [0]:Total number of iterations: 10
dlrm_main/0 [0]:
dlrm_main/0 [0]:Evaluating val set: 50%|█████ | 5/10 [00:00<00:00, 48.80it/s]/workspace/dlrm/.venv/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py:4807: UserWarning: No device id is provided via `init_process_group` or `barrier `. Using the current device set by the user.
dlrm_main/0 [0]: warnings.warn( # warn only once
dlrm_main/0 [0]:
dlrm_main/0 [0]:Evaluating val set: 100%|██████████| 10/10 [00:00<00:00, 75.09it/s]
dlrm_main/0 [0]:
dlrm_main/0 [0]:Evaluating test set: 0%| | 0/10 [00:00<?, ?it/s]dlrm_main/0 [0]:AUROC over val set: 0.5073344707489014.
dlrm_main/0 [0]:Number of val samples: 20480
dlrm_main/0 [0]:
dlrm_main/0 [0]:Evaluating test set: 100%|██████████| 10/10 [00:00<00:00, 192.46it/s]
dlrm_main/0 [0]:[rank0]: Traceback (most recent call last):
dlrm_main/0 [0]:[rank0]: File "/workspace/dlrm/torchrec_dlrm/dlrm_main.py", line 737, in <module>
dlrm_main/0 [0]:[rank0]: invoke_main() # pragma: no cover
dlrm_main/0 [0]:[rank0]: File "/workspace/dlrm/torchrec_dlrm/dlrm_main.py", line 733, in invoke_main
dlrm_main/0 [0]:[rank0]: main(sys.argv[1:])
dlrm_main/0 [0]:[rank0]: File "/workspace/dlrm/torchrec_dlrm/dlrm_main.py", line 727, in main
dlrm_main/0 [0]:[rank0]: script_model = torch.jit.script(quantize_model)
dlrm_main/0 [0]:[rank0]: File "/workspace/dlrm/.venv/lib/python3.10/site-packages/torch/jit/_script.py", line 1443, in script
dlrm_main/0 [0]:[rank0]: ret = _script_impl(
dlrm_main/0 [0]:[rank0]: File "/workspace/dlrm/.venv/lib/python3.10/site-packages/torch/jit/_script.py", line 1152, in _script_impl
dlrm_main/0 [0]:[rank0]: return torch.jit._recursive.create_script_module(
dlrm_main/0 [0]:[rank0]: File "/workspace/dlrm/.venv/lib/python3.10/site-packages/torch/jit/_recursive.py", line 554, in create_script_module
dlrm_main/0 [0]:[rank0]: concrete_type = get_module_concrete_type(nn_module, share_types)
dlrm_main/0 [0]:[rank0]: File "/workspace/dlrm/.venv/lib/python3.10/site-packages/torch/jit/_recursive.py", line 503, in get_module_concrete_type
dlrm_main/0 [0]:[rank0]: concrete_type = concrete_type_store.get_or_create_concrete_type(nn_module)
dlrm_main/0 [0]:[rank0]: File "/workspace/dlrm/.venv/lib/python3.10/site-packages/torch/jit/_recursive.py", line 435, in get_or_create_concrete_type
dlrm_main/0 [0]:[rank0]: concrete_type_builder = infer_concrete_type_builder(nn_module)
dlrm_main/0 [0]:[rank0]: File "/workspace/dlrm/.venv/lib/python3.10/site-packages/torch/jit/_recursive.py", line 285, in infer_concrete_type_builder
dlrm_main/0 [0]:[rank0]: sub_concrete_type = get_module_concrete_type(item, share_types)
dlrm_main/0 [0]:[rank0]: File "/workspace/dlrm/.venv/lib/python3.10/site-packages/torch/jit/_recurs | https://github.com/meta-pytorch/torchrec/issues/3559 | open | [] | 2025-11-19T06:51:01Z | 2025-11-19T06:53:01Z | 0 | intfish123 |
vllm-project/vllm | 28,996 | [Usage]: How to run a single data parallel deployment across multiple nodes without ray | ### Your current environment
2 Nodes, each node has 8 H20 GPUs.
### How would you like to use vllm
According to https://docs.vllm.ai/en/latest/serving/data_parallel_deployment/#internal-load-balancing
```shell
# node0
vllm serve Qwen3-Coder-480B-A35B-Instruct --trust-remote-code --max-num-seqs 64 --max-model-len 131072 --port $PORT0 --host :: --data-parallel-size 2 --data-parallel-size-local 1 --data-parallel-address $NODE0_IPV6 --data-parallel-rpc-port $PORT1
# node1
vllm serve Qwen3-Coder-480B-A35B-Instruct --trust-remote-code --max-num-seqs 64 --max-model-len 131072 --headless --data-parallel-size 2 --data-parallel-size-local 1 --data-parallel-start-rank 1 --data-parallel-address $NODE0_IPV6 --data-parallel-rpc-port $NODE0_PORT1
```
but both nodes hang waiting for the init message from the front-end.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28996 | closed | [
"usage"
] | 2025-11-19T06:47:22Z | 2025-11-27T06:17:22Z | 3 | crystalww |
vllm-project/vllm | 28,986 | [Feature]: Fused Kernel for GPT-OSS Router | ### 🚀 The feature, motivation and pitch
<img width="1257" height="250" alt="Image" src="https://github.com/user-attachments/assets/31eba061-522c-4521-b0a9-9f25bb36c3df" />
- Right now, we spend ~3.5% of the layer's runtime in expert selection
- The operation is unfused
Write a fused kernel like we have for deepseek grouped_topk
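For reference, the unfused selection is essentially a per-token top-k over the router logits; a minimal stdlib sketch of that step (hypothetical logits, no expert grouping):

```python
import heapq

def select_experts(router_logits, k):
    """Pick the k highest-scoring experts for one token's router logits."""
    top = heapq.nlargest(k, enumerate(router_logits), key=lambda p: p[1])
    indices = [i for i, _ in top]
    scores = [s for _, s in top]
    return indices, scores

# Hypothetical 4-expert router output for a single token.
indices, scores = select_experts([0.1, 2.3, -0.5, 1.7], k=2)
print(indices, scores)
```

A fused kernel would combine this selection with the score normalization in a single pass over the logits instead of separate launches.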
### Alternatives
- torch compile
- triton
- cuda
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28986 | open | [
"help wanted",
"good first issue",
"feature request"
] | 2025-11-19T03:18:25Z | 2025-12-12T16:16:37Z | 7 | robertgshaw2-redhat |
huggingface/transformers.js | 1,458 | ONNX Backend Env variable | ### Question
Hi,
For some context, I'm building an application that uses some of the models on huggingface as an annotation tool that helps create annotations for training a specialised model.
As for the specialised model, I am able to export them to onnx, and I was able to run this model in the same application, but I have to manually install the same onnxruntime-web version to be able to do so. I looked into the docs [here](https://huggingface.co/docs/transformers.js/api/backends/onnx#module_backends/onnx.createInferenceSession), but I cannot access these functions through `env.backends.onnx`. I've tried `console.log(env.backends.onnx.isONNXProxy())` and got
```
Uncaught (in promise) TypeError: env.backends.onnx.isONNXProxy is not a function
```
Is there a way I can access the same inference session through this package?
---------------------------------
My `package.json`
```
{
"dependencies": {
"@huggingface/transformers": "3.7.5",
"onnxruntime-web": "1.22.0-dev.20250409-89f8206ba4"
},
}
``` | https://github.com/huggingface/transformers.js/issues/1458 | open | [
"question"
] | 2025-11-19T01:26:02Z | 2025-11-25T15:36:13Z | null | Heinrik-20 |
pytorch/vision | 9,276 | where did torchvision v0.10.0 go? | I am trying to download torchvision v0.10.0 to my Jetson Nano to build it but I am always getting this error:
```
ams@ams-Alienware-m17-R3:~$ git ls-remote --tags https://github.com/pytorch/vision.git
remote: Internal Server Error
fatal: unable to access 'https://github.com/pytorch/vision.git/': The requested URL returned error: 500
```
I have navigated inside the repository to search for v0.10.0, but couldn't find it in the branches. | https://github.com/pytorch/vision/issues/9276 | closed | [] | 2025-11-18T21:32:56Z | 2025-11-19T09:03:29Z | 1 | abdosalem490 |
pytorch/pytorch | 168,099 | Unify pointwise DTensor and NestedTensor OP Coverage. Adds over 100 op overloads to DTensor and about to 10 to NestedTensor | ### 🚀 The feature, motivation and pitch
Currently, DTensor maintains its own list of which ops are pointwise. NestedTensor has a similar requirement and instead elected to add a pointwise tag to OpInfo. Maintaining two separate lists of pointwise ops is error prone. We should have both use a single source of information on which ops are pointwise. Doing so should improve op coverage for DTensor and perhaps NestedTensor and remove code duplication significantly.
These calculations I quickly did using PyTorch 2.8.0 in Google Colab.
> DTensor pointwise ops #: 364
> OPinfo pointwise ops #: 537
> DTensor pointwise ops missing in OpInfo #: 10
> OpInfo pointwise ops missing in DTensor #: 185
Unifying these would add 185 ops to DTensor coverage and 10 ops to NestedTensor coverage
I would suggest checking if an op is pointwise
with `torch.Tag.pointwise in op.tags` for an arbitrary aten operator. I would then add the pointwise tags to any ops that are listed as pointwise in DTensor but not in opinfo and unify the lists. Doing so would ensure NestedTensor and DTensor have similar coverage
Tagging @ezyang
Slack Discussion:
> Anyone know why DTensor doesn’t use optest’s pointwise tag registration that NestedTensor already uses? It’s weird to me it maintains a second list of all the pointwise ops when that info should be provided already by OpInfo registration?
> @ezyang Reply: it probably should just use it
### Alternatives
_No response_
### Additional context
Example of drift between DTensor and NestedTensor op coverage: https://github.com/pytorch/pytorch/pull/167973
Current analysis:
```
check if op is pointwise: [torch.Tag.pointwise in op.tags for op in pointwise_ops]
```
These ops currently do not have pointwise tags in OpInfo, but are listed as pointwise in DTensor:
```python
[<OpOverload(op='aten.__irshift__', overload='Scalar')>, <OpOverload(op='aten.__irshift__', overload='Tensor')>, <OpOverload(op='aten._conj', overload='default')>, <OpOverload(op='aten.abs_', overload='default')>, <OpOverload(op='aten.copysign_', overload='Scalar')>, <OpOverload(op='aten.copysign_', overload='Tensor')>, <OpOverload(op='aten.ldexp', overload='default')>, <OpOverload(op='aten.native_dropout_backward', overload='out')>, <OpOverload(op='aten.where', overload='self_out')>, <OpOverload(op='aten.xlogy_', overload='Scalar_Other')>]
```
```python
def get_pointwise_overloads():
pointwise = []
# All registered operator schemas
for schema in torch._C._jit_get_all_schemas():
ns, op_name = schema.name.split("::", 1)
# Only care about aten ops; drop prim, quantized, etc.
if ns != "aten":
continue
# Get the OpOverloadPacket, e.g. torch.ops.aten.add
try:
packet = getattr(getattr(torch.ops, ns), op_name)
except AttributeError:
continue # some schemas may not be exposed via torch.ops
# Map JIT overload name -> Python overload attribute
overload_name = schema.overload_name or "default"
try:
overload = getattr(packet, overload_name) # OpOverload
except AttributeError:
continue # can happen in weird cases
# Check tag
if torch.Tag.pointwise in overload.tags:
pointwise.append(overload)
return pointwise
```
and comparing against the list maintained in DTensor:
DTensor pointwise ops #: 364
OPinfo pointwise ops #: 537
DTensor pointwise ops missing in OpInfo #: 10
OpInfo pointwise ops missing in DTensor #: 185
So unifying this would add 173 ops to DTensor and 10 ops of coverage to NestedTensor!
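The drift counts above are just set differences between the two registries; a stdlib sketch of that comparison, with tiny hypothetical op-name sets standing in for the real lists:

```python
# Hypothetical stand-ins for the two registries of pointwise ops.
dtensor_pointwise = {"aten.add.Tensor", "aten.abs_.default", "aten.ldexp.default"}
opinfo_pointwise = {"aten.add.Tensor", "aten.mul.Tensor", "aten.sigmoid.default"}

missing_in_opinfo = dtensor_pointwise - opinfo_pointwise   # need pointwise tags added
missing_in_dtensor = opinfo_pointwise - dtensor_pointwise  # coverage DTensor would gain

print(sorted(missing_in_opinfo))
print(sorted(missing_in_dtensor))
```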
Pointwise opinfo tags are found here: https://github.com/pytorch/pytorch/blob/f9724db4921288a096e331cee835abd43257fbd6/aten/src/ATen/native/native_functions.yaml#L10242
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @msaroufim @dcci @tianyu-l @XilunWu @SherlockNoMad
The least invasive way to handle this is probably to add a fallback that checks if the op has a pointwise tags and tries to pointwise it similar to how NestedTensor currently works. Similar to: https://github.com/pytorch/pytorch/blob/33d4cf4fcb7f0cba6191b242dae53b48057e05b9/torch/distributed/tensor/_ops/_pointwise_ops.py#L626C1-L629C6 may need to check if the op supports out= arg though. | https://github.com/pytorch/pytorch/issues/168099 | open | [
"oncall: distributed",
"triaged",
"module: dtensor",
"llm-amenable"
] | 2025-11-18T19:47:48Z | 2025-11-24T19:04:58Z | 2 | Skylion007 |
vllm-project/vllm | 28,956 | [Bug]: OOM when profiling multimodal model with multiple images | ### Your current environment
vLLM 0.11.0
### 🐛 Describe the bug
As per title.
The error log is as follows:
```
[multiproc_executor.py:671] Traceback (most recent call last):
[multiproc_executor.py:671] File "/root/miniconda3/lib/python3.11/site-packages/vllm/v1/executor/multiproc_executor.py", line 666, in worker_busy_loop
[multiproc_executor.py:671] output = func(*args, **kwargs)
[multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^
[multiproc_executor.py:671] File "/root/miniconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
[multiproc_executor.py:671] return func(*args, **kwargs)
[multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^
[multiproc_executor.py:671] File "/root/miniconda3/lib/python3.11/site-packages/vllm/v1/worker/gpu_worker.py", line 263, in determine_available_memory
[multiproc_executor.py:671] self.model_runner.profile_run()
[multiproc_executor.py:671] File "/root/miniconda3/lib/python3.11/site-packages/vllm/v1/worker/gpu_model_runner.py", line 3379, in profile_run
[multiproc_executor.py:671] expanded = output.new_zeros(
[multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^
[multiproc_executor.py:671] torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 2.58 GiB is free. Including non-PyTorch memory, this process has 137.21 GiB memory in use. Of the allocated memory 134.77 GiB is allocated by PyTorch, and 255.64 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
Looks like we only need **ONE** encoder cache with shape `(encoder_budget, encoder_output_shape[-1])` rather than `len(dummy_encoder_outputs)` ones.
https://github.com/vllm-project/vllm/blob/da8dadf68b5a2af849e7c5fd35ce9b8525d8d398/vllm/v1/worker/gpu_model_runner.py#L4128-L4144
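To see the scale of the waste, a back-of-the-envelope stdlib comparison of one shared cache versus one cache per dummy output (all sizes here are hypothetical; bf16 = 2 bytes/element):

```python
def cache_bytes(num_buffers, encoder_budget, hidden_dim, bytes_per_elem=2):
    """Total bytes for num_buffers encoder caches of shape (budget, hidden)."""
    return num_buffers * encoder_budget * hidden_dim * bytes_per_elem

# Hypothetical profiling setup: 32 dummy encoder outputs.
per_output = cache_bytes(32, 8192, 3072)
shared = cache_bytes(1, 8192, 3072)
print(f"per-output: {per_output / 2**30:.1f} GiB, shared: {shared / 2**30:.3f} GiB")
```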
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28956 | closed | [
"bug"
] | 2025-11-18T17:36:55Z | 2025-11-25T12:38:37Z | 7 | imShZh |
huggingface/lerobot | 2,475 | Why is there a difference between async inference and local inference in image resizing? | I read the code in `src/lerobot/async_inference/policy_server.py` and `src/lerobot/scripts/lerobot_record.py`, and found a difference between the two inference paths that results in different image shapes.
1. `src/lerobot/scripts/lerobot_record.py` uses `prepare_observation_for_inference` to preprocess the observation:
```python
def prepare_observation_for_inference(
observation: dict[str, np.ndarray],
device: torch.device,
task: str | None = None,
robot_type: str | None = None,
) -> RobotObservation:
for name in observation:
observation[name] = torch.from_numpy(observation[name])
if "image" in name:
observation[name] = observation[name].type(torch.float32) / 255
observation[name] = observation[name].permute(2, 0, 1).contiguous()
observation[name] = observation[name].unsqueeze(0)
observation[name] = observation[name].to(device)
observation["task"] = task if task else ""
observation["robot_type"] = robot_type if robot_type else ""
return observation
```
Here there is no **resize** operation on the images, as far as I can tell.
2. In `async_inference/policy_server.py`, however, the function `prepare_raw_observation` does change the image shape:
```python
def prepare_raw_observation(
robot_obs: RawObservation,
lerobot_features: dict[str, dict],
policy_image_features: dict[str, PolicyFeature],
) -> Observation:
"""Matches keys from the raw robot_obs dict to the keys expected by a given policy (passed as
policy_image_features)."""
# 1. {motor.pos1:value1, motor.pos2:value2, ..., laptop:np.ndarray} ->
# -> {observation.state:[value1,value2,...], observation.images.laptop:np.ndarray}
lerobot_obs = make_lerobot_observation(robot_obs, lerobot_features)
# 2. Greps all observation.images.<> keys
image_keys = list(filter(is_image_key, lerobot_obs))
# state's shape is expected as (B, state_dim)
state_dict = {OBS_STATE: extract_state_from_raw_observation(lerobot_obs)}
image_dict = {
image_k: extract_images_from_raw_observation(lerobot_obs, image_k) for image_k in image_keys
}
# Turns the image features to (C, H, W) with H, W matching the policy image features.
# This reduces the resolution of the images
image_dict = {
key: resize_robot_observation_image(torch.tensor(lerobot_obs[key]), policy_image_features[key].shape)
for key in image_keys
}
if "task" in robot_obs:
state_dict["task"] = robot_obs["task"]
return {**state_dict, **image_dict}
```
Here the observation images are resized to the shape given in the policy config:
```python
def resize_robot_observation_image(image: torch.tensor, resize_dims: tuple[int, int, int]) -> torch.tensor:
assert image.ndim == 3, f"Image must be (C, H, W)! Received {image.shape}"
# (H, W, C) -> (C, H, W) for resizing from robot obsevation resolution to policy image resolution
image = image.permute(2, 0, 1)
dims = (resize_dims[1], resize_dims[2])
# Add batch dimension for interpolate: (C, H, W) -> (1, C, H, W)
image_batched = image.unsqueeze(0)
# Interpolate and remove batch dimension: (1, C, H, W) -> (C, H, W)
resized = torch.nn.functional.interpolate(image_batched, size=dims, mode="bilinear", align_corners=False)
return resized.squeeze(0)
```
I found this because local inference worked correctly while async inference produced weird action outputs. It must be because I didn't resize the input images when training. -.-
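To make the mismatch concrete, here is a shape-only sketch of the two preprocessing paths (no pixel math; the dimensions are hypothetical):

```python
def record_path_shape(raw_hwc):
    """lerobot_record-style path: permute HWC -> CHW, add batch dim, no resize."""
    h, w, c = raw_hwc
    return (1, c, h, w)

def async_path_shape(raw_hwc, policy_chw):
    """policy_server-style path: resized to the policy's image feature shape."""
    c, ph, pw = policy_chw
    return (c, ph, pw)

raw = (480, 640, 3)      # camera frame (H, W, C)
policy = (3, 224, 224)   # policy image feature shape (C, H, W)
print(record_path_shape(raw))         # keeps the full camera resolution
print(async_path_shape(raw, policy))  # shrunk to the policy resolution
```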
version:deb9596bd3796c03ae3a5a6b81b63c1dba296256
| https://github.com/huggingface/lerobot/issues/2475 | open | [
"question"
] | 2025-11-18T14:32:17Z | 2025-11-24T02:23:13Z | null | milong26 |
pytorch/torchtitan | 2,053 | Training Qwen3-0.6B with loss mismatch. | ### Bug description
When using the config file 'torchtitan/models/qwen3/train_configs/qwen3_0.6b.toml', the starting loss of ~12 suggests the weights may not have been loaded properly.
<img width="1541" height="510" alt="Image" src="https://github.com/user-attachments/assets/ed61a47c-1c6e-47e3-8503-ec84df085f83" />
### Versions
[job]
dump_folder = "./outmodel"
description = "Qwen 3 0.6B training"
[profiling]
enable_profiling = false
save_traces_folder = "profile_trace"
profile_freq = 100
[metrics]
log_freq = 1
enable_tensorboard = false
save_tb_folder = "tb"
[model]
name = "qwen3"
flavor = "0.6B"
hf_assets_path = "./assets/hf/Qwen3-0.6B"
# converters = ["float8"]
[optimizer]
name = "AdamW"
lr = 3e-4
eps = 1e-8
[lr_scheduler]
warmup_steps = 2 # lr scheduler warm up, 20% total steps
[training]
local_batch_size = 1
seq_len = 4096
max_norm = 1.0 # grad norm clipping
steps = 100
dataset = "math"
[parallelism]
data_parallel_replicate_degree = 1
data_parallel_shard_degree = -1
fsdp_reshard_after_forward = "default" # default / never / always
tensor_parallel_degree = 1
context_parallel_degree = 1
[checkpoint]
enable = false
folder = "checkpoint"
interval = 50
last_save_model_only = false
export_dtype = "float16"
async_mode = "disabled" # ["disabled", "async", "async_with_pinned_mem"]
[activation_checkpoint]
mode = "full" # ["none", "selective", "full"]
selective_ac_option = "op" # "int" = ac every positive int layer or 'op', ac based on ops policy
[compile]
enable=false
components = ["model", "loss"]
[quantize.linear.float8]
enable_fsdp_float8_all_gather = false
precompute_float8_dynamic_scale_for_fsdp = false
filter_fqns = ["output"]
| https://github.com/pytorch/torchtitan/issues/2053 | closed | [
"question"
] | 2025-11-18T14:24:43Z | 2025-12-18T09:24:46Z | null | Joluck |
vllm-project/vllm | 28,943 | [Usage]: what's the right way to run embedding model in vllm 0.11.0 | ### Your current environment
```text
The output of `python collect_env.py`
```
In vLLM 0.8.7, I use the following code to run vLLM locally, and everything works:
```
self.engine_args = EngineArgs(
model=self.model_path,
dtype='half',
task="embed",
trust_remote_code=True,
limit_mm_per_prompt={"image": 1},
)
e = asdict(self.engine_args)
self.max_len = 100
self.llm = LLM(**e)
out = self.llm.embed(datas)
```
But in vLLM 0.11.0, according to the documentation https://www.aidoczh.com/vllm/models/pooling_models.html, `runner='pooling'` is used to run embedding tasks. What's the difference? Does the `task='embed'` argument still take effect?
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28943 | open | [
"usage"
] | 2025-11-18T13:47:57Z | 2025-11-20T10:49:12Z | 3 | neverneverendup |
huggingface/trl | 4,541 | Is attn_implementation=sdpa not supported when using SFTTrainer with mllama? | When trying to use `sdpa` with mllama I get an error using the default collator. Upon writing my own collator it works.
When using `eager` implementation it gives cuda oom error. Is `sdpa` not supported? | https://github.com/huggingface/trl/issues/4541 | open | [] | 2025-11-18T11:57:01Z | 2025-11-18T11:57:01Z | 0 | osaidr |
vllm-project/vllm | 28,930 | [Usage]: How to build a qwen3vl embedding model with a custom mlp layer on top using vllm? | ### Your current environment
```text
The output of `python collect_env.py`
```
Hi friends! I trained an SFT model built on the Qwen3-VL 2B model, with an MLP layer on top to compress the embedding size of the backbone. Now I want to serve it with vLLM 0.11.0, but I've run into some confusion. Here is my custom class code:
```
from argparse import Namespace
from dataclasses import asdict
from typing import Literal, NamedTuple, Optional, TypedDict, Union, get_args
import torch
import torch.nn as nn
import torch.nn.functional as F  # needed for F.normalize below
from vllm.model_executor.models.qwen3_vl import Qwen3VLForConditionalGeneration
from vllm.v1.pool.metadata import PoolingMetadata
from vllm.v1.sample.metadata import SamplingMetadata
from vllm.config import VllmConfig
from vllm.multimodal import MULTIMODAL_REGISTRY
class CustomQwenVL3BPool(nn.Module):
def __init__(
self
):
super().__init__()
self.out = torch.nn.Sequential(
torch.nn.Linear(2048, 512),
torch.nn.SiLU(),
torch.nn.Linear(512, 128)
)
def get_prompt_lens(self,
hidden_states: Union[torch.Tensor, list[torch.Tensor]],
pooling_metadata: PoolingMetadata,
) -> torch.Tensor:
return pooling_metadata.prompt_lens
def forward(
self,
hidden_states: torch.Tensor,
pooling_metadata: PoolingMetadata,
) -> Union[list[torch.Tensor], torch.Tensor]:
# 1. extract the last token
prompt_lens = self.get_prompt_lens(hidden_states, pooling_metadata)
last_token_flat_indices = torch.cumsum(prompt_lens, dim=0) - 1
hidden_states = hidden_states[last_token_flat_indices]
# 2. compress dimensions with the MLP
mlp_output = self.out(hidden_states)
# 3. normalize the output; need to check whether vLLM normalizes again
normalized_output = F.normalize(mlp_output, p=2, dim=-1)
return normalized_output
class CustomQwen3VLForConditionalGeneration(Qwen3VLForConditionalGeneration):
def __init__(self, *, vllm_config: VllmConfig, prefix: str = ""):
super().__init__(vllm_config=vllm_config, prefix=prefix)
self._pooler = CustomQwenVL3BPool()
```
When I run the above code using vLLM's local mode, the error log says **"[adapters.py:79] ST projector loading failed"**. Does anybody know why? BTW, what's the best practice for building a custom embedding model with an MLP in vLLM 0.11.0?
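As a sanity check on the pooler logic itself, the last-token extraction is just cumulative prompt lengths minus one; a stdlib check of that index arithmetic (the lengths are hypothetical):

```python
from itertools import accumulate

def last_token_indices(prompt_lens):
    """Flat indices of each prompt's last token in packed hidden states.

    Mirrors torch.cumsum(prompt_lens, dim=0) - 1 in the pooler above.
    """
    return [c - 1 for c in accumulate(prompt_lens)]

# Three packed prompts of lengths 4, 2, 5 -> 11 hidden states total.
print(last_token_indices([4, 2, 5]))  # [3, 5, 10]
```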
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | https://github.com/vllm-project/vllm/issues/28930 | closed | [
"usage"
] | 2025-11-18T10:32:07Z | 2025-12-23T04:49:30Z | 10 | neverneverendup |
vllm-project/vllm | 28,929 | [Usage]: How | = | https://github.com/vllm-project/vllm/issues/28929 | closed | [
"usage"
] | 2025-11-18T10:26:17Z | 2025-11-18T10:30:53Z | 0 | neverneverendup |
huggingface/datasets | 7,869 | Why does dataset merge fail when tools have different parameters? | Hi, I have a question about SFT (Supervised Fine-tuning) for an agent model.
Suppose I want to fine-tune an agent model that may receive two different tools: tool1 and tool2. These tools have different parameters and types in their schema definitions.
When I try to merge datasets containing different tool definitions, I get the following error:
```
TypeError: Couldn't cast array of type
struct<refundFee: struct<description: string, type: string>, ... , servicerId: struct<description: string, type: string>>
to
{
'refundFee': {'description': Value(dtype='string'), 'type': Value(dtype='string')},
...
'templateId': {'description': Value(dtype='string'), 'type': Value(dtype='string')}
}
```
From my understanding, the merge fails because the tools column's nested structure is different across datasets — e.g., one struct contains an extra field servicerId while the other does not. This causes HuggingFace Datasets (and its underlying Apache Arrow schema) to reject the merge.
My question is: why is it designed this way?
Is this strict schema matching a hard requirement of the library?
Is there a recommended way to merge datasets with different tool schemas (different parameters and types)?
For an agent model supporting multiple tools, what's the best practice for preparing/merging training data without losing flexibility?
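One common workaround (not an official recommendation) is to store the tools column as JSON strings, so every row has the uniform Arrow type `string`, and to parse it back at training time; a stdlib sketch of the serialization step:

```python
import json

# Two tools with different parameter schemas (the case that breaks the merge).
tool1 = {"name": "refund", "parameters": {"refundFee": {"type": "number"},
                                          "servicerId": {"type": "string"}}}
tool2 = {"name": "template", "parameters": {"templateId": {"type": "string"}}}

# Serialize each row's tools to a JSON string: every row now has the same
# column type, so concatenation no longer needs matching nested structs.
rows = [{"tools": json.dumps(t, sort_keys=True)} for t in (tool1, tool2)]

# Round-trip at training time to recover the full schema.
restored = json.loads(rows[0]["tools"])
print(sorted(restored["parameters"]))
```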
Any guidance or design rationale would be greatly appreciated. Thanks! | https://github.com/huggingface/datasets/issues/7869 | open | [] | 2025-11-18T08:33:04Z | 2025-11-30T03:52:07Z | 1 | hitszxs |
pytorch/pytorch | 168,065 | On aarch64, `pip install torch` resulted in the CPU version? | ### 🐛 Describe the bug
Hi, I'm noticing that `pip install torch` installs the CPU-only build of stable torch.
Repro:
1. Get an aarch64 machine, e.g. GB200
2. `pip install torch`
3. `pip list`, and check whether you see the CUDA packages (cudnn, cublas, etc.)
It can be bypassed with
```
pip3 install torch --index-url https://download.pytorch.org/whl/cu128
```
but just want to report this, in case this isn't intentional.
<img width="1052" height="434" alt="Image" src="https://github.com/user-attachments/assets/3b3de7a4-3ebd-47bc-b0d8-4728f8d6fcf1" />
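A quick way to tell which build was installed: PyTorch wheels encode the accelerator in the local version tag (e.g. `2.8.0+cpu` vs `2.8.0+cu128`); a small stdlib check of that convention (the version strings below are just examples):

```python
def wheel_flavor(version):
    """Classify a torch version string by its local version tag."""
    if "+cu" in version:
        return "cuda"
    if "+cpu" in version:
        return "cpu"
    return "unknown"  # some builds ship without a local tag

print(wheel_flavor("2.8.0+cpu"))    # cpu
print(wheel_flavor("2.8.0+cu128"))  # cuda
```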
### Versions
torch stable
cc @svekars @sekyondaMeta @AlannaBurke @ptrblck @msaroufim @eqy @jerryzh168 @tinglvv @nWEIdia | https://github.com/pytorch/pytorch/issues/168065 | open | [
"module: docs",
"module: cuda",
"triaged"
] | 2025-11-18T04:59:16Z | 2025-11-24T19:19:58Z | 3 | henrylhtsang |
vllm-project/vllm | 28,903 | [Bug]: vllm inference on qwen3-vl when use_upstream_fa is False | ### Your current environment
pip show torch vllm flash-attn
Name: torch
Version: 2.8.0
---
Name: vllm
Version: 0.11.0
Name: flash_attn
Version: 2.8.3
### 🐛 Describe the bug
The unit-test code is as follows. The plain Qwen3-0.6B test runs, but the Qwen3-VL-4B one does not:
```python
#coding=utf-8
"""
Unit tests to verify the availability and compatibility of FlashAttention and vLLM
"""
import torch
from flash_attn import flash_attn_func
import unittest
import vllm
# from vllm.attention.backends import get_attn_backend
class TestFA_VLLM(unittest.TestCase):
def testFA(self,):
# check whether CUDA is available and which device is in use
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"Current device: {torch.cuda.current_device()}")
print(f"Device name: {torch.cuda.get_device_name()}")
# 尝试创建一个简单的张量并移动到GPU
try:
q = torch.randn(1, 1, 16, 64, dtype=torch.float16, device='cuda')
k = torch.randn(1, 1, 16, 64, dtype=torch.float16, device='cuda')
v = torch.randn(1, 1, 16, 64, dtype=torch.float16, device='cuda')
output = flash_attn_func(q, k, v)
print("FlashAttention test passed!")
except Exception as e:
print(f"FlashAttention test failed: {e}")
def oriTestVLLM(self,):
# print the attention backend currently in use
print("Available CUDA devices:", torch.cuda.device_count())
print("Current device:", torch.cuda.current_device())
print("Device name:", torch.cuda.get_device_name())
# check the vLLM configuration
print("vLLM version:", vllm.__version__)
# try creating a small model to trigger backend initialization
try:
from vllm import LLM
llm = LLM(model="Qwen/Qwen3-0.6B", max_model_len=256)
print("vLLM initialized successfully!")
prompt = "This is a test prompt."
response = llm.generate(prompt)
print("rollout test succeeded! Generated text:", response)
except Exception as e:
print(f"vLLM initialization failed: {e}")
def testVLLM(self,):
# print the attention backend currently in use
print("Available CUDA devices:", torch.cuda.device_count())
print("Current device:", torch.cuda.current_device())
print("Device name:", torch.cuda.get_device_name())
# try creating a small model to trigger backend initialization
try:
MODEL_PATH = "Qwen/Qwen3-VL-4B-Instruct"
from vllm import LLM
from vllm import LLM, SamplingParams
from vllm.assets.image import ImageAsset # built-in vLLM helper: path -> PIL image
from vllm.assets.video import VideoAsset # same idea if you want video later
# any image will do
image_path = ""
from PIL import Image
image = Image.open(image_path)
# Option B: URL
# image = ImageAsset("image", "https://xxx.jpg").pil_image
# chat template required by Qwen3-VL
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": image}, # image field
{"type": "text", "text": "Please describe this image."}
]
}
]
# use transformers' apply_chat_template to turn messages into model input
from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained(MODEL_PATH)
prompt = tok.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
# ---------- 4. generation ----------
sampling_params = SamplingParams(
temperature=0.7,
max_tokens=512,
stop_token_ids=[tok.eos_token_id, tok.convert_tokens_to_ids("<|im_end|>")]
)
llm = LLM(model=MODEL_PATH, max_model_len=4096,
limit_mm_per_prompt={"image": 1, "video": 0}, # at most 1 image per prompt
dtype="bfloat16", # fine on A100/H100; use "float16" on consumer cards
gpu_memory_utilization=0.9,)
print("vLLM initialized successfully!")
outputs = llm.generate(
{"prompt": prompt, "multi_modal_data": {"image": image}}, # key: pass the image along too
sampling_params=sampling_params
)
response = outputs[0].outputs[0].text
print("rollout test succeeded! Generated text:", response)
except Exception as e:
print(f"vLLM initialization failed: {e}")
if __name__ == "__main__":
unittest.main()
```
The error is:
```
vllm/vllm_flash_attn/flash_attn_interface.py", line 233, in flash_attn_varlen_func
[rank0]: out, softmax_lse = torch.ops._vllm_fa2_C.varlen_fwd(
[rank0]: File "/usr/local/lib/python3.10/dist-packages/torch/_ops.py", line 1243, in __call__
[rank0]: return self._op(*args, **kwargs)
[rank0]: torch.AcceleratorError: CUDA error: the provided PTX was compiled with an unsupported toolchain.
```
Then I reviewed the code at https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/qwen3_vl.py#L375.
It sets `use_upstream_fa = False` by default; when I change it to `True`, it works. Why is that? The vLLM version is 0.11.0.
### Before submitting a new issue...
- | https://github.com/vllm-project/vllm/issues/28903 | closed | [
"bug"
] | 2025-11-18T03:54:11Z | 2025-11-18T08:18:09Z | 1 | hedes1992 |
huggingface/lerobot | 2,465 | loss:nan grdn:nan How to solve the gradient explosion problem in PI05 training? | When training Pi05 using Lerobot, has anyone encountered a situation where gradients explode immediately after training starts? Errors occur when the batch_size is set to 64 or 32. How can this be resolved?
Below are my training commands and error logs.
python src/lerobot/scripts/lerobot_train.py --dataset.repo_id=aa_merged280 --policy.type=pi05 \
--output_dir=./outputs/pi05_training2 --job_name=pi05_training2 \
--policy.pretrained_path=lerobot/pi05_base --policy.compile_model=true \
--policy.gradient_checkpointing=true --wandb.enable=true --policy.dtype=bfloat16 \
--steps=100000 --policy.device=cuda --batch_size=32 --policy.push_to_hub=false
INFO 2025-11-17 22:07:40 ot_train.py:351 step:200 smpl:6K ep:9 epch:0.03 loss:nan grdn:nan lr:2.5e-06 updt_s:4.478 data_s:0.038
WARNING 2025-11-17 22:07:40 db_utils.py:141 WandB logging of key "loss_per_dim" was ignored as its type "<class 'list'>" is not handled by this wrapper.
INFO 2025-11-17 22:22:38 ot_train.py:351 step:400 smpl:13K ep:18 epch:0.06 loss:nan grdn:nan lr:7.5e-06 updt_s:4.458 data_s:0.022
WARNING 2025-11-17 22:22:38 db_utils.py:141 WandB logging of key "loss_per_dim" was ignored as its type "<class 'list'>" is not handled by this wrapper.
INFO 2025-11-17 22:37:34 ot_train.py:351 step:600 smpl:19K ep:27 epch:0.10 loss:nan grdn:nan lr:1.3e-05 updt_s:4.456 data_s:0.022
WARNING 2025-11-17 22:37:34 db_utils.py:141 WandB logging of key "loss_per_dim" was ignored as its type "<class 'list'>" is not handled by this wrapper.
INFO 2025-11-17 22:52:31 ot_train.py:351 step:800 smpl:26K ep:36 epch:0.13 loss:nan grdn:nan lr:1.8e-05 updt_s:4.456 data_s:0.022
WARNING 2025-11-17 22:52:31 db_utils.py:141 WandB logging of key "loss_per_dim" was ignored as its type "<class 'list'>" is not handled by this wrapper.
INFO 2025-11-17 23:07:29 ot_train.py:351 step:1K smpl:32K ep:45 epch:0.16 loss:nan grdn:nan lr:2.3e-05 updt_s:4.459 data_s:0.022
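Not a fix, but a common first debugging step is to fail fast on the first non-finite loss instead of logging `nan` for thousands of steps; a minimal stdlib sketch of such a guard (the log stream here is simulated):

```python
import math

def check_finite(step, loss, grad_norm):
    """Raise as soon as training goes non-finite, pointing at the bad step."""
    if not (math.isfinite(loss) and math.isfinite(grad_norm)):
        raise RuntimeError(f"non-finite at step {step}: loss={loss} grdn={grad_norm}")

# Simulated log stream: training diverges at step 3.
stream = [(1, 2.31, 0.9), (2, 2.05, 1.1), (3, float("nan"), float("nan"))]
first_bad = None
for step, loss, grdn in stream:
    try:
        check_finite(step, loss, grdn)
    except RuntimeError:
        first_bad = step
        break
print(first_bad)  # 3
```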
| https://github.com/huggingface/lerobot/issues/2465 | open | [
"bug",
"policies",
"training"
] | 2025-11-18T03:46:28Z | 2025-12-03T16:13:56Z | null | Lilgeneric |