| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/lerobot | 2,543 | Different finetune loss given policy.type=pi0 / policy.path=lerobot/pi0_base. What is the difference? | Hi, I have two different configurations:
1. ` --dataset.repo_id=BBBBBBob/libero_goal_lerobot \
--dataset.root=/home/j84403411/data/libero/libero_goal_lerobot \
--policy.path=lerobot/pi0_base \
--policy.push_to_hub=false \
--policy.use_proprio=true \
--output_dir=/home/j84403411/checkpoint/libero/pi0/libero_goal_pr... | https://github.com/huggingface/lerobot/issues/2543 | closed | [] | 2025-11-28T12:34:38Z | 2025-12-01T11:25:17Z | null | BBBBBBob |
huggingface/transformers.js | 1,467 | Missing the following inputs: input_points, input_labels (or input_boxes) | ### Question
thanks for your excellent work!
I just wrote test code for the SlimSAM model powered by transformers.js, referring to this example (with some improvements): https://github.com/huggingface/transformers.js-examples/blob/main/segment-anything-webgpu/index.js
my code for `decode` method:
```js
// Decode segment... | https://github.com/huggingface/transformers.js/issues/1467 | closed | [
"question"
] | 2025-11-28T10:01:04Z | 2025-12-01T04:04:59Z | null | sherlockchou86 |
vllm-project/vllm | 29,643 | [Usage]: Enabling Tool call in the Python SDK | ### Your current environment
Hi Team,
I am currently exploring VLLM to enable tool calling, and I need some support with this. It would be very helpful if you could provide the corresponding Python code.
What I’m trying to achieve is to configure the Python package with the same settings that I use when starting the... | https://github.com/vllm-project/vllm/issues/29643 | open | [
"usage"
] | 2025-11-28T04:39:47Z | 2025-12-01T14:54:47Z | 2 | Madan1215 |
vllm-project/vllm | 29,641 | [Bug]: Max Tokens not being honoured in Chat Completions for GPTOSS model | ### Your current environment
It seems that in the latest version of vllm (0.11+), Chat Completions has stopped honouring `max_tokens` with the GPTOSS 120B model; the request payload below has stopped working with `max_tokens`, whereas earlier the same payload would produce output up to the `max_tokens` limit provided.
Inter... | https://github.com/vllm-project/vllm/issues/29641 | closed | [
"bug"
] | 2025-11-28T03:39:34Z | 2025-12-21T02:39:32Z | 16 | soodrohit |
huggingface/transformers | 42,464 | Add SAM 3D Objects Encoder | ### Model description
## Model Description
SAM 3D Objects is Meta AI's foundation model for 3D object reconstruction from single images. I'm proposing to add the **encoder component** (DINOv2-based Vision Transformer) to Transformers.
**Scope**: Encoder only, not the full 3D generation pipeline (which includes Gauss... | https://github.com/huggingface/transformers/issues/42464 | open | [
"New model"
] | 2025-11-27T19:48:28Z | 2025-12-05T10:32:33Z | 1 | Aznix07 |
pytorch/pytorch | 169,175 | Regarding this issue, how can I upgrade or replace the cuDNN version built into my current PyTorch installation? | ### 🚀 The feature, motivation and pitch
Significant Memory Regression in F.conv3d with bfloat16 Inputs in PyTorch 2.9.0 (#166643). This release provides a workaround for this issue. If you are impacted, please install the nvidia-cudnn package version 9.15+ from PyPI. (#166480) (#167111).
### Alternatives
_No response_
### A... | https://github.com/pytorch/pytorch/issues/169175 | closed | [] | 2025-11-27T09:32:00Z | 2025-11-27T20:19:07Z | 2 | saberrroool |
pytorch/pytorch | 169,174 | Does torch.masked_select preserve the original order of the selected elements? | There is the following issue on this page: https://docs.pytorch.org/docs/stable/generated/torch.masked_select.html
Does torch.masked_select preserve the original order of the selected elements?
`mask = torch.from_numpy(np.random.uniform(0, 1, 1234567) > 0.5)
idx = torch.arange(len(mask))
select = idx.masked_s... | https://github.com/pytorch/pytorch/issues/169174 | closed | [] | 2025-11-27T09:26:45Z | 2025-11-30T12:12:18Z | 0 | wanglin03 |
vllm-project/vllm | 29,584 | [Usage]: Can KV Cache be disabled in non-autoregressive generation tasks? | ### Your current environment
```text
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Could not collect
CMake version ... | https://github.com/vllm-project/vllm/issues/29584 | open | [
"usage"
] | 2025-11-27T05:30:08Z | 2025-12-05T02:40:28Z | 5 | GitEventhandler |
vllm-project/vllm | 29,574 | [Performance]: Using vLLM to accelerate VLM models, does the vision encoding part currently support parallel processing, or is it still being processed serially? | ### Proposal to improve performance
I found that currently, images of different sizes are processed sequentially, which significantly slows down the processing speed. How can we adapt to parallel processing? Should we resize or pad all images to the same size for batch processing, or can we run multiple encoder models... | https://github.com/vllm-project/vllm/issues/29574 | open | [
"performance"
] | 2025-11-27T03:51:36Z | 2025-11-27T10:54:09Z | 2 | NewZxy |
pytorch/pytorch | 169,160 | Is there any way to make pinned CPU tensors released back to the OS immediately | ### 🐛 Describe the bug
The pinned CPU tensors can't be released back to the OS immediately.
```python
import torch
import gc
import ctypes
import psutil
import os
def get_memory_usage():
"""Return current process RSS memory usage in MB."""
process = psutil.Process(os.getpid())
return process.memory_info... | https://github.com/pytorch/pytorch/issues/169160 | closed | [] | 2025-11-27T03:19:54Z | 2025-11-27T20:24:46Z | 1 | dashanji |
vllm-project/vllm | 29,564 | [Doc]: Make PyTorch profiler gzip and CUDA time dump configurable | ### 📚 The doc issue
We observed that enabling both use_gzip and dump_self_cuda_time_total in the vLLM torch profiler introduces significant overhead during profiling.
For example, when profiling 10 randomly generated requests (1000 input tokens, 200 output tokens) on an A100 using the Qwen3-32B model, we found that ... | https://github.com/vllm-project/vllm/issues/29564 | closed | [
"documentation"
] | 2025-11-27T02:21:20Z | 2025-12-01T04:30:48Z | 1 | zhangruoxu |
pytorch/pytorch | 169,157 | AOTI does not support fallback kernels with parameters of types other than int and tensor. | ### 🚀 The feature, motivation and pitch
Currently, AOTI does not support fallback kernels with parameters of types other than int and tensor. https://github.com/pytorch/pytorch/blob/main/torch/_inductor/codegen/cpp_wrapper_cpu.py#L2723-L2729.
Why does AOTI restrict the parameter types?
Do we have any plans to add sup... | https://github.com/pytorch/pytorch/issues/169157 | open | [
"triaged",
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 2025-11-27T02:10:26Z | 2025-12-18T02:30:56Z | 3 | CaoE |
vllm-project/vllm | 29,562 | [Bug]: "\n\n" content between reasoning and tool_call content when tool_call and stream mode | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
https://github.com/QwenLM/Qwen3/issues/1755
When stream mode is true, the response contains content "\n\n" between rea... | https://github.com/vllm-project/vllm/issues/29562 | open | [
"bug"
] | 2025-11-27T01:49:04Z | 2025-11-27T01:49:04Z | 0 | NiuBlibing |
vllm-project/vllm | 29,560 | [Doc]: Batch Invariance on Ampere Platforms | ### 📚 The doc issue
Does the batch invariance feature released in vllm 0.11.2 support the Ampere architecture? If adaptations are required, what modifications need to be made?
### Suggest a potential alternative/fix
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for releva... | https://github.com/vllm-project/vllm/issues/29560 | closed | [
"documentation"
] | 2025-11-27T01:06:49Z | 2025-11-27T14:21:30Z | 0 | luo1206 |
pytorch/tutorials | 3,666 | Feedback about What is torch.nn really? | There is the following issue on this page: https://docs.pytorch.org/tutorials/beginner/nn_tutorial.html
In the section "Neural net from scratch (without torch.nn)" there is a pre-training loss function evaluation on a batch of 64 instances,
```
yb = y_train[0:bs]
print(loss_func(preds, yb))
```
then training is p... | https://github.com/pytorch/tutorials/issues/3666 | open | [
"core"
] | 2025-11-26T21:16:14Z | 2025-11-26T21:35:10Z | null | bogpetre |
huggingface/trl | 4,582 | Does the GRPO Trainer support multi-image input for Qwen3-VL? | Does the GRPO Trainer support multi-image input for Qwen3-VL? | https://github.com/huggingface/trl/issues/4582 | open | [
"🏋 GRPO"
] | 2025-11-26T14:03:57Z | 2025-11-27T08:08:25Z | 1 | Lestoky |
huggingface/diffusers | 12,722 | How to run qwen-image in kaggle gpu T4 * 2 successfully? | ```python3
!python3 -m pip install -U diffusers peft bitsandbytes
import diffusers, torch, math
qwen = diffusers.QwenImagePipeline.from_pretrained('Qwen/Qwen-Image', torch_dtype=torch.float16, low_cpu_mem_usage=True, quantization_config=diffusers.PipelineQuantizationConfig(quant_backend='bitsandbytes_4bit', quant_kwarg... | https://github.com/huggingface/diffusers/issues/12722 | open | [] | 2025-11-26T12:53:30Z | 2025-11-28T03:54:07Z | null | chaowenguo |
vllm-project/vllm | 29,494 | [Doc]: Documentation inconsistency: Blog mentions append_slots() but codebase uses allocate_slots() | ### 📚 The doc issue
The Automatic Prefix Caching blog post mentions:
> "The scheduler calls kv_cache_manager.append_slots()"
However, the actual codebase uses a unified `kv_cache_manager.allocate_slots()` method that handles both prefill and decode requests.
**Location:**
- Blog: [[link to blog post](https://docs.v... | https://github.com/vllm-project/vllm/issues/29494 | closed | [
"documentation"
] | 2025-11-26T11:37:40Z | 2025-11-26T11:46:08Z | 1 | pradsgit |
huggingface/transformers | 42,418 | Custom nn.Parameter initialization in PreTrainedModel subclasses is overwritten by post_init()/from_pretrained() causing NaNs/Zeros | ### System Info
- `transformers` version: 4.57.1
- Platform: Linux-4.18.0-147.mt20200626.413.el8_1.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.14
- Huggingface_hub version: 0.35.3
- Safetensors version: 0.6.2
- Accelerate version: 1.11.0
- Accelerate config: not found
- DeepSpeed version: 0.18.2
- PyTorch v... | https://github.com/huggingface/transformers/issues/42418 | open | [
"Usage",
"Feature request",
"bug"
] | 2025-11-26T10:29:57Z | 2025-12-01T15:10:32Z | 10 | Noietch |
huggingface/diffusers | 12,720 | how to quantize wan 2.2 vace after loading lora? | ```python3
diffusers.WanVACEPipeline.from_pretrained('linoyts/Wan2.2-VACE-Fun-14B-diffusers', vae=diffusers.AutoencoderKLWan.from_pretrained('linoyts/Wan2.2-VACE-Fun-14B-diffusers', subfolder='vae', torch_dtype=torch.float32), torch_dtype=torch.bfloat16, quantization_config=diffusers.PipelineQuantizationConfig(quant_ba... | https://github.com/huggingface/diffusers/issues/12720 | open | [] | 2025-11-26T10:11:38Z | 2025-12-11T17:29:30Z | null | chaowenguo |
vllm-project/vllm | 29,489 | [Usage]: Removing last generated token from output and kv cache | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Cou... | https://github.com/vllm-project/vllm/issues/29489 | open | [
"usage"
] | 2025-11-26T09:35:37Z | 2025-11-26T09:36:37Z | 0 | josefdra |
huggingface/diffusers | 12,719 | how to use quantization and device_map=balance to run qwen-image on kaggle T4 * 2 | ```python3
!python3 -m pip install -U diffusers peft bitsandbytes protobuf
import diffusers, torch, math
qwen = diffusers.QwenImagePipeline.from_pretrained('Qwen/Qwen-Image', quantization_config=diffusers.PipelineQuantizationConfig(quant_backend='bitsandbytes_4bit', quant_kwargs={'load_in_4bit':True, 'bnb_4bit_quant_ty... | https://github.com/huggingface/diffusers/issues/12719 | open | [] | 2025-11-26T08:35:46Z | 2025-11-26T09:15:54Z | null | chaowenguo |
pytorch/pytorch | 169,112 | `torch.compile(fullgraph=True, dynamic=True)` on CUDA fails when using `torch.utils.dlpack.to_dlpack` / `from_dlpack` (`torch._C._to_dlpack` skipped by Dynamo) | ### 🐛 Describe the bug
### Summary
When compiling a simple model that uses `torch.utils.dlpack.to_dlpack` / `from_dlpack` with:
backend="inductor", fullgraph=True, dynamic=True, device="cuda"
the eager CUDA execution works fine, but `torch.compile` fails during Dynamo tracing with:
> torch._dynamo.exc.Unsupported: A... | https://github.com/pytorch/pytorch/issues/169112 | open | [
"triaged",
"module: dlpack",
"oncall: pt2",
"module: dynamo"
] | 2025-11-26T08:13:38Z | 2025-12-04T02:10:01Z | 3 | tinywisdom |
pytorch/pytorch | 169,106 | Why is fusion restricted here in dynamic mode? | https://github.com/pytorch/pytorch/blob/3ab08946d5052eaeda11d683d6a58e801a032755/torch/_inductor/ir.py#L3555
I wrote a small demo myself and the numerical accuracy is perfect
```python
import torch
from torch import nn
from typing import List
#concat in dynamic dim
class MyCatMul(nn.Module):
def __init__(self, n... | https://github.com/pytorch/pytorch/issues/169106 | closed | [
"triaged",
"oncall: pt2",
"module: inductor"
] | 2025-11-26T03:56:02Z | 2025-12-10T04:43:43Z | 3 | Jin-TaoZhang |
vllm-project/vllm | 29,474 | [P/D][Metrics] Consider combined/summed metrics (e.g. ttft and e2e_request_latency) for prefill and decode instances | ### Your current environment
<details>
<summary>Env info snipped</summary>
```
Collecting environment information...
uv is set
==============================
System Info
==============================
OS : Ubuntu 24.04.1 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ub... | https://github.com/vllm-project/vllm/issues/29474 | open | [
"usage",
"kv-connector"
] | 2025-11-26T02:50:17Z | 2025-11-26T08:31:18Z | 1 | mgw2168-1 |
vllm-project/vllm | 29,472 | [Installation]: how to Install vllm on dell promax gb10 | ### Your current environment
I failed to install vllm on dell promax gb10; the messages are as follows:
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Wed_Aug_20_01:57:39_PM_PDT_2025
Cuda compilation tools, release 13.0, V13.0.88
Build cuda_13.0.r13.0/compiler.3642471... | https://github.com/vllm-project/vllm/issues/29472 | open | [
"installation"
] | 2025-11-26T02:41:18Z | 2026-01-01T12:28:29Z | 2 | goactiongo |
vllm-project/vllm | 29,436 | [Bug]: vLLM Serve with LMCache enabled produces wrong output for GPT-OSS-20B | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
vLLM serve command with LMCache enabled produces wrong output with GPT OSS 20B for subsequent invocations with the s... | https://github.com/vllm-project/vllm/issues/29436 | open | [
"bug"
] | 2025-11-25T19:27:24Z | 2025-11-25T19:27:24Z | 0 | ksuma2109 |
pytorch/ao | 3,389 | Is it possible to export a QAT model in AWQ Format? | I'm new to torchao and QAT but I'm pretty comfortable with PTQ techniques like AWQ and GPTQ. My deployment pipeline requires AWQ format (safetensors supported by autoawq or gptqmodel's new AWQ integration, needs to be in uint32 like Int4PackingFormat.PLAIN_INT32). I want to train a model with Int4WeightOnlyConfig and b... | https://github.com/pytorch/ao/issues/3389 | closed | [
"triaged"
] | 2025-11-25T17:30:03Z | 2025-12-12T17:27:25Z | 10 | ambroser53 |
pytorch/executorch | 15,978 | qnn_executor_runner - mismatch in the skel files ? | hi,
I'm testing qnn_executor_runner on an S25 Ultra,
a Snapdragon 8 Gen 4 processor.
It seems the QNN backend chooses libQnnHtpV79Skel.so as the backend,
but these messages seem to point to some mismatch? It tries to call hmx_v73_convf16,
i.e. shouldn't it call hmx_v79_convf16?
V b037a:4006: CDSP0:[R]: Process "/frp... | https://github.com/pytorch/executorch/issues/15978 | open | [
"partner: qualcomm",
"module: qnn"
] | 2025-11-25T15:14:00Z | 2025-12-19T02:26:49Z | 3 | eliyam32 |
pytorch/executorch | 15,973 | What should I do if there is no SoC entry for my processor? | ### 📚 The doc issue
Hello. I have a device with a Snapdragon 685 processor, which is not on the Qualcomm SoCs list. In this case, is the only option left for me to convert via Xnnpack? And will a model converted via Xnnpack work on Android?
### Suggest a potential alternative/fix
_No response_
cc @cccclai @winskuo-q... | https://github.com/pytorch/executorch/issues/15973 | open | [
"partner: qualcomm",
"module: qnn"
] | 2025-11-25T13:29:32Z | 2025-11-26T01:50:30Z | null | kejndan |
vllm-project/vllm | 29,409 | [Usage]: Custom Logits Processors V1 how to get tokenizer into processor | ### Problem with tokenizer
For the second day now, I've been unable to figure out how to get a tokenizer inside a custom processor. I used the processor from the documentation as an example. I examined each object through debug, but couldn't find where to extract the tokenizer. In v0, this was done simply at the reque... | https://github.com/vllm-project/vllm/issues/29409 | closed | [
"usage"
] | 2025-11-25T13:24:17Z | 2025-12-02T10:33:18Z | 6 | cvadim130 |
pytorch/torchtitan | 2,086 | mxfp8 MoE train is slower for DeepSeekV3 16b and Qwen models | I have tested **mxfp8** train for **Qwen** MoE models, and for **DeepSeekV3 16b** on **B200**. It did not show any speed up and even slows down in some case when I use mxfp8 (quantize.grouped_mm.mx).
I found [this](https://github.com/pytorch/ao/tree/main/torchao/prototype/moe_training#low-precision-moe-training) in to... | https://github.com/pytorch/torchtitan/issues/2086 | open | [] | 2025-11-25T10:33:42Z | 2025-11-26T16:44:51Z | 2 | Yerniyaz |
vllm-project/vllm | 29,389 | [Bug]: race condition in shm_broadcast.py | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
# Problem
`ShmRingBuffer` is a lock-free queue, the implementation of which https://github.com/vllm-project/vllm/blo... | https://github.com/vllm-project/vllm/issues/29389 | open | [
"bug"
] | 2025-11-25T09:25:52Z | 2025-11-25T09:25:52Z | 0 | nvjullin |
pytorch/pytorch | 169,050 | [Graph Partition] [Inductor] UnboundLocalError: cannot access local variable 'buf271' where it is not associated with a value | ### 🐛 Describe the bug
Using "reduce-overhead" mode and "inductor backend for training, with `torch._inductor.config.graph_partition = True`. Run into inductor gen-code bug:
```
[rank0]: File "/home/tiger/.local/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 1044, in _fn
[rank0]: return fn(*ar... | https://github.com/pytorch/pytorch/issues/169050 | open | [
"triaged",
"module: cuda graphs",
"oncall: pt2",
"module: inductor"
] | 2025-11-25T08:29:02Z | 2025-12-01T22:19:24Z | null | wmhst7 |
vllm-project/vllm | 29,382 | [Doc]: Expert Parallel Deployment says "Tensor parallel size (always 1 for now)" is confusing | ### 📚 The doc issue
On page https://docs.vllm.ai/en/latest/serving/expert_parallel_deployment/#single-node-deployment it says Tensor parallel size can only be 1 but didn't mention the behavior of Attention Layers
On page https://docs.vllm.ai/en/latest/serving/data_parallel_deployment/ it says The expert layers will ... | https://github.com/vllm-project/vllm/issues/29382 | closed | [
"documentation"
] | 2025-11-25T07:54:42Z | 2025-12-13T17:38:01Z | 0 | xeonliu |
huggingface/transformers | 42,375 | SAM3 single image inference with multiple text prompt | Hi
I'm trying to run inference on a single image, aiming to get the bbox of objects from several different categories (e.g. "a person" and "a car").
the only example i found for prompting with multiple categories is in the "Batched Inference with Text Prompts" example, but then i need to unnecessarily duplicate my imag... | https://github.com/huggingface/transformers/issues/42375 | open | [] | 2025-11-25T06:20:09Z | 2026-01-05T16:16:01Z | 9 | iariav |
pytorch/pytorch | 169,035 | [Question] Why doesn't torch.ops.symm_mem.multimem_all_reduce_() support e4m3, e5m2, fp16? | ### 🚀 The feature, motivation and pitch
Hi PyTorch developers,
Is there any reason why torch.ops.symm_mem.multimem_all_reduce_() doesn't support e4m3, e5m2, fp16? From the CUDA PTX doc https://docs.nvidia.com/cuda/parallel-thread-execution/#data-movement-and-conversion-instructions-multimem, those data types are supported ... | https://github.com/pytorch/pytorch/issues/169035 | open | [
"oncall: distributed",
"module: symm_mem"
] | 2025-11-25T02:39:22Z | 2025-11-26T15:00:34Z | 0 | XiaoSong9905 |
pytorch/pytorch | 169,033 | Pytorch CI is partially paused for the time being (updated 11/27) | ## Current Status
*ongoing*. Linux and Windows runners are re-enabled as of 12pm 11/27. Mac runners and ROCM/H100 still disabled.
## Error looks like
*No CI was running at all. No merges were processed.*
## Incident timeline (all times pacific)
*Include when the incident began, when it was detected, mitigated, root c... | https://github.com/pytorch/pytorch/issues/169033 | closed | [
"module: ci",
"triaged"
] | 2025-11-25T01:57:30Z | 2025-12-07T20:08:54Z | 3 | malfet |
huggingface/trl | 4,569 | [doc issue] doc on "GRPO with replay buffer" buggy | ### Reproduction
The code example in the [doc for "GRPO with replay buffer"](https://huggingface.co/docs/trl/main/en/experimental#grpo-with-replay-buffer) is somewhat buggy.
- It imports `GRPOWithReplayBufferTrainer` but never uses it.
- It uses `GRPOWithReplayBufferConfig` but never imports it.
- The code is apparently not e... | https://github.com/huggingface/trl/issues/4569 | closed | [
"🐛 bug",
"📚 documentation",
"🏋 GRPO"
] | 2025-11-25T01:30:28Z | 2025-11-25T21:28:00Z | 2 | DNXie |
pytorch/pytorch | 169,002 | Torch dynamo fails to do proper type promotion during export | ### 🐛 Describe the bug
When I tried to use torch.where with a boolean tensor, a float, and an int, torch dynamo tripped up on type promotion and gave me a really unclear error message about what was wrong. When I explicitly converted the int input to float, it worked. Can we develop proper type promotion in the ... | https://github.com/pytorch/pytorch/issues/169002 | open | [
"oncall: pt2",
"oncall: export"
] | 2025-11-24T19:51:33Z | 2025-12-02T20:20:47Z | 1 | aboubezari |
pytorch/pytorch | 169,000 | Dr CI is temporarily not working due to API firewall |
## Current Status
ongoing
## Incident timeline (all times pacific)
Since Nov 21st, 2025
## User impact
*How does this affect users of PyTorch CI?*
The jobs and PRs that depend on Dr CI will see no updates.
## Root cause
*What was the root cause of this issue?*
We changed the configuration of our firewall, this chang... | https://github.com/pytorch/pytorch/issues/169000 | closed | [
"ci: sev"
] | 2025-11-24T19:22:26Z | 2025-12-01T22:13:09Z | 3 | yangw-dev |
pytorch/pytorch | 168,993 | [CI][B200] DGXB200-07 Is Having NVIDIA-CONTAINER-TOOLKIT Related Issues | ## Current Status
On-going
## Error looks like
Only affecting periodic jobs, not PR blocking.
Errors are like: (Using https://github.com/pytorch/pytorch/actions/runs/19630438757/job/56210849037 for example)
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI run... | https://github.com/pytorch/pytorch/issues/168993 | closed | [
"module: cuda",
"module: ci",
"triaged"
] | 2025-11-24T18:35:16Z | 2025-12-02T19:18:32Z | 2 | nWEIdia |
pytorch/pytorch | 168,965 | max_autotuned BMM produces wrong result when multiple threads are used | ### 🐛 Describe the bug
I noticed that when I use aoti_compile_and_package with max_autotune, in certain conditions the result is wrong. Specifically:
1. It's important to `set_num_threads(4)`. With 1 thread it doesn't reproduce.
2. It's important to do `import cv2`; without it the bug doesn't reproduce.
3. Adding `os.... | https://github.com/pytorch/pytorch/issues/168965 | open | [
"triaged",
"module: correctness (silent)",
"oncall: pt2",
"oncall: export",
"oncall: cpu inductor",
"module: aotinductor"
] | 2025-11-24T12:41:52Z | 2025-12-11T12:23:10Z | 6 | mstebelev |
vllm-project/vllm | 29,306 | [Usage]: dots.llm.inst is not running due to a type error | ### Your current environment
I'm trying to run dots llm on 4xH100
```
vllm serve \
--uvicorn-log-level=info \
rednote-hilab/dots.llm1.inst \
--dtype auto \
--api-key xxx \
--host 0.0.0.0 \
--port 8000 \
--tensor-parallel-size 4
--ipc=host \
--trust-remote-code
```
It failed to run, I got the following crash... | https://github.com/vllm-project/vllm/issues/29306 | closed | [
"usage"
] | 2025-11-24T09:48:08Z | 2025-11-28T23:25:27Z | 1 | rain-1 |
pytorch/torchtitan | 2,077 | Context Parallel for Qwen3 | Thanks for supporting Qwen3 models!
> CP is not supported currently because of RoPE embedding implementation details.
Any plan to support CP + EP for Qwen3 MoE models?
If there is no plan in the short term, can you guide me on how to implement it myself? | https://github.com/pytorch/torchtitan/issues/2077 | open | [
"high priority",
"triage review"
] | 2025-11-24T08:09:30Z | 2025-12-15T23:56:00Z | 8 | unavailableun |
huggingface/transformers | 42,353 | SAM3 point mode is not supported yet? | In [SAM3 official example](https://github.com/facebookresearch/sam3/blob/main/examples/sam3_for_sam1_task_example.ipynb
), they also support point mode. But it seems that transformers does not support it yet?
| https://github.com/huggingface/transformers/issues/42353 | closed | [] | 2025-11-24T07:16:52Z | 2025-11-26T15:16:25Z | 1 | haofanwang |
pytorch/executorch | 15,956 | [QNN] Support for in-place modification of mutable buffers (weights) within the QNN delegate? | ### 🚀 The feature, motivation and pitch
### Description
I am working on a model where certain buffers (serving as weights) are updated in-place during the `forward` pass (e.g., zero-order optimization algorithm).
I attempted to export this model and lower it to the QNN backend. My goal is to have the entire graph, ... | https://github.com/pytorch/executorch/issues/15956 | closed | [] | 2025-11-24T06:07:43Z | 2025-11-24T08:40:16Z | 0 | qqqqqqqwy |
vllm-project/vllm | 29,297 | [Bug]: What should the image embedding input be like? I have tested with multiple cases but it all fails | ### Your current environment
```text
==============================
System Info
==============================
OS : Red Hat Enterprise Linux release 8.10 (Ootpa) (x86_64)
GCC version : (GCC) 8.5.0 20210514 (Red Hat 8.5.0-26)
Clang version : Could not co... | https://github.com/vllm-project/vllm/issues/29297 | closed | [
"usage"
] | 2025-11-24T06:02:09Z | 2025-11-26T13:00:17Z | 2 | DamonZhao-sfu |
vllm-project/vllm | 29,294 | [CPU Backend] [Doc]: Update Installation Docs for Arm CPUs | ### 📚 The doc issue
This page https://docs.vllm.ai/en/stable/getting_started/installation/cpu/#arm-aarch64 is very out-dated.
We now release Arm CPU wheels and images thanks to #26931 and #27331
We need to update that page to reflect that :)
### Suggest a potential alternative/fix
_No response_
### Before submitt... | https://github.com/vllm-project/vllm/issues/29294 | closed | [
"documentation",
"cpu"
] | 2025-11-24T05:33:46Z | 2025-12-15T19:46:26Z | 5 | fadara01 |
pytorch/executorch | 15,954 | qnn_llama_runner on SA8295 outputs repetitive “sp” with Qwen3-1.7B after ExecuTorch export | ### 🐛 Describe the bug
use main commit b4d72f1e271915e9c0e1d313753a1eec840fbdee
I have tried several settings. With other settings, the conversion fails with errors like
" some op has incorrect Value 68, expected >= 73"
or
" [ERROR] [Qnn ExecuTorch]: fa_alloc.cc:2462::ERROR:graph requires esti... | https://github.com/pytorch/executorch/issues/15954 | closed | [
"partner: qualcomm",
"module: qnn"
] | 2025-11-24T03:28:00Z | 2025-12-04T03:41:00Z | 12 | lansexinhu |
pytorch/pytorch | 168,940 | [DTensor] aten.max.dim returns wrong indices when using DTensor | ### 🐛 Describe the bug
I found that the current strategy for `aten.max.dim` may produce incorrect indices output if the dim being maximized over is sharded.
Sample code:
```python
import torch
from torch.distributed.tensor import distribute_tensor, Shard
from torch.testing._internal.common_utils import run_tests
from torch.testing... | https://github.com/pytorch/pytorch/issues/168940 | open | [
"oncall: distributed",
"module: dtensor"
] | 2025-11-24T02:36:58Z | 2025-12-12T14:40:32Z | 11 | qqq6op |
vllm-project/vllm | 29,286 | [Performance]: cache system prompt token ids | ### Proposal to improve performance
As system prompts can be very long now, tokenizing the system prompt can be slow.
Using H20, tokenizing 5000 tokens costs about 10ms, as shown below:

System prompts are usually fixed and reusable, so ca... | https://github.com/vllm-project/vllm/issues/29286 | open | [
"performance"
] | 2025-11-24T01:55:32Z | 2025-11-28T08:57:06Z | 2 | Eviannn |
vllm-project/vllm | 29,281 | [Usage]: Removing last generated token from output and kv cache | ### Your current environment
```text
vLLM 0.11.2
```
### How would you like to use vllm
Hey guys,
i am currently working on a research project where i load a moe-like model and i want to do routing based on the sequence state.
The goal is to let expert 0 generate until it reaches the eos token, then remove the eos... | https://github.com/vllm-project/vllm/issues/29281 | closed | [
"usage"
] | 2025-11-23T22:39:16Z | 2025-11-26T09:33:53Z | 0 | josefdra |
vllm-project/vllm | 29,277 | [Usage]: Creating and accessing per request arguments inside vLLM model | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to implement token compression techniques on the output embeddings of Qwen-2.5VL which would occur dynamically as the number of requests change. Is there anyway to implement this in vLLM? I see t... | https://github.com/vllm-project/vllm/issues/29277 | open | [
"usage"
] | 2025-11-23T21:59:31Z | 2025-11-23T21:59:31Z | 0 | minlu21 |
huggingface/transformers | 42,344 | How to fine-tune SAM 3D models? | ### Model description
The recently released SAM 3D work is truly remarkable. Do you plan to integrate it into Transformers and enable fine-tuning?
https://huggingface.co/facebook/sam-3d-objects
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide usefu... | https://github.com/huggingface/transformers/issues/42344 | open | [
"New model"
] | 2025-11-23T17:40:57Z | 2025-11-23T17:40:57Z | null | bruno686 |
vllm-project/vllm | 29,264 | [Usage]: Monkey Patching SamplingParams | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Cou... | https://github.com/vllm-project/vllm/issues/29264 | closed | [
"usage"
] | 2025-11-23T11:45:54Z | 2025-11-24T13:03:50Z | 2 | josefdra |
vllm-project/vllm | 29,263 | [Feature]: Enable flash attention (and/or FlashMLA) for AMD GPUs | ### 🚀 The feature, motivation and pitch
In [this page from flash-attention](https://github.com/Dao-AILab/flash-attention?tab=readme-ov-file#amd-rocm-support), I checked that the upstream `flash-attention` currently has composable_kernel (for newer AMD GPUs) and WIP triton (for older RNDA GPUs, etc.) implementations. ... | https://github.com/vllm-project/vllm/issues/29263 | closed | [
"feature request",
"rocm"
] | 2025-11-23T11:28:47Z | 2025-12-05T01:54:08Z | 4 | Inokinoki |
vllm-project/vllm | 29,245 | [Usage]: Starting qwen3 vl is extremely slow, while sglang starts quickly; what could be the possible cause? | ### Your current environment
Even running python collect_env.py is very slow; the environment was installed directly with uv.
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.2 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04... | https://github.com/vllm-project/vllm/issues/29245 | open | [
"usage"
] | 2025-11-22T20:41:27Z | 2025-12-11T11:23:54Z | 3 | hucorz |
huggingface/candle | 3,208 | `cudarc` dynamic loading support | Currently, `candle` uses `cudarc` with the `dynamic-linking` feature, which requires the executable to find the DLLs or SOs at startup. However, it would be more convenient if `candle` also supported the `dynamic-loading` feature from `cudarc` to load DLLs or SOs at runtime.
Is it possible for `candle` to support it? | https://github.com/huggingface/candle/issues/3208 | open | [] | 2025-11-22T18:18:25Z | 2025-11-25T09:00:27Z | 7 | mayocream |
huggingface/transformers | 42,331 | SAM3 does not support custom inference resolutions | ### System Info
Note: I am running the latest git version, sys Info should not be relevant to the issue
$ transformers env
Traceback (most recent call last):
File "/home/master-andreas/panopticon/test_env/bin/transformers", line 3, in <module>
from transformers.cli.transformers import main
File "/home/master... | https://github.com/huggingface/transformers/issues/42331 | closed | [
"bug"
] | 2025-11-21T22:17:08Z | 2025-12-10T22:46:39Z | 3 | Kallinteris-Andreas |
huggingface/lerobot | 2,500 | question about the gr00t policy | hi,
I see here https://huggingface.co/docs/lerobot/en/groot that gr00t is integrated into lerobot.
Is it in sync with the original repo: https://github.com/NVIDIA/Isaac-GR00T ?
I see in the original repo that the dataset used for fine-tuning is a bit different from the original lerobot format, like the libero dataset (https... | https://github.com/huggingface/lerobot/issues/2500 | open | [
"question",
"policies"
] | 2025-11-21T21:45:19Z | 2025-12-03T14:03:34Z | null | yanan1116 |
vllm-project/vllm | 29,192 | Tool Calling Parsers Fail to Populate tool_calls Array for Qwen2.5-Coder Models | # Tool Calling Parsers Fail to Populate `tool_calls` Array for Qwen2.5-Coder Models
## Environment
- **vLLM Version**: v0.11.2.dev115+g56669c1f2 (Blackwell build)
- **Model**: Qwen/Qwen2.5-Coder-14B-Instruct-AWQ
- **Quantization**: AWQ
- **Python Version**: 3.x (Docker container)
- **GPU**: NVIDIA GeForce RTX 5080 (16... | https://github.com/vllm-project/vllm/issues/29192 | open | [] | 2025-11-21T18:31:19Z | 2025-11-21T18:31:19Z | 0 | Platano78 |
vllm-project/vllm | 29,180 | [Bug]: Recorded `EngineCoreEventType.QUEUED` time is off | ### Your current environment
<details>
</details>
### 🐛 Describe the bug
When running benchmarking with the CLI:
- on one side the serving point `vllm serve ...`
- on the other side the benchmarking client : `vllm bench serve...`
(note that the two are running on the same machine, there is no networking delay)
I ... | https://github.com/vllm-project/vllm/issues/29180 | closed | [
"bug"
] | 2025-11-21T12:58:36Z | 2025-11-30T20:56:44Z | 4 | sducouedic |
vllm-project/vllm | 29,177 | [Usage]: Vllm + InternVL model local infra: image preprocessing / request adding becomes a bottleneck even with more CPU cores — how to accelerate? | ### Your current environment
vllm 0.11.0
### How would you like to use vllm
### current phenomenon
When doing **batched image classification** (64 images per batch) with InternVL3_5-1B, the bottleneck is clearly in the **"Adding requests"** phase (image preprocessing).
Even after increasing CPU cores and setting ... | https://github.com/vllm-project/vllm/issues/29177 | open | [
"usage"
] | 2025-11-21T10:56:29Z | 2025-12-01T14:08:22Z | 3 | Passenger12138 |
pytorch/torchtitan | 2,073 | Slow Dataloader should use num_worker > 1 | I am trying to use torchtitan with procedurally generated data (data augmentation). This process is CPU-intensive and I strongly do not want to store each sample before. Under this setup, `torchtitan` is really slow to train and I'm seeing my MFU dropping by 4-5x compared to unbottlenecked dataloader (no data augmentat... | https://github.com/pytorch/torchtitan/issues/2073 | closed | [] | 2025-11-21T08:13:27Z | 2025-12-19T01:45:50Z | 3 | hypnopump |
huggingface/trl | 4,554 | Better packing of data with best-fit decrease strategy | Hello,
When using packing with the bfd strategy, it looks like too much truncation is done when the seq_length is smaller than the average length of the sequences we want to pack.
For example :
```python
from datasets import Dataset
from trl import pack_dataset
examples = {
"input_ids": [[1, 2, 3, 4], [5, 6], [... | https://github.com/huggingface/trl/issues/4554 | closed | [
"✨ enhancement",
"❓ question"
] | 2025-11-21T07:53:55Z | 2025-12-16T20:37:02Z | 3 | ntnq4 |
pytorch/FBGEMM | 5,161 | Does anyone know how to build fbgemm_gpu from source without fbgemm | I'd like to only build fbgemm_gpu from source without building fbgemm.
Seems that
```
cd fbgemm_gpu
python setup.py install
```
missed some arguments? | https://github.com/pytorch/FBGEMM/issues/5161 | closed | [] | 2025-11-21T07:40:18Z | 2025-11-27T08:45:52Z | null | fmo-mt |
vllm-project/vllm | 29,148 | [Usage]: Deployment of the embedding models | ### Your current environment
```text
==============================
System Info ... | https://github.com/vllm-project/vllm/issues/29148 | closed | [
"usage"
] | 2025-11-21T03:57:59Z | 2025-11-21T06:17:18Z | 3 | Root970103 |
vllm-project/vllm | 29,139 | [Feature]: Optimize collectives in TP MoE case using torch.compile pass | ### 🚀 The feature, motivation and pitch
To avoid redundant work in MoE models in the TP case, sequence parallelism was added to the Deepseek model definition in #24134 and expanded to other models in #24982. However, to avoid performing surgery on the linear layer, the current approach performs more communication tha... | https://github.com/vllm-project/vllm/issues/29139 | open | [
"help wanted",
"good first issue",
"performance",
"feature request",
"torch.compile"
] | 2025-11-21T01:36:06Z | 2025-12-07T15:39:48Z | 19 | ProExpertProg |
pytorch/pytorch | 168,291 | Remove unnecessary `ConstantVariable` wrapping in `raise_observed_exception` | ~We currently convert arguments to `ConstantVariable` before calling `raise_observed_exception` in several places. This conversion is unnecessary as the Python objects can be used directly. Doing so also improves readability of some error reports.~
Before:
```python
Observed exception
Explanation: ...
Hint: ...
... | https://github.com/pytorch/pytorch/issues/168291 | closed | [
"good first issue",
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2025-11-20T19:28:03Z | 2025-12-03T13:48:14Z | 8 | guilhermeleobas |
pytorch/executorch | 15,923 | 1008 error with genie-t2t on SM8850 chipset | ### 🐛 Describe the bug
./genie-t2t-run -c genie_bundle_llama3.2-1b/genie_config.json -p "<|begin_of_text|><|start_header_id|>user<|end_header_id|>"$'\n\n'$"What is France's capital?<|eot_id|><|sta>
Using libGenie.so version 1.13.0
[ERROR] "Failed to create device: 1008"
[ERROR] "Device Creation failure"
Failure to i... | https://github.com/pytorch/executorch/issues/15923 | closed | [] | 2025-11-20T18:49:32Z | 2025-11-24T18:09:35Z | 3 | pbtsvinaysukhesh |
vllm-project/vllm | 29,097 | [Docs] Feedback for `/en/latest/` | ### 📚 The doc issue
no
### Suggest a potential alternative/fix
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of... | https://github.com/vllm-project/vllm/issues/29097 | closed | [
"documentation"
] | 2025-11-20T14:53:44Z | 2025-11-21T07:51:57Z | 2 | ch950684-svg |
pytorch/pytorch | 168,253 | nestedtensor inconsistency in `torch.masked_select` | ### 🐛 Describe the bug
Here is the code that left me with questions: I am not sure if it is a bug, but if it is not, I feel it would be a great addition to the docs. I would expect padded nt and padded nt1 to have the same values at the end of the script, but they do not. If it is not a bug, how can I achieve it: creat... | https://github.com/pytorch/pytorch/issues/168253 | open | [
"triaged",
"module: nestedtensor"
] | 2025-11-20T14:08:48Z | 2025-11-21T17:47:04Z | 2 | rustamzh |
vllm-project/vllm | 29,089 | [Performance]: Can we use CUDA graph to accelerate the Qwen2_5omniAudioEncoder in Qwen2.5-Omni-3B? | ### Proposal to improve performance
<img width="3088" height="1264" alt="Image" src="https://github.com/user-attachments/assets/535d7854-b9db-4e40-8f85-1abe08b4d35e" />
The trace graph shows that Qwen2_5omniAudioEncoder has a large number of small kernel startups, indicating significant room for optimization.
Can we u... | https://github.com/vllm-project/vllm/issues/29089 | open | [
"performance"
] | 2025-11-20T12:13:58Z | 2025-11-20T12:13:58Z | 0 | xq25478 |
pytorch/torchrec | 3,567 | how to use torch.distributed.checkpoint to save and load state dict | sparse_arch is a part of my model.
<img width="721" height="698" alt="Image" src="https://github.com/user-attachments/assets/cb35959b-418e-4ff4-8e12-4524528cbad2" />
<img width="1439" height="684" alt="Image" src="https://github.com/user-attachments/assets/d008966e-e2d2-404d-bcda-bce3e3285eed" /> | https://github.com/meta-pytorch/torchrec/issues/3567 | open | [] | 2025-11-20T09:30:47Z | 2025-11-20T09:30:47Z | 0 | haolujun |
vllm-project/vllm | 29,078 | [Performance]: Excessive CPU usage caused by running multiple instances | ### Your current environment
GPU: RTX4090
cuda version: cuda12.8
vllm version: 0.11.0
I used the vLLM backend of Triton Server to launch 4 instances of the minerU2.5 model. My server has 2 GPUs, and I launched 1 instance per GPU. I noticed that the CPU load is sometimes extremely high, nearly saturating my server, which has 96 cores. The vLLM backend uses AsyncLLMEngine. I observed that with a single instance on one GPU, when I send 200 small text images for OCR, the fps reaches its maximum, i.e. it can process 200 images per second, with CPU load around 40-50%. To further increase performance,... | https://github.com/vllm-project/vllm/issues/29078 | closed | [
"usage"
] | 2025-11-20T08:26:35Z | 2025-11-21T02:17:51Z | 4 | zjq1996518 |
huggingface/transformers | 42,291 | Can we disable IPython progress bar and use normal tqdm bar? | I like the normal tqdm bar much better, it is lighter, cleaner, simpler, and less stress on my eyes (no green color). I would love to have an option to use tqdm bar and not IPython bar. | https://github.com/huggingface/transformers/issues/42291 | closed | [] | 2025-11-20T01:26:11Z | 2025-12-28T08:02:45Z | 1 | weathon |
pytorch/pytorch | 168,186 | 2nd example of large numeric divergence for torch compile vs eager in bf16 | ### 🐛 Describe the bug
First example is https://github.com/pytorch/pytorch/issues/168126.
Here's another smaller example where I'm seeing a significant difference (rtol 1.0) between eager and compiled when running under bf16. Somehow the call to `torch.chunk` in `Module2` causes a numeric divergence to occur. It's l... | https://github.com/pytorch/pytorch/issues/168186 | closed | [
"triaged",
"oncall: pt2",
"module: inductor"
] | 2025-11-19T21:19:58Z | 2025-12-01T19:20:59Z | 6 | jamin-chen |
vllm-project/vllm | 29,023 | [Feature]: Disable logging `/metrics` | ### 🚀 The feature, motivation and pitch
- IGW hits `/metrics` continuously to understand the current load on the system
- This leads to an overload of logs
- We can disable this with `--disable-uvicorn-access-log`, but lose access to all access logs
We should have `--disable-uvicorn-metrics-access-log` to avoid logg... | https://github.com/vllm-project/vllm/issues/29023 | open | [
"help wanted",
"good first issue",
"feature request"
] | 2025-11-19T18:25:48Z | 2025-11-19T21:57:34Z | 5 | robertgshaw2-redhat |
huggingface/sentence-transformers | 3,575 | How to override model's `max_seq_length`? | It seems that impossible to override model's max length from `sentence_bert_config.json`.
```python
from sentence_transformers import SentenceTransformer
m = SentenceTransformer("intfloat/e5-small", tokenizer_kwargs={"model_max_length":3})
print(m.tokenize(["hi hi hi hi hi hi hi hi hi hi hi hi hi"]))
# {'input_ids':... | https://github.com/huggingface/sentence-transformers/issues/3575 | open | [] | 2025-11-19T16:42:27Z | 2025-11-20T13:47:13Z | null | Samoed |
huggingface/trl | 4,546 | Does TRL support PipelineRL for compute efficiency? | Hi 👋,
I'm trying to understand whether TRL currently supports (or plans to support) the PipelineRL approach described here:
- Paper: [https://arxiv.org/pdf/2509.19128v2](https://arxiv.org/pdf/2509.19128v2?utm_source=chatgpt.com)
- Overview: [https://arxiv.org/html/2509.19128](https://arxiv.org/html/2509.19128?utm_so... | https://github.com/huggingface/trl/issues/4546 | open | [
"✨ enhancement",
"❓ question"
] | 2025-11-19T12:39:29Z | 2025-11-22T12:43:54Z | 3 | harisarang |
pytorch/torchrec | 3,561 | How can I export a trained model to the Triton inference server? | How can I export a trained model to the Triton inference server?
Are there any examples of exporting models, whether using Torch-TensorRT or TorchScript? | https://github.com/meta-pytorch/torchrec/issues/3561 | open | [] | 2025-11-19T08:20:51Z | 2025-11-19T08:20:51Z | 0 | intfish123 |
pytorch/pytorch | 168,148 | BF16 activation precision mismatch between eager ATen and compiled Triton | ### 🐛 Describe the bug
I’d like to report that for activation operators such as `sigmoid` and `tanh`, when the input dtype is `bf16`, the computation precision differs between eager mode and `compile[triton]`. In eager mode, ATen computes directly in `bf16`, but the generated Triton kernel upcasts to `fp32` → applies... | https://github.com/pytorch/pytorch/issues/168148 | closed | [
"high priority",
"triaged",
"oncall: pt2",
"module: inductor"
] | 2025-11-19T08:10:53Z | 2025-11-28T06:05:05Z | 6 | zhaoying9105 |
pytorch/torchrec | 3,559 | How to convert DistributedModelParallel to quantize_inference_model and use torch.jit.script to save? | I run a example in `https://github.com/facebookresearch/dlrm/tree/main/torchrec_dlrm`, and want to save model with `torch.jit.script`, but it has error.
command:
```
export LEARNING_RATE=0.5;
torchx run -s local_cwd dist.ddp -j 1x1 --script dlrm_main.py -- --batch_size 2048 --learning_rate $LEARNING_RATE -... | https://github.com/meta-pytorch/torchrec/issues/3559 | open | [] | 2025-11-19T06:51:01Z | 2025-11-19T06:53:01Z | 0 | intfish123 |
vllm-project/vllm | 28,996 | [Usage]: How to run a single data parallel deployment across multiple nodes without ray | ### Your current environment
2 Nodes, each node has 8 H20 GPUs.
### How would you like to use vllm
According to https://docs.vllm.ai/en/latest/serving/data_parallel_deployment/#internal-load-balancing
```shell
# node0
vllm serve Qwen3-Coder-480B-A35B-Instruct --trust-remote-code --max-num-seqs 64 --max-model-len 13... | https://github.com/vllm-project/vllm/issues/28996 | closed | [
"usage"
] | 2025-11-19T06:47:22Z | 2025-11-27T06:17:22Z | 3 | crystalww |
vllm-project/vllm | 28,986 | [Feature]: Fused Kernel for GPT-OSS Router | ### 🚀 The feature, motivation and pitch
<img width="1257" height="250" alt="Image" src="https://github.com/user-attachments/assets/31eba061-522c-4521-b0a9-9f25bb36c3df" />
- Right now, we spend ~3.5% of the layer in the expert selection
- The operation is unfused
Write a fused kernel like we have for deepseek group... | https://github.com/vllm-project/vllm/issues/28986 | open | [
"help wanted",
"good first issue",
"feature request"
] | 2025-11-19T03:18:25Z | 2025-12-12T16:16:37Z | 7 | robertgshaw2-redhat |
huggingface/transformers.js | 1,458 | ONNX Backend Env variable | ### Question
Hi,
For some context, I'm building an application that uses some of the models on huggingface as an annotation tool that helps create annotations for training a specialised model.
As for the specialised model, I am able to export them to onnx, and I was able to run this model in the same application, b... | https://github.com/huggingface/transformers.js/issues/1458 | open | [
"question"
] | 2025-11-19T01:26:02Z | 2025-11-25T15:36:13Z | null | Heinrik-20 |
pytorch/vision | 9,276 | where did torchvision v0.10.0 go? | I am trying to download torchvision v0.10.0 to my Jetson Nano to build it but I am always getting this error:
```
ams@ams-Alienware-m17-R3:~$ git ls-remote --tags https://github.com/pytorch/vision.git
remote: Internal Server Error
fatal: unable to access 'https://github.com/pytorch/vision.git/': The requested URL retu... | https://github.com/pytorch/vision/issues/9276 | closed | [] | 2025-11-18T21:32:56Z | 2025-11-19T09:03:29Z | 1 | abdosalem490 |
pytorch/pytorch | 168,099 | Unify pointwise DTensor and NestedTensor OP Coverage. Adds over 100 op overloads to DTensor and about 10 to NestedTensor | ### 🚀 The feature, motivation and pitch
Currently, DTensor maintains its own list of which ops are pointwise. NestedTensor has a similar requirement and instead elected to add a pointwise tag to OpInfo. Maintaining two separate lists of pointwise ops is error prone. We should have both use a single source of informa...
"oncall: distributed",
"triaged",
"module: dtensor",
"llm-amenable"
] | 2025-11-18T19:47:48Z | 2025-11-24T19:04:58Z | 2 | Skylion007 |
vllm-project/vllm | 28,956 | [Bug]: OOM when profiling multimodal model with multiple images | ### Your current environment
vLLM 0.11.0
### 🐛 Describe the bug
As per title.
The error log is as follows:
```
[multiproc_executor.py:671] Traceback (most recent call last):
[multiproc_executor.py:671] File "/root/miniconda3/lib/python3.11/site-packages/vllm/v1/executor/multiproc_executor.py", line 666, in work... | https://github.com/vllm-project/vllm/issues/28956 | closed | [
"bug"
] | 2025-11-18T17:36:55Z | 2025-11-25T12:38:37Z | 7 | imShZh |
huggingface/lerobot | 2,475 | Why there is difference between async inference and local inference in image resize? | I read code between `src/lerobot/async_inference/policy_server.py` and `src/lerobot/scripts/lerobot_record.py`. I found difference in these 2 code about inference which causes different image shape
1. `src/lerobot/scripts/lerobot_record.py` use this to deal with observation
And `prepare_observation_for_inference` is li... | https://github.com/huggingface/lerobot/issues/2475 | open | [
"question"
] | 2025-11-18T14:32:17Z | 2025-11-24T02:23:13Z | null | milong26 |
pytorch/torchtitan | 2,053 | Training Qwen3-0.6B with loss mismatch. | ### Bug description
When using the config file 'torchtitan/models/qwen3/train_configs/qwen3_0.6b.toml', the starting loss of 12x suggests the weights may not have been loaded properly.
<img width="1541" height="510" alt="Image" src="https://github.com/user-attachments/assets/ed61a47c-1c6e-47e3-8503-ec84df085f83" />
... | https://github.com/pytorch/torchtitan/issues/2053 | closed | [
"question"
] | 2025-11-18T14:24:43Z | 2025-12-18T09:24:46Z | null | Joluck |
vllm-project/vllm | 28,943 | [Usage]: what's the right way to run embedding model in vllm 0.11.0 | ### Your current environment
```text
The output of `python collect_env.py`
```
In vllm 0.8.7, I use the following code to run vllm locally, and everything works:
```
self.engine_args = EngineArgs(
model=self.model_path,
dtype='half',
task="embed",
trust_remote_code=True,
... | https://github.com/vllm-project/vllm/issues/28943 | open | [
"usage"
] | 2025-11-18T13:47:57Z | 2025-11-20T10:49:12Z | 3 | neverneverendup |
huggingface/trl | 4,541 | Is attn_implementation=sdpa not supported when using SFTTrainer with mllama? | When trying to use `sdpa` with mllama I get an error using the default collator. Upon writing my own collator it works.
When using `eager` implementation it gives cuda oom error. Is `sdpa` not supported? | https://github.com/huggingface/trl/issues/4541 | open | [] | 2025-11-18T11:57:01Z | 2025-11-18T11:57:01Z | 0 | osaidr |
vllm-project/vllm | 28,930 | [Usage]: How to build a qwen3vl embedding model with a custom mlp layer on the top use vllm? | ### Your current environment
```text
The output of `python collect_env.py`
```
Hi friends! I trained an SFT model built upon the qwen3vl 2b model; we put an MLP layer on top of it to compress the embedding size of the backbone model. Now I want to use vllm 0.11.0 to serve it, but I have run into some confusion. Here is my custom class code
`... | https://github.com/vllm-project/vllm/issues/28930 | closed | [
"usage"
] | 2025-11-18T10:32:07Z | 2025-12-23T04:49:30Z | 10 | neverneverendup |
vllm-project/vllm | 28,929 | [Usage]: How | = | https://github.com/vllm-project/vllm/issues/28929 | closed | [
"usage"
] | 2025-11-18T10:26:17Z | 2025-11-18T10:30:53Z | 0 | neverneverendup |
huggingface/datasets | 7,869 | Why does dataset merge fail when tools have different parameters? | Hi, I have a question about SFT (Supervised Fine-tuning) for an agent model.
Suppose I want to fine-tune an agent model that may receive two different tools: tool1 and tool2. These tools have different parameters and types in their schema definitions.
When I try to merge datasets containing different tool definitions... | https://github.com/huggingface/datasets/issues/7869 | open | [] | 2025-11-18T08:33:04Z | 2025-11-30T03:52:07Z | 1 | hitszxs |
pytorch/pytorch | 168,065 | On aarch64, `pip install torch` resulted in the CPU version? | ### 🐛 Describe the bug
Hi, I'm noticing that trying to `pip install torch` results in the CPU version of stable torch.
Repro:
1. Get an aarch64 machine, e.g. GB200
2. `pip install torch`
3. `pip list`, see if you see cudnn cublas etc
It can be bypassed with
```
pip3 install torch --index-url https://download.pytorch.... | https://github.com/pytorch/pytorch/issues/168065 | open | [
"module: docs",
"module: cuda",
"triaged"
] | 2025-11-18T04:59:16Z | 2025-11-24T19:19:58Z | 3 | henrylhtsang |
vllm-project/vllm | 28,903 | [Bug]: vllm inference on qwen3-vl when use_upstream_fa is False | ### Your current environment
pip show torch vllm flash-attn
Name: torch
Version: 2.8.0
---
Name: vllm
Version: 0.11.0
Name: flash_attn
Version: 2.8.3
### 🐛 Describe the bug
The unit-test code is as follows;
the simple qwen3-0.6B model runs, but qwen3-vl-4b does not run.
```python
#coding=utf-8
"""
Write unit tests to verify the availability and compatibility of FA and VLLM... | https://github.com/vllm-project/vllm/issues/28903 | closed | [
"bug"
] | 2025-11-18T03:54:11Z | 2025-11-18T08:18:09Z | 1 | hedes1992 |
huggingface/lerobot | 2,465 | loss:nan grdn:nan How to solve the gradient explosion problem in PI05 training? | When training Pi05 using Lerobot, has anyone encountered a situation where gradients explode immediately after training starts? Errors occur when the batch_size is set to 64 or 32. How can this be resolved?
Below are my training commands and error logs.
python src/lerobot/scripts/lerobot_train.py --dataset.repo_id=aa_merge... | https://github.com/huggingface/lerobot/issues/2465 | open | [
"bug",
"policies",
"training"
] | 2025-11-18T03:46:28Z | 2025-12-03T16:13:56Z | null | Lilgeneric |
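
The preview above is the kind of table the Hugging Face dataset viewer renders for a Hub dataset of GitHub issues. As a minimal sketch of loading and filtering such a dataset with the `datasets` library — assuming the data is hosted on the Hub; the dataset ID below is a placeholder, not a real repository:

```python
# Minimal sketch (assumptions noted): load a GitHub-issues dataset like the one
# previewed above. "your-org/github-issues" is a hypothetical dataset ID.
from datasets import load_dataset

ds = load_dataset("your-org/github-issues", split="train")

# Column names match the preview header: repo, number, title, body, url,
# state, labels, created_at, updated_at, comments, user.
open_vllm_issues = ds.filter(
    lambda row: row["repo"] == "vllm-project/vllm" and row["state"] == "open"
)
print(len(open_vllm_issues), open_vllm_issues[0]["title"])
```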