Column schema:
repo: string (147 distinct values)
number: int64 (1 to 172k)
title: string (length 2 to 476)
body: string (length 0 to 5k)
url: string (length 39 to 70)
state: string (2 distinct values)
labels: list (length 0 to 9)
created_at: timestamp[ns, tz=UTC] (2017-01-18 18:50:08 to 2026-01-06 07:33:18)
updated_at: timestamp[ns, tz=UTC] (2017-01-18 19:20:07 to 2026-01-06 08:03:39)
comments: int64 (0 to 58)
user: string (length 2 to 28)
huggingface/lerobot
2,464
Questions about Pi0.5 Model Training Details and High Level Planning Implementation
Hello, while studying the Pi0.5 model, I have two questions regarding the model implementation: 1. The paper mentions that the model adopts two-stage pre-training and designs a comprehensive loss function. However, when checking the compute_loss part in the open-source code, it is found that...
https://github.com/huggingface/lerobot/issues/2464
open
[ "question", "training" ]
2025-11-18T01:27:59Z
2025-11-20T10:45:34Z
null
Ginldaj
vllm-project/vllm
28,876
[CI Failure]: should test_cumem.py use spawn or fork in cuda?
### Name of failing test tests/basic_correctness/test_cumem.py ### Basic information - [ ] Flaky test - [x] Can reproduce locally - [ ] Caused by external libraries (e.g. bug in `transformers`) ### 🧪 Describe the failing test The test only fails locally for me when I use vllm main branch and on the CI of my PR, e...
https://github.com/vllm-project/vllm/issues/28876
open
[ "ci-failure" ]
2025-11-17T18:58:08Z
2025-11-17T20:59:14Z
1
jerryzh168
vllm-project/vllm
28,868
[Bug]: When compiling with ranges, we should pass the range information to Inductor
### Your current environment main ### 🐛 Describe the bug Might be more of a feature request. Context is that https://github.com/vllm-project/vllm/pull/24248 adds a new compile ranges API, where a user can specify which ranges to compile on. We should tell Inductor how to constrain the compilation on the symints of...
https://github.com/vllm-project/vllm/issues/28868
open
[ "bug", "torch.compile" ]
2025-11-17T15:41:50Z
2026-01-05T23:37:12Z
1
zou3519
pytorch/pytorch
167,994
CI Not Detecting Failing Tests in test/distributed/elastic/*
A significant number of tests under `test/distributed/elastic/` are failing, but CI does **not** surface these failures, possibly same with test/distributed/launcher, Many of these tests appear to have been broken for a long time without detection. I opened a PR with fixes, but I believe this warrants an issue so the t...
https://github.com/pytorch/pytorch/issues/167994
open
[ "oncall: distributed", "module: ci" ]
2025-11-17T15:39:56Z
2025-11-17T18:18:56Z
0
harikodali
pytorch/pytorch
167,991
Warnings from inside Dynamo should include at least one level of stack trace
We saw the following in vLLM: ``` (Worker_TP6_EP6 pid=3247488) /home/robertgshaw2-redhat/vllm/.venv/lib64/python3.12/site-packages/torch/_dynamo/variables/functions.py:1692: UserWarning: Dynamo detected a call to a `functools.lru_cache`-wrapped function. Dynamo ignores the cache wrapper and directly traces the wrapped ...
https://github.com/pytorch/pytorch/issues/167991
closed
[ "triaged", "oncall: pt2", "module: dynamo", "vllm-compile", "module: compile ux", "module: vllm", "dynamo-triage-dec2025" ]
2025-11-17T15:31:02Z
2026-01-01T18:17:59Z
1
zou3519
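The issue above asks that warnings raised from inside Dynamo carry at least one level of the user's stack. In plain Python, `warnings.warn(..., stacklevel=2)` is the standard mechanism for attributing a warning to the caller rather than to the library frame. A minimal sketch (the function names here are illustrative, not Dynamo's code):

```python
import warnings

def library_helper():
    # stacklevel=2 reports the warning at library_helper()'s caller,
    # so the user sees a line from their own code, not the library's
    warnings.warn("Dynamo detected a call to a lru_cache-wrapped function",
                  UserWarning, stacklevel=2)

def user_code():
    library_helper()

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    user_code()

print(caught[0].category.__name__)  # UserWarning
```

With `stacklevel=2`, the recorded warning's file/line point at the `library_helper()` call site inside `user_code`, which is exactly the "one level of stack trace" the report requests.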
vllm-project/vllm
28,866
[Usage]: When is going to be the next release?
Hi everyone, Thank you for developing such a great tool! I was wondering when the next release is scheduled. I’m interested in running Gemma3-text type architecture GGUF quantized models with VLLM. Are there any alternatives to do this with the latest release (v0.11.0)? I also noticed that you merged this PR with th...
https://github.com/vllm-project/vllm/issues/28866
open
[ "usage" ]
2025-11-17T15:24:47Z
2025-11-19T10:51:47Z
1
Invalid-coder
huggingface/transformers
42,241
How to use padding with Mistral?
I'm trying to understand how to use Mistral with `batch_size` > 1. One aspect of this is setting `padding="longest"` in, e.g., `MistralCommonTokenizer.encode()`. But I'm getting `TypeError: 'set' object is not callable` when I try this. Example: ```python import torch from transformers import MistralForCausalLM, Mistra...
https://github.com/huggingface/transformers/issues/42241
closed
[]
2025-11-17T12:54:21Z
2025-11-19T06:11:44Z
null
TopCoder2K
pytorch/audio
4,132
How can I use one streamwriter to write multiple videos?
### 🚀 The feature Use one streamwriter to write multiple videos. ### Motivation, pitch Can the streamwriter support writing multiple videos using the same object, with each video corresponding to a different stream, when I use the GPU to encode? In the current situation, this results in writing to the same buffer, ultimately...
https://github.com/pytorch/audio/issues/4132
open
[]
2025-11-17T11:55:56Z
2025-11-17T11:55:56Z
null
Z-NAVY
pytorch/pytorch
167,977
[DTensor] Sharding propagation failed for custom operation with Tensor in kwargs
### 🐛 Describe the bug I try to register strategy for my custom operation by ```@register_sharding```, which has Tensor params in kwargs. And my custom strategy function provides strategies for all DTensor in args and kwargs. During sharding propagation, an AssertionError `assert len(input_specs) == len(input_args_st...
https://github.com/pytorch/pytorch/issues/167977
closed
[ "oncall: distributed", "module: dtensor" ]
2025-11-17T11:43:09Z
2025-11-24T05:24:21Z
3
qqq6op
huggingface/chat-ui
1,986
How can I use default_headers={ "X-HF-Bill-To": "org-name" } in my chat-ui local deployment?
Hi, I want to bill my Inference usage to my organization and would like to pass the default_headers={ "X-HF-Bill-To": "org-name" } parameter. How can I do that?
https://github.com/huggingface/chat-ui/issues/1986
open
[ "support" ]
2025-11-17T08:33:41Z
2025-11-17T08:33:41Z
null
aditya-oss-prog
huggingface/diffusers
12,672
How to set pipe "requires_grad=true"?
I have set the variable and the model requires_grad=True with the following: `pipe.transformer.requires_grad = True`, `pipe.vae.requires_grad = True`, `prev_sample = prev_sample.detach().requires_grad_(True)`, but the `requires_grad` of the result produced by the pipe is still not True: `image_tar = pipe.vae.decode(prev_sampl...
https://github.com/huggingface/diffusers/issues/12672
closed
[]
2025-11-17T03:36:43Z
2025-11-20T12:19:20Z
null
micklexqg
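A likely cause of the confusion in the issue above: assigning `module.requires_grad = True` only sets a plain Python attribute on the module object, while the method `module.requires_grad_(True)` is what actually flips the flag on each parameter. The pitfall can be shown with a minimal stand-in class (this is an illustration of attribute vs. method semantics, not torch's real `nn.Module`):

```python
class Module:
    # minimal stand-in: a real torch Module stores parameters, and
    # Module.requires_grad_(True) updates the flag on each of them
    def __init__(self):
        self.params = [{"requires_grad": False}]

    def requires_grad_(self, flag=True):
        for p in self.params:
            p["requires_grad"] = flag
        return self

m = Module()
m.requires_grad = True                        # creates a new, unrelated attribute
print(m.params[0]["requires_grad"])           # False: parameters untouched
m.requires_grad_(True)                        # the method updates the parameters
print(m.params[0]["requires_grad"])           # True
```

The same distinction applies to `pipe.transformer` and `pipe.vae` in the snippet quoted above, assuming the torch semantics described here.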
pytorch/pytorch
167,950
Insufficient documentation about the batching logic of `torch.linalg.solve`
### 📚 The doc issue The documentation for `torch.linalg.solve` states that > > Letting _*_ be zero or more batch dimensions, > If `A` has shape _(*, n, n)_ and `B` has shape _(*, n)_ (a batch of vectors) or shape _(*, n, k)_ (a batch of matrices or “multiple right-hand sides”), this function returns _X_ of shape _(*...
https://github.com/pytorch/pytorch/issues/167950
open
[ "module: docs", "triaged", "module: linear algebra" ]
2025-11-17T02:02:25Z
2025-11-19T16:44:07Z
5
hchau630
huggingface/diffusers
12,669
Flux1-Dev inference with single file ComfyUI/SD-Forge Safetensors
Is it possible to run inference with diffusers using a single-file safetensors created for ComfyUI/SD-Forge? It looks like FluxPipeline.from_single_file() might be intended for this purpose, but I'm getting the following errors: ``` import torch from diffusers import FluxPipeline pipe = FluxPipeline.from_single_fil...
https://github.com/huggingface/diffusers/issues/12669
open
[]
2025-11-16T11:57:48Z
2025-12-03T16:53:58Z
12
ddpasa
pytorch/pytorch
167,906
Avoid Exception Refcycle Problems
### 🚀 The feature, motivation and pitch https://github.com/pytorch/pytorch/blob/d01a7b0241ed1c4cded7e7ca097249feb343f072/torch/_utils.py#L720-L726 The traceback refcycle problem can happen whenever an exception is stored in a local variable. This happened in many places across pytorch: ``` $ grep ' = e$' torch -R ...
https://github.com/pytorch/pytorch/issues/167906
open
[ "module: memory usage", "triaged", "better-engineering", "module: python frontend" ]
2025-11-15T16:47:16Z
2025-11-18T22:15:10Z
1
ppwwyyxx
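The `err = e` pattern flagged in the issue above can be reproduced in plain Python: storing the exception keeps its `__traceback__`, which references the frame, which references the local, forming a cycle that only the garbage collector can break. A self-contained demonstration (class and function names are illustrative):

```python
import gc
import weakref

class Payload:
    pass

def leaky():
    obj = Payload()
    try:
        raise ValueError("boom")
    except ValueError as e:
        err = e  # keeps e.__traceback__, which references this frame
    return weakref.ref(obj), err

def careful():
    obj = Payload()
    try:
        raise ValueError("boom")
    except ValueError as e:
        msg = str(e)  # keep only what is needed; the frame can die
    return weakref.ref(obj), msg

wr_leak, err = leaky()
wr_ok, msg = careful()
print(wr_ok() is None)    # True: freed promptly by refcounting
del err                   # drop the last external ref to the exception
print(wr_leak() is None)  # False: the frame<->traceback cycle persists
gc.collect()              # only the cycle collector can free it
print(wr_leak() is None)  # True
```

This is why the linked `torch/_utils.py` code (and the `grep ' = e$'` hits) can delay frees until a collection pass: the object stays alive by refcount until `gc.collect()` runs.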
pytorch/pytorch
167,901
Invalid `__global__` write of size 16 bytes in torch.bmm with sparse tensors
### 🐛 Describe the bug When using `torch.bmm` with sparse tensors, a CUDA `__global__` memory write out-of-bounds error occurs. ```python import torch m1 = torch.randn(2, 291105, 1).to_sparse().cuda() m2 = torch.randn(2, 1, 1).cuda() print([m1.size(), m2.size()]) torch.bmm(m1, m2) ``` ### How to Reproduce 1. S...
https://github.com/pytorch/pytorch/issues/167901
open
[ "module: sparse", "triaged", "module: sanitizers" ]
2025-11-15T03:25:41Z
2025-11-24T04:30:02Z
1
supermarkli
huggingface/ai-deadlines
41
How to indicate ARR deadlines
Right now the yaml format assumes conferences with locations and dates, but ACL ARR has rolling deadlines not tied to a physical conference. We are largely operating around these deadlines. How can we incorporate these into this system?
https://github.com/huggingface/ai-deadlines/issues/41
open
[]
2025-11-15T00:26:33Z
2025-11-15T00:26:33Z
null
morrisalp
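One hedged option for the rolling-deadline case above is an entry that omits the venue/location fields and lists cycle dates instead. The field names below are a hypothetical extension, not the project's actual schema:

```yaml
- title: ACL Rolling Review (ARR)
  id: arr2026
  rolling: true            # no physical venue; recurring review cycles
  deadlines:               # one entry per cycle, newest first
    - 2026-02-15 23:59:59
    - 2025-12-15 23:59:59
  timezone: UTC-12
  tags: [NLP]
```

A renderer could then show the next upcoming cycle date wherever other conferences show their single deadline.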
pytorch/torchtitan
2,046
Any interest in adding MLPerf Llama 3 8B to TorchTitan models ?
It would be great to have MLPerf Llama 3 pre-training working out of the box with TorchTitan. Here are some references: [MLPerf Training Adds Llama 3.1 8B Benchmark](https://mlcommons.org/2025/10/training-llama-3-1-8b/) [small_llm_pretraining/nemo](https://github.com/mlcommons/training/tree/master/small_llm_pretraining...
https://github.com/pytorch/torchtitan/issues/2046
open
[]
2025-11-14T18:38:59Z
2026-01-05T22:49:56Z
14
githubsgi
pytorch/pytorch
167,843
Some docs are outdated about how to access ctx object in forward function?
### 📚 The doc issue I remember some docs said that the forward function (originally in a torch.autograd.Function subclass) can pass anything to the setup_context function by saving the data to the ctx object. I was off for a while. Back in 2.6, the input params for the forward function looked like (ctx, *input), but now it's (input_1...
https://github.com/pytorch/pytorch/issues/167843
closed
[ "module: autograd", "triaged" ]
2025-11-14T16:05:18Z
2025-11-28T05:06:40Z
null
YagaoDirac
pytorch/xla
9,712
Why isn't there a binding for clearing the XLAGraphExecutor::ComputationCache?
We have exposed this function in our [tenstorrent fork](https://github.com/tenstorrent/pytorch-xla/pull/16/files) and found that it works for post-test cleanup. My assumption was that TPU runtime does not require such a feature because it does not bind scarce device resources to PJRTComputation lifetime. So, implemen...
https://github.com/pytorch/xla/issues/9712
open
[ "question", "runtime" ]
2025-11-14T15:19:55Z
2025-11-24T18:55:09Z
null
jameszianxuTT
huggingface/diffusers
12,662
question on stable_audio_transformer.py
Excuse me, I am learning the code of `class StableAudioDiTModel`, and I do not know what the argument `global_states_input_dim` is used for. It seems to be a required component that must be packed before the hidden_states sequence, and its default dim seems larger than the transformer inner_dim. What is that...
https://github.com/huggingface/diffusers/issues/12662
open
[]
2025-11-14T09:26:01Z
2025-11-25T08:53:39Z
1
JohnHerry
vllm-project/vllm
28,717
[Usage]: Errors running vLLM docker in a closed environment with gpt-oss-120b on RTX 6000 Pro
### Your current environment Can't get vLLM to start with the below configuration. Seems to have issues loading in the model .safetensors. Any ideas on what could be causing it? vllm version: 0.11.1 CPU: Intel Xeon w7-2595X GPU: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition Model: https://huggingface.co/...
https://github.com/vllm-project/vllm/issues/28717
open
[ "usage" ]
2025-11-14T08:49:48Z
2025-11-20T15:45:21Z
3
antonkarlsson1
pytorch/pytorch
167,820
Why does torch==2.9 crash when compiling the qwen3 model with block ptr?
### 🐛 Describe the bug torch==2.8 compiles fine with "torch._inductor.config.triton.use_block_ptr = True", but torch 2.9 crashes as shown in the figure. <img width="1268" height="402" alt="Image" src="https://github.com/user-attachments/assets/9cc9742a-be5b-4754-a954-01aac02fb936" /> ```python import torch from vllm...
https://github.com/pytorch/pytorch/issues/167820
open
[ "needs reproduction", "triaged", "oncall: pt2", "vllm-compile", "module: vllm" ]
2025-11-14T08:06:29Z
2025-11-18T06:07:13Z
2
TracyMac1
pytorch/pytorch
167,818
undefined symbol for `at::meta::_index_put_impl_` when running or compiling executable on my own torch-related project.
### 🐛 Describe the bug I have a torch extended backend(PrivateUse1), somewhere in my code, I invoked `at::meta::_index_put_impl_` API. undefined symbol error occurs when I try to create executable or running python. `at::meta::_index_put_impl_` seems like a LOCAL symbol in libtorch_cpu.so, and not exist in dynsym, b...
https://github.com/pytorch/pytorch/issues/167818
open
[ "module: binaries", "triaged", "actionable", "module: PrivateUse1" ]
2025-11-14T07:32:33Z
2025-12-08T06:43:48Z
4
sunjiabin17
huggingface/trl
4,525
How to modify the advantage computation in GRPOTrainer
I’m looking to customize the advantage computation used in the DAPO algorithm. Do I need to subclass the full GRPOTrainer to do this, or is it sufficient to overwrite the logic in _generate_and_score_completions, since that method appears to handle the advantage calculation?
https://github.com/huggingface/trl/issues/4525
open
[ "❓ question", "🏋 GRPO" ]
2025-11-14T03:48:17Z
2025-11-14T11:37:18Z
null
Tuziking
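For context on the question above: GRPO's advantage is group-relative — each completion's reward is standardized against the other completions sampled for the same prompt. A minimal stdlib sketch of that computation (not TRL's actual implementation):

```python
from statistics import mean, pstdev

def group_relative_advantages(group_rewards, eps=1e-6):
    # GRPO-style: each sampled completion's advantage is its reward
    # standardized against the group sampled for the same prompt
    mu = mean(group_rewards)
    sigma = pstdev(group_rewards)
    return [(r - mu) / (sigma + eps) for r in group_rewards]

advs = group_relative_advantages([0.0, 1.0, 1.0, 0.0])
print(advs)  # symmetric around zero
```

DAPO changes details of this normalization, which is why the natural seam is overriding whichever trainer method computes it; whether overriding only `_generate_and_score_completions` suffices depends on the TRL version in use.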
huggingface/transformers
42,200
Request of rewriting implementation of prediction_step in trainer.py
### System Info Any system. Because it's a problem coming from source code. ### Who can help? @SunMarc ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (gi...
https://github.com/huggingface/transformers/issues/42200
open
[ "Good Second Issue", "bug" ]
2025-11-14T00:13:40Z
2025-12-18T14:29:32Z
3
Yacklin
huggingface/transformers
42,197
Attempt to access socket despite HF_HUB_OFFLINE = 1 if cache warmed outside current process
### System Info - `transformers` version: 4.57.1 - Platform: Linux-6.6.84.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.13.0 - Huggingface_hub version: 0.36.0 - Safetensors version: 0.6.2 - Accelerate version: not installed - Accelerate config: not found - DeepSpeed version: not installed - PyTor...
https://github.com/huggingface/transformers/issues/42197
closed
[ "Good Second Issue", "bug" ]
2025-11-13T21:38:29Z
2025-11-24T09:33:54Z
6
fr1ll
vllm-project/vllm
28,646
[Feature][P2]: Implement CI Build Time and Size Guards
### 🚀 The feature, motivation and pitch ### Description Once we optimize the Docker build, we need to prevent regressions. Create CI checks that fail if build time exceeds thresholds or if image size grows beyond acceptable limits. Also set up monitoring dashboards. ### What You'll Do 1. Create Python scripts to che...
https://github.com/vllm-project/vllm/issues/28646
open
[ "feature request", "ci/build" ]
2025-11-13T12:50:34Z
2025-11-13T18:55:29Z
0
rzabarazesh
pytorch/pytorch
167,721
Minimal, comprehensive test suite
We are building PyTorch from source using, among others, the system installed CUDA. Currently we are running the full test suite to ensure nothing got broken due to e.g. wrong dependency versions or missing dependencies. I.e. `python test/run_test.py --continue-through-error` However, that takes up to 3 days on a GP...
https://github.com/pytorch/pytorch/issues/167721
open
[ "module: docs", "feature", "triaged", "module: infra", "module: testing" ]
2025-11-13T12:18:42Z
2025-11-26T21:55:31Z
5
Flamefire
huggingface/diffusers
12,650
Question about the `# Copied from` system
Hi team! 👋 While working on improving docstrings and type hints across scheduler files (issue #9567), I've noticed the `# Copied from` pattern used extensively throughout the codebase. Examples: - Functions like `betas_for_alpha_bar` are duplicated across multiple schedulers - Output classes like `DDPMSchedulerOutpu...
https://github.com/huggingface/diffusers/issues/12650
open
[]
2025-11-13T11:53:22Z
2025-12-21T22:44:03Z
3
delmalih
huggingface/transformers
42,179
Add TileLang Kernel Support
### Feature request I would like to propose adding support for TileLang kernel in the transformers library. TileLang is a modular approach for writing attention kernels that could provide flexibility and performance benefits. github link: https://github.com/tile-ai/tilelang - Add TileLang as an optional attention back...
https://github.com/huggingface/transformers/issues/42179
open
[ "Feature request" ]
2025-11-13T11:38:33Z
2025-11-13T11:38:33Z
0
crownz248
huggingface/tokenizers
1,885
Feature request: Characters delimiter argument
I wish to develop a k-mer-character-based BPE tokenizer using your beautiful Rust package, for genomic applications. Unfortunately, it doesn't seem to support defining a characters delimiter. As I see it, it is a pretty straightforward change, instead of iterating a word by character, first split it by the delimiter an...
https://github.com/huggingface/tokenizers/issues/1885
open
[]
2025-11-13T10:40:29Z
2025-11-28T07:51:07Z
1
VasLem
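The delimiter idea above can be sketched in plain Python: instead of splitting a word into characters to form the initial BPE alphabet, split on the delimiter first so each k-mer becomes one atomic symbol (illustrative pseudologic, not the tokenizers crate's API):

```python
def initial_symbols(word, delimiter=None):
    # Default BPE pre-split: one symbol per character.
    # With a delimiter, each k-mer between delimiters is one symbol.
    if delimiter is None:
        return list(word)
    return [kmer for kmer in word.split(delimiter) if kmer]

print(initial_symbols("ACGT"))              # ['A', 'C', 'G', 'T']
print(initial_symbols("ACG-TTA-GGC", "-"))  # ['ACG', 'TTA', 'GGC']
```

Merges learned on top of these symbols would then operate on whole k-mers, which is the behavior the request describes for genomic vocabularies.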
vllm-project/vllm
28,629
[Usage]: TPOT per request information was not collected by vllm bench serve
### Your current environment ```text The output of `python collect_env.py` Collecting environment information... ============================== System Info ============================== OS : Ubuntu 24.04.2 LTS (x86_64) GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13....
https://github.com/vllm-project/vllm/issues/28629
open
[ "usage" ]
2025-11-13T09:20:19Z
2025-11-13T09:20:19Z
0
jlwang1996
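For reference on the metric discussed above: per-request TPOT (time per output token) is conventionally derived from the request's end-to-end latency, its TTFT, and its output length, so it can be post-processed from per-request records even when the benchmark only reports aggregates. A hedged sketch (field names are illustrative):

```python
def tpot_seconds(latency_s, ttft_s, output_tokens):
    # TPOT covers the decode phase only, i.e. every token after the first
    decode_tokens = max(output_tokens - 1, 1)
    return (latency_s - ttft_s) / decode_tokens

# e.g. a 2.5 s request with 0.5 s TTFT and 101 output tokens
print(tpot_seconds(2.5, 0.5, 101))  # 0.02
```

Whether `vllm bench serve` exposes the per-request raw records needed for this depends on the version and flags used.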
vllm-project/vllm
28,626
[Bug]:Qwen3-VL-32B-AWQ model memory usage: 8k context limit with 40GB VRAM?
### Your current environment <details> <summary>The output of <code>python collect_env.py</code></summary> ```text Your output of `python collect_env.py` here ``` </details> ### 🐛 Describe the bug Running models on the latest stable vLLM release: https://huggingface.co/QuantTrio/Qwen3-VL-32B-Instruct-AWQ The mod...
https://github.com/vllm-project/vllm/issues/28626
open
[ "bug" ]
2025-11-13T08:00:20Z
2025-11-17T07:08:47Z
3
maxin9966
vllm-project/vllm
28,622
[Bug]: Can we able to benchmark Quantized MOE models Either W8A8 or W8A16 ?
### Your current environment <details> <summary>The output of <code>python collect_env.py</code></summary> ```text Collecting environment information... ============================== System Info ============================== OS : Ubuntu 24.04.2 LTS (x86_64) GCC version ...
https://github.com/vllm-project/vllm/issues/28622
open
[ "bug" ]
2025-11-13T07:26:56Z
2025-11-13T07:27:06Z
0
logesh13
pytorch/pytorch
167,716
`torch.sparse.mm` returns corrupted sparse tensor causing Segmentation fault in `to_dense()` on PyTorch 2.9.0
### 🐛 Describe the bug I experienced a problem while using the "torch.sparse.mm()" function, which prompted me to consult the official documentation for clarification. The documentation includes sample code that executes successfully. According to the documentation, the second matrix parameter accepts both sparse and...
https://github.com/pytorch/pytorch/issues/167716
closed
[ "module: sparse", "module: crash" ]
2025-11-13T07:25:20Z
2025-11-13T16:57:47Z
2
David-YB
vllm-project/vllm
28,610
[Usage]: Does 0.11.0 support tree attention with Eagle?
### Your current environment Does 0.11.0 support tree attention with Eagle? Do I need to enable it manually? ### How would you like to use vllm I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm. ### Before submitting a new issue... - [x] Make sure you already s...
https://github.com/vllm-project/vllm/issues/28610
open
[ "usage" ]
2025-11-13T03:35:02Z
2025-12-03T17:08:16Z
1
wincle
huggingface/datasets
7,864
add_column and add_item erroneously(?) require new_fingerprint parameter
### Describe the bug Contradicting their documentation (which doesn't mention the parameter at all), both Dataset.add_column and Dataset.add_item require a new_fingerprint string. This parameter is passed directly to the dataset constructor, which has the fingerprint parameter listed as optional; is there any reason i...
https://github.com/huggingface/datasets/issues/7864
open
[]
2025-11-13T02:56:49Z
2025-12-07T14:41:40Z
2
echthesia
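To illustrate the report above: a fingerprint-style parameter can be made optional by deriving a default from the operation's inputs when the caller passes None. This is a generic stdlib sketch of that pattern, not `datasets`' actual fingerprinting code:

```python
import hashlib
import json

def add_column(table, name, values, new_fingerprint=None):
    # When no fingerprint is supplied, derive one deterministically from
    # the prior state and the transform's arguments (hypothetical scheme)
    if new_fingerprint is None:
        payload = json.dumps([sorted(table), name, len(values)])
        new_fingerprint = hashlib.sha256(payload.encode()).hexdigest()[:16]
    new_table = {**table, name: list(values)}
    return new_table, new_fingerprint

t, fp = add_column({"a": [1, 2]}, "b", [3, 4])
print(len(fp))  # 16
```

Making the parameter default to None in this way would reconcile the documented signature with the constructor's optional `fingerprint`, which appears to be what the issue is asking about.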
vllm-project/vllm
28,566
[Usage]: In a PD-disaggregation scenario, I discovered that the decoder also performs a prefill operation. Is this normal?
### Your current environment When num_computed_tokens is less than num_prompt_tokens, it will enter the prefill operation <img width="633" height="149" alt="Image" src="https://github.com/user-attachments/assets/bab96187-37c8-4ea2-ba68-9f52dda07f6b" /> and I found that num_computed_tokens can be less than num_prompt_t...
https://github.com/vllm-project/vllm/issues/28566
open
[ "usage" ]
2025-11-12T16:18:53Z
2025-11-12T16:18:53Z
0
yangshanjun
vllm-project/vllm
28,564
[Usage]: Can't get ModernBert models to run in vllm serve
### Your current environment I am trying to download and use ModernBertModel with the vllm serve feature. At first I thought it was an issue with the model so I switched from trying to use BertEmbed with Alibaba-NLP/gte-modernbert-base since it appears in the docs as a model that supports embedding. Source: https://...
https://github.com/vllm-project/vllm/issues/28564
open
[ "usage" ]
2025-11-12T15:51:18Z
2025-11-12T15:51:18Z
0
Logikschleifen
pytorch/pytorch
167,631
`jit.export` analogue for `torch.export`
### 🚀 The feature, motivation and pitch According to the documentation, [`TorchScript` is deprecated in favor of `torch.export`](https://docs.pytorch.org/docs/stable/jit.html). However, `torch.jit.script` offered some functionality that does not seem to be covered by `torch.export`, specifically the ability to expo...
https://github.com/pytorch/pytorch/issues/167631
open
[ "oncall: pt2", "oncall: export" ]
2025-11-12T10:24:10Z
2025-11-17T18:41:46Z
3
randolf-scholz
pytorch/pytorch
167,630
Memory leak in aoti compile
### 🐛 Describe the bug I want to compile many exported programs into an aoti .so file. However it seems like there is a memory leak ```python import contextlib import gc import logging import os import tempfile from pathlib import Path import torch import torch._inductor import torch.nn as nn logging.basicConfig( ...
https://github.com/pytorch/pytorch/issues/167630
closed
[ "module: memory usage", "oncall: pt2", "oncall: export", "module: aotinductor" ]
2025-11-12T09:23:03Z
2025-11-19T03:42:13Z
1
ben-da6
vllm-project/vllm
28,527
💡 Bounty Platform for vLLM
Hi vLLM team! 👋 I wanted to share **Roxonn** - a decentralized bounty platform for accelerating AI/ML development. **What is Roxonn?** ✅ Fund GitHub issues with crypto bounties (XDC, USDC, ROXN) ✅ Notify 300+ AI/ML developers ✅ Auto-pay when PRs merge via blockchain ✅ Zero crypto setup needed **Quick flow:** 1. Reg...
https://github.com/vllm-project/vllm/issues/28527
closed
[]
2025-11-12T07:50:33Z
2025-11-13T12:36:15Z
0
dineshroxonn
huggingface/transformers
42,154
💡 Bounty Platform for Hugging Face Transformers
Hi Hugging Face Transformers team! 👋 I wanted to share **Roxonn** - a decentralized bounty platform for accelerating AI/ML development. **What is Roxonn?** ✅ Fund GitHub issues with crypto bounties (XDC, USDC, ROXN) ✅ Notify 300+ AI/ML developers ✅ Auto-pay when PRs merge via blockchain ✅ Zero crypto setup needed *...
https://github.com/huggingface/transformers/issues/42154
closed
[]
2025-11-12T07:49:59Z
2025-11-17T11:40:10Z
2
dineshroxonn
pytorch/pytorch
167,624
💡 Bounty Platform for PyTorch
Hi PyTorch team! 👋 I wanted to share **Roxonn** - a decentralized bounty platform for accelerating AI/ML development. **What is Roxonn?** ✅ Fund GitHub issues with crypto bounties (XDC, USDC, ROXN) ✅ Notify 300+ AI/ML developers ✅ Auto-pay when PRs merge via blockchain ✅ Zero crypto setup needed **Quick flow:** 1. ...
https://github.com/pytorch/pytorch/issues/167624
closed
[]
2025-11-12T07:49:51Z
2025-11-13T12:35:34Z
0
dineshroxonn
pytorch/pytorch
167,613
UNSTABLE inductor-periodic / inductor-smoke-test / test (inductor_torchbench_smoketest_perf)
I can't figure out from the logs what is wrong cc @ezyang @gchanan @kadeng @msaroufim @mcarilli @eellison @penguinwu @BoyuanFeng @chauhang @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov @coconutruben @seemether...
https://github.com/pytorch/pytorch/issues/167613
closed
[ "high priority", "module: ci", "triaged", "module: cuda graphs", "oncall: pt2", "module: inductor", "unstable" ]
2025-11-12T03:47:42Z
2026-01-05T15:15:52Z
6
zou3519
vllm-project/vllm
28,508
[Usage]: KVCacheManager Parameter question
I noticed that the parameter “self.req_to_block_hashes” has been removed from KVCacheManager since version v0.10.0. But this parameter is still preserved in the official documentation. Could you please provide an explanation of this change? - [Document Description](https://docs.vllm.ai/en/v0.9.2/api/vllm/v1/core/kv...
https://github.com/vllm-project/vllm/issues/28508
closed
[ "usage" ]
2025-11-12T03:10:18Z
2025-11-16T08:33:45Z
1
Liziqi-77
huggingface/diffusers
12,638
How to design network with DiT blocks that are friendly to Tensorrt fp16 conversion?
We had a network structured as `a convnet pre-encoder -> DiT blocks -> final block for last sampling`. It worked well in torch format and ONNX format, but when we tried to convert it into TensorRT fp16 format, the inference hit value overflow. We have seen the data difference [between onnx and trt fp16, wit...
https://github.com/huggingface/diffusers/issues/12638
open
[]
2025-11-12T02:23:37Z
2025-11-12T02:23:37Z
null
JohnHerry
huggingface/lerobot
2,428
How to eval a real-world recorded dataset?
Can lerobot evaluate a real-world dataset with a metric such as MSE? I checked the eval script and found that it can currently only evaluate sim-env datasets.
https://github.com/huggingface/lerobot/issues/2428
open
[ "question", "evaluation" ]
2025-11-12T02:08:44Z
2025-11-19T16:55:42Z
null
shs822
vllm-project/vllm
28,505
[Feature]: Is there a plan to introduce nano-pearl, a new engineering effort in speculative reasoning?
### 🚀 The feature, motivation and pitch Nano-pearl can support speculative inference with higher concurrency (larger batch sizes) and is seamlessly compatible with algorithms like Eagle. Is there a plan to introduce it? github:https://github.com/smart-lty/nano-PEARL ### Alternatives _No response_ ### Additional c...
https://github.com/vllm-project/vllm/issues/28505
open
[ "feature request" ]
2025-11-12T01:34:22Z
2025-11-17T06:14:09Z
1
Lexlum
pytorch/pytorch
167,596
[dynamo][feature] Guard on constants only if graph is specialized and not bytecode
### 🐛 Describe the bug When Dynamo creates guards, it specializes not just for Fx graph, but also for residual bytecode. For example, in the following codebase, the graph is same, but the `summary` update leads to a recompilation. This causes unnecessary compile time issues. Is it possible to create guards only for t...
https://github.com/pytorch/pytorch/issues/167596
open
[ "triaged", "enhancement", "oncall: pt2", "module: dynamo" ]
2025-11-12T00:55:17Z
2025-11-20T17:44:45Z
2
anijain2305
vllm-project/vllm
28,498
[Bug][RL]: Port Conflict
### Your current environment - bug report: ``` Hello vLLM team, I'm running into a suspicious ZMQ socket bug with my 2P 4D configuration for DeepSeek-V3 (see below). I thought it is caused by reusing same nodes for many vLLM launches, but now it happened also at a clean node. Seems like a DP bug of sorts. Please find...
https://github.com/vllm-project/vllm/issues/28498
open
[ "bug", "help wanted", "good first issue" ]
2025-11-11T22:51:35Z
2025-12-04T07:35:31Z
13
robertgshaw2-redhat
vllm-project/vllm
28,489
[Usage]: Online continuous batching
### Current environment ``` ============================== System Info ============================== OS : macOS 26.1 (arm64) GCC version : Could not collect Clang version : 17.0.0 (clang-1700.4.4.1) CMake version : Could not collect Libc...
https://github.com/vllm-project/vllm/issues/28489
open
[ "usage" ]
2025-11-11T20:51:58Z
2025-11-11T20:53:47Z
0
GenVr
pytorch/pytorch
167,566
include string names of types in logs when dynamo guards on input types
When debugging recompile reasons in dynamo, it is convenient to look at a tlparse to understand what is causing recompiles. One guard that dynamo has is a type_id guard, which guards on the id(type(x)) of an input. In the tlparse, when one these guards fails it shows up as this: <img width="576" height="29" alt="Imag...
https://github.com/pytorch/pytorch/issues/167566
closed
[ "triaged", "oncall: pt2", "module: dynamo", "module: compile ux" ]
2025-11-11T19:11:14Z
2025-12-10T17:14:27Z
0
bdhirsh
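The request above is essentially about rendering `id(type(x))` guards with a human-readable name; in plain Python the readable form is already available from the type object itself. A sketch of the two renderings (not Dynamo's real logging code):

```python
def describe_type(x):
    t = type(x)
    # what tlparse shows today: an opaque interpreter-specific id
    opaque = f"type_id={id(t)}"
    # including module + qualname makes a failed guard self-explanatory
    readable = f"{t.__module__}.{t.__qualname__}"
    return f"{opaque} ({readable})"

print(describe_type([1, 2, 3]))
```

Logging both forms keeps the fast `id`-based guard check unchanged while making recompile reasons legible in the trace output.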
pytorch/pytorch
167,560
naming of periodic-dynamo-benchmarks-cpu-test / test (cpu_inductor_amp_freezing_torchbench, 1, 2, linux.8xlarge.amx) seems wrong
Why is it a dynamo benchmark but also running cpu_inductor_amp_freezing ? cc @chauhang @penguinwu
https://github.com/pytorch/pytorch/issues/167560
open
[ "triaged", "module: benchmark", "oncall: pt2" ]
2025-11-11T18:20:30Z
2025-11-17T16:51:51Z
0
zou3519
pytorch/pytorch
167,558
per_page=1000 doesn't work in hud.pytorch.org
e.g. https://hud.pytorch.org/hud/pytorch/pytorch/main/31?per_page=50&mergeEphemeralLF=true Whatever I set it to, it seems to just be 50. My use case is that I am trying to find the first date that a test began to fail. The test has been failing for weeks. I have to hit the next button a lot. cc @ZainRizvi @huydhn @cl...
https://github.com/pytorch/pytorch/issues/167558
open
[ "triaged", "module: devx" ]
2025-11-11T18:06:21Z
2025-11-11T19:30:25Z
1
zou3519
huggingface/trl
4,507
Can a multimodal model like Gemma be trained in the same way as a text-only model like Qwen, but with the goal of improving only its text capabilities?
As stated in the title, I hope to improve only the text capabilities of Gemma 3, but it doesn’t seem to have worked as expected. The model I used is gemma-3-4b-it, and I conducted the following simple tests: ```python dataset = Dataset.from_list( [ {"prompt": "What is 2+2?", "task": "math"}, ...
https://github.com/huggingface/trl/issues/4507
open
[ "🐛 bug", "⏳ needs more info" ]
2025-11-11T15:59:51Z
2025-11-21T05:58:50Z
0
Tuziking
vllm-project/vllm
28,472
[Usage]: Will the reasoning_content in the chat template still be applied correctly after switching reasoning_content to reasoning?
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm Will the message.reasoning_content for (which exists in default chat_template for qwen3-next-thinking qwen3-vl-thinking or other qwen3-thinking series or glm4.5 or kimi-k2-thinking or other models) in t...
https://github.com/vllm-project/vllm/issues/28472
closed
[ "usage" ]
2025-11-11T15:04:11Z
2025-11-13T06:25:29Z
4
zhcn000000
pytorch/pytorch
167,540
[DTensor]: changing the test_mm shape from (12,8) * (8,16) to (512,512) * (512,512) throws an assert error
### 🐛 Describe the bug When I try to use (512, 512) * (512, 512) instead of the original shape in the testcase, it throws an assert error. ```python @with_comms def test_mm(self): device_mesh = self.build_device_mesh() shard0_spec = Shard(0) shard1_spec = Shard(1) replica_spec = Re...
https://github.com/pytorch/pytorch/issues/167540
open
[ "oncall: distributed", "module: dtensor" ]
2025-11-11T13:14:49Z
2025-11-12T08:21:45Z
2
zhanghanleo93
vllm-project/vllm
28,456
[Usage]: benchmark_moe Usage
### Your current environment ```text (EngineCore_DP0 pid=7498) INFO 11-10 11:42:48 [shm_broadcast.py:466] No available shared memory broadcast block found in 60 seconds. This typically happens when some processes are hanging or doing some time-consuming work (e.g. compilation). (APIServer pid=7416) INFO 11-10 11:42:50...
https://github.com/vllm-project/vllm/issues/28456
open
[ "usage" ]
2025-11-11T09:22:33Z
2025-11-21T01:43:41Z
6
ekmekovski
huggingface/lerobot
2,422
Running inference on Libero with pi0
Hello, I am trying to run inference with pi0 but the commands referenced in this issue #683 are outdated I believe. What would the commands be to run inference in Lerobot, and also running inference with pi0 in Libero? Additionally, if there is any documentation for these commands in general for fine-tuning and eval, ...
https://github.com/huggingface/lerobot/issues/2422
open
[ "question", "policies", "evaluation" ]
2025-11-11T09:22:25Z
2025-11-19T16:53:27Z
null
thomasdeng2027
pytorch/pytorch
167,526
Missing documentation for CUTLASS backend
### 📚 The doc issue The release notes of PyTorch 2.8.0 report > Inductor CUTLASS backend support But it is missing information on how to activate/use that. There are multiple NVIDIA PYPI packages that are related: nvidia-cutlass, nvidia-cutlass-dsl And there is the CUTLASS repository on GitHub included under th...
https://github.com/pytorch/pytorch/issues/167526
open
[ "module: docs", "module: cuda", "triaged" ]
2025-11-11T08:33:22Z
2025-12-17T15:25:44Z
1
Flamefire
huggingface/lerobot
2,421
Seeking assistance with tactile data acquisition
I want to simultaneously collect tactile and visual data, with tactile data sampled at 150 fps and visual data at 30 fps. Each time an image frame is saved, I also want to store all tactile data collected during that time interval as additional features associated with the image. What would be the best approach to imp...
https://github.com/huggingface/lerobot/issues/2421
open
[ "question" ]
2025-11-11T02:49:57Z
2025-11-19T16:53:05Z
null
zhoushaoxiang
vllm-project/vllm
28,438
[Usage]: How do I install vLLM nightly?
### Your current environment The output of collect_env.py ```text ============================== System Info ============================== OS : Ubuntu 20.04.5 LTS (x86_64) GCC version : (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 Clang version : Could not co...
https://github.com/vllm-project/vllm/issues/28438
closed
[ "usage" ]
2025-11-11T02:24:47Z
2025-11-12T01:54:42Z
2
LittleLucifer1
pytorch/pytorch
167,499
check_compiler_is_gcc() fails to detect versioned GCC compilers (g++-13, g++-14, etc.)
### 🐛 Describe the bug 🐛 Describe the bug The torch.utils.cpp_extension.check_compiler_is_gcc() function only returns True when the compiler basename is exactly 'c++', failing to detect other GCC variants like g++, gcc, g++-13, g++-14, etc. This affects any PyTorch functionality that relies on GCC detection, causi...
https://github.com/pytorch/pytorch/issues/167499
closed
[ "module: cpp-extensions" ]
2025-11-11T01:11:22Z
2025-11-11T05:14:08Z
0
razaaliraza
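A possible fix along the lines the issue describes — accepting versioned basenames — could look like the following sketch. `looks_like_gcc` is a hypothetical helper, not the actual `torch.utils.cpp_extension` code:

```python
import os
import re

def looks_like_gcc(compiler: str) -> bool:
    """Heuristically decide whether a compiler command is a GCC variant.

    Accepts 'c++', 'g++', 'gcc', and versioned names such as 'g++-13'
    or 'gcc-14.2', while rejecting clang-style names.
    """
    basename = os.path.basename(compiler)
    return re.fullmatch(r"(c\+\+|g\+\+|gcc)(-\d+(\.\d+)*)?", basename) is not None
```

A robust fix would likely still confirm the vendor by inspecting `compiler --version` output, since `c++` can also be a symlink to clang.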
vllm-project/vllm
28,425
[Feature][RL]: Fix Fp8 Weight Loading for RL
### 🚀 The feature, motivation and pitch Feedback from RL community that vLLM weight loading in fp8 is bad for RL - https://vllm-dev.slack.com/archives/C07UUL8E61Z/p1762811441757529 The cause is clear: in [fp8.py](https://github.com/vllm-project/vllm/blob/bf6a3d0ff5a69e0a30567f2ad417530c002eaa4e/vllm/model_executor/l...
https://github.com/vllm-project/vllm/issues/28425
open
[ "feature request" ]
2025-11-10T21:59:02Z
2025-11-10T23:25:37Z
1
robertgshaw2-redhat
pytorch/pytorch
167,480
OS command injection via torch.utils.cpp_extension precompiled-header build (use_pch path)
**Summary** There is an OS command injection risk in `torch/utils/cpp_extension.py` in the precompiled-header build helper. The helper constructs a compiler command including user-supplied values (e.g., `extra_cflags`, `extra_include_paths`) and executes the command via `subprocess.check_output(..., shell=True)`. If un...
https://github.com/pytorch/pytorch/issues/167480
closed
[ "module: cpp-extensions", "module: error checking", "triaged", "actionable" ]
2025-11-10T20:36:14Z
2025-11-11T07:27:44Z
1
sumantro93
huggingface/transformers.js
1,450
SmolVLM2 500M Video Instruct - Video inference
### Question Hey, is it possible to setup **video** inference through **transformers.js** (may be somehow else?) for the model SmolVLM2 500M Video Instruct? I can't make it work, but I saw, that it is possible in py transformers. I want to create something similar to https://huggingface.co/spaces/HuggingFaceTB/SmolVL...
https://github.com/huggingface/transformers.js/issues/1450
open
[ "question" ]
2025-11-10T19:51:07Z
2025-11-12T07:46:32Z
null
youchi1
vllm-project/vllm
28,409
[Usage]: Is there any performance benchmark comparing the vLLM server run via the Docker image vs. as a Python service?
### Your current environment ```text I mean, if I run a service with the vLLM docker image, does it have any performance advantage compared with running it as a python service (e.g., importing the vllm package, setting up vllm inference, handling payloads/responses, etc.)? ``` ### How would you like to use vllm _No respons...
https://github.com/vllm-project/vllm/issues/28409
open
[ "usage" ]
2025-11-10T17:56:14Z
2025-11-10T17:56:14Z
0
rafaelsandroni
pytorch/pytorch
167,467
Tensor creation documentation: example code not consistent with its description
https://docs.pytorch.org/cppdocs/notes/tensor_creation.html#configuring-properties-of-the-tensor says “Here is an example of creating a `TensorOptions` object that represents a **64-bit float**, strided tensor that requires a gradient, and lives on CUDA device 1”, but then calls `.dtype(torch::kFloat32)`. cc @svekars ...
https://github.com/pytorch/pytorch/issues/167467
open
[ "module: docs", "triaged", "actionable" ]
2025-11-10T15:00:11Z
2025-11-10T21:04:17Z
0
sboukortt
vllm-project/vllm
28,393
[Feature]: Does vllm-jax plan to support GPU acceleration?
### 🚀 The feature, motivation and pitch Does vllm-jax plan to support GPU acceleration? ### Alternatives _No response_ ### Additional context _No response_ ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of th...
https://github.com/vllm-project/vllm/issues/28393
closed
[ "feature request" ]
2025-11-10T12:28:20Z
2025-11-10T21:44:57Z
2
south-ocean
pytorch/pytorch
167,459
Dynamic number of OMP threads in the torch.compile cache
### 🚀 The feature, motivation and pitch It looks like torch.compile hardcodes the number of omp threads in the cache. I can see things like `#pragma omp parallel num_threads(8)` in the cache. And if different number threads is used the performance is much worse. Is it possible to make it compatible for different numb...
https://github.com/pytorch/pytorch/issues/167459
open
[ "triaged", "oncall: pt2", "oncall: cpu inductor" ]
2025-11-10T10:24:23Z
2025-12-22T19:49:32Z
1
SUSYUSTC
vllm-project/vllm
28,388
[Bug]: The new vLLM has deprecated the v0 code, while support for the qwen-omni model series was limited to v0; apparently for this reason, we cannot run inference on qwen-omni models with the latest vLLM
### Your current environment Name: vllm Version: 0.10.2 ### 🐛 Describe the bug The official sample code below does not seem to run; it raises an error for the audio parameter "mm_processor_kwargs": { "use_audio_in_video": True, }: ```python # SPDX-License-Identifier: Apache-2.0 # SPDX-FileCopyrightText: Copyright contributors to the vLLM project ...
https://github.com/vllm-project/vllm/issues/28388
open
[ "bug" ]
2025-11-10T09:23:33Z
2025-11-16T05:51:42Z
1
Lee-xeo
huggingface/accelerate
3,836
When using gradient accumulation, does the order of optimizer.zero_grad() affect training?
if I use accelerate+deepspeed to train a model, and I set `deepspeed_config: gradient_accumulation_steps: 8 offload_optimizer_device: cpu offload_param_device: cpu zero3_init_flag: false zero_stage: 2` does the order of backward(), step(), and zero_grad() affect training? For example: `for batch in...
https://github.com/huggingface/accelerate/issues/3836
closed
[]
2025-11-10T03:11:21Z
2025-12-20T15:24:00Z
3
polestarss
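The core of the question above can be shown without torch or deepspeed: `backward()` accumulates into `.grad`, `step()` consumes it, and `zero_grad()` clears it, so the only ordering constraint is that grads must not be zeroed between the micro-batches of one accumulation window. A toy sketch with illustrative names (with DeepSpeed ZeRO the engine manages this bookkeeping internally):

```python
# Toy model of gradient accumulation: backward() adds into .grad,
# step() consumes .grad, zero_grad() clears it.
class ToyParam:
    def __init__(self):
        self.value = 0.0
        self.grad = 0.0

def backward(p, micro_batch_grad):
    p.grad += micro_batch_grad            # like loss.backward(): accumulate

def step(p, lr=0.1):
    p.value -= lr * p.grad                # like optimizer.step()

def zero_grad(p):
    p.grad = 0.0

p = ToyParam()
accum_steps = 8
for i, g in enumerate([1.0] * 16):        # 16 micro-batches -> 2 real steps
    backward(p, g)
    if (i + 1) % accum_steps == 0:
        step(p)                           # grad here is the sum over 8 batches
        zero_grad(p)                      # zeroing right after step() is safe
```

Swapping `zero_grad()` to just before the next window's first `backward()` gives the same result; calling it between micro-batches would silently discard the accumulation.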
huggingface/transformers
42,113
Add AutoMergeAdapters: Official Utility to Combine Multiple LoRA Adapters into One Unified Model
### Feature request Introduce a new built-in class AutoMergeAdapters to the Transformers/PEFT ecosystem that enables users to merge multiple LoRA adapters trained on different domains or datasets into a single model. This feature simplifies the process of creating multi-domain fine-tuned models for inference and depl...
https://github.com/huggingface/transformers/issues/42113
closed
[ "Feature request" ]
2025-11-09T18:43:20Z
2025-11-10T16:58:34Z
1
3015pavan
pytorch/torchtitan
2,008
On the TorchTitan Infrastructure Build-out (VLM)
In the past, I’ve always trained models with the Lightning framework; now I’d like to switch to a more efficient one (TorchTitan or Megatron). However, I’ve run into a few questions and would appreciate your advice: Can I simply import the encoder part straight from Hugging Face Transformers? (In VLM, the encoder usual...
https://github.com/pytorch/torchtitan/issues/2008
open
[ "question" ]
2025-11-09T15:03:35Z
2025-11-10T09:56:00Z
null
Joluck
huggingface/transformers
42,111
Add thinking-budget support (max_thinking_tokens) for reasoning-capable chat models
### Feature request A built-in way to cap how many tokens a reasoning model spends inside its ``<think> … </think>`` block. Today, we can only control the total response length via ``max_new_tokens``. No parameter limits the internal reasoning segment when ``enable_thinking=True``. ### Motivation - Reasoning models ...
https://github.com/huggingface/transformers/issues/42111
open
[ "Feature request" ]
2025-11-09T10:09:11Z
2025-11-09T10:09:11Z
0
AndresAlgaba
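Absent such a built-in parameter, the requested behavior can be illustrated as a post-hoc filter over an already-generated token stream. This is a pure-Python sketch: a real implementation would hook into generation itself (e.g. via a stopping criterion or logits processor), and `max_thinking_tokens` is the proposed name, not an existing transformers argument:

```python
def apply_thinking_budget(tokens, max_thinking_tokens,
                          open_tag="<think>", close_tag="</think>"):
    """Cap the number of tokens inside a <think>...</think> block.

    Tokens beyond the budget are dropped and the block is force-closed,
    leaving the answer portion of the stream untouched.
    """
    out, in_think, used = [], False, 0
    for tok in tokens:
        if tok == open_tag:
            in_think, used = True, 0
            out.append(tok)
        elif tok == close_tag:
            if used <= max_thinking_tokens:
                out.append(tok)           # budget respected: keep the close
            in_think = False
        elif in_think:
            used += 1
            if used <= max_thinking_tokens:
                out.append(tok)
            elif used == max_thinking_tokens + 1:
                out.append(close_tag)     # budget spent: force-close once
        else:
            out.append(tok)
    return out
```

With a budget of 2, `["<think>", "a", "b", "c", "d", "</think>", "ans"]` becomes `["<think>", "a", "b", "</think>", "ans"]`.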
vllm-project/vllm
28,362
[Usage]: Can't get vLLM to run on an Intel 125H with XPU and Arc graphics
### Your current environment ```text Collecting environment information... ...
https://github.com/vllm-project/vllm/issues/28362
open
[ "usage", "intel-gpu" ]
2025-11-09T09:45:05Z
2025-11-12T00:19:39Z
2
phlibi
vllm-project/vllm
28,350
[Doc]: Running VLLM via Docker Swarm With Support for Tensor Parallelism
### 📚 Running VLLM via Docker Swarm With Support for Tensor Parallelism There's no documentation that I have found outlining how to run VLLM in a docker swarm when utilizing tensor parallelism. The issue is that ```ipc=host``` is not an available option within docker swarm. Consulting the AI feature on the VLLM we...
https://github.com/vllm-project/vllm/issues/28350
closed
[ "documentation" ]
2025-11-08T21:11:15Z
2025-11-19T16:37:31Z
2
ep5000
vllm-project/vllm
28,348
[Usage]: Does vllm support max_pixels in prompt on Qwen3-VL reasoning?
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm I want to run inference of Qwen3-VL-A3B-Instruct, I tried to set max_pixels but it doesn't work. import json import base64 import requests img_path = r".\images\MMMU\735_1.jpg" base64_str = base64.b64e...
https://github.com/vllm-project/vllm/issues/28348
open
[ "usage" ]
2025-11-08T16:06:07Z
2025-11-08T16:56:17Z
1
leijie-ww
pytorch/pytorch
167,412
How can I train in C++ using a Pytorch torchscript model
### 🐛 Describe the bug dd ### Versions I trained a model in PyTorch, and then saved it to TorchScript format using torch.jit.save. Now, I want to retrain from this model. I have a question about whether the TorchScript model can be used for training. I have a few different questions about how to train the Torch...
https://github.com/pytorch/pytorch/issues/167412
open
[ "oncall: jit" ]
2025-11-08T13:11:14Z
2025-11-10T19:11:24Z
1
mullerhai
vllm-project/vllm
28,344
[Usage]: Function calling Request's sampling_params.structured_outputs is None?
Hi, I used the OpenAI server API to build an LLM backend while deploying an MCP server. I discovered that the vllm engine prompt combined the system prompt, tool list, and user prompt, but I saw that sampling_params.structured_outputs is None. Although the result seemed correct, I think it's important to use structured ou...
https://github.com/vllm-project/vllm/issues/28344
closed
[ "usage" ]
2025-11-08T08:57:17Z
2025-11-10T07:51:51Z
5
wtr0504
vllm-project/vllm
28,340
[Installation]: Need offline wheel for vLLM 0.11.0rc2 (pip download fails) to deploy qwen3_vl_235b_a22b_instruct_i18n
### Your current environment I need to install vLLM 0.11.0rc2 in an offline environment. Is there an official wheel (.whl) available for vLLM==0.11.0rc2 that I can download directly? Running: ``` pip download vllm==0.11.0rc2 --pre --extra-index-url https://wheels.vllm.ai/nightly -d wheels ``` fails with an error: L...
https://github.com/vllm-project/vllm/issues/28340
closed
[ "installation" ]
2025-11-08T06:05:31Z
2025-11-08T06:08:37Z
0
FateForever0222
pytorch/ao
3,314
Loading 8bit optimizer state from checkpoint causes dtype mismatch
We are using torch2.8. Optimizer states are quantized to [8bit](https://github.com/pytorch/ao/blob/main/torchao/optim/subclass_8bit.py). Normal training jobs are fine, but jobs that resume from checkpoint fail at `optimizer.step()`. We use AdamW optimizer copied from some older version of torch/torchao, where computati...
https://github.com/pytorch/ao/issues/3314
open
[ "optimizer", "triaged" ]
2025-11-08T00:27:00Z
2025-12-05T01:12:07Z
6
yz-ppl
pytorch/pytorch
167,369
Dynamo fails to trace repr
### 🐛 Describe the bug ```python import torch import torch.nn as nn class Config: def __repr__(self): return "Config()" def forward(x, config): # Calling repr() on non-constant user object # This triggers the bug without the fix return x * len(repr(config)) config = Config() x = torch.ra...
https://github.com/pytorch/pytorch/issues/167369
closed
[ "oncall: pt2", "module: dynamo" ]
2025-11-07T22:02:51Z
2025-11-10T21:06:41Z
0
tugsbayasgalan
pytorch/pytorch
167,344
UnboundLocalError: cannot access local variable 'tracer_output' where it is not associated with a value
(Worker_TP1 pid=243560) ERROR 11-07 10:44:16 [multiproc_executor.py:699] if tracer_output: (Worker_TP1 pid=243560) ERROR 11-07 10:44:16 [multiproc_executor.py:699] ^^^^^^^^^^^^^ (Worker_TP1 pid=243560) ERROR 11-07 10:44:16 [multiproc_executor.py:699] UnboundLocalError: cannot access local variable 'tracer_ou...
https://github.com/pytorch/pytorch/issues/167344
closed
[ "oncall: pt2" ]
2025-11-07T18:48:42Z
2025-11-07T22:31:29Z
null
zou3519
vllm-project/vllm
28,310
[Doc]: Update GPU requirements to include AMD gfx1150/gfx1151
### 📚 The doc issue Summary: The documentation for GPU requirements does not list AMD gfx1150 and gfx1151 architectures, which are now supported. Background: Support for AMD gfx1150 and gfx1151 GPUs was added in https://github.com/vllm-project/vllm/pull/25908. The GPU requirements page should be updated to reflect t...
https://github.com/vllm-project/vllm/issues/28310
closed
[ "documentation", "rocm" ]
2025-11-07T17:26:47Z
2025-11-08T03:01:08Z
1
hammmmy
pytorch/pytorch
167,331
[TEST FAILURE UT] TestForeachCUDA.test_foreach_copy_with_multi_dtypes_large_input_cuda fails
**TDLR** for_each test fails when ran with: `TEST_CONFIG=default python3 test/run_test.py --verbose --keep-going -i test_foreach` Adding @serialTest() decorator to the test function `test_foreach_copy_with_multi_dtypes_large_input` fixes this issue. ``` _____ TestForeachCUDA.test_foreach_copy_with_multi_dtypes_larg...
https://github.com/pytorch/pytorch/issues/167331
open
[ "triaged", "actionable", "module: mta" ]
2025-11-07T17:09:56Z
2025-11-07T17:16:33Z
2
arkadip-maitra
huggingface/transformers
42,093
Mbart decoder ignoring index 0 from labels | index 1 from dec in
### System Info I am creating an OCR model using the VisionEncoderDecoderModel class by connecting a plm vision tower and a donut-base decoder (mbart model). I am using the teacher-forcing method to train the model ( default training and i found out that the model is ignoring index 0 from the target ( index 1 from the decoder_i...
https://github.com/huggingface/transformers/issues/42093
closed
[ "bug" ]
2025-11-07T15:46:08Z
2025-11-07T16:27:10Z
1
jaaabir
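For context on why label index 0 can appear "ignored" in the issue above: in standard seq2seq teacher forcing, the decoder inputs are the labels shifted right, so label position 0 is the first prediction *target* rather than a decoder input. A simplified pure-Python sketch (mBART's real `shift_tokens_right` additionally rotates the final EOS token to the front, so this is illustrative, not the exact transformers implementation):

```python
def shift_tokens_right(labels, decoder_start_token_id):
    """Build decoder inputs from labels: prepend a start token and drop
    the last label, so decoder position i predicts labels[i]."""
    return [decoder_start_token_id] + labels[:-1]

labels = [101, 7, 8, 9, 102]                 # hypothetical token ids
dec_in = shift_tokens_right(labels, decoder_start_token_id=2)
# dec_in == [2, 101, 7, 8, 9]; the loss at position 0 targets labels[0]
```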
vllm-project/vllm
28,292
[Usage]: Failure to Deploy Llama-3.2-11B-Vision-Instruct Locally via vllm Due to OOM
### Your current environment The output of <code>python collect_env.py</code> ```text ============================== System Info ============================== OS : Ubuntu 20.04.5 LTS (x86_64) GCC version : (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 Clang version ...
https://github.com/vllm-project/vllm/issues/28292
closed
[ "usage" ]
2025-11-07T12:01:04Z
2026-01-06T00:06:43Z
5
LittleLucifer1
huggingface/transformers
42,086
Does Trainer use a grad scaler for training?
I am not able to see grad scaler usage in the Trainer code. If it is not used, I need to understand how we do mixed-precision training with fp16 without a grad scaler.
https://github.com/huggingface/transformers/issues/42086
closed
[]
2025-11-07T10:10:16Z
2025-11-13T07:58:33Z
2
quic-meetkuma
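For reference, the mechanism being asked about can be sketched without torch. This is a conceptual toy, not the Trainer code path (recent Trainer versions delegate fp16 handling to Accelerate, which owns the `GradScaler`): scale the loss so tiny fp16 gradients don't underflow to zero, unscale before stepping, and skip the step when gradients overflow.

```python
import math

def scaled_step(grads, scale, lr, param):
    """Toy loss-scaling step: backward on (scale * loss) yields scaled
    grads; unscale them, skip the step on inf/nan, else apply SGD."""
    scaled = [g * scale for g in grads]          # what backward() would produce
    unscaled = [g / scale for g in scaled]       # like scaler.unscale_()
    if any(math.isinf(g) or math.isnan(g) for g in unscaled):
        return param, scale / 2                  # overflow: skip step, shrink scale
    return param - lr * sum(unscaled), scale

param, scale = scaled_step([1e-4, 2e-4], scale=65536.0, lr=0.1, param=1.0)
# step taken, scale unchanged
```

The scale is a power of two, so scaling and unscaling are exact in floating point; only the overflow check changes the outcome.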
vllm-project/vllm
28,283
[Bug]: nccl stuck issue
### Your current environment <details> <summary>The output of <code>python collect_env.py</code></summary> ```text Your output of `python collect_env.py` here ``` </details> ### 🐛 Describe the bug I am using a docker container for vLLM. I noticed that when I use `nvidia/cuda:13.0.X-cudnn-devel-ubuntu24.04` with ...
https://github.com/vllm-project/vllm/issues/28283
open
[ "bug" ]
2025-11-07T09:36:01Z
2025-11-07T09:40:17Z
1
seindum
pytorch/pytorch
167,304
RPC cannot run on Jetson Orin because of Orin's specific uuid
### 🐛 Describe the bug When running the RPC demo on Jetson Orin, the following uuid issue appears: tensorpipe/channel/cuda_ipc/context_impl.cc:65 "uuidStr.substr(0, 4) != "GPU-"Couldn’t obtain valid UUID for GPU #0 from CUDA driver. The uuid on Jetson does not begin with the characters “GPU-” as on the RTX series, the failure mes...
https://github.com/pytorch/pytorch/issues/167304
open
[ "oncall: distributed", "module: cuda", "module: rpc" ]
2025-11-07T09:20:00Z
2025-11-07T15:33:35Z
0
mamba824824
pytorch/torchrec
3,525
Could Torchrec support PyTorch's PrivateUse1 Dispatch Key?
Hello, I've noticed that there are many conditional checks like if device.type == "cuda" in our TorchRec codebase. Without modifying TorchRec's source code, such fixed conditional logic might not be flexible enough to conveniently support third-party devices. From what I understand, PyTorch has introduced the PrivateU...
https://github.com/meta-pytorch/torchrec/issues/3525
open
[]
2025-11-07T07:17:42Z
2026-01-05T22:39:04Z
1
kwgqjj
pytorch/pytorch
167,291
[FSDP] Support param step with fp32
### 🚀 The feature, motivation and pitch In Megatron, we can keep an fp32 copy of the params. During optimizer.step, the gradient is used to update the fp32 copy, which is then cast back to the fp16 version. Can we do this in FSDP? ### Alternatives _No response_ ### Additional context _No response_...
https://github.com/pytorch/pytorch/issues/167291
open
[ "oncall: distributed" ]
2025-11-07T04:37:48Z
2025-11-07T15:34:42Z
0
yikaizhu-baseten
vllm-project/vllm
28,262
[Bug]: [gpt-oss] Responses API incorrect input/output handling
### Your current environment Any env ### 🐛 Describe the bug There is currently an implementation issue with gpt-oss on the Responses API in vLLM. This can be seen clearly in the [test which continues a conversation between API requests here](https://github.com/vllm-project/vllm/blob/4bf56c79cc252d285d0cb4f5edf323f0...
https://github.com/vllm-project/vllm/issues/28262
open
[ "bug" ]
2025-11-07T02:51:56Z
2025-11-08T19:39:06Z
1
alecsolder
huggingface/lerobot
2,399
Are there plans to support LoRA fine-tuning?
https://github.com/huggingface/lerobot/issues/2399
open
[ "question", "performance", "training" ]
2025-11-07T02:37:45Z
2025-11-10T10:23:33Z
null
Hukongtao
huggingface/candle
3,167
Qwen3-1.7b: something looks wrong and generation doesn't stop properly.
Candle version: main Platform: Mac Studio Max M1 Mode: Qwen 3-1.7b, (download by huggingface-cli) Execute cmd: git clone https://github.com/huggingface/candle.git cd candle-examples cargo run --release --example qwen -- \ --prompt "What is the speed of light?" \ --model 3-1.7b \ --tokenizer-file ../../models/qwen3-1.7...
https://github.com/huggingface/candle/issues/3167
open
[]
2025-11-07T02:23:05Z
2025-11-08T07:52:18Z
6
xiuno
pytorch/pytorch
167,276
Dynamo Fails to Trace Python Built-in Function print in Compile Mode
### 🐛 Describe the bug Description: When running a PyTorch model in Compile mode with torch.compile(), the Dynamo tracing mechanism fails to trace the Python built-in print() function, resulting in the following error. code: ``` import torch import torch.nn as nn class SimpleModel(nn.Module): def forward(self, x...
https://github.com/pytorch/pytorch/issues/167276
open
[ "triaged", "oncall: pt2", "module: dynamo" ]
2025-11-07T01:34:42Z
2025-11-18T19:05:11Z
2
Blooming-Tree
pytorch/pytorch
167,266
TorchDynamo Tracing Error: Unable to Trace Builtin bool() Operator on Tensor
### 🐛 Describe the bug Description When compiling a model with torch.compile, TorchDynamo fails to trace the builtin bool() operator when applied to PyTorch tensors, resulting in a compilation error. Error Details: Error Type: Tracing failure for builtin operator Failed Operation: bool operator applied to Tensor S...
https://github.com/pytorch/pytorch/issues/167266
closed
[ "triaged", "oncall: pt2", "module: dynamo", "dynamo-triage-dec2025" ]
2025-11-07T00:26:30Z
2025-12-24T03:49:22Z
1
Blooming-Tree
huggingface/lerobot
2,398
how to accelerate iteration over a dataset
hi, i want to get the frames of a specific episode index. when `episode_index_target` is large, like 100, it takes a lot of time to run. any solution to improve the iteration speed? thanks. `lerobot.__version__ == '0.1.0'` ```python dataset = LeRobotDataset('yananchen/robomimic_lift') frames = [] for sample in datas...
https://github.com/huggingface/lerobot/issues/2398
closed
[ "question" ]
2025-11-06T21:37:33Z
2025-11-10T20:52:57Z
null
yanan1116