| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/pytorch | 163,519 | For mixed-precision training, does FSDP2 also need `amp.grad_scaler.GradScaler`, or is this already handled? | In mixed-precision training with DDP, `amp.grad_scaler.GradScaler` is needed to dynamically scale the loss. My question is: does FSDP2 also need `amp.grad_scaler.GradScaler`, or does FSDP2 already handle this?
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @ezyang @msaroufim @dcci | https://github.com/pytorch/pytorch/issues/163519 | closed | [
"oncall: distributed"
] | 2025-09-22T15:01:43Z | 2025-09-29T08:19:23Z | 11 | EquationWalker |
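Whatever the answer for FSDP2 turns out to be, the mechanism `GradScaler` implements can be illustrated without any framework code. Below is a toy sketch of dynamic loss scaling in plain Python — `ToyGradScaler` and its constants are hypothetical stand-ins for illustration, not the `torch.amp` implementation:

```python
class ToyGradScaler:
    """Illustrative dynamic loss scaler: grow the scale after a streak of
    clean steps, shrink it (and skip the optimizer step) on overflow."""

    def __init__(self, init_scale=2.0**16, growth_factor=2.0,
                 backoff_factor=0.5, growth_interval=2000):
        self.scale = init_scale
        self.growth_factor = growth_factor
        self.backoff_factor = backoff_factor
        self.growth_interval = growth_interval
        self._good_steps = 0

    def scale_loss(self, loss):
        # Multiply the loss so small fp16 gradients don't underflow to zero.
        return loss * self.scale

    def step(self, grads_have_inf_or_nan):
        if grads_have_inf_or_nan:
            # Overflow: shrink the scale and tell the caller to skip the step.
            self.scale *= self.backoff_factor
            self._good_steps = 0
            return False
        self._good_steps += 1
        if self._good_steps >= self.growth_interval:
            self.scale *= self.growth_factor
            self._good_steps = 0
        return True

scaler = ToyGradScaler(init_scale=8.0, growth_interval=2)
print(scaler.step(True), scaler.scale)   # False 4.0  (overflow: backoff)
print(scaler.step(False), scaler.scale)  # True 4.0   (first clean step)
print(scaler.step(False), scaler.scale)  # True 8.0   (second clean step: grow)
```

The real `torch.amp.GradScaler` additionally unscales gradients before `optimizer.step()` and does the inf/nan check on the actual gradient tensors.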
huggingface/lerobot | 1,995 | Questions about SmolVLA design | Hi! I am looking into the details of SmolVLA implementation, and got some questions.
I wonder whether the following points are necessary, or beneficial for performance.
1.
https://github.com/huggingface/lerobot/blob/f7283193ea9ae932423e3a1e27524a27fa5c0fe5/src/lerobot/policies/smolvla/smolvlm_with_expert.py#L354C63-L354... | https://github.com/huggingface/lerobot/issues/1995 | open | [
"question",
"policies"
] | 2025-09-22T11:53:01Z | 2025-10-17T01:58:12Z | null | gliese581gg |
huggingface/lerobot | 1,994 | How to improve success rate and generalization | Hi, I have one question regarding the success rate. If I ensure the object appears in the frame of the wrist camera at the beginning of dataset collection/inference, will this lead to a higher success rate for the pick-and-place task?
My initial attempt was that the object appears in the side-view camera but does not appear in the wrist... | https://github.com/huggingface/lerobot/issues/1994 | closed | [
"question",
"policies"
] | 2025-09-22T09:55:53Z | 2025-09-23T09:26:16Z | null | Liu9999ai |
pytorch/ao | 3,040 | Int4WeightOnly quantized model slower than default model (x86 machine, A100) | My Int4WeightOnly quantized model is slower and less accurate at OCR than the default model. Why is this happening?
Here is some info to help you guys
Model - Qwen2-VL-7B-Instruct fine-tuned and saved in 16bit using unsloth
GPU -
<img width="679" height="265" alt="Image" src="https://github.com/user-atta... | https://github.com/pytorch/ao/issues/3040 | open | [
"quantize_",
"triaged"
] | 2025-09-22T06:53:54Z | 2025-10-01T11:10:42Z | 5 | Rakshith12-pixel |
pytorch/torchtitan | 1,733 | Gradient accumulation broken in PP | ### Bug description
Using gradient accumulation is incompatible with the PipelineSchedule(..., scale_grads=True) option, which defaults to True.
When this option is set, at each step, all gradients are scaled by the micro-batch size. This works fine for a single gradient accumulation step, but when using multiple steps, ... | https://github.com/pytorch/torchtitan/issues/1733 | closed | [
"high priority",
"triage review"
] | 2025-09-22T05:55:07Z | 2025-09-24T20:13:06Z | 8 | jdinalt |
huggingface/smol-course | 248 | [QUESTION] About applying chat template for base model via `clone_chat_template` from trl | In the course [Supervised Fine-Tuning](https://huggingface.co/learn/smol-course/unit1/3), author uses base model `HuggingFaceTB/SmolLM3-3B-Base` but I choose `HuggingFaceTB/SmolLM2-135M` because it is lighter. However, I found that the base model `SmolLM2-135M` does not have its own chat template but it already had spe... | https://github.com/huggingface/smol-course/issues/248 | open | [
"question"
] | 2025-09-22T03:03:56Z | 2025-09-22T19:13:17Z | null | binhere |
huggingface/transformers.js | 1,419 | Why is `token-classification` with T5 not available? (`T5ForTokenClassification`) | ### Question
In Python `transformers` I can do:
```python
model = AutoModelForTokenClassification.from_pretrained("google-t5/t5-base")
```
and use it with `Trainer` to train it (quite successfully).
Or
```python
classifier = pipeline("token-classification", model="google-t5/t5-base")
```
and use it for token classifica... | https://github.com/huggingface/transformers.js/issues/1419 | open | [
"question"
] | 2025-09-21T23:30:22Z | 2025-09-24T21:42:56Z | null | debevv |
huggingface/transformers.js | 1,418 | EmbeddingGemma usage | ### Question
I'm new to transformers.js
I want to use EmbeddingGemma in my web app, and I've looked at the example of its usage at this link:
https://huggingface.co/blog/embeddinggemma#transformersjs
At the same time I've seen a different code, using pipeline, regarding embeddings:
https://huggingface.co/docs/tran... | https://github.com/huggingface/transformers.js/issues/1418 | open | [
"question",
"v4"
] | 2025-09-21T10:26:22Z | 2025-11-08T15:33:16Z | null | MithrilMan |
huggingface/diffusers | 12,359 | Chroma pipeline documentation bug regarding the `guidance_scale` parameter | ### Describe the bug
From my understanding, Chroma is a retrained and dedistilled version of the Flux architecture, so it uses true CFG, unlike Flux. I can indeed confirm that this is true by tracing through the source code.
However, currently the documentation for the `guidance_scale` parameter in the `ChromaPipelin... | https://github.com/huggingface/diffusers/issues/12359 | closed | [
"bug"
] | 2025-09-21T08:34:15Z | 2025-09-22T20:04:15Z | 1 | mingyi456 |
pytorch/pytorch | 163,435 | [Fuzzer][Eager/Compile Divergence] a var subtracted from itself should equal 0? | ### Describe the bug
```
import torch
import sys
torch._dynamo.config.capture_scalar_outputs = True
torch._dynamo.config.capture_dynamic_output_shape_ops = True
torch._inductor.config.emulate_precision_casts = True
def foo(arg0, arg1, arg2, arg3):
t0 = arg0 # size=(), stride=(), dtype=float16, device=cuda
... | https://github.com/pytorch/pytorch/issues/163435 | open | [
"triaged",
"oncall: pt2",
"module: inductor",
"topic: fuzzer"
] | 2025-09-21T05:19:32Z | 2025-09-24T17:43:02Z | 3 | bobrenjc93 |
pytorch/tutorials | 3,581 | Feedback about Parametrizations Tutorial | There is the following issue on this page: https://docs.pytorch.org/tutorials/intermediate/parametrizations.html
Parametrization is not a topic known to all. You could add some context about the definition of parametrization to the tutorial, why the need for it was born ? What does it solve ? The go into giving the ex... | https://github.com/pytorch/tutorials/issues/3581 | open | [] | 2025-09-21T00:21:09Z | 2025-09-21T00:21:09Z | 0 | pentanol2 |
pytorch/pytorch | 163,359 | RFC: Support CUDA Stream Protocol | ### The feature, motivation and pitch
Hello! I am the CUDA Python tech lead and I'm filing this RFC to improve the interoperability between Python GPU libraries.
`cuda.core` is an official CUDA Python project: https://nvidia.github.io/cuda-python/cuda-core/latest/index.html. It offers a pythonic, self-contained, l... | https://github.com/pytorch/pytorch/issues/163359 | closed | [
"module: cuda",
"triaged",
"topic: new features"
] | 2025-09-19T19:23:41Z | 2025-09-25T19:45:40Z | 2 | leofang |
huggingface/trl | 4,110 | How does `trl` know what part of dataset is prompt and completion in the following situation? | ### Reproduction
```python
import torch
import trl as r
import peft as p
import datasets as d
import accelerate as a
import transformers as t
allowed_entities = ['AGE', 'EYECOLOR', 'GENDER', 'HEIGHT', 'WEIGHT', 'SEX']
entity_mapping = {
"ACCOUNTNAME": "account_name",
"ACCOUNTNUMBER": "account_number",
"AG... | https://github.com/huggingface/trl/issues/4110 | closed | [
"bug",
"documentation"
] | 2025-09-19T17:42:26Z | 2025-09-19T20:02:16Z | null | bminesh-shah |
pytorch/pytorch | 163,342 | [CD] - Manywheel CUDA builds failing since Sept 18 | ### Describe the bug
This hasn't been seen in a nightly yet, but I just rebased onto `viable/strict` and I'm getting this error in the `ciflow/binaries_wheel` flow, and it's happening in other people's jobs too.
Broken Workflow - https://github.com/pytorch/pytorch/actions/workflows/generated-linux-binary-manywheel-... | https://github.com/pytorch/pytorch/issues/163342 | closed | [
"high priority",
"triage review",
"module: binaries",
"module: cuda",
"triaged",
"module: regression"
] | 2025-09-19T14:29:24Z | 2025-09-20T12:16:28Z | 5 | robert-hardwick |
huggingface/transformers | 41,005 | Is there an official Qwen3VL model published by Alibaba? | ### Model description
Reference - https://huggingface.co/docs/transformers/main/en/model_doc/qwen3_vl#transformers.Qwen3VLForConditionalGeneration
If not, when can we expect it? Any guess? | https://github.com/huggingface/transformers/issues/41005 | closed | [
"New model"
] | 2025-09-19T13:59:34Z | 2025-09-20T10:00:04Z | 1 | Dineshkumar-Anandan-ZS0367 |
pytorch/pytorch | 163,331 | Support Query Bug !! | Hey Guys,
I have been working on an ML project, so i have a GPU server (An ancient one) which is backed by CUDA 3.0 .So what is the minimum version supported for PyTorch ?
Thank You :) | https://github.com/pytorch/pytorch/issues/163331 | closed | [] | 2025-09-19T10:09:06Z | 2025-09-20T14:42:36Z | 2 | Harishankar14 |
huggingface/transformers | 40,993 | HfArgumentParser cannot parse TRL Config | ### System Info
transformers==4.56.1
trl==0.17.0
I used to apply code below
```python
from transformers import HfArgumentParser
from trl import (
ScriptArguments, ModelConfig, SFTConfig
)
parser = HfArgumentParser((ScriptArguments, SFTConfig, ModelConfig))
script_arguments, trainer_config, model_config = parser.par... | https://github.com/huggingface/transformers/issues/40993 | closed | [
"bug"
] | 2025-09-19T08:29:48Z | 2025-09-19T09:06:20Z | 5 | caoyang-sufe |
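The idea `HfArgumentParser` automates — building an `argparse` parser from dataclass fields — can be sketched with only the standard library. In this toy version, `ScriptArguments` and `parse_into_dataclass` are made-up names for illustration; the real class handles many more cases (booleans, optionals, enums, multiple dataclasses at once):

```python
import argparse
from dataclasses import dataclass, fields

@dataclass
class ScriptArguments:
    # Hypothetical config for illustration only.
    dataset_name: str = "imdb"
    max_samples: int = 1000

def parse_into_dataclass(cls, argv):
    """Build an argparse parser from a dataclass's fields, then parse
    argv back into an instance of that dataclass."""
    parser = argparse.ArgumentParser()
    for f in fields(cls):
        # Each field becomes a --flag whose type and default come
        # from the dataclass definition.
        parser.add_argument(f"--{f.name}", type=f.type, default=f.default)
    ns = parser.parse_args(argv)
    return cls(**vars(ns))

args = parse_into_dataclass(ScriptArguments, ["--max_samples", "50"])
print(args)  # ScriptArguments(dataset_name='imdb', max_samples=50)
```

The error in the report above comes from the real parser rejecting config fields it cannot map to an argparse argument this way.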
huggingface/lerobot | 1,978 | Is there a best-fit model for each sim env? | I tried to train diffusion, smolvla, and even pi0 on aloha with 200k steps, and found that they all perform much worse (with less than 10% success rate) than the ACT policy. Why? Does each env task have a best-fit policy, or are there problems with my training strategy? | https://github.com/huggingface/lerobot/issues/1978 | closed | [
"question",
"policies",
"simulation"
] | 2025-09-19T02:45:14Z | 2025-10-17T11:25:27Z | null | shs822 |
pytorch/pytorch | 163,283 | RFC move to Pyrefly for Type Checking | Currently, mypy is used to typecheck PyTorch, with lint runner and dmypy. We appreciate the community's work maintaining mypy and type coverage in PyTorch and want to build on that foundation. [Pyrefly](https://pyrefly.org/) is a new standards-compliant Python type checker. The Pyrefly team has been hard at work on bui... | https://github.com/pytorch/pytorch/issues/163283 | closed | [
"module: typing",
"triaged",
"needs research"
] | 2025-09-18T19:52:40Z | 2025-11-24T19:20:08Z | 3 | maggiemoss |
huggingface/accelerate | 3,784 | AttributeError: 'Accelerator' object has no attribute 'deepspeed_config'. Did you mean: 'deepspeed_plugin'? | ### System Info
```Shell
- Name: accelerate Version: 1.10.1
- Name: transformers Version: 4.54.0
- Name: deepspeed Version: 0.17.5
- Name: torch Version: 2.8.0
- Name: wandb Version: 0.21.4
```
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] One of the scripts in th... | https://github.com/huggingface/accelerate/issues/3784 | closed | [] | 2025-09-18T17:07:54Z | 2025-10-27T15:08:19Z | 1 | alexge233 |
huggingface/lerobot | 1,969 | How to record a multi-task dataset on so101? | I found that I can only use "dataset.single_task" to record, but I need to record a dataset that contains more than 3 tasks. How can I solve this? | https://github.com/huggingface/lerobot/issues/1969 | closed | [] | 2025-09-18T10:18:00Z | 2025-09-21T02:50:59Z | null | Temmp1e |
huggingface/lerobot | 1,966 | SO101FollowerEndEffector? | I am trying to get inverse kinematics to work on my SO-101, and I found SO100FollowerEndEffector but there is no SO101FollowerEndEffector?
I suspect they are interchangeable, but when I use SO100FollowerEndEffector on my SO-101, it wants me to recalibrate, so I just want to make sure before I break anything. | https://github.com/huggingface/lerobot/issues/1966 | open | [
"question",
"robots"
] | 2025-09-17T23:56:38Z | 2025-10-30T08:56:22Z | null | cashlo |
pytorch/ao | 3,020 | How to use FP8 training with MoE models? |
I'm trying to train a Mixture of Experts (MoE) model with FP8 precision. However, I couldn't find any documentation or examples that describe how to enable FP8 training for MoE in torchao.
Is FP8 training for MoE models currently supported?
If yes, could you point me to a tutorial or usage guide?
If not, is there ... | https://github.com/pytorch/ao/issues/3020 | open | [
"moe"
] | 2025-09-17T12:18:14Z | 2025-10-02T18:20:44Z | null | BIGBALLON |
pytorch/torchtitan | 1,716 | float8 Grouped MM kernels | - **Is there any plan to support float8 Grouped MM for llama4 / qwen3 MoE model training?**
- **Is this the correct way to train a MoE model with FP8?**
Currently, the available Grouped GEMM kernels only support float16, and they do not work with float8.
``` python
@expert_parallel
def _run_experts_grouped_mm(
w1... | https://github.com/pytorch/torchtitan/issues/1716 | open | [
"question"
] | 2025-09-17T09:57:25Z | 2025-09-30T02:54:53Z | null | BIGBALLON |
pytorch/pytorch | 163,153 | FSDP2 implicit prefetch does not work | ### Describe the bug
I'm using the official [example of FSDP2](https://github.com/pytorch/examples/blob/acc295dc7b90714f1bf47f06004fc19a7fe235c4/distributed/FSDP2/example.py) with some small modifications:
```python
# distributed/FSDP2/example.py
import argparse
import os
import torch
from checkpoint import Checkpoin... | https://github.com/pytorch/pytorch/issues/163153 | closed | [
"oncall: distributed"
] | 2025-09-17T09:30:31Z | 2025-09-17T18:04:42Z | 1 | zhc7 |
pytorch/tutorials | 3,569 | Feedback about What is torch.nn really? | There is the following issue on this page: https://docs.pytorch.org/tutorials/beginner/nn_tutorial.html
Github URL for MNIST archive needs to change from:
```
URL = "https://github.com/pytorch/tutorials/raw/main/_static/"
```
to
```
URL = 'https://github.com/pytorch/tutorials/raw/refs/heads/main/_static/'
``` | https://github.com/pytorch/tutorials/issues/3569 | open | [] | 2025-09-16T20:51:42Z | 2025-09-16T20:51:42Z | null | robertbcalhoun |
huggingface/lighteval | 970 | How to use a configuration file? | The documentation makes references to using configuration yaml files like [here](https://huggingface.co/docs/lighteval/main/en/use-litellm-as-backend) but it doesn't give the name of the file or which option to feed the config to lighteval. I tried making a `config.yaml`, `config.yml` in the current directory and tryin... | https://github.com/huggingface/lighteval/issues/970 | closed | [] | 2025-09-16T20:13:48Z | 2025-09-24T22:08:32Z | null | oluwandabira |
huggingface/transformers | 40,915 | HfArgumentParser does not support peft.LoraConfig | ### System Info
- `transformers` version: 4.57.0.dev0
- Platform: Linux-5.14.0-284.73.1.el9_2.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.5.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch ... | https://github.com/huggingface/transformers/issues/40915 | closed | [
"bug"
] | 2025-09-16T16:23:56Z | 2025-09-23T05:16:14Z | 5 | romitjain |
pytorch/pytorch | 163,071 | Lintrunner not flagging CI issues in PRs | ### Describe the bug
The PR https://github.com/pytorch/pytorch/pull/162659 introduced some small changes in the `.github/workflows/pull.yml` workflow, changing the `linux-jammy-py3_10-clang18-asan-build` job.
After merging, lintrunner started flagging the change as inconsistency in the workflows (https://github.co... | https://github.com/pytorch/pytorch/issues/163071 | closed | [
"module: lint",
"triaged"
] | 2025-09-16T12:56:57Z | 2025-09-22T15:00:35Z | 3 | jeanschmidt |
huggingface/diffusers | 12,338 | `AutoencoderDC` bug with `pipe.enable_vae_slicing()` and decoding multiple images | ### Describe the bug
When using the Sana_Sprint_1.6B_1024px and the SANA1.5_4.8B_1024px models, I cannot enable VAE slicing when generating multiple images. I guess this issue will affect the rest of the Sana model and pipeline configurations because they all use the same `AutoencoderDC` model.
I traced the issue to ... | https://github.com/huggingface/diffusers/issues/12338 | closed | [
"bug"
] | 2025-09-16T12:23:29Z | 2025-09-22T06:55:35Z | 0 | mingyi456 |
pytorch/pytorch | 163,066 | PyTorch is including internal headers, leading to ODR violations | ### Describe the bug
In [functorch/csrc/dim/dim_opcode.c](https://github.com/pytorch/pytorch/blob/e3783a9575b810f9a3f51334270668357463958e/functorch/csrc/dim/dim_opcode.c#L8-L10) and [torch/csrc/dynamo/cpython_defs.c](https://github.com/pytorch/pytorch/blob/e3783a9575b810f9a3f51334270668357463958e/torch/csrc/dynamo... | https://github.com/pytorch/pytorch/issues/163066 | open | [
"module: build",
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2025-09-16T10:22:40Z | 2025-09-24T17:41:27Z | 6 | pganssle-google |
pytorch/pytorch | 163,061 | GIL is not released when calling torch.compile kernels | ### Describe the bug
In most cases, PyTorch releases the GIL when calling CUDA APIs, but I found that the GIL is held when calling torch.compile kernels. Is this expected? Is it possible to release the GIL when calling torch.compile kernels?
To reproduce, script `torch_compile.py`:
```python
import torch
import triton
import t... | https://github.com/pytorch/pytorch/issues/163061 | closed | [
"module: performance",
"triaged",
"oncall: pt2",
"module: inductor"
] | 2025-09-16T09:00:22Z | 2025-09-30T06:49:01Z | 7 | syuoni |
pytorch/xla | 9,646 | Correct behavior of `torch.ops.xla.write_mlir_debuginfo` | ## Correct behavior of `torch.ops.xla.write_mlir_debuginfo`
What is the correct behavior of `torch.ops.xla.write_mlir_debuginfo`? It seems it adds debug info to all upstream operations, not just the direct upstream op. Is this expected behavior?
```python
import torch
import torch_xla
import torch_xla.experimental.xla_mlir_de... | https://github.com/pytorch/xla/issues/9646 | open | [
"question",
"stablehlo"
] | 2025-09-16T00:20:05Z | 2025-09-16T14:01:06Z | null | tlsdmstn56 |
huggingface/optimum | 2,355 | Support exporting text-ranking for BERT models | ### Feature request
Currently, `optimum-cli export onnx --model cross-encoder/ms-marco-MiniLM-L-12-v2 cross-encoder--ms-marco-MiniLM-L-12-v2-onnx` says:
```
ValueError: Asked to export a bert model for the task text-ranking (auto-detected), but the Optimum ONNX exporter only supports the tasks feature-extraction, fi... | https://github.com/huggingface/optimum/issues/2355 | closed | [
"Stale"
] | 2025-09-15T21:23:35Z | 2025-10-21T02:10:29Z | 1 | kshitijl |
pytorch/pytorch | 162,971 | [CD] Reasonable time constraint for binary builds | ### Describe the bug
It looks like the CUDA+aarch64, Win+XPU, and ROCm builds are all close to exceeding the 6h threshold.
- Could we have some sort of a plan on how to deal with those? I.e., can some build dependencies be cached and built ahead of time as part of the docker image?
- Is there a matrix somewhere on what ... | https://github.com/pytorch/pytorch/issues/162971 | open | [
"module: binaries",
"module: ci",
"triaged"
] | 2025-09-15T16:21:01Z | 2025-09-23T20:23:03Z | 1 | malfet |
pytorch/pytorch | 162,957 | torch.linalg.eigh uses a large amount of memory in pytorch 2.8.0 | ### Describe the bug
Running torch.linalg.eigh spikes allocated GPU memory in pytorch 2.8.0. For repeated calls on tensors of different batch dimensions the allocated memory increases successively until reaching a plateau. In 2.7.0 the code below consistently uses ~200 MB, in 2.8.0 2-5 GB were allocated for differe... | https://github.com/pytorch/pytorch/issues/162957 | open | [
"needs reproduction",
"module: cuda",
"module: memory usage",
"triaged",
"module: linear algebra"
] | 2025-09-15T12:18:40Z | 2025-09-16T08:08:01Z | 2 | fjneumann |
pytorch/pytorch | 162,952 | The FSDPModule.set_requires_gradient_sync should control reduce-scatter sync and all-reduce sync separately | ### The feature, motivation and pitch
The current `FSDPModule.set_requires_gradient_sync` implementation controls both `reduce-scatter` and `all-reduce` together. For the multi-node HSDP scenario (replication between nodes, intra-node parameter sharing), in gradient accumulation periods, turning `reduce-scatter` on... | https://github.com/pytorch/pytorch/issues/162952 | closed | [
"oncall: distributed"
] | 2025-09-15T09:01:29Z | 2025-09-21T03:01:33Z | 3 | EquationWalker |
pytorch/pytorch | 162,908 | new sparse tensor format implementation: tips | ### The feature, motivation and pitch
Hi,
I'm currently working on implementing a new sparse tensor format. I wish to implement a method on the tensor object, such that I can do `A.to_new_format()`, where `A` is a tensor object.
Can someone point me to how to implement this kind of feature directly as a method o... | https://github.com/pytorch/pytorch/issues/162908 | closed | [] | 2025-09-14T11:31:49Z | 2025-09-14T22:07:20Z | 1 | ricvigi |
pytorch/pytorch | 162,898 | Script ./export/unflaten.py has some bugs. | ### Describe the bug
I'm using torch.distributed.pipelining to implement Pipeline Parallelism for my model, but I'm encountering the following error:
<img width="2174" height="232" alt="Image" src="https://github.com/user-attachments/assets/fd9e00b0-8be8-4e41-aa27-07d79c568305" />
After reviewing the source code, ... | https://github.com/pytorch/pytorch/issues/162898 | open | [
"oncall: distributed",
"module: pipelining"
] | 2025-09-14T05:19:54Z | 2025-10-05T13:33:18Z | 1 | lileicaca |
pytorch/vision | 9,215 | MixUp and CutMix transforms for semantic segmentation | Is there any way to use the MixUp and CutMix transforms for semantic segmentation masks? I could not find any documentation on it.
If this functionality does not exist, I will be happy to submit a PR for the same.
Motivation - CutMix is used in SOTA semi-supervised semantic segmentation methods such as [UniMatch](htt... | https://github.com/pytorch/vision/issues/9215 | open | [] | 2025-09-13T11:23:35Z | 2025-09-19T18:52:48Z | 1 | vedantdalimkar |
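The core of such a PR for segmentation is simple: apply the same CutMix box to the image and its mask. Below is a pure-Python toy sketch — `cutmix_pair` is a hypothetical helper, not the torchvision API; real code would sample the box from a Beta-distributed area ratio and operate on tensors:

```python
def cutmix_pair(img_a, mask_a, img_b, mask_b, box):
    """Paste the rectangular region `box` of sample B onto sample A.
    Applying the same box to the mask is what makes CutMix well-defined
    for segmentation (unlike MixUp, which blends pixel labels)."""
    y0, y1, x0, x1 = box
    img = [row[:] for row in img_a]    # copy so inputs stay untouched
    mask = [row[:] for row in mask_a]
    for y in range(y0, y1):
        for x in range(x0, x1):
            img[y][x] = img_b[y][x]
            mask[y][x] = mask_b[y][x]
    return img, mask

# 4x4 toy sample: class 0 everywhere in A, class 1 everywhere in B.
A_img = [[0.0] * 4 for _ in range(4)]
A_mask = [[0] * 4 for _ in range(4)]
B_img = [[1.0] * 4 for _ in range(4)]
B_mask = [[1] * 4 for _ in range(4)]
img, mask = cutmix_pair(A_img, A_mask, B_img, B_mask, box=(0, 2, 0, 2))
print(mask)  # top-left 2x2 block is class 1, the rest stays class 0
```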
pytorch/pytorch | 162,870 | [RFC] library function with 64+ arguments | ### Custom op support with 64+ arguments
Is there any plan to support 64+ arguments? I have a custom kernel that takes 64+ arguments.
```python
import torch
from torch.library import Library, impl, register_fake
num_args = 65
# Create a new custom namespace
my_lib = Library("my_ops", "LIB")
# Define a custom opera... | https://github.com/pytorch/pytorch/issues/162870 | open | [
"triaged",
"module: custom-operators",
"module: library"
] | 2025-09-13T04:03:09Z | 2025-09-15T23:25:57Z | 1 | tlsdmstn56 |
pytorch/pytorch | 162,859 | [RFC] support symmetric memory in torch.compile | The proposal originally came up in vLLM-compile sync with @ProExpertProg, @Chillee, and @Amir-19 and was also discussed with @ngimel @kwen2501. Recording it here to make sure we're all on the same page.
## Pitch
For any collective operator (built-in or custom), a user can specify which input must have symmetric memor... | https://github.com/pytorch/pytorch/issues/162859 | open | [
"oncall: distributed",
"triaged",
"oncall: pt2",
"module: inductor",
"vllm-compile",
"module: vllm",
"module: symm_mem"
] | 2025-09-12T22:27:49Z | 2025-12-16T18:19:59Z | 26 | zou3519 |
pytorch/pytorch | 162,854 | Move test_quantization tests to run weekly | Currently test_quantization is running on every commit / PR, it's not necessary since we are deprecating the flow: https://docs.pytorch.org/docs/main/quantization.html
The API is still used, though, so we want to reduce the cadence at which the tests run to weekly.
Main test file: https://github.com/pytorch/pytorch/b... | https://github.com/pytorch/pytorch/issues/162854 | closed | [
"oncall: quantization",
"module: ci",
"module: tests"
] | 2025-09-12T21:56:16Z | 2025-09-24T11:31:14Z | 1 | jerryzh168 |
huggingface/lerobot | 1,923 | Deploying SmolVLA with a simulator | Has anyone been able to deploy the SmolVLA model to control say the SO-100 on a simulator like IsaacSim?
Even if the fine-tuning reliably converges the observed performance on the simulator seems erratic. Do we apply the predicted actions from SmolVLA directly into the Articulation controller as positions? | https://github.com/huggingface/lerobot/issues/1923 | closed | [
"question",
"policies",
"simulation"
] | 2025-09-12T21:06:40Z | 2025-12-11T22:07:02Z | null | aditya1709 |
pytorch/torchtitan | 1,708 | FSDP + compiled autograd | Hi! I was trying out some debug runs using FSDP with compile enabled and found out that compiled autograd doesn't seem to work well with FSDP. (a single gpu run without FSDP seems to work)
Is it possible to make such a setup work or is it just not supported as of now?
Launching a train run with the arguments below
``... | https://github.com/pytorch/torchtitan/issues/1708 | open | [
"module: fsdp",
"module: torch.compile"
] | 2025-09-12T20:42:31Z | 2025-09-15T16:24:02Z | 3 | antony-frolov |
huggingface/swift-transformers | 237 | Please help. Seeing issues with Hub when integrating | Hello, I'm trying to integrate WhisperKit via https://github.com/argmaxinc/WhisperKit/blob/main/Package.swift but that seems to bring in [swift-transformers](https://github.com/huggingface/swift-transformers) and Hub. I'm seeing issues as below
Hub.package.swiftinterface:34:32: warning: 'BinaryDistinctCharacter' is n... | https://github.com/huggingface/swift-transformers/issues/237 | closed | [
"question"
] | 2025-09-12T17:06:28Z | 2025-09-17T15:36:52Z | null | rpatnayakuni22 |
pytorch/pytorch | 162,820 | [CI][CUDA][Distributed] test_ring_flex_attention failed on 8xB200 Runner | ### Describe the bug
Tracked in umbrella https://github.com/pytorch/pytorch/issues/162178
Job link: https://github.com/pytorch/pytorch/actions/runs/17660052730/job/50193312091
Failure message:
`2025-09-12T05:47:07.8805304Z expect_out, expect_lse = compiled_flex_attention(
2025-09-12T05:47:07.8805570Z Fil... | https://github.com/pytorch/pytorch/issues/162820 | open | [
"oncall: distributed",
"module: ci",
"module: tests",
"triaged",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 2025-09-12T16:29:01Z | 2025-09-22T22:23:48Z | 3 | nWEIdia |
pytorch/ao | 2,989 | Quantized model is slower than original model! | Hello,
I have put together this benchmark and I am wondering why the quantised version is so much slower. Is there something that I have missed or is it simply that the model is small and the overhead of quantization is not worth it in this case?
The results are the following.
```
Benchmarking: model_fp32.onnx
Warmi... | https://github.com/pytorch/ao/issues/2989 | open | [] | 2025-09-12T05:00:23Z | 2025-09-12T18:31:28Z | 8 | timpiperseek |
pytorch/pytorch | 162,782 | Is `torch.nn.functional.gumbel_softmax` going to be deprecated? | Is this function really going to be deprecated going forward? If so I will write my own version. Thanks!
There is the following issue on this page: https://docs.pytorch.org/docs/stable/generated/torch.nn.functional.gumbel_softmax.html
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | https://github.com/pytorch/pytorch/issues/162782 | open | [
"module: nn",
"triaged",
"module: deprecation"
] | 2025-09-12T00:48:11Z | 2025-09-19T17:36:22Z | 1 | michaelfortunato |
pytorch/pytorch | 162,719 | linalg.eig does not get parallelized on CPU | ### Describe the bug
I have a lengthy calculation that relies on eigendecomposition of non-Hermitian matrices in one place. The reason I picked PyTorch is the straightforward parallel nature of its ops; however, that does not seem to be the case with `eig`. While I know it calls a BLAS routine under the hood, I am a...
"module: performance",
"module: cpu",
"triaged",
"module: linear algebra"
] | 2025-09-11T12:09:57Z | 2025-10-02T12:03:49Z | 5 | krokosik |
huggingface/transformers | 40,815 | get_decoder feature regression in 4.56.0 | ### System Info
In the release of transformers v4.56.0, this PR https://github.com/huggingface/transformers/pull/39509 introduced a refactor of the public `get_decoder` method which previously existed on modes by moving it to the PreTrainedModel class.
Unfortunately this introduced a significant behavior change in th... | https://github.com/huggingface/transformers/issues/40815 | closed | [
"bug"
] | 2025-09-11T09:25:12Z | 2025-09-16T08:57:14Z | 4 | KyleMylonakisProtopia |
huggingface/transformers | 40,813 | Incorrect sharding configuration for Starcoder2 model | ### System Info
Transformers main branch (commit [0f1b128](https://github.com/huggingface/transformers/commit/0f1b128d3359a26bd18be99c26d7f04fb3cba914) )
- `transformers` version: 4.57.0.dev0
- Platform: Linux-5.15.0-1030-nvidia-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.34.4
- Safeten... | https://github.com/huggingface/transformers/issues/40813 | closed | [
"bug"
] | 2025-09-11T09:02:53Z | 2025-09-15T08:46:33Z | 1 | greg-kwasniewski1 |
huggingface/lerobot | 1,911 | How to avoid re-write cache data from pyarrow into parquet everytime? | Hi Authors,
When using a lerobot dataset in a pytorch dataloader, the lerobot dataset writes a huge cache converted from pyarrow to Apache Parquet. How can I avoid that?
I can think of two options:
1. Avoid converting to Parquet data and directly read from the pyarrow data. But this may lose reading performanc...
pytorch/pytorch | 162,638 | Gradient Clipping in Pipeline Parallelism Schedules | ### The feature, motivation and pitch
The current PP schedules like `Schedule1F1B` don't seem to have built-in gradient clipping support.
Is there a recommended approach for implementing gradient clipping in pipeline parallelism, and what would be the most efficient way to compute global gradient norms across shar... | https://github.com/pytorch/pytorch/issues/162638 | open | [
"oncall: distributed",
"module: autograd"
] | 2025-09-10T20:48:20Z | 2025-09-11T15:12:36Z | 0 | nvlas |
pytorch/pytorch | 162,630 | [RFC] Intrusive Caching DLPack for Fast Conversion | Currently DLPack is being used for Tensor data exchange. This conversion, which involves populating metadata such as shape, data pointer, and strides, can introduce a small but non-negligible overhead, typically in the range of 40-80 nanoseconds on the C++ side. While this latency is already quite low, frequent tensor ... | https://github.com/pytorch/pytorch/issues/162630 | closed | [
"triaged",
"enhancement",
"module: dlpack"
] | 2025-09-10T20:00:53Z | 2025-09-12T20:26:48Z | 15 | tqchen |
pytorch/pytorch | 162,606 | Tensorpipe - ROCm support | Raising this issue to discuss on the path forward to enable tensorpipe feature on ROCm.
Why it is required
- UT gap, currently tensorpipe related UTs are skipped on ROCm but executed for CUDA.
Tensorpipe repo was archived few year back and no changes were accepted. Recently https://github.com/pytorch/tensorpipe/comm... | https://github.com/pytorch/pytorch/issues/162606 | open | [
"module: rocm",
"triaged",
"module: tensorpipe",
"rocm"
] | 2025-09-10T16:03:52Z | 2025-12-17T02:56:09Z | 8 | pruthvistony |
pytorch/ao | 2,967 | Deprecation for IntxWeightOnlyConfig/Int8DynamicActivationIntxWeightConfig (version 1) and the models | This issue is tracking the deprecation of the (1) configs (2) model checkpoints quantized with these configs.
What is deprecated:
* IntxWeightOnlyConfig/Int8DynamicActivationIntxWeightConfig with version=1 is now deprecated. Please use version=2 (current default).
* Quantized checkpoints quantized with version 1 conf... | https://github.com/pytorch/ao/issues/2967 | open | [] | 2025-09-09T20:35:13Z | 2025-10-02T20:50:10Z | 0 | metascroy |
pytorch/pytorch | 162,512 | Default Google Search to Off in docs | <img width="967" height="722" alt="Image" src="https://github.com/user-attachments/assets/820499bb-1237-4a9c-9946-71c67ef88f6d" />
Two comments on the search bar in the new UI:
1. It is inconvenient that the search bar is not on the same screen as the search results, so I cannot see both at the same time.
2. I searche... | https://github.com/pytorch/pytorch/issues/162512 | open | [
"module: docs",
"triaged"
] | 2025-09-09T18:14:06Z | 2025-09-09T18:24:50Z | 1 | janeyx99 |
huggingface/transformers | 40,767 | 3D Object Detection Models | ### Model description
Hi together,
is there a reason, or an existing thread, where implementing 3D models like those in mmdet3d has been discussed? I have not found any discussion.
Thanks
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links fo... | https://github.com/huggingface/transformers/issues/40767 | open | [
"New model"
] | 2025-09-09T13:16:33Z | 2025-11-13T21:18:40Z | 3 | SeucheAchat9115 |
pytorch/pytorch | 162,481 | Inconsistent tracking of device activities when calling profiler.step() in torch profiler | ### 🐛 Describe the bug
Here is a simple example of using profiler's scheduling functionality:
```python
import torch
def bench_kineto(fn, num_tests: int):
flush_l2_size = int(8e9 // 4)
schedule = torch.profiler.schedule(wait=0, warmup=1, active=1, repeat=1)
profiler = torch.profiler.profile(activities=... | https://github.com/pytorch/pytorch/issues/162481 | open | [
"oncall: profiler"
] | 2025-09-09T11:58:11Z | 2025-12-01T18:41:45Z | 5 | youkaichao |
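One way to reason about the profiler report above is to emulate the wait/warmup/active/repeat state machine in plain Python; this is a simplified sketch of the documented `torch.profiler.schedule` semantics, not the real implementation, and `schedule_action` is a hypothetical helper:

```python
def schedule_action(step, wait, warmup, active, repeat):
    """Map a profiler step index to an action, mimicking the documented
    torch.profiler.schedule behaviour: each cycle is `wait` idle steps,
    then `warmup` warm-up steps, then `active` recording steps; a nonzero
    `repeat` limits the number of cycles."""
    cycle = wait + warmup + active
    if repeat and step >= cycle * repeat:
        return "NONE"  # done recording after `repeat` cycles
    pos = step % cycle
    if pos < wait:
        return "NONE"
    if pos < wait + warmup:
        return "WARMUP"
    # the last active step of a cycle also saves the trace
    return "RECORD_AND_SAVE" if pos == cycle - 1 else "RECORD"

actions = [schedule_action(s, wait=0, warmup=1, active=1, repeat=1) for s in range(4)]
# With the issue's settings, step 0 only warms up, step 1 records and
# saves, and later steps are ignored, so a single iteration's device
# activity ends up in the trace.
```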
huggingface/lerobot | 1,899 | Has anyone tried to export the smolvla as onnx model for deployment? | I have tried to test the trained smolvla model on my PC, and it works. I now want to deploy smolvla on our target board.
I looked into the model structure of smolvla; for the vision-encoder and language-embedding parts I can refer to smolvlm and export them as two onnx models. I think the robot state embedding al...
"question",
"policies",
"performance"
] | 2025-09-09T10:41:14Z | 2025-10-07T20:50:12Z | null | TankerLee |
huggingface/huggingface_hub | 3,339 | What is the best replacement of HfFileSystem.glob with HfApi | In some of our code, we were using something like
```python
from huggingface_hub import HfFileSystem

hf_fs = HfFileSystem()
files = hf_fs.glob('my/repo/*/model.onnx')
```
But I found that HfFileSystem is much less stable than HfApi, especially in edge cases (e.g. when the network is unstable).
So what is the best replacement of HfFileSystem.glob with HfApi? Any s... | https://github.com/huggingface/huggingface_hub/issues/3339 | closed | [] | 2025-09-09T09:02:07Z | 2025-09-15T09:12:04Z | null | narugo1992 |
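One common replacement is to fetch the flat file listing with `HfApi.list_repo_files` and filter it client-side. A sketch under that assumption; `glob_repo_files` is an illustrative helper, not a hub API, and the filtering itself is shown on a hard-coded listing:

```python
from fnmatch import fnmatchcase

def glob_repo_files(files, pattern):
    # `files` is assumed to come from HfApi.list_repo_files(repo_id), which
    # returns paths relative to the repo root. Note that fnmatch's "*" also
    # matches "/", which is looser than a true path glob, so patterns with
    # one "/" behave as expected only for flat-ish layouts.
    return [f for f in files if fnmatchcase(f, pattern)]

listing = ["a/model.onnx", "b/model.onnx", "b/config.json"]
matches = glob_repo_files(listing, "*/model.onnx")
```

The upside is that a single listing call replaces the filesystem layer entirely, which sidesteps HfFileSystem's flakier code paths.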
huggingface/transformers | 40,754 | Potentially incorrect value assignment of Llama4TextModel's output in Llama4ForCausalLM's output? | ### System Info
**System Info**
- `transformers` version: 4.55.4
- Platform: Linux-6.15.9-201.fc42.x86_64-x86_64-with-glibc2.41
- Python version: 3.13.5
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTo... | https://github.com/huggingface/transformers/issues/40754 | closed | [
"Usage",
"bug"
] | 2025-09-08T12:31:39Z | 2025-09-16T19:25:03Z | 3 | st143575 |
huggingface/transformers | 40,752 | How to extract attention weights for the first generated token? | **Title:** Request for clarification: How to extract attention weights for the first generated token?
**Description:**
Hi, I'm trying to extract the attention weights **of the first generated token** (i.e., the first new token produced by `generate()`) with respect to the input prompt. However, I'm observing inconsis... | https://github.com/huggingface/transformers/issues/40752 | closed | [] | 2025-09-08T09:53:16Z | 2025-09-08T11:41:22Z | null | VincentLHH |
huggingface/transformers.js | 1,407 | Expected time to load a super-resolution model locally | ### Question
Loading an image super-resolution model locally can take more than 10 seconds on my MacBook Pro (M1 Max). Is this expected behavior?
```javascript
env.allowRemoteModels = false;
env.allowLocalModels = true;
env.backends.onnx.wasm.wasmPaths = `/wasm/`;
const upscaler = ref(null);
onMounted(async () => {
... | https://github.com/huggingface/transformers.js/issues/1407 | closed | [
"question"
] | 2025-09-08T06:26:49Z | 2025-09-30T19:22:34Z | null | ymtoo |
huggingface/lerobot | 1,891 | How to checkout a commit id? | The underlying datasets supports a "revision" flag. Does lerobot? | https://github.com/huggingface/lerobot/issues/1891 | closed | [] | 2025-09-08T04:39:37Z | 2025-09-10T22:53:18Z | null | richardrl |
huggingface/transformers | 40,743 | Support for 4D attention mask for T5 | ### Feature request
Currently, T5 cannot take 4D attention masks (batch_size, num_heads, seq_len, seq_len) as inputs. Passing a 4D attention_mask and a 4D decoder_attention_mask like so leads to a shape-related exception :
```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
toke... | https://github.com/huggingface/transformers/issues/40743 | open | [
"Feature request"
] | 2025-09-07T07:18:05Z | 2025-09-09T11:43:33Z | 5 | Aethor |
huggingface/lerobot | 1,882 | Pretrain - Code for pretraining smolvla | ## Guidance on Replicating the Pre-training Process with Community Datasets
Hi team,
First off, thank you for the fantastic work on SmolVLA and for open-sourcing the model and code. It's a great contribution to the community.
I am trying to replicate the pre-training process as described in the original paper. I ha... | https://github.com/huggingface/lerobot/issues/1882 | closed | [
"question",
"dataset"
] | 2025-09-07T03:18:04Z | 2025-09-23T09:06:13Z | null | ruiheng123 |
pytorch/ao | 2,948 | Deprecation for Int4WeightOnlyConfig (version 1) and the models | This issue is tracking the deprecation of the (1) configs (2) model checkpoints quantized with these configs.
What is deprecated:
* We added version 2 Int4WeightOnlyConfig in various PRs in https://github.com/pytorch/ao/issues/2752 and switched the default version to 2 in https://github.com/pytorch/ao/pull/2949, the v... | https://github.com/pytorch/ao/issues/2948 | open | [
"tracker"
] | 2025-09-05T23:31:36Z | 2025-10-02T20:49:44Z | 0 | jerryzh168 |
huggingface/transformers | 40,708 | When using a custom model, it copies the code into Hugging Face's cache directory. | ```
model = AutoModel.from_pretrained(
model_args.model_name_or_path,
trust_remote_code=True,
torch_dtype=compute_dtype,
device_map=device_map,
# init_vision=True,
# init_audio=False,
# init_tts=False,
)
```
`model_args.model_name_or_path=/mnt/241hdd/wzr/M... | https://github.com/huggingface/transformers/issues/40708 | closed | [] | 2025-09-05T07:21:40Z | 2025-11-15T08:03:16Z | 4 | wzr0108 |
huggingface/transformers | 40,690 | Batches loaded from wrong epoch when resuming from second epoch | ### System Info
**Required system information**
```text
- `transformers` version: 4.57.0.dev0
- Platform: Linux-5.15.0-133-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed ve... | https://github.com/huggingface/transformers/issues/40690 | closed | [
"bug"
] | 2025-09-04T11:48:41Z | 2025-12-03T13:14:04Z | 6 | ngazagna-qc |
huggingface/optimum | 2,347 | Gemma3n convert to onnx format | Hello,
How do I convert the Gemma3n model to the ONNX format using the OptimumCLI command?
Thanks in advance. | https://github.com/huggingface/optimum/issues/2347 | closed | [
"Stale"
] | 2025-09-04T09:13:19Z | 2025-10-15T02:09:55Z | 2 | shahizat |
huggingface/transformers | 40,680 | Idea: Exploring Mathematical Extensions for GPT-style Models (teaser) | Hi Transformers team 👋,
I've been experimenting with a conceptual enhancement to GPT-style architectures, introducing mathematical mechanisms for memory and adaptive learning, while keeping the overall transformer backbone intact.
I've documented the approach in Markdown (README + comparison notes), but haven't publis... | https://github.com/huggingface/transformers/issues/40680 | closed | [] | 2025-09-04T07:23:29Z | 2025-10-12T08:02:38Z | 3 | muzamil-ashiq |
pytorch/torchtitan | 1,680 | How is SDPA TP parallelized ? | In llama3, the TransformerBlock is TP parallelized [here](https://github.com/pytorch/torchtitan/blob/21799393c3e6dc710e694ef1a65852f2136ba58d/torchtitan/models/llama3/infra/parallelize.py#L204 ). However, I do not see any specific TP parallelization for scaled_dot_product . How is SDPA TP parallelized then ? | https://github.com/pytorch/torchtitan/issues/1680 | open | [] | 2025-09-04T03:23:27Z | 2025-09-04T22:11:08Z | 2 | githubsgi |
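On the torchtitan question above: heads are independent in scaled dot-product attention, so once the QKV projections are sharded head-wise by the colwise/rowwise parallel styles, each rank just runs ordinary SDPA on its local heads and no SDPA-specific plan is needed. A numpy sketch of that independence (my reading, not torchtitan code):

```python
import numpy as np

def sdpa(q, k, v):
    """Plain scaled dot-product attention over a (heads, seq, dim) batch."""
    d = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 5, 8)) for _ in range(3))

full = sdpa(q, k, v)                       # all 4 heads on one "rank"
sharded = np.concatenate(                  # heads 0-1 and 2-3 on two "ranks"
    [sdpa(q[:2], k[:2], v[:2]), sdpa(q[2:], k[2:], v[2:])], axis=0)
# The two computations agree exactly, because no term in SDPA mixes heads.
```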
huggingface/transformers | 40,647 | how to get response text during training | I want to obtain the inferred output text during the evaluation step in the training process, not just the eval loss.
<img width="1264" height="211" alt="Image" src="https://github.com/user-attachments/assets/9dd432c5-74ea-4290-adff-7865cf3ea481" /> | https://github.com/huggingface/transformers/issues/40647 | closed | [] | 2025-09-03T10:37:51Z | 2025-10-12T08:02:43Z | null | zyandtom |
huggingface/diffusers | 12,276 | The image is blurry. | How to solve image blurriness during fine-tuning? | https://github.com/huggingface/diffusers/issues/12276 | open | [] | 2025-09-03T08:29:38Z | 2025-09-03T08:29:38Z | 0 | sucessfullys |
huggingface/gym-hil | 32 | how to perform hil in sim | https://github.com/huggingface/gym-hil/issues/32 | closed | [] | 2025-09-02T17:10:05Z | 2025-09-16T14:02:32Z | null | prathamv0811 | |
pytorch/vision | 9,202 | torch thread yield after launch nccl kernel | ### 🐛 Describe the bug
I'm using torch to benchmark nccl performance. The default nccl version that torch uses is 2.21.5. With default setting, the performance looks normal.
Then I use LD_PRELOAD to use the latest nccl version 2.27.7 instead, and the performance degrades drastically.
nsys shows that with nccl 2.27.7... | https://github.com/pytorch/vision/issues/9202 | closed | [] | 2025-09-02T13:09:26Z | 2025-09-02T13:44:52Z | 1 | tobi1031 |
huggingface/transformers | 40,606 | GPT-OSS attention backends available for SM120 other than Eager? | I was wondering which attention backends we can use for long context on an SM120 GPU. The "eager_attention_forward" path uses the naive implementation that computes the full attention in one go, which can lead to OOM for large contexts, but I couldn't use other implementations since they either do not support sinks or ... | https://github.com/huggingface/transformers/issues/40606 | closed | [] | 2025-09-02T03:21:16Z | 2025-10-12T08:02:48Z | 4 | TheTinyTeddy |
pytorch/TensorRT | 3,803 | Performance Issue when using tools/llm | ## β Question
<!-- Your question -->
## What you have already tried
<!-- A clear and concise description of what you have already done. -->
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 2.8.0
- CPU Architecture: amd
- OS (e.g., ... | https://github.com/pytorch/TensorRT/issues/3803 | open | [
"question"
] | 2025-09-01T17:10:38Z | 2025-09-04T08:43:24Z | null | ChiikawaSama |
huggingface/peft | 2,764 | merge_and_unload returns the base (prior to fine-tuning) back!!!! | I have fine-tuned a model using PEFT and now I want to merge the adapter into the base model. This is what I am doing:
```
base_model = AutoModelForCausalLM.from_pretrained(model_id, device_map='auto')
model_finetuned = PeftModel.from_pretrained(base_model, adapter_path)
```
Now the size of `model_finetuned` is roughly 42GB but when I... | https://github.com/huggingface/peft/issues/2764 | closed | [] | 2025-09-01T04:07:36Z | 2025-10-09T15:26:15Z | 12 | manitadayon |
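For debugging reports like the one above, it helps to pin down the arithmetic `merge_and_unload` is supposed to perform: fold the scaled low-rank update into the frozen weight. If the merged model then behaves like the base, the low-rank term was effectively zero or the adapter was never loaded. A toy numpy check of the expected math (sizes and scaling are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, scaling = 16, 8, 4, 2.0    # toy sizes, hypothetical scaling
W = rng.standard_normal((d_out, d_in))     # frozen base weight
A = rng.standard_normal((r, d_in))         # LoRA down-projection
B = rng.standard_normal((d_out, r))        # LoRA up-projection
x = rng.standard_normal((3, d_in))

# Forward with the adapter attached: base path plus scaled low-rank path.
y_adapter = x @ W.T + scaling * (x @ A.T) @ B.T

# What a merge is supposed to do numerically: fold the update into W.
W_merged = W + scaling * B @ A
y_merged = x @ W_merged.T
```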
huggingface/lerobot | 1,822 | As of 08/31/2025, how do you create a v2.1 dataset from raw data? | My search is cursory, but I can't find any tutorial or example on creating a v2.1 dataset on the main branch. So, how do you create a Lerobot dataset in the current version? Should I refer to older commits? | https://github.com/huggingface/lerobot/issues/1822 | open | [
"question",
"dataset"
] | 2025-08-31T18:29:34Z | 2025-10-08T13:02:44Z | null | IrvingF7 |
huggingface/text-generation-inference | 3,318 | Infinite tool call loop: `HuggingFaceModel` and `text-generation-inference` | ## Description
Hello. Needless to say, amazing library. Please let me know if you'd like me to try something or if you need more info.
I've been going through various local model providers trying to find one that works well, when I came across a rather shocking bug running against Huggingface's TGI model host.
T... | https://github.com/huggingface/text-generation-inference/issues/3318 | open | [] | 2025-08-31T08:23:46Z | 2025-08-31T08:58:13Z | 1 | baughmann |
pytorch/audio | 4,076 | [STABLE ABI] Porting rir/rir.cpp rir/ray_tracing.cpp | This issue collects tasks that block porting [rir/rir.cpp](https://github.com/pytorch/audio/blob/main/src/libtorchaudio/rir/rir.cpp) and [rir/ray_tracing.cpp](https://github.com/pytorch/audio/blob/main/src/libtorchaudio/rir/ray_tracing.cpp) to use torch stable ABI.
- [ ] implement `mutable_data_ptr<T>()` and `const_da... | https://github.com/pytorch/audio/issues/4076 | closed | [] | 2025-08-30T19:46:50Z | 2025-11-04T11:34:21Z | 2 | pearu |
pytorch/audio | 4,075 | [STABLE ABI] Porting overdrive.cpp | This issue collects tasks that block porting [overdrive.cpp](https://github.com/pytorch/audio/blob/main/src/libtorchaudio/overdrive.cpp) to use torch stable ABI.
- [x] implement `accessor` template as a `torch::stable::Tensor` template method
Fix available: https://github.com/pytorch/pytorch/pull/161967
- [x] ca... | https://github.com/pytorch/audio/issues/4075 | closed | [] | 2025-08-30T19:23:39Z | 2025-11-20T14:17:04Z | 0 | pearu |
pytorch/audio | 4,074 | [STABLE ABI] Porting lfilter.cpp | This issue collects tasks that block porting [lfilter.cpp](https://github.com/pytorch/audio/blob/main/src/libtorchaudio/lfilter.cpp) to use torch stable ABI.
- [x] implement `mutable_data_ptr<T>()` and `const_data_ptr<T>()` in torch/csrc/stable/tensor_struct.h. For instance, this simplifies porting of expressions like... | https://github.com/pytorch/audio/issues/4074 | closed | [] | 2025-08-30T19:13:55Z | 2025-12-01T09:41:54Z | 4 | pearu |
pytorch/ao | 2,914 | Support for LR-QAT | Qualcomm research proposed a technique LR-QAT in their paper "Low-Rank Quantization-Aware Training for LLMs".
The core idea is that the low-rank weights are placed within the quantization grid of the model's weights using a custom downcasting operator.
The unique advantage of this is that it allows for a low rank ada... | https://github.com/pytorch/ao/issues/2914 | open | [] | 2025-08-30T18:16:10Z | 2025-09-04T01:14:35Z | 1 | Juahyori |
huggingface/diffusers | 12,257 | [Looking for community contribution] support Wan 2.2 S2V: an audio-driven cinematic video generation model | We're super excited about the Wan 2.2 S2V (Speech-to-Video) model and want to get it integrated into Diffusers! This would be an amazing addition, and we're looking for experienced community contributors to help make this happen.
- **Project Page**: https://humanaigc.github.io/wan-s2v-webpage/
- **Source Code**: htt... | https://github.com/huggingface/diffusers/issues/12257 | open | [
"help wanted",
"Good second issue",
"contributions-welcome"
] | 2025-08-29T08:04:43Z | 2025-08-29T10:23:52Z | 0 | yiyixuxu |
pytorch/torchtitan | 1,661 | CPU Mode Request | Hi all, just getting into using TorchTitan and have really loved it! One thing I personally would find useful is the ability to do small day-to-day development on my laptop in a CPU mode. I realize that TorchTitan is a distributed training repo, but I think a lot of researchers would still find a CPU dev/debug mode u... | https://github.com/pytorch/torchtitan/issues/1661 | open | [
"question"
] | 2025-08-29T07:48:31Z | 2025-08-29T17:32:46Z | null | djbyrne |
pytorch/executorch | 13,787 | How to enable XNN_ENABLE_SPARSE in Executorch | ### 🚀 The feature, motivation and pitch
I would like to ask if there is any plan to support XNN_ENABLE_SPARSE in Executorch.
I am working on a model that contains a significant amount of sparse operations, and I believe enabling XNN_ENABLE_SPARSE could lead to a substantial performance improvement.
Is this feature ... | https://github.com/pytorch/executorch/issues/13787 | open | [
"module: xnnpack"
] | 2025-08-29T04:04:39Z | 2025-09-08T16:32:36Z | null | HKLee2040 |
huggingface/optimum-onnx | 44 | How to use streaming inference for onnx models exported from QWEN3-4B models | How to use streaming inference for onnx models exported from QWEN3-4B models | https://github.com/huggingface/optimum-onnx/issues/44 | closed | [] | 2025-08-29T01:48:07Z | 2025-10-06T12:29:34Z | null | williamlzw |
huggingface/diffusers | 12,255 | [BUG] Misleading ValueError when subclassing StableDiffusionImg2ImgPipeline with a mismatched __init__ signature | ### Describe the bug
When subclassing diffusers.StableDiffusionImg2ImgPipeline, if the subclass's __init__ signature does not include the requires_safety_checker: bool = True argument, the default .from_pretrained() loader raises a confusing and indirect ValueError.
The official documentation for StableDiffusionImg2I... | https://github.com/huggingface/diffusers/issues/12255 | closed | [
"bug"
] | 2025-08-28T18:31:14Z | 2025-08-30T07:41:16Z | 2 | BoostZhu |
pytorch/torchtitan | 1,653 | Interleaved 1F1B weight-gradient computation decoupling | Hi torchtitan team,
The kimi K2 reports apparently do not use dualpipe, and instead use interleaved 1F1B and "decouple the weight-gradient computation from each micro-batchβs backward pass and execute it in parallel with the corresponding PP communication" to mitigate the PP communication overhead. I am curious how ha... | https://github.com/pytorch/torchtitan/issues/1653 | open | [
"question",
"module: pipelining"
] | 2025-08-28T18:21:15Z | 2025-09-05T20:19:24Z | null | vwxyzjn |
huggingface/peft | 2,759 | PeftModel trainable parameters with multiple adapters | ### System Info
peft-0.17.1
python 3.9
### Who can help?
@BenjaminBossan
### Reproduction
**1) modules_to_save gradient true even when is_trainable=False**
The adapters has both modules_to_save and target_modules
```
peft_backbone = PeftModel.from_pretrained(
target_backbone,
... | https://github.com/huggingface/peft/issues/2759 | closed | [] | 2025-08-28T16:36:25Z | 2025-10-06T15:04:09Z | 8 | NguyenRichard |
pytorch/ao | 2,896 | [CPU][FP8][Inductor] How to support fp8 quant for inductor on CPU | What we want to do is to enable FP8 quantization in PyTorch. Similar to INT8 quantization, this requires inserting quantize and dequantize operations into the computational graph. In order to reuse the int8 pattern-matching logic, we need to register FP8 quant and dequant ops.
To address this, we attempted to register quant i... | https://github.com/pytorch/ao/issues/2896 | closed | [] | 2025-08-28T06:07:47Z | 2025-09-21T09:53:16Z | null | shiyang-weng |
pytorch/vision | 9,196 | Why am I getting a discrepancy between SSDLite Scores and the Full Probability Vector? | I am noticing a slight discrepancy between the scores output by the SSDLite model and the full probability vector you get from feeding the features extracted from the backbone through the model head. While the difference is slight, around .004, I find the behavior peculiar and can't find an explanation. Please see the ... | https://github.com/pytorch/vision/issues/9196 | closed | [
"question"
] | 2025-08-28T04:24:56Z | 2025-09-06T14:58:19Z | null | Aneesh-Sandhir |
huggingface/transformers | 40,462 | Question about RoPE Implementation in modeling_llama: Should torch.cat be repeat_interleave? | Hi,
I was going through the code for `modeling_llama` and the RoPE implementation. I came across the following function:
```
def forward(self, x, position_ids):
inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1).to(x.device)
position_ids_expanded = position_id... | https://github.com/huggingface/transformers/issues/40462 | closed | [] | 2025-08-26T16:32:41Z | 2025-08-27T10:01:11Z | 2 | abhidipbhattacharyya |
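On the RoPE question above: with the cat-based layout, `cos`/`sin` pair dimension `i` with dimension `i + dim/2` rather than with its interleaved neighbour. The result is still a valid rotation, and it is equivalent to the interleaved formulation up to a fixed permutation of the head dimensions that the pretrained weights already absorb (the usual explanation; the numpy sketch below only checks the rotation property, under assumed toy sizes):

```python
import numpy as np

def rotate_half(x):
    """Swap the two halves of the last dim with a sign flip, matching the
    cat-based rotate_half used in modeling_llama."""
    half = x.shape[-1] // 2
    return np.concatenate([-x[..., half:], x[..., :half]], axis=-1)

dim, pos = 8, 3.0
inv_freq = 1.0 / (10000 ** (np.arange(0, dim, 2) / dim))   # (dim/2,)
freqs = pos * inv_freq
# torch.cat layout: the angles are repeated block-wise, not interleaved,
# so dimension i shares an angle with dimension i + dim/2.
emb = np.concatenate([freqs, freqs])                        # (dim,)
cos, sin = np.cos(emb), np.sin(emb)

x = np.arange(1.0, dim + 1.0)
x_rot = x * cos + rotate_half(x) * sin
# Each pair (x[i], x[i + dim/2]) is rotated by angle freqs[i], so the
# transform preserves the vector norm, as a rotary embedding should.
```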
huggingface/transformers | 40,459 | `use_kernels=True` does not invoke custom kernels | ### System Info
- `transformers` version: 4.56.0.dev0
- Platform: Linux-5.4.0-216-generic-x86_64-with-glibc2.31
- Python version: 3.12.7
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: 1.10.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (ac... | https://github.com/huggingface/transformers/issues/40459 | closed | [
"bug"
] | 2025-08-26T13:32:35Z | 2025-09-16T08:50:55Z | 1 | ariG23498 |
huggingface/diffusers | 12,241 | WAN2.1 FLF2V: Incorrect MASK Creation???? | Hello! I think this may be an error. (Or not; please explain it to me!)
In **WanImageToVideoPipeline** class in `pipline_wan_i2v.py`,
<img width="868" height="243" alt="Image" src="https://github.com/user-attachments/assets/8108a9e9-8632-44a1-93b8-abd9ae6a22cd" />
(the code is the part of `prepare_latents` funct... | https://github.com/huggingface/diffusers/issues/12241 | open | [] | 2025-08-26T12:23:09Z | 2025-08-27T02:10:49Z | 1 | KyujinHan |
huggingface/lerobot | 1,792 | how to train lerobot model offline with offline data? | Hi, I'm trying to configure lerobot to train with pre-downloaded models and datasets. I'm stuck, however, on how to organize the model cache and dataset cache, and how to tell the train script to run fully offline.
I tried to download the model and dataset:
```
$ hf download lerobot/pi0 --cache-dir ~/lerobot... | https://github.com/huggingface/lerobot/issues/1792 | closed | [] | 2025-08-26T10:20:56Z | 2025-09-03T10:48:37Z | null | dalishi |