| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/pytorch | 165,100 | Header files not found during build | ### 🐛 Describe the bug
I'm trying to build pytorch from source but getting the following error:
```
pytorch/aten/src/ATen/core/ivalue.h:4:10: fatal error: ATen/core/TensorBody.h: No such file or directory
```
Seems these files are generated and I see this line printed before
```
core header install: pytorch/build/... | https://github.com/pytorch/pytorch/issues/165100 | open | [
"module: build",
"triaged",
"has workaround"
] | 2025-10-09T20:51:23Z | 2025-10-10T13:43:50Z | 1 | tushar00jain |
vllm-project/vllm | 26,530 | [Bug]: Fix CVE-2023-48022 in docker image | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
Not required for this.
</details>
### 🐛 Describe the bug
The vllm/vllm-openai:v0.10.2 image seems to be affected by the [CVE-2023-48022](https://avd.aquasec.com/nvd/2023/cve-2023-48022/) **Critical** CVE wi... | https://github.com/vllm-project/vllm/issues/26530 | closed | [
"bug"
] | 2025-10-09T20:16:02Z | 2025-10-10T21:14:49Z | 3 | geodavic |
huggingface/lerobot | 2,156 | How to reproduce lerobot/pi0_libero_finetuned? | Thanks for the great work!
I evaluated lerobot/pi0_libero_finetuned on libero goal datasets.
When using n_action_steps=50, the success rate is ~ 75%
When using n_action_steps=10, the success rate is ~ 90%
I tried to reproduce the training results, so I mainly referred to [train_config.json](https://huggingface.co/lero... | https://github.com/huggingface/lerobot/issues/2156 | open | [
"question",
"policies",
"simulation"
] | 2025-10-09T18:11:47Z | 2025-10-22T09:27:03Z | null | PuzhenYuan |
pytorch/ao | 3,137 | README should highlight our huggingface models | We've got a few quantized models here and plan to keep adding to it: https://huggingface.co/pytorch. This should be highlighted close to the top of the README | https://github.com/pytorch/ao/issues/3137 | open | [
"topic: documentation"
] | 2025-10-09T18:07:51Z | 2025-10-09T18:08:06Z | 0 | andrewor14 |
huggingface/lerobot | 2,153 | Why can't I find something like train_expert_only in the latest version of pi0? Do the current versions of pi0 and pi0.5 only support full-parameter training? | Why can't I find something like "train_expert_only" in the latest version of pi0?
Do the current versions of pi0 and pi0.5 only support full-parameter training? | https://github.com/huggingface/lerobot/issues/2153 | closed | [
"enhancement",
"question",
"policies",
"good first issue"
] | 2025-10-09T13:08:10Z | 2025-12-31T14:54:29Z | null | ZHHhang |
pytorch/pytorch | 165,051 | `[__recompiles] - 0/3: expected type of 'args[1]' to be a tensor type, ' but found <class 'torch.Tensor'>` cryptic recompilation cause | ### 🐛 Describe the bug
Hello,
In some private workload I am running (unfortunately I don't have a minimal repro - I can try to get one if needed), the recompilation cause:
```
V1009 11:33:51.404000 3024 site-packages/torch/_dynamo/guards.py:3006] [0/5] [__recompiles] Recompiling function inner in /root/miniforge3/l... | https://github.com/pytorch/pytorch/issues/165051 | open | [
"needs reproduction",
"triaged",
"oncall: pt2",
"module: dynamo"
] | 2025-10-09T11:43:27Z | 2025-10-10T17:59:08Z | 3 | fxmarty-amd |
huggingface/datasets | 7,802 | [Docs] Missing documentation for `Dataset.from_dict` | Documentation link: https://huggingface.co/docs/datasets/en/package_reference/main_classes
Link to method (docstring present): https://github.com/huggingface/datasets/blob/6f2502c5a026caa89839713f6f7c8b958e5e83eb/src/datasets/arrow_dataset.py#L1029
The docstring is present for the function, but seems missing from the... | https://github.com/huggingface/datasets/issues/7802 | open | [] | 2025-10-09T02:54:41Z | 2025-10-19T16:09:33Z | 2 | aaronshenhao |
pytorch/pytorch | 164,971 | [dynamo] Keep stack trace where mutations happened | ### 🐛 Describe the bug
This is essential to figure out where we want to use strict-export but there is a side effect, and we want to inform the user about how to rewrite their code to remove the side-effect.
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Gu... | https://github.com/pytorch/pytorch/issues/164971 | open | [
"triaged",
"oncall: pt2",
"module: dynamo",
"oncall: export"
] | 2025-10-08T18:52:09Z | 2025-10-09T17:23:32Z | 1 | anijain2305 |
pytorch/pytorch | 164,966 | XPU OOM when allocating a tensor according to its reported available memory | ### 🐛 Describe the bug
run below
```
import torch
torch.xpu.empty_cache()
## bring up the context, it may occupy memory
a = torch.rand(5).to("xpu:0")
free_memory_bytes = torch.xpu.mem_get_info("xpu:0")[0]
required_memory_bytes = 5000 * 5000 * (32 // 8)
# Leaving 50 MB of free memory for possible buffers, etc.
n_v... | https://github.com/pytorch/pytorch/issues/164966 | open | [
"module: memory usage",
"triaged",
"module: xpu"
] | 2025-10-08T18:39:18Z | 2025-10-11T01:40:46Z | 3 | yao-matrix |
pytorch/pytorch | 164,951 | Docker checkouts take 30+ min on H100 runners | ### 🐛 Describe the bug
See https://github.com/pytorch/pytorch/actions/runs/18344478781/job/52264153169 for example, where "Pull docker image" takes 37 min!!! Can we cache/slim the docker? Or connect those runners to a more powerful IO system?
### Versions
CI
cc @seemethere @pytorch/pytorch-dev-infra | https://github.com/pytorch/pytorch/issues/164951 | open | [
"module: ci",
"triaged"
] | 2025-10-08T17:12:15Z | 2025-10-08T17:12:25Z | 0 | malfet |
pytorch/pytorch | 164,922 | `torch.compile` fails to trace `datetime.now()` with Dynamo guard check failure | ### 🐛 Describe the bug
When compiling a model that uses the `datetime.now()` function, `torch.compile` fails with a Dynamo guard check error. The warning message explicitly identifies this as a Python builtin that Dynamo cannot trace, and suggests filing an issue to add support.
```python
import torch
from datetime impor... | https://github.com/pytorch/pytorch/issues/164922 | open | [
"triaged",
"function request",
"oncall: pt2",
"module: dynamo"
] | 2025-10-08T10:19:41Z | 2025-10-14T20:25:33Z | 9 | LiSsHhUuAaIi |
huggingface/transformers | 41,431 | gradient scaling occurs even though total gradient remains < max_grad_norm in trainer.py | Even though gradients remain < max_grad_norm throughout training, the gradient still goes through a scaling process. For instance, I set max_grad_norm = 1, and grad_norm consistently remains <= 0.33. Because the trainer takes you through the grad clip process if max_grad_norm > 0 or not None, this operation always gets... | https://github.com/huggingface/transformers/issues/41431 | closed | [] | 2025-10-07T22:13:08Z | 2025-11-15T08:02:51Z | 7 | lorsonblair |
pytorch/pytorch | 164,878 | Ban and remove plain asserts with no message in our python code | In a similar spirit to https://github.com/pytorch/pytorch/issues/148114
We should remove asserts without any message explaining what is happening.
On top of that, we should move them to proper errors to avoid any issue with python -O.
There are two parts here:
- [x] Enable Ruff lint for this https://docs.astral.sh/ru... | https://github.com/pytorch/pytorch/issues/164878 | open | [
"module: error checking",
"triaged",
"actionable",
"module: python frontend"
] | 2025-10-07T21:36:50Z | 2025-12-16T20:02:43Z | 26 | albanD |
huggingface/candle | 3,120 | AutoModel / PreTrainedModel equivalent magic? | Hello all, first, thanks a lot for this wonderful crate.
I was wondering if it's on the roadmap or if there is a solution to have the same magic as in Python with an `AutoModel.from_pretrained("the_model_name_string")`
As I'm prototyping and am often changing models... which requires changing the architecture everyti... | https://github.com/huggingface/candle/issues/3120 | open | [] | 2025-10-07T21:27:31Z | 2025-10-09T13:02:35Z | 2 | ierezell |
huggingface/lerobot | 2,134 | what is the transformers version for latest lerobot pi0? | ### System Info
```Shell
- lerobot version: 0.3.4
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31
- Python version: 3.10.18
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 1.26.4
- PyTorch version: 2.7.1+cu126
- Is PyTorch built with CUDA support?: True
- Cuda version: 12.6
- GPU... | https://github.com/huggingface/lerobot/issues/2134 | closed | [] | 2025-10-07T12:06:52Z | 2025-11-14T20:04:50Z | null | PuzhenYuan |
pytorch/torchtitan | 1,805 | TP gradient update is wrong during MoE backward | ### Bug description
https://github.com/pytorch/torchtitan/blob/main/torchtitan/experiments/llama4/infra/parallelize.py#L454
TP used DTensor's local tensor by calling to_local(), and the local tensor's gradient cannot be correctly propagated back to the DTensor, because we didn't set grad_placements to tell autograd... | https://github.com/pytorch/torchtitan/issues/1805 | closed | [
"high priority",
"triage review"
] | 2025-10-07T03:43:55Z | 2025-10-15T03:32:04Z | 1 | wwwjn |
pytorch/pytorch | 164,786 | How should we handle PyTorch build flags in torch/headeronly for custom ops? | ### 🐛 Describe the bug
This isn't exactly a bug, per se, but it is misleading. Thanks to @mikaylagawarecki for pointing out the following phenomenon in a parallel file, I'm realizing we have the following behavior in torch/headeronly/util/Half.h today:
Consider the following ifdef
https://github.com/pytorch/pytorch/blob... | https://github.com/pytorch/pytorch/issues/164786 | open | [
"module: build",
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: pt2-dispatcher"
] | 2025-10-06T21:22:09Z | 2025-10-07T15:26:28Z | 1 | janeyx99 |
huggingface/diffusers | 12,441 | Support Wan2.2-Animate | [Wan2.2-Animate-14B](https://humanaigc.github.io/wan-animate), it's a unified model for character animation and replacement, with holistic movement and expression replication.
https://github.com/user-attachments/assets/351227d0-4edc-4f6c-9bf9-053e53f218e4
We would like to open this to the community, if anyone is interested, ... | https://github.com/huggingface/diffusers/issues/12441 | closed | [
"help wanted",
"contributions-welcome"
] | 2025-10-06T18:08:21Z | 2025-11-13T02:52:32Z | 0 | asomoza |
huggingface/lerobot | 2,124 | Question regarding downsampling and resizing dataset | Hi,
Thank you for providing this wonderful library! I was curious about how one can take an existing dataset (collected or downloaded) and modify the fps (downsample), resize images, or delete specific episodes (for v3) prior to policy training. I am finding this tricky to do, particularly when the dataset is not loaded... | https://github.com/huggingface/lerobot/issues/2124 | open | [
"question",
"dataset",
"good first issue"
] | 2025-10-06T16:07:47Z | 2025-10-07T20:25:20Z | null | karthikm-0 |
huggingface/transformers | 41,363 | RT-Detr docs should reflect fixed 640x640 input size | The authors of RT-Detr mention that the model was trained on 640x640 images and was meant to be used for inference on 640x640 images. Also, the current implementation has certain quirks that make training/inferring on images of different sizes problematic. For example, the pixel masks used for batching images of varyin... | https://github.com/huggingface/transformers/issues/41363 | closed | [
"Documentation"
] | 2025-10-06T11:04:37Z | 2025-11-06T13:24:01Z | 4 | konstantinos-p |
pytorch/ao | 3,122 | Access to compact internal representation for `target_dtype=torch.uint4` | Hello, for my use case, I need to access and store the internal representation of 4-bit quantization. This is because I'd like to quantize and write back part of the full buffer. Think about "add some new channels" or "overwrite content of a channel".
I have problems getting to the compressed representation. I wrote t... | https://github.com/pytorch/ao/issues/3122 | open | [
"question",
"triaged"
] | 2025-10-06T11:02:12Z | 2025-10-09T08:29:55Z | null | mseeger |
pytorch/xla | 9,670 | `all_reduce` does not apply `scale` when `xr.world_size == 1` | ## ❓ Questions and Help
Hi, I have noticed that when `world_size == 1`, `all_reduce` is a no-op and does not apply `scale`:
In `torch_xla.core.xla_model` in `def all_reduce`:
```
# No-op if there is only one device
if runtime.world_size() == 1 and not xu.getenv_as('XLA_ALWAYS_ALLREDUCE',
... | https://github.com/pytorch/xla/issues/9670 | open | [
"question",
"distributed"
] | 2025-10-06T04:40:24Z | 2025-10-17T06:31:12Z | null | afzalxo |
pytorch/pytorch | 164,696 | Support torch._inductor.config.inplace_buffers for custom_op whenever possible | ### 🚀 The feature, motivation and pitch
Is it possible to add this support to custom_op?
The user would annotate what buffers can be used for in_place and torch compile should reuse buffers whenever possible (if they are not required by other ops or backward etc).
This is to reduce mem usage.
### Alternatives
_No ... | https://github.com/pytorch/pytorch/issues/164696 | open | [
"triaged",
"module: custom-operators",
"function request",
"oncall: pt2",
"module: inductor",
"module: pt2-dispatcher"
] | 2025-10-05T08:30:21Z | 2025-11-12T20:52:44Z | 6 | mayank31398 |
huggingface/tokenizers | 1,873 | Why is my Python implementation faster than the Rust implementation? | I am comparing the tokenizers in the Python and the Hugging Face (Rust) implementations as follows
```python
import json
import time
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
[... Define and save the texts as data.json]
with open('./data.json', 'w', encoding='utf-8') a... | https://github.com/huggingface/tokenizers/issues/1873 | closed | [] | 2025-10-05T08:02:47Z | 2025-10-08T17:41:28Z | 4 | sambaPython24 |
pytorch/pytorch | 164,662 | Improper batch processing in torch.linalg.eig with cuda | ### 🚀 The feature, motivation and pitch
When calculating large eigenvalues of non-symmetric matrices, I noticed that torch processes the matrices one by one, with only one core getting loaded. The processing time of multiple matrices is more or less similar between a Python loop and a batched execution of linalg.eig.... | https://github.com/pytorch/pytorch/issues/164662 | open | [
"module: cuda",
"triaged",
"module: linear algebra"
] | 2025-10-04T16:38:37Z | 2025-10-07T21:39:03Z | 0 | johannesz-codes |
huggingface/transformers | 41,336 | is there a bug in group_videos_by_shape for qwenvl video preprocessing? | ### System Info
in src/transformers/video_utils.py,
group_videos_by_shape
grouped_videos = {shape: torch.stack(videos, dim=0) for shape, videos in grouped_videos.items()}, where each video is of shape BTCHW. This will create a new dimension.
However, in qwenvl video preprocess
batch_size, grid_t, channel = patches.... | https://github.com/huggingface/transformers/issues/41336 | closed | [
"bug"
] | 2025-10-03T22:26:26Z | 2025-10-03T22:44:43Z | 1 | dichencd |
pytorch/ao | 3,120 | Question: How to implement my quantization algorithm? | The docs mention that one could ask here for help if unsure how to implement a new quantization algorithm with `torchao`, so I'll use that chance.
First, in general, the current situation around pytorch quantization seems a bit unclear to me. As far as I understand:
- there used to be two quantization APIs: "Eager" an... | https://github.com/pytorch/ao/issues/3120 | closed | [] | 2025-10-03T19:18:39Z | 2025-10-04T19:56:39Z | null | jbirnick |
huggingface/lerobot | 2,111 | frame deletion | Great work on this project! I have a quick question - does LeRobotDataset support frame deletion? For example, in the DROID_lerobot dataset, the first few frames have an action value of 0 and I need to remove them.
I'd appreciate any insights you can provide. Thank you for your time and help! | https://github.com/huggingface/lerobot/issues/2111 | closed | [
"question",
"dataset"
] | 2025-10-03T13:05:12Z | 2025-10-10T12:17:53Z | null | Yysrc |
pytorch/pytorch | 164,559 | fwd_rng_state shows up in the aot_export_joint graph input | See https://github.com/pytorch/torchtitan/pull/1794
P1975157784: rank0_autograd_function_0fea2786.py
Setting `torch._functorch.config.graphsafe_rng_functionalization = False` doesn't work.
How to avoid `fwd_rng_state` from showing up?
cc @chauhang @penguinwu @zou3519 @bdhirsh | https://github.com/pytorch/pytorch/issues/164559 | open | [
"triaged",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 2025-10-03T07:28:10Z | 2025-10-06T19:10:58Z | 1 | SherlockNoMad |
pytorch/pytorch | 164,536 | Very confused about conda-forge | ### 🐛 Describe the bug
Is this the cpu or gpu version? https://anaconda.org/conda-forge/pytorch
What is this? https://anaconda.org/pytorch/pytorch-cuda
How should it be used? Is conda no longer a good way to install?
### Versions
Is this the cpu or gpu version? https://anaconda.org/conda-forge/pytorch
What is this?... | https://github.com/pytorch/pytorch/issues/164536 | closed | [] | 2025-10-03T01:25:26Z | 2025-10-03T05:26:01Z | 1 | 7735986 |
pytorch/pytorch | 164,529 | [RFC] Implement shrink_group API to expose ncclCommShrink | ### 🚀 The feature, motivation and pitch
### PyTorch Process Group Shrink API
Authors: @brchang24 @spotluri @bosilca
#### Summary
This document outlines proposed API changes to improve fault tolerance and flexibility in PyTorch Process Groups.
#### Motivation
**Fault Tolerance support**
The API is designed t... | https://github.com/pytorch/pytorch/issues/164529 | closed | [
"oncall: distributed"
] | 2025-10-03T00:26:11Z | 2025-10-17T17:55:06Z | 0 | brchang24 |
huggingface/lerobot | 2,108 | HIL-SERL Transform order for (tanh → rescale) is reversed | In `TanhMultivariateNormalDiag`:
```
transforms = [TanhTransform(cache_size=1)]
if low is not None and high is not None:
transforms.insert(0, RescaleFromTanh(low, high)) # puts Rescale *before* tanh
```
This applies RescaleFromTanh then Tanh, which is backwards. Should we change it to tanh first, then rescale?
... | https://github.com/huggingface/lerobot/issues/2108 | open | [
"question",
"policies"
] | 2025-10-02T21:44:22Z | 2025-10-07T20:36:31Z | null | priest-yang |
pytorch/torchtitan | 1,790 | Distributed training hangs on local error instead of exit | In our model, we have the following code
```python
if x.shape[2:] != y.shape[2:]:
print(f"RANK {torch.distributed.get_rank()}: SPATIAL DIM MISMATCH!")
raise ValueError(f"x.shape[2:] != y.shape[2:], {x.shape[2:]=}, {y.shape[2:]=}")
x = torch.cat([x, y], dim=1)
```
However, if one rank get mismatch error, it can... | https://github.com/pytorch/torchtitan/issues/1790 | closed | [
"question"
] | 2025-10-02T21:18:54Z | 2025-10-03T02:49:24Z | null | yzhao30 |
huggingface/lerobot | 2,107 | Low Success Rate When Training SmolVLA-0.24B on LIBERO | Hi folks, I'm trying to replicate the 0.24B SmolVLA model on the LIBERO dataset. Intuitively, I just changed the base model `vlm_model_name: str = "HuggingFaceTB/SmolVLM2-256M-Video-Instruct"`. Here is the command I used to train.
`lerobot-train --policy.type=smolvla --policy.load_vlm_weights=true --dataset.repo_id=H... | https://github.com/huggingface/lerobot/issues/2107 | open | [
"question",
"policies",
"simulation"
] | 2025-10-02T19:11:55Z | 2025-12-20T09:30:58Z | null | zimgong |
huggingface/optimum-onnx | 66 | How to export a stateless whisper model via optimum-cli? | I observe that when exporting a Whisper model via Python API, the resulting model is stateless, i.e. the decoder is split into two models.
```python
import os
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
ORTModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny", export=True).save_pretrained("./whisper/... | https://github.com/huggingface/optimum-onnx/issues/66 | closed | [
"question"
] | 2025-10-02T09:50:03Z | 2025-10-13T05:33:25Z | null | nikita-savelyevv |
huggingface/lerobot | 2,104 | Select the VLM backbone for SmolVLA | Hi, may I ask about the vlm_model_name: is there any model more powerful than HuggingFaceTB/SmolVLM2-500M-Video-Instruct that can be used to train SmolVLA for the LeRobot SO101? | https://github.com/huggingface/lerobot/issues/2104 | open | [
"question",
"policies",
"good first issue"
] | 2025-10-02T07:35:29Z | 2025-10-11T16:53:59Z | null | Llkhhb |
pytorch/torchtitan | 1,781 | How to add supervised finetuning mask in torchtitan? | How do I implement supervised fine-tuning (SFT) masking in TorchTitan for posttraining using a synthetic dataset? | https://github.com/pytorch/torchtitan/issues/1781 | open | [
"post training"
] | 2025-10-01T23:36:12Z | 2025-12-12T19:37:12Z | null | kailashg26 |
pytorch/pytorch | 164,360 | Would maintainers be open to a contribution that adds lightweight progress bar support (based on tqdm) in torch.utils? | ### 🚀 The feature, motivation and pitch
Feature request:
Add a lightweight progress bar utility (based on tqdm) in torch.utils that users can optionally import to visualize training/validation/test loop progress.
Motivation:
PyTorch core currently does not provide any built-in progress tracking for long-running loop... | https://github.com/pytorch/pytorch/issues/164360 | closed | [
"triaged",
"enhancement"
] | 2025-10-01T15:22:57Z | 2025-10-06T17:16:15Z | 2 | wtfPrethiv |
pytorch/xla | 9,662 | XLA mul with bf16×bf16 upcasts to f32 - op math type and option to disable? | ## ❓ Questions and Help
Hi folks, I have a question about the XLA mul op.
When both inputs are bf16, the generated graph converts to f32, performs the multiply, then converts back to bf16. Two questions:
In this case, is the op math type effectively f32 (not bf16)?
If this upcast exists primarily for TPU accuracy/s... | https://github.com/pytorch/xla/issues/9662 | closed | [
"enhancement",
"tracing"
] | 2025-10-01T14:12:53Z | 2025-10-03T18:22:12Z | 3 | sshonTT |
huggingface/diffusers | 12,415 | SVG 2 kernels | Can we support the new sparse kernels from SVG2 (NeurIPS 2025)?
https://svg-project.github.io/v2/ | https://github.com/huggingface/diffusers/issues/12415 | open | [] | 2025-10-01T10:52:50Z | 2025-10-01T10:52:50Z | 0 | bhack |
pytorch/pytorch | 164,342 | Official support for sm_120 (RTX 50-series / Blackwell) in stable PyTorch builds | ### 🐛 Describe the bug
Hello PyTorch team,
I would like to kindly request official support for sm_120 (RTX 50-series / Blackwell GPUs, e.g. RTX 5070 Ti) in the stable PyTorch builds.
Current situation:
- CUDA 12.8/12.9 already includes support for Blackwell architectures.
- PyTorch nightly builds (e.g., 2.1... | https://github.com/pytorch/pytorch/issues/164342 | open | [
"needs reproduction",
"module: windows",
"module: cuda",
"triaged"
] | 2025-10-01T07:21:36Z | 2025-11-13T00:29:02Z | 14 | endvntgf-design |
huggingface/lerobot | 2,096 | How can I change the task name of already recorded episodes? | I recorded the dataset using:
--dataset.single_task="slice the clay until it becomes 4 pieces"
Now I want to update those recorded episodes to a different task name. How can I do that? | https://github.com/huggingface/lerobot/issues/2096 | open | [
"question",
"dataset",
"good first issue"
] | 2025-10-01T02:15:49Z | 2025-10-30T03:48:47Z | null | pparkgyuhyeon |
huggingface/transformers | 41,235 | I want to request demo code for StatefulDataLoader; I want to use a data checkpoint to recover the training stage's data state, not only the model state. How do I use StatefulDataLoader, or is there some code to achieve this? | I want to request demo code for StatefulDataLoader; I want to use a data checkpoint to recover the training stage's data state, not only the model state. How do I use StatefulDataLoader, or is there some code to achieve this?
Recover data state, not only model state; I hope I have said my request clearly.
How to use accelerate + transforme... | https://github.com/huggingface/transformers/issues/41235 | closed | [
"bug"
] | 2025-09-30T17:07:07Z | 2025-11-08T08:04:40Z | null | ldh127 |
huggingface/accelerate | 3,802 | I want to request demo code for StatefulDataLoader; I want to use a data checkpoint to recover the training stage's data state, not only the model state. How do I use StatefulDataLoader, or is there some code to achieve this? | I want to request demo code for StatefulDataLoader; I want to use a data checkpoint to recover the training stage's data state, not only the model state. How do I use StatefulDataLoader, or is there some code to achieve this?
Recover data state, not only model state; I hope I have said my request clearly.
How to use accelerate + transfor... | https://github.com/huggingface/accelerate/issues/3802 | closed | [
pytorch/pytorch | 164,247 | Dynamo graph break on flex attention code | ### 🐛 Describe the bug
```python
import torch
import torch.nn as nn
from torch.nn.attention.flex_attention import create_block_mask, flex_attention
class MixedFakeModeModel(nn.Module):
def __init__(self, dim=64):
super().__init__()
self.dim = dim
self.lin = torch.nn.Linear(64, 64)
d... | https://github.com/pytorch/pytorch/issues/164247 | closed | [
"high priority",
"triaged",
"oncall: pt2",
"module: dynamo",
"module: graph breaks",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 2025-09-30T15:16:18Z | 2025-10-17T17:44:48Z | 7 | tugsbayasgalan |
pytorch/torchtitan | 1,773 | Unreachable code in `CheckpointManager` | Hi! I've noticed that `def maybe_wait_for_staging` basically never does anything as `self.staging` is set to `False` in `__init__` and never modified. Is there something wrong or is this code never supposed to run?
https://github.com/pytorch/torchtitan/blob/a3104201ba3a0fa19e9c3cc5ba748b0398551410/torchtitan/component... | https://github.com/pytorch/torchtitan/issues/1773 | closed | [] | 2025-09-30T13:59:22Z | 2025-10-02T16:43:43Z | 3 | antony-frolov |
huggingface/transformers | 41,211 | Add DEIMv2 | ### Model description
It would be nice to integrate DEIMv2, a new state-of-the-art model for real-time object detection based on DINOv3. The weights are released under Apache 2.0.
Related thread: https://github.com/Intellindust-AI-Lab/DEIMv2/issues/20
### Open source status
- [x] The model implementation is availab... | https://github.com/huggingface/transformers/issues/41211 | open | [
"New model"
] | 2025-09-30T09:43:07Z | 2025-10-04T18:44:06Z | 4 | NielsRogge |
pytorch/torchtitan | 1,771 | Posttraining Library | # Posttraining Library Support
## Summary
I understand that torchtune is being phased out and the team announced in July 2025 that they are developing a new product in a new repo for end-to-end post-training with scale. It's now been several months since that announcement. Could you share an update on when this new li... | https://github.com/pytorch/torchtitan/issues/1771 | open | [
"post training"
] | 2025-09-30T09:42:49Z | 2025-10-24T07:58:26Z | 2 | MarkLiLabs |
huggingface/transformers | 41,208 | Integrate mamba SSM kernels from the hub | ### Feature request
Currently, mamba kernels are imported via the main source package, e.g. for [GraniteMoeHybrid](https://github.com/huggingface/transformers/blob/main/src/transformers/models/granitemoehybrid/modeling_granitemoehybrid.py#L44-L46)
Can we migrate this to use the kernels-hub (`kernels-community/mamba-ssm... | https://github.com/huggingface/transformers/issues/41208 | closed | [
"Feature request"
] | 2025-09-30T07:50:52Z | 2025-12-18T10:17:06Z | 15 | romitjain |
huggingface/tokenizers | 1,870 | How can I convert a trained tokenizer into `transformers` format | Hi guys,
I have trained a tokenizer which works pretty well and it is stored in a single `.json` file. Is there any method / API to convert it into a `transformers` tokenizer format?
If there's no such implementation I am happy to contribute. | https://github.com/huggingface/tokenizers/issues/1870 | closed | [] | 2025-09-30T06:09:52Z | 2025-09-30T13:53:53Z | 1 | dibbla |
huggingface/lighteval | 999 | How to print all pass@k scores when generating 16 samples? | Hi,
I want to print all results of pass@k metrics when generating 16 samples. (e.g., k=1, 2, 4, 8, 16)
```python
math_500_pass_k_at_16 = LightevalTaskConfig(
name="math_500_pass_k_at_16",
suite=["custom"],
prompt_function=math_500_prompt_fn,
hf_repo="HuggingFaceH4/MATH-500",
hf_subset="default",
... | https://github.com/huggingface/lighteval/issues/999 | open | [] | 2025-09-29T21:49:44Z | 2025-10-14T08:04:17Z | null | passing2961 |
pytorch/pytorch | 164,145 | Improvements to profiler for bitwise equivalence use case | ### 🐛 Describe the bug
Suppose that you want to verify that eager and aot_eager are numerically equivalent. The profiler can be a good tool for determining why there is a small numerical difference, as one might reasonably expect to get exactly the same kernels between the two. However, the profiler has obviously not... | https://github.com/pytorch/pytorch/issues/164145 | open | [
"oncall: profiler"
] | 2025-09-29T15:30:14Z | 2025-10-26T03:18:33Z | 2 | ezyang |
pytorch/pytorch | 164,133 | Use libtorch to export ONNX | ### 🚀 The feature, motivation and pitch
How to export ONNX using libtorch after training a model with libtorch?
### Alternatives
_No response_
### Additional context
_No response_ | https://github.com/pytorch/pytorch/issues/164133 | closed | [] | 2025-09-29T13:55:19Z | 2025-09-29T14:43:24Z | 1 | yongxin3344520 |
pytorch/pytorch | 164,124 | torch.compile compiles multiple Triton autotune kernels, but uses the wrong ones | ### 🐛 Describe the bug
When torch.compile autotunes a Triton kernel multiple times for different shapes, it uses the wrong kernel afterwards. Interestingly, this only happens when no torchinductor-cache files exist. On the next run of the same program, it uses the correct kernels!
Here are the details:
I have adapted y... | https://github.com/pytorch/pytorch/issues/164124 | open | [
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: user triton"
] | 2025-09-29T10:19:35Z | 2025-09-29T16:50:17Z | 3 | dxqb |
huggingface/lerobot | 2,083 | How to train this RL model with my trained data | I want this model to load the trained model that I have already generated. So, I modified the output_dir and set resume to true, but then the problem shown in the figure occurred. How can I solve it?
`{ "output_dir": "outputs/train/2025-09-28/17-28-55_default",
"job_name": "default", "resume": true,
"seed": 1000, "nu... | https://github.com/huggingface/lerobot/issues/2083 | open | [] | 2025-09-29T07:22:08Z | 2025-10-07T20:32:04Z | null | 993984583 |
huggingface/lerobot | 2,082 | How to train this RL model with my model data | I want this model to load the trained model that I have already generated. So, I modified the output_dir and set resume to true, but then the problem shown in the figure occurred. How can I solve it?
`{
"output_dir": "outputs/train/2025-09-28/17-28-55_default",
"job_name": "default",
"resume": true,
"se... | https://github.com/huggingface/lerobot/issues/2082 | closed | [] | 2025-09-29T07:18:52Z | 2025-10-07T20:33:11Z | null | 993984583 |
pytorch/pytorch | 164,094 | Failed to change backward stream | in [PyTorch CUDA semantics](https://docs.pytorch.org/docs/stable/notes/cuda.html#stream-semantics-of-backward-passes)
> Each backward CUDA op runs on the same stream that was used for its corresponding forward op. If your forward pass runs independent ops in parallel on different streams, this helps the backward pass e... | https://github.com/pytorch/pytorch/issues/164094 | closed | [
"module: autograd",
"triaged"
] | 2025-09-29T02:14:19Z | 2025-10-05T23:40:49Z | 16 | shadow150519 |
pytorch/pytorch | 164,074 | When will the version for ROCm 7 be released? | ### 🚀 The feature, motivation and pitch
The homepage still shows version 6.4.
### Alternatives
_No response_
### Additional context
_No response_
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | https://github.com/pytorch/pytorch/issues/164074 | closed | [
"module: rocm",
"triaged"
] | 2025-09-28T16:27:21Z | 2025-09-30T00:40:11Z | 3 | mihongyu |
huggingface/sentence-transformers | 3,532 | What is the proper way to use prompts? Do we have to format/render them ourselves? | Hi. First time using the Sentence Transformers library and I had a question regarding using prompts. Specifically, it seems like the [`SentenceTransformer.encode_document`](https://sbert.net/docs/package_reference/sentence_transformer/SentenceTransformer.html#sentence_transformers.SentenceTransformer.encode_document) m... | https://github.com/huggingface/sentence-transformers/issues/3532 | closed | [] | 2025-09-28T06:32:51Z | 2025-09-30T10:59:24Z | null | seanswyi |
pytorch/pytorch | 164,061 | GPU Memory Leak due to distributions | I am using the [MixStyle](https://arxiv.org/abs/2104.02008) methodology for domain adaptation and it involves using a custom layer which is inserted after every encoder stage. However, it is causing VRAM to grow linearly, which causes OOM error. No memory leak occurs on disabling the layer. Any idea on why this is happ... | https://github.com/pytorch/pytorch/issues/164061 | open | [
"module: distributions",
"triaged"
] | 2025-09-28T05:08:15Z | 2025-09-29T14:54:42Z | 1 | vedantdalimkar |
huggingface/transformers | 41,186 | Qwen2.5-VL restore tensor multi-image form |
Hello, I have recently been experimenting with qwen2.5-vl (https://github.com/huggingface/transformers/blob/v4.52-release/src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py). I noticed that multiple images are pre-merged here,
```
image_embeds = self.get_image_features(pixel_values, image_grid_thw)
```
but I w... | https://github.com/huggingface/transformers/issues/41186 | closed | [] | 2025-09-28T03:36:24Z | 2025-11-05T08:02:55Z | 2 | NiFangBaAGe |
huggingface/peft | 2,802 | Guide on training that requires both LoRA and base model forward calls? | Hi, I'm working on some training variants that require hidden states from the base model and the hidden states produced with LoRA. I'm currently initializing two separate model objects:
```
from peft import get_peft_model
m1=AutoModelForCausalLM.from_pretrained(model_path)
m2=AutoModelForCausalL... | https://github.com/huggingface/peft/issues/2802 | closed | [] | 2025-09-27T23:12:23Z | 2025-10-15T10:26:15Z | 3 | thangld201 |
huggingface/lerobot | 2,072 | How to run lerobot with RTX 5090? If not possible, please add support | ### System Info
```Shell
- lerobot version: 0.3.4
- Platform: Linux-6.14.0-32-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface Hub version: 0.35.1
- Datasets version: 4.1.1
- Numpy version: 2.2.6
- PyTorch version: 2.8.0+cu128
- Is PyTorch built with CUDA support?: True
- Cuda version: 12.8
- GPU m... | https://github.com/huggingface/lerobot/issues/2072 | closed | [] | 2025-09-27T19:52:42Z | 2025-11-08T07:53:00Z | null | cijerezg |
huggingface/text-generation-inference | 3,333 | How to use prefix caching | Hi
I can't find a way to turn on the prefix caching
When I run any model, I always get:
Using prefix caching = False
Thanks a lot | https://github.com/huggingface/text-generation-inference/issues/3333 | open | [] | 2025-09-27T14:14:37Z | 2025-09-29T11:52:48Z | null | Noha-Magdy |
huggingface/smol-course | 259 | [QUESTION] Is this a bug in smollmv3's chat template? |
Hi
I am reading this
https://huggingface.co/learn/smol-course/unit1/2#chat-templates-with-tools
I feel like there is a bug in `HuggingFaceTB/SmolLM3-3B`'s chat template
from the example
```
# Conversation with tool usage
messages = [
{"role": "system", "content": "You are a helpful assistant with access to ... | https://github.com/huggingface/smol-course/issues/259 | closed | [
"question"
] | 2025-09-27T10:19:37Z | 2025-11-24T18:40:09Z | null | Nevermetyou65 |
pytorch/pytorch | 163,982 | Need to update Magma version in PyTorch | ### 🐛 Describe the bug
Need to look into updating Magma for PyTorch CUDA builds.
Need to understand what the perf increase is.
Do we need MAGMA at all?
### Versions
2.10.0
cc @ptrblck @msaroufim @eqy @jerryzh168 | https://github.com/pytorch/pytorch/issues/163982 | open | [
"module: cuda",
"triaged"
] | 2025-09-26T19:21:26Z | 2025-09-26T19:23:09Z | 0 | atalman |
huggingface/accelerate | 3,797 | Question: ReduceLROnPlateau wrapped by AcceleratedScheduler in DDP may multiply LR by num_processes? | Hi,
I'm using ReduceLROnPlateau wrapped by AcceleratedScheduler in a multi-GPU / DDP setup (num_processes=8).
My main process calls:
```
lr_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
optimizer, mode="min", factor=self.hyper_params['lr_decay_factor'], patience=self.hyper_params['lr_reduce_patient']
)
... | https://github.com/huggingface/accelerate/issues/3797 | closed | [] | 2025-09-26T10:02:20Z | 2025-11-03T15:08:09Z | 1 | nicelulu |
pytorch/pytorch | 163,946 | ModuleNotFoundError: No module named 'importlib_metadata' | ### 🐛 Describe the bug
I encountered this error when I used torchrun.
Traceback (most recent call last):
File "xxx/bin/torchrun", line 5, in <module>
from torch.distributed.run import main
File "xxx/lib/python3.9/site-packages/torch/distributed/run.py", line 381, in <module>
from torch.distributed.elasti... | https://github.com/pytorch/pytorch/issues/163946 | closed | [
"needs reproduction",
"oncall: distributed"
] | 2025-09-26T08:26:50Z | 2025-11-06T07:20:57Z | 6 | yunyiyun |
huggingface/lerobot | 2,050 | I wonder how to use RL on so101 within sim environment? | https://github.com/huggingface/lerobot/issues/2050 | closed | [
"question",
"simulation",
"good first issue"
] | 2025-09-26T06:52:38Z | 2025-10-08T18:04:44Z | null | Temmp1e | |
huggingface/lerobot | 2,045 | I would appreciate it if you could explain how to train the slicing clay model | I am planning to conduct a clay-cutting task using pi0. Since this type of task is not typically included among pi0βs foundation model tasks, I would like to inquire how many episodes (and the approximate duration of each) would generally be required for such a custom task.
The task I have in mind involves cutting cla... | https://github.com/huggingface/lerobot/issues/2045 | open | [] | 2025-09-26T00:51:59Z | 2025-09-26T00:51:59Z | null | pparkgyuhyeon |
pytorch/pytorch | 163,900 | [Maintenance] MacOS runners update |
## Current Status
*ongoing*.
## Error looks like
MacOS jobs might fail with infra errors
## Incident timeline (all times pacific)
*Include when the incident began, when it was detected, mitigated, root caused, and finally closed.*
## User impact
*How does this affect users of PyTorch CI?*
## Root cause
*What was t... | https://github.com/pytorch/pytorch/issues/163900 | closed | [
"ci: sev"
] | 2025-09-25T22:30:08Z | 2025-09-26T11:27:33Z | 3 | malfet |
pytorch/torchx | 1,130 | The hosted doc server is not working | ## 📚 Documentation
## Link
We are now redirected from https://docs.pytorch.org/torchx/main/quickstart.html to https://meta-pytorch.org/torchxmain/quickstart.html
## What does it currently say?
```
404
File not found
The site configured at this address does not contain the requested file.
If this is your site, mak... | https://github.com/meta-pytorch/torchx/issues/1130 | closed | [] | 2025-09-25T16:58:45Z | 2025-09-25T20:14:43Z | 2 | clumsy |
huggingface/lerobot | 2,042 | Question: How to train to get Task Recovery behavior? | We would need the robot to be able to detect a failure (like dropping an object) and attempt to correct it to continue with the task.
What would the training data look like for this?
Thanks | https://github.com/huggingface/lerobot/issues/2042 | open | [] | 2025-09-25T15:52:55Z | 2025-09-25T15:52:55Z | null | raul-machine-learning |
huggingface/accelerate | 3,794 | Error when evaluating with multi-gpu | I met a problem when evaluating Llada-8B with multi-GPU (**Nvidia V100**) using accelerate + lm_eval. An error occurs when **num_processes>1**,
but there is no problem with a single GPU; all the other configs are the same.
How can I solve this problem?
I use this command to evaluate
accelerate launch --config_file config... | https://github.com/huggingface/accelerate/issues/3794 | closed | [] | 2025-09-25T14:42:29Z | 2025-11-03T15:08:12Z | 1 | adfad1 |
huggingface/text-embeddings-inference | 728 | Compile error in multiple environments for CPU backend | ### System Info
TEI source code:
- Latest main branch(0c1009bfc49b759fe75eed4fd377b4fbad534ad5);
- Latest release `v1.8.2`;
- Release `v1.8.1`
Tested platform:
- Win: AMD 7950X+Windows 10 x64 Version 10.0.19045.6332;
- WSL2: AMD 7950X+Debian 13 on wsl2 (Linux DESKTOP 5.15.167.4-microsoft-standard-WSL2 # 1 SMP ... | https://github.com/huggingface/text-embeddings-inference/issues/728 | open | [
"documentation",
"question"
] | 2025-09-25T11:52:16Z | 2025-11-18T14:49:01Z | null | nkh0472 |
huggingface/transformers | 41,141 | Need a concise example of Tensor Parallelism (TP) training using Trainer/SFTTrainer. | ### Feature request
I have checked the code and there are a few places that talk about TP. I saw that the from_pretrained method for the model contains tp_plan and device_mesh. I also checked that the TrainingArgument can take parallelism_config, which defines the TP/CP plan along with FSDP. However, I am not able to successfully st... | https://github.com/huggingface/transformers/issues/41141 | open | [
"Documentation",
"Feature request",
"Tensor Parallel"
] | 2025-09-25T03:01:02Z | 2026-01-04T14:05:36Z | 10 | meet-minimalist |
pytorch/pytorch | 163,801 | [CUDA][Triton][PTXAS] Triton Wheel Missing CUDA13 PTXAS - Breakage exists for the environment where CTK is not present | ### 🐛 Describe the bug
By default triton release/3.5x ships a PTXAS version that is based on CUDA12.8.
**In environments where the latest CTK is NOT installed**
Compared to PTXAS from CUDA 13.0, the CUDA 12.8 ptxas is not capable of handling the THOR device (which underwent a renaming, see https://github.com/llvm/llvm-proje... | https://github.com/pytorch/pytorch/issues/163801 | closed | [
"module: binaries",
"triaged",
"module: third_party",
"has workaround",
"dependencies"
] | 2025-09-24T22:21:24Z | 2025-09-30T01:56:15Z | null | nWEIdia |
huggingface/lerobot | 2,034 | dataset v2.1 and groot n1.5 | For now, groot does not support dataset v3.0 for fine-tuning? In that case, should we continue using v2.1? And if we already collected data with v3, how can we convert it back to v2.1? | https://github.com/huggingface/lerobot/issues/2034 | open | [
"question",
"policies",
"dataset"
] | 2025-09-24T21:12:26Z | 2025-12-24T00:05:45Z | null | zujian-y |
pytorch/pytorch | 163,789 | [docs] instructions to locally build docs are underspecified | *Note: moving the dependency conflict discussion to #164010.*
### 📚 The doc issue
Docstring changes I made in #163120 caused the `linux-jammy-py3_10-gcc11-build` `docs_test` CI to fail. To debug this I had to build the docs locally, and ran into some rough edges:
1. There are small discrepancies between the instruc... | https://github.com/pytorch/pytorch/issues/163789 | open | [
"module: docs",
"triaged",
"actionable"
] | 2025-09-24T20:24:43Z | 2025-09-26T22:34:16Z | 2 | filipviz |
pytorch/pytorch | 163,785 | Revisit guarding on unbacked inputs! | We now generate guards on unbacked inputs; those are interesting:
- some we do not need at all because they are side effects of torch.check calls
- some are actually needed (striding properties that we did assert on); shall we make them runtime assertions?
There are some examples in the tests [here](https://github.co... | https://github.com/pytorch/pytorch/issues/163785 | open | [
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 2025-09-24T19:17:35Z | 2025-10-29T22:58:35Z | 2 | laithsakka |
huggingface/tokenizers | 1,868 | How to set the cache_dir in the Rust implementation? | Hey, thank you for your great work with these tokenizers.
When I use the tokenizers through the Python API via transformers, I can set a specific cache_dir like this
```
from transformers import AutoTokenizer
self.tokenizer = AutoTokenizer.from_pretrained(self.tokenizer_name,cache_dir = self.cache_dir)
```
How can ... | https://github.com/huggingface/tokenizers/issues/1868 | open | [] | 2025-09-24T18:50:38Z | 2025-10-06T04:25:46Z | null | sambaPython24 |
huggingface/diffusers | 12,386 | Implement missing features on ModularPipeline | as i'm looking to take advantage of new `ModularPipeline` ask is to implement some currently missing features
my use case is to convert existing loaded model using standard pipeline into modular pipeline. that functionality was provided via #11915 and is now working.
first minor obstacle is that modular pipeline does... | https://github.com/huggingface/diffusers/issues/12386 | open | [
"roadmap"
] | 2025-09-24T15:49:23Z | 2025-09-29T05:46:29Z | 0 | vladmandic |
pytorch/pytorch | 163,761 | Does device mesh of (N,1) cause all_gather communication in HSDP of FSDP2? | In HSDP of FSDP2, let's say I have N GPUs. If the shape of the device mesh is (N,1) (similar to DDP), will all_gather communication still happen in forward/backward? Or is this device mesh shape illegitimate?
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @ezyang @msaroufim @dcci @chauhang @p... | https://github.com/pytorch/pytorch/issues/163761 | open | [
"oncall: distributed"
] | 2025-09-24T13:51:27Z | 2025-09-25T18:59:28Z | 1 | EquationWalker |
pytorch/pytorch | 163,753 | Invalid __shared__ read of size 16 bytes in torch.conv_transpose3d | ### 🐛 Describe the bug
When using `torch.nn.ConvTranspose3d` with certain parameters, a CUDA `__shared__` memory read out-of-bounds error occurs.
```python
import torch
import torch.nn as nn
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
def main():
if not torch.cuda.is_available() or not torch.backends.c... | https://github.com/pytorch/pytorch/issues/163753 | closed | [] | 2025-09-24T11:33:17Z | 2025-09-26T01:22:04Z | 4 | supermarkli |
pytorch/torchtitan | 1,750 | Inconsistent loss between different TP | ### Bug description
I have encountered inconsistent losses between different TP degrees on both llama3 and llama4 MoE models.
The toml configs are exactly the same except for the different tensor parallel degrees.
The seed is set and deterministic is turned on.
tensorboard:
## llama4:
gradnorm:
<img width="1278" height="460" alt="I... | https://github.com/pytorch/torchtitan/issues/1750 | open | [
"question"
] | 2025-09-24T03:11:22Z | 2025-10-02T00:25:43Z | null | weixuansun |
huggingface/candle | 3,096 | [Question] Minimal documentation/example on including weights in compiled executable | Just what the title says: Is there a minimal code example on including weights in the compiled executable using include_bytes. Nervous to implement this without understanding best practices and end up with a suboptimal solution. | https://github.com/huggingface/candle/issues/3096 | closed | [] | 2025-09-24T02:47:28Z | 2025-10-07T04:49:26Z | 1 | bitanath |
pytorch/torchtitan | 1,749 | What is the benefit of using torchrun instead of python directly with slurm and other launchers? | Is there any difference between the following two commands?
srun torchrun --nnodes 4 --nproc_per_node 8 --rdzv_endpoint "$head_node_ip:29500" -m torchtitan.train ...
MASTER_ADDR= ip-adress MASTER_PORT=port-number srun --nodes=4 --ntasks-per-node=8 python -m torchtitan.train | https://github.com/pytorch/torchtitan/issues/1749 | open | [] | 2025-09-23T23:35:08Z | 2025-09-26T18:05:51Z | null | githubsgi |
pytorch/pytorch | 163,699 | Should we mark `TestExportOpInfo.test_fake_export` tests as distributed? | ### 🐛 Describe the bug
`TestExportOpInfo.test_fake_export` calls `_test_export_helper`
https://github.com/pytorch/pytorch/blob/8c8416b021e59a5ec58aceb38eeffc63885a28bc/test/export/test_export_opinfo.py#L125-L133
which sends tensor to `cuda:1`
https://github.com/pytorch/pytorch/blob/8c8416b021e59a5ec58aceb38eeffc6... | https://github.com/pytorch/pytorch/issues/163699 | closed | [
"module: tests",
"oncall: pt2",
"oncall: export"
] | 2025-09-23T22:12:42Z | 2025-09-30T16:12:42Z | 2 | xwang233 |
pytorch/pytorch | 163,690 | Recomputed values for the following tensors have different metadata than during the forward pass. | ### 🐛 Describe the bug
Hi, I have a model with linear layers that I wrap with LoRA layers, applied as follows:
```
(attn): Attention(
(q_proj): LoRALinear(
(original_layer): Linear(in_features=4096, out_features=4096, bias=False)
(dropout): Identity()
)
(k_pro... | https://github.com/pytorch/pytorch/issues/163690 | closed | [
"needs reproduction",
"module: activation checkpointing",
"triaged"
] | 2025-09-23T21:21:49Z | 2025-09-24T01:04:09Z | 3 | asahni-sc |
pytorch/pytorch | 163,688 | [torch.distributed.pipelining] Gradients are None in first training step with ScheduleGPipe | ## Bug Description
When using `torch.distributed.pipelining` with `ScheduleGPipe`, gradients are unexpectedly `None` for parameters _in the first training step only_, and appear correctly in subsequent steps. This occurs despite the forward pass completing and losses being computed.
This is leading to a significant diverg... | https://github.com/pytorch/pytorch/issues/163688 | open | [
"oncall: distributed",
"has workaround",
"module: amp (automated mixed precision)",
"module: pipelining"
] | 2025-09-23T21:03:37Z | 2025-09-26T14:36:14Z | 2 | tplr-y |
pytorch/pytorch | 163,684 | PyTorch 2.8 + CUDA 12.8 fails to initialize on RTX 5090 (WinError 1114) | ### 🐛 Describe the bug
Summary
Attempting to run a source-built PyTorch 2.8.0 against CUDA 12.8 with explicit sm_120 flags on RTX 5090 results in a DLL initialization failure:
Code
OSError: [WinError 1114] A dynamic link library (DLL) initialization routine failed.
Error loading "torch_cpu.dll" or one of its depende... | https://github.com/pytorch/pytorch/issues/163684 | closed | [] | 2025-09-23T20:31:52Z | 2025-09-23T22:16:59Z | 2 | tsondo |
huggingface/optimum-executorch | 149 | Add documentation for how to run each type of exported model on ExecuTorch | Blocked on runner / multimodal runner work in ExecuTorch | https://github.com/huggingface/optimum-executorch/issues/149 | open | [] | 2025-09-23T18:53:55Z | 2025-09-23T18:54:00Z | null | jackzhxng |
pytorch/pytorch | 163,664 | [BE] Add Linux aarch64 CUDA install and test to validation framework | ### 🐛 Describe the bug
Currently https://github.com/pytorch/test-infra/blob/main/.github/workflows/validate-aarch64-linux-binaries.yml only validates Linux aarch64 CPU builds.
These workflows are launched via validate-binaries. Here is an example run: https://github.com/pytorch/test-infra/actions/runs/17628169416
... | https://github.com/pytorch/pytorch/issues/163664 | closed | [
"module: binaries",
"module: cuda",
"triaged",
"better-engineering",
"topic: binaries"
] | 2025-09-23T17:00:27Z | 2025-10-01T14:19:45Z | 0 | atalman |
pytorch/pytorch | 163,659 | Allow double in native_functions.yaml as a schema type | ### 🚀 The feature, motivation and pitch
Today, our schemas say "float" but that is a lie!! Internally we pass around doubles. I'm okay with this though.
My ask: can we allow schemas to say "double", so for user custom ops they can put "double" in the schema and double in their custom kernels and be less confused?
T... | https://github.com/pytorch/pytorch/issues/163659 | open | [
"module: cpp-extensions",
"triaged",
"module: dispatch",
"module: library",
"oncall: pt2",
"module: pt2-dispatcher"
] | 2025-09-23T16:27:39Z | 2025-09-24T18:45:19Z | 2 | janeyx99 |
huggingface/safetensors | 653 | `get_slice` is slow because it uses `tensors()` method instead of `info()` | ### Feature request
Replace
```rust
self.metadata.tensors().get(name)
```
with
```rust
self.metadata.info(name)
```
in `get_slice` method
### Motivation
I noticed that the `get_slice` method of `Open` [does](https://github.com/huggingface/safetensors/blob/0816a1ae1d6b731cefd67f061d80d1cadd0dd7bb/bindings/python/src... | https://github.com/huggingface/safetensors/issues/653 | closed | [] | 2025-09-23T15:09:51Z | 2025-09-28T16:42:45Z | 1 | PgLoLo |
huggingface/diffusers | 12,375 | What kernels should we integrate in Diffusers? | Now that we have an [integration](https://github.com/huggingface/diffusers/pull/12236) with the `kernels` lib to use Flash Attention 3 (FA3), it'd be nice to gather community interest about which kernels we should try to incorporate in the library through the [`kernels` lib](https://github.com/huggingface/kernels/). FA... | https://github.com/huggingface/diffusers/issues/12375 | open | [
"performance"
] | 2025-09-23T09:03:13Z | 2025-09-30T06:56:39Z | 8 | sayakpaul |
pytorch/pytorch | 163,624 | [aoti] [xpu] [null-pointer-dereference] potential null-pointer issue in `sycl_runtime_wrappers.h` | ### 🐛 Describe the bug
Code below in `sycl_runtime_wrappers.h` uses malloc to allocate the memory.
https://github.com/pytorch/pytorch/blob/5d749ceb92c2c28bcfbdf918b4ab99b1a91fcb50/torch/csrc/inductor/aoti_runtime/sycl_runtime_wrappers.h#L45-L58
However, there is a potential risk that the memory allocation fails. The... | https://github.com/pytorch/pytorch/issues/163624 | open | [
"triaged",
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 2025-09-23T08:24:03Z | 2025-09-23T16:02:18Z | 4 | shaoyuyoung |
huggingface/peft | 2,798 | Add stricter type checking in LoraConfig for support with HfArgumentParser | ### System Info
System Info
transformers version: 4.57.0.dev0
Platform: Linux-5.14.0-284.73.1.el9_2.x86_64-x86_64-with-glibc2.39
Python version: 3.12.3
Huggingface_hub version: 0.34.4
Safetensors version: 0.5.2
Accelerate version: 1.10.1
Accelerate config: not found
DeepSpeed version: not installed
PyTorch version (ac... | https://github.com/huggingface/peft/issues/2798 | closed | [] | 2025-09-23T05:19:34Z | 2025-09-23T12:37:47Z | 3 | romitjain |
pytorch/pytorch | 163,576 | GPU Performance in Modern Computing | ### Release highlight for proposed Feature
Could you please review the PyTorch library and determine if performance evaluation tests would be helpful? https://github.com/pytorch/pytorch/pull/162107
GPU Performance in Modern Computing
In the realm of artificial intelligence and supercomputing, GPUs play a pivotal rol... | https://github.com/pytorch/pytorch/issues/163576 | closed | [
"triaged"
] | 2025-09-22T22:21:49Z | 2025-09-29T17:16:02Z | 7 | alpha-investor |
pytorch/torchtitan | 1,735 | For mixed-precision training, does FSDP2 also need `amp.grad_scaler.GradScaler`? Or is FSDP2 already handled? | In mixed-precision training with DDP, `amp.grad_scaler.GradScaler` is needed to dynamically scale the loss. I see that torchtitan does not use it to scale the loss in FSDP2, so my question is: does FSDP2 also need `amp.grad_scaler.GradScaler`, or is this already handled?
"question"
] | 2025-09-22T15:05:37Z | 2025-09-24T20:12:20Z | null | EquationWalker |