| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/lerobot | 2,259 | Clarifications on fine-tuning on different envs and embodiments | Hi everyone,
I’m currently working on fine-tuning SmolVLA and π₀ using **[RLBench](https://github.com/stepjam/RLBench)**. The robot setup is a Franka Emika Panda (7DoF + gripper), and I’ve already collected custom LeRobot datasets for a pick-and-place task ([available on my Hugging Face](https://huggingface.co/RonPlusS... | https://github.com/huggingface/lerobot/issues/2259 | open | [
"question",
"policies",
"simulation"
] | 2025-10-20T13:24:22Z | 2025-12-23T10:37:31Z | null | RonPlusSign |
pytorch/pytorch | 165,902 | torchcodec in pytorch url | ### 🚀 The feature, motivation and pitch
Is it possible to have torchcodec in the PyTorch wheel index URL?
pip3 install torch torchvision torchaudio torchcodec --index-url https://download.pytorch.org/whl/cu130
### Alternatives
_No response_
### Additional context
_No response_
cc @seemethere @malfet @atalman | https://github.com/pytorch/pytorch/issues/165902 | open | [
"module: binaries",
"triaged"
] | 2025-10-20T12:11:01Z | 2025-10-20T14:27:16Z | 0 | johnnynunez |
pytorch/pytorch | 165,900 | Converting weights `.pt` content between `dict` and `RecursiveScriptModule` | When using PyTorch inside Isaac Lab to train RL policies, the program saves the weights `.pt` file as a Python dict (policy, value, and optimizer keys). It can then be loaded with the `torch.load` function.
However, Isaac Sim's policy loader expects a `torch.jit._script.RecursiveScriptModule` object to be loaded with `torc... | https://github.com/pytorch/pytorch/issues/165900 | open | [
"oncall: jit"
] | 2025-10-20T11:25:01Z | 2025-10-20T14:27:26Z | 0 | PsorTheDoctor |
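The conversion asked about above can be done by rebuilding the policy as an `nn.Module`, loading the saved state dict into it, and scripting the result. A minimal sketch, assuming the checkpoint is a plain dict with a `policy` key holding a state dict; the small MLP is a hypothetical stand-in for the real policy network:
```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the policy network saved in the checkpoint.
class Policy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

policy = Policy()
ckpt = {"policy": policy.state_dict()}  # stands in for torch.load("weights.pt")
policy.load_state_dict(ckpt["policy"])

# torch.jit.script returns a torch.jit._script.RecursiveScriptModule,
# the type Isaac Sim's loader reportedly expects.
scripted = torch.jit.script(policy)
torch.jit.save(scripted, "policy_jit.pt")
```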
vllm-project/vllm | 27,184 | [Doc]: Multi-Modal Benchmark is too simple | ### 📚 The doc issue
The latest doc about the Multi-Modal Benchmark shows:
- 1. download sharegpt4v_instruct_gpt4-vision_cap100k.json and COCO's 2017 Train images
- 2. vllm serve and vllm bench serve
But there are many more details to take care of:
- 1. delete all JSON entries that are not COCO's from sharegpt4v_instruct_gpt4-vision_cap100k... | https://github.com/vllm-project/vllm/issues/27184 | open | [
"documentation"
] | 2025-10-20T06:24:18Z | 2025-10-20T16:44:17Z | 2 | BigFaceBoy |
vllm-project/vllm | 27,182 | [Feature]: INT8 Support in Blackwell Arch | ### 🚀 The feature, motivation and pitch
hello, I want to use w8a8 (int8) on Blackwell GPUs, and when I read the source code, it says int8 is not supported by sm120. According to the nvidia-ptx-instructions, Blackwell-series GPUs still have int8 tensor support; is there another way to use w8a8 int8 on an RTX 5090 with vLLM now ... | https://github.com/vllm-project/vllm/issues/27182 | open | [
"feature request"
] | 2025-10-20T06:04:03Z | 2025-10-20T06:04:03Z | 0 | nhanngoc94245 |
huggingface/optimum | 2,376 | Support qwen2_5_vl for ONNX export | ### Feature request
I would like to be able to convert [this model](https://huggingface.co/prithivMLmods/DeepCaption-VLA-V2.0-7B) which is based on Qwen 2.5 VL architecture using optimum. Right now, I get the error:
```
ValueError: Trying to export a qwen2_5_vl model, that is a custom or unsupported architecture, but... | https://github.com/huggingface/optimum/issues/2376 | open | [] | 2025-10-19T22:08:28Z | 2026-01-06T08:03:39Z | 8 | ayan4m1 |
pytorch/pytorch | 165,861 | Reflect padding: CUDA error when one of the batch dimensions is larger than uint16 max value (2**16) | ### 🐛 Describe the bug
Reflect padding breaks when one of the batch dimensions is larger than uint16 max value (2**16).
The total memory footprint is not the issue: a tensor can hold even more elements, yet as long as every dimension except the last stays within the uint16 range, everything is fine.
Other padding modes behave fine, the p... | https://github.com/pytorch/pytorch/issues/165861 | closed | [
"module: cuda",
"triaged",
"module: edge cases"
] | 2025-10-19T12:07:23Z | 2025-10-22T21:53:53Z | 2 | michal-lukomski |
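A minimal repro sketch of the reported boundary, assuming the failure is driven purely by a single leading dimension crossing 2**16 (requires a CUDA device; shapes are illustrative):
```python
import torch
import torch.nn.functional as F

# Fine: plenty of elements, but every leading dimension is below 2**16.
ok = torch.randn(2**15, 4, 8, device="cuda")
F.pad(ok, (2, 2), mode="reflect")

# Reportedly broken: one batch dimension exceeds the uint16 max.
bad = torch.randn(2**16 + 1, 1, 8, device="cuda")
F.pad(bad, (2, 2), mode="reflect")  # CUDA error, per the issue
```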
huggingface/transformers | 41,731 | transformers CLI documentation issue | ### System Info
- `transformers` version: 5.0.0.dev0
- Platform: Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.39
- Python version: 3.12.9
- Huggingface_hub version: 1.0.0.rc6
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- Py... | https://github.com/huggingface/transformers/issues/41731 | closed | [
"bug"
] | 2025-10-19T09:31:46Z | 2025-12-22T08:03:09Z | 14 | ArjunPimpale |
huggingface/chat-ui | 1,947 | HuggingChat MoM (Mixture-of-Models) Integration Proposal 🤗 | # **HuggingChat MoM (Mixture-of-Models) Integration Proposal 🤗**
**Status:** Proposal
**Date:** 2025-10-19
**Version:** 1.0
**Authors**: vLLM-SR Team
---
## Executive Summary
This proposal outlines the integration of **vLLM Semantic Router** into HuggingChat as a new **MoM (Mixture-of-Models)** routing option.... | https://github.com/huggingface/chat-ui/issues/1947 | open | [
"enhancement"
] | 2025-10-19T08:17:14Z | 2025-10-20T11:12:30Z | 3 | Xunzhuo |
pytorch/xla | 9,681 | Improve PyTorch/XLA Documentation and Clarify SPMD Usage | ## 📚 Documentation
### [Feature Request / Documentation Improvement] Improve PyTorch/XLA Documentation and Clarify SPMD Usage
Hello PyTorch/XLA team,
During my TPU grant I encountered many undocumented pitfalls and unclear behaviors, which made the setup process very time-consuming and confusing.
I’d like to ask f... | https://github.com/pytorch/xla/issues/9681 | open | [
"distributed",
"documentation"
] | 2025-10-19T04:58:44Z | 2025-10-20T13:27:34Z | 1 | Muinez |
huggingface/tokenizers | 1,877 | encode bytes directly | Is there a way to directly encode bytes with a bpe based HF tokenizer without having to decode the string first? | https://github.com/huggingface/tokenizers/issues/1877 | open | [] | 2025-10-19T03:30:39Z | 2025-11-28T07:43:18Z | 2 | tsengalb99 |
vllm-project/vllm | 27,154 | [Installation]: How to reduce the vllm image | ### Your current environment
Hi,
I looked at docker pull vllm/vllm-openai:latest; the image is around 12 GB. I'm exploring ways to reduce the vLLM image size, specifically for NVIDIA L40S (I use Linux amd64). Any ideas?
Does building vLLM from source help reduce the image size?
Here’s what I’ve tried so far (but not s... | https://github.com/vllm-project/vllm/issues/27154 | open | [
"installation"
] | 2025-10-18T17:52:07Z | 2025-10-20T17:45:39Z | 4 | geraldstanje |
vllm-project/vllm | 27,153 | [Feature]: Allow vllm bench serve in non-streaming mode with /completions API | ### 🚀 The feature, motivation and pitch
vLLM’s bench serve currently supports recording benchmark results only in the streaming mode - recording metrics like TTFT, TPOT, ITL etc. For my use case benchmarking [llm-d ](https://github.com/llm-d/llm-d)which uses vLLM, I would like to enable vllm bench serve in non-stream... | https://github.com/vllm-project/vllm/issues/27153 | open | [
"feature request"
] | 2025-10-18T17:47:44Z | 2025-10-18T20:50:49Z | 0 | susiejojo |
huggingface/candle | 3,137 | Strategic Discussion: Flicker's Hybrid Architecture for Lightweight Inference + Advanced Training | # Strategic Discussion: Flicker's Hybrid Architecture Evolution
## Overview
This issue proposes a comprehensive strategic discussion about flicker's positioning and architecture evolution. The detailed proposal is documented in `STRATEGIC_DISCUSSION_PROPOSAL.md`.
## Context
During analysis of flicker's capabilities v... | https://github.com/huggingface/candle/issues/3137 | closed | [] | 2025-10-18T17:27:24Z | 2025-10-21T16:18:51Z | 1 | jagan-nuvai |
huggingface/lerobot | 2,245 | release 0.4.0 and torch 2.8.0 | Hello Lerobot Team! :)
Quick question, do you have a time estimate for:
- lerobot release 0.4.0 (i.e. the next stable release using the new v30 data format)
- bumping torch to 2.8
Thanks a lot in advance!
| https://github.com/huggingface/lerobot/issues/2245 | closed | [
"question",
"dependencies"
] | 2025-10-18T16:57:07Z | 2025-10-19T18:34:47Z | null | antoinedandi |
pytorch/torchtitan | 1,920 | Potentially incorrect attention flop calculation due to wrong head_dim? | ### Bug description
https://github.com/pytorch/torchtitan/blob/a8899e4b2cab74eadbe4b9a2ca2776ceb8829db3/torchtitan/models/utils.py#L432-L437
However, `head_dim` is not necessarily equal to `dim / n_heads`
e.g. Qwen3-4B, dim=2560, n_heads=32, head_dim=128
### Versions
latest main | https://github.com/pytorch/torchtitan/issues/1920 | closed | [
"high priority",
"triage review"
] | 2025-10-18T15:56:57Z | 2025-10-29T22:03:17Z | 4 | gau-nernst |
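The Qwen3-4B numbers quoted above make the mismatch concrete: a flop formula that derives the per-head width as `dim / n_heads` undercounts whenever the config sets `head_dim` explicitly.
```python
# Qwen3-4B configuration cited in the issue.
dim, n_heads, head_dim = 2560, 32, 128

derived = dim // n_heads          # 80, what a dim/n_heads formula assumes
print(derived, head_dim)          # 80 vs 128
print(n_heads * derived, n_heads * head_dim)  # attention width: 2560 vs 4096
```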
pytorch/pytorch | 165,836 | [ROCm][CI] Machines under the label linux.rocm.gpu.2 are undergoing maintenance. | > NOTE: Remember to label this issue with "`ci: sev`"
> If you want autorevert to be disabled, keep the ci: disable-autorevert label
<!-- Add the `merge blocking` label to this PR to prevent PRs from being merged while this issue is open -->
## Current Status
*Status could be: preemptive, ongoing, mitigated, c... | https://github.com/pytorch/pytorch/issues/165836 | closed | [
"module: rocm",
"ci: sev"
] | 2025-10-18T12:54:28Z | 2025-10-20T16:09:25Z | 0 | amdfaa |
huggingface/lerobot | 2,242 | Is it no longer possible to fine-tune the previously used π0 model? | I previously trained a model using the following command for fine-tuning:
`lerobot-train --dataset.repo_id=parkgyuhyeon/slice-clay --policy.path=lerobot/pi0 --output_dir=outputs/train/pi0_slice-clay --job_name=pi0_slice-clay --policy.device=cuda --wandb.enable=false --wandb.project=lerobot --log_freq=10 --steps=50000 ... | https://github.com/huggingface/lerobot/issues/2242 | closed | [
"question",
"policies"
] | 2025-10-18T08:42:35Z | 2025-10-20T00:18:03Z | null | pparkgyuhyeon |
huggingface/lerobot | 2,239 | Models trained using openpi pi0.5 on Lerobot's pi0.5 | Hi, can I check if models trained using the [pytorch port of openpi's pi0.5](https://github.com/Physical-Intelligence/openpi?tab=readme-ov-file#pytorch-support) are compatible with lerobot's definition of pi0.5?
Thanks! | https://github.com/huggingface/lerobot/issues/2239 | open | [
"question",
"policies"
] | 2025-10-18T02:01:45Z | 2025-10-18T10:54:06Z | null | brycegoh |
pytorch/pytorch | 165,811 | [RFC] A Python backend registration API | In this dev post (https://dev-discuss.pytorch.org/t/embrace-tensor-subclass-as-a-python-device-registration-api/2771) I have talked about creating a PyTorch backend purely in Python. After chatting with few folks (@FFFrog @gabrieldemarmiesse), we decided that it's a good idea to formalize APIs around registering Backen... | https://github.com/pytorch/pytorch/issues/165811 | open | [
"triaged",
"module: backend",
"module: python frontend"
] | 2025-10-18T00:46:37Z | 2025-10-27T17:28:37Z | 1 | qihqi |
pytorch/pytorch | 165,799 | `torch.where` does not accept scalar argument when `out=` is passed | ### 🐛 Describe the bug
`torch.where` accepts scalar arguments as per the documentation. This works fine for the most part, but when the `out` argument is provided, a `TypeError` is raised complaining that scalar arguments are not accepted.
To reproduce the error, run
```
import torch
x = torch.tensor([1.0, 2.0])
con... | https://github.com/pytorch/pytorch/issues/165799 | open | [
"triaged",
"module: python frontend"
] | 2025-10-17T22:30:10Z | 2025-10-19T19:21:34Z | null | hchau630 |
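A short sketch of the discrepancy as reported: the scalar overload works until `out=` enters the call (CPU-only, so it runs anywhere):
```python
import torch

x = torch.tensor([1.0, 2.0])
cond = torch.tensor([True, False])

print(torch.where(cond, x, 0.0))        # scalar "other" works: tensor([1., 0.])

out = torch.empty(2)
try:
    torch.where(cond, x, 0.0, out=out)  # per the issue, raises TypeError
except TypeError as e:
    print("TypeError:", e)
```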
pytorch/executorch | 15,222 | How to support custom LLMs with qualcomm backend? | ``examples/qualcomm/oss_scripts/llama/llama.py`` gives an example on how to export LLMs.
I would like to know if there are any guidelines for supporting custom LLMs with architectures similar to LLaMA. Specifically, I have a huggingface-style checkpoint folder.
cc @cccclai @winskuo-quic @shewu-quic @haowhsu-quic @D... | https://github.com/pytorch/executorch/issues/15222 | closed | [
"partner: qualcomm",
"module: qnn"
] | 2025-10-17T15:22:28Z | 2025-10-30T21:20:11Z | null | xiaoxiaosuaxuan |
huggingface/lerobot | 2,228 | Trossen WidowX AI model, depth cameras and tests | Hi,
Would you be open to receiving pull requests to support more recent Trossen Robotics setups as well as depth cameras? I think for the robot part the pattern is quite well established. For depth cameras we solved it by tweaking the dataset utils a bit.
Our implementation is fairly tested. | https://github.com/huggingface/lerobot/issues/2228 | closed | [
"question",
"robots"
] | 2025-10-17T09:32:22Z | 2025-10-31T19:15:25Z | null | lromor |
vllm-project/vllm | 27,090 | [Usage]: Does vLLM support a data-parallel group spanning multiple nodes when starting an online service? | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
Does vLLM support a data-parallel group spanning multiple nodes when starting an online service?
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and ask... | https://github.com/vllm-project/vllm/issues/27090 | open | [
"usage"
] | 2025-10-17T09:15:04Z | 2025-10-20T02:37:19Z | 2 | KrisLu999 |
vllm-project/vllm | 27,086 | [Bug]: After enabling P-D Disaggregation, the final output results are not entirely identical. | ### Your current environment
vllm VERSION: 0.10.1
### 🐛 Describe the bug
When I fixed the random seed and ensured all environment variables were consistent, I noticed that launching PD separation with the same configuration produced inconsistent final outputs. This phenomenon may require multiple attempts to fully ... | https://github.com/vllm-project/vllm/issues/27086 | open | [
"bug"
] | 2025-10-17T07:56:41Z | 2025-10-20T09:16:21Z | 4 | freedom-cui |
huggingface/lerobot | 2,227 | How to easily run inference with a trained model | Hello, and thank you for sharing such an inspiring project!
I’m currently working with a 7-DoF robotic arm (6 joint axes + 1 gripper) and generating datasets through video recordings for training on smolVLA. Since there’s still some ongoing engineering work related to dataset generation, I’d like to start by understan... | https://github.com/huggingface/lerobot/issues/2227 | open | [
"question"
] | 2025-10-17T05:41:15Z | 2025-12-16T02:57:00Z | null | Biz-Joe |
pytorch/torchtitan | 1,903 | Problem with converting DCP checkpoint to Hugging Face format | Hi! I started a run with Llama_3_8b and saved the DCP checkpoint of step 0 (the original model). Then I used https://github.com/pytorch/torchtitan/blob/main/scripts/checkpoint_conversion/convert_to_hf.py
to convert the step-0 DCP checkpoint into .safetensors files, and copied the config.json and tokenizer from meta-ll... | https://github.com/pytorch/torchtitan/issues/1903 | closed | [
"question"
] | 2025-10-17T03:00:52Z | 2025-10-17T05:04:22Z | null | kv-wang |
huggingface/lerobot | 2,224 | Can I just modify the JSON of the pretrained policy to adapt it to my own robot? | I just want to know whether I can simply modify the config JSON (shape of state, size of image, etc.) to adapt the model for inference on my modified robot (which has a different number of Feetech motors and a different image resolution)? | https://github.com/huggingface/lerobot/issues/2224 | open | [
"question",
"policies"
] | 2025-10-17T01:33:32Z | 2025-10-20T16:40:26Z | null | shs822 |
pytorch/torchtitan | 1,900 | checkpoint.initial_load_in_hf should overwrite everything and load from hf weights. | ### Bug description
I have a `checkpoint` folder and I set `initial_load_in_hf: true` in yaml config like [this](https://github.com/meta-pytorch/forge/blob/main/apps/grpo/qwen3_1_7b.yaml#L78), when running `python -m apps.grpo.main --config apps/grpo/qwen3_1_7b.yaml`, I will get the error `step-1` not found. From the ... | https://github.com/pytorch/torchtitan/issues/1900 | open | [
"question"
] | 2025-10-16T21:08:59Z | 2025-10-16T21:33:32Z | null | wukaixingxp |
pytorch/xla | 9,679 | PJRT Computation Client Teardown Function | ## ❓ Questions and Help
Is there a teardown function that can be hooked from PJRT Plugin implementers for system teardown purposes? For example, graceful device closure at session termination?
It seems like the PJRT Computation Client is instantiated with a [leaky singleton](https://github.com/pytorch/xla/blob/d29162... | https://github.com/pytorch/xla/issues/9679 | open | [
"question"
] | 2025-10-16T20:21:27Z | 2025-10-17T16:52:08Z | null | jameszianxuTT |
huggingface/lerobot | 2,221 | Question about pre-trained weights usability and performance on Hugging Face models | Hello,
I would like to ask whether the weights provided on Hugging Face (for example, under the lerobot author page) can be directly downloaded and used for inference, or if they must be fine-tuned before achieving reasonable performance.
When I directly load and evaluate the models (e.g., lerobot/smolvla_base or ler... | https://github.com/huggingface/lerobot/issues/2221 | closed | [
"question"
] | 2025-10-16T14:14:39Z | 2025-10-31T16:26:45Z | null | MichaelWu99-lab |
vllm-project/vllm | 27,021 | [Usage]: Need guidance reproducing benchmark results from PR #25337 — results differ significantly from reported data | ## Background
Recently, we have been working on optimizing the position computation for multimodal models in vLLM.
During benchmarking, we noticed that our results were not as expected.
To investigate, we decided to reproduce the benchmark results from [PR #25337](https://github.com/vllm-project/vllm/pull/25337), com... | https://github.com/vllm-project/vllm/issues/27021 | open | [
"usage"
] | 2025-10-16T12:31:03Z | 2025-10-17T05:46:32Z | 5 | deitxfge |
vllm-project/vllm | 27,017 | [Doc]: KV Cache Memory allocations | ### 📚 The doc issue
Hello,
When serving a model via vLLM for text(token) generation:
1. Before a new request gets scheduled, does vLLM check if KV cache for a sequence length of `max_model_len` is available for that new request or does it check if KV cache for a sequence length of `input prompt + max_tokens` (if it'... | https://github.com/vllm-project/vllm/issues/27017 | closed | [
"documentation"
] | 2025-10-16T11:43:43Z | 2025-11-04T11:08:02Z | 7 | sneha5gsm |
vllm-project/vllm | 27,011 | [Usage]: Runnig GLM4.5-Air with Speculative Decoding | ### Your current environment
```
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to run inference of a [GLM-4.5-Air](https://huggingface.co/zai-org/GLM-4.5-Air-FP8) with speculative decoding. From [GLM 4.5](https://huggingface.co/zai-org/GLM-4.5) page, it mentioned `All models use MT... | https://github.com/vllm-project/vllm/issues/27011 | open | [
"usage"
] | 2025-10-16T10:17:54Z | 2025-10-16T10:23:01Z | 0 | aqx95 |
vllm-project/vllm | 27,006 | [Usage]: In vLLM version 0.8.5, when I send an HTTP image URL directly, the model cannot recognize the image content, but it works correctly when I use a base64-encoded image. I’d like to understand why this happens. | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues... | https://github.com/vllm-project/vllm/issues/27006 | open | [
"usage"
] | 2025-10-16T08:09:29Z | 2025-10-16T10:33:49Z | 4 | Lislttt |
huggingface/lerobot | 2,218 | image pad value in pi0/pi05 | ### System Info
```Shell
the latest lerobot version
```
### Information
- [ ] One of the scripts in the examples/ folder of LeRobot
- [ ] My own task or dataset (give details below)
### Reproduction
def resize_with_pad_torch( # see openpi `resize_with_pad_torch` (exact copy)
images: torch.Tensor,
height: ... | https://github.com/huggingface/lerobot/issues/2218 | open | [
"bug",
"question",
"policies"
] | 2025-10-16T06:48:13Z | 2025-10-17T09:58:49Z | null | Tgzz666 |
huggingface/transformers | 41,640 | AttributeError: BartTokenizerFast has no attribute image_token. Did you mean: 'mask_token'? | ### System Info
Ubuntu
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
im... | https://github.com/huggingface/transformers/issues/41640 | closed | [
"bug"
] | 2025-10-16T06:34:02Z | 2025-10-17T09:00:36Z | 5 | conceptofmind |
huggingface/transformers.js | 1,439 | Integration to a CLI application created using PKG | ### Question
I'm trying to bundle a Node.js CLI tool that uses `@xenova/transformers` into a single executable using [pkg](https://github.com/vercel/pkg).
The build works fine, but when I run the packaged executable, I get this error:
```
Error: Cannot find module '../bin/napi-v3/linux/x64/onnxruntime_binding.node'
R... | https://github.com/huggingface/transformers.js/issues/1439 | open | [
"question"
] | 2025-10-16T05:30:32Z | 2025-10-26T23:32:41Z | null | JosephJibi |
huggingface/lerobot | 2,216 | GPU memory required to finetune pi05 | I tried to finetune pi05 with an RTX A6000 (48GB) and got an insufficient memory error. Does anyone know how much GPU memory is needed to finetune a pi05 policy?
Thanks, | https://github.com/huggingface/lerobot/issues/2216 | open | [
"question",
"policies",
"performance"
] | 2025-10-16T04:46:21Z | 2025-12-22T07:42:45Z | null | jcl2023 |
pytorch/pytorch | 165,612 | RFC: Optionally accept NumPy dtypes in all APIs where torch dtypes are accepted | ### 🚀 The feature, motivation and pitch
On behalf of the Python Data API Consortium / Python array API standard, to follow up with the conclusion we reached in the September 18 meeting I am filing this RFC for PyTorch stakeholders to consider 🙂
The Python array API standard currently specifies that each array lib... | https://github.com/pytorch/pytorch/issues/165612 | open | [
"triaged",
"enhancement",
"module: python frontend",
"module: floatx (formerly float8)"
] | 2025-10-16T04:24:52Z | 2025-11-27T00:42:21Z | 1 | leofang |
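To make the RFC concrete, here is what fails today and the usual workaround of recovering the matching torch dtype through a zero-size ndarray; accepting the NumPy dtype directly is the proposed behavior, not the current API:
```python
import numpy as np
import torch

# Today: NumPy dtypes are rejected where torch dtypes are expected.
try:
    torch.zeros(3, dtype=np.float32)
except TypeError as e:
    print("rejected:", e)

# Workaround: round-trip through an empty ndarray to get the torch dtype.
dt = torch.from_numpy(np.empty(0, dtype=np.float32)).dtype  # torch.float32
print(torch.zeros(3, dtype=dt))
```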
vllm-project/vllm | 26,981 | [Usage]: Does vllm support use TokensPrompt for Qwen3VL model | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
My truncation strategy differs slightly from the standard approach (I wish to preserve the system prompt and the final suffix, only truncating the middle portion). It seems that the current version of v... | https://github.com/vllm-project/vllm/issues/26981 | open | [
"usage"
] | 2025-10-16T03:22:09Z | 2025-10-27T03:33:53Z | 10 | afalf |
huggingface/lerobot | 2,214 | Potential Scale Imbalance in smolVLA Embedding Pipeline | Hi, I noticed a potential scale inconsistency in the embedding pipeline.
Specifically, state_emb is not normalized, while both img_emb and lang_emb are explicitly scaled by math.sqrt(emb_dim):
https://github.com/huggingface/lerobot/blob/a6ff3cfebb0304f2c378515dd30ea06fff8f473f/src/lerobot/policies/smolvla/modeling_smo... | https://github.com/huggingface/lerobot/issues/2214 | open | [
"question",
"policies"
] | 2025-10-16T02:11:24Z | 2025-10-17T11:29:36Z | null | kkTkk012 |
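A tiny numeric illustration of the imbalance described above; the width is an assumed placeholder, only the relative scales matter:
```python
import math
import torch

emb_dim = 960  # assumed width, for illustration only
img_emb = torch.randn(16, emb_dim) * math.sqrt(emb_dim)   # explicitly scaled
lang_emb = torch.randn(16, emb_dim) * math.sqrt(emb_dim)  # explicitly scaled
state_emb = torch.randn(16, emb_dim)                      # unscaled, per the issue

# The state stream ends up ~sqrt(emb_dim) (about 31x here) smaller in scale.
print(img_emb.std().item(), lang_emb.std().item(), state_emb.std().item())
```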
vllm-project/vllm | 26,964 | [Bug]: Issue with Deepseek Reasoning parser with Qwen3 2507 chat templates | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
# wget https://raw.githubusercontent.com/vllm-project/vllm/main/vllm/collect_env.py
# For security purposes, please feel free to check the contents of collect_env.py before running it.
python collect_env... | https://github.com/vllm-project/vllm/issues/26964 | open | [
"bug"
] | 2025-10-16T00:39:12Z | 2025-10-20T17:47:02Z | 1 | MikeNatC |
pytorch/pytorch | 165,590 | RuntimeError: non-positive groups is not supported | ### 🐛 Describe the bug
torch==2.7.1
I got a RuntimeError: non-positive groups is not supported while using conv1d in my model. I tried to add more logs and asserts to find what is going wrong, but it didn't help. Even when I set the groups parameter to 128, the error remains.
From the output I got the sizes of the input tensors
```
torch... | https://github.com/pytorch/pytorch/issues/165590 | open | [
"needs reproduction",
"module: nn",
"triaged"
] | 2025-10-15T22:44:54Z | 2025-10-17T18:59:35Z | 1 | st085318 |
vllm-project/vllm | 26,949 | [Bug]: RuntimeError: CUDA driver error: invalid device ordinal when symmetric memory (symm_mem) is enabled in multi-GPU vLLM setup with 4H100 PCIe | ### My current environment
Environment:
Model: RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic
vLLM Version: latest main (installed via pip)
Hardware: 4× NVIDIA H100 PCIe (80GB)
Driver: 550.xx
CUDA: 12.2
PyTorch: 2.4.0
OS: Ubuntu 22.04
Launch Command:
python3 -m vllm.entrypoints.api_server \
--model /ephemeral... | https://github.com/vllm-project/vllm/issues/26949 | open | [
"bug"
] | 2025-10-15T22:08:34Z | 2025-12-25T03:42:49Z | 2 | vadapallij |
pytorch/pytorch | 165,578 | Out of tree backend documentation does not seem accurate | ### 📚 The doc issue
Looking at the "How does this mechanism apply to out-of-tree extensions" section of [the autoloading tutorial](https://docs.pytorch.org/tutorials/unstable/python_extension_autoload.html#how-to-apply-this-mechanism-to-out-of-tree-extensions), it looks to me like importing setting a backend `torch_f... | https://github.com/pytorch/pytorch/issues/165578 | open | [
"module: docs",
"triaged",
"module: PrivateUse1"
] | 2025-10-15T20:49:12Z | 2025-10-17T04:30:01Z | 1 | pganssle-google |
pytorch/pytorch | 165,577 | CI: What is the purpose of `slow.yml` | ### 🐛 Describe the bug
What is the purpose of the `slow.yml` job, when we can shard more and can probably rely on TD to skip slow tests if they are not needed?
In the past the `slow.yml` job was a way of keeping time-to-signal low, while running some tests post commit, but now that we have TD we probably can get rid of con...
"module: ci",
"triaged",
"needs research"
] | 2025-10-15T20:33:34Z | 2025-10-16T09:24:45Z | null | malfet |
vllm-project/vllm | 26,940 | [Feature]: Support `inf` value for burstiness in benchmarks | ### 🚀 The feature, motivation and pitch
In the benchmarks, the burstiness value is used in a gamma distribution to sample the delays between consecutive requests.
```
theta = 1.0 / (current_request_rate * burstiness)
delay_ts.append(np.random.gamma(shape=burstiness, scale=theta))
```
[Theoretically ](https://en.wik... | https://github.com/vllm-project/vllm/issues/26940 | closed | [
"feature request"
] | 2025-10-15T19:39:03Z | 2025-11-03T18:33:19Z | 0 | sducouedic |
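The `inf` request is well grounded: with `theta = 1/(rate * burstiness)`, the gamma-distributed delay has mean `1/rate` regardless of burstiness, while its standard deviation shrinks like `1/sqrt(burstiness)`, so the limit is perfectly even spacing. A quick numeric check of that limit:
```python
import numpy as np

rate = 10.0  # requests per second
for burstiness in (1.0, 100.0, 10_000.0):
    theta = 1.0 / (rate * burstiness)
    delays = np.random.gamma(shape=burstiness, scale=theta, size=100_000)
    # mean stays at 1/rate = 0.1; std shrinks like 1/sqrt(burstiness)
    print(f"{burstiness:>8}: mean={delays.mean():.4f} std={delays.std():.5f}")
```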
vllm-project/vllm | 26,914 | [Usage]: Why can't I see communication operators in the collected profiling? | ### Your current environment
```text
The output of `python collect_env.py`
```
I collected a profile via llm.start_profile and llm.stop_profile, but I cannot see any communication operators in kernel_details.
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submitti... | https://github.com/vllm-project/vllm/issues/26914 | open | [
"usage"
] | 2025-10-15T13:38:14Z | 2025-10-15T13:38:14Z | 0 | sheep94lion |
pytorch/rl | 3,197 | [Question] How to handle MultiDiscrete action spaces in TorchRL | I have created a custom Parallel API PettingZoo environment with **MultiDiscrete action spaces**. The _env.action_spec()_ function succeeds.
I am using the **Multi-Agent PPO tutorial of TorchRL**, but I’m struggling to understand how to modify the architecture so it supports **MultiDiscrete action spaces**. Specifica... | https://github.com/pytorch/rl/issues/3197 | open | [] | 2025-10-15T11:56:00Z | 2025-10-16T19:38:12Z | null | AnastasiaPsarou |
vllm-project/vllm | 26,903 | [Usage]: vLLM for video input | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I want to run inference of qwen2.5-vl or qwen2.5-omni.
When I convert the video to base64 for api calls (e.g. openai format), I found that vLLM seems to use all the video frames by checking the number... | https://github.com/vllm-project/vllm/issues/26903 | open | [
"usage"
] | 2025-10-15T09:29:23Z | 2025-12-11T03:26:33Z | 6 | King-king424 |
huggingface/diffusers | 12,492 | module transformers has no attribute CLIPFeatureExtractor | ### System Info
latest main
### Who can help?
@SunMarc
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
f... | https://github.com/huggingface/diffusers/issues/12492 | closed | [
"bug"
] | 2025-10-15T08:26:05Z | 2025-11-03T05:02:54Z | 3 | jiqing-feng |
pytorch/xla | 9,678 | Heterogeneous execution across multiple PJRT clients (GPU + custom accelerator) | ## ❓ Questions and Help
Hi, I’m developing a PJRT plugin for a custom accelerator, and I’m exploring whether PyTorch/XLA can support heterogeneous execution across multiple PJRT clients — for example, splitting a model or HLO module between GPU, CPU, and the custom accelerator.
Concretely, I’d like to enable availabil... | https://github.com/pytorch/xla/issues/9678 | closed | [
"question"
] | 2025-10-15T02:56:43Z | 2025-10-16T14:52:40Z | null | milinbhade1214 |
vllm-project/vllm | 26,858 | [RFC]: Top-level CLI interface for KV cache offloading | ### Motivation.
CPU (and tier-2 storage) offloading is an important feature in many cases (multi-round QA, document analysis, agent workflow, and reinforcement learning). With the recent advancement in the offloading connector, we already have the vLLM native CPU offloading implemented via the connector API. Also, the... | https://github.com/vllm-project/vllm/issues/26858 | closed | [
"RFC"
] | 2025-10-15T00:11:15Z | 2025-11-01T07:17:08Z | 8 | ApostaC |
huggingface/diffusers | 12,485 | How to enable Context Parallelism for training | Hi @a-r-r-o-w , I would like to ask you for tips on using Context Parallelism for distributed training.
**Is your feature request related to a problem? Please describe.**
Here is the minimal code for adapting Context Parallelism into diffusion model training
```python
# Diffusers Version: 0.36.0.dev0
from diffusers.m... | https://github.com/huggingface/diffusers/issues/12485 | closed | [] | 2025-10-14T21:48:35Z | 2025-10-15T20:33:30Z | null | liming-ai |
vllm-project/vllm | 26,840 | [Doc]: Update AWQ Guide | ### 📚 The doc issue
Situation: AutoAWQ functionality was adopted by llm-compressor but vllm [docs](https://docs.vllm.ai/en/latest/features/quantization/auto_awq.html) point to AutoAWQ which is deprecated
### Suggest a potential alternative/fix
1) Update the [AutoAWQ guide](https://github.com/vllm-project/vllm/blob... | https://github.com/vllm-project/vllm/issues/26840 | closed | [
"documentation"
] | 2025-10-14T20:02:21Z | 2025-11-03T15:39:12Z | 0 | HDCharles |
vllm-project/vllm | 26,838 | [Performance]: RTX 6000 PRO - FP8 in sglang is faster | ### Proposal to improve performance
Can we have a discussion about the sglang FP8 performance vs VLLM performance -
I'm able to get 133 tokens/sec with sglang GLM-4.5-Air-FP8 vs 78 tokens/sec in VLLM
```PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True USE_TRITON_W8A8_FP8_KERNEL=1 SGL_ENABLE_JIT_DEEPGEMM=0 python -... | https://github.com/vllm-project/vllm/issues/26838 | open | [
"performance"
] | 2025-10-14T19:41:14Z | 2025-12-29T14:52:57Z | 10 | voipmonitor |
pytorch/pytorch | 165,444 | AOTInductor not updating buffers inplace | Hey all,
I'd like to double check whether updating buffers inplace is currently supported with AOTInductor? Based on the answers on this issue https://github.com/pytorch/pytorch/issues/159124 I think it should be, but it does not seem to work when I load the module from file. If not, is there any workaround we can us... | https://github.com/pytorch/pytorch/issues/165444 | open | [
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 2025-10-14T16:48:34Z | 2025-10-19T23:42:43Z | 2 | olarucatalin |
vllm-project/vllm | 26,817 | [Feature]: Add process_weights_after_loading to AttentionImpl | ### 🚀 The feature, motivation and pitch
Currently, in the `Attention` layer, we check if `process_weights_after_loading` exists and then call it conditionally, and after that we apply flashinfer-specific logic.
Instead, we should just add a `process_weights_after_loading` method to AttentionImpl (no-op) by default, ... | https://github.com/vllm-project/vllm/issues/26817 | closed | [
"help wanted",
"good first issue",
"feature request"
] | 2025-10-14T15:59:54Z | 2025-10-16T15:02:31Z | 2 | ProExpertProg |
vllm-project/vllm | 26,806 | [Usage]: MCP-USE with VLLM gpt-oss:20b via ChatOpenAI | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
I am trying to create an agent using gpt-oss:20B with mcp-use
Most of the time the model returns "Agent completed the task successfully.", and only sometimes the proper output that is required
### code
`vllm ... | https://github.com/vllm-project/vllm/issues/26806 | open | [
"usage"
] | 2025-10-14T13:00:38Z | 2025-11-20T06:33:29Z | 2 | Tahirc1 |
pytorch/pytorch | 165,428 | Using NCCL for Global Group and MPI for Sub-Groups in torch.distributed | ### 🚀 The feature, motivation and pitch
I want to mix NCCL and MPI backends in the `torch.distributed` package. Does torch.distributed support using NCCL as the backend when initializing the global process group with `torch.distributed.init_process_group()`, and then using MPI as the backend when creating a sub-proce... | https://github.com/pytorch/pytorch/issues/165428 | closed | [] | 2025-10-14T09:26:47Z | 2025-10-15T13:44:42Z | 11 | cq-eng |
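For what the question is probing: `torch.distributed.new_group` does accept a `backend` argument, so the intended setup would look like the sketch below. This assumes a PyTorch build with MPI support and a launcher (e.g. torchrun) that sets the usual rank/world-size env vars; whether this particular NCCL+MPI mix works end to end is exactly what the issue asks.
```python
import torch.distributed as dist

# Global group on NCCL (one GPU per rank assumed).
dist.init_process_group(backend="nccl")

# Sub-group with an explicitly different backend; requires PyTorch
# built with MPI support, otherwise group creation raises.
sub_group = dist.new_group(ranks=[0, 1], backend="mpi")

# Collectives passed group=sub_group would then go through MPI.
dist.destroy_process_group()
```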
vllm-project/vllm | 26,786 | [Usage]: cuda12.8 docker 0.11.0 Error occurs when launching the model, NCCL error: unhandled cuda error. | When I use only a single graphics card, the system can start up normally.
Below are Docker configuration files, logs, and environment information.
I encountered this issue when upgrading from version 10.1.1 to 10.2.
[The system generates an error when using dual graphics cards; version 10.1.1 functions correctly, but... | https://github.com/vllm-project/vllm/issues/26786 | closed | [
"usage"
] | 2025-10-14T09:01:39Z | 2025-11-07T17:17:32Z | 3 | ooodwbooo |
pytorch/pytorch | 165,419 | [RFC] Make PyTorch Expandable Segments interoperate with CUDA VMM-based allocators (NCCL ncclMemAlloc) | ## Summary
PyTorch’s expandable segments reduce fragmentation by using CUDA Virtual Memory Management (VMM) to grow/shrink virtual segments instead of relying on cudaMalloc blocks.
Separately, NCCL’s user buffer registration—including NVLS, General (intra-node) buffer registration, and Window Registration—expects buf... | https://github.com/pytorch/pytorch/issues/165419 | closed | [
"module: cuda",
"triaged",
"module: nccl",
"module: CUDACachingAllocator"
] | 2025-10-14T07:53:01Z | 2025-12-10T17:12:45Z | 14 | eee4017 |
vllm-project/vllm | 26,774 | [Usage]: how to use vllm on CUDA 12.9 | ### Your current environment
```text
Traceback (most recent call last):
File "/vllm-workspace/collect_env.py", line 825, in <module>
main()
File "/vllm-workspace/collect_env.py", line 804, in main
output = get_pretty_env_info()
^^^^^^^^^^^^^^^^^^^^^
File "/vllm-workspace/collect_env.py", lin... | https://github.com/vllm-project/vllm/issues/26774 | open | [
"usage"
] | 2025-10-14T07:30:56Z | 2025-10-14T07:40:08Z | 1 | Mrpingdan |
vllm-project/vllm | 26,772 | [Feature]: Option kv_event default config | ### 🚀 The feature, motivation and pitch
The current kv_event config sets publisher to null, but the endpoint is a ZMQ endpoint, so when the publisher config is not set, vLLM cannot start and fails with the error: `EventPublisher.__init__() got an unexpected keyword argument 'endpoint'`.
Can we change this default publisher to zmq, so that when starting with enable_... | https://github.com/vllm-project/vllm/issues/26772 | closed | [
"feature request"
] | 2025-10-14T07:08:58Z | 2025-10-22T19:19:34Z | 5 | lengrongfu |
vllm-project/vllm | 26,762 | [Usage]: about curl http://ip:8000/metrics | ### Your current environment
When I run this command, I get the following results:
# HELP python_gc_objects_collected_total Objects collected during gc
# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 12286.0
python_gc_objects_collected_total{generation="1"} 1244.0
py... | https://github.com/vllm-project/vllm/issues/26762 | open | [
"usage"
] | 2025-10-14T05:13:30Z | 2025-10-14T05:13:30Z | 0 | Renoshen |
huggingface/lerobot | 2,194 | During training with PI0, the loss is very low. Is this normal, and is the training proceeding correctly? | I am currently training with PI05.
<img width="1039" height="355" alt="Image" src="https://github.com/user-attachments/assets/5ab3f3e0-82bc-403c-8124-416b330dab14" />
`INFO 2025-10-14 04:57:11 ot_train.py:299 step:10 smpl:320 ep:0 epch:0.00 loss:0.468 grdn:3.522 lr:1.6e-07 updt_s:4.906 data_s:4.874 INFO 2025-10-14 04... | https://github.com/huggingface/lerobot/issues/2194 | closed | [
"question",
"policies"
] | 2025-10-14T05:04:31Z | 2025-10-14T08:19:29Z | null | pparkgyuhyeon |
huggingface/peft | 2,832 | Gradient checkpoint with multiple adapters | I'm not sure if it can be considered as a bug since I might be using the library differently from how it's supposed to be used.
**Context:**
I have a PeftModel that needs to run inference on 2 different inputs.
For each input I have a pretrained adapter that is frozen and a new adapter for finetuning.
My forward doe... | https://github.com/huggingface/peft/issues/2832 | closed | [] | 2025-10-14T03:53:10Z | 2025-12-15T08:24:03Z | 3 | NguyenRichard |
huggingface/lerobot | 2,192 | How to test PI0's output | I use this code to test PI0's output:
def main():
# Create a directory to store the training checkpoint.
output_directory = Path("outputs/example_aloha_static_coffee")
output_directory.mkdir(parents=True, exist_ok=True)
    # Select your device
device = torch.device("cuda")
# Number of offline ... | https://github.com/huggingface/lerobot/issues/2192 | open | [
"question",
"policies"
] | 2025-10-14T03:36:43Z | 2025-10-17T09:56:46Z | null | Addog666 |
vllm-project/vllm | 26,749 | [Bug]: InternVL: passing image embeddings triggers TypeError: can only concatenate tuple (not "Tensor") to tuple in get_multimodal_embeddings, and v1 sanity check then expects a sequence of 2D tensors | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
# Title
InternVL: passing image **embeddings** triggers `TypeError: can only concatenate tuple (not "Tensor") to tup... | https://github.com/vllm-project/vllm/issues/26749 | closed | [
"bug"
] | 2025-10-14T03:01:33Z | 2025-10-14T09:36:22Z | 1 | BlueBlueFF |
huggingface/transformers | 41,554 | model.from_pretrained( . . . ) not loading needed weights/parameters | I am performing quantization of a PatchTSTForPrediction model and attempting to load a saved quantized model for testing. Model is saved using `model.save_pretrained( . . . )`. Testing proceeds perfectly once performed immediately after QAT (the Hugging Face Trainer handles loading at the end of training); however, when ... | https://github.com/huggingface/transformers/issues/41554 | closed | [] | 2025-10-13T23:20:20Z | 2025-11-24T08:03:05Z | 5 | lorsonblair |
pytorch/pytorch | 165,324 | How to enable Bfloat16 when using torch.func.jvp | ### 🐛 Describe the bug
```python
model_partial = partial(model_fn, **inputs)
jvp_args = (
lambda z, t, r: model_partial(latents=z, timestep=t, r_timestep=r),
(z, t, r),
(v_hat, torch.ones_like(t).to(x.dtype), torch.zeros_like(r).to(x.dtype)),
)
with torch.autocast(device_type="cuda", dtype=torch.bfloat16... | https://github.com/pytorch/pytorch/issues/165324 | open | [
"triaged",
"module: amp (automated mixed precision)",
"release notes: torch.func"
] | 2025-10-13T14:52:15Z | 2025-10-27T15:23:35Z | null | pnotp |
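A self-contained probe of the pattern in question, with a stand-in linear layer replacing the diffusion model and CPU autocast so it runs anywhere; the point is to see which dtypes (or which error) the primal and tangent outputs come back with:
```python
import torch
from torch.func import jvp

model = torch.nn.Linear(4, 4)
x = torch.randn(4)
v = torch.randn(4)  # tangent direction

with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    try:
        out, dout = jvp(lambda z: model(z), (x,), (v,))
        print(out.dtype, dout.dtype)
    except RuntimeError as e:
        print("jvp under autocast failed:", e)
```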
pytorch/pytorch | 165,319 | Memory leak when converting from numpy array | ### 🐛 Describe the bug
Just faced a weird memory leak in my code that uses both numpy and pytorch on cpu (to exploit some scipy functionalities first, before using pytorch ones). Here is a minimal example that reproduces the leak on my laptop. I faced it on python 3.10 and then python 3.13 with pytorch 2.8.0.
```pyt... | https://github.com/pytorch/pytorch/issues/165319 | open | [
"module: memory usage",
"triaged",
"module: numpy"
] | 2025-10-13T13:58:22Z | 2025-10-14T08:06:37Z | 4 | raphaelreme |
huggingface/lerobot | 2,186 | How to load pi0? | I use this code to load pi0:
```python
from lerobot.policies.pi0.modeling_pi0 import PI0Policy
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pretrained_policy_path = "lerobot/pi0_libero_base"
policy = PI0Policy.from_pretrained(pretrained_policy_path).to(device)
```
but throws a... | https://github.com/huggingface/lerobot/issues/2186 | closed | [
"question",
"policies",
"python"
] | 2025-10-13T12:24:32Z | 2025-10-17T09:53:02Z | null | Addog666 |
huggingface/accelerate | 3,812 | RuntimeError during load_state | ### System Info
This issue is related to [prior issue 3101](https://github.com/huggingface/accelerate/issues/3101), but it hasn’t been fully resolved yet. The current workaround is to avoid using `safetensors`.
@Narsil suggested using [`load_file/save_file`](https://github.com/huggingface/safetensors/issues/657#issue... | https://github.com/huggingface/accelerate/issues/3812 | closed | [] | 2025-10-13T11:25:17Z | 2025-11-21T15:07:49Z | 2 | Silverster98 |
huggingface/lerobot | 2,185 | Has the lerobot data format been modified after June this year? | Has the lerobot data format been modified after June this year? The original data can no longer be used. | https://github.com/huggingface/lerobot/issues/2185 | closed | [
"question",
"dataset"
] | 2025-10-13T10:07:41Z | 2025-10-14T08:05:04Z | null | Addog666 |
huggingface/transformers | 41,539 | All POETRY operations fail on latest version 4.57.0 | ### System Info
I import transformers (always latest) in my poetry project.
I use poetry 2.1.2
After this transformers release (4.57.0) I regenerated the poetry lock with command: `poetry lock`
Then, when retrying to generate the lock again after other updates, it fails with the message:
`Could not parse constrains ver... | https://github.com/huggingface/transformers/issues/41539 | closed | [
"bug"
] | 2025-10-13T08:40:49Z | 2025-10-13T14:18:02Z | 1 | bfuia |
vllm-project/vllm | 26,692 | [Usage]: How to release KVCache? | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version : Cou... | https://github.com/vllm-project/vllm/issues/26692 | open | [
"usage"
] | 2025-10-13T08:28:20Z | 2025-10-13T08:28:20Z | 0 | shenxf1205 |
huggingface/lerobot | 2,184 | How to let an episode realize it has finished the task? | I have successfully trained my real-world lerobot to do several simple tasks from human demonstrations. Say, push an object from point A to point B. I noticed that after the robot arm has finished the task, it would return to its initial pose (same as the human demonstration) and stay idle for the remainder of the epis... | https://github.com/huggingface/lerobot/issues/2184 | open | [] | 2025-10-13T06:27:36Z | 2025-12-22T07:56:00Z | null | genkv |
pytorch/ao | 3,157 | Is there no tutorial for dynamic quantization of BERT model in torch.ao? | I saw that some quant related tutorials in [the PyTorch tutorials repo](https://github.com/pytorch/tutorials) have been deleted, and [the PR](https://github.com/pytorch/tutorials/pull/3432) stated that these tutorials will be moved to torchao. However, I can't find [the BERT dynamic quantization tutorial](https://gi... | https://github.com/pytorch/ao/issues/3157 | open | [
"triaged"
] | 2025-10-12T15:47:06Z | 2026-01-03T14:43:56Z | 6 | Esttelle |
vllm-project/vllm | 26,660 | [Usage]: Is there any way to enable beam search in online inference? | ### Your current environment
Is there any way to enable beam search in the `vllm serve` command? Or is beam search only available in offline inference code?
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
### Before submi... | https://github.com/vllm-project/vllm/issues/26660 | closed | [
"usage"
] | 2025-10-12T13:55:07Z | 2025-10-17T17:12:45Z | 1 | tiesanguaixia |
huggingface/transformers | 41,533 | `add_special_tokens` and `resize_token_embeddings` result in an error | ### System Info
I want to add a few special tokens to my Qwen2.5VL model as separators, and after executing the following code, I received the following error message. I don't know how to solve this problem.
``` bash
[rank1]: Traceback (most recent call last):
[rank1]: RuntimeError: shape '[-1, 151936]' is invalid fo... | https://github.com/huggingface/transformers/issues/41533 | closed | [
"bug"
] | 2025-10-12T13:50:40Z | 2025-10-13T14:09:29Z | 3 | jialiangZ |
huggingface/lerobot | 2,181 | How to change SmolVLA action_chunk_size? | I want to change 'action_chunk_size' from 50 to 10. I ran the command like this:
'''
python lerobot/scripts/train.py --policy.path=lerobot/smolvla_base --dataset.repo_id=Datasets/grasp_put --batch_size=16 --steps=40000 --output_dir=outputs/train/vla_chunk10 --job_name=smolvla_training --policy.device=cu... | https://github.com/huggingface/lerobot/issues/2181 | closed | [
"question",
"policies",
"python"
] | 2025-10-12T13:29:35Z | 2025-10-17T11:25:55Z | null | CCCY-0304 |
huggingface/transformers | 41,532 | Where is examples/rag from the original paper? | ### System Info
https://arxiv.org/pdf/2005.11401 mentions https://github.com/huggingface/transformers/blob/main/examples/rag but it is not there. Add redirect if possible
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially... | https://github.com/huggingface/transformers/issues/41532 | closed | [
"bug"
] | 2025-10-12T13:17:53Z | 2025-10-17T09:34:15Z | null | IgorKasianenko |
vllm-project/vllm | 26,653 | [Usage]: Qwen3VL image coordinates issue | ### Your current environment
Hi, I found that with the same image and the same prompt, vLLM serving qwen3vl always returns wrong coordinates.
this is vllm return:
Response: "{\"click_type\": \"left_click\", \"coordinate\": [815, 961]}"
<img width="1093" height="549" alt="Image" src="https://github.com/user-attachments/assets/f55c... | https://github.com/vllm-project/vllm/issues/26653 | closed | [
"usage"
] | 2025-10-12T07:02:29Z | 2025-10-13T03:56:53Z | 2 | lucasjinreal |
huggingface/accelerate | 3,811 | ValueError: Could not find the transformer layer class QwenImageTransformerBlock in the model. | Hi, I am trying to fine-tune qwen-image-edit using accelerate in FSDP mode. I want to wrap the ``QwenImageTransformerBlock`` in the transformer and ``Qwen2_5_VLVisionBlock,Qwen2_5_VLDecoderLayer`` in the text_encoder. I set the environment param
```
def set_fsdp_env():
os.environ["ACCELERATE_USE_FSDP"] = 'true'
os.en... | https://github.com/huggingface/accelerate/issues/3811 | closed | [] | 2025-10-11T10:13:14Z | 2025-11-22T15:06:54Z | 2 | garychan22 |
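For reference, the class list in this env-driven setup normally goes through Accelerate's comma-separated `FSDP_TRANSFORMER_CLS_TO_WRAP` variable, and the error above is what the lookup failure produces when a named class cannot be found in the loaded model. A sketch mirroring the issue's `set_fsdp_env()` helper (class names taken from the issue):
```python
import os

# Env-driven FSDP config, mirroring the issue's set_fsdp_env() helper.
os.environ["ACCELERATE_USE_FSDP"] = "true"
# Comma-separated list of layer classes Accelerate should auto-wrap.
os.environ["FSDP_TRANSFORMER_CLS_TO_WRAP"] = ",".join(
    ["QwenImageTransformerBlock", "Qwen2_5_VLVisionBlock", "Qwen2_5_VLDecoderLayer"]
)
```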
huggingface/lerobot | 2,172 | Add support for remote GPUs (with async inference!) | Hello,
I'm a student in a non-first-world country, and unfortunately, I don't own a PC with an NVIDIA GPU - it costs about $1200 for a decent setup. On the other hand, it costs only $0.12-0.24/hr to rent RTX 4090 instances, so it's pretty cheap to simply rent a computer whenever I need to data collect/tra...
"enhancement",
"question"
] | 2025-10-11T08:49:32Z | 2025-12-19T06:35:21Z | null | MRiabov |
huggingface/transformers | 41,518 | Add Structured Prompt Templates Registry for LLM / VLM / Diffusion Tasks | ### Feature request
Introduce transformers.prompt_templates — a YAML-based registry and accessor API:
```
from transformers import PromptTemplates
PromptTemplates.get("summarization") # "Summarize the following text:"
PromptTemplates.list_tasks() # ["summarization","vqa","ocr",...]
```
- Templates... | https://github.com/huggingface/transformers/issues/41518 | open | [
"Feature request"
] | 2025-10-11T08:10:20Z | 2025-10-13T15:06:20Z | 2 | Aki-07 |
vllm-project/vllm | 26,616 | [Usage]: How to enable MTP when using Qwen3-Next in local inference (not vllm serve) | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.2 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Cou... | https://github.com/vllm-project/vllm/issues/26616 | open | [
"usage"
] | 2025-10-11T03:58:14Z | 2025-10-16T08:45:35Z | 1 | Kimagure7 |
vllm-project/vllm | 26,614 | [Usage]: attn_metadata.seq_lens is not equal to attn_metadata.num_actual_tokens | ### Your current environment
```
Collecting environment information...
uv is set
==============================
System Info
==============================
OS : Ubuntu 20.04.6 LTS (x86_64)
GCC version : (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version ... | https://github.com/vllm-project/vllm/issues/26614 | open | [
"usage"
] | 2025-10-11T03:35:38Z | 2025-10-11T03:36:31Z | 0 | betacatZ |
vllm-project/vllm | 26,612 | [Usage]: qwen3vl 30 A3B errors when starting the vLLM server | ### 📚 The doc issue
A_A800-SXM4-80GB.json']
(Worker pid=1939690) INFO 10-11 10:42:13 [monitor.py:34] torch.compile takes 85.33 s in total
(Worker pid=1939690) INFO 10-11 10:42:14 [gpu_worker.py:298] Available KV cache memory: 13.69 GiB
(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] EngineCore failed ... | https://github.com/vllm-project/vllm/issues/26612 | closed | [
"usage"
] | 2025-10-11T02:45:20Z | 2025-10-16T23:00:39Z | 1 | renkexuan369 |
huggingface/lerobot | 2,171 | Data diffusion and data format conversion | 1. Can datasets collected in Lerobot format be disseminated?
2. Can data formats between different Lerobot versions be converted? I noticed that the data format collected in version 0.2.0 is different from the latest data format.
Thank you! | https://github.com/huggingface/lerobot/issues/2171 | open | [
"question",
"dataset"
] | 2025-10-11T02:16:55Z | 2025-10-17T02:02:36Z | null | FALCONYU |
vllm-project/vllm | 26,607 | [Bug]: Since version 0.9.2 comes with nccl built-in, using PCIE causes sys errors. How to disable nccl in vllm for versions after 0.9.2? | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
<img width="833" height="138" alt="Image" src="https://github.com/user-attachments/assets/a42c415b-8c5b-4698-aa6f-879edc44d512" />
### 🐛 De... | https://github.com/vllm-project/vllm/issues/26607 | open | [
"bug"
] | 2025-10-11T01:48:50Z | 2025-10-17T01:09:03Z | 0 | tina0852 |
pytorch/pytorch | 165,177 | cryptic symbolic shape error with FSDP2 and torch.compile | ### 🐛 Describe the bug
Using FSDP2 and torch.compile with Llama3 (and most other generative models on HuggingFace). I get the following error:
```
AssertionError: s52 (could be from ["L['position_ids']._base.size()[0]"]) not in {
s53: ["L['attention_mask'].size()[1]", "L['attention_mask'].stride()[0]"],
s58: ["L['cac... | https://github.com/pytorch/pytorch/issues/165177 | closed | [
"high priority",
"oncall: distributed",
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 2025-10-10T19:53:04Z | 2025-10-30T18:03:53Z | 8 | AndreasMadsen |
pytorch/tutorials | 3,611 | Feedback about Quickstart | There is the following issue on this page: https://docs.pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html#optimizing-the-model-parameters
System specs: Windows 11, python3.11, pytorch==2.8.0+xpu, Intel oneAPI 2025.2.
I have been following this tutorial, and got this error raised from the test function
```
correct += (pre... | https://github.com/pytorch/tutorials/issues/3611 | open | [
"question",
"core",
"module: xpu",
"windows"
] | 2025-10-10T17:06:21Z | 2025-10-20T03:25:53Z | null | BhavneetSingh7 |
huggingface/hf-hub | 131 | InvalidCertificate and how to fix it | I am trying to install a DuckDB extension written in Rust (https://github.com/martin-conur/quackformers) that uses the library.
During the install, I am getting a
```
HfHub(RequestError(Transport(Transport { kind: ConnectionFailed, message: Some("tls connection init failed"), url: Some(Url { scheme: "https", cannot_be... | https://github.com/huggingface/hf-hub/issues/131 | open | [] | 2025-10-10T14:42:12Z | 2025-10-10T18:18:28Z | null | sahuguet |
vllm-project/vllm | 26,585 | [Usage]: use vllm embedding to extract last token hidden states? | ### Your current environment
```/usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
import p... | https://github.com/vllm-project/vllm/issues/26585 | closed | [
"usage"
] | 2025-10-10T13:01:42Z | 2025-12-15T06:54:05Z | 2 | rxqy |
vllm-project/vllm | 26,582 | [Bug]: which triton-kernels version for MXFP4 Triton backend? | ### Your current environment
vllm v0.11.0 installed via `uv pip install vllm --torch-backend=auto`
triton + triton-kernels at different commits installed from source
### 🐛 Describe the bug
**Which triton + triton-kernels version does one have to install to run GPT-OSS with the MXFP4 Triton backend?**
No matter wh... | https://github.com/vllm-project/vllm/issues/26582 | closed | [
"bug"
] | 2025-10-10T11:51:59Z | 2025-12-12T20:30:06Z | 8 | matkle |
huggingface/lerobot | 2,162 | [Question] How to suppress verbose Svt[info] logs from video encoding during save_episode()? | Hi, thank you for this fantastic library!
I am currently using lerobot (Version: 0.3.3) to record and save robotics data. When I use the `dataset.save_episode()` method, I get a large number of verbose log messages prefixed with Svt[info]:
```shell
Svt[info]: ------------------------------------------- ... | https://github.com/huggingface/lerobot/issues/2162 | closed | [
"question",
"dataset"
] | 2025-10-10T08:56:52Z | 2025-10-13T05:43:01Z | null | zxytql |
huggingface/transformers | 41,494 | Incorrect tokenizer created for gemma gguf files | ### System Info
- `transformers` version: 4.57.0
- Platform: Linux-5.15.0-144-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.5.3
- Accelerate version: 0.34.2
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accel... | https://github.com/huggingface/transformers/issues/41494 | closed | [
"bug"
] | 2025-10-09T23:27:25Z | 2025-11-29T08:02:57Z | 4 | amychen85 |