Dataset schema (one record per GitHub issue; the records below repeat these fields in this order, one field per line):
- repo: string (147 distinct values)
- number: int64 (1 to 172k)
- title: string (2 to 476 characters)
- body: string (0 to 5k characters)
- url: string (39 to 70 characters)
- state: string (2 distinct values: open, closed)
- labels: list of strings (0 to 9 items)
- created_at: timestamp[ns, tz=UTC] (2017-01-18 18:50:08 to 2026-01-06 07:33:18)
- updated_at: timestamp[ns, tz=UTC] (2017-01-18 19:20:07 to 2026-01-06 08:03:39)
- comments: int64 (0 to 58)
- user: string (2 to 28 characters)
vllm-project/vllm
28,246
[Bug]: Return Token Ids not returning Gen Token Ids for GPT-OSS-120b
### Your current environment <details> Using docker image vllm/vllm-openai:latest </details> ### 🐛 Describe the bug When passing the return_token_ids flag to the v1/chat/completions endpoint for GPT-OSS-120b, only prompt_token_ids are returned and not token_ids. We have not seen this happen with any other model except ...
https://github.com/vllm-project/vllm/issues/28246
open
[ "bug" ]
2025-11-06T21:08:16Z
2025-11-07T00:18:25Z
1
sophies-cerebras
pytorch/pytorch
167,242
CUDNN version in nightly pytorch 2.10.0 builds
Hi, I mainly use pytorch with ComfyUI. I know there is an issue with pytorch and CUDNN for which workarounds have been made in the ComfyUI code. I have seen here https://github.com/pytorch/pytorch/issues/166122 that CUDNN 9.15 solves the problem (from what I can understand, as I'm not a developer). Checking today's torch...
https://github.com/pytorch/pytorch/issues/167242
open
[ "module: binaries", "module: cudnn", "triaged" ]
2025-11-06T20:16:08Z
2025-11-30T16:25:21Z
13
jovan2009
pytorch/ao
3,305
[MXFP8 MoE] What's the expected inference solution on H100s, after training with TorchAO MXFP8 MoE?
Hi team, Thanks for your great implementation of the new MXFP8 MoE! I have integrated it and am considering using it for prod training, but I have a concern about inference: MXFP8 is only available on B200. What is the expected inference solution on H100 or even non-NVIDIA GPUs after training with MXFP8? Other qu...
https://github.com/pytorch/ao/issues/3305
open
[ "question", "mx", "moe" ]
2025-11-06T18:45:31Z
2025-11-07T19:20:18Z
null
goldhuang
vllm-project/vllm
28,236
[Feature]: Implement naive prepare/finalize class to replace naive dispatching in fused_moe/layer.py
### 🚀 The feature, motivation and pitch The `FusedMoE` layer has a special case dispatch/combine for EP+DP when there is no specific all2all backend specified. This makes the code in `layer.py` a bit confusing and hard to follow. One way to simplify this is to implement a proper `FusedMoEPrepareAndFinalize` subclas...
https://github.com/vllm-project/vllm/issues/28236
open
[ "help wanted", "good first issue", "feature request" ]
2025-11-06T18:38:38Z
2025-11-12T06:36:29Z
4
bnellnm
vllm-project/vllm
28,233
[Usage]: LogitsProcessor in vLLM 0.9.1: run the same prompt 50 times with batching, applying the logits processor independently to each generation
### Your current environment Goal: run the same prompt 50 times through vLLM 0.9.1, generating independent outputs with a custom LogitsProcessor that forces a comma token after some pattern "xyz" appears in each generation. Requirements: batched execution (process all 50 generations efficiently in parallel); independent...
https://github.com/vllm-project/vllm/issues/28233
open
[ "usage" ]
2025-11-06T18:11:32Z
2025-11-06T18:11:32Z
0
jindalankush28
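A minimal sketch of one way to do this, assuming the legacy per-request `logits_processors` hook on SamplingParams (a plain callable taking the generated token ids and the logits; on 0.9.x this may require the V0 engine, i.e. VLLM_USE_V1=0, since the V1 engine later moved to a class-based API). The token ids and model id are placeholders, not values from the issue:

```python
import torch
from vllm import LLM, SamplingParams

XYZ_ID, COMMA_ID = 123, 11  # hypothetical token ids; resolve them via the tokenizer

def force_comma_after_xyz(token_ids: list[int], logits: torch.Tensor) -> torch.Tensor:
    # Called per sequence with that sequence's own generated token_ids,
    # so the rule applies independently to each of the 50 generations.
    if token_ids and token_ids[-1] == XYZ_ID:
        masked = torch.full_like(logits, float("-inf"))
        masked[COMMA_ID] = logits[COMMA_ID]  # only the comma stays sampleable
        return masked
    return logits

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # stand-in model id
params = SamplingParams(n=50, temperature=0.8,
                        logits_processors=[force_comma_after_xyz])
outputs = llm.generate(["my prompt"], params)  # one request, 50 independent completions
```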
vllm-project/vllm
28,230
[Bug]: GPU VRAM continuously increase during Qwen3-VL usage over days until OOM
### Your current environment Setup: docker run -d \ --runtime nvidia \ --gpus '"device=3,4,5,6"' \ -e TRANSFORMERS_OFFLINE=1 \ -e DEBUG="true" \ -p 8000:8000 \ --ipc=host \ vllm/vllm-openai:v0.11.0 \ --gpu-memory-utilization 0.95 \ --model Qwen/Qwen3-VL-235B-A22B-Instruct-FP8 \ --tensor-parallel-si...
https://github.com/vllm-project/vllm/issues/28230
open
[ "bug" ]
2025-11-06T17:19:18Z
2025-12-02T16:50:26Z
15
yz342
pytorch/pytorch
167,219
Are there limitations to dtensor's registration strategy?
I have an IR schema like this func: my_scatter_add(Tensor x, Tensor(a!) y, Tensor index, Tensor? scale=None, bool use_high_prec=False) -> () This function has no return value, and the second parameter is an in-place parameter. I tried the `register_sharding` method described in the DTensor documentation. However, it thre...
https://github.com/pytorch/pytorch/issues/167219
open
[ "oncall: distributed", "module: dtensor" ]
2025-11-06T14:50:40Z
2025-11-11T13:37:24Z
4
Bin1024
huggingface/datasets
7,852
Problems with NifTI
### Describe the bug There are currently 2 problems with the new NifTI feature: 1. dealing with zipped files, this is mentioned and explained [here](https://github.com/huggingface/datasets/pull/7815#issuecomment-3496199503) 2. when uploading via the `niftifolder` feature, the resulting parquet only contains relative p...
https://github.com/huggingface/datasets/issues/7852
closed
[]
2025-11-06T11:46:33Z
2025-11-06T16:20:38Z
2
CloseChoice
huggingface/peft
2,901
AttributeError: 'float' object has no attribute 'meta'
### System Info peft==0.17.1 torch==2.5.1+cu118 transformers==4.57.0 python==3.12.7 ### Who can help? I am trying to use LoRA with DINOv3 (so a slightly modified ViT-B). However, after a random number of iterations I am hitting this error. It is sadly difficult to reproduce. Maybe someone can hint at what is going...
https://github.com/huggingface/peft/issues/2901
closed
[]
2025-11-06T11:24:18Z
2025-11-17T15:34:08Z
6
Karol-G
vllm-project/vllm
28,192
[RFC]: Support separate NICs for KV cache traffic and MoE traffic
### Motivation. In MoE models with large KV caches, KV cache all-to-all and MoE expert communication share the same RNIC, causing congestion and degrading performance. Using dedicated NICs for each traffic type can improve bandwidth utilization and reduce interference. ### Proposed Change. Does vLLM currently suppor...
https://github.com/vllm-project/vllm/issues/28192
open
[ "RFC" ]
2025-11-06T07:31:17Z
2025-11-06T08:19:56Z
1
JayFzh
vllm-project/vllm
28,186
[Bug] Cannot load qwen3-vl series with lora adapter
I fine-tuned the `Qwen3-VL-8B-Instruct` model using Unsloth. I moved the saved QLoRA adapter and the `Qwen3-VL-2B-Instruct` model to my vLLM server. Then I ran a command to start model serving with vLLM as shown below. (For reference, the vLLM server has no issues—it was already serving official Qwen3-VL models.) ``` ...
https://github.com/vllm-project/vllm/issues/28186
open
[ "bug" ]
2025-11-06T06:02:33Z
2025-11-09T11:16:27Z
4
deepNoah
pytorch/pytorch
167,186
scripts/build_android.sh missing
### 🐛 Build scripts for android deleted, README outdated I was trying to build pytorch v2.9.0 for android, but it seems the build_android.sh script was deleted. Is there any reason why it was deleted? The odd thing is that https://github.com/pytorch/pytorch/blob/v2.9.0/android/README.md references bash ./scripts/build_p...
https://github.com/pytorch/pytorch/issues/167186
closed
[ "triaged", "oncall: mobile" ]
2025-11-06T04:15:16Z
2025-11-07T00:56:14Z
1
ppavacic
pytorch/torchtitan
1,998
[Documentation] [BE] Add docs for MXFP8 training on Blackwell
We have [float8](https://github.com/pytorch/torchtitan/blob/main/docs/float8.md) docs, we should add mxfp8 docs as well, especially since we have a public blog post on accelerating training with torchtitan mxfp8 training: https://pytorch.org/blog/accelerating-2k-scale-pre-training-up-to-1-28x-with-torchao-mxfp8-and-tor...
https://github.com/pytorch/torchtitan/issues/1998
closed
[ "documentation" ]
2025-11-06T02:53:06Z
2025-12-03T21:54:51Z
0
danielvegamyhre
pytorch/pytorch
167,172
[Profiler][XPU] Is there a miss?
Found something: https://github.com/pytorch/pytorch/blob/943227f57bcd638ab288331442748769f907d8c1/torch/csrc/autograd/init.cpp#L390-L419 Should the XPU code also be inside the #if branch? It seems the XPU path depends on the macro `LIBKINETO_NOXPUPTI`? Or does the #if condition also miss `|| !defined(LIBKINETO_NOXPUPTI)`? No...
https://github.com/pytorch/pytorch/issues/167172
closed
[ "triaged", "module: xpu" ]
2025-11-06T02:15:45Z
2025-11-19T05:42:57Z
1
KarhouTam
huggingface/trl
4,481
DPOTrainer._prepare_dataset() adds an extra eos_token to conversationally formatted inputs
## Overview The DPOTrainer unconditionally appends the eos_token to both the "chosen" and "rejected" sequences. Because conversationally formatted inputs will already have the chat template applied, this causes them to have duplicate eos_tokens (Ex. `...<|im_end|><|im_end|>`). A related problem was reported for the [...
https://github.com/huggingface/trl/issues/4481
open
[ "🐛 bug", "🏋 DPO" ]
2025-11-06T01:17:05Z
2025-11-06T18:40:39Z
0
DevonPeroutky
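A tiny illustration of the fix direction described above (a hypothetical helper, not TRL's actual code): append the eos token only when the already chat-templated sequence does not end with it, so conversational inputs never get a duplicate.

```python
# Hypothetical guard against duplicate eos tokens after chat templating.
def append_eos_if_missing(token_ids: list[int], eos_token_id: int) -> list[int]:
    if token_ids and token_ids[-1] == eos_token_id:
        return token_ids          # template already closed the sequence
    return token_ids + [eos_token_id]

assert append_eos_if_missing([5, 7, 2], eos_token_id=2) == [5, 7, 2]
assert append_eos_if_missing([5, 7], eos_token_id=2) == [5, 7, 2]
```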
huggingface/trl
4,468
Move RLOOTrainer to trl.experimental
## Context Part of #4223 and #4374 - Moving trainers to experimental submodule for V1. ## Task Move RLOOTrainer from main trl module to trl.experimental: - [ ] Move trainer file to trl/experimental/ - [ ] Update imports in __init__.py files - [ ] Update documentation - [ ] Add deprecation warning in old location - ...
https://github.com/huggingface/trl/issues/4468
closed
[ "📚 documentation", "✨ enhancement" ]
2025-11-05T21:30:15Z
2025-12-05T18:21:41Z
2
behroozazarkhalili
huggingface/trl
4,466
Move PPOTrainer to trl.experimental
## Context Part of #4223 and #4374 - Moving trainers to experimental submodule for V1. ## Task Move PPOTrainer from main trl module to trl.experimental: - [ ] Move trainer file to trl/experimental/ - [ ] Update imports in __init__.py files - [ ] Update documentation - [ ] Add deprecation warning in old location - [...
https://github.com/huggingface/trl/issues/4466
closed
[ "📚 documentation", "✨ enhancement", "🏋 PPO" ]
2025-11-05T21:29:54Z
2025-11-13T19:01:20Z
0
behroozazarkhalili
huggingface/trl
4,465
Move ORPOTrainer to trl.experimental
## Context Part of #4223 and #4374 - Moving trainers to experimental submodule for V1. ## Task Move ORPOTrainer from main trl module to trl.experimental: - [ ] Move trainer file to trl/experimental/ - [ ] Update imports in __init__.py files - [ ] Update documentation - [ ] Add deprecation warning in old location - ...
https://github.com/huggingface/trl/issues/4465
closed
[ "📚 documentation", "✨ enhancement", "🏋 ORPO" ]
2025-11-05T21:29:44Z
2025-11-21T06:36:32Z
0
behroozazarkhalili
huggingface/trl
4,463
Move KTOTrainer to trl.experimental
## Context Part of #4223 and #4374 - Moving trainers to experimental submodule for V1. ## Task Move KTOTrainer from main trl module to trl.experimental: - [ ] Move trainer file to trl/experimental/ - [ ] Update imports in __init__.py files - [ ] Update documentation - [ ] Add deprecation warning in old location - [...
https://github.com/huggingface/trl/issues/4463
open
[ "📚 documentation", "✨ enhancement", "🏋 KTO" ]
2025-11-05T21:29:25Z
2025-11-05T21:29:50Z
0
behroozazarkhalili
huggingface/trl
4,461
Move OnlineDPOTrainer to trl.experimental
## Context Part of #4223 and #4374 - Moving trainers to experimental submodule for V1. ## Task Move OnlineDPOTrainer from main trl module to trl.experimental: - [ ] Move trainer file to trl/experimental/ - [ ] Update imports in __init__.py files - [ ] Update documentation - [ ] Add deprecation warning in old locati...
https://github.com/huggingface/trl/issues/4461
closed
[ "📚 documentation", "✨ enhancement", "🏋 Online DPO" ]
2025-11-05T21:28:08Z
2025-11-24T01:13:07Z
1
behroozazarkhalili
pytorch/pytorch
167,118
[CI][CUDA][B200] Why does job keep encountering "No devices were found" while "nvidia-smi" on bare-metal returns normal results
### 🐛 Describe the bug JOB link: https://github.com/pytorch/pytorch/actions/runs/19096449521/job/54559623146 Runner/user: dgxb200-08-1003 Nvidia-smi output when logged on the machine: <img width="673" height="560" alt="Image" src="https://github.com/user-attachments/assets/28d124a2-3a4e-408a-8301-4437b2541af5" /...
https://github.com/pytorch/pytorch/issues/167118
closed
[ "high priority", "triage review" ]
2025-11-05T20:06:16Z
2025-11-10T17:16:16Z
4
nWEIdia
vllm-project/vllm
28,152
[Feature]: Factor out `zero_expert_num` from `FusedMoE`
### 🚀 The feature, motivation and pitch We have many special cases in `FusedMoE` for `zero_expert_num`. This parameter is used exclusively for `LongCatFlash`. We should factor this out of `FusedMoE` and put the complexity into the model file. ### Alternatives _No response_ ### Additional context _No response_ ##...
https://github.com/vllm-project/vllm/issues/28152
open
[ "help wanted", "feature request" ]
2025-11-05T19:05:54Z
2025-11-06T20:08:23Z
0
robertgshaw2-redhat
pytorch/ao
3,295
Examples of using LLMs with the PT2E workflow?
Are there examples of using LLMs with the PT2E workflow? I'm interested in static quantization using Qwen3.
https://github.com/pytorch/ao/issues/3295
closed
[ "triaged" ]
2025-11-05T18:33:13Z
2025-12-05T01:12:56Z
3
cjm715
vllm-project/vllm
28,150
[Bug]: -O.mode=NONE (or -cc.mode=NONE) should work
### Your current environment main ### 🐛 Describe the bug Right now -O.mode only accepts integer levels. Ideally it would accept both ints and strings. `vllm serve -O.mode=NONE` # doesn't work `vllm serve -O.mode=0` # does work ### Before submitting a new issue... - [x] Make sure you already searched for relevant...
https://github.com/vllm-project/vllm/issues/28150
closed
[ "bug", "help wanted", "good first issue", "torch.compile" ]
2025-11-05T18:28:23Z
2025-11-12T00:46:20Z
1
zou3519
vllm-project/vllm
28,137
[Feature]: Refactor `aiter_shared_expert_fusion`
### 🚀 The feature, motivation and pitch We have a special case in the `FusedMoE` layer for `aiter_shared_expert_fusion`, which creates various if branches scattered across the layer. We should factor this out. ### Alternatives _No response_ ### Additional context _No response_ ### Before submitting a new issue... - [...
https://github.com/vllm-project/vllm/issues/28137
open
[ "help wanted" ]
2025-11-05T15:54:09Z
2025-12-20T22:00:55Z
3
robertgshaw2-redhat
vllm-project/vllm
28,132
[Usage]: How do I assign a specific GPU to a vLLM docker container?
### Your current environment stock vllm-openai:v0.11.0 docker image rootless Docker v.27.5.1 on Ubuntu 22.04.5 LTS on physical hardware Nvidia Driver Version: 570.133.20 CUDA Version: 12.8 GPUs: 4x H100 (NVLink), numbered 0,1,2,3 ### How would you like to use vllm I want to run inference of [SmolLM3-3B](https://hugg...
https://github.com/vllm-project/vllm/issues/28132
closed
[ "usage" ]
2025-11-05T14:42:17Z
2025-11-06T14:54:41Z
1
lindner-tj
huggingface/lerobot
2,389
How to resolve the issue that GROOT cannot train properly? Below is my training configuration and error log.
accelerate launch \ --multi_gpu \ --num_processes=2 \ $(which lerobot-train) \ --output_dir=./outputs/groot_training \ --save_checkpoint=true \ --batch_size=8 \ --steps=200000 \ --save_freq=2000...
https://github.com/huggingface/lerobot/issues/2389
open
[ "training" ]
2025-11-05T10:17:59Z
2025-11-07T17:47:50Z
null
wuxiaolianggit
huggingface/lerobot
2,388
How to improve the generalization of a VLA model like GR00T
After fine-tuning GR00T, I found that it only works for prompts within the dataset; it has difficulty understanding new words and new items that it needs to grasp. Is there a method to preserve generalization, e.g. could I create a new layer to map the output of the model to a new dimensionality?
https://github.com/huggingface/lerobot/issues/2388
open
[]
2025-11-05T10:06:11Z
2025-11-05T10:44:38Z
null
Temmp1e
vllm-project/vllm
28,119
[Feature]: Will we support async scheduler for pipeline parallel?
### 🚀 The feature, motivation and pitch SGLang already has https://github.com/sgl-project/sglang/pull/11852, and I see a huge perf gap on SM120 PP because of this. ### Alternatives _No response_ ### Additional context _No response_ ### Before submitting a new issue... - [x] Make sure you already searched for rel...
https://github.com/vllm-project/vllm/issues/28119
closed
[ "feature request" ]
2025-11-05T09:55:57Z
2025-11-07T06:14:19Z
4
weireweire
huggingface/gsplat.js
122
I want to add an object (such as a robot) to move around in the model. How can this be achieved?
I want to add an object (such as a robot) to move around in the model. How can this be achieved?
https://github.com/huggingface/gsplat.js/issues/122
open
[]
2025-11-05T09:16:39Z
2025-11-05T09:16:39Z
null
ThinkingInGIS
pytorch/pytorch
167,062
How to use torch.compile on Windows GPU?
### 🐛 Describe the bug I have installed Python 3.13.9 and PyTorch 2.9 with CUDA 13 (pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu130), and my GPU is an RTX 380 12 GB. I have Windows 11. I followed up on those steps - MSVC v143 - VS 2022 C++ x64/x86 build tools - Windows 11 SDK - C++ CMake to...
https://github.com/pytorch/pytorch/issues/167062
open
[ "module: windows", "triaged", "oncall: pt2" ]
2025-11-05T09:04:27Z
2025-11-11T18:16:46Z
null
emadyounan
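A minimal smoke test that separates environment problems from torch.compile problems; note, as an assumption about the platform rather than a statement from the issue, that Inductor's GPU path relies on Triton, which has historically not shipped for native Windows, so this may only pass on CPU there.

```python
import torch

def f(x):
    return torch.sin(x) + torch.cos(x)

compiled = torch.compile(f)
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1024, device=device)
# True if compilation and execution both work on this setup
print(torch.allclose(compiled(x), f(x), atol=1e-5))
```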
vllm-project/vllm
28,104
[Usage]: vllm bench serve cannot use the ShareGPT dataset
### Your current environment ```text I ran the following benchmarks command: vllm bench serve --model Qwen3 --tokenizer /mnt/workspace/models --host 127.0.0.1 --port 80 --num-prompts 400 --percentile-metrics ttft,tpot,itl,e2el --metric-percentiles 90,95,99 --dataset-name sharegpt --dataset-path /mnt/workspace/benchmarks/sharegpt/ShareGPT_...
https://github.com/vllm-project/vllm/issues/28104
open
[ "usage" ]
2025-11-05T06:18:02Z
2025-11-06T14:24:46Z
1
uOnePiece
pytorch/pytorch
167,042
Requesting Cuda 13 support
### 🚀 The feature, motivation and pitch Hi! I am trying to run Torch with GPU support. I am running on Windows, with CUDA toolkit 13 installed, and the latest nvidia drivers. `torch.cuda.is_available()` is showing as False. Is it safe to assume this is because it needs CUDA 12? I'm brand new to Torch, but do a bit o...
https://github.com/pytorch/pytorch/issues/167042
closed
[]
2025-11-05T01:41:01Z
2025-11-05T01:51:37Z
1
David-OConnor
pytorch/pytorch
167,027
combine compiled vectorized function without recompiling already compiled part
### 🚀 The feature, motivation and pitch The nice thing of `torch.compile` is that it fuses the vectorized operations and avoid big intermediate tensors. For example, if I have ``` def func(x): y = f1(x) z = f2(y) return z ``` After `torch.compile` it becomes something like ``` for(int i=0;i<len(x);i++) { ...
https://github.com/pytorch/pytorch/issues/167027
open
[ "triaged", "intel", "oncall: pt2", "module: inductor" ]
2025-11-05T00:16:52Z
2025-11-11T18:15:06Z
1
SUSYUSTC
vllm-project/vllm
28,070
[Usage]: Is there a way to control default thinking behaviour of a model?
### Your current environment Is there a way to control the default thinking behaviour for models deployed through vllm? As per https://docs.vllm.ai/en/stable/features/reasoning_outputs.html, IBM Granite 3.2 reasoning is disabled by default. Qwen3, GLM 4.6, Deepseek V3.1 all have reasoning enabled by default. It would be g...
https://github.com/vllm-project/vllm/issues/28070
closed
[ "usage" ]
2025-11-04T22:03:32Z
2025-12-30T03:38:48Z
0
yz342
vllm-project/vllm
28,056
[Bug]: Missing libarm_compute.so in Arm CPU pip installed wheels
### Your current environment <details> <summary>The output of <code>python collect_env.py</code></summary> ```text Your output of `python collect_env.py` here ``` </details> ### 🐛 Describe the bug We now have vllm wheels for Arm CPUs in pypi thanks to https://github.com/vllm-project/vllm/pull/26931 and https://g...
https://github.com/vllm-project/vllm/issues/28056
closed
[ "bug" ]
2025-11-04T17:22:55Z
2025-11-13T05:43:10Z
2
fadara01
pytorch/torchtitan
1,989
Should MFU/tflops take tensor parallelism into account?
Right now model FLOPs are computed before TP is applied, but TP changes the sizes of the matrices, so shouldn't the FLOPs computation be different as well?
https://github.com/pytorch/torchtitan/issues/1989
open
[ "question" ]
2025-11-04T16:51:12Z
2025-11-05T00:04:49Z
null
chelsea0x3b
vllm-project/vllm
28,046
Qwen3-Omni model inference : ValueError: Either SamplingParams or PoolingParams must be provided.
### Your current environment ```text The output of `python web_demo.py` ``` The above mentioned method provides the error below ``` qwen/Qwen3-Omni/collect_env.py", line 287, in get_vllm_version from vllm import __version__, __version_tuple__ ImportError: cannot import name '__version__' from 'vllm' (unknown lo...
https://github.com/vllm-project/vllm/issues/28046
closed
[ "usage" ]
2025-11-04T13:59:57Z
2025-11-24T19:24:39Z
22
Tortoise17
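For the ValueError above ("Either SamplingParams or PoolingParams must be provided"), one thing to rule out is that no sampling parameters reach the engine. A minimal offline sketch that passes them explicitly; the model id is a stand-in, not taken from the issue:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen3-Omni-30B-A3B-Instruct")  # stand-in model id
params = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.generate(["Describe the attached clip."], params)
print(outputs[0].outputs[0].text)
```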
vllm-project/vllm
28,045
[Doc]: Any detailed documentation about how to load_weights in customized vllm model?
### 📚 The doc issue I don't know how to modify the attention or how load_model works. The documentation says too little, and I find it hard to understand. Does anyone have more detailed experience? Thank you! ### Suggest a potential alternative/fix _No response_ ### Before submitting a new issue... - [x] Make su...
https://github.com/vllm-project/vllm/issues/28045
open
[ "documentation" ]
2025-11-04T13:23:25Z
2025-11-05T02:07:55Z
0
sleepwalker2017
vllm-project/vllm
28,035
[Usage]: deepseek-ocr The output token count is too low and unstable.
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm python3 -m vllm.entrypoints.openai.api_server --served-model-name deepseek-ocr --model deepseekocr --tensor-parallel-size 1 --gpu-memory-utilization 0.95 --disable-log-requests --logits_processors vllm....
https://github.com/vllm-project/vllm/issues/28035
open
[ "usage" ]
2025-11-04T09:50:53Z
2025-11-04T09:50:53Z
0
sixgod-666
vllm-project/vllm
28,031
[Usage]: Error: Failed to initialize the TMA descriptor 700
### Your current environment vllm0.11.0 to train Qwen3-vl-8B The following error message appears intermittently during training. ``` [36m(WorkerDict pid=82555) TMA Desc Addr: 0x7f4e2736b080 (WorkerDict pid=82555) format 9 (WorkerDict pid=82555) dim 4 (WorkerDict pid=...
https://github.com/vllm-project/vllm/issues/28031
open
[ "usage" ]
2025-11-04T08:13:45Z
2025-12-11T08:18:15Z
4
DBMing
vllm-project/vllm
28,016
[Usage]: How to recognize PDFs in DeepSeek-OCR with openai
### Your current environment ``` vllm serve deepseek-ai/DeepSeek-OCR --logits_processors vllm.model_executor.models.deepseek_ocr.NGramPerReqLogitsProcessor --no-enable-prefix-caching --mm-processor-cache-gb 0 ``` ### How would you like to use vllm How to recognize PDFs and convert PDFs to Markdown with DeepSeek-OCR...
https://github.com/vllm-project/vllm/issues/28016
open
[ "usage" ]
2025-11-04T03:35:38Z
2025-11-04T07:33:07Z
2
shoted
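The OpenAI-compatible endpoint accepts images, not PDFs, so a common pattern is rasterizing pages first and sending each page as a base64 image. A sketch assuming pdf2image (which needs poppler installed) and the openai client; the prompt text and served model name mirror the command above but are otherwise illustrative:

```python
import base64, io
from openai import OpenAI
from pdf2image import convert_from_path  # requires poppler on the system

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

for page in convert_from_path("doc.pdf", dpi=200):
    buf = io.BytesIO()
    page.save(buf, format="PNG")
    b64 = base64.b64encode(buf.getvalue()).decode()
    resp = client.chat.completions.create(
        model="deepseek-ai/DeepSeek-OCR",
        messages=[{"role": "user", "content": [
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
            {"type": "text", "text": "Convert this page to Markdown."},
        ]}],
    )
    print(resp.choices[0].message.content)
```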
vllm-project/vllm
28,003
[Usage]:
### Your current environment ```text Collecting environment information... ============================== System Info ============================== OS : Ubuntu 22.04.5 LTS (x86_64) GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version : Cou...
https://github.com/vllm-project/vllm/issues/28003
open
[ "usage" ]
2025-11-03T21:19:15Z
2025-11-26T15:32:40Z
1
amitmvyas
pytorch/ao
3,281
[moe training] Update torchao docsite with MoE training docs
Currently the MoE training docs live in this [README](https://github.com/pytorch/ao/blob/main/torchao/prototype/moe_training/README.md). To make the prototype more discoverable and usable, we should: 1. Update the [docsite](https://docs.pytorch.org/ao/stable/index.html) 2. Update torchtitan docs with examples fo...
https://github.com/pytorch/ao/issues/3281
open
[ "topic: documentation", "moe" ]
2025-11-03T18:34:01Z
2025-11-03T18:34:10Z
0
danielvegamyhre
vllm-project/vllm
27,995
[RFC]: Make PassConfig flags less verbose
### Motivation. Almost all `PassConfig` field names have `enable_` in the name, which is unnecessarily verbose. They are also pretty long, and sometimes not descriptive enough. Finally, `enable_fusion` should be split into rmsnorm+quant and activation+quant flags as we want to control these flags separately. ### Prop...
https://github.com/vllm-project/vllm/issues/27995
closed
[ "help wanted", "good first issue", "RFC", "torch.compile" ]
2025-11-03T17:49:29Z
2025-12-03T19:53:01Z
7
ProExpertProg
huggingface/peft
2,888
Potential remote code execution via untrusted tokenizer_kwargs in PromptEmbedding
### Description A remote code execution vector exists in the PEFT prompt-tuning flow. A remote `adapter_config.json` can inject loader kwargs that are forwarded to `AutoTokenizer.from_pretrained` calls. If an attacker sets `"tokenizer_kwargs": {"trust_remote_code": true}` and points `tokenizer_name_or_path` at an atta...
https://github.com/huggingface/peft/issues/2888
closed
[]
2025-11-03T16:04:52Z
2025-11-04T17:50:28Z
3
Vancir
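An illustrative mitigation for the report above, not PEFT's actual fix: strip dangerous keys from the untrusted config before forwarding anything to the tokenizer loader. The denylist and function name are hypothetical.

```python
import json
from transformers import AutoTokenizer

UNSAFE_KEYS = {"trust_remote_code"}  # illustrative denylist

def load_tokenizer_from_untrusted_config(path: str):
    with open(path) as f:
        cfg = json.load(f)
    kwargs = dict(cfg.get("tokenizer_kwargs") or {})
    dropped = UNSAFE_KEYS & kwargs.keys()
    for key in dropped:
        kwargs.pop(key)  # never forward attacker-controlled loader flags
    if dropped:
        print(f"warning: ignored unsafe tokenizer_kwargs: {sorted(dropped)}")
    return AutoTokenizer.from_pretrained(cfg["tokenizer_name_or_path"], **kwargs)
```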
pytorch/pytorch
166,866
ROCm failures during provisioning step due to network issues
## Current Status Mitigated MI250 Cirrascale cluster had a network outage causing jobs to fail ## Error looks like Error during Set up job: ``` Download action repository 'pytorch/pytorch@main' (SHA:335b5c7d4bf3295d517902370142f007ca024cd0) Warning: Failed to download action 'https://api.github.com/repos/pytorch/pyto...
https://github.com/pytorch/pytorch/issues/166866
closed
[ "module: rocm", "ci: sev" ]
2025-11-03T15:57:42Z
2025-11-04T23:54:15Z
5
atalman
huggingface/lerobot
2,371
memory increase continuously during training Groot
### System Info ```Shell - lerobot version: 0.4.1 - Platform: Linux-5.4.250-2-velinux1u3-amd64-x86_64-with-glibc2.31 - Python version: 3.10.15 - Huggingface Hub version: 0.35.3 - Datasets version: 4.1.1 - Numpy version: 2.1.3 - PyTorch version: 2.7.1+cu126 - Is PyTorch built with CUDA support?: True - Cuda version: 12...
https://github.com/huggingface/lerobot/issues/2371
open
[ "question", "policies", "performance" ]
2025-11-03T14:38:52Z
2025-12-31T13:17:11Z
null
caoran2025
pytorch/torchtitan
1,979
question of PP x aux_loss for MoE
In short, does PP allow multiple-args input and multiple-args output? —— Hey, we’ve been stuck for a while on how to properly integrate aux loss for MoE training with PP and compile(full_graph). For context, both DeepSeek V3 and GLM 4.5 mention that > “We also applied an auxiliary sequence-level balance loss with ...
https://github.com/pytorch/torchtitan/issues/1979
open
[]
2025-11-03T13:37:44Z
2025-11-20T02:22:30Z
13
rakkit
vllm-project/vllm
27,982
[Usage]: How can I access or return hidden states (representations) after generation?
### Your current environment In my training pipeline (GRPO), I need to access hidden-state representations of all layers and store prompt representations alongside generated sequences. Is there any supported way to extract or return hidden states from the vLLM inference engine? Environment vllm==0.11.0 Python 3.12 #...
https://github.com/vllm-project/vllm/issues/27982
open
[ "usage" ]
2025-11-03T13:01:51Z
2025-11-04T03:07:40Z
1
hakbari14
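vLLM's generate path does not expose per-layer hidden states, so a common workaround (at extra compute cost) is a second forward pass through transformers over the prompt plus the vLLM-generated continuation. The model id and texts below are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen2.5-0.5B-Instruct"  # stand-in model id
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.bfloat16, output_hidden_states=True
).eval()

prompt, generated = "What is 2+2?", " 4."  # would come from the vLLM rollout
ids = tok(prompt + generated, return_tensors="pt")
with torch.no_grad():
    out = model(**ids)
# tuple of (num_layers + 1) tensors, each of shape [1, seq_len, hidden_size]
print(len(out.hidden_states), out.hidden_states[-1].shape)
```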
huggingface/lerobot
2,368
Release 0.5.0
A Github Issue created for the upcoming release to discuss the planned features & changes: * Audio PR #967 * Bump transformers dependency to +v5
https://github.com/huggingface/lerobot/issues/2368
open
[ "bug", "question", "dependencies" ]
2025-11-03T12:46:51Z
2025-12-24T00:08:16Z
null
imstevenpmwork
vllm-project/vllm
27,981
[Usage]: How to specify max_pixels for qwenvl2.5
### Your current environment As in the title: I tried ``--mm-processor-kwargs {"max_pixels": $MAX_PIXELS}`` and it had no effect. ### How would you like to use vllm I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm. ### Before submitting a new issue... - [x] Make sure you already searched for rel...
https://github.com/vllm-project/vllm/issues/27981
open
[ "usage" ]
2025-11-03T12:38:34Z
2025-11-04T08:19:54Z
3
aJupyter
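For reference, the offline API takes these kwargs as a real dict, which sidesteps CLI quoting issues; on the CLI the JSON needs shell quoting, e.g. --mm-processor-kwargs '{"max_pixels": 1003520}'. A sketch following the Qwen2.5-VL pixel convention; the values are illustrative:

```python
from vllm import LLM

llm = LLM(
    model="Qwen/Qwen2.5-VL-7B-Instruct",
    mm_processor_kwargs={
        "min_pixels": 28 * 28,         # floor: one 28x28 vision patch
        "max_pixels": 1280 * 28 * 28,  # cap resolution to bound memory
    },
)
```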
huggingface/accelerate
3,829
Does Accelerate automatically set the DataLoader’s sampler to a DistributedSampler?
```python from accelerate import Accelerator accelerator = Accelerator() device = accelerator.device model, optimizer, training_dataloader, scheduler = accelerator.prepare( model, optimizer, training_dataloader, scheduler ) for batch in training_dataloader: optimizer.zero_grad() inputs, targets = batch ...
https://github.com/huggingface/accelerate/issues/3829
closed
[]
2025-11-03T07:17:29Z
2025-12-16T15:09:43Z
2
caixxiong
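Short answer from the Accelerate docs: prepare() re-wraps the dataloader so each process receives a different shard; you do not set a DistributedSampler yourself. A quick empirical check, run under `accelerate launch --num_processes 2`; with two processes each rank should print a disjoint half of 0..15:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()
dataset = TensorDataset(torch.arange(16))
loader = accelerator.prepare(DataLoader(dataset, batch_size=2))

# Collect every sample this process actually sees
seen = torch.cat([batch[0] for batch in loader])
print(f"rank {accelerator.process_index}: {seen.tolist()}")
```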
vllm-project/vllm
27,957
[Usage]: What is the difference between embedding task and pooler task?
### Your current environment Any document about this? ### How would you like to use vllm I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot l...
https://github.com/vllm-project/vllm/issues/27957
closed
[ "usage" ]
2025-11-03T03:38:39Z
2025-11-03T10:20:18Z
1
sleepwalker2017
vllm-project/vllm
27,949
[Usage]: How do I deploy GGUF models with vLLM via Docker correctly?
### Your current environment ```text The output of `python collect_env.py` ``` Here is the output from `sudo python3 collect_env.py` ``` Traceback (most recent call last): File "/export/nvme/vllm/collect_env.py", line 18, in <module> import regex as re ModuleNotFoundError: No module named 'regex' ``` ### How w...
https://github.com/vllm-project/vllm/issues/27949
open
[ "usage" ]
2025-11-02T23:33:49Z
2025-11-02T23:36:44Z
1
alpha754293
huggingface/xet-core
549
How to get the "Xet backed hash"?
Hi, On HuggingFace, every page has a "Xet backed hash" (I've attached an example below) and I am trying to figure out how to compute that locally. I've read the documentation and it says there are 4 types of different hashes but it's not really clear how a "Xet backed hash" is calculated. So I was just wondering if ...
https://github.com/huggingface/xet-core/issues/549
closed
[]
2025-11-02T09:40:39Z
2025-11-06T16:20:25Z
null
arch-btw
huggingface/lerobot
2,360
diffusion transformer
Has anyone in lerobot replaced the diffusion UNet with a DiT?
https://github.com/huggingface/lerobot/issues/2360
open
[ "question", "policies" ]
2025-11-02T09:05:30Z
2025-11-12T09:01:59Z
null
Benxiaogu
vllm-project/vllm
27,928
[Bug]: What happened to /get_world_size ?
### Your current environment vllm 0.11.0 trl 0.24.0 python 3.12 linux amd64 ### 🐛 Describe the bug TRL is expecting a `/get_world_size` route https://github.com/huggingface/trl/blob/main/trl/extras/vllm_client.py#L279 for its GRPO trainer. That gives a 404 on the latest version of vLLM. Was this changed to anothe...
https://github.com/vllm-project/vllm/issues/27928
open
[ "bug" ]
2025-11-01T22:56:45Z
2025-11-03T02:42:14Z
1
pbarker-synth
huggingface/lerobot
2,356
AsyncInference only running one action chunk
I have my SO101 arms connected to my computer, and I'm running an asynchronous server on a cloud GPU with a RTX 4090. When I start running Pi0.5, the model is loaded and the SO101 makes its first move by setting the robot to be at its middle position, but then no further actions are made although the server logs new o...
https://github.com/huggingface/lerobot/issues/2356
open
[ "question", "robots" ]
2025-11-01T20:31:10Z
2025-12-23T01:10:35Z
null
kevinjosethomas
pytorch/pytorch
166,802
add ability to automatically set `set_per_process_memory_fraction` using env variable
### 🚀 The feature, motivation and pitch Hi, In multi-user / multi-tenant GPU environments (e.g., Slurm clusters, Kubernetes GPU slicing, or MPS-based sharing), it is often desirable to constrain the GPU memory usage of a process externally, without modifying the application code. Currently, torch.cuda.set_per_proces...
https://github.com/pytorch/pytorch/issues/166802
closed
[ "module: cuda", "module: memory usage", "triaged" ]
2025-11-01T19:22:40Z
2025-11-07T16:58:15Z
4
orena1
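Pending such a feature, the behavior can be approximated in user code at process startup; the environment variable name here is hypothetical, while torch.cuda.set_per_process_memory_fraction is the real API the request builds on:

```python
import os
import torch

# Hypothetical env var an operator could set externally, e.g. in Slurm/K8s
frac = os.environ.get("TORCH_CUDA_MEMORY_FRACTION")
if frac is not None and torch.cuda.is_available():
    for dev in range(torch.cuda.device_count()):
        torch.cuda.set_per_process_memory_fraction(float(frac), dev)
```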
pytorch/pytorch
166,796
[ROCm][CI] Machines under the label linux.rocm.gpu.2, label linux.rocm.gpu.4, linux.rocm.gpu.gfx1100 are undergoing maintenance.
> NOTE: Remember to label this issue with "`ci: sev`" > If you want autorevert to be disabled, keep the ci: disable-autorevert label ## Current Status *Status could be: preemptive, ongoing, mitigated, c...
https://github.com/pytorch/pytorch/issues/166796
closed
[ "module: rocm", "ci: sev" ]
2025-11-01T14:59:52Z
2025-11-03T11:04:50Z
0
amdfaa
vllm-project/vllm
27,916
[Feature]: Does the latest version support LoRA for visual models?
### 🚀 The feature, motivation and pitch When I loaded the Qwen2.5-VL model fine-tuned with LoRA using vllm version 0.8.4, I encountered the following prompt: > Regarding multimodal models, vLLM currently only supports adding LoRA to language model, visual.blocks.31.mlp.up_proj will be ignored. I found an issue https:...
https://github.com/vllm-project/vllm/issues/27916
closed
[ "feature request" ]
2025-11-01T12:23:36Z
2025-12-26T12:48:22Z
1
SmartNight-cc
huggingface/lerobot
2,354
Cannot reproduce SmolVLA results on LIBERO benchmark
Hello, I am trying to reproduce LIBERO benchmark results of [SmolVLA](https://huggingface.co/HuggingFaceVLA/smolvla_libero). However, I can't reproduce results on neither [leaderboard](https://huggingface.co/spaces/HuggingFaceVLA/libero-vla-leaderboard) and [paper](https://arxiv.org/abs/2506.01844) I am working on NV...
https://github.com/huggingface/lerobot/issues/2354
open
[ "question", "policies", "simulation" ]
2025-11-01T11:20:05Z
2026-01-05T08:38:48Z
null
Hesh0629
huggingface/trl
4,419
GRPO with reward model. CUDA out of memory. How to fix? Thank you very much.
train_grpo.py: ```python import argparse import os from typing import Callable, Dict, List, Optional import torch from datasets import Dataset, load_dataset from transformers import ( AutoModelForCausalLM, AutoTokenizer, AutoModelForSequenceClassification, pipeline, set_seed, ) from trl import GRPO...
https://github.com/huggingface/trl/issues/4419
open
[ "🏋 Reward", "🏋 GRPO" ]
2025-11-01T10:29:28Z
2025-11-20T12:26:50Z
null
guotong1988
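Typical memory levers for the OOM above, all standard GRPOConfig/TrainingArguments fields; the values are illustrative starting points under the assumption of a single constrained GPU, not a guaranteed fix:

```python
from trl import GRPOConfig

training_args = GRPOConfig(
    output_dir="grpo-out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,   # keep the effective batch size
    gradient_checkpointing=True,      # trade compute for activation memory
    bf16=True,
    num_generations=4,                # fewer completions per prompt
    max_completion_length=256,        # shorter rollouts shrink activations
)
```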
pytorch/ao
3,274
Proposal to add a beginner-friendly introduction tutorial for TorchAO
Hello TorchAO community, I would like to contribute a beginner-friendly notebook tutorial that introduces TorchAO to users who are new to model optimization and to TorchAO (or even PyTorch in general). As someone coming from a different background with limited experience in quantization and model optimization, I foun...
https://github.com/pytorch/ao/issues/3274
open
[ "topic: documentation" ]
2025-11-01T07:47:08Z
2025-11-04T04:25:26Z
2
smishra8
vllm-project/vllm
27,912
[Usage]: How should I use the CPU to deploy Qwen3-VL-30B-A3B?
### Your current environment ```text The output of `python collect_env.py` ``` (APIServer pid=1033476) Traceback (most recent call last): (APIServer pid=1033476) File "/home/maxgameone/anaconda3/bin/vllm", line 33, in <module> (APIServer pid=1033476) sys.exit(load_entry_point('vllm==0.11.1rc6.dev33+g3a5de7d2d.cp...
https://github.com/vllm-project/vllm/issues/27912
open
[ "usage" ]
2025-11-01T07:40:04Z
2025-11-01T07:40:04Z
0
maxgameone
pytorch/torchtitan
1,977
Why is the ep mesh derived from a factoring of the dp mesh, instead of its own dimension?
I see that the data parallel shard dimension is factored into two dimensions, `dp_shard_mod_ep` and `dp_shard_in_ep`. The experts use `dp_shard_mod_ep` submesh for FSDP while the rest of the blocks use the regular `dp_shard_cp` submesh. Why can't the experts use FSDP on the regular `dp_mesh`? The reason for this is un...
https://github.com/pytorch/torchtitan/issues/1977
open
[ "question" ]
2025-11-01T02:07:24Z
2025-12-02T01:34:16Z
null
man2machine
vllm-project/vllm
27,899
[Bug]: Inductor specialize after 2.9 rebase
### Your current environment NA ### 🐛 Describe the bug Could you or someone have a look at the compile ranges [PR](https://github.com/vllm-project/vllm/pull/24252) again? It seems to have stopped working with the update to pytorch 2.9. We started getting failed assertions in generated code, as if it was compiled for a single sha...
https://github.com/vllm-project/vllm/issues/27899
closed
[ "bug" ]
2025-10-31T22:16:27Z
2025-11-07T00:03:25Z
7
laithsakka
vllm-project/vllm
27,898
[Doc]: Multi-node EP on EFA (i.e. no IBGDA/DeepEP)
### 📚 The doc issue Usecase: On AWS we have EFA for high bandwidth interconnect, not Infiniband, so no IBGDA. The [documentation](https://docs.vllm.ai/en/latest/serving/expert_parallel_deployment.html#backend-selection-guide) indicates that the DeepEP kernels should be used for multi/inter-node EP, and pplx for sing...
https://github.com/vllm-project/vllm/issues/27898
open
[ "documentation" ]
2025-10-31T21:22:28Z
2025-11-06T19:50:07Z
1
nathan-az
huggingface/peft
2,884
[Question/Bug] How to safely continue LoRA fine-tuning under DeepSpeed ZeRO-3 (multi-stage training with modules_to_save)
Hi, I’m trying to perform multi-stage LoRA fine-tuning under DeepSpeed ZeRO-3 using PEFT. However, continuing training on an existing LoRA checkpoint without merging causes a series of errors and conflicts. Problem When I load the LoRA from Stage 1 and attempt to continue training: • load_state_dict() throws shape ...
https://github.com/huggingface/peft/issues/2884
closed
[]
2025-10-31T20:13:12Z
2025-12-09T15:05:26Z
null
XiangZhang-zx
pytorch/ao
3,270
[DOCS] Quick Start Guide PT2E Example does not work as is. Undefined objects
The PT2E example in the quick start guide does not work as is: there are many undefined objects (no import for `convert_pt2e`, and `example_inputs` is not defined, for example), plus some indentation issues. See: https://docs.pytorch.org/ao/0.13/quick_start.html#pytorch-2-export-quantization
https://github.com/pytorch/ao/issues/3270
open
[ "topic: documentation", "triaged" ]
2025-10-31T18:46:28Z
2025-12-05T01:14:53Z
1
cjm715
pytorch/pytorch
166,736
Aarch64 unit test failures from nightly/manylinux build, jammy upgrade to gcc13 needed
### 🐛 Describe the bug We have noticed 2 test failures on AArch64 ( neoverse-v2 / c8g ) which are not happening in https://github.com/pytorch/pytorch/actions/workflows/linux-aarch64.yml ``` Mismatched elements: 1 / 513 (0.2%) Greatest absolute difference: 253 at index (512,) Greatest relative difference: 1.0 at inde...
https://github.com/pytorch/pytorch/issues/166736
closed
[ "module: binaries", "module: ci", "triaged", "module: arm" ]
2025-10-31T17:25:47Z
2025-12-09T20:47:45Z
11
robert-hardwick
huggingface/lerobot
2,351
Details of adapting SmolVLA to other robotic arms with different configurations
I want to deploy the untuned `smolvla_base` model directly onto my AgileX PIPER robotic arm. I ran into the following two issues along the way: 1. Missing normalization parameters in the metadata. ``` File "/home/zwt/miniconda3/envs/lerobot/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate...
https://github.com/huggingface/lerobot/issues/2351
closed
[ "question", "policies" ]
2025-10-31T14:55:35Z
2025-12-14T14:47:04Z
null
yquanli
vllm-project/vllm
27,880
[Installation]: [HELP]How to install the latest main version of vllm
### Your current environment I cloned the vllm code and ran the install commands, but it fails. Help!! ### How you are installing vllm ```sh VLLM_USE_PRECOMPILED=1 uv pip install --editable . Using Python 3.10.12 environment at: /home/alice/.venv × No solution found when resolving dependencies: ╰─▶ Because there is ...
https://github.com/vllm-project/vllm/issues/27880
closed
[ "installation" ]
2025-10-31T13:57:20Z
2025-11-13T07:25:13Z
7
sleepwalker2017
vllm-project/vllm
27,877
[Usage]: How to install the nightly version? Why doesn't this command work?
### Your current environment I ran this to install vllm with the latest code, but the installed vllm doesn't include the code I need. I checked the `siglip.py` file; it was modified 4 days ago, but the installed vllm doesn't contain this commit! https://github.com/vllm-project/vllm/pull/27566/files#diff-ca771...
https://github.com/vllm-project/vllm/issues/27877
open
[ "usage" ]
2025-10-31T12:29:51Z
2025-10-31T12:38:19Z
0
sleepwalker2017
pytorch/pytorch
166,721
Reference cycle in PyCodegen keeps tensors alive longer than necessary leading to OOM issues
### 🐛 Describe the bug PR with fix: https://github.com/pytorch/pytorch/pull/166714 A recursive function call creates a reference cycle: closure <- function <- cell inside closure. Capturing self (the PyCodegen instance) in the same closure prolongs its life until the next gc.collect(), which might result in worse resource manageme...
https://github.com/pytorch/pytorch/issues/166721
closed
[ "triaged", "oncall: pt2", "module: dynamo" ]
2025-10-31T12:02:30Z
2025-11-07T17:52:57Z
1
jwieczorekhabana
vllm-project/vllm
27,875
[Usage]: how to enable the profiler on the OpenAI server
### Your current environment ```text INFO 10-31 10:27:06 [importing.py:17] Triton not installed or not compatible; certain GPU-related functions will not be available. WARNING 10-31 10:27:06 [importing.py:29] Triton is not installed. Using dummy decorators. Install it via `pip install triton` to enable kernel compilat...
https://github.com/vllm-project/vllm/issues/27875
closed
[ "usage" ]
2025-10-31T10:33:49Z
2025-10-31T14:38:04Z
1
zhaohaixu
vllm-project/vllm
27,872
[Feature]: AFD: support loading a custom connector module from a local path
### 🚀 The feature, motivation and pitch Add an `afd_connector_module_path` field to AFDConfig so users can implement a custom AFD connector without needing to change vllm code. To land after https://github.com/vllm-project/vllm/pull/25162 is merged. ### Alternatives _No response_ ### Additional context _No response_ ### Before subm...
https://github.com/vllm-project/vllm/issues/27872
open
[ "feature request" ]
2025-10-31T09:08:50Z
2025-12-08T03:32:33Z
1
lengrongfu
huggingface/trl
4,413
What is the default value of num_processes?
Based on the documentation in docs/source/grpo_trainer.md, num_processes is used, but nowhere does the documentation define what num_processes is or what its default value is.
https://github.com/huggingface/trl/issues/4413
closed
[ "📚 documentation", "❓ question", "🏋 GRPO" ]
2025-10-31T05:01:23Z
2025-10-31T17:31:33Z
null
thisisraghavkumar
huggingface/diffusers
12,564
[Proposals Welcome] Fal Flashpack integration for faster model loading
Hey! 👋 We've had a request to explore integrating Fal's Flashpack for faster DiT and Text Encoder loading (https://github.com/huggingface/diffusers/issues/12550). Before we jump into implementation, we wanted to open this up to the community to gather ideas and hear from anyone who's experimented with this. We'd lov...
https://github.com/huggingface/diffusers/issues/12564
open
[ "help wanted", "contributions-welcome" ]
2025-10-31T02:25:55Z
2025-10-31T12:26:13Z
2
yiyixuxu
vllm-project/vllm
27,832
[RFC]: Remap `CompilationConfig` from `-O` to `-cc` in CLI
### Motivation. With #20283 (and #26847), we're repurposing `-O0`/`-O1`/`-O2`/`-O3` to map to `optimization_level` instead of `CompilationConfig.level`/`CompilationConfig.mode`. This leaves us in a slightly confusing state where `-O` can refer to optimization level or compilation config depending on what follows it: -...
https://github.com/vllm-project/vllm/issues/27832
closed
[ "help wanted", "good first issue", "RFC", "torch.compile" ]
2025-10-30T20:29:31Z
2025-11-28T21:51:13Z
3
ProExpertProg
huggingface/trl
4,407
Complete paper index
These are the papers mentioned at least once in the codebase. - [ ] https://huggingface.co/papers/1707.06347 - [x] https://huggingface.co/papers/1909.08593 (only mentioned in notebook, no need to have in paper index) - [x] https://huggingface.co/papers/1910.02054 #4551 - [ ] https://huggingface.co/papers/1910.10683 - [...
https://github.com/huggingface/trl/issues/4407
open
[ "📚 documentation" ]
2025-10-30T20:23:26Z
2025-12-24T05:50:21Z
4
qgallouedec
vllm-project/vllm
27,830
[Usage]: GPT OSS 120b on L40S (Ada)
### Your current environment (Just a general question) ### How would you like to use vllm I want to run inference of GPT OSS 120b with multiple L40S. I read the [docs](https://docs.vllm.ai/projects/recipes/en/latest/OpenAI/GPT-OSS.html), which clearly say it is not natively supported yet. After I had no success w...
https://github.com/vllm-project/vllm/issues/27830
closed
[ "usage" ]
2025-10-30T20:07:42Z
2025-11-17T12:46:43Z
6
Hansehart
vllm-project/vllm
27,823
[Doc]: Multi-node distributed guide issues
### 📚 The doc issue For context, see a recent issue (https://github.com/ROCm/ROCm/issues/5567) where a user was trying to set up distributed inference with `ray` by following guidance at https://docs.vllm.ai/en/v0.8.0/serving/distributed_serving.html#running-vllm-on-multiple-nodes. I ran into several issues setting t...
https://github.com/vllm-project/vllm/issues/27823
open
[ "documentation" ]
2025-10-30T18:33:04Z
2025-10-30T18:33:04Z
0
schung-amd
huggingface/trl
4,399
Update or remove some of the notebooks
I suspect these notebooks are outdated; if so, they should be either updated or removed. - gpt2-sentiment-control.ipynb - best_of_n.ipynb - gpt2-sentiment.ipynb
https://github.com/huggingface/trl/issues/4399
closed
[ "📚 documentation" ]
2025-10-30T15:34:36Z
2025-11-04T23:52:50Z
0
qgallouedec
huggingface/trl
4,397
Remove or move Multi Adapter RL
I don't think it makes sense to have this as a whole section in the doc. Either remove it, or update it and move it to the PEFT integration section.
https://github.com/huggingface/trl/issues/4397
closed
[ "📚 documentation", "⚡ PEFT" ]
2025-10-30T15:12:58Z
2025-11-04T23:57:56Z
0
qgallouedec
pytorch/pytorch
166,633
Command '['ninja', '-v']' returned non-zero exit status 255.
### 🐛 Describe the bug I'm not sure whether it's linked to this warning message [#166580](https://github.com/pytorch/pytorch/issues/166580), whether it's a bug, or how to correct it ``` ptxas info : Used 128 registers, used 16 barriers, 104 bytes cumulative stack size ptxas info : Compile time = 486.393 ms ptxas info ...
https://github.com/pytorch/pytorch/issues/166633
open
[ "needs reproduction", "module: cpp-extensions", "module: cuda", "triaged" ]
2025-10-30T11:07:43Z
2025-12-31T18:42:43Z
2
christopher5106
pytorch/torchtitan
1,968
Avoiding device-to-host sync for input/output split sizes in expert parallel
I want to use the torchtitan code for a different MoE model, and I saw that if EP is used, then for FSDP, the module prefetching for forward and backward has to be manually set. This would be quite cumbersome as more models are used, and there would not be an easy standard way to do EP + FSDP. I looked through the cod...
https://github.com/pytorch/torchtitan/issues/1968
closed
[ "question" ]
2025-10-30T10:00:34Z
2025-11-12T22:29:19Z
null
man2machine
huggingface/transformers
41,948
Does Qwen2VLImageProcessor treat two consecutive images as one group/feature?
When looking at the Qwen3-VL model's image processor (which uses Qwen2-VL's), I found the following lines of code hard to understand. `L296-300` checks the number of input images (`patches.shape[0]`) and repeats the last one to make it divisible by `temporal_patch_size`. This makes the model process two consecutive i...
https://github.com/huggingface/transformers/issues/41948
closed
[]
2025-10-30T09:23:50Z
2025-10-31T01:01:09Z
3
priancho
huggingface/transformers
41,947
Why is SmolVLM-256M-Instruct slower than InternVL2-1B?
As the title says: SmolVLM has a smaller model size (1/4 the matrix multiplications) and a smaller input embedding, but both torch.cuda.Event and time.perf_counter (with torch.cuda.synchronize) report slower inference time. Could this be related to a wrong implementation of SmolVLM in transformers? inference performance comparis...
https://github.com/huggingface/transformers/issues/41947
closed
[]
2025-10-30T08:10:28Z
2025-10-31T11:47:44Z
4
HuangChiEn
huggingface/trl
4,386
Reference supported trainers in Liger Kernel integration guide
Currently, we only have an example with SFT, and it's hard to know which trainers support Liger. We should list the trainers that support Liger.
https://github.com/huggingface/trl/issues/4386
closed
[ "📚 documentation", "🏋 SFT" ]
2025-10-30T04:08:04Z
2025-11-03T18:16:04Z
0
qgallouedec
huggingface/trl
4,385
Use a common `trl-lib` namespace for the models/datasets/spaces
In the doc, we have examples using different namespaces, like `kashif/stack-llama-2`, `edbeeching/gpt-neo-125M-imdb` etc. we should unify all these examples to use a common `trl-lib` namespace.
https://github.com/huggingface/trl/issues/4385
open
[ "📚 documentation", "✨ enhancement" ]
2025-10-30T04:04:10Z
2025-10-30T04:04:38Z
0
qgallouedec
huggingface/trl
4,384
Write the subsection "Multi-Node Training"
This section must be written, with a simple code example, and a link to the `accelerate` documentation
https://github.com/huggingface/trl/issues/4384
open
[ "📚 documentation", "⚡accelerate" ]
2025-10-30T03:57:53Z
2025-12-08T16:23:23Z
2
qgallouedec
huggingface/trl
4,383
Add PEFT subsection to "Reducing Memory Usage"
PEFT is a major technique for reducing the memory usage of training. We should have a small section pointing to the PEFT integration guide.
https://github.com/huggingface/trl/issues/4383
closed
[ "📚 documentation", "✨ enhancement", "⚡ PEFT" ]
2025-10-30T03:55:55Z
2025-11-07T00:03:01Z
0
qgallouedec
huggingface/trl
4,382
Populate "Speeding Up Training"
Currently, this section only mentions vLLM. We should have a small guide for other methods, like flash attention. Ideally, to avoid repetition, we should have a very light example and a link to the place in the doc where it's discussed more extensively, e.g. vLLM pointing to the vLLM integration guide.
https://github.com/huggingface/trl/issues/4382
closed
[ "📚 documentation", "⚡accelerate" ]
2025-10-30T03:54:34Z
2025-12-01T09:47:23Z
0
qgallouedec
huggingface/trl
4,380
Fully transition from `flash-attn` to `kernels`
The new recommended way to use flash attention is to use kernels. We should update our tests, and documentation to use `kernels` instead of "flash_attention2". Eg https://github.com/huggingface/trl/blob/1eb561c3e9133892a2e907d84123b46e40cbc5a0/docs/source/reducing_memory_usage.md#L149 ```diff - training_args = DPOCon...
https://github.com/huggingface/trl/issues/4380
closed
[ "📚 documentation", "✨ enhancement" ]
2025-10-30T03:46:07Z
2025-11-13T04:07:35Z
0
qgallouedec
huggingface/trl
4,379
Remove or populate "Training customization"
Currently, this part of the documentation shows some possible customizations that apply to all trainers https://huggingface.co/docs/trl/main/en/customization However, it only features a few examples. This section would make sense if it gets populated with other customizations; otherwise it should be removed. This thread can be used to...
https://github.com/huggingface/trl/issues/4379
closed
[ "📚 documentation" ]
2025-10-30T03:41:02Z
2025-12-01T09:39:09Z
0
qgallouedec
huggingface/trl
4,378
Extend basic usage example to all supported CLIs
Currently https://huggingface.co/docs/trl/main/en/clis?command_line=Reward#basic-usage shows basic example usage only for SFT, DPO and Reward. We should have it for all supported CLIs (i.e., GRPO, RLOO, KTO).
https://github.com/huggingface/trl/issues/4378
closed
[ "📚 documentation", "🏋 KTO", "🏋 RLOO", "📱 cli", "🏋 GRPO" ]
2025-10-30T03:35:36Z
2025-11-14T01:13:17Z
0
qgallouedec
vllm-project/vllm
27,783
[Usage]: Model performance different from api
### Your current environment ```text vllm==0.10.0 ``` ### How would you like to use vllm I'm running the Qwen3-8B model with vllm. I also ran the same experiment using the Qwen3-8B API, but I find the results quite different: the accuracy of the API model on my task is much higher than the vllm model's. I use the same temperat...
https://github.com/vllm-project/vllm/issues/27783
open
[ "usage" ]
2025-10-30T03:30:02Z
2025-10-30T03:30:02Z
0
fny21
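Before comparing accuracy across the two stacks, it helps to remove sampling noise by pinning decoding on the local side; a sketch using the model id from the issue, with a placeholder prompt:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen3-8B")
# Greedy, seeded decoding makes local and API runs comparable before
# digging into chat-template or tokenizer differences.
params = SamplingParams(temperature=0.0, seed=0, max_tokens=1024)
print(llm.generate(["<task prompt here>"], params)[0].outputs[0].text)
```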
vllm-project/vllm
27,782
[Usage]: The same configuration that works on v0.8.5 reports insufficient GPU memory on v0.11.0
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm The server is a 4090 with 4 cards. Docker runs vllm openai:v0.8.5 with deployment command: "command: --model /models/Qwen3/Qwen3-30B-A3B --enable-reasoning --reasoning-parser deepseek_r1 --tensor_parallel_s...
https://github.com/vllm-project/vllm/issues/27782
open
[ "usage" ]
2025-10-30T03:24:54Z
2025-11-06T06:53:15Z
2
lan-qh