One-step Latent-free Image Generation with Pixel Mean Flows
Abstract
Pixel MeanFlow (pMF) introduces a one-step, latent-free image generation method that separates the network output space from the loss space, achieving strong performance on ImageNet at multiple resolutions.
Modern diffusion/flow-based models for image generation typically exhibit two core characteristics: (i) using multi-step sampling, and (ii) operating in a latent space. Recent advances have made encouraging progress on each aspect individually, paving the way toward one-step diffusion/flow without latents. In this work, we take a further step toward this goal and propose "pixel MeanFlow" (pMF). Our core guideline is to formulate the network output space and the loss space separately. The network target is designed to be on a presumed low-dimensional image manifold (i.e., x-prediction), while the loss is defined via MeanFlow in the velocity space. We introduce a simple transformation between the image manifold and the average velocity field. In experiments, pMF achieves strong results for one-step latent-free generation on ImageNet at 256×256 resolution (2.22 FID) and 512×512 resolution (2.48 FID), filling a key missing piece in this regime. We hope that our study will further advance the boundaries of diffusion/flow-based generative models.
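To make the x-prediction-to-average-velocity transformation concrete, here is a minimal NumPy sketch. It assumes the common flow-matching convention z_t = (1 − t)·x + t·ε (so z_0 = x is data and z_1 = ε is noise) and MeanFlow's definition of the average velocity, u(z_t, r, t) = (z_t − z_r)/(t − r). The function names and the r = 0 endpoint choice are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def u_from_x(z_t, x_hat, t, r=0.0, eps=1e-6):
    """Average velocity over [r, t] implied by an x-prediction.

    Assumes z_r ~= x_hat when r = 0 (i.e., z_0 is the data point), so
    u = (z_t - z_r) / (t - r). `eps` guards against division by zero.
    """
    return (z_t - x_hat) / max(t - r, eps)

def one_step_sample(net, z1):
    """One-step generation: x = z_1 - (1 - 0) * u(z_1, 0, 1)."""
    x_hat = net(z1)                 # network output lies on the image manifold (x-prediction)
    u = u_from_x(z1, x_hat, t=1.0)  # convert to an average velocity over [0, 1]
    return z1 - u                   # MeanFlow one-step update; equals x_hat for r=0, t=1
```

Note that for r = 0 and t = 1 the update z_1 − u collapses to x̂ itself, which is exactly why an x-prediction network admits one-step sampling while the loss can still be defined in the (average-)velocity space.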
Community
arXivLens breakdown of this paper: https://arxivlens.com/PaperView/Details/one-step-latent-free-image-generation-with-pixel-mean-flows-4350-137b3dd1
The following similar papers were recommended by the Semantic Scholar API:
- SoFlow: Solution Flow Models for One-Step Generative Modeling (2025)
- One Layer Is Enough: Adapting Pretrained Visual Encoders for Image Generation (2025)
- RecTok: Reconstruction Distillation along Rectified Flow (2025)
- Fast, faithful and photorealistic diffusion-based image super-resolution with enhanced Flow Map models (2026)
- REGLUE Your Latents with Global and Local Semantics for Entangled Diffusion (2025)
- Both Semantics and Reconstruction Matter: Making Representation Encoders Ready for Text-to-Image Generation and Editing (2025)
- Few-Step Distillation for Text-to-Image Generation: A Practical Guide (2025)
No models, datasets, or Spaces currently link this paper.