Abstract
Rectified flows and diffusion models are improved through the κ-FC formulation, which conditions the source distribution, and the MixFlow training strategy, which reduces generative-path curvature and enhances sampling efficiency.
Diffusion models and their variants, such as rectified flows, generate diverse, high-quality images, but they are still hindered by slow iterative sampling caused by the highly curved generative paths they learn. As prior work has shown, an important cause of this curvature is the independence between the source distribution (a standard Gaussian) and the data distribution. In this work, we tackle this limitation with two complementary contributions. First, we break away from the standard Gaussian assumption by introducing κ-FC, a general formulation that conditions the source distribution on an arbitrary signal κ that aligns it better with the data distribution. Second, we present MixFlow, a simple but effective training strategy that reduces generative-path curvature and considerably improves sampling efficiency. MixFlow trains a flow model on linear mixtures of a fixed unconditional distribution and a κ-FC-based distribution. This simple mixture improves the alignment between source and data, yields better generation quality with fewer sampling steps, and substantially accelerates training convergence. On average, our training procedure improves FID by 12% compared to standard rectified flow and by 7% compared to previous baselines under a fixed sampling budget. Code available at: https://github.com/NazirNayal8/MixFlow
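For concreteness, below is a minimal PyTorch sketch of what a MixFlow-style rectified-flow training step could look like. The abstract does not specify the exact form of the κ-FC source or of the "linear mixture", so this sketch assumes a Gaussian source re-centered by a κ-dependent mean and a convex combination of the two source samples; the names `kappa_mean` and `lam` are hypothetical, not from the paper.

```python
# Sketch of a MixFlow-style rectified-flow training step (assumptions labeled below).
import torch
import torch.nn.functional as F

def mixflow_training_step(model, x1, kappa, kappa_mean, lam=0.5):
    """One training step on a batch x1 ~ data with conditioning signal kappa.

    ASSUMPTIONS (not from the paper): the kappa-FC source is a Gaussian whose
    mean is given by kappa_mean(kappa), and the "linear mixture" is a convex
    combination of unconditional and conditional source samples with weight lam.
    """
    b = x1.shape[0]
    # Unconditional source: standard Gaussian.
    z_uncond = torch.randn_like(x1)
    # kappa-FC source: Gaussian re-centered on a kappa-dependent mean (assumed form).
    z_cond = kappa_mean(kappa) + torch.randn_like(x1)
    # Linear mixture of the two source samples (assumed reading of "linear mixture").
    x0 = (1.0 - lam) * z_uncond + lam * z_cond
    # Rectified-flow interpolation: straight line between source and data.
    t = torch.rand(b, *([1] * (x1.dim() - 1)), device=x1.device)
    xt = (1.0 - t) * x0 + t * x1
    v_target = x1 - x0               # constant velocity along the straight path
    v_pred = model(xt, t.flatten())  # model predicts the velocity field
    return F.mse_loss(v_pred, v_target)
```

A better-aligned source makes the straight-line targets `x1 - x0` more consistent across the batch, which is the mechanism the abstract credits for lower path curvature and faster convergence.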
Community
- We introduce κ-FC, a general formulation for conditioning the source distribution on an arbitrary signal that better aligns it with the data distribution.
- We propose MixFlow, a simple training strategy for rectified flows that mixes unconditional and conditional source distributions to reduce path curvature and improve sampling efficiency.
- We show that MixFlow improves the speed-quality trade-off and training convergence across CIFAR-10, FFHQ 64x64, and AFHQv2 64x64, outperforming standard rectified flow and prior baselines under fixed sampling budgets.
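Under the same assumptions as the training sketch above, few-step sampling under a fixed budget could then be a plain Euler integration of the learned flow, starting from the mixed source. Again, `kappa_mean` and `lam` are hypothetical names, and starting the sampler from the mixed source is an assumption, not the paper's confirmed procedure.

```python
# Sketch of few-step Euler sampling from a trained MixFlow-style model.
import torch

@torch.no_grad()
def sample(model, kappa, kappa_mean, shape, steps=8, lam=0.5, device="cpu"):
    # Mixed source sample (assumed to match the training-time source).
    z_uncond = torch.randn(shape, device=device)
    z_cond = kappa_mean(kappa) + torch.randn(shape, device=device)
    x = (1.0 - lam) * z_uncond + lam * z_cond
    # Euler integration of dx/dt = v(x, t) from t=0 to t=1 in `steps` steps,
    # where `steps` is the fixed sampling budget.
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((shape[0],), i * dt, device=device)
        x = x + dt * model(x, t)
    return x
```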
The following similar papers were recommended by the Semantic Scholar API:
- COT-FM: Cluster-wise Optimal Transport Flow Matching (2026)
- Efficient Generative Modeling beyond Memoryless Diffusion via Adjoint Schrödinger Bridge Matching (2026)
- From Diffusion To Flow: Efficient Motion Generation In MotionGPT3 (2026)
- The Coupling Within: Flow Matching via Distilled Normalizing Flows (2026)
- BiFM: Bidirectional Flow Matching for Few-Step Image Editing and Generation (2026)
- Fair Benchmarking of Emerging One-Step Generative Models Against Multistep Diffusion and Flow Models (2026)
- Training-Free Refinement of Flow Matching with Divergence-based Sampling (2026)