
Generative Anchored Fields: Controlled Data Generation via Emergent Velocity Fields and Transport Algebra

Published on Nov 27, 2025

Abstract

Generative Anchored Fields learns independent noise and data predictors from linear bridges to enable compositional control through transport algebra and efficient high-quality generation.

AI-generated summary

We present Generative Anchored Fields (GAF), a generative model that learns independent endpoint predictors, J (noise) and K (data), from any point on a linear bridge. Unlike existing approaches that use a single trajectory or score predictor, GAF is trained to recover the bridge endpoints directly via coordinate learning. The velocity field v=K-J emerges from their time-conditioned disagreement. This factorization enables Transport Algebra: algebraic operations on multiple J/K heads for compositional control. With class-specific K_n heads, GAF defines directed transport maps between a shared base noise distribution and multiple data domains, allowing controllable interpolation, multi-class composition, and semantic editing. This is achieved either directly on the predicted data coordinates (K) using Iterative Endpoint Refinement (IER), a novel sampler that achieves high-quality generation in 5-8 steps, or on the emergent velocity field (v). We achieve strong sample quality (FID 7.51 on ImageNet 256×256 and 7.27 on CelebA-HQ 256×256, without classifier-free guidance) while treating compositional generation as an architectural primitive. Code available at https://github.com/IDLabMedia/GAF.

Community

Introducing Generative Anchored Fields (GAF).

Paper: https://arxiv.org/abs/2511.22693v2
Code: https://github.com/IDLabMedia/GAF

GAF learns independent endpoint predictors, J (noise) and K (data).
Instead of predicting v(x,t) directly (e.g., flow matching), GAF predicts where the trajectory starts (J) and where it ends (K).
The velocity emerges as v=K-J. This enables Transport Algebra: algebraic operations on J/K heads for compositional generation. In GAF, the transport is derived from endpoint disagreement, not trained as a standalone trajectory predictor.
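
A minimal sketch of this factorization, assuming flat feature vectors, a toy MLP trunk, and a bridge parameterized as x_t = (1 - t)·ε + t·x (the actual model in the IDLabMedia/GAF repo is a larger, conditioned network; GAFHeads, head_J, and head_K are illustrative names):

```python
import torch
import torch.nn as nn

class GAFHeads(nn.Module):
    """Toy two-headed predictor: J estimates the noise endpoint, K the data endpoint."""
    def __init__(self, dim, hidden=512):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
        )
        self.head_J = nn.Linear(hidden, dim)  # noise-endpoint prediction
        self.head_K = nn.Linear(hidden, dim)  # data-endpoint prediction

    def forward(self, x_t, t):
        # x_t: (B, dim) point on the linear bridge, t: (B, 1) bridge time
        h = self.trunk(torch.cat([x_t, t], dim=-1))
        J = self.head_J(h)
        K = self.head_K(h)
        v = K - J  # the velocity emerges as the disagreement between endpoints
        return J, K, v
```

Under that bridge convention, regressing J toward ε and K toward x makes v = K - J an estimate of the flow-matching target x - ε, without ever training a velocity head directly.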

[Figure: transport algebra grid]

In practice, this enables:
Interpolating between classes, switching manifolds mid-trajectory, and combining heads for novel generation, all without extra training or guidance.
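
One possible reading of such a composition at sampling time, using hypothetical predict_J / predict_K helpers to stand in for class-conditioned forward passes (the paper's exact transport-algebra operators may differ):

```python
def composed_velocity(model, x_t, t, class_a, class_b, alpha=0.5):
    """Blend two class-specific data anchors into a single velocity field."""
    J = model.predict_J(x_t, t)                  # shared noise endpoint
    K_a = model.predict_K(x_t, t, cls=class_a)   # data endpoint for class a
    K_b = model.predict_K(x_t, t, cls=class_b)   # data endpoint for class b
    K_mix = alpha * K_a + (1.0 - alpha) * K_b    # interpolate the data anchors
    return K_mix - J                             # velocity from endpoint disagreement
```

Sweeping alpha from 0 to 1 interpolates between the two classes, and swapping which K head is used partway through sampling corresponds to switching manifolds mid-trajectory.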

We also introduce Iterative Endpoint Refinement (IER), a native GAF sampler that iteratively refines the endpoints in forward and reverse directions. IER generates high-quality images with 5-8 steps.
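
The exact IER update rule and its forward/reverse alternation are defined in the paper; the sketch below only illustrates the general idea of repeatedly re-predicting both endpoints and re-anchoring the current point on the bridge they span, under an assumed linear time schedule:

```python
def ier_sample(model, eps, num_steps=6):
    """Illustrative endpoint-refinement loop, starting from pure noise eps."""
    x = eps
    for t in torch.linspace(0.0, 1.0, num_steps + 1)[1:]:
        t_b = torch.full((x.shape[0], 1), float(t))
        J, K, _ = model(x, t_b)
        # Re-anchor the current point on the bridge spanned by the refined
        # endpoints, progressively moving it toward the data end (t -> 1).
        x = (1.0 - float(t)) * J + float(t) * K
    return x  # at t = 1 this is the final data-endpoint prediction
```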

Without classifier-free guidance:
FID 7.51 on ImageNet 256×256 (re-trunked from DiT-XL/2 in only 100k steps)
FID 7.27 on CelebA-HQ 256×256 (trained from scratch)

[Figure: FID across sampling steps, IER vs. Heun]

Training GAF is simple: endpoint regression + residual and swap regularizers. Sampling is deterministic and guidance-free.
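
A minimal sketch of the endpoint-regression part of that objective, assuming the bridge convention above (t = 0 at noise, t = 1 at data); the residual and swap regularizers from the paper are omitted:

```python
def gaf_endpoint_loss(model, x_data):
    """Regress J toward the noise endpoint and K toward the data endpoint."""
    eps = torch.randn_like(x_data)              # noise endpoint
    t = torch.rand(x_data.shape[0], 1)          # uniform bridge time
    x_t = (1.0 - t) * eps + t * x_data          # point on the linear bridge
    J, K, _ = model(x_t, t)
    return ((J - eps) ** 2).mean() + ((K - x_data) ** 2).mean()
```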

