DA-VAE: Plug-in Latent Compression for Diffusion via Detail Alignment
Abstract
Detail-Aligned VAE enhances pretrained VAE compression ratios while preserving latent structure through a lightweight adaptation of the diffusion backbone, enabling efficient high-resolution image generation.
Reducing token count is crucial for efficient training and inference of latent diffusion models, especially at high resolution. A common strategy is to build high-compression image tokenizers with more channels per token. However, when trained only for reconstruction, high-dimensional latent spaces often lose meaningful structure, making diffusion training harder. Existing methods address this with extra objectives such as semantic alignment or selective dropout, but usually require costly diffusion retraining. Pretrained diffusion models, however, already exhibit a structured, lower-dimensional latent space; thus, a simpler idea is to expand the latent dimensionality while preserving this structure. We therefore propose Detail-Aligned VAE, which increases the compression ratio of a pretrained VAE with only lightweight adaptation of the pretrained diffusion backbone. DA-VAE uses an explicit latent layout: the first C channels come directly from the pretrained VAE at a base resolution, while an additional D channels encode higher-resolution details. A simple detail-alignment mechanism encourages the expanded latent space to retain the structure of the original one. With a warm-start fine-tuning strategy, our method enables 1024×1024 image generation with Stable Diffusion 3.5 using only 32×32 tokens, 4× fewer than the original model, within 5 H100-days. It further unlocks 2048×2048 generation with SD3.5, achieving a 6× speedup while preserving image quality. We also validate the method and its design choices quantitatively on ImageNet.
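The explicit latent layout described above can be illustrated at the shape level. The sketch below is a hypothetical illustration of the channel arrangement only: all function names and channel counts are placeholders, not the paper's actual code or hyperparameters.

```python
import numpy as np

# Illustrative channel counts; the abstract only fixes the token grid
# (32x32 tokens for 1024x1024 generation), not C and D themselves.
C, D = 16, 16
TOKENS = 32

def pretrained_vae_encode(image_base):
    # Stand-in for the frozen pretrained VAE encoder applied to the
    # base-resolution image: (3, H_base, W_base) -> (C, 32, 32).
    return np.zeros((C, TOKENS, TOKENS), dtype=np.float32)

def detail_encode(image_highres):
    # Stand-in for the added detail branch, which encodes the
    # high-resolution image into D extra channels: -> (D, 32, 32).
    return np.zeros((D, TOKENS, TOKENS), dtype=np.float32)

base_latent = pretrained_vae_encode(np.zeros((3, 256, 256), np.float32))
detail_latent = detail_encode(np.zeros((3, 1024, 1024), np.float32))

# Explicit layout: channels [0:C] match the original VAE's latent, so a
# pretrained diffusion backbone only needs lightweight adaptation to the
# appended detail channels [C:C+D].
latent = np.concatenate([base_latent, detail_latent], axis=0)
print(latent.shape)  # (32, 32, 32)
```

Because the first C channels are identical to the original latent space, the diffusion model's learned structure is preserved by construction; only the mapping for the appended detail channels must be learned during the warm-start fine-tuning.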