arxiv:2605.06548

Continuous Latent Diffusion Language Model

Published on May 7 · Submitted by taesiri on May 8
#3 Paper of the day

Abstract

Cola DLM is a hierarchical latent diffusion language model that uses text-to-latent mapping, global semantic prior modeling, and conditional decoding to achieve efficient text generation with a flexible non-autoregressive inductive bias.

AI-generated summary

Large language models have achieved remarkable success under the autoregressive paradigm, yet high-quality text generation need not be tied to a fixed left-to-right order. Existing alternatives still struggle to jointly achieve generation efficiency, scalable representation learning, and effective global semantic modeling. We propose Cola DLM, a hierarchical latent diffusion language model that frames text generation through hierarchical information decomposition. Cola DLM first learns a stable text-to-latent mapping with a Text VAE, then models a global semantic prior in continuous latent space with a block-causal DiT, and finally generates text through conditional decoding. From a unified Markov-path perspective, its diffusion process performs latent prior transport rather than token-level observation recovery, thereby separating global semantic organization from local textual realization. This design yields a more flexible non-autoregressive inductive bias, supports semantic compression and prior fitting in continuous space, and naturally extends to other continuous modalities. Through experiments spanning 4 research questions, 8 benchmarks, strictly matched ~2B-parameter autoregressive and LLaDA baselines, and scaling curves up to about 2000 EFLOPs, we identify an effective overall configuration of Cola DLM and verify its strong scaling behavior for text generation. Taken together, the results establish hierarchical continuous latent prior modeling as a principled alternative to strictly token-level language modeling, where generation quality and scaling behavior may better reflect model capability than likelihood, while also suggesting a concrete path toward unified modeling across discrete text and continuous modalities.
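To make the three-stage structure concrete, below is a minimal PyTorch-style sketch of the pipeline described above: a Text VAE for the text-to-latent mapping, a denoiser standing in for the block-causal DiT latent prior, and conditional decoding back to tokens. The module sizes, layer counts, and the crude fixed-step sampler are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn

class TextVAE(nn.Module):
    """Stage 1: text-to-latent mapping, plus conditional decoding back to tokens."""
    def __init__(self, vocab=32000, d_model=512, d_latent=64):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        dec_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.embed = nn.Embedding(vocab, d_model)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.to_latent = nn.Linear(d_model, 2 * d_latent)      # mean and log-variance
        self.from_latent = nn.Linear(d_latent, d_model)
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab)

    def encode(self, tokens):
        mu, logvar = self.to_latent(self.encoder(self.embed(tokens))).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        return z, mu, logvar

    def decode(self, z):
        # Stage 3: conditional decoding, token logits produced from the latents.
        return self.lm_head(self.decoder(self.from_latent(z)))

class LatentDenoiser(nn.Module):
    """Stage 2: stand-in for the block-causal DiT prior (block masking omitted here)."""
    def __init__(self, d_latent=64, d_model=512):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.proj_in = nn.Linear(d_latent, d_model)
        self.time_emb = nn.Sequential(nn.Linear(1, d_model), nn.SiLU(),
                                      nn.Linear(d_model, d_model))
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.proj_out = nn.Linear(d_model, d_latent)

    def forward(self, z_t, t):
        h = self.proj_in(z_t) + self.time_emb(t.float().view(-1, 1, 1))
        return self.proj_out(self.backbone(h))                 # predicted noise / update

# Generation: sample Gaussian noise in latent space, iteratively denoise it with
# the latent prior, then decode tokens conditioned on the resulting latents.
vae, denoiser = TextVAE(), LatentDenoiser()
z = torch.randn(1, 32, 64)                  # (batch, latent positions, d_latent)
for step in reversed(range(10)):            # crude fixed-step loop, not a real sampler
    t = torch.full((1,), step)
    z = z - denoiser(z, t) / 10             # toy update rule for illustration only
tokens = vae.decode(z).argmax(dim=-1)       # greedy conditional decoding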

Community

the most interesting bit to me is the two-stage setup: a Text VAE to fix a stable text-to-latent mapping, then a block-causal diffusion transformer to model the latent prior. but i worry a bit about posterior collapse in the VAE and how the KL term plus the BERT-style objectives balance fidelity vs compression, especially since the diffusion then operates in that latent space. an ablation on the latent block size and on gradient-stabilization steps would be really telling; i bet the granularity of blocks is not just a hyperparameter but a structural bottleneck. the arxivlens breakdown helped me parse the method details, btw, if you want a quick mental map: https://arxivlens.com/PaperView/Details/continuous-latent-diffusion-language-model-9852-239cae7d. curious how this holds up with longer prompts or multilingual data where the semantic prior might need to reorganize more aggressively.

Thanks a lot for the thoughtful comment! We fully agree that the two-stage design is one of the most critical parts of the method. The Text VAE is not just used as a preprocessing module, but is meant to establish a stable text-to-latent interface before the latent prior is learned. Regarding posterior collapse and the balance between reconstruction, KL regularization, and the BERT-style objective, one interesting observation we found is that text reconstruction itself is actually a relatively easy task in this setup: the reconstruction accuracy can quickly approach nearly 100%. This suggests that the latent representation space has a large degree of flexibility, and that there is still substantial room to study how this space should be organized, compressed, and made more semantically meaningful. In this sense, the VAE stage is not only about preserving fidelity, but also about shaping a useful latent carrier for subsequent prior modeling.
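To make that balance a bit more concrete, here is a rough sketch of how the three terms could be combined, reusing the TextVAE interface from the sketch above; the specific weights, the free-bits floor, and the masking scheme here are illustrative assumptions rather than our exact training recipe.

import torch
import torch.nn.functional as F

def vae_objective(vae, tokens, kl_weight=0.1, free_bits=0.5, mask_prob=0.15, mask_id=4):
    # Reconstruction: decode the full sequence from the sampled latents.
    z, mu, logvar = vae.encode(tokens)
    recon = F.cross_entropy(vae.decode(z).flatten(0, 1), tokens.flatten())

    # Weighted KL to a standard normal prior, with a per-dimension "free bits"
    # floor so the regularizer cannot push the posterior all the way to collapse.
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp())
    kl = torch.clamp(kl, min=free_bits).mean()

    # BERT-style auxiliary: corrupt a fraction of input tokens and require the
    # latents to still support predicting the originals at the masked positions.
    corrupted = tokens.clone()
    masked = torch.rand(tokens.shape, device=tokens.device) < mask_prob
    corrupted[masked] = mask_id
    z_m, _, _ = vae.encode(corrupted)
    logits_m = vae.decode(z_m).flatten(0, 1)[masked.flatten()]
    bert_loss = F.cross_entropy(logits_m, tokens.flatten()[masked.flatten()])

    return recon + kl_weight * kl + bert_loss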

We also agree with your point that the latent block size is more than a simple hyperparameter. It effectively controls the granularity at which the model organizes semantic information, so it can become a structural bottleneck if chosen poorly. That is why we include ablations on different block sizes, and the results suggest that a moderate block size works better than either very fine-grained or overly coarse latent grouping.
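For readers who want a mechanical picture of what the block granularity controls, the snippet below builds a generic block-causal attention mask over latent positions; it only illustrates the general pattern, not the exact masking used in our DiT.

import torch

def block_causal_mask(seq_len: int, block_size: int) -> torch.Tensor:
    """Boolean attention mask: True marks pairs that are NOT allowed to attend
    (PyTorch's convention for boolean attn_mask)."""
    block_id = torch.arange(seq_len) // block_size
    allowed = block_id.unsqueeze(1) >= block_id.unsqueeze(0)  # query block >= key block
    return ~allowed

# block_size is the granularity knob: positions attend within their own block
# and to all earlier blocks, so larger blocks mean coarser semantic grouping.
print(block_causal_mask(seq_len=8, block_size=4).int())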

For longer prompts, you can check our results on long-context understanding tasks such as RACE and SQuAD, where Cola shows encouraging performance compared with AR and other baselines. That said, we completely agree that extending this to much longer contexts is an important next step, especially when the latent prior needs to reorganize information more aggressively. Multilingual modeling is also a very interesting direction, since it directly tests whether the latent space is capturing language-independent semantics rather than surface token patterns.

Thanks again for the careful reading and for mentioning the arxivlens breakdown! We expect to release the code in around 1–2 weeks, and we would be very happy to have more people explore these questions together with us.

Thank you all for your attention! The code will be open-sourced in about two weeks, and everyone is welcome to explore it together.
