Papers
arxiv:2603.07475

Skip to the Good Part: Representation Structure & Inference-Time Layer Skipping in Diffusion vs. Autoregressive LLMs

Published on Mar 8 · Submitted by Raghavv Goel on Mar 10
Abstract

Diffusion language models exhibit distinct representational structures compared to autoregressive models, with hierarchical abstractions and reduced bias, enabling efficient layer-skipping inference without architectural modifications.

AI-generated summary

Autoregressive (AR) language models form representations incrementally through left-to-right prediction, whereas diffusion language models (dLLMs) are trained via full-sequence denoising. Although recent dLLMs match AR performance, it remains unclear whether diffusion objectives fundamentally reshape internal representations across depth. We perform the first layer- and token-wise representational analysis comparing native dLLMs (LLaDA), native AR models (Qwen2.5), and AR-initialized dLLMs (Dream-7B). We find that diffusion objectives result in different, more hierarchical abstractions with substantial early-layer redundancy and reduced recency bias, while AR objectives produce tightly coupled, depth-dependent representations. Critically, AR-initialized dLLMs retain AR-like representational dynamics despite diffusion training, revealing persistent initialization bias. Leveraging this observed representational redundancy, we introduce a static, task-agnostic inference-time layer-skipping method requiring no architectural changes or KV-cache sharing. Native dLLMs achieve up to 18.75% FLOPs reduction while preserving over 90% performance on reasoning and code generation benchmarks, whereas AR models degrade sharply under comparable skipping. These results link training objectives to representational structure and enable practical, cache-orthogonal efficiency gains.

Community

A first effort toward analyzing the internal representations of a native dLLM (LLaDA) and a dLLM initialized from an autoregressive (AR) model (Dream, initialized from Qwen2.5-7B). The native dLLM appears to learn more abstract representations in its early layers, which can be exploited to skip layers at inference time. In contrast, the hidden representations of the AR-initialized dLLM align closely with those of the AR model, showing that the initialization effect persists even though the dLLM is trained with a diffusion loss.
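A minimal sketch of the static, task-agnostic layer-skipping idea described in the abstract: a fixed set of layer indices, chosen offline (e.g. from an early-layer redundancy analysis), is simply bypassed at inference time, with no architectural changes or retraining. The toy residual stack, its weights, and the skip set below are illustrative assumptions, not the paper's actual model or schedule.

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, d_model = 8, 16
# Toy stand-in for a transformer's layer stack: one weight matrix per layer.
weights = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_layers)]

def forward(x, skip_layers=frozenset()):
    # Residual blocks; a skipped layer contributes the identity, so its
    # FLOPs are saved while later layers still receive a usable input.
    for i, w in enumerate(weights):
        if i in skip_layers:
            continue  # reuse the incoming representation unchanged
        x = x + np.tanh(x @ w)
    return x

x = rng.standard_normal((2, d_model))
full = forward(x)
reduced = forward(x, skip_layers={1, 2})  # skip 2 of 8 layers: 25% fewer layer FLOPs
# Relative drift of the output; the paper's claim is that for native dLLMs
# this drift stays small enough to preserve >90% task performance.
drift = np.linalg.norm(full - reduced) / np.linalg.norm(full)
```

Because the skip set is fixed per model rather than per token or per task, this kind of skipping is orthogonal to KV-cache optimizations, which matches the "cache-orthogonal" framing in the abstract.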
