arxiv:2602.24012

InfoNCE Induces Gaussian Distribution

Published on Feb 27 · Submitted by Yossi levi on Mar 2

Abstract

Contrastive learning has become a cornerstone of modern representation learning, enabling training on massive unlabeled data for both task-specific and general-purpose (foundation) models. A prototypical loss in contrastive training is InfoNCE, along with its variants. In this work, we show that the InfoNCE objective induces Gaussian structure in the representations that emerge from contrastive training. We establish this result in two complementary regimes. First, we show that under certain alignment and concentration assumptions, projections of the high-dimensional representation asymptotically approach a multivariate Gaussian distribution. Second, under weaker assumptions, we show that adding a small, asymptotically vanishing regularization term that promotes low feature norm and high feature entropy leads to similar asymptotic results. We support our analysis with experiments on synthetic and CIFAR-10 datasets across multiple encoder architectures and sizes, demonstrating consistent Gaussian behavior. This perspective provides a principled explanation for the Gaussianity commonly observed in contrastive representations. The resulting Gaussian model enables principled analytical treatment of learned representations and is expected to support a wide range of applications in contrastive learning.

AI-generated summary

Contrastive learning with the InfoNCE objective creates Gaussian-like structure in learned representations, supported by theoretical analysis and experimental validation across datasets and architectures.

Community

As the gains from scaling large models begin to saturate, it becomes increasingly important to revisit and deeply understand the fundamental tools we rely on.

In this work, we return to a basic question in contrastive learning: what distribution does InfoNCE actually induce in the embedding space?
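For readers who want the objective pinned down, here is a minimal PyTorch sketch of the standard in-batch InfoNCE loss (a generic variant for illustration, not necessarily the exact formulation analyzed in the paper): each example's two augmented views form a positive pair, and all other in-batch pairs act as negatives.

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """In-batch InfoNCE: z1[i] and z2[i] embed two views of the same
    example; every z2[j] with j != i acts as a negative for z1[i]."""
    z1 = F.normalize(z1, dim=1)          # contrastive methods typically work on the unit sphere
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # (N, N) cosine-similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)  # positives sit on the diagonal
    return F.cross_entropy(logits, labels)
```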

We show, both theoretically and empirically, that optimizing InfoNCE drives representations toward a Gaussian distribution under mild assumptions. This provides a principled explanation for several empirical phenomena observed in contrastive models, including norm concentration and approximate isotropy.
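A simple way to probe this claim on your own embeddings is to test random one-dimensional projections for normality: by the Cramér-Wold device, a distribution is multivariate Gaussian exactly when all of its 1-D projections are Gaussian. Below is an illustrative sketch using SciPy's D'Agostino-Pearson test; this is a sanity check we suggest for exploration, not the paper's evaluation protocol.

```python
import numpy as np
from scipy import stats

def projection_normality(Z: np.ndarray, n_proj: int = 20, seed: int = 0) -> np.ndarray:
    """Probe Gaussianity of embeddings Z with shape (N, d) via random 1-D
    projections. Returns one p-value per projection; uniformly high
    p-values are consistent with (though do not prove) Gaussian structure."""
    rng = np.random.default_rng(seed)
    Zc = Z - Z.mean(axis=0)                    # remove the mean before testing shape
    pvals = []
    for _ in range(n_proj):
        v = rng.standard_normal(Z.shape[1])
        v /= np.linalg.norm(v)                 # random unit direction
        pvals.append(stats.normaltest(Zc @ v).pvalue)
    return np.asarray(pvals)
```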

Beyond theory, this perspective helps clarify why centering and whitening often improve performance in practice.
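Under the Gaussian view this is intuitive: centering removes the mean and whitening equalizes the covariance, mapping an approximately Gaussian embedding cloud to a roughly isotropic N(0, I). A ZCA-style sketch of one common choice (the paper does not prescribe a specific whitening method):

```python
import numpy as np

def center_and_whiten(Z: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """ZCA whitening: after the transform, the embeddings have zero mean
    and approximately identity covariance."""
    Zc = Z - Z.mean(axis=0)
    cov = Zc.T @ Zc / (len(Zc) - 1)
    evals, evecs = np.linalg.eigh(cov)         # covariance is symmetric PSD
    inv_sqrt = evecs @ np.diag(1.0 / np.sqrt(evals + eps)) @ evecs.T
    return Zc @ inv_sqrt
```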

Curious to hear thoughts from the community, especially regarding implications for multimodal models such as CLIP.
