arXiv:2602.14486

Revisiting the Platonic Representation Hypothesis: An Aristotelian View

Published on Feb 16 · Submitted by Fabian Gröger on Feb 18

Abstract

Network representations converge toward shared local neighborhood relationships rather than global statistical models, as revealed by calibrated similarity metrics.

AI-generated summary

The Platonic Representation Hypothesis suggests that representations from neural networks are converging to a common statistical model of reality. We show that the existing metrics used to measure representational similarity are confounded by network scale: increasing model depth or width can systematically inflate representational similarity scores. To correct these effects, we introduce a permutation-based null-calibration framework that transforms any representational similarity metric into a calibrated score with statistical guarantees. We revisit the Platonic Representation Hypothesis with our calibration framework, which reveals a nuanced picture: the apparent convergence reported by global spectral measures largely disappears after calibration, while local neighborhood similarity, but not local distances, retains significant agreement across different modalities. Based on these findings, we propose the Aristotelian Representation Hypothesis: representations in neural networks are converging to shared local neighborhood relationships.
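To make the calibration idea concrete, here is a minimal sketch of a permutation-based null calibration, using linear CKA as an illustrative base similarity metric. The paper's framework applies to any representational similarity metric, and its exact construction may differ; the function names and the choice of p-value/z-score outputs here are assumptions for illustration:

```python
import numpy as np

def linear_cka(X, Y):
    # Linear CKA between representations X (n, dx) and Y (n, dy),
    # where rows are the same n examples in both matrices.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)

def calibrated_score(X, Y, metric=linear_cka, n_perm=1000, seed=0):
    """Permutation-based null calibration: compare the observed similarity
    against a null distribution built by shuffling the row (example)
    alignment between X and Y, which destroys any true correspondence."""
    rng = np.random.default_rng(seed)
    observed = metric(X, Y)
    null = np.array(
        [metric(X, Y[rng.permutation(len(Y))]) for _ in range(n_perm)]
    )
    # One-sided p-value with add-one correction, plus a z-score
    # of the observed value relative to the null distribution.
    p = (1 + (null >= observed).sum()) / (n_perm + 1)
    z = (observed - null.mean()) / null.std()
    return observed, p, z
```

The point of the calibration is that a raw similarity score is only meaningful relative to what the metric would report by chance at the given network scale; the permutation null estimates that chance level directly from the data.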

Community

Comment from the paper author and submitter:

Are neural nets across modalities really converging to the same representation as they scale, as the Platonic Representation Hypothesis suggests?

We show that common representational similarity metrics are confounded by network width & depth. We propose a permutation-based null calibration that fixes this.

Results:
• Global convergence largely disappears.
• Local neighborhoods persist.

We propose the alternative Aristotelian Representation Hypothesis: neural networks, trained with different objectives on different data and modalities, are converging to shared local neighborhood relationships.
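As an operational illustration of "shared local neighborhood relationships," one simple measure is the overlap between the k-nearest-neighbor sets that two representations induce over the same examples. This is a hedged sketch, not the paper's exact metric; the Jaccard overlap and brute-force neighbor search are illustrative choices:

```python
import numpy as np

def knn_sets(X, k):
    # Indices of the k nearest neighbors of each row of X
    # (Euclidean distance, excluding the point itself),
    # via a brute-force pairwise distance matrix.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def knn_overlap(X, Y, k=10):
    """Mean Jaccard overlap between the k-NN sets induced by two
    representations of the same examples: 1.0 means every example has
    identical local neighborhoods in both spaces."""
    nx, ny = knn_sets(X, k), knn_sets(Y, k)
    scores = []
    for a, b in zip(nx, ny):
        inter = len(set(a) & set(b))
        scores.append(inter / (2 * k - inter))  # |A∩B| / |A∪B|
    return float(np.mean(scores))
```

A measure like this is invariant to any transformation that preserves neighbor rankings (e.g., global rescaling), which is exactly the kind of local structure the Aristotelian view claims is shared across models.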

