arxiv:2604.16054

Mind's Eye: A Benchmark of Visual Abstraction, Transformation and Composition for Multimodal LLMs

Published on Apr 17 · Submitted by Aditya Kanade on Apr 22
Authors:

Abstract

Multimodal large language models demonstrate significant limitations in visuospatial reasoning tasks compared to human performance, revealing deficiencies in visual attention, perceptual manipulation, and conceptual abstraction.

AI-generated summary

Multimodal large language models (MLLMs) have achieved impressive progress on vision-language benchmarks, yet their capacity for visual cognition and visuospatial reasoning remains less understood. We introduce "Mind's Eye", a multiple-choice benchmark of eight visuo-cognitive tasks inspired by classic human intelligence tests and organized under a novel "A-R-T" taxonomy: Abstraction, Relation, and Transformation. The tasks probe core processes of fluid intelligence such as pattern induction, analogical relation mapping, and mental transformation. We evaluate a diverse suite of closed-source and open-source MLLMs and compare their performance with human participants. Humans achieve 80% accuracy, while the top-performing MLLMs remain below 50%. Error analysis reveals failures in (i) visual attention allocation, (ii) internal perceptual manipulation, and (iii) abstraction of underlying visual concepts. Our findings suggest that current MLLMs exhibit limited visuospatial reasoning capabilities compared with human participants, highlighting the need for more cognitively grounded evaluation frameworks.
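
The paper reports accuracy on a multiple-choice benchmark, comparing MLLMs against human participants. Below is a minimal sketch (not from the paper) of how such a multiple-choice evaluation is typically scored; the item fields and the stubbed predictions are hypothetical placeholders, and in practice the predictions would come from prompting an MLLM with each image and its answer options.

# Minimal sketch of multiple-choice accuracy scoring.
# Dataset fields and predictions below are illustrative, not the paper's data.

def accuracy(predictions, answers):
    """Fraction of items where the predicted option matches the answer key."""
    assert len(predictions) == len(answers)
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# Hypothetical benchmark items: each would pair an image with options A-D.
items = [
    {"question": "Which option completes the visual pattern?", "answer": "C"},
    {"question": "Which figure results from rotating the shape 90 degrees?", "answer": "A"},
]

# Stubbed model outputs; a real run would parse the MLLM's chosen option letter.
predictions = ["C", "B"]

print(f"Accuracy: {accuracy(predictions, [it['answer'] for it in items]):.2f}")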

Community

Paper author · Paper submitter

Mind's Eye introduces a benchmark of eight visuo-cognitive tasks organized under an Abstraction-Relation-Transformation taxonomy, drawing from classic human intelligence tests to probe fluid intelligence in multimodal LLMs. With humans reaching 80% accuracy while top MLLMs stay below 50%, the work highlights significant gaps in visual attention, perceptual manipulation, and concept abstraction, suggesting current models still fall well short of human-level visuospatial reasoning.


Get this paper in your agent:

hf papers read 2604.16054
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
