Abstract
An agentic automatic evaluation framework automates embodied vision-language model assessment through collaborative agents, reducing evaluation costs and improving ranking accuracy.
Current embodied VLM evaluation relies on static, expert-defined, manually annotated benchmarks that exhibit severe redundancy and coverage imbalance. This labor-intensive paradigm drains computational and annotation resources, inflates costs, and distorts model rankings, ultimately stifling iterative development. To address this, we propose Agentic Automatic Evaluation (A2Eval), the first agentic framework that automates benchmark curation and evaluation through two collaborative agents. The Data Agent autonomously induces capability dimensions and assembles a balanced, compact evaluation suite, while the Eval Agent synthesizes and validates executable evaluation pipelines, enabling fully autonomous, high-fidelity assessment. Evaluated across 10 benchmarks and 13 models, A2Eval compresses evaluation suites by 85%, reduces overall computational costs by 77%, and delivers a 4.6x speedup while preserving evaluation quality. Crucially, A2Eval corrects systematic ranking biases, improves human alignment to Spearman's rho = 0.85, and maintains high ranking fidelity (Kendall's tau = 0.81), establishing a new standard for high-fidelity, low-cost embodied assessment. Our code and data will be made public soon.
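The ranking-fidelity figures above (Spearman's rho and Kendall's tau) are standard rank correlations between two orderings of the same models. As a minimal, self-contained sketch, assuming SciPy and using purely illustrative scores (not numbers from the paper), agreement between a full-suite ranking and a compressed-suite ranking can be computed like this:

```python
# Illustrative only: hypothetical per-model scores under the full suite
# and under a compressed suite; the values are made up for this sketch.
from scipy.stats import spearmanr, kendalltau

full_suite = [0.72, 0.65, 0.61, 0.58, 0.54, 0.49]   # reference scores
compressed = [0.74, 0.63, 0.62, 0.55, 0.56, 0.47]   # compressed-suite scores

rho, _ = spearmanr(full_suite, compressed)   # monotonic rank agreement
tau, _ = kendalltau(full_suite, compressed)  # pairwise-order agreement
print(f"Spearman rho = {rho:.2f}, Kendall tau = {tau:.2f}")
```

Values near 1.0 mean the compressed suite preserves the reference model ordering, which is what the reported rho = 0.85 and tau = 0.81 quantify.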
Community
A2Eval introduces an agentic framework that automates embodied VLM evaluation through two collaborative agents: one that curates balanced benchmarks by identifying capability dimensions, and another that synthesizes executable evaluation pipelines. The system compresses benchmarks by 85%, reduces costs by 77%, and improves human alignment while correcting ranking biases in model evaluations.
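To make the curation step concrete, here is a hypothetical sketch (not the authors' implementation) of capability-balanced suite compression: items already tagged with an induced capability dimension are subsampled under a fixed budget so that each dimension gets an equal quota. The function name `compress_suite` and the `'capability'` field are assumptions for illustration.

```python
import random
from collections import defaultdict

def compress_suite(items, budget, seed=0):
    """Subsample `items` (dicts with a 'capability' tag) to ~`budget` size,
    giving each capability dimension an equal quota."""
    rng = random.Random(seed)
    by_cap = defaultdict(list)
    for item in items:
        by_cap[item["capability"]].append(item)
    per_cap = max(1, budget // len(by_cap))  # equal share per dimension
    suite = []
    for cap_items in by_cap.values():
        rng.shuffle(cap_items)               # unbiased pick within a dimension
        suite.extend(cap_items[:per_cap])
    return suite

# e.g. keeping ~15% of items mirrors the reported 85% compression:
# small = compress_suite(benchmark_items, budget=len(benchmark_items) * 15 // 100)
```

The real Data Agent additionally induces the capability dimensions itself and selects informative rather than random items; this sketch only shows the balancing constraint.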
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Omni-RRM: Advancing Omni Reward Modeling via Automatic Rubric-Grounded Preference Synthesis (2026)
- Wow, wo, val! A Comprehensive Embodied World Model Evaluation Turing Test (2026)
- DatBench: Discriminative, Faithful, and Efficient VLM Evaluations (2026)
- VLN-MME: Diagnosing MLLMs as Language-guided Visual Navigation agents (2025)
- Empowering Reliable Visual-Centric Instruction Following in MLLMs (2026)
- Can LLMs See Without Pixels? Benchmarking Spatial Intelligence from Textual Descriptions (2026)
- Evaluating Reward Model Generalization via Pairwise Maximum Discrepancy Competitions (2026)