arXiv:2602.07054

AVERE: Improving Audiovisual Emotion Reasoning with Preference Optimization

Published on Feb 4 · Submitted by Ashutosh Chaubey on Feb 10

Abstract

AI-generated summary: A benchmark and optimization technique are presented to improve multimodal large language models' emotion understanding by addressing spurious associations and hallucinations in audiovisual cues.

Emotion understanding is essential for building socially intelligent agents. Although recent multimodal large language models (MLLMs) have shown strong performance on this task, two key challenges remain: spurious associations between emotions and irrelevant audiovisual cues, and hallucinations of audiovisual cues driven by text priors in the language model backbone. To quantify and understand these issues, we introduce EmoReAlM, a benchmark designed to evaluate MLLMs for cue-emotion associations, hallucinations, and modality agreement. We then propose AVEm-DPO, a preference optimization technique that aligns model responses with both audiovisual inputs and emotion-centric queries. Specifically, we construct preferences over responses exhibiting spurious associations or hallucinations, and over audiovisual input pairs guided by textual prompts. We also include a regularization term that penalizes reliance on text priors, thereby mitigating modality-specific cue hallucinations. Experimental results on DFEW, RAVDESS, and EMER demonstrate that our method significantly improves the performance of the reference baseline models, with relative gains of 6-19% in zero-shot settings. By providing both a rigorous benchmark and a robust optimization framework, this work enables principled evaluation and improvement of MLLMs for emotion understanding and social AI. Code, models, and the benchmark will be released at https://avere-iclr.github.io.
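
The abstract describes AVEm-DPO as a preference-optimization objective plus a regularizer that penalizes reliance on text priors. The sketch below is a rough illustration only: it combines the standard DPO loss with a hypothetical penalty on the text-only likelihood of the preferred response. The function name `avem_dpo_loss`, the form of the regularizer, and the hyperparameters `beta` and `lambda_reg` are assumptions, not the authors' released implementation.

```python
# Minimal sketch (PyTorch) of a DPO-style preference loss with an assumed
# text-prior regularizer. Illustrative only; not the paper's official code.

import torch
import torch.nn.functional as F


def avem_dpo_loss(
    policy_chosen_logps: torch.Tensor,            # log pi_theta(y_w | audio, video, text)
    policy_rejected_logps: torch.Tensor,          # log pi_theta(y_l | audio, video, text)
    ref_chosen_logps: torch.Tensor,               # log pi_ref(y_w | audio, video, text)
    ref_rejected_logps: torch.Tensor,             # log pi_ref(y_l | audio, video, text)
    policy_chosen_text_only_logps: torch.Tensor,  # log pi_theta(y_w | text only) -- assumed input
    beta: float = 0.1,
    lambda_reg: float = 0.1,
) -> torch.Tensor:
    """Standard DPO term plus an assumed penalty on the text-only likelihood of
    the preferred response, meant to discourage answers the language backbone
    could produce without attending to the audiovisual input."""
    # DPO: prefer y_w over y_l relative to the frozen reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    dpo_term = -F.logsigmoid(chosen_rewards - rejected_rewards)

    # Assumed regularizer: push down the probability the model assigns to the
    # preferred answer when conditioned on text alone (the text prior).
    text_prior_penalty = policy_chosen_text_only_logps

    return (dpo_term + lambda_reg * text_prior_penalty).mean()
```

In such a setup, the text-only log-probabilities would presumably come from a second forward pass with the audio and video tokens masked or dropped; that detail, like the loss form above, is an assumption made for illustration.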

Community

Check out this latest release from our lab at the Institute for Creative Technologies at the University of Southern California. The proposed method improves emotion reasoning in audiovisual multimodal ("omni") LLMs, surpassing state-of-the-art models on multiple benchmarks. Moreover, it achieves state-of-the-art results on several traditional emotion benchmarks in a zero-shot setting, highlighting the importance of reasoning even for emotion perception.

Project page: https://avere-iclr.github.io

