Selective Training for Large Vision Language Models via Visual Information Gain
Abstract
The Visual Information Gain metric quantifies the contribution of visual input to prediction uncertainty, enabling selective training that improves visual grounding and reduces language bias in vision-language models.
Large Vision Language Models (LVLMs) have achieved remarkable progress, yet they often suffer from language bias, producing answers without relying on visual evidence. While prior work attempts to mitigate this issue through decoding strategies, architectural modifications, or curated instruction data, these approaches typically lack a quantitative measure of how much individual training samples or tokens actually benefit from the image. In this work, we introduce Visual Information Gain (VIG), a perplexity-based metric that measures the reduction in prediction uncertainty provided by visual input. VIG enables fine-grained analysis at both the sample and token levels, effectively highlighting visually grounded elements such as colors, spatial relations, and attributes. Leveraging this, we propose a VIG-guided selective training scheme that prioritizes high-VIG samples and tokens. This approach improves visual grounding and mitigates language bias, achieving superior performance with significantly reduced supervision by focusing exclusively on visually informative samples and tokens.
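The abstract describes VIG only as a perplexity-based reduction in prediction uncertainty when the image is present. The sketch below shows one plausible formulation under that description: per-token VIG as the gain in answer-token log-likelihood with the image in context, and sample-level VIG as the corresponding drop in log-perplexity. The function names, inputs, and the exact formula are illustrative assumptions, not the authors' implementation; the per-token log-probabilities are assumed to have been computed beforehand by running the LVLM once with and once without the image.

```python
import math
from typing import List


def token_vig(logp_with_image: List[float], logp_without_image: List[float]) -> List[float]:
    """Per-token Visual Information Gain (hypothetical formulation).

    Each entry is the gain in log-likelihood of an answer token when the image
    is included in the context, i.e. how much the visual input reduces that
    token's prediction uncertainty. Visually grounded tokens (colors, spatial
    relations, attributes) would be expected to score high.
    """
    return [lp_img - lp_txt for lp_img, lp_txt in zip(logp_with_image, logp_without_image)]


def sample_vig(logp_with_image: List[float], logp_without_image: List[float]) -> float:
    """Sample-level VIG as the reduction in log-perplexity of the full answer.

    log PPL = -(1/T) * sum_t log p(y_t | context); VIG = log PPL(text only) - log PPL(with image).
    A large positive value means the image substantially lowers uncertainty.
    """
    T = len(logp_with_image)
    log_ppl_img = -sum(logp_with_image) / T
    log_ppl_txt = -sum(logp_without_image) / T
    return log_ppl_txt - log_ppl_img
```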
Community
This paper introduces Visual Information Gain (VIG), a perplexity-based metric that quantifies how much visual input reduces prediction uncertainty at both sample and token levels, enabling fine-grained measurement of visual grounding and language bias in LVLMs. Leveraging VIG, it proposes a selective training scheme that prioritizes high-VIG samples and tokens, improving visual grounding and mitigating language bias with substantially reduced supervision.
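The paper states that training prioritizes high-VIG samples and tokens but the summary does not spell out the mechanism. A minimal sketch of one way such a scheme could look, assuming per-token VIG scores are available: low-VIG tokens are masked out of the loss and only the top-VIG fraction of instruction samples is kept. The threshold, keep ratio, and function names are assumptions for illustration, not the paper's reported configuration.

```python
import torch


def vig_selective_loss(token_nll: torch.Tensor,
                       token_vig: torch.Tensor,
                       token_threshold: float = 0.0) -> torch.Tensor:
    """Cross-entropy restricted to visually informative tokens.

    token_nll: per-token negative log-likelihood of the answer, shape (T,).
    token_vig: per-token Visual Information Gain, shape (T,).
    Tokens whose VIG falls below the (hypothetical) threshold are masked out,
    so supervision focuses on visually grounded tokens.
    """
    mask = (token_vig > token_threshold).float()
    return (token_nll * mask).sum() / mask.sum().clamp(min=1.0)


def select_samples(samples: list, sample_vigs: list, keep_ratio: float = 0.3) -> list:
    """Keep only the highest-VIG fraction of the instruction data (ratio is illustrative)."""
    k = max(1, int(len(samples) * keep_ratio))
    order = sorted(range(len(samples)), key=lambda i: sample_vigs[i], reverse=True)
    return [samples[i] for i in order[:k]]
```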