arxiv:2603.24787

ReLope: KL-Regularized LoRA Probes for Multimodal LLM Routing

Published on Mar 25
Abstract

Routing has emerged as a promising strategy for balancing performance and cost in large language model (LLM) systems that combine lightweight models with powerful but expensive large models. Recent studies show that probe routing, which predicts the correctness of a small model from its hidden states, provides an effective solution in text-only LLMs. However, we observe that these probes degrade substantially when applied to multimodal LLMs (MLLMs). Through empirical analysis, we find that the presence of visual inputs weakens the separability of correctness signals in hidden states, making them harder to extract with standard probe designs. To address this challenge, we introduce two complementary approaches for improving probe routing in MLLMs. First, we propose the Attention Probe, which aggregates hidden states from the preceding layer based on attention scores to recover distributed correctness signals. Second, we present the KL-Regularized LoRA Probe (ReLope), which inserts a lightweight LoRA adapter and applies a KL regularizer to learn routing-aware representations. Comprehensive experiments show that our methods consistently outperform baselines, suggesting that improving the quality of hidden states is key to effective routing in MLLMs. Our code is available at https://github.com/Spinozaaa/ReLope.

AI-generated summary

Probe routing for multimodal language models is enhanced through attention-based aggregation and KL-regularized LoRA adapters, which improve the extraction of correctness signals from hidden states.
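The probe-routing idea in the abstract can be illustrated with a minimal sketch: pool a model's token hidden states with attention scores, apply a linear probe to estimate the small model's correctness, and route the query to the large model when confidence is low. This is a toy illustration, not the paper's implementation; the learned query vector, the linear head, and the 0.5 routing threshold are assumptions beyond what the abstract states.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_probe(hidden_states, query, w, b):
    """Aggregate token hidden states by attention scores, then apply a
    linear probe estimating the probability the small model is correct."""
    d = hidden_states.shape[1]
    # attention scores of a (hypothetical) learned query over all tokens
    scores = softmax(hidden_states @ query / np.sqrt(d))
    pooled = scores @ hidden_states           # (d,) attention-weighted pooling
    logit = pooled @ w + b                    # linear correctness probe
    return 1.0 / (1.0 + np.exp(-logit))       # sigmoid -> probability

rng = np.random.default_rng(0)
T, d = 16, 32                                 # tokens, hidden size (toy scale)
H = rng.normal(size=(T, d))                   # stand-in for MLLM hidden states
q, w = rng.normal(size=d), rng.normal(size=d) # stand-ins for trained parameters
p = attention_probe(H, q, w, 0.0)
route_to_large = p < 0.5                      # escalate when confidence is low
```

In the paper's setting the probe parameters would be trained on labeled correctness data, and ReLope would additionally adapt the hidden states themselves via a KL-regularized LoRA adapter; this sketch only shows the attention-pooling-plus-probe routing decision.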

