Contrastive Attribution in the Wild: An Interpretability Analysis of LLM Failures on Realistic Benchmarks
Abstract
Interpretability tools are increasingly used to analyze failures of Large Language Models (LLMs), yet prior work largely focuses on short prompts or toy settings, leaving their behavior on commonly used benchmarks underexplored. To address this gap, we study contrastive, LRP-based attribution as a practical tool for analyzing LLM failures in realistic settings. We formulate failure analysis as contrastive attribution, attributing the logit difference between an incorrect output token and a correct alternative to input tokens and internal model states, and introduce an efficient extension that enables construction of cross-layer attribution graphs for long-context inputs. Using this framework, we conduct a systematic empirical study across benchmarks, comparing attribution patterns across datasets, model sizes, and training checkpoints. Our results show that this token-level contrastive attribution can yield informative signals in some failure cases, but is not universally applicable, highlighting both its utility and its limitations for realistic LLM failure analysis. Our code is available at: https://aka.ms/Debug-XAI.
Community
TL;DR: We study whether contrastive, LRP-based attribution can explain LLM failures on real benchmarks (not toy prompts). We attribute the logit difference between an incorrect token and a correct alternative back to input tokens and internal hidden states, and scale it to long contexts via a batch-packed multi-target backprop trick.
What's new
- Problem reframing: LLM failure analysis as token-level contrastive attribution, i.e., explaining why the model prefers the wrong token over a correct contrast token.
- Efficient AttnLRP extension: cross-layer hidden-state attribution graphs for long-context inputs, reducing backward passes from O(n) to O(n/B) by packing B attribution targets into each batched backward pass. Handles inputs up to ~12K tokens (GAIA2).
- Systematic empirical study across 4 realistic benchmarks (IFEval, GAIA2, MATH, EvalPlus), multiple Qwen3 sizes (0.6B/1.7B/4B/8B), and Olmo-3-7B-Think training checkpoints (SFT → DPO → RLVR).
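As a minimal illustration of the contrastive formulation above, the sketch below attributes a wrong-vs-correct logit difference to input features using gradient-times-input on a toy linear model. This is a hypothetical stand-in for the paper's LRP-based attribution on real LLMs: all weights and values are made up for illustration. For a linear layer, the attributions sum exactly to the logit gap (the conservation property LRP preserves):

```python
# Toy sketch of token-level contrastive attribution (NOT the paper's
# AttnLRP implementation): for a linear model z = W @ x, the gap
# Delta = z[wrong] - z[correct] decomposes exactly over input features
# as a_i = x_i * (W[wrong][i] - W[correct][i]).

def contrastive_attribution(W, x, wrong, correct):
    """Attribute the wrong-vs-correct logit gap to each input feature."""
    return [xi * (W[wrong][i] - W[correct][i]) for i, xi in enumerate(x)]

# Hypothetical 2-token vocabulary, 3 input features.
W = [[0.5, -1.0, 2.0],   # weights producing the "wrong" token's logit
     [1.5,  0.5, -0.5]]  # weights producing the "correct" token's logit
x = [1.0, 2.0, 0.5]

z = [sum(w * xi for w, xi in zip(row, x)) for row in W]
delta = z[0] - z[1]                                  # wrong=0, correct=1
attr = contrastive_attribution(W, x, wrong=0, correct=1)
assert abs(sum(attr) - delta) < 1e-9                 # conservation holds
```

The per-feature signs tell the story: positive entries of `attr` push the model toward the wrong token, negative entries push toward the correct one, which is exactly the signal the post's URT failure mode is read off from.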
Key findings
- Underweighting Relevant Tokens (URT) is the dominant failure mode across benchmarks.
- Attribution graphs expose layer-wise dynamics invisible to input-level analysis.
- Model scaling corrects failures via systematic, interpretable shifts, notably more negative relevance on instruction tokens.
- Training (especially early SFT + DPO) progressively reshapes attribution patterns toward correct behavior.
- Honest limitation: a non-trivial fraction of math failures remain unexplained at the hidden-state level; neuron-level analysis is needed.
Practical implications
- Masking only ~2 top-attributed tokens on average flips the wrong prediction → cheap prompt debugging.
- Attribution trajectories across checkpoints → training monitoring signal.
- Token-level importance → alignment signal (e.g., token-level DPO).
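The masking probe from the first bullet can be sketched on the same kind of toy linear model (hypothetical weights; the paper applies this to tokens in real LLM prompts): zero out the top-k most positively attributed inputs and check whether the wrong-vs-correct logit gap flips sign.

```python
# Hedged sketch of the "mask top-attributed tokens" probe, assuming a toy
# linear model. Attribution ranks inputs by how strongly they push toward
# the wrong token; masking the top ones should flip the logit gap.

def logit_gap(W, x, wrong, correct):
    z = [sum(w * xi for w, xi in zip(row, x)) for row in W]
    return z[wrong] - z[correct]

def mask_top_k(W, x, wrong, correct, k=1):
    """Zero the k inputs with the largest contrastive attribution."""
    attr = [xi * (W[wrong][i] - W[correct][i]) for i, xi in enumerate(x)]
    top = sorted(range(len(x)), key=lambda i: attr[i], reverse=True)[:k]
    return [0.0 if i in top else xi for i, xi in enumerate(x)]

# Hypothetical weights where input 0 drives the wrong token.
W = [[2.0, 1.0, 0.1, 0.1],   # wrong token's logit weights
     [0.1, 0.1, 1.0, 1.0]]   # correct token's logit weights
x = [1.0, 1.0, 1.0, 1.0]

before = logit_gap(W, x, wrong=0, correct=1)   # positive: wrong token wins
masked = mask_top_k(W, x, 0, 1, k=1)
after = logit_gap(W, masked, wrong=0, correct=1)
print(before > 0, after < 0)  # prints: True True
```

Masking a single top-attributed input flips the gap's sign here, mirroring the post's observation that ~2 masked tokens suffice on average in practice.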
Resources
- Paper: https://arxiv.org/abs/2604.17761
- Code: https://github.com/microsoft/Debug-XAI
- Colab demo: https://colab.research.google.com/drive/19-n4e1pB5JSKLWwds3QQlGfT2vcjH73e
Happy to hear feedback, especially on failure cases where coarse hidden-state attribution falls short and finer-grained (neuron/circuit-level) analysis is needed!