Papers
arxiv:2604.17761

Contrastive Attribution in the Wild: An Interpretability Analysis of LLM Failures on Realistic Benchmarks

Published on Apr 20 · Submitted by tan on Apr 22
Authors:

Abstract

Contrastive attribution methods for analyzing large language model failures show mixed effectiveness across different benchmarks and model sizes.

AI-generated summary

Interpretability tools are increasingly used to analyze failures of Large Language Models (LLMs), yet prior work largely focuses on short prompts or toy settings, leaving their behavior on commonly used benchmarks underexplored. To address this gap, we study contrastive, LRP-based attribution as a practical tool for analyzing LLM failures in realistic settings. We formulate failure analysis as contrastive attribution, attributing the logit difference between an incorrect output token and a correct alternative to input tokens and internal model states, and introduce an efficient extension that enables construction of cross-layer attribution graphs for long-context inputs. Using this framework, we conduct a systematic empirical study across benchmarks, comparing attribution patterns across datasets, model sizes, and training checkpoints. Our results show that this token-level contrastive attribution can yield informative signals in some failure cases, but is not universally applicable, highlighting both its utility and its limitations for realistic LLM failure analysis. Our code is available at: https://aka.ms/Debug-XAI.

Community

Paper author Paper submitter

TL;DR: We study whether contrastive, LRP-based attribution can explain LLM failures on real benchmarks (not toy prompts). We attribute the logit difference between an incorrect token and a correct alternative back to input tokens and internal hidden states, and scale it to long contexts via a batch-packed multi-target backprop trick.
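The core recipe (attribute the wrong-minus-correct logit gap back to input tokens) can be illustrated on a toy linear model. This is a hedged sketch using gradient-times-input, which is exact for a linear scorer; the paper itself uses AttnLRP on real transformers, and all names and shapes below are illustrative assumptions.

```python
import numpy as np

# Toy "model": sum-pooled token embeddings projected to vocab logits.
rng = np.random.default_rng(0)
d, vocab = 8, 5
E = rng.normal(size=(vocab, d))      # embedding table (tied unembedding)

def logits(token_ids):
    h = E[token_ids].sum(axis=0)     # sum-pool the input token embeddings
    return E @ h                     # vocab logits

def contrastive_attribution(token_ids, wrong, correct):
    """Attribute logits[wrong] - logits[correct] to each input token.

    For this linear model, gradient x input is exact: the relevance of
    token t is (E[wrong] - E[correct]) . E[token_ids[t]].
    """
    direction = E[wrong] - E[correct]    # gradient of the logit gap w.r.t. h
    return E[token_ids] @ direction      # per-token relevance scores

tokens = np.array([1, 3, 0, 2])
wrong, correct = 4, 2
rel = contrastive_attribution(tokens, wrong, correct)
gap = logits(tokens)[wrong] - logits(tokens)[correct]
# Conservation: per-token relevances sum exactly to the logit gap.
assert np.isclose(rel.sum(), gap)
```

The conservation check at the end (relevances summing to the explained quantity) is the same sanity property LRP-style methods aim for on nonlinear models.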

🔑 What's new

  • Problem reframing: LLM failure analysis as token-level contrastive attribution, i.e. explaining why the model prefers the wrong token over a correct contrast token.
  • Efficient AttnLRP extension: cross-layer hidden-state attribution graphs for long-context inputs, reducing backward passes from O(n) to O(n/B). Handles inputs up to ~12K tokens (GAIA2).
  • Systematic empirical study across 4 realistic benchmarks (IFEval, GAIA2, MATH, EvalPlus), multiple Qwen3 sizes (0.6B/1.7B/4B/8B), and Olmo-3-7B-Think training checkpoints (SFT โ†’ DPO โ†’ RLVR).
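The batching trick behind the O(n) → O(n/B) reduction can be sketched in a few lines: instead of one backward pass per attribution target, pack B targets into the batch dimension and run one vectorized pass per chunk. A hedged toy version (plain NumPy matvecs standing in for backward passes; all names and shapes are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
n_targets, d, seq = 6, 8, 10
X = rng.normal(size=(seq, d))            # input token embeddings
G = rng.normal(size=(n_targets, d))      # per-target output gradients

def attribute_looped(X, G):
    # Naive: one "backward pass" (here a matvec) per target -> O(n) passes.
    return np.stack([X @ g for g in G])

def attribute_packed(X, G, B=3):
    # Packed: process B targets per pass -> O(n / B) passes.
    out = []
    for i in range(0, len(G), B):
        out.append(X @ G[i:i + B].T)     # one pass covers B targets at once
    return np.concatenate(out, axis=1).T

# Both routes produce identical per-target, per-token attributions.
assert np.allclose(attribute_looped(X, G), attribute_packed(X, G))
```

B trades memory for passes: the packed variant holds B copies of the computation in flight, which is what makes ~12K-token inputs tractable.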

📊 Key findings

  1. Underweighting Relevant Tokens (URT) is the dominant failure mode across benchmarks.
  2. Attribution graphs expose layer-wise dynamics invisible to input-level analysis.
  3. Model scaling corrects failures via systematic, interpretable shifts, notably more negative relevance on instruction tokens.
  4. Training (especially early SFT + DPO) progressively reshapes attribution patterns toward correct behavior.
  5. Honest limitation: a non-trivial fraction of math failures remain unexplained at the hidden-state level; neuron-level analysis is needed.

🛠 Practical implications

  • Masking only ~2 top-attributed tokens on average flips the wrong prediction → cheap prompt debugging.
  • Attribution trajectories across checkpoints → training monitoring signal.
  • Token-level importance → alignment signal (e.g., token-level DPO).
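The first implication (mask the top-attributed tokens, see if the prediction flips) is easy to sketch on the same toy linear setup. Hedged illustration only: a real debugging loop would mask tokens in the prompt and re-run the LLM, whereas here linearity lets the effect be computed exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab, seq = 8, 5, 6
E = rng.normal(size=(vocab, d))
tokens = rng.integers(0, vocab, size=seq)
wrong, correct = 4, 2

def gap(token_ids):
    # Logit gap wrong - correct under a sum-pooled linear scorer.
    return (E[wrong] - E[correct]) @ E[token_ids].sum(axis=0)

# Per-token contrastive relevance (gradient x input, exact here).
rel = E[tokens] @ (E[wrong] - E[correct])
top2 = np.argsort(rel)[-2:]          # the two most wrong-supporting tokens
masked = np.delete(tokens, top2)     # "mask" = drop them from the input

# By linearity, removing those tokens subtracts exactly their relevance
# from the logit gap; a large positive top-2 relevance means masking
# them can flip the preference back toward the correct token.
assert np.isclose(gap(masked), gap(tokens) - rel[top2].sum())
```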

🔗 Resources

Happy to hear feedback, especially on failure cases where coarse hidden-state attribution falls short and finer-grained (neuron/circuit-level) analysis is needed!

