arxiv:2602.10210

How Much Reasoning Do Retrieval-Augmented Models Add beyond LLMs? A Benchmarking Framework for Multi-Hop Inference over Hybrid Knowledge

Published on Feb 10 · Submitted by Junhong Lin on Feb 18

Abstract

AI-generated summary

HybridRAG-Bench evaluates retrieval-intensive multi-hop reasoning in large language models by combining unstructured text and structured knowledge graphs from recent scientific literature, providing a contamination-aware benchmark that distinguishes genuine retrieval and reasoning from parametric recall.

Large language models (LLMs) continue to struggle with knowledge-intensive questions that require up-to-date information and multi-hop reasoning. Augmenting LLMs with hybrid external knowledge, such as unstructured text and structured knowledge graphs, offers a promising alternative to costly continual pretraining. As such, reliable evaluation of their retrieval and reasoning capabilities becomes critical. However, many existing benchmarks increasingly overlap with LLM pretraining data, which means answers or supporting knowledge may already be encoded in model parameters, making it difficult to distinguish genuine retrieval and reasoning from parametric recall. We introduce HybridRAG-Bench, a framework for constructing benchmarks to evaluate retrieval-intensive, multi-hop reasoning over hybrid knowledge. HybridRAG-Bench automatically couples unstructured text and structured knowledge graph representations derived from recent scientific literature on arXiv, and generates knowledge-intensive question-answer pairs grounded in explicit reasoning paths. The framework supports flexible domain and time-frame selection, enabling contamination-aware and customizable evaluation as models and knowledge evolve. Experiments across three domains (artificial intelligence, governance and policy, and bioinformatics) demonstrate that HybridRAG-Bench rewards genuine retrieval and reasoning rather than parametric recall, offering a principled testbed for evaluating hybrid knowledge-augmented reasoning systems. We release our code and data at github.com/junhongmit/HybridRAG-Bench.

Community

Paper submitter

Modern LLM systems increasingly rely on retrieval-augmented generation (RAG) and knowledge-graph-augmented reasoning (KG-RAG) to handle knowledge-intensive tasks. But how much reasoning do these systems truly add beyond the base LLM’s parametric knowledge?

We introduce HybridRAG-Bench, a contamination-aware benchmarking framework for evaluating retrieval-intensive, multi-hop reasoning over hybrid knowledge (text + structured graphs). Rather than releasing a static dataset, we provide a reusable benchmark construction pipeline that generates challenging, reasoning-grounded QA pairs from time-scoped scientific corpora.
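To make the pipeline shape concrete, here is a minimal sketch under toy assumptions (the corpus, the triples, and the question template below are invented for illustration and are not the actual HybridRAG-Bench code): scope a corpus by publication date, pool the extracted triples, and turn an explicit two-hop path through a bridge entity into a QA pair.

```python
from datetime import date

# Toy corpus standing in for recent arXiv abstracts; contents are made up for illustration.
CORPUS = [
    {"id": "2602.00001", "published": date(2026, 2, 3),
     "triples": [("MethodA", "evaluated_on", "BenchX"),
                 ("BenchX", "covers_domain", "bioinformatics")]},
    {"id": "2412.00042", "published": date(2024, 12, 1),
     "triples": [("MethodB", "extends", "MethodA")]},
]

def time_scoped(corpus, start, end):
    """Keep only documents published inside the benchmark's time frame."""
    return [doc for doc in corpus if start <= doc["published"] <= end]

def two_hop_questions(docs):
    """Chain triples that share a bridge entity into explicit two-hop QA pairs."""
    triples = [t for doc in docs for t in doc["triples"]]
    for s1, r1, o1 in triples:
        for s2, r2, o2 in triples:
            if o1 == s2 and (s1, r1, o1) != (s2, r2, o2):
                question = (f"Starting from {s1}, follow '{r1}' and then '{r2}'. "
                            f"Which entity do you reach?")
                yield {"question": question, "answer": o2,
                       "reasoning_path": [(s1, r1, o1), (s2, r2, o2)]}

recent = time_scoped(CORPUS, date(2026, 1, 1), date(2026, 3, 1))
for qa in two_hop_questions(recent):
    print(qa["question"], "->", qa["answer"])
```

The real framework extracts triples and phrases questions with far more care; the point of the sketch is only that every generated question is tied to an explicit reasoning path and a chosen time window.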

😆 Takeaways:

1️⃣ Benchmark contamination hides true reasoning gaps.
Many widely used QA benchmarks overlap heavily with LLM pretraining data, making it difficult to distinguish retrieval-based reasoning from parametric recall. When evaluated on temporally controlled, contamination-aware data, LLM-only performance drops substantially.
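A simple way to see what "contamination-aware" buys: if every source document postdates a model's training cutoff, a correct answer cannot come from parametric recall of that document. A minimal sketch of such a filter (the cutoff dates below are placeholders, not authoritative values for any real model):

```python
from datetime import date

# Placeholder training cutoffs -- check each model card for real values.
MODEL_CUTOFFS = {
    "model-a": date(2024, 10, 1),
    "model-b": date(2025, 4, 1),
}

def contamination_safe(docs, model_name, cutoffs=MODEL_CUTOFFS):
    """Keep only documents the named model cannot have seen during pretraining."""
    cutoff = cutoffs[model_name]
    return [doc for doc in docs if doc["published"] > cutoff]

docs = [
    {"id": "2602.10210", "published": date(2026, 2, 10)},
    {"id": "2301.00001", "published": date(2023, 1, 1)},
]
print([doc["id"] for doc in contamination_safe(docs, "model-b")])  # ['2602.10210']
```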

2️⃣ Retrieval alone is not enough.
Text-based RAG improves over LLM-only prompting, but struggles on structured multi-hop and relational questions. Naïve KG injection can even hurt performance due to noise.

3️⃣ Structured knowledge adds measurable reasoning benefits.
Hybrid methods that combine graph traversal with textual retrieval consistently outperform text-only RAG—especially on difficult multi-hop and compositional reasoning tasks.
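To illustrate the hybrid idea at toy scale (the graph, passages, and keyword scorer below are stand-ins, not the paper's retriever): expand the question's seed entities one hop in the knowledge graph, then use the expanded entity set to rank text passages, so the structured hop guides which textual evidence gets retrieved.

```python
# Toy knowledge graph: entity -> list of (relation, neighbor). Contents are illustrative.
KG = {
    "HybridRAG-Bench": [("evaluates", "multi-hop reasoning"),
                        ("built_from", "arXiv papers")],
    "multi-hop reasoning": [("requires", "evidence integration")],
}

PASSAGES = [
    "Multi-hop reasoning requires integrating evidence across documents.",
    "arXiv papers are preprints spanning many scientific domains.",
    "Parametric recall can masquerade as reasoning on stale benchmarks.",
]

def graph_expand(entities, hops=1):
    """Collect entities reachable within `hops` steps of the seed set."""
    seen, frontier = set(entities), set(entities)
    for _ in range(hops):
        frontier = {nbr for e in frontier for _, nbr in KG.get(e, [])} - seen
        seen |= frontier
    return seen

def keyword_score(passage, terms):
    """Crude lexical overlap standing in for a BM25 or dense retriever."""
    text = passage.lower()
    return sum(term.lower() in text for term in terms)

def hybrid_retrieve(seed_entities, top_k=2):
    terms = graph_expand(seed_entities)                      # structured hop
    ranked = sorted(PASSAGES, key=lambda p: keyword_score(p, terms), reverse=True)
    return ranked[:top_k]                                    # graph-guided text evidence

print(hybrid_retrieve({"HybridRAG-Bench"}))
```

A real system would score with embeddings and traverse typed relations, but the structural point carries over: graph neighbors supply query terms and constraints that a text-only retriever would miss.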

4️⃣ Fine-grained question types reveal failure modes.
By breaking evaluation into single-hop, multi-hop, difficult multi-hop, counterfactual, and open-ended questions, HybridRAG-Bench surfaces distinct weaknesses in retrieval, evidence integration, and reasoning strategies that aggregate accuracy hides.
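A minimal sketch of that breakdown (the question types come from the list above; the records and the exact-match metric are illustrative placeholders):

```python
from collections import defaultdict

# Illustrative evaluation records: question type, gold answer, system prediction.
RESULTS = [
    {"type": "single-hop",          "gold": "BenchX",    "pred": "BenchX"},
    {"type": "multi-hop",           "gold": "GraphSAGE", "pred": "GCN"},
    {"type": "difficult multi-hop", "gold": "2026",      "pred": "2026"},
    {"type": "counterfactual",      "gold": "no",        "pred": "yes"},
]

def exact_match(pred, gold):
    return pred.strip().lower() == gold.strip().lower()

def per_type_accuracy(results):
    """Exact-match accuracy broken down by question type."""
    buckets = defaultdict(list)
    for r in results:
        buckets[r["type"]].append(exact_match(r["pred"], r["gold"]))
    return {qtype: sum(hits) / len(hits) for qtype, hits in buckets.items()}

aggregate = sum(exact_match(r["pred"], r["gold"]) for r in RESULTS) / len(RESULTS)
print(f"aggregate accuracy: {aggregate:.2f}")     # a single number...
for qtype, acc in per_type_accuracy(RESULTS).items():
    print(f"{qtype:>20}: {acc:.2f}")              # ...vs. where the system actually fails
```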

5️⃣ Reasoning gains are method-dependent—not just scale-dependent.
The performance gap between different retrieval/reasoning strategies is often larger than the gap from scaling model size alone, suggesting that system design matters as much as model scale.
