How Much Reasoning Do Retrieval-Augmented Models Add beyond LLMs? A Benchmarking Framework for Multi-Hop Inference over Hybrid Knowledge
Abstract
HybridRAG-Bench evaluates retrieval-intensive multi-hop reasoning in large language models by combining unstructured text and structured knowledge graphs from recent scientific literature, providing a contamination-aware benchmark that distinguishes genuine retrieval and reasoning from parametric recall.
Large language models (LLMs) continue to struggle with knowledge-intensive questions that require up-to-date information and multi-hop reasoning. Augmenting LLMs with hybrid external knowledge, such as unstructured text and structured knowledge graphs, offers a promising alternative to costly continual pretraining, so reliable evaluation of their retrieval and reasoning capabilities becomes critical. However, many existing benchmarks increasingly overlap with LLM pretraining data, meaning answers or supporting knowledge may already be encoded in model parameters, which makes it difficult to distinguish genuine retrieval and reasoning from parametric recall. We introduce HybridRAG-Bench, a framework for constructing benchmarks that evaluate retrieval-intensive, multi-hop reasoning over hybrid knowledge. HybridRAG-Bench automatically couples unstructured text with structured knowledge graph representations derived from recent scientific literature on arXiv, and generates knowledge-intensive question-answer pairs grounded in explicit reasoning paths. The framework supports flexible domain and time-frame selection, enabling contamination-aware and customizable evaluation as models and knowledge evolve. Experiments across three domains (artificial intelligence, governance and policy, and bioinformatics) demonstrate that HybridRAG-Bench rewards genuine retrieval and reasoning rather than parametric recall, offering a principled testbed for evaluating hybrid knowledge-augmented reasoning systems. We release our code and data at github.com/junhongmit/HybridRAG-Bench.
Community
Modern LLM systems increasingly rely on retrieval-augmented generation (RAG) and knowledge-graph-augmented reasoning (KG-RAG) to handle knowledge-intensive tasks. But how much reasoning do these systems truly add beyond the base LLM’s parametric knowledge?
We introduce HybridRAG-Bench, a contamination-aware benchmarking framework for evaluating retrieval-intensive, multi-hop reasoning over hybrid knowledge (text + structured graphs). Rather than releasing a static dataset, we provide a reusable benchmark construction pipeline that generates challenging, reasoning-grounded QA pairs from time-scoped scientific corpora.
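To make the time-scoping idea concrete, here is a minimal sketch of the corpus-selection step: keep only papers submitted after the evaluated model's training cutoff and within the chosen arXiv categories. The metadata file, field names, and cutoff date are illustrative assumptions, not the framework's actual interface.

```python
# Minimal sketch: build a time-scoped, domain-scoped arXiv slice so that every
# source paper postdates the evaluated model's training cutoff.
# File name, field names, and cutoff are assumptions for illustration only.
import json
from datetime import date

MODEL_CUTOFF = date(2024, 10, 1)        # assumed knowledge cutoff of the model under test
DOMAIN_CATEGORIES = {"cs.AI", "cs.CL"}  # example categories for an "AI" domain slice

def load_time_scoped_corpus(metadata_path: str):
    """Yield papers submitted after the cutoff and within the chosen domain."""
    with open(metadata_path, encoding="utf-8") as f:
        for line in f:                   # assumes one JSON record per line
            paper = json.loads(line)
            submitted = date.fromisoformat(paper["submitted"][:10])
            if submitted > MODEL_CUTOFF and DOMAIN_CATEGORIES & set(paper["categories"]):
                yield paper

corpus = list(load_time_scoped_corpus("arxiv_metadata.jsonl"))
print(f"{len(corpus)} candidate papers postdate the model's cutoff")
```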
😆 Takeaways:
1️⃣ Benchmark contamination hides true reasoning gaps.
Many widely used QA benchmarks overlap heavily with LLM pretraining data, making it difficult to distinguish retrieval-based reasoning from parametric recall. When evaluated on temporally controlled, contamination-aware data, LLM-only performance drops substantially.
2️⃣ Retrieval alone is not enough.
Text-based RAG improves over LLM-only prompting, but struggles on structured multi-hop and relational questions. Naïve KG injection can even hurt performance due to noise.
3️⃣ Structured knowledge adds measurable reasoning benefits.
Hybrid methods that combine graph traversal with textual retrieval consistently outperform text-only RAG, especially on difficult multi-hop and compositional reasoning tasks (see the hybrid retrieval sketch after this list).
4️⃣ Fine-grained question types reveal failure modes.
By breaking evaluation into single-hop, multi-hop, difficult multi-hop, counterfactual, and open-ended questions, HybridRAG-Bench surfaces distinct weaknesses in retrieval, evidence integration, and reasoning strategies that aggregate accuracy hides (see the per-type scoring sketch after this list).
5️⃣ Reasoning gains are method-dependent—not just scale-dependent.
The performance gap between different retrieval/reasoning strategies is often larger than the gap from scaling model size alone, suggesting that system design matters as much as model scale.
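For takeaway 3️⃣, the sketch below shows one way a hybrid retriever can merge dense text passages with one-hop knowledge-graph triples into a single evidence context. The retriever, triple store, and entity linker are assumed components, not the released HybridRAG-Bench implementation.

```python
# Minimal sketch of hybrid retrieval: combine text passages with one-hop KG
# facts around entities mentioned in the question. All components passed in
# are assumed interfaces, not the paper's actual code.
def hybrid_retrieve(question: str,
                    text_retriever,      # assumed object with .search(query, k) -> list[str]
                    kg: dict,            # entity id -> list of (head, relation, tail) triples
                    link_entities,       # assumed callable: question -> iterable of entity ids
                    k_passages: int = 5) -> str:
    """Build one evidence context from retrieved passages plus one-hop KG facts."""
    passages = text_retriever.search(question, k=k_passages)
    triples = []
    for entity in link_entities(question):
        triples.extend(kg.get(entity, []))   # one-hop expansion; deeper hops would recurse here
    facts = [f"({h}, {r}, {t})" for h, r, t in triples]
    return "\n".join(["[Passages]", *passages, "[KG facts]", *facts])
```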
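For takeaway 4️⃣, a per-type scoring sketch: aggregate accuracy separately for each question category so that weaknesses on, say, difficult multi-hop questions are not averaged away. The record schema and the exact-match metric are illustrative assumptions.

```python
# Minimal sketch of a per-question-type accuracy breakdown.
# The 'type'/'prediction'/'answer' record fields are assumed, not the benchmark's schema.
from collections import defaultdict

QUESTION_TYPES = ["single-hop", "multi-hop", "difficult multi-hop",
                  "counterfactual", "open-ended"]

def accuracy_by_type(records):
    """records: dicts with 'type', 'prediction', and 'answer' keys."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["type"]] += 1
        correct[r["type"]] += int(r["prediction"].strip().lower() == r["answer"].strip().lower())
    return {t: correct[t] / total[t] for t in QUESTION_TYPES if total[t]}
```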
This is an automated message from the Librarian Bot. I found the following papers similar to this paper, recommended by the Semantic Scholar API:
- Fine-Tuning vs. RAG for Multi-Hop Question Answering with Novel Knowledge (2026)
- CompactRAG: Reducing LLM Calls and Token Overhead in Multi-Hop Question Answering (2026)
- M3KG-RAG: Multi-hop Multimodal Knowledge Graph-enhanced Retrieval-Augmented Generation (2025)
- A Stepwise-Enhanced Reasoning Framework for Large Language Models Based on External Subgraph Generation (2025)
- RPO-RAG: Aligning Small LLMs with Relation-aware Preference Optimization for Knowledge Graph Question Answering (2026)
- N2N-GQA: Noise-to-Narrative for Graph-Based Table-Text Question Answering Using LLMs (2026)
- Use Graph When It Needs: Efficiently and Adaptively Integrating Retrieval-Augmented Generation with Graphs (2026)