arxiv:2603.03293

SE-Search: Self-Evolving Search Agent via Memory and Dense Reward

Published on Feb 6
Abstract

The Self-Evolving Search agent improves retrieval-augmented generation by refining search behavior through memory purification, atomic query training, and dense rewards, yielding better factual accuracy.

AI-generated summary

Retrieval-augmented generation (RAG) reduces hallucinations and factual errors in large language models (LLMs) by conditioning generation on retrieved external knowledge. Recent search agents further cast RAG as an autonomous, multi-turn information-seeking process. However, existing methods often accumulate irrelevant or noisy documents and rely on sparse reinforcement learning signals. We propose SE-Search, a Self-Evolving search agent that improves online search behavior through three components: memory purification, atomic query training, and dense rewards. SE-Search follows a Think-Search-Memorize strategy that retains salient evidence while filtering irrelevant content. Atomic query training promotes shorter and more diverse queries, improving evidence acquisition. Dense rewards provide fine-grained feedback that speeds up training. Experiments on single-hop and multi-hop question answering benchmarks show that SE-Search-3B outperforms strong baselines, yielding a 10.8-point absolute improvement and a 33.8% relative gain over Search-R1. We will make the code and model weights publicly available upon acceptance.
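The components the summary names can be illustrated with a minimal sketch. Everything below is hypothetical: the `Memory` class, the `relevance` scorer, the purification threshold, and the `dense_reward` shape are illustrative stand-ins, since the paper's code is not yet released. It shows the general idea of memory purification (keep only salient evidence per search turn) and dense rewards (per-step bonuses on top of the sparse outcome reward).

```python
# Illustrative sketch (not the paper's implementation) of memory
# purification and dense rewards in a Think-Search-Memorize loop.
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Keeps only evidence whose relevance clears a threshold (purification)."""
    threshold: float = 0.3
    evidence: list = field(default_factory=list)

    def memorize(self, docs, relevance):
        # Memory purification: retain salient documents, filter out noise.
        for doc in docs:
            if relevance(doc) >= self.threshold:
                self.evidence.append(doc)

def dense_reward(step_hits, final_correct, alpha=0.1):
    """Dense reward: a small per-step bonus for useful retrievals plus the
    usual sparse outcome reward, instead of the outcome reward alone."""
    return alpha * sum(step_hits) + (1.0 if final_correct else 0.0)

# Toy corpus and term-overlap scorer standing in for a real retriever/judge.
corpus = {
    "d1": "Paris is the capital of France.",
    "d2": "Bananas are yellow.",
    "d3": "France is in Western Europe.",
}
question_terms = {"paris", "capital", "france"}

def relevance(doc):
    words = set(doc.lower().replace(".", "").split())
    return len(words & question_terms) / len(question_terms)

memory = Memory(threshold=0.3)
# One search turn on an atomic query: memorize the purified results.
memory.memorize(corpus.values(), relevance)
# Two steps each retrieved useful evidence, and the final answer was correct.
reward = dense_reward(step_hits=[1, 1], final_correct=True)
```

In this toy run the off-topic document is filtered out of memory, and the reward combines the per-step signal with the final-answer signal rather than relying on the sparse outcome alone.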
