DLLM-Searcher: Adapting Diffusion Large Language Model for Search Agents
Abstract
DLLM-Searcher adapts Diffusion Large Language Models into search agents by strengthening their reasoning and tool-calling capabilities and reducing end-to-end latency through a parallel reasoning paradigm.
Recently, Diffusion Large Language Models (dLLMs) have demonstrated unique efficiency advantages, enabled by their inherently parallel decoding mechanism and flexible generation paradigm. Meanwhile, despite the rapid advancement of Search Agents, their practical deployment is constrained by a fundamental limitation, which we term the 1) Latency Challenge: the serial execution of multi-round reasoning, tool calling, and tool-response waiting under the ReAct agent paradigm induces severe end-to-end latency. Intuitively, dLLMs could leverage their distinctive strengths to improve the operational efficiency of agents under the ReAct paradigm. In practice, however, existing dLLM backbones face a second obstacle, the 2) Agent Ability Challenge: existing dLLMs exhibit remarkably weak reasoning and tool-calling capabilities, preventing these advantages from being effectively realized. In this paper, we propose DLLM-Searcher, an optimization framework for dLLM-based Search Agents. To address the Agent Ability Challenge, we design a two-stage post-training pipeline comprising Agentic Supervised Fine-Tuning (Agentic SFT) and Agentic Variance-Reduced Preference Optimization (Agentic VRPO), which strengthens the backbone dLLM's information-seeking and reasoning capabilities. To mitigate the Latency Challenge, we exploit the flexible generation mechanism of dLLMs and propose a novel agent paradigm termed Parallel-Reasoning and Acting (P-ReAct). P-ReAct guides the model to prioritize decoding tool_call instructions, allowing it to keep reasoning while waiting for the tool's response. Experimental results demonstrate that DLLM-Searcher achieves performance comparable to mainstream LLM-based search agents, and that P-ReAct delivers approximately 15% inference acceleration. Our code is available at https://anonymous.4open.science/r/DLLM-Searcher-553C
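To make the P-ReAct idea concrete, below is a minimal, hypothetical sketch of decoding the tool_call span first and overlapping the tool's latency with continued denoising of the reasoning tokens. `MockDLLM`, `decode_tool_call`, `refine_reasoning`, and `call_tool` are illustrative stand-ins, not the paper's actual interfaces.

```python
import asyncio
import time

# Illustrative sketch only: MockDLLM, decode_tool_call, refine_reasoning, and
# call_tool are hypothetical stand-ins, not the paper's actual API.

class MockDLLM:
    """Stand-in for a diffusion LLM with a flexible (non-left-to-right) decoding order."""

    def decode_tool_call(self, context):
        # P-ReAct: decode the tool_call tokens first, leaving the surrounding
        # reasoning tokens still (partially) masked.
        tool_call = {"name": "search", "query": "example query"}
        partial_reasoning = "<think> "
        return tool_call, partial_reasoning

    def refine_reasoning(self, context, partial_reasoning):
        # One more denoising step over the still-masked reasoning tokens.
        time.sleep(0.1)  # simulated per-step decoding cost
        return partial_reasoning + "step "


async def call_tool(tool_call):
    """Placeholder retrieval tool (e.g., a web search backend)."""
    await asyncio.sleep(1.0)  # simulated tool latency
    return f"results for '{tool_call['query']}'"


async def p_react_step(dllm, context):
    # 1) Prioritize decoding the tool_call span.
    tool_call, reasoning = dllm.decode_tool_call(context)
    # 2) Fire the tool immediately instead of waiting for reasoning to finish.
    tool_future = asyncio.create_task(call_tool(tool_call))
    # 3) Keep denoising reasoning tokens while the tool response is in flight.
    while not tool_future.done():
        reasoning = dllm.refine_reasoning(context, reasoning)
        await asyncio.sleep(0)  # yield so the tool future can make progress
    observation = await tool_future
    # 4) Append the completed reasoning and the observation, then continue the agent loop.
    return f"{context}{reasoning}</think>\n<tool_response>{observation}</tool_response>\n"


if __name__ == "__main__":
    print(asyncio.run(p_react_step(MockDLLM(), "Question: ...\n")))
```

Under the standard ReAct paradigm, the reasoning steps and the tool call in this sketch would run strictly one after another; the overlap in step 3 is where the reported latency reduction would come from.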
Community
🧠🔍 DLLM-Searcher: Adapting Diffusion Large Language Models for Search Agents
Diffusion Large Language Models (dLLMs) offer flexible generation but struggle as search agents due to latency and weak tool-use capabilities. This paper introduces DLLM-Searcher, a framework that adapts dLLMs for efficient, agentic search and retrieval.
🚀 Key ideas:
- Parallel-Reasoning and Acting (P-ReAct):
Enables parallel reasoning and tool execution using diffusion’s non-autoregressive generation, significantly reducing inference latency.
- Agent-oriented post-training (see the sketch after this list):
A two-stage pipeline with Agentic Supervised Fine-Tuning (SFT) + Agentic Variance-Reduced Preference Optimization (VRPO) improves reasoning structure, tool calling, and search reliability.
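As a rough, hypothetical illustration of how the two post-training stages could be orchestrated over ReAct-style trajectories: the `Trajectory` format and the `sft_step` / `preference_step` calls below are assumptions for illustration, not the paper's actual training recipe.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Trajectory:
    """A ReAct-style rollout: interleaved reasoning, tool calls, and tool responses."""
    # e.g. {"role": "assistant", "content": "<think>...</think><tool_call>...</tool_call>"}
    messages: List[Dict[str, str]]

def agentic_sft(model, trajectories: List[Trajectory], epochs: int = 1):
    """Stage 1 (Agentic SFT): supervised fine-tuning on curated agentic
    trajectories, teaching the dLLM the reasoning + tool-calling format."""
    for _ in range(epochs):
        for traj in trajectories:
            model.sft_step(traj.messages)  # hypothetical training call
    return model

def agentic_vrpo(model, preference_pairs: List[Tuple[Trajectory, Trajectory]], epochs: int = 1):
    """Stage 2 (Agentic VRPO): preference optimization over (preferred,
    dispreferred) trajectory pairs; the variance-reduction details specific to
    dLLM likelihood estimation are described in the paper, not reproduced here."""
    for _ in range(epochs):
        for chosen, rejected in preference_pairs:
            model.preference_step(chosen.messages, rejected.messages)  # hypothetical
    return model
```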
📊 Results:
- Competitive performance with strong autoregressive LLM-based search agents on multi-hop retrieval tasks
- Approximately 15% speedup in end-to-end inference with P-ReAct
💡 Why it matters:
DLLM-Searcher shows that diffusion LLMs can be practical and efficient search agents, opening a new direction for low-latency, agentic information retrieval systems.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- DLLM Agent: See Farther, Run Faster (2026)
- SmartSearch: Process Reward-Guided Query Refinement for Search Agents (2026)
- The Bitter Lesson of Diffusion Language Models for Agentic Workflows: A Comprehensive Reality Check (2026)
- Dr. Zero: Self-Evolving Search Agents without Training Data (2026)
- ProRAG: Process-Supervised Reinforcement Learning for Retrieval-Augmented Generation (2026)
- Beyond Hard Masks: Progressive Token Evolution for Diffusion Language Models (2026)
- Video-o3: Native Interleaved Clue Seeking for Long Video Multi-Hop Reasoning (2026)