AI Agents

WideSeek: Advancing Wide Research via Multi-Agent Scaling

Ziyang Huang, Haolin Ren, Xiaowei Yuan, Jiawei Wang, Zhongtao Jiang, Kun Xu, Shizhu He, Jun Zhao, Kang Liu
Published: February 2, 2026

Abstract

Search intelligence is evolving from Deep Research to Wide Research, a paradigm essential for retrieving and synthesizing comprehensive information under complex constraints in parallel. However, progress in this field is impeded by the lack of dedicated benchmarks and optimization methodologies for search breadth. To address these challenges, we take a deep dive into Wide Research from two perspectives: Data Pipeline and Agent Optimization. First, we produce WideSeekBench, a General Broad Information Seeking (GBIS) benchmark constructed via a rigorous multi-phase data pipeline to ensure diversity across the target information volume, logical constraints, and domains. Second, we introduce WideSeek, a dynamic hierarchical multi-agent architecture that can autonomously fork parallel sub-agents based on task requirements. Furthermore, we design a unified training framework that linearizes multi-agent trajectories and optimizes the system using end-to-end RL. Experimental results demonstrate the effectiveness of WideSeek and multi-agent RL, highlighting that scaling the number of agents is a promising direction for advancing the Wide Research paradigm.
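The abstract describes an orchestrator that autonomously forks parallel sub-agents based on task requirements. A purely illustrative sketch of that pattern follows; the function names, the use of a thread pool, and the merge step are assumptions for exposition, not the paper's actual implementation:

```python
# Illustrative sketch of a hierarchical multi-agent fork-and-merge pattern:
# an orchestrator spawns one sub-agent per sub-query in parallel, then
# synthesizes their findings. Names and structure are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def sub_agent(sub_query: str) -> str:
    # Placeholder for a sub-agent's search-and-summarize loop.
    return f"findings for: {sub_query}"

def orchestrator(task: str, sub_queries: list[str]) -> str:
    # Fork one sub-agent per sub-query; a real system would decide the
    # number of sub-agents dynamically from the task's constraints.
    with ThreadPoolExecutor(max_workers=len(sub_queries)) as pool:
        results = list(pool.map(sub_agent, sub_queries))
    # Merge the parallel findings into a single report.
    return f"{task}:\n" + "\n".join(results)

report = orchestrator(
    "Collect information under complex constraints",
    ["sub-topic A", "sub-topic B"],
)
```

In this sketch the breadth of the search scales with the number of forked sub-agents, which mirrors the paper's claim that scaling the agent count advances the Wide Research paradigm.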

Keywords

Wide Research, Deep Research, search intelligence, WideSeekBench, GBIS, multi-phase data pipeline, dynamic hierarchical multi-agent architecture, parallel sub-agents, unified training framework, multi-agent trajectories, end-to-end RL
