
DeepSearchQA: Bridging the Comprehensiveness Gap for Deep Research Agents

Nikita Gupta, Riju Chatterjee, Lukas Haas, Connie Tao, Andrew Wang, Chang Liu, Hidekazu Oiwa, Elena Gribovskaya, Jan Ackermann, John Blitzer, Sasha Goldshtein, Dipanjan Das
Published: January 28, 2026
Authors: 12
Word Count: 7,356
Code: Includes code

DeepSearchQA benchmark highlights AI's retrieval challenges.

Abstract

We introduce DeepSearchQA, a 900-prompt benchmark for evaluating agents on difficult multi-step information-seeking tasks across 17 fields. Unlike traditional benchmarks that target single-answer retrieval or broad-spectrum factuality, DeepSearchQA features challenging, handcrafted tasks designed to evaluate an agent's ability to execute complex search plans and generate exhaustive answer lists. This shift in design explicitly tests three critical yet under-evaluated capabilities: 1) systematic collation of fragmented information from disparate sources, 2) de-duplication and entity resolution to ensure precision, and 3) reasoning about stopping criteria within an open-ended search space. Each task is structured as a causal chain, where discovering the information for one step depends on the successful completion of the previous one, stressing long-horizon planning and context retention. All tasks are grounded in the open web with objectively verifiable answer sets. Our comprehensive evaluation of state-of-the-art agent architectures reveals significant performance limitations: even the most advanced models struggle to balance high recall with precision. We observe distinct failure modes ranging from premature stopping (under-retrieval) to hedging behaviors, where agents cast an overly wide net of low-confidence answers to artificially boost recall. These findings highlight critical headroom in current agent designs and position DeepSearchQA as an essential diagnostic tool for driving future research toward more robust deep-research capabilities.
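The abstract does not spell out the benchmark's scoring function, but the recall/precision trade-off it describes can be made concrete with a minimal sketch: set-based scoring over normalized, de-duplicated answer lists. The `normalize` helper and the example names below are illustrative assumptions, not the benchmark's actual matcher.

```python
def normalize(entity: str) -> str:
    """Crude entity resolution: lowercase and collapse whitespace.

    Assumption: a real evaluator would need richer alias handling
    (e.g. "IBM" vs. "International Business Machines")."""
    return " ".join(entity.lower().split())


def score_answer_list(predicted: list[str], gold: list[str]) -> dict[str, float]:
    """Set-based precision/recall/F1 over normalized, de-duplicated answers."""
    pred = {normalize(p) for p in predicted}  # duplicate predictions collapse here
    ref = {normalize(g) for g in gold}
    tp = len(pred & ref)  # predicted entities that match the gold set
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(ref) if ref else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}


# Illustrative names only. "marie  curie" de-duplicates against "Marie Curie";
# the missed "Alan Turing" lowers recall (premature stopping), while the
# spurious "Isaac Newton" lowers precision (a hedged, low-confidence guess).
print(score_answer_list(
    predicted=["Marie Curie", "marie  curie", "Isaac Newton", "Ada Lovelace"],
    gold=["Marie Curie", "Ada Lovelace", "Alan Turing"],
))
```

An F1-style summary is a natural choice for list-valued answers because it penalizes under-retrieval and over-retrieval symmetrically, which is exactly the failure axis the abstract describes.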

Key Takeaways

  1. DeepSearchQA evaluates multi-step information-seeking tasks in AI agents.

  2. Reveals a significant "Comprehensiveness Gap" in current AI capabilities.

  3. Identifies the "Last Mile Problem" in state-of-the-art agent architectures.

Limitations

  • Outcome-based evaluation limits insight into agent reasoning.

  • Static web assumption restricts evaluation of volatile information.

Keywords

multi-step information-seeking tasks, causal chain, systematic collation, de-duplication, entity resolution, long-horizon planning, context retention, open-ended search space, agent architectures, recall, precision, premature stopping, hedging behaviors, deep-research capabilities
