AI Agents

Wiki Live Challenge: Challenging Deep Research Agents with Expert-Level Wikipedia Articles

Shaohan Wang, Benfeng Xu, Licheng Zhang, Mingxuan Du, Chiwei Zhu, Xiaorui Wang, Zhendong Mao, Yongdong Zhang
Published
February 2, 2026
Authors
8
Word Count
5,048
Code
Includes code

Wiki Live Challenge benchmarks Deep Research Agents rigorously.

Abstract

Deep Research Agents (DRAs) have demonstrated remarkable capabilities in autonomous information retrieval and report generation, showing great potential to assist humans in complex research tasks. Current evaluation frameworks primarily rely on LLM-generated references or LLM-derived evaluation dimensions. While these approaches offer scalability, they often lack the reliability of expert-verified content and struggle to provide objective, fine-grained assessments of critical dimensions. To bridge this gap, we introduce Wiki Live Challenge (WLC), a live benchmark that leverages the newest Wikipedia Good Articles (GAs) as expert-level references. Wikipedia's strict standards for neutrality, comprehensiveness, and verifiability pose a demanding challenge for DRAs, with GAs representing the pinnacle of those standards. We curate a dataset of 100 recent Good Articles and propose Wiki Eval, a comprehensive evaluation framework comprising a fine-grained evaluation method with 39 criteria for writing quality and rigorous metrics for factual verifiability. Extensive experiments on various DRA systems demonstrate a significant gap between current DRAs and human expert-level Wikipedia articles, validating the effectiveness of WLC in advancing agent research. We release our benchmark at https://github.com/WangShao2000/Wiki_Live_Challenge
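The Wiki Eval framework described above combines fine-grained writing-quality scoring (39 criteria) with factual-verifiability metrics. A minimal sketch of how such scores might be aggregated is shown below; the criterion labels, the [0, 1] scale, and the simple mean/ratio aggregation are illustrative assumptions, not the paper's exact definitions:

```python
from dataclasses import dataclass

@dataclass
class CriterionScore:
    name: str    # hypothetical criterion label, e.g. "neutrality"
    score: float # judge score in [0, 1] (assumed scale)

def writing_quality(scores: list[CriterionScore]) -> float:
    """Average over fine-grained criteria (the paper uses 39 of them)."""
    return sum(s.score for s in scores) / len(scores)

def factual_verifiability(supported: int, total_claims: int) -> float:
    """Fraction of extracted claims supported by cited sources.
    A plausible precision-style metric; the paper's definition may differ."""
    return supported / total_claims if total_claims else 0.0

# Toy example with 3 of the 39 criteria
scores = [CriterionScore("neutrality", 0.8),
          CriterionScore("comprehensiveness", 0.6),
          CriterionScore("verifiability", 0.7)]
print(round(writing_quality(scores), 2))        # 0.7
print(round(factual_verifiability(14, 20), 2))  # 0.7
```

In a real pipeline, each `CriterionScore` would come from an LLM or human judge comparing the agent's report against the matching Good Article, and the claim counts would come from an automated claim-extraction and source-checking step.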

Key Takeaways

  • 1

    Wiki Live Challenge uses Wikipedia Good Articles as expert-level references for evaluation.

  • 2

    Current DRAs fall short of expert-level writing and factual accuracy.

  • 3

    A significant gap exists between DRAs and human-authored articles.

Limitations

  • DRAs struggle with specialized and high-difficulty details.

  • Current models show poor factual coverage and reference accuracy.

Keywords

Deep Research Agents, Wikipedia Good Articles, live benchmark, evaluation framework, fine-grained evaluation, factual verifiability, agent research
