Multimodal AI

MMDeepResearch-Bench: A Benchmark for Multimodal Deep Research Agents

Peizhou Huang, Zixuan Zhong, Zhongwei Wan, Donghao Zhou, Samiul Alam, Xin Wang, Zexin Li, Zhihao Dou, Li Zhu, Jing Xiong, Chaofan Tao, Yan Xu, Dimitrios Dimitriadis, Tuo Zhang, Mi Zhang
arXiv ID
2601.12346
Published
January 18, 2026
Authors
15

Abstract

Deep Research Agents (DRAs) generate citation-rich reports via multi-step search and synthesis, yet existing benchmarks mainly target text-only settings or short-form multimodal QA, missing end-to-end multimodal evidence use. We introduce MMDeepResearch-Bench (MMDR-Bench), a benchmark of 140 expert-crafted tasks across 21 domains, where each task provides an image-text bundle to evaluate multimodal understanding and citation-grounded report generation. Compared to prior setups, MMDR-Bench emphasizes report-style synthesis with explicit evidence use, where models must connect visual artifacts to sourced claims and maintain consistency across narrative, citations, and visual references. We further propose a unified, interpretable evaluation pipeline: Formula-LLM Adaptive Evaluation (FLAE) for report quality, Trustworthy Retrieval-Aligned Citation Evaluation (TRACE) for citation-grounded evidence alignment, and Multimodal Support-Aligned Integrity Check (MOSAIC) for text-visual integrity, each producing fine-grained signals that support error diagnosis beyond a single overall score. Experiments across 25 state-of-the-art models reveal systematic trade-offs between generation quality, citation discipline, and multimodal grounding, highlighting that strong prose alone does not guarantee faithful evidence use and that multimodal integrity remains a key bottleneck for deep research agents.
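
The abstract describes FLAE, TRACE, and MOSAIC only at a high level. As a purely illustrative sketch (the class, field names, and per-axis averaging below are assumptions for exposition, not the paper's released evaluation code), the three evaluators could be kept as separate axes per task rather than collapsed into a single overall score, which is what makes the trade-offs the paper reports visible:

# Hypothetical sketch only: the paper does not publish its scoring formulas here,
# so every name and the simple per-axis averaging are illustrative assumptions.
from dataclasses import dataclass
from statistics import mean
from typing import Dict, List

@dataclass
class TaskEvaluation:
    task_id: str
    flae_report_quality: float   # FLAE: report-quality signal, assumed normalized to [0, 1]
    trace_citation_score: float  # TRACE: citation-grounded evidence alignment, assumed in [0, 1]
    mosaic_integrity: float      # MOSAIC: text-visual integrity, assumed in [0, 1]

def summarize(results: List[TaskEvaluation]) -> Dict[str, float]:
    """Aggregate per-task signals into per-axis means, kept separate so strong
    prose (FLAE) cannot mask weak citation discipline (TRACE) or weak
    multimodal grounding (MOSAIC)."""
    return {
        "FLAE": mean(r.flae_report_quality for r in results),
        "TRACE": mean(r.trace_citation_score for r in results),
        "MOSAIC": mean(r.mosaic_integrity for r in results),
    }

if __name__ == "__main__":
    demo = [
        TaskEvaluation("task-001", 0.82, 0.61, 0.55),
        TaskEvaluation("task-002", 0.78, 0.70, 0.48),
    ]
    print(summarize(demo))  # per-axis averages, not one overall number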

Keywords

multimodal evidence use, citation-grounded report generation, multimodal understanding, deep research agents, Formula-LLM Adaptive Evaluation, Trustworthy Retrieval-Aligned Citation Evaluation, Multimodal Support-Aligned Integrity Check
