Interactive Benchmarks

Baoqing Yue, Zihan Zhu, Yifan Zhang, Jichen Feng, Hufei Yang, Mengdi Wang

Published: March 5, 2026
Authors: 6
Word Count: 8,686
Code: Includes code

Interactive benchmarks evaluate AI reasoning through dialogue, revealing hidden capabilities static tests miss.

Abstract

Standard benchmarks have become increasingly unreliable due to saturation, subjectivity, and poor generalization. We argue that evaluating a model's ability to acquire information actively is essential to assessing its intelligence. We propose Interactive Benchmarks, a unified evaluation paradigm that assesses a model's reasoning ability through an interactive process under budget constraints. We instantiate this framework in two settings: Interactive Proofs, where models interact with a judge to deduce objective truths or answers in logic and mathematics; and Interactive Games, where models reason strategically to maximize long-horizon utilities. Our results show that interactive benchmarks provide a robust and faithful assessment of model intelligence, and they reveal substantial room for improvement in interactive scenarios. Project page: https://github.com/interactivebench/interactivebench
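
To make the paradigm concrete, below is a minimal Python sketch of one interactive episode in the spirit of the Interactive Proofs setting: a toy model queries a toy judge under a fixed budget before committing to an answer. All names here (NumberJudge, BinarySearchModel, QUERY_BUDGET) are illustrative assumptions, not the project's released code.

    # Minimal sketch of one interactive-benchmark episode.
    # Hypothetical interfaces; not the paper's released code.

    QUERY_BUDGET = 8  # hard cap on judge interactions per task


    class NumberJudge:
        """Toy judge: the hidden truth is an integer in [0, 100)."""

        def __init__(self, secret: int):
            self.secret = secret

        def answer(self, question: int) -> str:
            # Responds to threshold queries: "is the secret < question?"
            return "yes" if self.secret < question else "no"

        def grade(self, guess: int) -> float:
            # Objective score: exact match, no rubric or self-report.
            return 1.0 if guess == self.secret else 0.0


    class BinarySearchModel:
        """Toy 'model' that acquires information actively via binary search."""

        def __init__(self):
            self.lo, self.hi = 0, 100

        def ask(self):
            if self.hi - self.lo <= 1:
                return None  # ready to commit to an answer
            return (self.lo + self.hi) // 2

        def observe(self, question: int, reply: str):
            # Narrow the candidate interval using the judge's reply.
            if reply == "yes":
                self.hi = question
            else:
                self.lo = question

        def final_answer(self) -> int:
            return self.lo


    def run_episode(model, judge) -> float:
        """Interactive loop: query the judge under a budget, then be graded."""
        for _ in range(QUERY_BUDGET):
            question = model.ask()
            if question is None:
                break
            model.observe(question, judge.answer(question))
        return judge.grade(model.final_answer())


    print(run_episode(BinarySearchModel(), NumberJudge(secret=37)))  # -> 1.0

The design point this sketch illustrates matches the abstract's framing: the score comes from an objective grade by the judge, and the model's only route to it is active querying within the budget.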

Key Takeaways

  1. Interactive benchmarks measure AI reasoning by requiring models to actively query judges for information under budget constraints.

  2. Models that score zero percent accuracy on static tests recover substantial reasoning ability once they are allowed to acquire information through interactive dialogue.

  3. Interactive benchmarks reveal significant gaps in current AI evaluation methods, which overlook information-acquisition abilities.

Limitations

  • The dataset of only 46 situation puzzles may be too small for a comprehensive evaluation of model capabilities.

  • Interactive game benchmarks require environment setup and other agents, potentially limiting generalization to real-world deployment.

Keywords

interactive benchmarks, model intelligence, active information acquisition, reasoning ability, budget constraints, interactive proofs, interactive games, model evaluation
