From Perception to Action: An Interactive Benchmark for Vision Reasoning

Yuhao Wu, Maojia Song, Yihuai Lan, Lei Wang, Zhiqiang Hu, Yao Xiao, Heng Zhou, Weihua Zheng, Dylan Raharja, Soujanya Poria, Roy Ka-Wei Lee
Published February 24, 2026

Abstract

Understanding physical structure is essential for real-world applications such as embodied agents, interactive design, and long-horizon manipulation. Yet prevailing Vision-Language Model (VLM) evaluations still center on structure-agnostic, single-turn setups (e.g., VQA), which fail to assess an agent's ability to reason about how geometry, contact, and support relations jointly constrain what actions are possible in a dynamic environment. To address this gap, we introduce the Causal Hierarchy of Actions and Interactions (CHAIN) benchmark, an interactive, physics-driven 3D testbed designed to evaluate whether models can understand, plan, and execute structured action sequences grounded in physical constraints. CHAIN shifts evaluation from passive perception to active problem solving, spanning tasks such as interlocking mechanical puzzles and 3D stacking and packing. We conduct a comprehensive study of state-of-the-art VLMs and diffusion-based models under unified interactive settings. Our results show that even top-performing models struggle to internalize physical structure and causal constraints: they often fail to produce reliable long-horizon plans and cannot robustly translate perceived structure into effective actions. The project is available at https://social-ai-studio.github.io/CHAIN/.
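To make the contrast with single-turn VQA concrete, the sketch below shows what a multi-turn, interactive evaluation loop looks like in the abstract sense described above. It is a minimal illustration only: ToyStackEnv, RandomAgent, and run_episode are hypothetical placeholders, not CHAIN's actual interface, and the toy stacking rule stands in for the full 3D physics the benchmark uses.

```python
import random

class ToyStackEnv:
    """Toy stand-in for a physics-driven stacking task: build a stable
    tower of n blocks. Real CHAIN tasks involve 3D geometry, contact,
    and support relations; this toy keeps only the interaction pattern."""

    def __init__(self, n_blocks=3):
        self.n = n_blocks
        self.stack = []

    def reset(self):
        self.stack = []
        return self._observe(), "episode started"

    def step(self, block):
        # Toy "physical constraint": a block may only rest on a larger one.
        if not self.stack or block < self.stack[-1]:
            self.stack.append(block)
            feedback = f"placed block {block}"
        else:
            feedback = f"block {block} toppled"  # constraint violated
        done = len(self.stack) == self.n  # full, stable tower built
        return self._observe(), feedback, done

    def _observe(self):
        return tuple(self.stack)  # stand-in for a rendered image


class RandomAgent:
    """Placeholder for a VLM policy mapping (observation, feedback) -> action."""

    def __init__(self, n_blocks=3):
        self.n = n_blocks

    def act(self, obs, feedback):
        return random.randrange(self.n)


def run_episode(env, agent, max_steps=20):
    """Multi-turn rollout: each action is chosen given feedback from the
    previous one, unlike a single-turn VQA query."""
    obs, feedback = env.reset()
    for _ in range(max_steps):
        obs, feedback, done = env.step(agent.act(obs, feedback))
        if done:
            return True
    return False


if __name__ == "__main__":
    print("solved:", run_episode(ToyStackEnv(), RandomAgent()))
```

The essential property of this loop is that every action must be grounded in feedback from the previous one, which is exactly the capability a single-turn VQA query cannot probe.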

Keywords

Vision-Language Model, diffusion-based models, physical constraints, causal constraints, interactive 3D, structured action sequences, long-horizon planning
