
SpatiaLab: Can Vision-Language Models Perform Spatial Reasoning in the Wild?

Azmine Toushik Wasi, Wahid Faisal, Abdur Rahman, Mahfuz Ahmed Anik, Munem Shahriar, Mohsin Mahmud Topu, Sadia Tasnim Meem, Rahatun Nesa Priti, Sabrina Afroz Mitu, Md. Iqramul Hoque, Shahriyar Zaman Ridoy, Mohammed Eunus Ali, Majd Hawasly, Mohammad Raza, Md Rizwan Parvez
Published: February 3, 2026
Authors: 15
Word Count: 45,412
Code: Included

SpatiaLab benchmarks VLMs' spatial reasoning in real-world scenarios.

Abstract

Spatial reasoning is a fundamental aspect of human cognition, yet it remains a major challenge for contemporary vision-language models (VLMs). Prior work largely relied on synthetic or LLM-generated environments with limited task designs and puzzle-like setups, failing to capture the real-world complexity, visual noise, and diverse spatial relationships that VLMs encounter. To address this, we introduce SpatiaLab, a comprehensive benchmark for evaluating VLMs' spatial reasoning in realistic, unconstrained contexts. SpatiaLab comprises 1,400 visual question-answer pairs across six major categories: Relative Positioning, Depth & Occlusion, Orientation, Size & Scale, Spatial Navigation, and 3D Geometry, each with five subcategories, yielding 30 distinct task types. Each subcategory contains at least 25 questions, and each main category includes at least 200 questions, supporting both multiple-choice and open-ended evaluation. Experiments across diverse state-of-the-art VLMs, including open-source, closed-source, reasoning-focused, and specialized spatial reasoning models, reveal a substantial gap in spatial reasoning capabilities compared with humans. In the multiple-choice setup, InternVL3.5-72B achieves 54.93% accuracy versus 87.57% for humans. In the open-ended setting, all models show a performance drop of around 10-25%, with GPT-5-mini scoring highest at 40.93% versus 64.93% for humans. These results highlight key limitations in handling complex spatial relationships, depth perception, navigation, and 3D geometry. By providing a diverse, real-world evaluation framework, SpatiaLab exposes critical challenges and opportunities for advancing VLMs' spatial reasoning, offering a benchmark to guide future research toward robust, human-aligned spatial understanding. SpatiaLab is available at: https://spatialab-reasoning.github.io/.
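
To make the evaluation protocol concrete, the sketch below shows one way a multiple-choice run over SpatiaLab-style items could be scored per category. The item fields, the example question, and the `ask_vlm` stub are illustrative assumptions for this sketch, not the benchmark's official data schema or API.

```python
# Minimal sketch of a multiple-choice evaluation loop over SpatiaLab-style items.
# Field names and the ask_vlm stub are hypothetical, not the official SpatiaLab API.
from collections import defaultdict

# Hypothetical items: each has an image, a question, answer options,
# the gold option letter, and one of the six main categories.
items = [
    {
        "image": "kitchen_scene.jpg",
        "question": "Which object is closest to the camera?",
        "options": ["A) kettle", "B) toaster", "C) mug", "D) knife"],
        "answer": "C",
        "category": "Depth & Occlusion",
    },
    # ... remaining QA pairs (1,400 in the full benchmark)
]

def ask_vlm(image: str, question: str, options: list[str]) -> str:
    """Placeholder for a real VLM call; returns the predicted option letter."""
    return "A"  # stand-in prediction

correct = defaultdict(int)
total = defaultdict(int)
for item in items:
    pred = ask_vlm(item["image"], item["question"], item["options"])
    total[item["category"]] += 1
    if pred.strip().upper().startswith(item["answer"]):
        correct[item["category"]] += 1

for cat, n in total.items():
    print(f"{cat}: {100 * correct[cat] / n:.2f}% accuracy")
```

The open-ended setting would replace the option-letter match with a free-form answer check, which is why the abstract reports a further 10-25% drop under that protocol.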

Key Takeaways

  • Existing benchmarks oversimplify spatial reasoning tasks.

  • SpatiaLab introduces diverse, real-world spatial reasoning tasks.

  • State-of-the-art models lag significantly behind human performance.

Limitations

  • Current VLMs struggle with complex, real-world spatial reasoning.

  • Significant performance gap between models and humans.

Keywords

vision-language models, spatial reasoning, benchmark, visual question-answer pairs, real-world complexity, spatial relationships, depth perception, spatial navigation, 3D geometry
