
BagelVLA: Enhancing Long-Horizon Manipulation via Interleaved Vision-Language-Action Generation

Yucheng Hu, Jianke Zhang, Yuanfei Luo, Yanjiang Guo, Xiaoyu Chen, Xinshu Sun, Kun Feng, Qingzhou Lu, Sheng Chen, Yangang Zhang, Wei Li, Jianyu Chen
Published: February 10, 2026
Authors: 12
Word Count: 12,887

BagelVLA enables robots to plan linguistically, predict visually, and act precisely for complex multi-step manipulation tasks.

Abstract

Equipping embodied agents with the ability to reason about tasks, foresee physical outcomes, and generate precise actions is essential for general-purpose manipulation. While recent Vision-Language-Action (VLA) models leverage pre-trained foundation models, they typically focus on either linguistic planning or visual forecasting in isolation. These methods rarely integrate both capabilities to guide action generation, leading to suboptimal performance on complex, long-horizon manipulation tasks. To bridge this gap, we propose BagelVLA, a unified model that integrates linguistic planning, visual forecasting, and action generation within a single framework. Initialized from a pretrained unified understanding-and-generation model, BagelVLA is trained to interleave textual reasoning and visual prediction directly into the action execution loop. To couple these modalities efficiently, we introduce Residual Flow Guidance (RFG), which initializes the generative flow from the current observation and uses single-step denoising to extract predictive visual features, guiding action generation with minimal latency. Extensive experiments demonstrate that BagelVLA outperforms existing baselines by a significant margin on multiple simulated and real-world benchmarks, particularly on tasks requiring multi-stage reasoning.
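
To make the RFG idea concrete, here is a minimal sketch in PyTorch. It assumes a flow-matching image predictor (`velocity_net`) and an action decoder (`action_head`); those modules, their signatures, and the `return_features` hook are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import torch

@torch.no_grad()
def residual_flow_guidance(obs_latent, context, velocity_net, action_head, dt=1.0):
    """Sketch of Residual Flow Guidance (RFG): start the flow at the current
    observation and take a single denoising step toward the predicted future
    frame, using the predictor's intermediate features to guide actions."""
    # Initialize the flow at the current observation instead of Gaussian
    # noise, so the flow only has to cover the residual between the current
    # and future frames; one integration step then yields a usable forecast.
    z = obs_latent
    t = torch.zeros(z.shape[0], device=z.device)  # flow time at the start

    # Single denoising step (Euler): z_pred = z + v(z, t, context) * dt.
    # `return_features=True` is an assumed hook exposing the intermediate
    # feature map that serves as the predictive visual guidance signal.
    v, feats = velocity_net(z, t, context, return_features=True)
    z_pred = z + v * dt  # coarse one-step prediction of the future latent

    # Condition action generation on the predictive features (low latency:
    # only one forward pass of the predictor per control step).
    actions = action_head(feats, context)
    return actions, z_pred
```

Starting from the observation rather than pure noise is what makes a single step sufficient, which is where the minimal-latency claim in the abstract comes from.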

Key Takeaways

  1. BagelVLA unifies language understanding, visual prediction, and action generation into a single coordinated framework for robot manipulation.

  2. Interleaved planning breaks complex tasks into subtasks: the model generates a linguistic plan, predicts the future state, and executes precise actions in sequence (sketched after this list).

  3. Most existing VLAs struggle with multi-step reasoning because they treat vision, language, and control as separate problems rather than integrated capabilities.
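
The interleaved loop referenced in takeaway 2 can be summarized as control flow. This is a hedged sketch only, assuming a gym-style environment and a model exposing three heads (`generate_plan`, `forecast`, `act`) plus a subtask-termination check; all of these names are hypothetical, not the paper's API.

```python
def run_interleaved_episode(model, env, max_subtasks=8):
    """Plan -> predict -> act loop: decompose the goal into subtasks,
    forecast the post-subtask scene, then execute actions toward it."""
    obs = env.reset()
    for _ in range(max_subtasks):
        # 1) Linguistic planning: emit the next subtask in natural language.
        subtask = model.generate_plan(obs)      # e.g. "open the drawer"
        if subtask == "<done>":                 # assumed end-of-plan token
            break
        # 2) Visual forecasting: predict what the scene should look like
        #    once this subtask is complete.
        goal_image = model.forecast(obs, subtask)
        # 3) Action generation: run low-level control conditioned on both
        #    the plan text and the predicted future state.
        while not model.subtask_finished(obs, goal_image):
            action = model.act(obs, subtask, goal_image)
            obs, _, terminated, _ = env.step(action)  # classic 4-tuple API
            if terminated:
                return obs
    return obs
```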

Limitations

  • Some VLAs, such as RT-2, map observations directly to discrete action tokens, which limits precision in continuous control tasks.

  • Existing systems without explicit language planning struggle to break down complex goals into subtasks for sequential reasoning.

Keywords

Vision-Language-Action models, linguistic planning, visual forecasting, action generation, pretrained unified understanding, residual flow guidance, denoising, multi-stage reasoning
