Multimodal AI

LaViT: Aligning Latent Visual Thoughts for Multi-modal Reasoning

Linquan Wu, Tianxiang Jiang, Yifei Dong, Haoyu Yang, Fengji Zhang, Shichaang Meng, Ai Xuan, Linqi Song, Jacky Keung
arXiv ID: 2601.10129
Published: January 15, 2026
Authors: 9
Hugging Face Likes: 9
Comments: 2

Abstract

Current multimodal latent reasoning often relies on external supervision (e.g., auxiliary images), ignoring intrinsic visual attention dynamics. In this work, we identify a critical Perception Gap in distillation: student models frequently mimic a teacher's textual output while attending to fundamentally divergent visual regions, effectively relying on language priors rather than grounded perception. To bridge this, we propose LaViT, a framework that aligns latent visual thoughts rather than static embeddings. LaViT compels the student to autoregressively reconstruct the teacher's visual semantics and attention trajectories prior to text generation, employing a curriculum sensory gating mechanism to prevent shortcut learning. Extensive experiments show that LaViT significantly enhances visual grounding, achieving up to +16.9% gains on complex reasoning tasks and enabling a compact 3B model to outperform larger open-source variants and proprietary models like GPT-4o.
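The abstract describes the method only at a high level. The sketch below is one plausible way to realize the alignment it outlines: a KL term matching the student's attention over visual tokens to the teacher's, a reconstruction term on the teacher's latent visual states, and a simple linear schedule standing in for the curriculum sensory gating. All names, tensor shapes, and the schedule (perception_alignment_loss, student_attn, gate, etc.) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' code): one way to penalize the "Perception Gap"
# by aligning the student's attention over visual tokens with the teacher's and
# reconstructing the teacher's latent visual states before text generation.
# All tensor shapes and the curriculum schedule are assumptions for illustration.
import torch
import torch.nn.functional as F


def perception_alignment_loss(
    student_attn: torch.Tensor,    # (batch, n_vis_tokens) student attention logits on visual tokens
    teacher_attn: torch.Tensor,    # (batch, n_vis_tokens) teacher attention logits on visual tokens
    student_latent: torch.Tensor,  # (batch, n_vis_tokens, d) student's reconstructed visual states
    teacher_latent: torch.Tensor,  # (batch, n_vis_tokens, d) teacher's visual hidden states
    step: int,
    total_steps: int,
) -> torch.Tensor:
    """Attention-trajectory KL + latent reconstruction, weighted by a curriculum gate."""
    # KL(teacher || student) over the visual-token attention distribution.
    attn_kl = F.kl_div(
        torch.log_softmax(student_attn, dim=-1),
        torch.softmax(teacher_attn, dim=-1),
        reduction="batchmean",
    )
    # Cosine-based reconstruction of the teacher's visual semantics.
    recon = 1.0 - F.cosine_similarity(student_latent, teacher_latent, dim=-1).mean()
    # Curriculum "sensory gate" (assumed linear anneal): lean heavily on teacher
    # signals early, then taper so the student cannot shortcut via language priors.
    gate = max(0.0, 1.0 - step / total_steps)
    return gate * (attn_kl + recon)


if __name__ == "__main__":
    b, n, d = 2, 16, 64
    loss = perception_alignment_loss(
        student_attn=torch.randn(b, n),
        teacher_attn=torch.randn(b, n),
        student_latent=torch.randn(b, n, d),
        teacher_latent=torch.randn(b, n, d),
        step=100,
        total_steps=1000,
    )
    print(loss.item())
```

In this reading, the loss would be added to the usual text-generation objective during distillation; the paper's actual gating and trajectory formulation may differ.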
