InternVL-U: Democratizing Unified Multimodal Models for Understanding, Reasoning, Generation and Editing

Changyao Tian, Danni Yang, Guanzhou Chen, Erfei Cui, Zhaokai Wang, Yuchen Duan, Penghao Yin, Sitao Chen, Ganlin Yang, Mingxin Liu, Zirun Zhu, Ziqian Fan, Leyao Gu, Haomin Wang, Qi Wei, Jinhui Yin, Xue Yang, Zhihang Zhong, Qi Qin, Yi Xin, Bin Fu, Yihao Liu, Jiaye Ge, Qipeng Guo, Gen Luo, Hongsheng Li, Yu Qiao, Kai Chen, Hongjie Zhang
Published: March 10, 2026
Authors: 29
Word Count: 29,279
Code: Includes code

4B unified multimodal model outperforms 3× larger competitors in generation while maintaining strong understanding.

Abstract

Unified multimodal models (UMMs) that integrate understanding, reasoning, generation, and editing face inherent trade-offs between maintaining strong semantic comprehension and acquiring powerful generation capabilities. In this report, we present InternVL-U, a lightweight 4B-parameter UMM that democratizes these capabilities within a unified framework. Guided by the principles of unified contextual modeling and modality-specific modular design with decoupled visual representations, InternVL-U integrates a state-of-the-art Multimodal Large Language Model (MLLM) with a specialized MMDiT-based visual generation head. To further bridge the gap between aesthetic generation and high-level intelligence, we construct a comprehensive data synthesis pipeline targeting high-semantic-density tasks, such as text rendering and scientific reasoning, under a reasoning-centric paradigm that leverages Chain-of-Thought (CoT) reasoning to better align abstract user intent with fine-grained visual generation details. Extensive experiments demonstrate that InternVL-U achieves a superior performance–efficiency balance: despite using only 4B parameters, it consistently outperforms unified baselines more than 3× its scale, such as BAGEL (14B), on various generation and editing tasks, while retaining strong multimodal understanding and reasoning capabilities.
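
To make the modular design concrete, here is a minimal, self-contained PyTorch sketch of the decoupled-representation idea: a semantic vision encoder feeds the shared LLM context for understanding, while an MMDiT-style head, conditioned on that context, operates on a separate latent space for generation. Every module name and dimension below is an illustrative assumption, not the paper's actual component or interface.

```python
import torch
import torch.nn as nn

D_CTX, D_LAT = 1024, 64  # illustrative sizes, not InternVL-U's real config

class UnifiedSketch(nn.Module):
    """Unified contextual modeling with decoupled visual representations.
    Every submodule is a small stand-in for the real component."""
    def __init__(self):
        super().__init__()
        # Understanding path: semantic visual features (stand-in for a ViT encoder).
        self.semantic_encoder = nn.Linear(3 * 32 * 32, D_CTX)
        # Shared context model (stand-in for the 4B MLLM backbone).
        layer = nn.TransformerEncoderLayer(D_CTX, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        # Generation path (stand-in for the MMDiT visual generation head).
        self.gen_head = nn.Linear(D_CTX + D_LAT, D_LAT)

    def forward(self, text_emb, image, noisy_latent):
        # Fuse semantic visual tokens with text in one shared context.
        vis = self.semantic_encoder(image.flatten(1)).unsqueeze(1)
        ctx = self.backbone(torch.cat([text_emb, vis], dim=1))
        # Condition generation on the pooled context; the head works in its
        # own latent space, decoupled from the understanding-side features.
        cond = torch.cat([ctx.mean(dim=1), noisy_latent], dim=-1)
        return self.gen_head(cond)  # e.g., a flow-matching velocity estimate

model = UnifiedSketch()
velocity = model(torch.randn(2, 5, D_CTX),   # text embeddings
                 torch.randn(2, 3, 32, 32),  # input image
                 torch.randn(2, D_LAT))      # noisy generation latent
```

The point of the separation is that the understanding path can keep high-level semantic features while the generation head optimizes a pixel-faithful latent objective, so neither capability has to compromise for the other.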

Key Takeaways

  1. InternVL-U achieves unified multimodal capabilities with only 4B parameters, outperforming 14B models like BAGEL.

  2. The model uses a modality-specific architecture within a shared semantic space: Flow Matching for images and autoregressive modeling for text (see the sketch after this list).

  3. A comprehensive reasoning-centric data synthesis pipeline enables high-fidelity generation for text rendering and scientific reasoning tasks.
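
As a sketch of how the two objectives in takeaway 2 could share one training step, here is a hedged PyTorch example: text tokens take an autoregressive cross-entropy loss, while image latents take a rectified-flow matching loss. The function signature and variable names are assumptions for illustration, not the paper's interface.

```python
import torch
import torch.nn.functional as F

def joint_loss(text_logits, text_targets, predict_velocity, latents):
    """Combine an AR text loss with a flow-matching image loss in one step.
    `predict_velocity` stands in for the MMDiT head; all names are assumed."""
    # Text: next-token cross-entropy over the shared semantic context.
    ce = F.cross_entropy(text_logits.flatten(0, 1), text_targets.flatten())

    # Images: interpolate clean latents toward Gaussian noise at a random
    # time t and regress the constant velocity (noise - latents).
    noise = torch.randn_like(latents)
    t = torch.rand(latents.size(0), *([1] * (latents.dim() - 1)))
    x_t = (1 - t) * latents + t * noise
    fm = F.mse_loss(predict_velocity(x_t, t), noise - latents)
    return ce + fm

# Toy usage with a trivial stand-in velocity predictor.
loss = joint_loss(torch.randn(2, 6, 100),            # (batch, seq, vocab) logits
                  torch.randint(0, 100, (2, 6)),     # target token ids
                  lambda x, t: torch.zeros_like(x),  # stand-in MMDiT head
                  torch.randn(2, 16))                # clean image latents
```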

Limitations

  • Fully native UMMs lack consensus on an optimal design, with no single approach demonstrating a decisive performance advantage.

  • Balancing conflicting data distributions across modalities during joint training presents substantial engineering challenges.

Keywords

Unified multimodal models, Multimodal Large Language Model, MMDiT-based visual generation head, Chain-of-Thought, visual representations, modality-specific modular design, unified contextual modeling, text rendering, scientific reasoning, high-semantic-density tasks