
OneVision-Encoder: Codec-Aligned Sparsity as a Foundational Principle for Multimodal Intelligence

Feilong Tang, Xiang An, Yunyao Yan, Yin Xie, Bin Qin, Kaicheng Yang, Yifei Shen, Yuanhan Zhang, Chunyuan Li, Shikun Feng, Changrui Chen, Huajie Tan, Ming Hu, Manyuan Zhang, Bo Li, Ziyong Feng, Ziwei Liu, Zongyuan Ge, Jiankang Deng
Published: February 9, 2026
Authors: 19
Word Count: 14,400

OneVision-Encoder achieves multimodal intelligence through a sparse, codec-aligned vision architecture.

Abstract

Hypothesis. Artificial general intelligence is, at its core, a compression problem, and effective compression demands resonance: deep learning scales best when its architecture aligns with the fundamental structure of the data. Modern vision architectures have strayed from this principle: visual signals are highly redundant, while discriminative information, the surprise, is sparse. Current models process dense pixel grids uniformly, wasting vast compute on static background rather than focusing on the predictive residuals that define motion and meaning. We argue that to solve visual understanding, we must align our architectures with the information-theoretic principles that video codecs already embody. Method. OneVision-Encoder (OV-Encoder) encodes video by compressing predictive visual structure into semantic meaning. By adopting Codec Patchification, OV-Encoder abandons uniform computation and focuses exclusively on the 3.1%-25% of regions rich in signal entropy. To unify spatial and temporal reasoning under irregular token layouts, OV-Encoder employs a shared 3D RoPE and is trained with a large-scale cluster discrimination objective over more than one million semantic concepts, jointly capturing object permanence and motion dynamics. Evidence. The results validate our core hypothesis: efficiency and accuracy are not a trade-off; they are positively correlated. When integrated into an LLM, OV-Encoder consistently outperforms strong vision backbones such as Qwen3-ViT and SigLIP2 across 16 image, video, and document understanding benchmarks, despite using substantially fewer visual tokens and less pretraining data. Notably, on video understanding tasks, OV-Encoder achieves an average improvement of 4.1% over Qwen3-ViT. Codec-aligned, patch-level sparsity is a foundational principle, enabling OV-Encoder as a scalable engine for next-generation visual generalists.
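
To make the patch-selection idea concrete, here is a minimal sketch of codec-aligned sparsity in Python. It is an illustration under stated assumptions, not the paper's implementation: a simple frame difference stands in for the codec's motion-compensated residuals, and the function name, patch size, and keep_ratio parameter are hypothetical. It keeps only the top fraction of patches by residual energy, mirroring the 3.1%-25% retained-region budget described above.

```python
import numpy as np

def select_codec_sparse_patches(frames, patch=16, keep_ratio=0.25):
    """Keep only the patches with the highest temporal-residual energy.

    A minimal sketch of the idea behind Codec Patchification. Real codecs
    expose motion vectors and prediction residuals directly; here a simple
    frame difference approximates the residual (an assumption, not the
    paper's method). `keep_ratio` mirrors the 3.1%-25% retained budget.

    frames: (T, H, W, C) uint8 video, with H and W divisible by `patch`.
    Returns a list of (t, row, col) indices of the retained patches.
    """
    frames = frames.astype(np.float32)
    # Proxy for codec residuals: difference against the previous frame.
    residual = np.abs(np.diff(frames, axis=0))            # (T-1, H, W, C)
    T, H, W, C = residual.shape
    gh, gw = H // patch, W // patch
    # Per-patch residual energy: mean absolute residual within each patch.
    energy = residual.reshape(T, gh, patch, gw, patch, C).mean(axis=(2, 4, 5))
    flat = energy.ravel()
    k = max(1, int(keep_ratio * flat.size))
    keep = np.argsort(flat)[-k:]                          # top-k most "surprising"
    t, r, c = np.unravel_index(keep, (T, gh, gw))
    return list(zip(t.tolist(), r.tolist(), c.tolist()))

if __name__ == "__main__":
    video = np.random.randint(0, 256, (8, 224, 224, 3), dtype=np.uint8)
    kept = select_codec_sparse_patches(video, keep_ratio=0.25)
    total = 7 * (224 // 16) ** 2
    print(f"kept {len(kept)}/{total} patches ({len(kept) / total:.0%})")
```

In a genuinely codec-aligned pipeline, the residual and motion-vector fields would presumably be read straight from the compressed bitstream, so selecting the high-entropy patches adds almost no cost at decode time.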

Key Takeaways

  • 1

    Video understanding should focus on sparse, informative regions rather than processing every pixel uniformly across frames.

  • 2

    Codec-aligned sparsity reduces computational load by 75-96.9% while improving performance through motion and residual analysis.

  • 3

3D Rotary Position Embeddings enable unified transformer processing across videos, video chunks, and single images; a minimal sketch follows this list.
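
Below is a minimal sketch of how a shared 3D RoPE can be applied to tokens that carry explicit (t, y, x) coordinates. The even split of the head dimension across the three axes and the function name rope_3d are assumptions for illustration; the paper's exact frequency allocation may differ. Because positions travel with the tokens, the same rotation handles dense frames and irregular, codec-sparse layouts alike.

```python
import numpy as np

def rope_3d(x, coords, base=10000.0):
    """Apply a shared 3D rotary position embedding to a token sequence.

    A minimal sketch, assuming the feature dimension is split evenly across
    the (t, y, x) axes; OV-Encoder's actual allocation may differ. Each
    axis segment is rotated pairwise by angles proportional to that axis's
    coordinate, the standard RoPE construction.

    x:      (N, D) queries or keys, D divisible by 6.
    coords: (N, 3) integer (t, y, x) positions per token.
    """
    N, D = x.shape
    d_axis = D // 3                                   # dims per position axis
    out = np.empty_like(x, dtype=np.float64)
    for axis in range(3):
        seg = x[:, axis * d_axis:(axis + 1) * d_axis].astype(np.float64)
        half = d_axis // 2
        freqs = base ** (-np.arange(half) / half)     # (half,) inverse frequencies
        angles = coords[:, axis:axis + 1] * freqs     # (N, half) rotation angles
        cos, sin = np.cos(angles), np.sin(angles)
        x1, x2 = seg[:, :half], seg[:, half:]
        rotated = np.concatenate([x1 * cos - x2 * sin,
                                  x1 * sin + x2 * cos], axis=1)
        out[:, axis * d_axis:(axis + 1) * d_axis] = rotated
    return out

if __name__ == "__main__":
    # Irregular layout: only codec-selected patches become tokens,
    # but each still knows exactly where (and when) it came from.
    coords = np.array([[0, 3, 5], [2, 3, 5], [2, 7, 1]])   # (t, y, x)
    tokens = np.random.randn(3, 96)                        # D = 96, divisible by 6
    print(rope_3d(tokens, coords).shape)                   # (3, 96)
```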

Limitations

  • The script cuts off before explaining the cluster discrimination objective and the complete training methodology.

  • No comparison metrics or benchmark results are provided in the lesson script excerpt.

Keywords

artificial general intelligence, compression problem, resonance, deep learning, visual signals, discriminative information, pixel grids, compute optimization, video compression, Codec Patchification, 3D RoPE, cluster discrimination objective, semantic concepts, object permanence, motion dynamics, LLM, vision backbones, visual tokens, pretraining data, video understanding, sparsity-driven encoding
