
DreamWorld: Unified World Modeling in Video Generation

Boming Tan, Xiangdong Zhang, Ning Liao, Yuqing Zhang, Shaofeng Zhang, Xue Yang, Qi Fan, Yanyong Zhang
Published: February 28, 2026
Authors: 8
Word Count: 8,240

DreamWorld unifies world knowledge for physically plausible video generation using joint modeling.

Abstract

Despite impressive progress in video generation, existing models remain limited to surface-level plausibility, lacking a coherent and unified understanding of the world. Prior approaches typically incorporate only a single form of world-related knowledge or rely on rigid alignment strategies to introduce additional knowledge. However, aligning a single form of world knowledge is insufficient to constitute a world model, which requires jointly modeling multiple heterogeneous dimensions (e.g., physical commonsense, 3D and temporal consistency). To address this limitation, we introduce DreamWorld, a unified framework that integrates complementary world knowledge into video generators via a Joint World Modeling Paradigm, jointly predicting video pixels and features from foundation models to capture temporal dynamics, spatial geometry, and semantic consistency. However, naively optimizing these heterogeneous objectives can lead to visual instability and temporal flickering. To mitigate this issue, we propose Consistent Constraint Annealing (CCA) to progressively regulate world-level constraints during training, and Multi-Source Inner-Guidance to enforce learned world priors at inference. Extensive evaluations show that DreamWorld improves world consistency, outperforming Wan2.1 by 2.26 points on VBench. Code will be made publicly available at https://github.com/ABU121111/DreamWorld.
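To make the joint objective concrete, below is a minimal PyTorch sketch of what a joint world-modeling loss could look like: a shared generator backbone is supervised both by the usual pixel-level denoising target and by features from frozen foundation models covering temporal dynamics, spatial geometry, and semantics. The module names, the cosine alignment term, and the weighting scheme are assumptions for illustration, not the paper's actual implementation.

```python
# Hedged sketch of a joint world-modeling objective: the generator predicts
# video pixels and, from the same backbone tokens, regresses features from
# frozen foundation models (e.g., a video encoder for temporal dynamics, a
# geometry model for spatial structure, a vision-language model for
# semantics). All names here are hypothetical, not DreamWorld's API.

import torch
import torch.nn.functional as F


def joint_world_modeling_loss(
    backbone_features: torch.Tensor,  # (B, N, D) tokens from the generator
    pixel_pred: torch.Tensor,         # predicted denoising target
    pixel_target: torch.Tensor,       # ground-truth denoising target
    projection_heads: dict,           # name -> module mapping tokens to an expert's dim
    expert_features: dict,            # name -> frozen foundation-model features (B, N, D_k)
    lambdas: dict,                    # name -> scalar weight per world-level constraint
) -> torch.Tensor:
    # Standard diffusion / flow-matching pixel objective.
    loss = F.mse_loss(pixel_pred, pixel_target)

    # Jointly predict each expert's features from the shared backbone, so
    # heterogeneous world knowledge shapes one representation instead of
    # competing through separate, rigid alignment losses.
    for name, head in projection_heads.items():
        pred = head(backbone_features)
        target = expert_features[name].detach()  # experts stay frozen
        # Cosine-similarity alignment, a common choice for feature matching.
        align = 1.0 - F.cosine_similarity(pred, target, dim=-1).mean()
        loss = loss + lambdas[name] * align
    return loss
```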

Key Takeaways

  1. DreamWorld unifies multiple world knowledge sources to improve video generation beyond pixel-level plausibility.

  2. Conflicting gradients from multiple expert models caused prior approaches to fail; joint modeling resolves this.

  3. Consistent Constraint Annealing progressively regulates world-level constraints during training to maintain visual quality (a schedule sketch follows this list).
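As a rough illustration of how Consistent Constraint Annealing might regulate those constraints, the sketch below decays the world-level constraint weights over the course of training, letting the feature objectives guide representation learning early without destabilizing pixel fidelity late. The cosine schedule and its hyperparameters are illustrative guesses, not the paper's exact formulation.

```python
# Hedged sketch of a Consistent Constraint Annealing schedule: the weights
# of the world-level feature constraints start high and anneal toward zero.
# Schedule shape and defaults are assumptions for illustration only.

import math


def cca_weight(step: int, total_steps: int,
               w_max: float = 0.5, w_min: float = 0.0) -> float:
    """Cosine-annealed constraint weight: starts at w_max, decays to w_min."""
    progress = min(step / max(total_steps, 1), 1.0)
    return w_min + 0.5 * (w_max - w_min) * (1.0 + math.cos(math.pi * progress))


# Illustrative usage inside a training loop, feeding the joint loss above:
# lambdas = {name: cca_weight(step, total_steps) for name in projection_heads}
# loss = joint_world_modeling_loss(..., lambdas=lambdas)
```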

Limitations

  • Naively optimizing heterogeneous objectives causes visual instability and temporal flickering in video generation.

  • Previous single-expert alignment approaches like VideoREPA cannot simultaneously integrate multiple types of world knowledge.

Keywords

video generation, world model, joint world modeling paradigm, temporal dynamics, spatial geometry, semantic consistency, visual stability, temporal flickering, consistent constraint annealing, multi-source inner-guidance
