Multimodal AI

PhysRVG: Physics-Aware Unified Reinforcement Learning for Video Generative Models

Qiyuan Zhang, Biao Gong, Shuai Tan, Zheng Zhang, Yujun Shen, Xing Zhu, Yuyuan Li, Kelu Yao, Chunhua Shen, Changqing Zou
arXiv ID: 2601.11087
Published: January 16, 2026
Authors: 10

Abstract

Physical principles are fundamental to realistic visual simulation, yet they remain a significant oversight in transformer-based video generation. This gap manifests as a critical limitation in rendering rigid-body motion, a cornerstone of classical mechanics. While computer graphics and physics-based simulators can readily model such collisions with Newton's laws, modern pretrain-finetune paradigms discard the concept of object rigidity during pixel-level global denoising. Even perfectly correct mathematical constraints are treated as suboptimal soft conditions during post-training optimization, fundamentally limiting the physical realism of generated videos. Motivated by these considerations, we introduce, for the first time, a physics-aware reinforcement learning paradigm for video generation models that enforces physical collision rules directly in high-dimensional spaces, ensuring that physics knowledge is strictly applied rather than treated as a condition. We subsequently extend this paradigm into a unified framework, termed the Mimicry-Discovery Cycle (MDcycle), which allows substantial fine-tuning while fully preserving the model's ability to leverage physics-grounded feedback. To validate our approach, we construct a new benchmark, PhysRVGBench, and perform extensive qualitative and quantitative experiments to thoroughly assess the effectiveness of our approach.
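The abstract gives no implementation details, so the sketch below is only a rough, hypothetical illustration of what "enforcing physical collision rules" as a reinforcement-learning reward could look like: a scalar reward that scores momentum conservation across a two-body collision recovered from object trajectories tracked in a generated video. The function name collision_reward, the equal default masses, and the closest-approach collision heuristic are all assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def collision_reward(pos_a, pos_b, mass_a=1.0, mass_b=1.0, dt=1.0):
    """Reward in (0, 1] for momentum conservation across a collision.

    pos_a, pos_b: (T, 2) arrays of per-frame object centroids, e.g.
    obtained by running an off-the-shelf tracker on a generated video.
    (Hypothetical sketch; not the paradigm proposed in the paper.)
    """
    pos_a = np.asarray(pos_a, dtype=float)
    pos_b = np.asarray(pos_b, dtype=float)
    # Finite-difference velocities between consecutive frames.
    vel_a = np.diff(pos_a, axis=0) / dt
    vel_b = np.diff(pos_b, axis=0) / dt
    # Heuristic: treat the closest-approach frame as the collision time.
    t_c = int(np.argmin(np.linalg.norm(pos_a - pos_b, axis=1)))
    t_c = int(np.clip(t_c, 1, len(vel_a) - 1))
    # Total momentum just before and just after the collision.
    p_before = mass_a * vel_a[t_c - 1] + mass_b * vel_b[t_c - 1]
    p_after = mass_a * vel_a[t_c] + mass_b * vel_b[t_c]
    # Map the conservation violation to a bounded scalar reward.
    violation = np.linalg.norm(p_after - p_before)
    return float(np.exp(-violation))

# Head-on elastic collision of equal masses: the balls swap velocities
# at frame 4, momentum is conserved, and the reward is 1.0.
xa = [0, 1, 2, 3, 4, 3, 2, 1, 0]
xb = [8, 7, 6, 5, 4, 5, 6, 7, 8]
pos_a = np.array([[x, 0.0] for x in xa])
pos_b = np.array([[x, 0.0] for x in xb])
print(collision_reward(pos_a, pos_b))  # -> 1.0
```

In an RL fine-tuning loop, such a scalar could then weight a policy-gradient update on the generator, with the physics check acting as the hard rule the abstract contrasts with soft conditioning.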

Keywords

reinforcement learning, video generation, physical collision rules, high-dimensional spaces, physics-aware, Mimicry-Discovery Cycle, PhysRVGBench
