
Motion 3-to-4: 3D Motion Reconstruction for 4D Synthesis

Hongyuan Chen, Xingyu Chen, Youjia Zhang, Zexiang Xu, Anpei Chen
arXiv ID: 2601.14253
Published: January 20, 2026

Abstract

We present Motion 3-to-4, a feed-forward framework for synthesising high-quality 4D dynamic objects from a single monocular video and an optional 3D reference mesh. While recent advances have significantly improved 2D, video, and 3D content generation, 4D synthesis remains difficult due to limited training data and the inherent ambiguity of recovering geometry and motion from a monocular viewpoint. Motion 3-to-4 addresses these challenges by decomposing 4D synthesis into static 3D shape generation and motion reconstruction. Using a canonical reference mesh, our model learns a compact motion latent representation and predicts per-frame vertex trajectories to recover complete, temporally coherent geometry. A scalable frame-wise transformer further enables robustness to varying sequence lengths. Evaluations on both standard benchmarks and a new dataset with accurate ground-truth geometry show that Motion 3-to-4 delivers superior fidelity and spatial consistency compared to prior work. Project page is available at https://motion3-to-4.github.io/.
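To make the described decomposition concrete, below is a minimal sketch of what a frame-wise motion decoder in this spirit could look like: a transformer attends over per-frame motion latents (so sequence length can vary), and a shared MLP maps each (canonical vertex, frame token) pair to a 3D displacement, yielding per-frame vertex trajectories on a fixed canonical topology. All names, shapes, and layer choices here (FrameWiseMotionDecoder, latent_dim, the displacement MLP) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class FrameWiseMotionDecoder(nn.Module):
    """Hypothetical sketch of motion reconstruction from a canonical mesh.

    Not the paper's implementation: shapes, names, and layers are assumed
    for illustration only.
    """

    def __init__(self, latent_dim=256, d_model=256, n_heads=8, n_layers=4):
        super().__init__()
        # Frame tokens attend to each other across time; a transformer
        # encoder naturally handles sequences of varying length.
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.latent_proj = nn.Linear(latent_dim, d_model)
        # Shared MLP: (canonical vertex xyz, frame token) -> displacement.
        self.vertex_mlp = nn.Sequential(
            nn.Linear(3 + d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, 3))

    def forward(self, canonical_verts, motion_latents):
        # canonical_verts: (V, 3) vertices of the static reference mesh
        # motion_latents:  (T, latent_dim) one compact latent per frame
        tokens = self.temporal(
            self.latent_proj(motion_latents).unsqueeze(0))[0]  # (T, d_model)
        T, V = tokens.shape[0], canonical_verts.shape[0]
        pairs = torch.cat([
            canonical_verts.unsqueeze(0).expand(T, V, 3),
            tokens.unsqueeze(1).expand(T, V, tokens.shape[-1]),
        ], dim=-1)                                             # (T, V, 3+d)
        # Adding per-frame displacements to one canonical shape keeps the
        # topology fixed across time, i.e. temporally coherent geometry.
        return canonical_verts + self.vertex_mlp(pairs)        # (T, V, 3)

# Usage: 24 frames of motion latents deform a 1000-vertex canonical mesh.
decoder = FrameWiseMotionDecoder()
verts = torch.randn(1000, 3)
latents = torch.randn(24, 256)
trajectories = decoder(verts, latents)  # (24, 1000, 3)
```

The key design point this sketch mirrors is the separation the abstract describes: the static shape lives entirely in the canonical mesh, while the transformer only has to model motion, which is what allows training on limited 4D data and generalizing across sequence lengths.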

Keywords

4D dynamic objects, monocular video, 3D reference mesh, canonical reference mesh, motion latent representation, vertex trajectories, temporally coherent geometry, frame-wise transformer
