Multimodal AI

Olaf-World: Orienting Latent Actions for Video World Modeling

Yuxin Jiang, Yuchao Gu, Ivor W. Tsang, Mike Zheng Shou
Published February 10, 2026

Abstract

Scaling action-controllable world models is limited by the scarcity of action labels. While latent action learning promises to extract control interfaces from unlabeled video, learned latents often fail to transfer across contexts: they entangle scene-specific cues and lack a shared coordinate system. This occurs because standard objectives operate only within each clip, providing no mechanism to align action semantics across contexts. Our key insight is that although actions are unobserved, their semantic effects are observable and can serve as a shared reference. We introduce SeqΔ-REPA, a sequence-level control-effect alignment objective that anchors integrated latent actions to temporal feature differences from a frozen, self-supervised video encoder. Building on this, we present Olaf-World, a pipeline that pretrains action-conditioned video world models from large-scale passive video. Extensive experiments demonstrate that our method learns a more structured latent action space, leading to stronger zero-shot action transfer and more data-efficient adaptation to new control interfaces than state-of-the-art baselines.
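A minimal sketch of what such a sequence-level control-effect alignment objective could look like in PyTorch, based only on the abstract's description: integrated (cumulatively summed) latent actions are aligned to feature differences from a frozen encoder. All names (seq_delta_repa_loss, frozen_encoder, proj), tensor shapes, and the cosine-similarity choice are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def seq_delta_repa_loss(latent_actions, frames, frozen_encoder, proj):
    """
    latent_actions: (B, T-1, D) latent actions between consecutive frames
    frames:         (B, T, C, H, W) raw video clip
    frozen_encoder: frozen self-supervised encoder; no gradients flow through it
    proj:           trainable projection from action space to encoder feature space
    """
    B, T = frames.shape[:2]
    with torch.no_grad():
        # Per-frame features from the frozen encoder, reshaped to (B, T, D_enc)
        feats = frozen_encoder(frames.flatten(0, 1)).reshape(B, T, -1)

    # Observable "control effect": feature difference between frame t and frame 0
    target_delta = feats[:, 1:] - feats[:, :1]        # (B, T-1, D_enc)

    # Integrated latent action: cumulative sum over the sequence, so the
    # prediction at step t accounts for all actions applied up to step t
    integrated = torch.cumsum(latent_actions, dim=1)  # (B, T-1, D)
    pred_delta = proj(integrated)                     # (B, T-1, D_enc)

    # Align predicted and observed effects (cosine loss is one plausible choice)
    return 1.0 - F.cosine_similarity(pred_delta, target_delta, dim=-1).mean()
```

Anchoring to the frozen encoder's feature differences gives every clip the same reference coordinate system, which is exactly the property the abstract argues standard per-clip objectives lack.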

Keywords

action-controllable world models, latent action learning, temporal feature differences, self-supervised video encoder, sequence-level control-effect alignment, action-conditioned video world models, zero-shot action transfer, data-efficient adaptation
