
Utonia: Toward One Encoder for All Point Clouds

Yujia Zhang, Xiaoyang Wu, Yunhan Yang, Xianzhe Fan, Han Li, Yuechen Zhang, Zehao Huang, Naiyan Wang, Hengshuang Zhao
Published: March 3, 2026
Authors: 9
Word Count: 10,744

Utonia trains a single point cloud encoder across five diverse domains through granularity rescaling and modality robustness.

Abstract

We dream of a future where point clouds from all domains can come together to shape a single model that benefits them all. Toward this goal, we present Utonia, a first step toward training a single self-supervised point transformer encoder across diverse domains, spanning remote sensing, outdoor LiDAR, indoor RGB-D sequences, object-centric CAD models, and point clouds lifted from RGB-only videos. Despite their distinct sensing geometries, densities, and priors, Utonia learns a consistent representation space that transfers across domains. This unification improves perception capability while revealing intriguing emergent behaviors that arise only when domains are trained jointly. Beyond perception, we observe that Utonia representations can also benefit embodied and multimodal reasoning: conditioning vision-language-action policies on Utonia features improves robotic manipulation, and integrating them into vision-language models yields gains on spatial reasoning. We hope Utonia can serve as a step toward foundation models for sparse 3D data, and support downstream applications in AR/VR, robotics, and autonomous driving.

Key Takeaways

  1. Utonia unifies point cloud encoders across five diverse domains by addressing granularity mismatches, gravity biases, and modality inconsistencies.

  2. Perceptual granularity rescaling aligns spatial units across domains, enabling stable joint training on 250k cross-domain point clouds.

  3. Joint pretraining yields emergent behaviors where domains benefit collectively rather than compete, improving both 3D perception and spatial reasoning tasks.
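The granularity rescaling mentioned in the second takeaway can be pictured as scaling each domain's coordinates so that its native spatial resolution maps onto a shared grid size before joint training. The paper's exact formulation is not reproduced here; the per-domain unit values and the `rescale_granularity` function below are a hypothetical sketch of the idea:

```python
import numpy as np

# Hypothetical native spatial units per domain (in metres); illustrative
# values only -- not taken from the paper.
DOMAIN_UNIT = {
    "outdoor_lidar": 0.10,   # coarse, long-range sweeps
    "indoor_rgbd": 0.02,     # room-scale reconstructions
    "object_cad": 0.005,     # fine object-centric models
}

def rescale_granularity(points: np.ndarray, source_unit: float,
                        target_unit: float = 0.02) -> np.ndarray:
    """Scale (N, 3) coordinates so one native spatial unit of the source
    domain corresponds to `target_unit` in the shared representation
    space, aligning voxel granularity across domains."""
    return points * (target_unit / source_unit)

# Example: bring an outdoor LiDAR scan into the shared granularity.
lidar_points = np.array([[1.0, 2.0, 3.0]])
shared = rescale_granularity(lidar_points, DOMAIN_UNIT["outdoor_lidar"])
```

Under this sketch, all domains end up with comparable point spacing, so a single encoder sees consistent spatial structure regardless of the original sensor.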

Limitations

  • The paper presents an initial exploration rather than a definitive solution; the long-term stability and scalability of a unified encoder remain open questions.

  • Emergent behaviors like improved robotic manipulation are demonstrated but lack comprehensive evaluation across broader downstream applications.

Keywords

point transformer encoder, self-supervised learning, representation space, cross-domain transfer, embodied reasoning, multimodal reasoning, vision-language-action policies, robotic manipulation, spatial reasoning
