Stable Velocity: A Variance Perspective on Flow Matching

Donglin Yang, Yongxing Zhang, Xin Yu, Liang Hou, Xin Tao, Pengfei Wan, Xiaojuan Qi, Renjie Liao
Published: February 5, 2026
Authors: 8
Word Count: 13,304

Flow matching harbors a hidden variance problem that destabilizes training in generative models.

Abstract

While flow matching is elegant, its reliance on single-sample conditional velocities leads to high-variance training targets that destabilize optimization and slow convergence. By explicitly characterizing this variance, we identify (1) a high-variance regime near the prior, where optimization is challenging, and (2) a low-variance regime near the data distribution, where conditional and marginal velocities nearly coincide. Leveraging this insight, we propose Stable Velocity, a unified framework that improves both training and sampling. For training, we introduce Stable Velocity Matching (StableVM), an unbiased variance-reduction objective, along with Variance-Aware Representation Alignment (VA-REPA), which adaptively strengthens auxiliary supervision in the low-variance regime. For inference, we show that dynamics in the low-variance regime admit closed-form simplifications, enabling Stable Velocity Sampling (StableVS), a finetuning-free acceleration. Extensive experiments on ImageNet 256×256 and large pretrained text-to-image and text-to-video models, including SD3.5, Flux, Qwen-Image, and Wan2.2, demonstrate consistent improvements in training efficiency and more than 2× faster sampling within the low-variance regime without degrading sample quality. Our code is available at https://github.com/linYDTHU/StableVelocity.
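The two regimes described in the abstract can be made concrete with a 1-D toy model (an illustrative assumption, not from the paper): take a standard-normal prior `x0`, a spread-out Gaussian data variable `x1`, the linear path `x_t = (1 - t) * x0 + t * x1`, and the conditional velocity target `v = x1 - x0`. Since everything is jointly Gaussian, the variance of the target given `x_t` has a closed form, and it shrinks as `t` moves from the prior toward the data:

```python
import numpy as np


def conditional_target_variance(t, s2):
    """Closed-form Var(v | x_t) for a 1-D toy model (illustrative assumption):
    prior x0 ~ N(0, 1), data x1 ~ N(0, s2), linear path
    x_t = (1 - t) * x0 + t * x1, conditional velocity target v = x1 - x0.
    Here t = 0 sits at the prior and t = 1 at the data distribution.
    All variables are jointly Gaussian, so
    Var(v | x_t) = Var(v) - Cov(v, x_t)^2 / Var(x_t).
    """
    var_xt = (1 - t) ** 2 + t ** 2 * s2   # Var(x_t)
    cov = t * s2 - (1 - t)                # Cov(v, x_t)
    var_v = s2 + 1                        # Var(v)
    return var_v - cov ** 2 / var_xt


# With a spread-out data distribution (s2 = 9, an assumed value), the target
# variance is largest near the prior and smallest near the data:
s2 = 9.0
print([round(conditional_target_variance(t, s2), 2) for t in (0.0, 0.5, 1.0)])
# → [9.0, 3.6, 1.0]
```

Near the prior (`t = 0`), `x_t` carries no information about which data point the path will reach, so the conditional target is maximally ambiguous; near the data (`t = 1`), `x_t` essentially determines `x1`, and the conditional and marginal velocities nearly coincide, matching the abstract's two regimes.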

Key Takeaways

  1. Flow matching training targets suffer from high variance, especially in the regime near the prior distribution.

  2. Variance in conditional velocity estimates increases with dimensionality and time, creating unreliable training signals.

  3. Stable Velocity uses multiple reference samples and composite conditional paths to reduce training variance.
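The multi-sample idea in the last takeaway can be sketched with a toy posterior-weighted target (all names and details here are hypothetical illustrations, not the paper's StableVM objective, which is stated to be unbiased). For the linear path `x_t = (1 - t) * x0 + t * x1` with a standard-normal prior, each candidate data point `x1` implies a unique prior sample `x0 = (x_t - t * x1) / (1 - t)`, whose prior density gives an importance weight; averaging the candidates' conditional velocities under these weights approximates the marginal velocity `E[x1 - x0 | x_t]` with lower variance than any single conditional target:

```python
import numpy as np


def posterior_weighted_target(x_t, t, data_batch):
    """Toy multi-sample velocity target (hypothetical helper, not StableVM).

    Approximates the marginal velocity E[x1 - x0 | x_t] for the linear path
    x_t = (1 - t) * x0 + t * x1 with a standard-normal prior on x0, by
    self-normalized importance sampling over a batch of data candidates.
    Requires t < 1.  data_batch has shape (K, d); x_t has shape (d,).
    """
    # Each candidate x1 determines the prior sample that would reach x_t.
    x0 = (x_t - t * data_batch) / (1.0 - t)
    # Weight candidates by the prior log-density of that implied x0
    # (log N(x0; 0, I) up to a constant shared by all candidates).
    logw = -0.5 * np.sum(x0 ** 2, axis=-1)
    w = np.exp(logw - logw.max())
    w /= w.sum()                           # self-normalized weights
    v = data_batch - x0                    # per-candidate conditional velocities
    return (w[:, None] * v).sum(axis=0)    # weighted average target
```

As a sanity check, when every candidate in the batch is the same data point the weighted target collapses to the single conditional velocity; with diverse candidates it smooths over them, which is the variance-reduction effect the takeaway describes. Self-normalized importance sampling is consistent but biased at finite batch size, so this is only a rough analogue of the paper's unbiased construction.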

Limitations

  • High-variance regime becomes more severe in higher-dimensional data, complicating optimization.

  • The paper focuses on variance issues but doesn't fully address computational overhead of multi-sample approaches.

Keywords

flow matching, conditional velocities, variance reduction, Stable Velocity Matching, Variance-Aware Representation Alignment, Stable Velocity Sampling, training efficiency, sampling speed
