Reinforcement Learning

Online Causal Kalman Filtering for Stable and Effective Policy Optimization

Shuo He, Lang Feng, Xin Cheng, Lei Feng, Bo An
Published
February 11, 2026
Authors
5
Word Count
8,456
Code
Includes code

Kalman filtering stabilizes language model training by capturing temporal structure in token-level policy ratios.

Abstract

Reinforcement learning for large language models suffers from high-variance token-level importance sampling (IS) ratios, which can destabilize policy optimization at scale. To improve stability, recent methods typically use a fixed sequence-level IS ratio for all tokens in a sequence or adjust each token's IS ratio separately, thereby neglecting temporal off-policy deviation across tokens in a sequence. In this paper, we first empirically identify that local off-policy deviation is structurally inconsistent at the token level, which may distort policy-gradient updates across adjacent tokens and lead to training collapse. To address this issue, we propose Online Causal Kalman Filtering for stable and effective Policy Optimization (KPO). Concretely, we model the desired IS ratio as a latent state that evolves across tokens and apply a Kalman filter to update this state online and autoregressively, conditioned only on past tokens and independent of future tokens. The resulting filtered IS ratios preserve token-wise, local structure-aware variation while strongly smoothing noise spikes, yielding more stable and effective policy updates. Experimentally, KPO achieves superior results on challenging math reasoning datasets compared with state-of-the-art counterparts.
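The core mechanism described in the abstract can be illustrated with a minimal sketch: treat each token's desired IS ratio as a scalar latent state following a random walk, and treat the raw per-token ratio as a noisy observation of that state. A causal (forward-only) Kalman filter then smooths spikes while tracking genuine drift. The function name, the random-walk transition model, and the variance hyperparameters below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def kalman_filter_ratios(ratios, process_var=1e-3, obs_var=1e-1):
    """Causally smooth noisy token-level IS ratios with a scalar Kalman filter.

    Assumes a random-walk transition for the latent 'desired' ratio and
    treats each raw ratio as a noisy observation. Each estimate depends
    only on past tokens (online, autoregressive), never on future ones.
    """
    x = ratios[0]   # initial state estimate
    p = 1.0         # initial state variance
    filtered = []
    for z in ratios:
        # Predict: random-walk transition inflates uncertainty.
        p = p + process_var
        # Update: blend the prediction with the new noisy observation.
        k = p / (p + obs_var)      # Kalman gain in [0, 1]
        x = x + k * (z - x)
        p = (1.0 - k) * p
        filtered.append(x)
    return np.array(filtered)
```

With a small process variance relative to the observation variance, the gain shrinks over tokens, so isolated ratio spikes are heavily damped while slow off-policy drift still passes through.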

Key Takeaways

  • 1

    Token-level importance sampling ratios in GRPO training show high variance and inconsistent temporal structure that destabilizes policy gradients.

  • 2

    Previous methods either averaged all token ratios, losing per-token information, or adjusted each ratio individually without modeling the semantic relationships between neighboring tokens.

  • 3

    Kalman filtering can enforce temporal structure in off-policy deviation by modeling neighboring tokens' semantic relationships during policy optimization.

Limitations

  • Existing GRPO methods fail to capture temporal structure in token-level importance sampling ratios, causing training instability.

  • Token-level off-policy deviation shows chaotic switching patterns with short run-lengths averaging 1.6 tokens, creating gradient conflicts.
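The run-length statistic cited above (chaotic switching with short runs averaging about 1.6 tokens) can be computed with a small helper: binarize each token's off-policy deviation by its sign and average the lengths of consecutive same-sign runs. The function name and thresholding below are illustrative assumptions about how such a diagnostic might be implemented.

```python
def mean_run_length(deviations, threshold=0.0):
    """Average length of consecutive same-sign runs in token-level deviations.

    Short average run lengths indicate chaotic sign switching across
    adjacent tokens, which can produce conflicting gradient directions.
    """
    signs = [d > threshold for d in deviations]
    runs, count = [], 1
    for prev, cur in zip(signs, signs[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return sum(runs) / len(runs)
```

For example, a deviation sequence whose signs read + + − + − − has runs of lengths 2, 1, 1, 2, giving an average run length of 1.5, close to the 1.6 figure reported above.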

Keywords

importance sampling, policy optimization, reinforcement learning, large language models, Kalman filter, token-level, sequence-level, off-policy deviation, policy gradient, training collapse
