Reinforcement Learning

STAPO: Stabilizing Reinforcement Learning for LLMs by Silencing Rare Spurious Tokens

Shiqi Liu, Zeyu He, Guojian Zhan, Letian Tao, Zhilong Zheng, Jiang Wu, Yinuo Wang, Yang Guan, Kehua Sheng, Bo Zhang, Keqiang Li, Jingliang Duan, Shengbo Eben Li
Published: February 17, 2026

Abstract

Reinforcement Learning (RL) has significantly improved large language model reasoning, but existing RL fine-tuning methods rely heavily on heuristic techniques such as entropy regularization and reweighting to maintain stability. In practice, they often suffer late-stage performance collapse, degrading reasoning quality and destabilizing training. We derive that the magnitude of token-wise policy gradients in RL is negatively correlated with token probability and local policy entropy. Building on this result, we show that training instability is driven by a tiny fraction of tokens, approximately 0.01%, which we term spurious tokens. When such tokens appear in correct responses, they contribute little to the reasoning outcome yet inherit the full sequence-level reward, leading to abnormally amplified gradient updates. Motivated by this observation, we propose Spurious-Token-Aware Policy Optimization (STAPO) for large-scale model fine-tuning, which selectively masks these updates and renormalizes the loss over the remaining valid tokens. Across six mathematical reasoning benchmarks with Qwen 1.7B, 8B, and 14B base models, STAPO consistently maintains stable entropy and achieves an average performance improvement of 7.13% over GRPO, 20-Entropy, and JustRL.
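The abstract's core claim has a familiar intuition: for a softmax policy, the log-probability gradient with respect to the sampled token's logit is 1 - pi(token), so near-zero-probability tokens receive the largest per-token updates, and a sequence-level reward amplifies them further. The mask-and-renormalize step is concrete enough to sketch. Below is a minimal PyTorch-style illustration, not the paper's implementation: the function name stapo_style_loss, the probability-threshold detection rule (prob_thresh), and the GRPO-style surrogate are all assumptions, since the abstract does not give the actual spurious-token criterion or loss.

```python
import torch

def stapo_style_loss(logps, old_logps, advantages, token_probs, loss_mask,
                     prob_thresh=1e-4):
    """Hypothetical sketch of a spurious-token-masked policy loss.

    logps / old_logps: per-token log-probs under the current / behavior policy.
    advantages: per-token advantages (sequence-level reward broadcast to tokens).
    token_probs: probability the policy assigned to each sampled token.
    loss_mask: 1 for response tokens, 0 for prompt/padding.
    prob_thresh: assumed cutoff; the paper's ~0.01% token fraction suggests a
    far stricter rule, which the abstract does not specify.
    """
    ratio = torch.exp(logps - old_logps)                  # importance ratio
    # Flag "spurious" tokens: near-zero probability yet carrying a
    # positive (correct-response) reward signal.
    spurious = (token_probs < prob_thresh) & (advantages > 0)
    valid = loss_mask.bool() & ~spurious                  # silence spurious updates
    per_token = -ratio * advantages                       # GRPO-style surrogate
    # Renormalize over valid tokens only, so masking preserves the loss scale.
    return (per_token * valid).sum() / valid.sum().clamp(min=1)
```

Renormalizing by the count of valid tokens, rather than all response tokens, keeps the effective gradient scale comparable after masking, which matches the abstract's description of renormalizing the loss over valid tokens.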

Keywords

reinforcement learning, policy gradients, token probability, policy entropy, training instability, spurious tokens, policy optimization, gradient updates, mathematical reasoning benchmarks, Qwen models, GRPO, Entropy, JustRL
