Unveiling Implicit Advantage Symmetry: Why GRPO Struggles with Exploration and Difficulty Adaptation

Zhiqi Yu, Zhangquan Chen, Mengting Liu, Heye Zhang, Liangqiong Qu
Published: February 5, 2026
Authors: 5
Word Count: 11,179

GRPO's mathematical symmetry prevents exploration and difficulty adaptation in language model reasoning training.

Abstract

Reinforcement Learning with Verifiable Rewards (RLVR), particularly GRPO, has become the standard for eliciting LLM reasoning. However, its efficiency in exploration and difficulty adaptation remains an open challenge. In this work, we argue that these bottlenecks stem from an implicit advantage symmetry inherent in Group Relative Advantage Estimation (GRAE). This symmetry induces two critical limitations: (i) at the group level, strict symmetry in weights between correct and incorrect trajectories leaves unsampled action logits unchanged, thereby hindering exploration of novel correct solutions; (ii) at the sample level, the algorithm implicitly prioritizes medium-difficulty samples, remaining agnostic to the non-stationary difficulty demands of training. Through controlled experiments, we reveal that this symmetric property is sub-optimal, yielding two pivotal insights: (i) asymmetrically suppressing the advantages of correct trajectories encourages essential exploration; (ii) learning efficiency is maximized by a curriculum-like transition: prioritizing simpler samples initially before gradually shifting to complex ones. Motivated by these findings, we propose Asymmetric GRAE (A-GRAE), which dynamically modulates exploration incentives and sample-difficulty focus. Experiments across seven benchmarks demonstrate that A-GRAE consistently improves GRPO and its variants across both LLMs and MLLMs.
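The symmetry the abstract describes can be seen directly in the group-relative advantage computation. The sketch below uses a common GRAE formulation (reward minus group mean, divided by group standard deviation); the `asymmetric_advantages` variant with its `pos_scale` factor is a hypothetical illustration of "asymmetrically suppressing the advantages of correct trajectories", not the paper's actual A-GRAE rule.

```python
import statistics


def grae_advantages(rewards):
    """Group Relative Advantage Estimation (common formulation):
    each trajectory's advantage is its reward minus the group mean,
    scaled by the group standard deviation."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero std
    return [(r - mean) / std for r in rewards]


def asymmetric_advantages(rewards, pos_scale=0.5):
    """Hypothetical asymmetric variant: down-weight positive (correct)
    advantages by `pos_scale`, leaving negative ones intact. Only an
    illustration of the asymmetry idea; the paper's A-GRAE modulation
    is dynamic and not reproduced here."""
    return [a * pos_scale if a > 0 else a for a in grae_advantages(rewards)]


# Binary verifiable rewards for a group of 4 sampled trajectories.
rewards = [1, 1, 0, 0]
adv = grae_advantages(rewards)
print(adv)       # [1.0, 1.0, -1.0, -1.0] — symmetric weights
print(sum(adv))  # 0.0 — positive and negative parts cancel exactly
print(asymmetric_advantages(rewards))  # [0.5, 0.5, -1.0, -1.0]
```

The zero sum is the symmetry in question: because positive and negative weights cancel within the group, the policy gradient redistributes probability only among sampled trajectories, leaving logits of unsampled actions untouched.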

Key Takeaways

  1. GRPO's implicit advantage symmetry prevents exploration of low-probability correct solutions the model has not sampled.

  2. GRPO fails to adapt training difficulty, treating all problems equally regardless of model performance.

  3. Group Relative Advantage Estimation creates a zero-sum property that limits reasoning-boundary expansion and diversity.

Limitations

  • GRPO achieves better first-attempt accuracy but loses diversity in discovering novel correct reasoning approaches.

  • GRPO cannot dynamically adjust training focus between easy problems the model solves and hard unsolved problems.
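The difficulty-adaptation limitation above can be illustrated with a toy curriculum schedule: early in training, weight samples the model mostly solves; later, shift weight toward mostly unsolved ones. The `difficulty_weight` function and its linear schedule are illustrative assumptions, not the paper's A-GRAE formula.

```python
def difficulty_weight(pass_rate, step, total_steps):
    """Hypothetical curriculum weight for a training sample.
    `pass_rate` is the fraction of sampled trajectories that were
    correct. The preferred pass rate drifts from 1.0 (easy) at the
    start of training to 0.0 (hard) at the end; samples closer to
    the current preference get higher weight."""
    t = step / total_steps       # training progress in [0, 1]
    target = 1.0 - t             # preferred pass rate: high -> low
    return max(0.0, 1.0 - abs(pass_rate - target))


# Early training (step 0): easy samples dominate the objective.
print(difficulty_weight(1.0, 0, 100))    # 1.0
print(difficulty_weight(0.0, 0, 100))    # 0.0
# Late training (step 100): focus has shifted to hard samples.
print(difficulty_weight(0.0, 100, 100))  # 1.0
print(difficulty_weight(1.0, 100, 100))  # 0.0
```

By contrast, standard GRPO applies the same group-relative weighting at every step, which is why it implicitly fixates on medium-difficulty samples throughout training.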

Keywords

Reinforcement Learning with Verifiable Rewards, GRPO, Group Relative Advantage Estimation, GRAE, asymmetric suppression, curriculum learning, sample-difficulty focus, exploration incentives, large language models, multi-modal large language models
