
Beyond Correctness: Learning Robust Reasoning via Transfer

Hyunseok Lee, Soheil Abbasloo, Jihoon Tack, Jinwoo Shin
Published: February 9, 2026
Authors: 4
Word Count: 8,866

Training language models for transferable reasoning, not just correct answers.

Abstract

Reinforcement Learning with Verifiable Rewards (RLVR) has recently strengthened LLM reasoning, but its focus on final-answer correctness leaves a critical gap: it does not ensure the robustness of the reasoning process itself. We adopt a simple philosophical view: robust reasoning should remain useful beyond the mind that produced it. Accordingly, we treat reasoning as a form of meaning transfer that must survive truncation, reinterpretation, and continuation. Building on this principle, we introduce Reinforcement Learning with Transferable Reward (RLTR), which operationalizes robustness via a transfer reward that tests whether a partial reasoning prefix from one model can guide a separate model to the correct answer. This encourages LLMs to produce reasoning that is stable, interpretable, and genuinely generalizable. Our approach improves sampling consistency while also improving final-answer accuracy, and it reaches comparable performance in substantially fewer training steps. For example, on MATH500, RLTR achieves a +3.6%p gain in Maj@64 compared to RLVR and matches RLVR's average accuracy with roughly 2.5x fewer training steps, providing reasoning that is both more reliable and significantly more sample-efficient to train.
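The transfer reward described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `transfer_reward`, `toy_continuer`, and the word-level truncation are all hypothetical stand-ins for the real tokenized traces and frozen continuer LLM.

```python
def transfer_reward(reasoning_trace, correct_answer, continuer, truncate_frac=0.5):
    """Reward 1.0 if a separate, frozen 'continuer' model, given only a
    truncated prefix of the policy's reasoning, still reaches the correct
    answer; 0.0 otherwise. (Illustrative sketch, not the paper's code.)"""
    tokens = reasoning_trace.split()
    cut = max(1, int(len(tokens) * truncate_frac))
    prefix = " ".join(tokens[:cut])          # keep only a partial prefix
    completed_answer = continuer(prefix)     # frozen model continues the prefix
    return 1.0 if completed_answer == correct_answer else 0.0


def toy_continuer(prefix):
    """Toy stand-in for a frozen continuer model: 'answers' with the last
    number mentioned in the prefix it was handed."""
    nums = [w for w in prefix.split() if w.isdigit()]
    return nums[-1] if nums else None
```

A reasoning trace earns the reward only if its early steps already carry enough meaning for another model to finish the job, which is precisely the robustness property RLTR optimizes for.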

Key Takeaways

  1. RLVR trains models for correct answers but produces brittle reasoning that doesn't transfer between models.

  2. RLTR adds transfer rewards by testing if truncated reasoning chains help separate frozen models solve problems.

  3. Robust reasoning should remain useful beyond the original model, similar to how humans explain clearly to others.
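The Maj@64 metric cited in the abstract (majority voting over 64 sampled answers) can be sketched as below; the function names are illustrative, not from the paper.

```python
from collections import Counter

def maj_at_k(sampled_answers):
    """Majority vote over k sampled answers (the Maj@k metric):
    the model's prediction is the most frequent answer among the samples."""
    return Counter(sampled_answers).most_common(1)[0][0]

def maj_at_k_accuracy(samples_per_problem, gold_answers):
    """Fraction of problems where the majority-voted answer matches gold."""
    correct = sum(
        maj_at_k(samples) == gold
        for samples, gold in zip(samples_per_problem, gold_answers)
    )
    return correct / len(gold_answers)
```

Because Maj@k rewards agreement across samples, a +3.6%p gain on this metric reflects the improved sampling consistency the paper reports, not just higher single-sample accuracy.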

Limitations

  • Process reward models require expensive step-level annotations of reasoning traces, introducing potential bias.

  • The script was cut off before fully explaining advantages over process reward model approaches.

Keywords

Reinforcement Learning with Verifiable Rewards, Reinforcement Learning with Transferable Reward, LLM reasoning, transfer reward, reasoning robustness, cross-model guidance, sampling consistency, final answer accuracy, MATH500, Maj@64
