
On the Non-decoupling of Supervised Fine-tuning and Reinforcement Learning in Post-training

Xueyan Niu, Bo Bai, Wei Han, Weixi Zhang
arXiv ID: 2601.07389
Published: January 12, 2026

Abstract

Post-training of large language models routinely interleaves supervised fine-tuning (SFT) with reinforcement learning (RL). The two methods have different objectives: SFT minimizes the cross-entropy loss between model outputs and expert responses, while RL maximizes reward signals derived from human preferences or rule-based verifiers. Modern reasoning models have widely adopted the practice of alternating SFT and RL training, yet there is no theoretical account of whether the two stages can be decoupled. We prove that decoupling is impossible in either order: (1) SFT-then-RL coupling: RL increases the SFT loss under SFT optimality, and (2) RL-then-SFT coupling: SFT lowers the reward achieved by RL. Experiments on Qwen3-0.6B confirm the predicted degradation, verifying that SFT and RL cannot be separated without loss of prior performance in the post-training stage.
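As a reading aid (not taken from the paper itself): under standard notation, with policy \pi_\theta, prompt distribution \mathcal{D}, expert response y^*, and a reward model or verifier r, the two objectives described in the abstract are typically written as

\[
\mathcal{L}_{\mathrm{SFT}}(\theta) \;=\; \mathbb{E}_{(x,\,y^*)\sim\mathcal{D}}\big[-\log \pi_\theta(y^*\mid x)\big],
\qquad
\mathcal{J}_{\mathrm{RL}}(\theta) \;=\; \mathbb{E}_{x\sim\mathcal{D},\; y\sim\pi_\theta(\cdot\mid x)}\big[r(x,y)\big].
\]

In these terms, the non-decoupling claims state that a stage optimizing one quantity degrades the other: running RL after SFT raises \mathcal{L}_{\mathrm{SFT}}, and running SFT after RL lowers \mathcal{J}_{\mathrm{RL}}.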
