
Self-Distillation Enables Continual Learning

Authors: Idan Shenfeld, Mehul Damani, Jonas Hübotter, Pulkit Agrawal
Published: January 27, 2026
Word count: 9,982
Code: Includes code

SDFT enables AI models to learn continually without forgetting.

Abstract

Continual learning, enabling models to acquire new skills and knowledge without degrading existing capabilities, remains a fundamental challenge for foundation models. While on-policy reinforcement learning can reduce forgetting, it requires explicit reward functions that are often unavailable. Learning from expert demonstrations, the primary alternative, is dominated by supervised fine-tuning (SFT), which is inherently off-policy. We introduce Self-Distillation Fine-Tuning (SDFT), a simple method that enables on-policy learning directly from demonstrations. SDFT leverages in-context learning by using a demonstration-conditioned model as its own teacher, generating on-policy training signals that preserve prior capabilities while acquiring new skills. Across skill learning and knowledge acquisition tasks, SDFT consistently outperforms SFT, achieving higher new-task accuracy while substantially reducing catastrophic forgetting. In sequential learning experiments, SDFT enables a single model to accumulate multiple skills over time without performance regression, establishing on-policy distillation as a practical path to continual learning from demonstrations.
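
The abstract describes SDFT as using the demonstration-conditioned model as its own teacher to produce on-policy training signals. Below is a minimal sketch of that idea for a Hugging Face-style causal language model; the function name, the way the demonstration is prepended to the prompt, and the token-level KL objective are illustrative assumptions, not the authors' reference implementation.

import torch
import torch.nn.functional as F


def sdft_loss(model, tokenizer, prompt, demonstration, max_new_tokens=256):
    """On-policy self-distillation sketch: the same model, conditioned on an
    expert demonstration, acts as teacher for its own unconditioned rollout."""
    device = next(model.parameters()).device

    # 1) Student rollout: sample a response to the bare prompt (on-policy data).
    student_inputs = tokenizer(prompt, return_tensors="pt").to(device)
    prompt_len = student_inputs["input_ids"].shape[1]
    with torch.no_grad():
        rollout = model.generate(
            **student_inputs, do_sample=True, max_new_tokens=max_new_tokens
        )
    response_ids = rollout[:, prompt_len:]
    resp_len = response_ids.shape[1]

    # 2) Teacher pass: same weights, but the prompt is prefixed with the
    #    demonstration, so in-context learning shifts its predictions toward
    #    the expert behavior.
    teacher_inputs = tokenizer(
        demonstration + "\n" + prompt, return_tensors="pt"
    ).to(device)
    teacher_ctx = torch.cat([teacher_inputs["input_ids"], response_ids], dim=1)
    with torch.no_grad():
        teacher_logits = model(teacher_ctx).logits[:, -resp_len - 1:-1]

    # 3) Student pass on the same rollout tokens, without the demonstration.
    student_ctx = torch.cat([student_inputs["input_ids"], response_ids], dim=1)
    student_logits = model(student_ctx).logits[:, -resp_len - 1:-1]

    # 4) Distill: pull the student's token distribution toward the
    #    demonstration-conditioned teacher on the student's own samples.
    teacher_logp = F.log_softmax(teacher_logits, dim=-1)
    student_logp = F.log_softmax(student_logits, dim=-1)
    kl = F.kl_div(student_logp, teacher_logp, log_target=True, reduction="none")
    return kl.sum(dim=-1).mean()

Because the training targets come from the student's own samples, the update stays on-policy, which is the property the abstract credits for reduced forgetting; the exact divergence and rollout scheme used in the paper may differ.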

Key Takeaways

  1. SDFT reduces catastrophic forgetting in continual learning.

  2. SDFT outperforms SFT in both skill and knowledge tasks.

  3. On-policy learning enhances generalization in AI models.

Limitations

  • Depends on the model's in-context learning capabilities.

  • Increased computational costs for on-policy rollouts.

Keywords

continual learning, reinforcement learning, supervised fine-tuning, on-policy learning, off-policy learning, catastrophic forgetting, in-context learning, self-distillation, demonstration-conditioned model, skill acquisition
