
T3D: Few-Step Diffusion Language Models via Trajectory Self-Distillation with Direct Discriminative Optimization

Tunyu Zhang, Xinxi Zhang, Ligong Han, Haizhou Shi, Xiaoxiao He, Zhuowei Li, Hao Wang, Kai Xu, Akash Srivastava, Hao Wang, Vladimir Pavlovic, Dimitris N. Metaxas

Published: February 12, 2026
Authors: 12
Word count: 10,198
Code: included

T3D enables fast few-step diffusion language models through trajectory self-distillation and discriminative optimization.

Abstract

Diffusion large language models (DLLMs) have the potential to enable fast text generation by decoding multiple tokens in parallel. However, in practice, their inference efficiency is constrained by the need for many refinement steps, while aggressively reducing the number of steps leads to a substantial degradation in generation quality. To alleviate this, we propose a trajectory self-distillation framework that improves few-step decoding by distilling the model's own generative trajectories. We incorporate Direct Discriminative Optimization (DDO), a reverse-KL objective that promotes mode-seeking distillation and encourages the student to concentrate on high-probability teacher modes. Across benchmarks, our approach consistently outperforms strong few-step baselines and standard training under tight step budgets. Although full-step decoding remains superior, we substantially narrow the gap, establishing a strong foundation towards practical few-step DLLMs. The source code is available at https://github.com/Tyrion58/T3D.
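The abstract contrasts the reverse-KL objective used by DDO (mode-seeking) with the more familiar mass-covering behavior of forward KL. The toy example below illustrates that distinction on made-up discrete distributions; none of the names or numbers come from the paper. A bimodal "teacher" is compared against a student that commits to one mode and a student that spreads mass everywhere:

```python
import numpy as np

# Hypothetical toy distributions over 4 outcomes (not from the paper).
teacher = np.array([0.45, 0.05, 0.45, 0.05])        # bimodal teacher
mode_student = np.array([0.85, 0.05, 0.05, 0.05])   # commits to one teacher mode
covering_student = np.full(4, 0.25)                 # spreads mass everywhere

def kl(p, q):
    """KL(p || q) for discrete distributions with full support."""
    return float(np.sum(p * np.log(p / q)))

# Forward KL(teacher || student) rewards covering every teacher mode,
# so the diffuse student scores better:
assert kl(teacher, covering_student) < kl(teacher, mode_student)

# Reverse KL(student || teacher) penalizes putting mass where the
# teacher has little, so the mode-committed student scores better:
assert kl(mode_student, teacher) < kl(covering_student, teacher)

print("forward KL:", kl(teacher, covering_student), kl(teacher, mode_student))
print("reverse KL:", kl(covering_student, teacher), kl(mode_student, teacher))
```

This is why a reverse-KL objective encourages the student to concentrate on high-probability teacher modes rather than averaging over all of them.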

Key Takeaways

  1. T3D reduces diffusion language model inference from dozens of refinement steps to one or two while maintaining generation quality.

  2. Trajectory self-distillation trains the student on the teacher's actual inference paths, eliminating the train-test distribution mismatch.

  3. Direct Discriminative Optimization prevents mode-averaging by concentrating the student on high-probability teacher outputs instead of averaging over all possibilities.
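The second takeaway can be made concrete with a schematic sketch of how trajectory data might be constructed: the many-step teacher is rolled out from the fully masked sequence, its intermediate states are recorded, and the few-step student is trained to jump from those states straight to the teacher's final output. Everything below is an illustrative toy, not the paper's implementation; the teacher here simply unmasks one position per step toward a fixed target.

```python
# Hypothetical sketch of trajectory self-distillation data construction.
MASK = "<M>"
target = ["the", "cat", "sat", "down"]

def teacher_step(state, target):
    """Stand-in for one refinement step of a many-step teacher:
    unmask the first still-masked position."""
    state = list(state)
    for i, tok in enumerate(state):
        if tok == MASK:
            state[i] = target[i]
            break
    return state

# Roll out the teacher's own inference trajectory from the fully
# masked sequence -- the same path the model follows at test time.
state = [MASK] * len(target)
trajectory = [state]
while MASK in state:
    state = teacher_step(state, target)
    trajectory.append(state)

# Few-step distillation pairs: each intermediate teacher state is
# mapped directly to the teacher's final output.
pairs = [(s, trajectory[-1]) for s in trajectory[:-1]]
for src, dst in pairs:
    print(src, "->", dst)
```

Because the training inputs are states the teacher actually visits during inference, the student never sees a distribution at training time that differs from what it encounters at test time.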

Limitations

  • Diffusion models require many refinement steps in practice, making real-time text generation slow and inefficient.

  • Mean-field approximation in masked diffusion assumes token independence, failing to capture complex token dependencies accurately.
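The second limitation, the mean-field independence assumption, can be illustrated with a two-token toy (vocabulary and probabilities invented here, not taken from the paper). When each position is sampled independently from its marginal, as in parallel masked-diffusion decoding, incoherent combinations the true joint never emits receive nonzero probability:

```python
from itertools import product

# Toy joint distribution: the two tokens are perfectly correlated.
joint = {("New", "York"): 0.5, ("Los", "Angeles"): 0.5}

def marginal(pos):
    """Per-position marginal implied by the joint."""
    m = {}
    for pair, p in joint.items():
        m[pair[pos]] = m.get(pair[pos], 0.0) + p
    return m

# Mean-field decoding samples each position independently from its
# marginal, so the joint factorizes.
m0, m1 = marginal(0), marginal(1)
factored = {(a, b): m0[a] * m1[b] for a, b in product(m0, m1)}

# The incoherent pair ("New", "Angeles") has true probability 0 but
# gets probability 0.25 under the factorized model.
print(factored[("New", "Angeles")])        # 0.25
print(joint.get(("New", "Angeles"), 0.0))  # 0.0
```

This is the failure mode that makes aggressive parallel decoding degrade quality, and part of what trajectory-level distillation aims to mitigate.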

Keywords

diffusion large language models, trajectory self-distillation, self-distillation, Direct Discriminative Optimization, reverse-KL objective, mode-seeking distillation, generative trajectories, few-step decoding, text generation
