SpargeAttention2: Trainable Sparse Attention via Hybrid Top-k+Top-p Masking and Distillation Fine-Tuning

Jintao Zhang, Kai Jiang, Chendong Xiang, Weiqi Feng, Yuezhou Hu, Haocheng Xi, Jianfei Chen, Jun Zhu
Published: February 13, 2026
Authors: 8
Word Count: 8,208
Code: Included

SpargeAttention2 speeds up attention in video diffusion models by 16x using hybrid sparse attention masking.

Abstract

Many training-free sparse attention methods are effective for accelerating diffusion models. Recently, several works suggest that making sparse attention trainable can further increase sparsity while preserving generation quality. We study three key questions: (1) when do the two common masking rules, i.e., Top-k and Top-p, fail, and how can we avoid these failures? (2) why can trainable sparse attention reach higher sparsity than training-free methods? (3) what are the limitations of fine-tuning sparse attention using the diffusion loss, and how can we address them? Based on this analysis, we propose SpargeAttention2, a trainable sparse attention method that achieves high sparsity without degrading generation quality. SpargeAttention2 includes (i) a hybrid masking rule that combines Top-k and Top-p for more robust masking at high sparsity, (ii) an efficient trainable sparse attention implementation, and (iii) a distillation-inspired fine-tuning objective to better preserve generation quality during fine-tuning using sparse attention. Experiments on video diffusion models show that SpargeAttention2 reaches 95% attention sparsity and a 16.2x attention speedup while maintaining generation quality, consistently outperforming prior sparse attention methods.
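
The hybrid masking rule and the distillation-inspired objective are only named in the abstract, so the two sketches below are illustrative rather than the authors' implementation. First, a minimal Python sketch of how a Top-k and a Top-p selection over attention scores could be combined; the union rule, the per-entry granularity, and the function names are assumptions (the actual method presumably applies the mask at a coarser granularity to realize kernel speedups).

import torch

def topk_mask(scores: torch.Tensor, k: int) -> torch.Tensor:
    # Keep the k highest-scoring entries in each query row.
    idx = scores.topk(k, dim=-1).indices
    mask = torch.zeros_like(scores, dtype=torch.bool)
    mask.scatter_(-1, idx, True)
    return mask

def topp_mask(scores: torch.Tensor, p: float) -> torch.Tensor:
    # Keep the smallest set of entries whose softmax mass reaches p in each row.
    probs = scores.softmax(dim=-1)
    sorted_probs, sorted_idx = probs.sort(dim=-1, descending=True)
    cumulative = sorted_probs.cumsum(dim=-1)
    keep_sorted = (cumulative - sorted_probs) < p  # include the entry that crosses p
    mask = torch.zeros_like(scores, dtype=torch.bool)
    mask.scatter_(-1, sorted_idx, keep_sorted)
    return mask

def hybrid_mask(scores: torch.Tensor, k: int, p: float) -> torch.Tensor:
    # Assumed combination: keep an entry if either rule selects it, so Top-p
    # covers near-uniform rows where a fixed k drops too much mass, while
    # Top-k guarantees a minimum budget on skewed, sink-dominated rows.
    return topk_mask(scores, k) | topp_mask(scores, p)

Second, one plausible form of a distillation-inspired fine-tuning objective, assuming the frozen dense-attention model acts as a teacher for the sparse-attention model and that the weighting factor alpha is made up for illustration:

import torch.nn.functional as F

def distillation_finetune_loss(student_pred, teacher_pred, noise_target, alpha=0.5):
    # Hypothetical objective: the usual diffusion loss against the noise target,
    # plus a term pulling the sparse-attention student toward the dense teacher.
    diffusion_loss = F.mse_loss(student_pred, noise_target)
    distill_loss = F.mse_loss(student_pred, teacher_pred)
    return (1.0 - alpha) * diffusion_loss + alpha * distill_loss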

Key Takeaways

  • SpargeAttention2 reduces video attention computation from 97 seconds to 6 seconds without quality loss.

  • Hybrid Top-k+Top-p masking overcomes the individual failures of each method at extreme sparsity levels.

  • The attention weight distribution determines which sparse masking strategy performs best.

Limitations

  • Top-k masking fails with uniformly distributed attention weights, missing critical multi-token context.

  • Top-p masking struggles with highly skewed attention dominated by attention sink tokens (see the toy example below).
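
The two failure modes above can be reproduced with a toy example; the row values and thresholds below are made up for illustration and are not taken from the paper.

# Two toy attention rows over 8 tokens (already normalized).
uniform = [1 / 8] * 8          # evenly spread attention
skewed = [0.93] + [0.01] * 7   # dominated by a sink token

def topk_covered_mass(probs, k):
    # Fraction of attention mass kept when only the k largest weights survive.
    return sum(sorted(probs, reverse=True)[:k])

def topp_kept_count(probs, p):
    # Number of entries kept to accumulate a cumulative mass of at least p.
    total, kept = 0.0, 0
    for w in sorted(probs, reverse=True):
        total += w
        kept += 1
        if total >= p:
            break
    return kept

print(topk_covered_mass(uniform, k=2))  # 0.25 -> Top-k drops 75% of a uniform row
print(topp_kept_count(skewed, p=0.9))   # 1    -> Top-p keeps only the sink token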

Keywords

sparse attention, diffusion models, Top-k, Top-p, trainable sparse attention, hybrid masking rule, distillation-inspired fine-tuning, attention sparsity, attention speedup
