Efficient AI

Token Sparse Attention: Efficient Long-Context Inference with Interleaved Token Selection

Dongwon Jo, Beomseok Kang, Jiwon Song, Jae-Joon Kim
Published February 3, 2026
Authors: 4
Word Count: 7,346

Efficient long-context inference via dynamic token selection.

Abstract

The quadratic complexity of attention remains the central bottleneck in long-context inference for large language models. Prior acceleration methods either sparsify the attention map with structured patterns or permanently evict tokens at specific layers, which can retain irrelevant tokens or rely on irreversible early decisions despite the layer- and head-wise dynamics of token importance. In this paper, we propose Token Sparse Attention, a lightweight and dynamic token-level sparsification mechanism that compresses per-head Q, K, V to a reduced token set during attention and then decompresses the output back to the original sequence, enabling token information to be reconsidered in subsequent layers. Furthermore, Token Sparse Attention exposes a new design point at the intersection of token selection and sparse attention. Our approach is fully compatible with dense attention implementations, including Flash Attention, and can be seamlessly composed with existing sparse attention kernels. Experimental results show that Token Sparse Attention consistently improves the accuracy-latency trade-off, achieving up to a 3.23× attention speedup at 128K context with less than 1% accuracy degradation. These results demonstrate that dynamic and interleaved token-level sparsification is a complementary and effective strategy for scalable long-context inference.
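The compress-attend-decompress flow the abstract describes can be sketched as follows. This is a minimal single-head NumPy illustration, not the paper's implementation: the importance score (key L2 norm) and the pass-through fallback for unselected tokens are assumptions chosen so that no token is permanently evicted and later layers still see the full sequence.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def token_sparse_attention(q, k, v, keep):
    """Sketch of token-level sparsification for one attention head.

    q, k, v: (seq, dim) arrays. keep: number of tokens retained
    during attention. Token importance is approximated here by the
    L2 norm of each key -- an illustrative stand-in, not the
    paper's actual selection criterion.
    """
    seq, dim = q.shape
    # 1. Compress: score tokens and keep the top-`keep` positions.
    scores = np.linalg.norm(k, axis=-1)
    idx = np.sort(np.argsort(scores)[-keep:])
    q_s, k_s, v_s = q[idx], k[idx], v[idx]
    # 2. Dense attention on the reduced token set (any dense
    #    kernel, e.g. Flash Attention, could run this step).
    attn = softmax(q_s @ k_s.T / np.sqrt(dim))
    out_s = attn @ v_s
    # 3. Decompress: scatter results back to the full sequence.
    #    Unselected positions pass their values through (an assumed
    #    fallback), so subsequent layers can reconsider them.
    out = v.copy()
    out[idx] = out_s
    return out
```

Because the output is restored to the original sequence length, the mechanism composes with ordinary residual connections and can make a different selection at every layer and head.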

Key Takeaways

  1. Dynamic token selection enhances computational efficiency.

  2. Layer-wise adaptive sparsity improves model flexibility.

  3. Preserves full sequence information for accurate results.

Limitations

  • Requires careful tuning of sparsity levels.

  • Potential overhead from repeated compression/decompression.

Keywords

attention, token-level sparsification, QKV, Flash Attention, attention speedup, long-context inference, token selection, sparse attention
