
Hybrid Linear Attention Done Right: Efficient Distillation and Effective Architectures for Extremely Long Contexts

Yingfa Chen, Zhen Leng Thai, Zihan Zhou, Zhu Zhang, Xingyu Shen, Shuo Wang, Chaojun Xiao, Xu Han, Zhiyuan Liu
Published: January 29, 2026

Abstract

Hybrid Transformer architectures, which combine softmax attention blocks and recurrent neural networks (RNNs), have shown a desirable performance-throughput tradeoff for long-context modeling, but their adoption and study are hindered by the prohibitive cost of large-scale pre-training from scratch. Some recent studies have shown that pre-trained softmax attention blocks can be converted into RNN blocks through parameter transfer and knowledge distillation. However, these transfer methods require substantial amounts of training data (more than 10B tokens), and the resulting hybrid models also exhibit poor long-context performance, precisely the setting in which hybrid models enjoy significant inference speedups over Transformer-based models. In this paper, we present HALO (Hybrid Attention via Layer Optimization), a pipeline for distilling Transformer models into RNN-attention hybrid models. We then present HypeNet, a hybrid architecture with superior length generalization enabled by a novel position encoding scheme (named HyPE) and various architectural modifications. We convert the Qwen3 series into HypeNet using HALO, achieving performance comparable to the original Transformer models while enjoying superior long-context performance and efficiency. The conversion requires just 2.3B tokens, less than 0.01% of their pre-training data.
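To make the conversion idea concrete, the sketch below illustrates the general recipe the abstract refers to: reuse a pre-trained attention layer's projection weights to initialize a linear-attention (RNN-style) student block, then train the student to match the teacher layer's outputs. This is a minimal, hypothetical illustration of parameter transfer plus layer-wise distillation in PyTorch; it is not the HALO pipeline itself, and the module names, shapes, and loss are assumptions for illustration only.

```python
# Hypothetical sketch: attention-to-RNN block conversion via parameter
# transfer + hidden-state distillation. Not the HALO pipeline; names and
# shapes are illustrative assumptions.
import torch
import torch.nn as nn


class LinearAttentionStudent(nn.Module):
    """Minimal linear-attention block that runs as a recurrence over time."""

    def __init__(self, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model, bias=False)
        self.k_proj = nn.Linear(d_model, d_model, bias=False)
        self.v_proj = nn.Linear(d_model, d_model, bias=False)
        self.o_proj = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q = torch.relu(self.q_proj(x))  # positive feature map
        k = torch.relu(self.k_proj(x))
        v = self.v_proj(x)
        # Recurrent form: state_t = state_{t-1} + k_t^T v_t ; out_t = q_t state_t
        state = torch.zeros(x.size(0), x.size(-1), x.size(-1), device=x.device)
        outs = []
        for t in range(x.size(1)):
            state = state + k[:, t].unsqueeze(-1) * v[:, t].unsqueeze(-2)
            outs.append(q[:, t].unsqueeze(-2) @ state)
        return self.o_proj(torch.cat(outs, dim=1))


def transfer_parameters(teacher: nn.Module, student: LinearAttentionStudent) -> None:
    """Copy the teacher's q/k/v/o projection weights into the student
    (assumes the teacher exposes matching q_proj/k_proj/v_proj/o_proj)."""
    for name in ("q_proj", "k_proj", "v_proj", "o_proj"):
        getattr(student, name).weight.data.copy_(getattr(teacher, name).weight)


def layer_distillation_loss(teacher: nn.Module, student: LinearAttentionStudent,
                            hidden_states: torch.Tensor) -> torch.Tensor:
    """MSE between teacher and student layer outputs on the same inputs
    (assumes the teacher maps (batch, seq_len, d_model) -> same shape)."""
    with torch.no_grad():
        target = teacher(hidden_states)
    return nn.functional.mse_loss(student(hidden_states), target)
```

In this kind of recipe the parameter transfer is done once per layer before training, and the layer-wise loss (possibly combined with a logit-level distillation term) is then minimized on a comparatively small token budget; the paper's reported 2.3B-token conversion suggests the bulk of the knowledge is carried over by the transferred weights rather than relearned.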

Keywords

Hybrid Transformer architectures, softmax attention blocks, recurrent neural networks, parameter transfer, knowledge distillation, Transformer models, RNN-attention hybrid models, HALO, HypeNet, position encoding, HyPE, Qwen3 series
