Query-focused and Memory-aware Reranker for Long Context Processing

Yuqing Li, Jiangnan Li, Mo Yu, Guoxuan Ding, Zheng Lin, Weiping Wang, Jie Zhou
Published: February 12, 2026
Authors: 7
Word Count: 3,687
Code: Includes code

Train attention heads as rerankers for faster, more stable long-context document ranking.

Abstract

Building on existing analyses of retrieval heads in large language models, we propose an alternative reranking framework that trains models to estimate passage-query relevance from the attention scores of selected heads. This approach provides a listwise solution that leverages holistic information across the entire candidate shortlist during ranking. It also naturally produces continuous relevance scores, enabling training on arbitrary retrieval datasets without requiring Likert-scale supervision. The framework is lightweight and effective, requiring only small-scale models (e.g., 4B parameters) to achieve strong performance. Extensive experiments demonstrate that our method outperforms state-of-the-art pointwise and listwise rerankers across multiple domains, including Wikipedia and long narrative datasets, and establishes a new state of the art on the LoCoMo benchmark, which assesses dialogue understanding and memory usage. The framework also supports flexible extensions: augmenting candidate passages with contextual information further improves ranking accuracy, while training attention heads from middle layers enhances efficiency without sacrificing performance.
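
The abstract's core idea can be illustrated with a small sketch. The code below is not the paper's implementation; it assumes a hypothetical setup where the row-normalized attention tensor of a few selected heads is available, the query tokens sit at the end of the context, and each candidate passage occupies a known token span. Relevance is scored by averaging attention flowing from query tokens to each passage's tokens, which yields continuous scores over the whole shortlist at once:

```python
import numpy as np

def passage_scores(attn, query_span, passage_spans):
    """Average attention from query tokens to each passage's tokens,
    aggregated over the selected heads (axis 0).

    attn has shape (heads, seq_len, seq_len); row i attends to column j.
    """
    q0, q1 = query_span
    scores = []
    for p0, p1 in passage_spans:
        scores.append(attn[:, q0:q1, p0:p1].mean())
    return np.array(scores)

# Toy example: 2 selected heads over a 10-token sequence.
rng = np.random.default_rng(0)
attn = rng.random((2, 10, 10))
attn /= attn.sum(-1, keepdims=True)   # normalize rows, as softmax would

query_span = (8, 10)                  # query tokens at the end of the context
passage_spans = [(0, 4), (4, 8)]      # two candidate passages in the prompt
scores = passage_scores(attn, query_span, passage_spans)
ranking = np.argsort(-scores)         # listwise ranking over the shortlist
```

Because every candidate sits in the same context window, each score is computed with the other candidates in view, which is what distinguishes this listwise setup from pointwise scoring of isolated passages.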

Key Takeaways

  1. Large language models contain Query-focused Retrieval heads that naturally encode document relevance through attention patterns.

  2. Training attention heads directly as rerankers produces continuous relevance scores faster and more stably than generation-based approaches.

  3. Attention-based reranking overcomes the geometric bottleneck of embedding models while avoiding the instability of listwise generation methods.
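
The second takeaway notes that continuous scores allow training without Likert-scale labels. The paper's actual objective is not given here; a minimal sketch of one plausible listwise loss is a KL divergence between the softmax over the heads' candidate scores and a target distribution built from ordinary (even binary) retrieval labels:

```python
import numpy as np

def softmax(x):
    z = np.exp(x - np.max(x))
    return z / z.sum()

def listwise_kl(head_scores, target_relevance):
    """KL(target || predicted) over the candidate shortlist.

    head_scores: per-passage scores read off the trained attention heads.
    target_relevance: graded or binary retrieval labels; any continuous
    signal defines the target, so no Likert annotations are required.
    """
    p = softmax(np.asarray(target_relevance, dtype=float))
    q = softmax(np.asarray(head_scores, dtype=float))
    return float(np.sum(p * np.log(p / q)))

# A shortlist of three candidates with one binary-relevant passage.
loss = listwise_kl([2.0, 0.1, 0.3], [1.0, 0.0, 0.0])
perfect = listwise_kl([5.0, 0.0, 0.0], [5.0, 0.0, 0.0])  # distributions match
```

The loss is zero exactly when the predicted and target distributions agree, and it never requires parsing generated text, which is the instability the limitations below point to.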

Limitations

  • Pointwise rerankers miss global context about other documents in the candidate set, limiting ranking accuracy.

  • Generation-based ranking scores are unstable and unreliable, requiring special training data formats and parsing strategies.

Keywords

retrieval heads, reranking framework, attention scores, passage-query relevance, listwise solution, candidate shortlist, continuous relevance scores, Likert-scale supervision, pointwise rerankers, listwise rerankers, LoCoMo benchmark, dialogue understanding, memory usage, contextual information, middle layers
