Efficient AI

POP: Prefill-Only Pruning for Efficient Large Model Inference

Junhui He, Zhihui Fu, Jun Wang, Qingan Li

Published: February 3, 2026 · Authors: 4 · Word count: 8,378

Efficient LLM inference via stage-aware pruning.

Abstract

Large Language Models (LLMs) and Vision-Language Models (VLMs) have demonstrated remarkable capabilities. However, their deployment is hindered by significant computational costs. Existing structured pruning methods, while hardware-efficient, often suffer from significant accuracy degradation. In this paper, we argue that this failure stems from a stage-agnostic pruning approach that overlooks the asymmetric roles between the prefill and decode stages. By introducing a virtual gate mechanism, our importance analysis reveals that deep layers are critical for next-token prediction (decode) but largely redundant for context encoding (prefill). Leveraging this insight, we propose Prefill-Only Pruning (POP), a stage-aware inference strategy that safely omits deep layers during the computationally intensive prefill stage while retaining the full model for the sensitive decode stage. To enable the transition between stages, we introduce independent Key-Value (KV) projections to maintain cache integrity, and a boundary handling strategy to ensure the accuracy of the first generated token. Extensive experiments on Llama-3.1, Qwen3-VL, and Gemma-3 across diverse modalities demonstrate that POP achieves up to a 1.37× speedup in prefill latency with minimal performance loss, effectively overcoming the accuracy-efficiency trade-off limitations of existing structured pruning methods.
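The core idea in the abstract can be illustrated with a toy sketch: during prefill, only the shallow layers run as full blocks, but lightweight independent K/V projections still populate the cache for every layer, so the unpruned model can attend over a complete cache at decode time. This is a minimal illustration under stated assumptions, not the authors' implementation; the layer math, the `keep` threshold, and the averaged-value stand-in for attention are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_layers, keep = 8, 6, 4  # hidden size, total layers, layers kept at prefill

# Toy per-layer weights. The separate Wk/Wv maps model the paper's
# "independent KV projections" (an assumption here: plain linear maps
# that fill the cache for layers whose full blocks are skipped).
W = [rng.standard_normal((d, d)) * 0.1 for _ in range(n_layers)]
Wk = [rng.standard_normal((d, d)) * 0.1 for _ in range(n_layers)]
Wv = [rng.standard_normal((d, d)) * 0.1 for _ in range(n_layers)]

def prefill(x):
    """Stage-aware prefill: run full blocks only for the first `keep`
    layers, but compute K/V for every layer so the cache stays complete."""
    cache = []
    h = x
    for i in range(n_layers):
        if i < keep:
            h = h + np.tanh(h @ W[i])  # full block for shallow layers only
        cache.append((h @ Wk[i], h @ Wv[i]))  # KV projections run for all layers
    return h, cache

def decode_step(h, cache):
    """Decode uses the full (unpruned) stack, reading the prefill cache."""
    for i in range(n_layers):
        k, v = cache[i]
        attn = v.mean(axis=0)  # crude stand-in for attention over cached KV
        h = h + np.tanh(h @ W[i]) + 0.01 * attn
    return h

prompt = rng.standard_normal((5, d))  # 5 prompt tokens
h, cache = prefill(prompt)
tok = decode_step(h[-1], cache)       # first generated token's hidden state
print(tok.shape)
```

The prefill loop skips `n_layers - keep` deep blocks entirely, which is where the latency saving comes from; decode pays full cost per token, consistent with the paper's claim that deep layers matter for next-token prediction but not for context encoding.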

Key Takeaways

  1. POP accelerates prefill stage without accuracy loss.
  2. Asymmetric pruning decouples prefill and decode stages.
  3. Offers practical solution for efficient LLM deployment.

Limitations

  • Requires full model weights for decode stage.

  • Best suited for compute-bound scenarios.

Keywords

structured pruning, large language models, vision-language models, prefill stage, decode stage, virtual gate mechanism, key-value projections, boundary handling strategy, speedup, accuracy-efficiency trade-off
