Multimodal AI

HERMES: KV Cache as Hierarchical Memory for Efficient Streaming Video Understanding

Haowei Zhang, Shudong Yang, Jinlan Fu, See-Kiong Ng, Xipeng Qiu
Published: January 21, 2026
Authors: 5
Word Count: 19,601
Code: Includes code

HERMES: Hierarchical memory for efficient real-time video understanding.

Abstract

Recent advances in Multimodal Large Language Models (MLLMs) have yielded significant progress in offline video understanding. Extending these capabilities to streaming video inputs, however, remains challenging: existing models struggle to simultaneously maintain stable understanding performance, real-time responses, and low GPU memory overhead. To address this challenge, we propose HERMES, a novel training-free architecture for real-time, accurate understanding of video streams. Based on a mechanistic investigation of attention, we conceptualize the KV cache as a hierarchical memory framework that encapsulates video information across multiple granularities. During inference, HERMES reuses a compact KV cache, enabling efficient streaming understanding under resource constraints. Notably, HERMES requires no auxiliary computation when a user query arrives, guaranteeing real-time responses during continuous video-stream interaction and achieving a 10× faster time to first token (TTFT) than the prior state of the art. Even while reducing video tokens by up to 68% relative to uniform sampling, HERMES achieves superior or comparable accuracy across all benchmarks, with gains of up to 11.4% on streaming datasets.
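The abstract's core idea, treating the KV cache as a multi-granularity memory that stays compact as frames stream in, can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the class names, tier capacities, and mean-pooling merge rule below are illustrative assumptions. The sketch keeps recent frame tokens at fine granularity and, on overflow, merges the oldest entries into progressively coarser tiers, so total memory stays bounded no matter how long the stream runs.

```python
from collections import deque
from dataclasses import dataclass, field
from typing import Deque, List

@dataclass
class CacheLevel:
    """One granularity tier: a bounded FIFO of KV token vectors."""
    capacity: int
    entries: Deque[List[float]] = field(default_factory=deque)

class HierarchicalKVCache:
    """Hypothetical three-tier memory: fine (recent frames) -> mid -> coarse (long-range gist)."""

    def __init__(self, capacities=(8, 4, 2)):
        self.levels = [CacheLevel(c) for c in capacities]

    def add_frame_tokens(self, tokens: List[List[float]]) -> None:
        """Insert per-frame KV vectors at the finest level, cascading any overflow."""
        for tok in tokens:
            self._insert(0, tok)

    def _insert(self, lvl: int, tok: List[float]) -> None:
        level = self.levels[lvl]
        level.entries.append(tok)
        if len(level.entries) > level.capacity:
            # Evict the two oldest entries, mean-pool them into one summary
            # vector, and push it into the next coarser tier (if one exists).
            a = level.entries.popleft()
            if level.entries and lvl + 1 < len(self.levels):
                b = level.entries.popleft()
                merged = [(x + y) / 2 for x, y in zip(a, b)]
                self._insert(lvl + 1, merged)

    def total_tokens(self) -> int:
        return sum(len(level.entries) for level in self.levels)
```

Because each tier enforces its capacity after every insert, the cache never holds more than the sum of the tier capacities (14 vectors here) regardless of stream length, which mirrors the bounded-memory property the abstract claims; queries can then attend over this compact cache without extra per-query computation.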

Key Takeaways

  • 1

    HERMES efficiently manages KV cache for real-time video understanding.

  • 2

    Achieves state-of-the-art performance on streaming benchmarks.

  • 3

    10× speedup in Time to First Token (TTFT).

Limitations

  • Assumes predictability of future video frames and queries.

  • Requires significant GPU memory, potentially a bottleneck.

Keywords

Multimodal Large Language Models, video understanding, streaming video inputs, real-time responses, KV cache, hierarchical memory framework, mechanistic attention, video tokens, TTFT
