Agentic Very Long Video Understanding

Aniket Rege, Arka Sadhu, Yuliang Li, Kejie Li, Ramya Korlakai Vinayak, Yuning Chai, Yong Jae Lee, Hyo Jin Kim
Published: January 26, 2026
Authors: 8
Word Count: 12,760
Code: Included

EGAgent is an agentic framework, built on entity scene graphs, for understanding very long videos.

Abstract

The advent of always-on personal AI assistants, enabled by all-day wearable devices such as smart glasses, demands a new level of contextual understanding, one that goes beyond short, isolated events to encompass the continuous, longitudinal stream of egocentric video. Achieving this vision requires advances in long-horizon video understanding, where systems must interpret and recall visual and audio information spanning days or even weeks. Existing methods, including large language models and retrieval-augmented generation, are constrained by limited context windows and lack the ability to perform compositional, multi-hop reasoning over very long video streams. In this work, we address these challenges through EGAgent, an enhanced agentic framework centered on entity scene graphs, which represent people, places, objects, and their relationships over time. Our system equips a planning agent with tools for structured search and reasoning over these graphs, as well as hybrid visual and audio search capabilities, enabling detailed, cross-modal, and temporally coherent reasoning. Experiments on the EgoLifeQA and Video-MME (Long) datasets show that our method achieves state-of-the-art performance on EgoLifeQA (57.5%) and competitive performance on Video-MME (Long) (74.1%) for complex longitudinal video understanding tasks.
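The abstract describes entity scene graphs that track people, places, and objects and their relationships over time, which a planning agent queries via structured-search tools. As a rough illustration only, a minimal sketch of such a graph with time-stamped relation edges and a slot-based query might look like the following (the class names, relation predicates, and query interface here are hypothetical, not the paper's actual schema or tool API):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Entity:
    entity_id: str
    kind: str   # e.g. "person", "place", or "object"
    name: str

@dataclass(frozen=True)
class Relation:
    subject: str    # entity_id of the subject
    predicate: str  # e.g. "located_in", "holds", "talks_to"
    obj: str        # entity_id of the object
    t_start: float  # seconds into the video stream
    t_end: float

@dataclass
class EntitySceneGraph:
    entities: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)

    def add_entity(self, e: Entity) -> None:
        self.entities[e.entity_id] = e

    def add_relation(self, r: Relation) -> None:
        self.relations.append(r)

    def query(self, predicate=None, subject=None, obj=None, t=None):
        """Structured search: filter relation edges by slot values
        and, optionally, a time point inside the edge's interval."""
        hits = []
        for r in self.relations:
            if predicate is not None and r.predicate != predicate:
                continue
            if subject is not None and r.subject != subject:
                continue
            if obj is not None and r.obj != obj:
                continue
            if t is not None and not (r.t_start <= t <= r.t_end):
                continue
            hits.append(r)
        return hits

# Usage: answer "where was person p1 at t = 120 s?" by filtering
# located_in edges whose time interval covers t.
g = EntitySceneGraph()
g.add_entity(Entity("p1", "person", "Alice"))
g.add_entity(Entity("loc1", "place", "kitchen"))
g.add_relation(Relation("p1", "located_in", "loc1", 100.0, 300.0))
hits = g.query(predicate="located_in", subject="p1", t=120.0)
```

A planning agent could compose several such queries for multi-hop questions (e.g. first resolve where an object was last seen, then who was at that place), which is the kind of compositional, temporally grounded reasoning the abstract attributes to EGAgent.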

Key Takeaways

  • 1

    EGAgent achieves state-of-the-art performance on EgoLifeQA (57.5%) and competitive performance on Video-MME (Long) (74.1%).

  • 2

    Entity scene graphs of people, places, and objects enable detailed, temporally coherent reasoning.

  • 3

    Agentic planning with structured search and hybrid visual-audio tools enhances understanding of very long videos.

Limitations

  • Relies on accuracy of upstream perception models.

  • Computationally intensive, requiring significant processing time.

Keywords

entity scene graphs, agentic framework, long-horizon video understanding, structured search, temporal reasoning, cross-modal reasoning, EgoLifeQA, Video-MME
