AI Agents

Agentic Uncertainty Quantification

Jiaxin Zhang, Prafulla Kumar Choubey, Kung-Hsiang Huang, Caiming Xiong, Chien-Sheng Wu
arXiv ID
2601.15703
Published
January 22, 2026
Authors
5

Abstract

Although AI agents have demonstrated impressive capabilities in long-horizon reasoning, their reliability is severely hampered by the "Spiral of Hallucination," in which early epistemic errors propagate irreversibly. Existing methods face a dilemma: uncertainty quantification (UQ) methods typically act as passive sensors, diagnosing risks without addressing them, while self-reflection mechanisms suffer from continual or aimless corrections. To bridge this gap, we propose a unified Dual-Process Agentic UQ (AUQ) framework that transforms verbalized uncertainty into active, bi-directional control signals. Our architecture comprises two complementary mechanisms: System 1 (Uncertainty-Aware Memory, UAM), which implicitly propagates verbalized confidence and semantic explanations to prevent blind decision-making, and System 2 (Uncertainty-Aware Reflection, UAR), which uses these explanations as rational cues to trigger targeted inference-time resolution only when necessary. This enables the agent to dynamically balance efficient execution and deep deliberation. Extensive experiments on closed-loop benchmarks and open-ended deep research tasks demonstrate that our training-free approach achieves superior performance and trajectory-level calibration. We believe the principled AUQ framework represents a significant step toward reliable agents.
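To make the dual-process idea concrete, the sketch below illustrates how a confidence-carrying memory (System 1) could feed a reflection trigger that fires only for low-confidence steps (System 2). This is a minimal illustration assuming the abstract's description only; all names (MemoryEntry, UncertaintyAwareMemory, uncertainty_aware_reflection) and the 0.7 threshold are hypothetical and are not taken from the paper's implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the Dual-Process AUQ loop described in the abstract.
# Names, the threshold, and the reflection heuristic are illustrative assumptions.

@dataclass
class MemoryEntry:
    step: int
    action: str
    confidence: float   # verbalized confidence reported by the agent, in [0, 1]
    explanation: str    # semantic explanation attached to that confidence

@dataclass
class UncertaintyAwareMemory:
    """System 1 (UAM): implicitly carries confidence and explanations forward."""
    entries: list[MemoryEntry] = field(default_factory=list)

    def record(self, step: int, action: str, confidence: float, explanation: str) -> None:
        self.entries.append(MemoryEntry(step, action, confidence, explanation))

    def low_confidence_entries(self, threshold: float) -> list[MemoryEntry]:
        return [e for e in self.entries if e.confidence < threshold]


def uncertainty_aware_reflection(memory: UncertaintyAwareMemory,
                                 threshold: float = 0.7) -> list[str]:
    """System 2 (UAR): triggers targeted reflection only when uncertainty warrants it."""
    risky = memory.low_confidence_entries(threshold)
    if not risky:
        return []  # no cues: stay in efficient System-1 execution
    # Use the stored explanations as rational cues for targeted re-verification.
    return [f"Re-verify step {e.step}: {e.explanation}" for e in risky]


if __name__ == "__main__":
    memory = UncertaintyAwareMemory()
    memory.record(1, "search('flight prices')", 0.9, "Tool output matched the query.")
    memory.record(2, "extract_date(page)", 0.4, "Date format was ambiguous in the source.")
    for cue in uncertainty_aware_reflection(memory):
        print(cue)
```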

Keywords

Dual-Process Agentic UQ, Uncertainty-Aware Memory, Uncertainty-Aware Reflection, Spiral of Hallucination, closed-loop benchmarks, trajectory-level calibration
