Large Language Models

LRAgent: Efficient KV Cache Sharing for Multi-LoRA LLM Agents

Hyesung Jeon, Hyeongju Ha, Jae-Joon Kim

Published: February 1, 2026
Authors: 3
Word Count: 12,232
Code: Includes code

LRAgent optimizes KV cache sharing for multi-LoRA LLMs.

Abstract

Role specialization in multi-LLM agent systems is often realized via multi-LoRA, where agents share a pretrained backbone and differ only through lightweight adapters. Despite sharing base model weights, each agent independently builds and stores its own KV cache for the same long, tool-augmented trajectories, incurring substantial memory and compute overhead. Existing KV cache sharing methods largely overlook this multi-LoRA setting. We observe that, across agents, cache differences are dominated by adapter outputs, while activations from the shared pretrained backbone remain highly similar. Based on this observation, we propose LRAgent, a KV cache sharing framework for multi-LoRA agents that decomposes the cache into a shared base component from the pretrained weights and an adapter-dependent component from LoRA weights. LRAgent reduces memory overhead by sharing the base component and storing the adapter component in its inherent low-rank form; in shared-A multi-LoRA architectures, it further reduces compute overhead by also sharing the low-rank cache, avoiding redundant computation for contexts already processed by other agents. To efficiently reconstruct adapter contributions at runtime, we introduce Flash-LoRA-Attention, a kernel that reorders attention computation to avoid materializing the low-rank cache to full dimension. LRAgent achieves throughput and time-to-first-token latency close to fully shared caching, while preserving accuracy near the non-shared caching baseline across agentic question-answering benchmarks.
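
To make the decomposition concrete, here is a minimal sketch of the core identity under standard LoRA notation. All shapes, variable names, and the single-head, key-only simplification are illustrative assumptions, not the paper's implementation: with a shared down-projection A, the low-rank cache can be built once, and the attention scores can be reordered so the per-agent adapter delta is never expanded to full dimension.

```python
import torch

# Illustrative sizes (assumptions, not from the paper).
d, r, n = 256, 8, 128              # hidden dim, LoRA rank, context length
X = torch.randn(n, d)              # context activations (same for all agents)
W_k = torch.randn(d, d)            # frozen pretrained key projection
A = torch.randn(r, d)              # LoRA down-projection, shared across agents
B = torch.randn(d, r)              # LoRA up-projection, one per agent

# Built once and shared by every agent:
K_base = X @ W_k.T                 # full-dim base cache, shape (n, d)
L = X @ A.T                        # low-rank cache,      shape (n, r)

# Naive per-agent reconstruction materializes the (n, d) adapter delta:
K_agent = K_base + L @ B.T

# Flash-LoRA-Attention-style reordering: fold B into the query instead,
# so only a rank-r intermediate is ever formed.
q = torch.randn(1, d)
scores_naive = q @ K_agent.T                 # (1, n)
scores_fused = q @ K_base.T + (q @ B) @ L.T  # (1, n), via a (1, r) intermediate
assert torch.allclose(scores_naive, scores_fused, rtol=1e-4, atol=1e-2)
```

The same associativity would apply on the value side; the paper's kernel contribution, as described in the abstract, is performing this reordering inside the attention computation rather than materializing the low-rank cache first.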

Key Takeaways

  1. Sharing the base KV cache across agents and keeping adapter contributions in their inherent low-rank form cuts memory use and inference latency (see the memory sketch below).

  2. The decomposed caching scheme preserves accuracy, staying near the non-shared caching baseline with only a minimal drop.

  3. Throughput and time-to-first-token latency approach those of fully shared caching on agentic question-answering benchmarks.
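
For intuition about the memory claim in takeaway 1, here is a back-of-the-envelope accounting. The agent count, trajectory length, dimensions, and rank are all assumed for illustration and are not the paper's measured configuration.

```python
n_agents, n_tokens = 4, 8192   # assumed agent count and trajectory length
d, r = 4096, 16                # assumed per-layer KV width and LoRA rank
kv = 2                         # one K and one V cache per layer

# Non-shared: every agent stores its own full-dimension cache.
non_shared = n_agents * kv * n_tokens * d
# LRAgent: one shared base cache plus one shared low-rank cache
# (per-agent B matrices are weights, not cache, so they are omitted here).
lragent = kv * n_tokens * (d + r)

print(f"non-shared: {non_shared:,} values")
print(f"LRAgent:    {lragent:,} values ({non_shared / lragent:.1f}x smaller)")
```

Under these assumptions the saving grows roughly linearly with the number of agents, since the shared caches are paid for once.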

Limitations

  • Accuracy drop of up to 1.5% in some scenarios.

  • Requires a specialized attention kernel (Flash-LoRA-Attention) to realize the efficiency gains.

Keywords

multi-LoRA, KV cache sharing, LoRA weights, pretrained backbone, Flash-LoRA-Attention, low-rank cache, shared-A multi-LoRA, attention computation
