Multimodal AI

Rethinking Composed Image Retrieval Evaluation: A Fine-Grained Benchmark from Image Editing

Tingyu Song, Yanzhao Zhang, Mingxin Li, Zhuoning Guo, Dingkun Long, Pengjun Xie, Siyue Zhang, Yilun Zhao, Shu Wu
arXiv ID: 2601.16125
Published: January 22, 2026
Authors: 9
Hugging Face Likes: 13
Comments: 2

Abstract

Composed Image Retrieval (CIR) is a pivotal and complex task in multimodal understanding. Current CIR benchmarks typically feature limited query categories and fail to capture the diverse requirements of real-world scenarios. To bridge this evaluation gap, we leverage image editing to achieve precise control over modification types and content, enabling a pipeline for synthesizing queries across a broad spectrum of categories. Using this pipeline, we construct EDIR, a novel fine-grained CIR benchmark. EDIR encompasses 5,000 high-quality queries structured across five main categories and fifteen subcategories. Our comprehensive evaluation of 13 multimodal embedding models reveals a significant capability gap; even state-of-the-art models (e.g., RzenEmbed and GME) struggle to perform consistently across all subcategories, highlighting the rigorous nature of our benchmark. Through comparative analysis, we further uncover inherent limitations in existing benchmarks, such as modality biases and insufficient categorical coverage. Finally, an in-domain training experiment demonstrates the feasibility of our benchmark, clarifying the task's challenges by distinguishing categories that are solvable with targeted data from those that expose intrinsic limitations of current model architectures.
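
For context on the evaluation setup, the sketch below shows how CIR retrieval quality is typically scored: each query fuses a reference image with a modification text into one embedding, every candidate image is embedded into the same space, and the benchmark reports the fraction of queries whose target image appears in the top-k results (Recall@k). This is a minimal sketch of the standard metric under those assumptions, not the authors' evaluation code; `embed_query` and `embed_image` are hypothetical stand-ins for the encoders of whichever multimodal embedding model (e.g., RzenEmbed or GME) is under test.

```python
import numpy as np

def recall_at_k(query_embs: np.ndarray,
                candidate_embs: np.ndarray,
                target_indices: np.ndarray,
                k: int = 10) -> float:
    """Fraction of queries whose target image ranks in the top-k by cosine similarity."""
    # L2-normalize so the dot product equals cosine similarity.
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    c = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    sims = q @ c.T  # (num_queries, num_candidates)
    # Rank of each query's ground-truth candidate (0 = best): count strictly better candidates.
    target_sims = sims[np.arange(len(q)), target_indices]
    ranks = (sims > target_sims[:, None]).sum(axis=1)
    return float((ranks < k).mean())

# Hypothetical usage: embed_query(image, text) and embed_image(image) stand in
# for a multimodal embedding model's fused-query and image encoders.
# query_embs = np.stack([embed_query(img, mod_text) for img, mod_text in queries])
# candidate_embs = np.stack([embed_image(img) for img in gallery])
# print(recall_at_k(query_embs, candidate_embs, targets, k=10))
```

Per-subcategory scores, as reported in the paper, would then follow by averaging this metric over the queries belonging to each of the fifteen subcategories.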

Keywords

composed image retrieval, multimodal embedding models, image editing, fine-grained benchmark, modality biases, categorical coverage
