Large Multimodal Models as General In-Context Classifiers

Marco Garosi, Matteo Farina, Alessandro Conti, Massimiliano Mancini, Elisa Ricci
Published February 26, 2026

Abstract

Which multimodal model should we use for classification? Previous studies suggest that the answer lies in CLIP-like contrastive Vision-Language Models (VLMs), owing to their remarkable zero-shot classification performance, while Large Multimodal Models (LMMs) are better suited to complex tasks. In this work, we argue that this answer overlooks an important capability of LMMs: in-context learning. We benchmark state-of-the-art LMMs on diverse datasets for closed-world classification and find that, although their zero-shot performance is lower than CLIP's, LMMs with a few in-context examples can match or even surpass contrastive VLMs equipped with cache-based adapters, their "in-context" equivalent. We extend this analysis to the open-world setting, where the generative nature of LMMs makes them more suitable for the task. In this challenging scenario, LMMs struggle whenever they are provided with imperfect context information. To address this issue, we propose CIRCLE, a simple training-free method that assigns pseudo-labels to in-context examples and iteratively refines them with the available context itself. Through extensive experiments, we show that CIRCLE establishes a robust baseline for open-world classification, surpassing its VLM counterparts and highlighting the potential of LMMs to serve as unified classifiers and a flexible alternative to specialized models.
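The abstract outlines CIRCLE's core loop: pseudo-label the unlabeled in-context examples, then refine those labels using the context itself before answering the query. Below is a minimal sketch of that idea; the `lmm.generate` interface, the prompt template, and the fixed-point stopping rule are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of the CIRCLE idea described in the abstract:
# pseudo-label unlabeled in-context examples, then iteratively refine
# the labels with the context itself. `lmm.generate`, the prompt
# template, and the stopping rule are assumptions for illustration.

def classify(lmm, image, context, class_names):
    """Prompt the LMM to name the class of `image`, given (image, label) examples."""
    prompt = []
    for ex_image, ex_label in context:
        prompt += [ex_image, f"This is a {ex_label}."]
    prompt += [image, f"Which of {', '.join(class_names)} is this?"]
    return lmm.generate(prompt)  # assumed to return one of `class_names`

def circle(lmm, context_images, query_image, class_names, max_rounds=3):
    # Step 1: zero-shot pseudo-labels (empty context) for every example.
    labels = [classify(lmm, img, [], class_names) for img in context_images]

    # Step 2: re-label each example using the *other* pseudo-labeled
    # examples as its context, so the labels can correct one another.
    for _ in range(max_rounds):
        new_labels = []
        for i, img in enumerate(context_images):
            others = [(context_images[j], labels[j])
                      for j in range(len(context_images)) if j != i]
            new_labels.append(classify(lmm, img, others, class_names))
        if new_labels == labels:  # labels stabilized: stop early
            break
        labels = new_labels

    # Step 3: answer the query with the refined in-context examples.
    return classify(lmm, query_image,
                    list(zip(context_images, labels)), class_names)
```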

Keywords

Vision-Language Models, Large Multimodal Models, zero-shot classification, in-context learning, open-world classification, cache-based adapters, CIRCLE, pseudo-labeling, context refinement
