Multimodal AI

MetaphorStar: Image Metaphor Understanding and Reasoning with End-to-End Visual Reinforcement Learning

Chenhao Zhang, Yazhe Niu, Hongsheng Li
Published
February 11, 2026
Authors: 3 · Word Count: 9,194

MetaphorStar teaches AI to understand visual metaphor through reinforcement learning and true-false questions.

Abstract

Metaphorical comprehension in images remains a critical challenge for today's AI systems. While Multimodal Large Language Models (MLLMs) excel at basic Visual Question Answering (VQA), they consistently struggle to grasp the nuanced cultural, emotional, and contextual implications embedded in visual content. This difficulty stems from the task's demand for sophisticated multi-hop reasoning, cultural context, and Theory of Mind (ToM) capabilities, which current models lack. To fill this gap, we propose MetaphorStar, the first end-to-end visual reinforcement learning (RL) framework for image implication tasks. Our framework includes three core components: the fine-grained dataset TFQ-Data, the visual RL method TFQ-GRPO, and the well-structured benchmark TFQ-Bench. Our fully open-source MetaphorStar family, trained with TFQ-GRPO on TFQ-Data, improves performance on the image implication benchmarks by an average of 82.6%. Compared with more than 20 mainstream MLLMs, MetaphorStar-32B achieves state-of-the-art (SOTA) results on Multiple-Choice Questions and Open-Style Questions, and significantly outperforms the top closed-source model Gemini-3.0-pro on True-False Questions. Crucially, our experiments reveal that learning image implication tasks improves general understanding ability, especially complex visual reasoning. We further provide a systematic analysis of model parameter scaling, training data scaling, and the impact of different model architectures and training strategies, demonstrating the broad applicability of our method. We open-source all model weights, datasets, and method code at https://metaphorstar.github.io.

Key Takeaways

  1. MLLMs can understand visual metaphor through reinforcement learning rather than architectural changes.

  2. True-false questions provide high knowledge density and clear reward signals for training AI systems.

  3. MetaphorStar bridges the gap between literal visual perception and deeper symbolic understanding in images.
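The second takeaway, that true-false questions yield a clear reward signal, can be sketched as a binary verifier: each rollout either matches the gold True/False label or it does not. The snippet below is an illustrative assumption, not the paper's actual TFQ-GRPO implementation; the function name and the convention of taking the model's final stated verdict are hypothetical.

```python
import re

def tfq_reward(response: str, gold: bool) -> float:
    """Hypothetical binary reward for a true-false question.

    Scans the model's response for "true"/"false" tokens, takes the
    last one as the final verdict (earlier mentions may be part of
    the reasoning), and returns 1.0 on a match with the gold label,
    else 0.0. Unparseable responses earn no reward.
    """
    verdicts = re.findall(r"\b(true|false)\b", response.lower())
    if not verdicts:
        return 0.0  # no clear verdict, no reward
    return 1.0 if (verdicts[-1] == "true") == gold else 0.0
```

A reward this sparse but unambiguous is what makes true-false supervision attractive for GRPO-style training: there is no partial credit to game, so relative advantages across a group of rollouts are easy to compute.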

Limitations

  • Previous approaches struggled with dynamic cultural references and the complexity of metaphor ontologies.

  • Passive chain-of-thought prompting fails due to the vast and chaotic search space for abstract reasoning.

Keywords

Multimodal Large Language Models, Visual Question Answering, Theory of Mind, visual reinforcement learning, image implication tasks, fine-grained dataset, visual RL method, benchmark, model parameter scaling, training data scaling, model architectures, training strategies
