
Reading, Not Thinking: Understanding and Bridging the Modality Gap When Text Becomes Pixels in Multimodal LLMs

Kaiser Sun, Xiaochuang Yuan, Hongjun Liu, Chen Zhao, Cheng Zhang, Mark Dredze, Fan Bai
Published: March 10, 2026 · Authors: 7 · Word count: 11,109 · Includes code

Multimodal LLMs struggle to read text rendered as pixels because of a distributional mismatch, not a reasoning deficit.

Abstract

Multimodal large language models (MLLMs) can process text presented as images, yet they often perform worse than when the same content is provided as textual tokens. We systematically diagnose this "modality gap" by evaluating seven MLLMs across seven benchmarks in five input modes, spanning both synthetically rendered text and realistic document images from arXiv PDFs to Wikipedia pages. We find that the modality gap is task- and data-dependent. For example, math tasks degrade by over 60 points on synthetic renderings, while natural document images often match or exceed text-mode performance. Rendering choices such as font and resolution are strong confounds, with font alone swinging accuracy by up to 47 percentage points. To understand this, we conduct a grounded-theory error analysis of over 4,000 examples, revealing that image mode selectively amplifies reading errors (calculation and formatting failures) while leaving knowledge and reasoning errors largely unchanged, and that some models exhibit a chain-of-thought reasoning collapse under visual input. Motivated by these findings, we propose a self-distillation method that trains the model on its own pure text reasoning traces paired with image inputs, raising image-mode accuracy on GSM8K from 30.71% to 92.72% and transferring to unseen benchmarks without catastrophic forgetting. Overall, our study provides a systematic understanding of the modality gap and suggests a practical path toward improving visual text understanding in multimodal language models.

Key Takeaways

  1. The modality gap between text and image inputs in MLLMs stems from reading errors, not reasoning failures.

  2. Font choice and rendering style alone can swing accuracy by up to 47 percentage points across models.

  3. Self-distillation training on text-mode reasoning traces improves image-mode accuracy from 30.71% to 92.72% on GSM8K.
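The self-distillation recipe described above reduces to a simple data-construction step: collect the model's own chain-of-thought traces from text-mode runs, then fine-tune the same model on (rendered image, trace) pairs. A minimal sketch of that pairing step follows; the field names (`image_path`, `text_mode_trace`) and the prompt string are illustrative assumptions, not taken from the paper's released code.

```python
import json

def build_self_distillation_pairs(examples):
    """Pair each rendered-question image with the chain-of-thought the
    same model produced when given the question as plain text.

    Each training record supervises image-mode input with the model's
    own text-mode reasoning trace as the target.
    """
    pairs = []
    for ex in examples:
        pairs.append({
            "image": ex["image_path"],        # question rendered as pixels
            "prompt": "Solve the problem shown in the image.",
            "target": ex["text_mode_trace"],  # model's own text-mode trace
        })
    return pairs

# Tiny demo: one GSM8K-style example with a hypothetical trace.
demo = [{"image_path": "gsm8k_0001.png",
         "text_mode_trace": "There are 3 + 4 = 7 apples. Answer: 7"}]
print(json.dumps(build_self_distillation_pairs(demo), indent=2))
```

The resulting records can be fed to any standard vision-language fine-tuning loop; the key design choice is that the supervision signal comes from the model itself, which is what lets the method avoid catastrophic forgetting on text-mode inputs.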

Limitations

  • The evaluation covers only seven MLLMs and seven benchmarks, so the findings may not generalize to all architectures.

  • The self-distillation method requires access to the model's text-mode reasoning traces, limiting its applicability to closed-source models.

Keywords

multimodal large language models, modality gap, visual text understanding, self-distillation, reasoning traces, GSM8K
