Multimodal AI

Render-of-Thought: Rendering Textual Chain-of-Thought as Images for Visual Latent Reasoning

Yifan Wang, Shiyu Li, Peiming Li, Xiaochen Yang, Yang Tang, Zheng Wei
arXiv ID: 2601.14750
Published: January 21, 2026
Authors: 6
Hugging Face Likes: 14
Comments: 1

Abstract

Chain-of-Thought (CoT) prompting has achieved remarkable success in unlocking the reasoning capabilities of Large Language Models (LLMs). Although CoT prompting enhances reasoning, its verbosity imposes substantial computational overhead. Recent latent-reasoning works often focus exclusively on outcome alignment and lack supervision of the intermediate reasoning process, leaving the latent reasoning chain difficult to analyze. To address these challenges, we introduce Render-of-Thought (RoT), the first framework to reify the reasoning chain by rendering textual steps into images, making the latent rationale explicit and traceable. Specifically, we leverage the vision encoders of existing Vision Language Models (VLMs) as semantic anchors to align the vision embeddings with the textual space. This design enables plug-and-play implementation without incurring additional pre-training overhead. Extensive experiments on mathematical and logical reasoning benchmarks demonstrate that our method achieves 3-4x token compression and substantial inference acceleration compared to explicit CoT, while maintaining competitive performance against other methods, validating the feasibility of this paradigm. Our code is available at https://github.com/TencentBAC/RoT.
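The core idea — rendering a textual reasoning chain onto an image canvas so a VLM's vision encoder can consume it as a compact patch sequence — can be sketched as follows. This is a minimal illustration using Pillow; the function name `render_cot_as_image`, the canvas size, and the 14x14 patch arithmetic are assumptions for demonstration, not the paper's actual pipeline.

```python
from PIL import Image, ImageDraw

def render_cot_as_image(cot_text, width=448, height=448, line_chars=64):
    """Render a chain-of-thought string onto a fixed-size white canvas.

    Illustrative only: RoT's real rendering (fonts, layout, resolution)
    may differ; this just shows the text-to-image reification step.
    """
    img = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(img)
    # Naive fixed-width line wrap using Pillow's default bitmap font.
    lines = [cot_text[i:i + line_chars] for i in range(0, len(cot_text), line_chars)]
    for row, line in enumerate(lines):
        draw.text((8, 8 + 14 * row), line, fill="black")
    return img

cot = ("Step 1: 17 * 24 = 408. "
       "Step 2: 408 + 92 = 500. "
       "Step 3: The answer is 500.")
img = render_cot_as_image(cot)

# A ViT-style encoder with 14x14 patches over a 448x448 canvas sees
# (448 // 14) ** 2 = 1024 patch tokens, which the VLM typically pools
# or compresses further; the paper reports 3-4x fewer tokens overall
# than the explicit textual CoT.
patch = 14
num_patches = (img.width // patch) * (img.height // patch)
print(num_patches)  # 1024
```

The rendered image would then be passed through the frozen vision encoder, whose embeddings — aligned to the textual space per the paper's semantic-anchor design — stand in for the verbose token-by-token rationale.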

Keywords

Chain-of-Thought prompting, Large Language Models, vision encoders, Vision Language Models, token compression, inference acceleration, reasoning chain, semantic anchors, latent reasoning, traceability
