
GutenOCR: A Grounded Vision-Language Front-End for Documents

Hunter Heidenreich, Ben Elliott, Olivia Dinica, Yosheb Getachew
arXiv ID
2601.14490
Published
January 20, 2026

Abstract

GutenOCR is a family of grounded OCR front-ends obtained by fine-tuning Qwen2.5-VL-3B and Qwen2.5-VL-7B. The resulting single-checkpoint vision-language models expose reading, detection, and grounding through a unified, prompt-based interface. Trained on business documents, scientific articles, and synthetic grounding data, the models support full-page and localized reading with line- and paragraph-level bounding boxes and conditional "where is x?" queries. We introduce a grounded OCR evaluation protocol and show that GutenOCR-7B more than doubles the composite grounded OCR score of its Qwen2.5-VL-7B backbone on 10.5K held-out business and scientific pages (0.40 to 0.82). On Fox and OmniDocBench v1.5, our approach substantially improves region- and line-level OCR as well as text-detection recall, but reveals trade-offs in page-level linearization, color-guided OCR, and formula-heavy layouts.

Keywords

vision-language models, fine-tuning, grounded OCR, prompt-based interface, document understanding, OCR evaluation protocol, page-level linearization, text-detection recall
