Large Language Models

CHIMERA: Compact Synthetic Data for Generalizable LLM Reasoning

Xinyu Zhu, Yihao Feng, Yanchao Sun, Xianzhi Du, Pingzhi Li, Olli Saarikivi, Yun Zhu, Yu Meng
Published: March 1, 2026
Authors: 8
Word Count: 20,206
Code: Includes code

Compact synthetic dataset enables small models to match large model reasoning performance.

Abstract

Large Language Models (LLMs) have recently exhibited remarkable reasoning capabilities, largely enabled by supervised fine-tuning (SFT)- and reinforcement learning (RL)-based post-training on high-quality reasoning data. However, reproducing and extending these capabilities in open and scalable settings is hindered by three fundamental data-centric challenges: (1) the cold-start problem, arising from the lack of seed datasets with detailed, long Chain-of-Thought (CoT) trajectories needed to initialize reasoning policies; (2) limited domain coverage, as most existing open-source reasoning datasets are concentrated in mathematics, with limited coverage of broader scientific disciplines; and (3) the annotation bottleneck, where the difficulty of frontier-level reasoning tasks makes reliable human annotation prohibitively expensive or infeasible. To address these challenges, we introduce CHIMERA, a compact synthetic reasoning dataset comprising 9K samples for generalizable cross-domain reasoning. CHIMERA is constructed with three key properties: (1) it provides rich, long CoT reasoning trajectories synthesized by state-of-the-art reasoning models; (2) it has broad and structured coverage, spanning 8 major scientific disciplines and over 1K fine-grained topics organized via a model-generated hierarchical taxonomy; and (3) it employs a fully automated, scalable evaluation pipeline that uses strong reasoning models to cross-validate both problem validity and answer correctness. We use CHIMERA to post-train a 4B Qwen3 model. Despite the dataset's modest size, the resulting model achieves strong performance on a suite of challenging reasoning benchmarks, including GPQA-Diamond, AIME 24/25/26, HMMT 25, and Humanity's Last Exam, approaching or matching the reasoning performance of substantially larger models such as DeepSeek-R1 and Qwen3-235B.
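The abstract's third property, automated cross-validation of problem validity and answer correctness by strong reasoning models, can be sketched as a simple majority-vote filter. This is an illustrative reconstruction, not the paper's implementation: the judge interface, threshold, and stub models below are hypothetical stand-ins for whatever reasoning models and prompts the authors actually used.

```python
from collections import Counter

def cross_validate(problem, answer, judges, threshold=0.5):
    """Keep a (problem, answer) pair only if a majority of judges
    (a) deem the problem well-posed and (b) independently reproduce
    the reference answer. Hypothetical sketch of the pipeline idea."""
    # Stage 1: problem validity check.
    valid_votes = sum(j.is_valid(problem) for j in judges)
    if valid_votes / len(judges) <= threshold:
        return False  # problem itself judged ill-posed
    # Stage 2: answer agreement check via independent solving.
    answer_votes = Counter(j.solve(problem) for j in judges)
    top_answer, count = answer_votes.most_common(1)[0]
    return top_answer == answer and count / len(judges) > threshold

class StubJudge:
    """Placeholder for an LLM judge; a real judge would call a
    reasoning model. The interface here is assumed, not documented."""
    def __init__(self, answer_fn, valid=True):
        self.answer_fn, self.valid = answer_fn, valid
    def is_valid(self, problem):
        return self.valid
    def solve(self, problem):
        return self.answer_fn(problem)

judges = [StubJudge(lambda p: "42") for _ in range(3)]
print(cross_validate("toy problem", "42", judges))  # True: unanimous agreement
```

Requiring agreement both on validity and on the answer itself is what lets the pipeline scale without human annotation: a sample survives only when multiple independent solvers converge on the same result.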

Key Takeaways

  • 1

    CHIMERA solves three critical bottlenecks in reasoning dataset creation: cold-start problem, limited domain coverage, and expensive human annotation.

  • 2

    The dataset contains 9,225 problems with exceptionally detailed reasoning trajectories averaging 11,121 words, spanning eight scientific disciplines.

  • 3

    A 4B parameter model fine-tuned on CHIMERA matches performance of substantially larger models like DeepSeek-R1 and Qwen3-235B on challenging benchmarks.

Limitations

  • Existing synthetic datasets are either too easy, too narrow, or unreliable for training generalizable reasoning models.

  • Frontier-level reasoning tasks make reliable human annotation prohibitively expensive or sometimes impossible to obtain.

Keywords

Chain-of-Thought, supervised fine-tuning, reinforcement learning, large language models, synthetic reasoning dataset, cross-validation, hierarchical taxonomy, reasoning benchmarks, GPQA-Diamond, AIME, HMMT, Humanity's Last Exam
