
Training LLMs for Divide-and-Conquer Reasoning Elevates Test-Time Scalability

Xiao Liang, Zhong-Zhi Li, Zhenghao Lin, Eric Hancheng Jiang, Hengyuan Zhang, Yelong Shen, Kai-Wei Chang, Ying Nian Wu, Yeyun Gong, Weizhu Chen
Published: February 2, 2026
Authors: 10
Word Count: 9,759
Code: Includes code

DAC reasoning enhances LLMs' problem-solving capabilities.

Abstract

Large language models (LLMs) have demonstrated strong reasoning capabilities through step-by-step chain-of-thought (CoT) reasoning. Nevertheless, at the limits of model capability, CoT often proves insufficient, and its strictly sequential nature constrains test-time scalability. A potential alternative is divide-and-conquer (DAC) reasoning, which decomposes a complex problem into subproblems to facilitate more effective exploration of the solution space. Although promising, our analysis reveals a fundamental misalignment between general-purpose post-training and DAC-style inference, which limits the model's capacity to fully leverage this potential. To bridge this gap and fully unlock LLMs' reasoning capabilities on the most challenging tasks, we propose an end-to-end reinforcement learning (RL) framework to enhance their DAC-style reasoning capacity. At each step, the policy decomposes a problem into a group of subproblems, solves them sequentially, and addresses the original one conditioned on the subproblem solutions, with both decomposition and solution integrated into RL training. Under comparable training, our DAC-style framework endows the model with a higher performance ceiling and stronger test-time scalability, surpassing CoT by 8.6% in Pass@1 and 6.3% in Pass@32 on competition-level benchmarks.
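The inference pattern described in the abstract — decompose into subproblems, solve them sequentially, then answer the original problem conditioned on the subproblem solutions — can be sketched as a simple loop. This is a minimal illustration, not the paper's implementation; the `generate` function is a hypothetical stand-in for an LLM call, and the prompts and parsing are illustrative assumptions.

```python
# Hypothetical sketch of one divide-and-conquer (DAC) reasoning step.
# `generate` stands in for an LLM call; here it returns a canned string
# so the sketch runs self-contained.

def generate(prompt: str) -> str:
    # Placeholder for a real model call (e.g. an API request).
    return f"<answer to: {prompt.splitlines()[0][:40]}>"

def dac_solve(problem: str, depth: int = 0, max_depth: int = 1) -> str:
    """Decompose, solve subproblems sequentially, then solve the
    original problem conditioned on the subproblem solutions."""
    if depth >= max_depth:
        # Base case: answer directly, CoT-style.
        return generate(f"Solve directly: {problem}")

    # 1. Decompose the problem into a group of subproblems.
    decomposition = generate(f"List subproblems for: {problem}")
    subproblems = [ln for ln in decomposition.splitlines() if ln.strip()]

    # 2. Solve each subproblem sequentially (recursing one level).
    sub_solutions = [dac_solve(sp, depth + 1, max_depth) for sp in subproblems]

    # 3. Address the original problem conditioned on the sub-solutions.
    context = "\n".join(sub_solutions)
    return generate(f"Using these partial results:\n{context}\nSolve: {problem}")
```

In the paper's framework both the decomposition step and the solution steps are produced by the same policy and are optimized jointly with RL, rather than being fixed prompt templates as in this sketch.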

Key Takeaways

  1. DAC reasoning significantly elevates LLMs' test-time scalability.

  2. The RL framework integrates decomposition and solution into training.

  3. Substantial performance gains on competition-level benchmarks.
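The benchmark gains are reported as Pass@1 and Pass@32. For context, Pass@k is conventionally computed with the unbiased estimator of Chen et al. (2021): given n sampled generations of which c are correct, it estimates the probability that at least one of k drawn samples is correct. The snippet below shows that standard formula; the paper does not specify its exact evaluation code, so treat this as the conventional definition rather than the authors' script.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator: 1 - C(n-c, k) / C(n, k).

    n: total samples generated per problem
    c: number of correct samples
    k: budget of samples considered
    """
    if n - c < k:
        # Fewer than k incorrect samples exist, so any k-subset
        # must contain at least one correct sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

Averaging `pass_at_k` over all benchmark problems yields the Pass@1 and Pass@32 numbers of the kind quoted in the abstract.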

Limitations

  • Reinforcement learning is computationally intensive.

  • Quality of subproblem decomposition is critical.

Keywords

chain-of-thought reasoning, divide-and-conquer reasoning, reinforcement learning, large language models, policy decomposition, solution integration, test-time scalability, Pass@1, Pass@32
