Large Language Models

PRISM: Pushing the Frontier of Deep Think via Process Reward Model-Guided Inference

Rituraj Sharma, Weiyuan Chen, Noah Provenzano, Tu Vu
Published: March 3, 2026
Authors: 4
Word count: 7,092
Code: included

PRISM uses step-level reward models to guide deep reasoning refinement, achieving state-of-the-art math competition performance.

Abstract

DEEPTHINK methods improve reasoning by generating, refining, and aggregating populations of candidate solutions, which enables strong performance on complex mathematical and scientific tasks. However, existing frameworks often lack reliable correctness signals during inference, which creates a population-enhancement bottleneck where deeper deliberation amplifies errors, suppresses correct minority solutions, and yields weak returns to additional compute. In this paper, we introduce a functional decomposition of DEEPTHINK systems and propose PRISM, a Process Reward Model (PRM)-guided inference algorithm that uses step-level verification to guide both population refinement and solution aggregation. During refinement, PRISM treats candidate solutions as particles in a PRM-defined energy landscape and reshapes the population through score-guided resampling and stochastic refinement, which concentrates probability mass on higher-quality reasoning while preserving diversity. Across mathematics and science benchmarks, PRISM is competitive with or outperforms existing DEEPTHINK methods, reaching 90.0%, 75.4%, and 71.4% with gpt-oss-20b on AIME25, HMMT25, and GPQA Diamond, respectively, while matching or exceeding gpt-oss-120b. Additionally, our analysis shows that PRISM produces consistent net-directional correction during refinement, remains reliable when the initial population contains few correct candidates, and often lies on the compute-accuracy Pareto frontier.
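
The score-guided resampling and stochastic refinement described in the abstract can be sketched as a particle-style loop over candidate solutions. Everything below is a toy stand-in, not the paper's implementation: `prm_score` simulates a Process Reward Model by averaging per-step correctness flags, each "solution" is just a list of such flags, and the refinement step randomly repairs one step rather than invoking an LLM rewrite.

```python
import math
import random

def prm_score(solution):
    """Toy stand-in for a Process Reward Model: the fraction of steps
    flagged correct. A real PRM would score each reasoning step."""
    return sum(solution) / len(solution)

def resample_and_refine(population, temperature=0.5, rng=random):
    """One round of score-guided resampling plus stochastic refinement,
    in the spirit of particle filtering over candidate solutions."""
    scores = [prm_score(s) for s in population]
    # Softmax over PRM scores concentrates probability mass on
    # higher-quality candidates while keeping weaker ones alive.
    weights = [math.exp(s / temperature) for s in scores]
    total = sum(weights)
    probs = [w / total for w in weights]
    # Resample with replacement, proportional to the softmax weights.
    resampled = rng.choices(population, weights=probs, k=len(population))
    # Stochastic refinement (toy): pick a random step and repair it with
    # probability 0.5; in PRISM this role is played by model-driven rewriting.
    refined = []
    for sol in resampled:
        sol = list(sol)
        i = rng.randrange(len(sol))
        if rng.random() < 0.5:
            sol[i] = 1
        refined.append(sol)
    return refined

rng = random.Random(0)
# Each "solution" is a list of step-correctness flags (1 = correct step).
pop = [[rng.randint(0, 1) for _ in range(6)] for _ in range(8)]
for _ in range(5):
    pop = resample_and_refine(pop, rng=rng)
mean = sum(prm_score(s) for s in pop) / len(pop)
```

The resampling-plus-perturbation structure is what lets the population drift toward higher PRM scores over iterations without collapsing onto a single candidate.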

Key Takeaways

  1. PRISM uses Process Reward Models to guide solution refinement by scoring reasoning steps, enabling directional error correction instead of stochastic rewriting.

  2. Simple parallel sampling with majority voting matches sophisticated refinement methods, indicating initial population diversity matters more than iterative refinement.

  3. PRISM achieves 90.0% on AIME25 with gpt-oss-20b, matching or exceeding gpt-oss-120b while mitigating majority dilution failure modes.
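
The parallel-sampling baseline mentioned in the takeaways reduces to a simple aggregation step: sample many final answers and keep the most common one. A minimal sketch, with hypothetical example answers:

```python
from collections import Counter

def majority_vote(answers):
    """Aggregate parallel samples by returning the most frequent final
    answer. Counter.most_common breaks ties by insertion order, so the
    earliest-seen answer wins among equally common ones."""
    return Counter(answers).most_common(1)[0][0]

# Hypothetical final answers extracted from five parallel samples.
samples = ["42", "41", "42", "42", "17"]
best = majority_vote(samples)  # → "42"
```

Majority dilution, the failure mode PRISM targets, occurs exactly when the correct answer is held by only a minority of samples, so this aggregator discards it.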

Limitations

  • Method requires training accurate Process Reward Models, which may not generalize across diverse reasoning domains or problem types.

  • Approach relies on step-level correctness signals that may be difficult to obtain for open-ended or subjective reasoning tasks.

Keywords

DEEPTHINK, candidate solutions, population enhancement, reasoning, Process Reward Model, step-level verification, score-guided resampling, stochastic refinement, energy landscape, compute-accuracy Pareto frontier
