Large Language Models

V_1: Unifying Generation and Self-Verification for Parallel Reasoners

Harman Singh, Xiuyu Li, Kusha Sareen, Monishwaran Maheswaran, Sijun Tan, Xiaoxia Wu, Junxiong Wang, Alpay Ariyak, Qingyang Wu, Samir Khaki, Rishabh Tiwari, Long Lian, Yucheng Lu, Boyi Li, Alane Suhr, Ben Athiwaratkun, Kurt Keutzer
Published
March 4, 2026
Authors: 17 · Word count: 12,405 · Includes code

V1 framework unifies generation and verification through efficient pairwise ranking for parallel reasoning.

Abstract

Test-time scaling for complex reasoning tasks shows that leveraging inference-time compute, by methods such as independently sampling and aggregating multiple solutions, results in significantly better task outcomes. However, a critical bottleneck is verification: sampling is only effective if correct solutions can be reliably identified among candidates. While existing approaches typically evaluate candidates independently via scalar scoring, we demonstrate that models are substantially stronger at pairwise self-verification. Leveraging this insight, we introduce V_1, a framework that unifies generation and verification through efficient pairwise ranking. V_1 comprises two components: V_1-Infer, an uncertainty-guided algorithm using a tournament-based ranking that dynamically allocates self-verification compute to candidate pairs whose relative correctness is most uncertain; and V_1-PairRL, an RL framework that jointly trains a single model as both generator and pairwise self-verifier, ensuring the verifier adapts to the generator's evolving distribution. On code generation (LiveCodeBench, CodeContests, SWE-Bench) and math reasoning (AIME, HMMT) benchmarks, V_1-Infer improves Pass@1 by up to 10% over pointwise verification and outperforms recent test-time scaling methods while being significantly more efficient. Furthermore, V_1-PairRL achieves 7--9% test-time scaling gains over standard RL and pointwise joint training, and improves base Pass@1 by up to 8.7% over standard RL in a code-generation setting.
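The abstract describes V_1-Infer as a tournament-based ranking that spends extra self-verification calls only on pairs whose relative correctness is uncertain. The sketch below is a minimal illustration of that idea, not the paper's implementation: `compare` is a hypothetical stand-in for an LLM pairwise self-verification call (here a toy scorer over `(solution, hidden_quality)` tuples), and the budget and confidence threshold are made-up parameters.

```python
import random

def compare(a, b):
    """Hypothetical stand-in for an LLM pairwise self-verification call.
    Candidates are (solution, hidden_quality) tuples; returns
    (preferred_candidate, confidence in [0.5, 1.0])."""
    p = a[1] / (a[1] + b[1])          # chance that a beats b
    conf = max(p, 1 - p)              # how one-sided the matchup looks
    winner = a if random.random() < p else b
    return winner, conf

def tournament_select(candidates, budget_per_pair=3, conf_threshold=0.8):
    """Single-elimination tournament over candidate solutions.

    Each pair gets one comparison; only pairs whose verdict is uncertain
    (confidence below the threshold) receive repeat comparisons, up to
    budget_per_pair, and the majority vote advances. This concentrates
    verification compute on the hard-to-rank pairs while needing only
    O(n) comparisons overall instead of all O(n^2) pairs.
    """
    pool = list(candidates)
    while len(pool) > 1:
        next_round = []
        for i in range(0, len(pool) - 1, 2):
            a, b = pool[i], pool[i + 1]
            votes = {id(a): 0, id(b): 0}
            for _ in range(budget_per_pair):
                winner, conf = compare(a, b)
                votes[id(winner)] += 1
                if conf >= conf_threshold:   # confident verdict: stop early
                    break
            next_round.append(a if votes[id(a)] >= votes[id(b)] else b)
        if len(pool) % 2 == 1:               # odd candidate out gets a bye
            next_round.append(pool[-1])
        pool = next_round
    return pool[0]
```

With a strongly superior candidate in the pool, the tournament reliably surfaces it while only the evenly matched pairs consume the full comparison budget.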

Key Takeaways

  1. LLMs are substantially better at comparing two solutions head-to-head than at rating single solutions in isolation.

  2. V1-Infer uses a tournament-based pairwise comparison algorithm to improve verification accuracy without quadratic comparisons.

  3. V1-PairRL jointly trains models as both generators and pairwise verifiers, achieving 7-9% test-time scaling gains over pointwise methods.

Limitations

  • Pointwise verification suffers from calibration collapse due to lack of globally comparable reference frames for absolute scores.

  • Self-aggregation methods experience diversity collapse where correct solutions are lost during iterative refinement steps.

Keywords

test-time scaling, complex reasoning tasks, inference-time compute, sampling, aggregation, verification, scalar scoring, pairwise self-verification, V_1 framework, uncertainty-guided algorithm, tournament-based ranking, dynamic allocation, reinforcement learning, joint training, Pass@1, LiveCodeBench, CodeContests, SWE-Bench, AIME, HMMT
