Large Language Models

Scalable Power Sampling: Unlocking Efficient, Training-Free Reasoning for LLMs via Distribution Sharpening

Xiaotong Ji, Rasul Tutunov, Matthieu Zimmer, Haitham Bou Ammar
Published: January 29, 2026
Authors: 4
Word Count: 10,311
Code: included

Unlock LLM reasoning efficiency with scalable sampling.

Abstract

Reinforcement learning (RL) post-training is a dominant approach for improving the reasoning performance of large language models (LLMs), yet growing evidence suggests that its gains arise primarily from distribution sharpening rather than the acquisition of new capabilities. Recent work has shown that sampling from the power distribution of LLMs using Markov chain Monte Carlo (MCMC) can recover performance comparable to RL post-training without relying on external rewards; however, the high computational cost of MCMC makes such approaches impractical for widespread adoption. In this work, we propose a theoretically grounded alternative that eliminates the need for iterative MCMC. We derive a novel formulation showing that the global power distribution can be approximated by a token-level, scaled low-temperature distribution, where the scaling factor captures future trajectory quality. Leveraging this insight, we introduce a training-free and verifier-free algorithm that sharpens the base model's generative distribution autoregressively. Empirically, we evaluate our method on math, QA, and code tasks across four LLMs, and show that it matches or surpasses one-shot GRPO without relying on any external rewards, while reducing inference latency by over 10x compared to MCMC-based sampling.
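
To make the abstract's idea concrete, here is a minimal numpy sketch of autoregressive power sampling on a toy Markov "model". Everything here (`TABLE`, `sharpened_step`, the constants) is illustrative and not from the paper: it uses the identity sum_y p(y)^alpha = E_{y~p}[p(y)^(alpha-1)] to estimate the future-quality scaling factor by Monte Carlo rollouts, which illustrates the general idea the abstract describes, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HORIZON, ALPHA, N_ROLLOUTS = 8, 5, 4.0, 16

# Toy stand-in for a base LM: a fixed table of next-token probabilities
# keyed by the previous token (hypothetical; a real implementation would
# score the full prefix with the model's logits).
TABLE = rng.dirichlet(np.ones(VOCAB), size=VOCAB)

def next_token_probs(prefix):
    return TABLE[prefix[-1]]

def rollout_logprob(prefix, steps):
    """Sample a continuation from the base model; return its log-probability."""
    logprob, seq = 0.0, list(prefix)
    for _ in range(steps):
        p = next_token_probs(seq)
        tok = rng.choice(VOCAB, p=p)
        logprob += np.log(p[tok])
        seq.append(tok)
    return logprob

def sharpened_step(prefix, steps_left):
    """One autoregressive step of approximate power sampling: weight each
    candidate token by p(token)^alpha times a Monte Carlo estimate of the
    sharpened mass of its continuations (the 'future quality' factor),
    via sum_y p(y)^alpha = E_{y~p}[p(y)^(alpha-1)]."""
    p = next_token_probs(prefix)
    weights = np.empty(VOCAB)
    for tok in range(VOCAB):
        lps = np.array([rollout_logprob(prefix + [tok], steps_left)
                        for _ in range(N_ROLLOUTS)])
        future = np.mean(np.exp((ALPHA - 1.0) * lps))
        weights[tok] = p[tok] ** ALPHA * future
    return weights / weights.sum()

# Generate one sharpened sequence autoregressively from token 0.
seq = [0]
for step in range(HORIZON):
    q = sharpened_step(seq, HORIZON - step - 1)
    seq.append(int(rng.choice(VOCAB, p=q)))
print("sharpened sample:", seq)
```

Note that a single forward pass plus short rollouts per step replaces the long MCMC chains over full sequences, which is where the latency reduction claimed in the abstract would come from.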

Key Takeaways

  1. Enhances LLM reasoning without additional training.

  2. Reduces inference latency by over 10 times.

  3. Uses a token-level scaled low-temperature distribution for efficiency (see the check below).
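
On the third takeaway: renormalizing p^alpha over the vocabulary at a single step is exactly softmax sampling at temperature T = 1/alpha, which is why the paper can call the per-token factor "low-temperature". A quick numerical check (values illustrative):

```python
import numpy as np

# Per-token power scaling equals low-temperature softmax:
# p_i^alpha / sum_j p_j^alpha == softmax(log(p) / T) with T = 1/alpha.
p = np.array([0.5, 0.3, 0.15, 0.05])
alpha = 4.0  # i.e., temperature T = 0.25
power = p**alpha / (p**alpha).sum()
logits = np.log(p) * alpha
low_temp = np.exp(logits - logits.max())
low_temp /= low_temp.sum()
assert np.allclose(power, low_temp)
print(power)  # mass concentrates on the highest-probability token
```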

Limitations

  • Requires autoregressive Monte Carlo estimation of the future-quality scaling factor.

  • The Monte Carlo estimate may introduce bias, requiring a jackknife correction (see the sketch after this list).
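
The page does not spell out the corrected estimator, so the following is a generic sketch of jackknife bias correction of the kind that applies to a nonlinear function of a Monte Carlo mean (such as a log or a ratio); the names and the toy log-expectation example are illustrative, not the paper's.

```python
import numpy as np

def jackknife(estimator, samples):
    """Jackknife bias correction: n * theta_full - (n - 1) * (mean of the
    leave-one-out estimates), which removes the leading O(1/n) bias term."""
    samples = np.asarray(samples)
    n = len(samples)
    theta_full = estimator(samples)
    loo = np.array([estimator(np.delete(samples, i)) for i in range(n)])
    return n * theta_full - (n - 1) * loo.mean()

# Toy example: the plug-in estimator log(mean(x)) of log E[x] is biased
# downward (Jensen's inequality); the jackknife shrinks that bias.
rng = np.random.default_rng(0)
x = rng.exponential(size=64)  # true log E[x] = 0
plug_in = lambda s: np.log(s.mean())
print("plug-in estimate:   ", plug_in(x))
print("jackknife-corrected:", jackknife(plug_in, x))
```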

Keywords

reinforcement learning, large language models, distribution sharpening, Markov chain Monte Carlo, power distribution, low-temperature sampling, autoregressive generation, GRPO, inference latency
