Multiplex Thinking: Reasoning via Token-wise Branch-and-Merge

Yao Tang, Li Dong, Yaru Hao, Qingxiu Dong, Furu Wei, Jiatao Gu
arXiv ID: 2601.08808
Published: January 13, 2026
Authors: 6

Abstract

Large language models often solve complex reasoning tasks more effectively with Chain-of-Thought (CoT), but at the cost of long, low-bandwidth token sequences. Humans, by contrast, often reason softly by maintaining a distribution over plausible next steps. Motivated by this, we propose Multiplex Thinking, a stochastic soft reasoning mechanism that, at each thinking step, samples K candidate tokens and aggregates their embeddings into a single continuous multiplex token. This preserves the vocabulary embedding prior and the sampling dynamics of standard discrete generation, while inducing a tractable probability distribution over multiplex rollouts. Consequently, multiplex trajectories can be directly optimized with on-policy reinforcement learning (RL). Importantly, Multiplex Thinking is self-adaptive: when the model is confident, the multiplex token is nearly discrete and behaves like standard CoT; when it is uncertain, it compactly represents multiple plausible next steps without increasing sequence length. Across challenging math reasoning benchmarks, Multiplex Thinking consistently outperforms strong discrete CoT and RL baselines from Pass@1 through Pass@1024, while producing shorter sequences. The code and checkpoints are available at https://github.com/GMLR-Penn/Multiplex-Thinking.
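As a rough illustration of the mechanism the abstract describes, the sketch below shows one plausible reading of a single multiplex step in PyTorch: sample K candidate next tokens, then merge their vocabulary embeddings into one continuous multiplex token. The function name `multiplex_step`, the choice of a renormalized probability-weighted mean as the aggregation, and the default `k` are assumptions made for illustration; the authors' actual aggregation rule may differ (see the linked repository).

```python
# Hypothetical sketch of one Multiplex Thinking step, based only on the abstract.
# The aggregation (probability-weighted mean over K sampled candidates) is an
# assumption, not the paper's confirmed implementation.
import torch
import torch.nn.functional as F

def multiplex_step(logits: torch.Tensor, embedding: torch.nn.Embedding, k: int = 4):
    """Sample K candidate tokens from the next-token distribution and merge
    their embeddings into a single continuous multiplex token."""
    probs = F.softmax(logits, dim=-1)                            # (vocab,)
    candidates = torch.multinomial(probs, k, replacement=True)   # K sampled token ids
    weights = probs[candidates]
    weights = weights / weights.sum()                            # renormalize over the K samples
    cand_embs = embedding(candidates)                            # (K, d_model)
    # When the model is confident, the K samples mostly coincide, so the
    # multiplex token collapses to (nearly) one discrete token embedding,
    # matching the self-adaptive behavior described in the abstract.
    multiplex_token = (weights.unsqueeze(-1) * cand_embs).sum(dim=0)  # (d_model,)
    return multiplex_token, candidates

if __name__ == "__main__":
    vocab, d_model = 50_000, 16
    emb = torch.nn.Embedding(vocab, d_model)
    logits = torch.randn(vocab)
    token, cands = multiplex_step(logits, emb, k=4)
    print(token.shape, cands.tolist())  # torch.Size([16]) and 4 sampled ids
```

The continuous multiplex token would then be fed back as the next input embedding in place of a discrete token, keeping sequence length unchanged while representing several plausible next steps at once.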

Keywords

Chain-of-Thought, stochastic soft reasoning, multiplex token, continuous multiplex token, on-policy reinforcement learning, Pass@1, Pass@1024, reasoning trajectories
