Qwen3-Coder-Next Technical Report

Ruisheng Cao, Mouxiang Chen, Jiawei Chen, Zeyu Cui, Yunlong Feng, Binyuan Hui, Yuheng Jing, Kaixin Li, Mingze Li, Junyang Lin, Zeyao Ma, Kashun Shum, Xuwu Wang, Jinxi Wei, Jiaxi Yang, Jiajun Zhang, Lei Zhang, Zongmeng Zhang, Wenting Zhao, Fan Zhou
Published: February 28, 2026
Authors: 20

Abstract

We present Qwen3-Coder-Next, an open-weight language model specialized for coding agents. Qwen3-Coder-Next has 80 billion total parameters but activates only 3 billion per inference step, pairing strong coding capability with efficient serving. In this work, we explore how far strong training recipes can push the capability limits of models with small active parameter footprints. To achieve this, we perform agentic training through large-scale synthesis of verifiable coding tasks paired with executable environments, allowing the model to learn directly from environment feedback via mid-training and reinforcement learning. Across agent-centric benchmarks including SWE-Bench and Terminal-Bench, Qwen3-Coder-Next achieves competitive performance relative to its active parameter count. We release both base and instruction-tuned open-weight versions to support research and real-world coding-agent development.
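The abstract does not spell out the architecture, but the 80B-total / 3B-active split is characteristic of sparse mixture-of-experts designs, where a gating function selects a few experts per token so only their parameters participate in the forward pass. The sketch below is purely illustrative (all names, shapes, and the top-k gating scheme are assumptions, not details from the report):

```python
import numpy as np

def topk_moe_layer(x, gate_w, experts, k=2):
    """Illustrative top-k mixture-of-experts routing (not the actual model).

    x: (d,) token representation; gate_w: (num_experts, d) gating weights;
    experts: list of (d, d) expert weight matrices.
    """
    scores = gate_w @ x                     # one routing score per expert
    topk = np.argsort(scores)[-k:]          # indices of the k best-scoring experts
    weights = np.exp(scores[topk])
    weights /= weights.sum()                # softmax over the selected experts only
    # Only k expert matrices are multiplied, so the parameters "activated"
    # per token are a small fraction of the total parameter count.
    return sum(w * (experts[i] @ x) for w, i in zip(weights, topk))

rng = np.random.default_rng(0)
d, num_experts = 8, 16
x = rng.normal(size=d)
gate_w = rng.normal(size=(num_experts, d))
experts = [rng.normal(size=(d, d)) for _ in range(num_experts)]
y = topk_moe_layer(x, gate_w, experts, k=2)  # output has the same shape as x
```

With k=2 of 16 experts active, each token touches roughly 1/8 of the expert parameters, which is the mechanism behind ratios like 3B active out of 80B total.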

Keywords

language model, parameter-efficient fine-tuning, agentic training, verifiable coding tasks, executable environments, mid-training, reinforcement learning, SWE-Bench, Terminal-Bench
