Efficient AI

ECO: Quantized Training without Full-Precision Master Weights

Mahdi Nikdan, Amir Zandieh, Dan Alistarh, Vahab Mirrokni
Published January 29, 2026

Abstract

Quantization has significantly improved the compute and memory efficiency of Large Language Model (LLM) training. However, existing approaches still accumulate their updates in high precision: concretely, gradient updates must be applied to a high-precision weight buffer, known as master weights. This buffer introduces substantial memory overhead, particularly for Sparse Mixture of Experts (SMoE) models, where model parameters and optimizer states dominate memory usage. To address this, we introduce the Error-Compensating Optimizer (ECO), which eliminates master weights by applying updates directly to quantized parameters. ECO quantizes weights after each step and carefully injects the resulting quantization error into the optimizer momentum, forming an error-feedback loop with no additional memory. We prove that, under standard assumptions and a decaying learning rate, ECO converges to a constant-radius neighborhood of the optimum, whereas naive master-weight removal can incur an error that is inversely proportional to the learning rate. We show empirical results for pretraining small Transformers (30-800M parameters), a Gemma-3 1B model, and a 2.1B-parameter Sparse MoE model with FP8 quantization, and for fine-tuning DeepSeek-MoE-16B in INT4 precision. Throughout, ECO matches master-weight baselines with near-lossless accuracy, significantly shifting the static-memory versus validation-loss Pareto frontier.
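The abstract describes ECO's core loop: apply the update directly to quantized parameters, re-quantize, and feed the resulting quantization error back into the momentum buffer. The following is a minimal illustrative sketch of such an error-feedback step, not the paper's implementation: the uniform grid quantizer stands in for FP8/INT4, and the function names, the plain-momentum optimizer, and the error-injection scaling are all assumptions made for illustration.

```python
import numpy as np

SCALE = 0.05  # quantization grid step (stand-in for an FP8/INT4 format)

def quantize(w: np.ndarray) -> np.ndarray:
    # Uniform quantizer: snap each weight to the nearest grid point.
    # (Illustrative only; real ECO targets hardware number formats.)
    return np.round(w / SCALE) * SCALE

def eco_step(q_weights, momentum, grad, lr=0.01, beta=0.9):
    """One ECO-style step (illustrative sketch).

    Updates the quantized weights directly -- no full-precision master
    copy is kept -- and injects the quantization error into the momentum
    buffer so it is compensated on later steps instead of being lost.
    """
    momentum = beta * momentum + grad       # standard momentum update
    target = q_weights - lr * momentum      # ideal (unquantized) new weights
    new_q = quantize(target)                # re-quantize: no master weights
    # Error feedback: fold the quantization error back into momentum,
    # scaled by 1/lr so the next update cancels it (assumed scaling).
    momentum = momentum + (new_q - target) / lr
    return new_q, momentum
```

Note the memory footprint: the only persistent state is the quantized weights and the momentum buffer, which already exists in the baseline optimizer, matching the abstract's claim of an error-feedback loop with no additional memory.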

Keywords

quantization, Large Language Models, Sparse Mixture of Experts, master weights, gradient updates, error-compensating optimizer, error-feedback loop, convergence, Pareto frontier, FP8 quantization, INT4 precision
