
Arcee Trinity Large Technical Report

Varun Singh, Lucas Krauss, Sami Jaghouar, Matej Sirovatka, Charles Goddard, Fares Obied, Jack Min Ong, Jannik Straube, Fern, Aria Harley, Conner Stewart, Colin Kealty, Maziyar Panahi, Simon Kirsten, Anushka Deshpande, Anneketh Vij, Arthur Bresnu, Pranav Veldurthi, Raghav Ravishankar, Hardik Bishnoi, DatologyAI Team, Arcee AI Team, Prime Intellect Team, Mark McQuade, Johannes Hagemann, Lucas Atkins
Published: February 19, 2026
Authors: 26

Abstract

We present the technical report for Arcee Trinity Large, a sparse Mixture-of-Experts model with 400B total parameters and 13B activated per token. We also report on Trinity Nano (6B total parameters, 1B activated per token) and Trinity Mini (26B total parameters, 3B activated per token). The models' architecture includes interleaved local and global attention, gated attention, depth-scaled sandwich norm, and sigmoid routing for the Mixture-of-Experts layers. For Trinity Large, we also introduce a new MoE load-balancing strategy, Soft-clamped Momentum Expert Bias Updates (SMEBU). All three models were trained with the Muon optimizer and completed training with zero loss spikes. Trinity Nano and Trinity Mini were pre-trained on 10 trillion tokens, and Trinity Large was pre-trained on 17 trillion tokens. The model checkpoints are available at https://huggingface.co/arcee-ai.
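To make the routing terminology concrete, the sketch below illustrates the general idea the abstract names: sigmoid routing gates combined with a per-expert selection bias that is updated with a momentum term and a soft clamp. The report does not describe SMEBU beyond its name, so everything here (the `route` and `update_bias` functions, the expert counts, and the `BIAS_LR`, `MOMENTUM`, and `CLAMP` hyperparameters, as well as the tanh-based clamp) is an illustrative assumption in the spirit of aux-loss-free bias-based load balancing, not the authors' implementation.

```python
# Hypothetical sketch of sigmoid MoE routing with soft-clamped, momentum-based
# expert bias updates. All names, formulas, and hyperparameters are assumptions
# for illustration only; the report does not specify SMEBU's details.
import numpy as np

NUM_EXPERTS = 64     # assumed number of routed experts (illustrative)
TOP_K = 4            # assumed experts activated per token (illustrative)
BIAS_LR = 1e-3       # assumed bias update rate
MOMENTUM = 0.9       # assumed momentum on the load-violation signal
CLAMP = 1.0          # assumed soft bound on the bias magnitude


def route(logits, expert_bias):
    """Pick top-k experts per token.

    Sigmoid gates (not softmax) weight each selected expert; the bias is
    added only when *selecting* experts, not in the gate values themselves.
    """
    gates = 1.0 / (1.0 + np.exp(-logits))                # sigmoid routing scores
    topk = np.argsort(gates + expert_bias, axis=-1)[:, -TOP_K:]
    weights = np.take_along_axis(gates, topk, axis=-1)
    weights /= weights.sum(axis=-1, keepdims=True)       # renormalize over top-k
    return topk, weights


def update_bias(expert_bias, bias_velocity, topk, num_tokens):
    """One soft-clamped momentum update of the per-expert routing bias."""
    counts = np.bincount(topk.ravel(), minlength=NUM_EXPERTS)
    load = counts / (num_tokens * TOP_K)                 # fraction of routed slots per expert
    violation = (1.0 / NUM_EXPERTS) - load               # positive when under-loaded
    bias_velocity = MOMENTUM * bias_velocity + (1.0 - MOMENTUM) * violation
    expert_bias = expert_bias + BIAS_LR * bias_velocity
    # "Soft clamp": squash the bias through tanh so it stays within [-CLAMP, CLAMP]
    expert_bias = CLAMP * np.tanh(expert_bias / CLAMP)
    return expert_bias, bias_velocity


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tokens = 1024
    logits = rng.normal(size=(tokens, NUM_EXPERTS))
    bias = np.zeros(NUM_EXPERTS)
    velocity = np.zeros(NUM_EXPERTS)
    topk, weights = route(logits, bias)
    bias, velocity = update_bias(bias, velocity, topk, tokens)
    print("max |bias| after one update:", np.abs(bias).max())
```

Under these assumptions, under-loaded experts accumulate a positive bias and become more likely to be selected on later tokens, while the momentum term smooths the update signal and the tanh clamp keeps the bias bounded so it cannot dominate the sigmoid gates.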

Keywords

Mixture-of-Experts, sparse Mixture-of-Experts, attention, gated attention, depth-scaled sandwich norm, sigmoid routing, Muon optimizer, MoE load balancing, Soft-clamped Momentum Expert Bias Updates
