AI Safety & Alignment

Shaping capabilities with token-level data filtering

Neil Rathi, Alec Radford

Published: January 29, 2026
Authors: 2
Word Count: 16,429

Token-level filtering shapes model capabilities during pretraining.

Abstract

Current approaches to reducing undesired capabilities in language models are largely post hoc, and can thus be easily bypassed by adversaries. A natural alternative is to shape capabilities during pretraining itself. On the proxy task of removing medical capabilities, we show that the simple intervention of filtering pretraining data is highly effective, robust, and inexpensive at scale. Inspired by work on data attribution, we show that filtering tokens is more effective than filtering documents, achieving the same reduction in undesired capabilities at a lower cost to benign ones. Training models spanning two orders of magnitude, we then demonstrate that filtering becomes more effective with scale: for our largest models, token filtering leads to a 7000x compute slowdown on the forget domain. We also show that models trained with token filtering can still be aligned on the forget domain. Along the way, we introduce a methodology for labeling tokens with sparse autoencoders and distilling cheap, high-quality classifiers. We also demonstrate that filtering can be robust to noisy labels given sufficient pretraining compute.
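The core idea of token-level filtering, as opposed to document-level filtering, can be sketched as a per-token loss mask: flagged tokens are excluded from the pretraining objective while the rest of the document is kept. The sketch below is illustrative only and is not the paper's implementation; the `forget_scores` input stands in for a hypothetical per-token classifier score, and the threshold is an assumed free parameter.

```python
# Illustrative sketch of token-level filtering as a pretraining loss mask.
# `forget_scores` is a hypothetical per-token classifier output in [0, 1];
# tokens at or above `threshold` are masked out of the loss, while
# document-level filtering would instead drop the entire sequence.

def token_filter_mask(forget_scores, threshold=0.5):
    """Return a 0/1 loss mask: 0.0 for flagged tokens, 1.0 otherwise."""
    return [0.0 if s >= threshold else 1.0 for s in forget_scores]

def masked_loss(per_token_losses, mask):
    """Average the per-token losses over unmasked tokens only."""
    kept = [loss * m for loss, m in zip(per_token_losses, mask)]
    n_kept = sum(mask)
    return sum(kept) / n_kept if n_kept else 0.0

# Example: the middle token is flagged as forget-domain and contributes
# nothing to the loss; the surrounding benign tokens are still trained on.
mask = token_filter_mask([0.1, 0.9, 0.2])
loss = masked_loss([2.0, 8.0, 4.0], mask)
```

The contrast with document filtering is that `n_kept` stays close to the sequence length when only a few tokens are flagged, so benign tokens in a mixed document still receive gradient signal.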

Key Takeaways

  • 1

    Token-level filtering reduces undesired capabilities effectively.

  • 2

    Filtering scales well with model size and resists adversarial attacks.

  • 3

    Filtered models can still be aligned on the forget domain while retaining desired tasks.

Limitations

  • Relies on quality of token labels for effectiveness.

  • Uncertainty in generalization across different domains.

Keywords

language models, pretraining, token filtering, data attribution, sparse autoencoders, classifier distillation, model scaling, computational efficiency, adversarial robustness
