AI Safety & Alignment

SLIME: Stabilized Likelihood Implicit Margin Enforcement for Preference Optimization

Maksim Afanasyev, Illarion Iov
Published: February 2, 2026
Authors: 2
Word Count: 5,834

SLIME: Align LLMs with human preferences without quality loss.

Abstract

Direct preference optimization methods have emerged as a computationally efficient alternative to Reinforcement Learning from Human Feedback (RLHF) for aligning Large Language Models (LLMs). Recent approaches have streamlined the alignment process by deriving implicit reward functions, yet they often suffer from a critical objective mismatch: optimizing the relative margin between chosen and rejected responses does not guarantee the preservation of the chosen response's absolute likelihood. This can lead to "unlearning", where the model degrades the probability of high-quality outputs to satisfy margin constraints, and "formatting collapse", caused by the over-penalization of rejected sequences. In this work, we introduce SLIME (Stabilized Likelihood Implicit Margin Enforcement), a reference-free alignment objective designed to decouple preference learning from generation quality. SLIME incorporates a three-pronged objective: (1) an anchoring term that maximizes the likelihood of preferred responses; (2) a stabilizing penalty that prevents the probabilities of rejected tokens from collapsing to zero; and (3) a dual-margin mechanism that combines hard and soft constraints for precise boundary shaping. Our results demonstrate that SLIME achieves superior performance compared to state-of-the-art baselines while maintaining higher generation stability.
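The abstract names the three components of the objective but not their exact functional form. The following is a minimal PyTorch sketch of how such a three-pronged, reference-free loss could be assembled; the function name `slime_loss`, the hinge form of the hard margin, the log-probability floor on rejected responses, and all coefficient values are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def slime_loss(
    chosen_logps: torch.Tensor,    # (B,) length-normalized log-probs of chosen responses
    rejected_logps: torch.Tensor,  # (B,) length-normalized log-probs of rejected responses
    beta: float = 2.0,             # soft-margin scale (assumed value)
    hard_margin: float = 0.5,      # hard-margin threshold (assumed value)
    lambda_anchor: float = 1.0,    # weight of the likelihood anchor (assumed value)
    lambda_floor: float = 0.1,     # weight of the rejected-likelihood floor (assumed value)
    floor_logp: float = -10.0,     # lowest allowed rejected log-prob (assumed value)
) -> torch.Tensor:
    """Hypothetical SLIME-style objective; a sketch, not the authors' code."""
    margin = chosen_logps - rejected_logps

    # (3a) Soft margin: smooth logistic preference term over the implicit margin.
    soft = -F.logsigmoid(beta * margin)

    # (3b) Hard margin: hinge that is active only while the margin is below the threshold.
    hard = F.relu(hard_margin - margin)

    # (1) Anchoring term: keep the chosen response likely in absolute terms,
    # counteracting the "unlearning" failure mode.
    anchor = -chosen_logps

    # (2) Stabilizing penalty: stop rejected log-probs from collapsing toward -inf,
    # which the abstract links to formatting collapse.
    floor = F.relu(floor_logp - rejected_logps)

    return (soft + hard + lambda_anchor * anchor + lambda_floor * floor).mean()
```

In this reading, the logistic term supplies the soft margin, the hinge supplies the hard margin, and the two auxiliary terms directly target the unlearning and formatting-collapse failure modes described above; because no reference-model log-probs appear, the loss stays reference-free.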

Key Takeaways

  1. SLIME aligns LLMs with human preferences effectively.

  2. SLIME preserves generation quality through a three-pronged objective.

  3. SLIME consistently outperforms baselines on diverse benchmarks.

Limitations

  • Evaluated only on smaller models; effectiveness at larger scales is uncertain.

  • Benchmarks are English-only; multilingual evaluation is still needed.

Keywords

direct preference optimization, Reinforcement Learning from Human Feedback, RLHF, Large Language Models, LLMs, implicit reward functions, objective mismatch, relative margin, absolute likelihood, unlearning, formatting collapse, reference-free alignment, anchoring term, stabilizing penalty, dual-margin mechanism, generation stability
