AI Safety & Alignment

Secure Code Generation via Online Reinforcement Learning with Vulnerability Reward Model

Tianyi Wu, Mingzhe Du, Yue Liu, Chengran Yang, Terry Yue Zhuo, Jiaheng Zhang, See-Kiong Ng
Published: February 7, 2026
Authors: 7
Word Count: 16,064
Code: Includes code

SecCoderX uses reinforcement learning to generate secure code without sacrificing functionality.

Abstract

Large language models (LLMs) are increasingly used in software development, yet their tendency to generate insecure code remains a major barrier to real-world deployment. Existing secure code alignment methods often suffer from a functionality-security paradox, improving security at the cost of substantial utility degradation. We propose SecCoderX, an online reinforcement learning framework for functionality-preserving secure code generation. SecCoderX first bridges vulnerability detection and secure code generation by repurposing mature detection resources in two ways: (i) synthesizing diverse, reality-grounded vulnerability-inducing coding tasks for online RL rollouts, and (ii) training a reasoning-based vulnerability reward model that provides scalable and reliable security supervision. Together, these components are unified in an online RL loop to align code LLMs to generate secure and functional code. Extensive experiments demonstrate that SecCoderX achieves state-of-the-art performance, improving Effective Safety Rate (ESR) by approximately 10% over unaligned models, whereas prior methods often degrade ESR by 14-54%. We release our code, dataset, and model checkpoints at https://github.com/AndrewWTY/SecCoderX.
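The abstract describes unifying a functionality signal and a vulnerability reward model inside one online RL loop. As a rough illustration of how such a reward might be scalarized, here is a minimal sketch; the gating rule, the 0-1 score ranges, and the `security_weight` parameter are all assumptions for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch of a combined reward for an online RL loop that
# blends a functionality signal (e.g., unit-test pass rate) with a
# security score from a vulnerability reward model.
# NOTE: the gating and weighting below are assumptions, not SecCoderX's
# exact reward design.

def combined_reward(pass_rate: float, security_score: float,
                    security_weight: float = 0.5) -> float:
    """Blend functionality and security into one scalar reward.

    pass_rate:      fraction of unit tests the generated code passes (0-1).
    security_score: vulnerability reward model's judgment (0-1, where
                    1 means no vulnerability was detected).
    """
    # Gate security on functionality so the policy cannot earn reward
    # by emitting trivially "safe" but non-functional code.
    if pass_rate == 0.0:
        return 0.0
    return (1 - security_weight) * pass_rate + security_weight * security_score
```

The gate reflects the functionality-security paradox the abstract highlights: without it, a policy could maximize reward by generating code that is secure only because it does nothing.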

Key Takeaways

  1. Existing secure code generation methods improve security by 11-16% but reduce functionality by 14-54%, making them impractical.

  2. SecCoderX repurposes vulnerability detection datasets to create 24,000 realistic coding prompts across five programming languages.

  3. A CWE-conditioned vulnerability reward model guides reinforcement learning to generate both secure and functional code simultaneously.
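The takeaways above hinge on the Effective Safety Rate (ESR) metric. One plausible reading, taken here as an assumption since the exact formula is not given on this page, is the fraction of generations that are both functionally correct and free of detected vulnerabilities:

```python
# Hedged illustration of an ESR-style metric: the share of samples that
# are BOTH functional and secure. This joint definition is an assumption;
# the paper's exact formula may differ.

def effective_safety_rate(results: list[tuple[bool, bool]]) -> float:
    """results: one (is_functional, is_secure) pair per generated sample."""
    if not results:
        return 0.0
    effective = sum(1 for functional, secure in results if functional and secure)
    return effective / len(results)
```

A joint metric like this exposes the functionality-security paradox directly: a model that secures code by breaking it scores low, whereas separate security and functionality rates could each look acceptable in isolation.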

Limitations

  • Traditional static analysis tools like CodeQL cannot effectively evaluate all types of vulnerabilities in generated code.

  • The functionality-security paradox makes it difficult to improve security without significantly degrading code functionality.

Keywords

large language models, secure code generation, online reinforcement learning, vulnerability detection, reward model, functionality-preserving, code alignment, security supervision
