KAGE-Bench: Fast Known-Axis Visual Generalization Evaluation for Reinforcement Learning

Egor Cherepanov, Daniil Zelezetsky, Alexey K. Kovalev, Aleksandr I. Panov
arXiv ID
2601.14232
Published
January 20, 2026

Abstract

Pixel-based reinforcement learning agents often fail under purely visual distribution shift even when latent dynamics and rewards are unchanged, but existing benchmarks entangle multiple sources of shift and hinder systematic analysis. We introduce KAGE-Env, a JAX-native 2D platformer that factorizes the observation process into independently controllable visual axes while keeping the underlying control problem fixed. By construction, varying a visual axis affects performance only through the induced state-conditional action distribution of a pixel policy, providing a clean abstraction for visual generalization. Building on this environment, we define KAGE-Bench, a benchmark of six known-axis suites comprising 34 train-evaluation configuration pairs that isolate individual visual shifts. Using a standard PPO-CNN baseline, we observe strong axis-dependent failures: background and photometric shifts often collapse success, while agent-appearance shifts are comparatively benign. Several shifts preserve forward motion while breaking task completion, showing that return alone can obscure generalization failures. Finally, the fully vectorized JAX implementation reaches up to 33M environment steps per second on a single GPU, enabling fast and reproducible sweeps over visual factors. Code: https://avanturist322.github.io/KAGEBench/.
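
The abstract leans on two design points that a short sketch can make concrete: the observation process is factorized so that a visual axis changes pixels without touching state or reward, and the environment is vectorized with JAX so that thousands of instances step in one compiled call. The code below is a minimal illustration of that pattern in plain JAX; KAGE-Env's actual API is not given on this page, so reset, step, render, and the visual-config fields are hypothetical stand-ins, not the benchmark's interface.

# Minimal sketch: factorized observations plus vmap/jit vectorization.
# `reset`, `step`, `render`, and the config fields are hypothetical.
import jax
import jax.numpy as jnp


def reset(key):
    # Hypothetical initial state of one environment instance.
    return {"pos": jax.random.uniform(key, (2,), minval=0.0, maxval=8.0)}


def step(state, action):
    # Hypothetical fixed control problem: dynamics and reward never
    # depend on the visual configuration.
    pos = jnp.clip(state["pos"] + action, 0.0, 8.0)
    reward = -jnp.sum((pos - 4.0) ** 2)
    return {"pos": pos}, reward


def render(state, visual_cfg):
    # The visual axis (background/agent color here) changes pixels only,
    # so it can affect the agent solely through its pixel policy.
    img = jnp.full((8, 8, 3), visual_cfg["bg"])
    ij = jnp.clip(state["pos"].astype(jnp.int32), 0, 7)
    return img.at[ij[0], ij[1]].set(visual_cfg["agent"])


num_envs = 4096
keys = jax.random.split(jax.random.PRNGKey(0), num_envs)
visual_cfg = {"bg": jnp.float32(0.1), "agent": jnp.float32(1.0)}

# vmap lifts the single-environment functions to the batch dimension;
# jit fuses the batched step into one compiled GPU program, the pattern
# behind throughputs in the tens of millions of steps per second.
batched_reset = jax.jit(jax.vmap(reset))
batched_step = jax.jit(jax.vmap(step))
batched_render = jax.jit(jax.vmap(render, in_axes=(0, None)))

states = batched_reset(keys)
actions = jax.random.normal(jax.random.PRNGKey(1), (num_envs, 2))
states, rewards = batched_step(states, actions)
obs = batched_render(states, visual_cfg)
print(obs.shape, rewards.shape)  # (4096, 8, 8, 3) (4096,)

Under this factorization, sweeping a visual axis amounts to re-running the render and the trained pixel policy with a different visual_cfg while reusing the same dynamics, which is what lets shifts along one axis be isolated cleanly.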

Keywords

pixel-based reinforcement learning, visual distribution shift, latent dynamics, reward function, JAX-native, 2D platformer, visual axes, state-conditional action distribution, PPO-CNN, environment steps per second
