RubricHub: A Comprehensive and Highly Discriminative Rubric Dataset via Automated Coarse-to-Fine Generation

Sunzhu Li, Jiale Zhao, Miteto Wei, Huimin Ren, Yang Zhou, Jingwen Yang, Shunyu Liu, Kaike Zhang, Wei Chen
arXiv ID: 2601.08430
Published: January 13, 2026

Abstract

Reinforcement Learning with Verifiable Rewards (RLVR) has driven substantial progress in reasoning-intensive domains like mathematics. However, optimizing open-ended generation remains challenging due to the lack of ground truth. While rubric-based evaluation offers a structured proxy for verification, existing methods suffer from scalability bottlenecks and coarse criteria, resulting in a supervision ceiling effect. To address this, we propose an automated Coarse-to-Fine Rubric Generation framework. By synergizing principle-guided synthesis, multi-model aggregation, and difficulty evolution, our approach produces comprehensive and highly discriminative criteria capable of capturing subtle nuances. Based on this framework, we introduce RubricHub, a large-scale (~110k) and multi-domain dataset. We validate its utility through a two-stage post-training pipeline comprising Rubric-based Rejection Sampling Fine-Tuning (RuFT) and Reinforcement Learning (RuRL). Experimental results demonstrate that RubricHub unlocks significant performance gains: our post-trained Qwen3-14B achieves state-of-the-art (SOTA) results on HealthBench (69.3), surpassing proprietary frontier models such as GPT-5. The code and data will be released soon.
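To make the post-training pipeline concrete, here is a minimal Python sketch of the RuFT selection step, assuming a rubric is a weighted list of criteria scored by an LLM judge. The rubric schema and the names judge_score and ruft_select are illustrative assumptions, not the paper's implementation; in RuRL the same rubric score would plausibly serve as the reward signal.

```python
import random

# A hypothetical rubric: each entry is a textual criterion plus a weight.
# This schema is an illustrative assumption, not RubricHub's actual format.
RUBRIC = [
    {"criterion": "Directly and completely addresses the user's question", "weight": 3},
    {"criterion": "States uncertainty and limitations where appropriate", "weight": 2},
    {"criterion": "Contains no factual errors or unsupported claims", "weight": 3},
]


def judge_score(response: str, rubric: list) -> float:
    """Return the weighted fraction of rubric criteria a response satisfies.

    In the actual pipeline each criterion would be checked by an LLM judge;
    the random verdict below is a stand-in so the sketch runs on its own.
    """
    total = sum(item["weight"] for item in rubric)
    met = sum(item["weight"] for item in rubric if random.random() < 0.5)
    return met / total  # normalized to [0, 1]


def ruft_select(prompt: str, generate, rubric: list, n: int = 8) -> str:
    """RuFT-style rejection sampling: draw n candidates, keep the rubric-best.

    `generate` is any callable mapping a prompt string to a model response;
    the winning response would then be used as a fine-tuning target.
    """
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda r: judge_score(r, rubric))


if __name__ == "__main__":
    # Toy generator standing in for the policy model.
    best = ruft_select(
        "Explain the risks of self-medicating.",
        lambda p: f"draft answer #{random.randint(0, 999)} to: {p}",
        RUBRIC,
    )
    print(best)
```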

Keywords

Reinforcement Learning with Verifiable Rewards, rubric-based evaluation, principle-guided synthesis, multi-model aggregation, difficulty evolution, RubricHub, Rubric-based Rejection Sampling Fine-Tuning, Reinforcement Learning
