C2: Scalable Rubric-Augmented Reward Modeling from Binary Preferences
Abstract
Cooperative yet Critical reward modeling (C2) improves reward model reliability through critical collaboration between a reward model and a rubric generator trained exclusively from binary preferences, outperforming reasoning reward models trained on the same data without requiring costly rubric annotations.
Rubric-augmented verification guides reward models with explicit evaluation criteria, yielding more reliable judgments than single-model verification. However, most existing methods require costly rubric annotations, limiting scalability. Moreover, we find that rubric generation is vulnerable to a failure of cooperation: low-quality rubrics actively mislead reward models rather than helping them. Inspired by the principle of cooperative communication, we propose Cooperative yet Critical reward modeling (C2), a framework that significantly improves reward model judgments by having the reward model critically collaborate with a rubric generator trained solely from binary preferences. In C2, we synthesize helpful and misleading rubric pairs by measuring how each rubric shifts the reward model toward or away from the correct preference. Using these contrastive pairs, we train a cooperative rubric generator to propose helpful rubrics, and a critical verifier to assess rubric validity before making its judgment, following only rubrics it deems helpful at inference time. C2 outperforms reasoning reward models trained on the same binary preferences, with gains of up to 6.5 points on RM-Bench and 6.0 points in length-controlled win rate on AlpacaEval 2.0. Without external rubric annotations, C2 enables an 8B reward model to match the performance achieved with rubrics from a 4× larger model. Overall, our work demonstrates that eliciting deliberate cooperation in rubric-augmented verification makes reward models more trustworthy in a scalable way.
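A minimal sketch of the contrastive rubric-pair synthesis the abstract describes, assuming a hypothetical `rm_prob_chosen(prompt, chosen, rejected, rubric=None)` helper that returns the reward model's probability of preferring the chosen response; the `margin` threshold and all names are illustrative, not the paper's exact procedure.

```python
# Sketch of contrastive rubric labeling: a rubric counts as "helpful" if it
# shifts the reward model toward the ground-truth preference and "misleading"
# if it shifts it away. rm_prob_chosen is a hypothetical helper returning
# P(chosen > rejected), optionally conditioned on a rubric; candidate rubrics
# can come from any generator.

def label_rubrics(example, candidate_rubrics, rm_prob_chosen, margin=0.1):
    """Split candidate rubrics into helpful/misleading by reward-model shift."""
    base = rm_prob_chosen(example["prompt"], example["chosen"], example["rejected"])
    helpful, misleading = [], []
    for rubric in candidate_rubrics:
        guided = rm_prob_chosen(
            example["prompt"], example["chosen"], example["rejected"], rubric=rubric
        )
        shift = guided - base  # > 0: rubric pushes toward the correct preference
        if shift >= margin:
            helpful.append(rubric)
        elif shift <= -margin:
            misleading.append(rubric)
        # rubrics with negligible shift are discarded as uninformative
    return helpful, misleading
```

The resulting (helpful, misleading) pairs would then supervise both the cooperative generator and the critical verifier.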
Community
Rubrics are a powerful harness for aligning reward models with human judgment, but they come with two problems.
- Human-annotated rubrics are costly.
- Auto-generated rubrics are often vague or misleading, and can hurt the RM more than help it.
We introduce C2, a reward modeling framework that builds robust rubric-based verification from binary preferences alone.
The generator learns which rubrics help the verifier reach the correct judgment, and the verifier learns which rubrics to trust.
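A sketch of what this critical-verification loop could look like at inference time, under the assumption (not taken from the paper or any released code) that the verifier exposes separate rubric-validity and preference-judgment calls; all function names are hypothetical.

```python
# Hypothetical C2 inference loop: the cooperative generator proposes a rubric,
# the critical verifier first judges whether the rubric is trustworthy, and
# only then uses it -- falling back to an unguided judgment otherwise.

def c2_judge(prompt, response_a, response_b, generator, verifier):
    rubric = generator.propose_rubric(prompt)          # cooperative generator
    if verifier.rubric_is_helpful(prompt, rubric):     # critical check first
        return verifier.judge(prompt, response_a, response_b, rubric=rubric)
    # rubric deemed misleading: judge without it
    return verifier.judge(prompt, response_a, response_b)
```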
The following similar papers were recommended by the Semantic Scholar API (via Librarian Bot):
- CDRRM: Contrast-Driven Rubric Generation for Reliable and Interpretable Reward Modeling (2026)
- Rationale Matters: Learning Transferable Rubrics via Proxy-Guided Critique for VLM Reward Models (2026)
- Visual Preference Optimization with Rubric Rewards (2026)
- When Rubrics Fail: Error Enumeration as Reward in Reference-Free RL Post-Training for Virtual Try-On (2026)
- SibylSense: Adaptive Rubric Learning via Memory Tuning and Adversarial Probing (2026)
- RubiCap: Rubric-Guided Reinforcement Learning for Dense Image Captioning (2026)
- Personalized RewardBench: Evaluating Reward Models with Human Aligned Personalization (2026)