Update README.md

README.md CHANGED
````diff
@@ -10,7 +10,7 @@ library_name: transformers
 
 # Robometer 4B
 
-**Paper:** [arXiv
+**Paper:** [arXiv](https://arxiv.org/abs/2603.02115)
 
 **Robometer** is a general-purpose vision-language reward model for robotics. It is trained on [RBM-1M](https://huggingface.co/datasets/) with **Qwen3-VL-4B** to predict **per-frame progress**, **per-frame success**, and **trajectory preferences** from rollout videos. The model combines (1) frame-level progress supervision on expert data and (2) trajectory-comparison preference supervision, so it can learn from both successful and failed rollouts and generalize across diverse robot embodiments and tasks.
 
@@ -56,11 +56,10 @@ uv run python scripts/example_inference.py \
 If you use this model, please cite:
 
 ```bibtex
-@
-title={Robometer: Scaling General-Purpose Robotic Reward Models via Trajectory Comparisons},
-author={Anthony Liang
+@article{liang2026robometer,
+  title = {Robometer: Scaling General-Purpose Robotic Reward Models via Trajectory Comparisons},
+  author={Anthony Liang and Yigit Korkmaz and Jiahui Zhang and Minyoung Hwang and Abrar Anwar and Sidhant Kaushik and Aditya Shah and Alex S. Huang and Luke Zettlemoyer and Dieter Fox and Yu Xiang and Anqi Li and Andreea Bobu and Abhishek Gupta and Stephen Tu and Erdem Biyik and Jesse Zhang},
   year={2026},
-
-note={arXiv coming soon}
+  journal = {arXiv preprint arXiv:2603.02115}
 }
 ```
````
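The README's mention of trajectory-comparison preference supervision can be illustrated with a Bradley-Terry-style ranking loss over trajectory-level scores. This is a generic sketch, not the authors' training code: the score aggregation (final-frame progress) and all function names here are assumptions for illustration only.

```python
import math

def trajectory_score(progress):
    """Aggregate per-frame progress predictions into one trajectory score.
    Illustrative choice: use the final frame's predicted progress."""
    return progress[-1]

def preference_loss(progress_preferred, progress_rejected):
    """Bradley-Terry-style loss for 'trajectory A preferred over B':
    -log sigmoid(score_A - score_B), small when A scores higher."""
    delta = trajectory_score(progress_preferred) - trajectory_score(progress_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-delta)))

# A successful rollout (progress rises toward 1) vs. a failed one (stalls low).
success = [0.1, 0.4, 0.8, 1.0]
failure = [0.1, 0.2, 0.2, 0.2]

low = preference_loss(success, failure)   # correct ranking -> small loss
high = preference_loss(failure, success)  # reversed ranking -> larger loss
print(low < high)
```

Supervision of this form only needs a relative ranking between two rollouts, which is why the model can learn from failed trajectories as well as successful ones.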