---
license: apache-2.0
library_name: transformers
pipeline_tag: image-text-to-text
base_model: Qwen/Qwen2.5-VL-7B-Instruct
tags:
  - multimodal
  - reasoning
  - vision-r1
  - qwen2.5-vl
  - chain-of-thought
---
# Vision-R1-CI-7B

Vision-R1-CI-7B is a multimodal reasoning model that serves as the Cold-start Initialization (CI) checkpoint for the Vision-R1 project. It was introduced in the paper [Vision-R1: Incentivizing Reasoning Capability in Multimodal Large Language Models](https://arxiv.org/abs/2503.06749).

## Model Description

Vision-R1-CI (Cold-start Initialized) is a 7B parameter multimodal large language model (MLLM) developed to bridge the gap between standard vision-language tasks and complex reasoning. It was obtained by fine-tuning the Qwen2.5-VL-7B-Instruct base model on the Vision-R1-cold dataset—a 200K high-quality multimodal Chain-of-Thought (CoT) dataset constructed by leveraging DeepSeek-R1 and existing MLLMs through modality bridging and data filtering.

This model acts as the critical starting point for subsequent Reinforcement Learning (RL) using Group Relative Policy Optimization (GRPO) and the Progressive Thinking Suppression Training (PTST) strategy, which enables the emergence of "Aha moments" and self-reflective reasoning in multimodal contexts.
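Since the card declares `library_name: transformers` and the Qwen2.5-VL-7B-Instruct base, the checkpoint can presumably be loaded with the standard Qwen2.5-VL classes. The sketch below is a minimal, hedged usage example under that assumption; the repo id is a placeholder, and the prompt and file path are illustrative only.

```python
# Minimal usage sketch, assuming the checkpoint loads with the standard
# Qwen2.5-VL classes from `transformers`. MODEL_ID is a placeholder for
# wherever this checkpoint is actually hosted.
MODEL_ID = "Vision-R1-CI-7B"  # placeholder repo id

# A multimodal chat message in the Qwen2.5-VL format: one image plus a prompt.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Solve the problem in the image step by step."},
        ],
    }
]

def generate(image_path: str) -> str:
    """Run one round of image-grounded reasoning (downloads the 7B weights)."""
    from PIL import Image
    from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    processor = AutoProcessor.from_pretrained(MODEL_ID)

    # Render the chat template, then pack the prompt and image into model inputs.
    prompt = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = processor(
        text=[prompt], images=[Image.open(image_path)], return_tensors="pt"
    ).to(model.device)

    # Chain-of-thought answers can be long, so allow a generous token budget.
    output_ids = model.generate(**inputs, max_new_tokens=1024)
    new_tokens = output_ids[:, inputs["input_ids"].shape[1]:]
    return processor.batch_decode(new_tokens, skip_special_tokens=True)[0]
```

Call `generate("your_image.png")` to obtain the model's reasoning trace; the exact prompt wording is not prescribed by the paper and can be adapted to the task.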

## Performance

The Vision-R1 series performs strongly on math-centric multimodal benchmarks. Vision-R1-7B (the checkpoint obtained after RL training on top of this CI model) improves substantially over its base model:

| Model | MathVista | MathVerse | MM-Math | DynaMath | Avg. |
|---|---|---|---|---|---|
| Qwen2.5-VL-7B | 68.1 | 46.7 | 34.1 | 50.7 | 47.9 |
| **Vision-R1-7B (Ours)** | **73.5** | **52.4** | **40.2** | **56.3** | **53.8** |

## Citation

If you find this model useful in your research, please cite the following paper:

```bibtex
@article{huang2025visionr1,
  title={Vision-R1: Incentivizing Reasoning Capability in Multimodal Large Language Models},
  author={Huang, Wenxuan and Jia, Bohan and Zhai, Zijie and Cao, Shaosheng and Ye, Zheyu and Zhao, Fei and Hu, Yao and Lin, Shaohui},
  journal={arXiv preprint arXiv:2503.06749},
  year={2025}
}
```