
Update metadata and improve model card

#1 by nielsr (HF Staff), opened

Files changed (1): README.md (+32 -13)
README.md CHANGED
@@ -1,26 +1,27 @@
 ---
 base_model:
 - Wan-AI/Wan2.2-I2V-A14B-Diffusers
+library_name: diffusers
 license: apache-2.0
-library_name: transformers
+pipeline_tag: image-to-video
 ---
 
 # VBVR: A Very Big Video Reasoning Suite
 
 <a href="https://video-reason.com" target="_blank">
-  <img alt="Code" src="https://img.shields.io/badge/Project%20-%20Homepage-4285F4" height="20" />
+  <img alt="Project Page" src="https://img.shields.io/badge/Project%20-%20Homepage-4285F4" height="20" />
 </a>
-<a href="https://github.com/orgs/Video-Reason/repositories" target="_blank">
+<a href="https://github.com/Video-Reason/VBVR-EvalKit" target="_blank">
   <img alt="Code" src="https://img.shields.io/badge/VBVR-Code-100000?style=flat-square&logo=github&logoColor=white" height="20" />
 </a>
-<a href="https://arxiv.org/abs/2602.20159" target="_blank">
+<a href="https://huggingface.co/papers/2602.20159" target="_blank">
   <img alt="arXiv" src="https://img.shields.io/badge/arXiv-VBVR-red?logo=arxiv" height="20" />
 </a>
 <a href="https://huggingface.co/datasets/Video-Reason/VBVR-Dataset" target="_blank">
-  <img alt="Leaderboard" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Dataset-Data-ffc107?color=ffc107&logoColor=white" height="20" />
+  <img alt="Dataset" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Dataset-Data-ffc107?color=ffc107&logoColor=white" height="20" />
 </a>
 <a href="https://huggingface.co/datasets/Video-Reason/VBVR-Bench-Data" target="_blank">
-  <img alt="Leaderboard" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Bench-Data-ffc107?color=ffc107&logoColor=white" height="20" />
+  <img alt="Bench Data" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Bench-Data-ffc107?color=ffc107&logoColor=white" height="20" />
 </a>
 <a href="https://huggingface.co/spaces/Video-Reason/VBVR-Bench-Leaderboard" target="_blank">
   <img alt="Leaderboard" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Bench-Leaderboard-ffc107?color=ffc107&logoColor=white" height="20" />
@@ -31,13 +32,16 @@ library_name: transformers
 Video reasoning grounds intelligence in spatiotemporally consistent visual environments that go beyond what text can naturally capture,
 enabling intuitive reasoning over motion, interaction, and causality. Rapid progress in video models has focused primarily on visual quality.
 Systematically studying video reasoning and its scaling behavior suffers from a lack of video reasoning (training) data.
+
 To address this gap, we introduce the Very Big Video Reasoning (VBVR) Dataset, an unprecedentedly large-scale resource spanning 200 curated reasoning tasks
 and over one million video clips—approximately three orders of magnitude larger than existing datasets. We further present VBVR-Bench,
 a verifiable evaluation framework that moves beyond model-based judging by incorporating rule-based, human-aligned scorers,
 enabling reproducible and interpretable diagnosis of video reasoning capabilities.
+
 Leveraging the VBVR suite, we conduct one of the first large-scale scaling studies of video reasoning and observe early signs of emergent generalization
 to unseen reasoning tasks. **Together, VBVR lays a foundation for the next stage of research in generalizable video reasoning.**
 
+The model was presented in the paper [A Very Big Video Reasoning Suite](https://huggingface.co/papers/2602.20159).
 
 <table>
 <tr>
@@ -135,7 +139,7 @@ to unseen reasoning tasks. **Together, VBVR lays a foundation for the next stage
 </table>
 
 ## Release Information
-VBVR-Wan2.2 is trained from Wan2.2-I2V-A14B without architectural modifications, as the goal of VBVR-Wan2.2 is to *investigate data scaling behavior* and provide a *strong baseline model* for the video reasoning research community. Leveraging the VBVR-Dataset, which to our knowledge constitutes one of the largest video reasoning datasets to date, VBVR-Wan2.2 achieved highest score on VBVR-Bench.
+VBVR-Wan2.2 is trained from Wan2.2-I2V-A14B without architectural modifications, as the goal of VBVR-Wan2.2 is to *investigate data scaling behavior* and provide a *strong baseline model* for the video reasoning research community. Leveraging the VBVR-Dataset, which constitutes one of the largest video reasoning datasets to date, VBVR-Wan2.2 achieved the highest score on VBVR-Bench.
 
 In this release, we present
 [**VBVR-Wan2.2**](https://huggingface.co/Video-Reason/VBVR-Wan2.2),
@@ -168,11 +172,26 @@ python example.py \
 
 ## 🖊️ Citation
 
-```bib
+```bibtex
 @article{vbvr2026,
-  title={A Very Big Video Reasoning Suite},
-  author={Maijunxian Wang and Ruisi Wang and Juyi Lin and Ran Ji and Thaddäus Wiedemer and Qingying Gao and Dezhi Luo and Yaoyao Qian and Lianyu Huang and Zelong Hong and Jiahui Ge and Qianli Ma and Hang He and Yifan Zhou and Lingzi Guo and Lantao Mei and Jiachen Li and Hanwen Xing and Tianqi Zhao and Fengyuan Yu and Weihang Xiao and Yizheng Jiao and Jianheng Hou and Danyang Zhang and Pengcheng Xu and Boyang Zhong and Zehong Zhao and Gaoyun Fang and John Kitaoka and Yile Xu and Hua Xu and Kenton Blacutt and Tin Nguyen and Siyuan Song and Haoran Sun and Shaoyue Wen and Linyang He and Runming Wang and Yanzhi Wang and Mengyue Yang and Ziqiao Ma and Raphaël Millière and Freda Shi and Nuno Vasconcelos and Daniel Khashabi and Alan Yuille and Yilun Du and Ziming Liu and Bo Li and Dahua Lin and Ziwei Liu and Vikash Kumar and Yijiang Li and Lei Yang and Zhongang Cai and Hokin Deng},
+  title   = {A Very Big Video Reasoning Suite},
+  author  = {Wang, Maijunxian and Wang, Ruisi and Lin, Juyi and Ji, Ran and
+             Wiedemer, Thadd{\"a}us and Gao, Qingying and Luo, Dezhi and
+             Qian, Yaoyao and Huang, Lianyu and Hong, Zelong and Ge, Jiahui and
+             Ma, Qianli and He, Hang and Zhou, Yifan and Guo, Lingzi and
+             Mei, Lantao and Li, Jiachen and Xing, Hanwen and Zhao, Tianqi and
+             Yu, Fengyuan and Xiao, Weihang and Jiao, Yizheng and
+             Hou, Jianheng and Zhang, Danyang and Xu, Pengcheng and
+             Zhong, Boyang and Zhao, Zehong and Fang, Gaoyun and Kitaoka, John and
+             Xu, Yile and Xu, Hua and Blacutt, Kenton and Nguyen, Tin and
+             Song, Siyuan and Sun, Haoran and Wen, Shaoyue and He, Linyang and
+             Wang, Runming and Wang, Yanzhi and Yang, Mengyue and Ma, Ziqiao and
+             Milli{\`e}re, Rapha{\"e}l and Shi, Freda and Vasconcelos, Nuno and
+             Khashabi, Daniel and Yuille, Alan and Du, Yilun and Liu, Ziming and
+             Li, Bo and Lin, Dahua and Liu, Ziwei and Kumar, Vikash and Li, Yijiang and
+             Yang, Lei and Cai, Zhongang and Deng, Hokin},
   journal = {arXiv preprint arXiv:2602.20159},
-  year = {2026}
+  year    = {2026},
+  url     = {https://arxiv.org/abs/2602.20159}
 }
 ```
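Reviewer note: the metadata this PR adds can be sanity-checked programmatically before merging. The sketch below is a minimal, standard-library-only check (the `front_matter` helper is hypothetical, written for this note, not part of the repo); the `card` string is the post-merge YAML block from this diff. For loading the model itself, the new `library_name: diffusers` / `pipeline_tag: image-to-video` front matter suggests a diffusers image-to-video pipeline, as with the base model.

```python
# Hypothetical sketch: confirm the post-merge model-card front matter
# carries the keys this PR adds (library_name, pipeline_tag).
card = """\
---
base_model:
- Wan-AI/Wan2.2-I2V-A14B-Diffusers
library_name: diffusers
license: apache-2.0
pipeline_tag: image-to-video
---
"""

def front_matter(text):
    """Parse simple 'key: value' pairs between the --- fences.

    Deliberately minimal: skips list items ('- ...') and keys whose
    value continues on following lines (e.g. 'base_model:').
    """
    body = text.split("---")[1]
    meta = {}
    for line in body.strip().splitlines():
        if ":" in line and not line.startswith("-"):
            key, _, value = line.partition(":")
            if value.strip():
                meta[key.strip()] = value.strip()
    return meta

meta = front_matter(card)
print(meta["library_name"])  # diffusers
print(meta["pipeline_tag"])  # image-to-video
```

A fuller check would use a real YAML parser (or `huggingface_hub`'s model-card utilities), but this suffices to verify the two keys at issue in this PR.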