nielsr (HF Staff) committed on
Commit
749cc1c
·
verified ·
1 Parent(s): 2effadf

Add image-to-video task category, paper link and sample usage


Hi! I'm Niels from the community science team at Hugging Face.

This PR improves the dataset card by:
- Adding `task_categories: image-to-video` to the metadata for better discoverability.
- Updating the paper link to point to the Hugging Face paper page.
- Adding a "Sample Usage" section with instructions from the official repository on how to use the evaluation toolkit with this benchmark.
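
As a quick local sanity check of the metadata change, one can parse the card's YAML front matter and confirm the new key is present. A minimal stdlib-only sketch (the `CARD` string abridges the updated metadata from the diff below and is not the full card; the helper name is illustrative):

```python
# Abridged copy of the updated card metadata (illustrative, not the full README).
CARD = """---
language:
- en
license: apache-2.0
size_categories:
- n<1K
task_categories:
- image-to-video
pretty_name: VBVR-Bench-Data
tags:
- video-generation
- video-reasoning
---
# VBVR: A Very Big Video Reasoning Suite
"""

def front_matter_keys(card: str) -> list[str]:
    """Return the top-level keys of the YAML front matter (text between the two --- fences)."""
    body = card.split("---")[1]
    # Top-level keys are non-indented, non-list lines containing a colon.
    return [line.split(":")[0] for line in body.splitlines()
            if line and not line.startswith(("-", " ")) and ":" in line]

keys = front_matter_keys(CARD)
print(keys)
```

This only verifies well-formedness of the keys; the Hub itself validates the metadata on push.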

Files changed (1)
  1. README.md +63 -40
README.md CHANGED
@@ -1,12 +1,15 @@
 ---
-license: apache-2.0
 language:
 - en
-tags:
-- video-generation
-pretty_name: VBVR-Bench-Data
+license: apache-2.0
 size_categories:
 - n<1K
+task_categories:
+- image-to-video
+pretty_name: VBVR-Bench-Data
+tags:
+- video-generation
+- video-reasoning
 configs:
 - config_name: VBVR-Bench-Data
   data_files:
@@ -17,19 +20,19 @@ configs:
 # VBVR: A Very Big Video Reasoning Suite
 
 <a href="https://video-reason.com" target="_blank">
-  <img alt="Code" src="https://img.shields.io/badge/Project%20-%20Homepage-4285F4" height="20" />
+  <img alt="Project Page" src="https://img.shields.io/badge/Project%20-%20Homepage-4285F4" height="20" />
 </a>
-<a href="https://github.com/orgs/Video-Reason/repositories" target="_blank">
+<a href="https://github.com/Video-Reason/VBVR-EvalKit" target="_blank">
   <img alt="Code" src="https://img.shields.io/badge/VBVR-Code-100000?style=flat-square&logo=github&logoColor=white" height="20" />
 </a>
-<a href="https://arxiv.org/abs/2602.20159" target="_blank">
-  <img alt="arXiv" src="https://img.shields.io/badge/arXiv-VBVR-red?logo=arxiv" height="20" />
+<a href="https://huggingface.co/papers/2602.20159" target="_blank">
+  <img alt="Paper" src="https://img.shields.io/badge/Paper-HF-red?logo=huggingface" height="20" />
 </a>
 <a href="https://huggingface.co/Video-Reason/VBVR-Wan2.2" target="_blank">
-  <img alt="Leaderboard" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Wan2.2-Model-ffc107?color=ffc107&logoColor=white" height="20" />
+  <img alt="Model" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Wan2.2-Model-ffc107?color=ffc107&logoColor=white" height="20" />
 </a>
 <a href="https://huggingface.co/datasets/Video-Reason/VBVR-Dataset" target="_blank">
-  <img alt="Leaderboard" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Dataset-Data-ffc107?color=ffc107&logoColor=white" height="20" />
+  <img alt="Data" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Dataset-Data-ffc107?color=ffc107&logoColor=white" height="20" />
 </a>
 <a href="https://huggingface.co/spaces/Video-Reason/VBVR-Bench-Leaderboard" target="_blank">
   <img alt="Leaderboard" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Bench-Leaderboard-ffc107?color=ffc107&logoColor=white" height="20" />
@@ -43,22 +46,35 @@ To address this gap, we introduce the Very Big Video Reasoning (VBVR) Dataset, a
 and over one million video clips—approximately three orders of magnitude larger than existing datasets. We further present VBVR-Bench,
 a verifiable evaluation framework that moves beyond model-based judging by incorporating rule-based, human-aligned scorers,
 enabling reproducible and interpretable diagnosis of video reasoning capabilities.
-Leveraging the VBVR suite, we conduct one of the first large-scale scaling studies of video reasoning and observe early signs of emergent generalization
-to unseen reasoning tasks. **Together, VBVR lays a foundation for the next stage of research in generalizable video reasoning.**
 
+For more details, please refer to the paper: [A Very Big Video Reasoning Suite](https://huggingface.co/papers/2602.20159).
+
+## Sample Usage
+
+To evaluate a model using the VBVR suite, you can use the official evaluation toolkit [VBVR-EvalKit](https://github.com/Video-Reason/VBVR-EvalKit):
+
+```bash
+# Install the toolkit
+git clone https://github.com/Video-Reason/VBVR-EvalKit.git && cd VBVR-EvalKit
+python -m venv venv && source venv/bin/activate
+pip install -e .
+
+# Set up a model (example: SVD)
+bash setup/install_model.sh --model svd --validate
+
+# Inference
+python examples/generate_videos.py --questions-dir /path/to/VBVR-Bench-Data --output-dir ./outputs --model svd
+
+# Evaluation (VBVR-Bench)
+python examples/score_videos.py --inference-dir ./outputs
+```
 
 ## Release Information
 We are pleased to release the official **VBVR-Bench** test dataset, designed for standardized and rigorous evaluation of video-based visual reasoning models.
-The test split is designed along with the evaluation toolkit provided by Video-Reason at [VBVR-Bench evaluation code](https://github.com/Video-Reason/VBVR-Bench).
+The test split is designed to be used with the evaluation toolkit provided by Video-Reason, [VBVR-EvalKit](https://github.com/Video-Reason/VBVR-EvalKit).
 
 After running evaluation, you can compare your model’s performance on the public leaderboard at [VBVR-Bench Leaderboard](https://huggingface.co/spaces/Video-Reason/VBVR-Bench-Leaderboard).
 
-In this release, we present
-[**VBVR-Wan2.2**](https://huggingface.co/Video-Reason/VBVR-Wan2.2),
-[**VBVR-Dataset**](https://huggingface.co/datasets/Video-Reason/VBVR-Dataset),
-[**VBVR-Bench-Data**](https://huggingface.co/datasets/Video-Reason/VBVR-Bench-Data) and
-[**VBVR-Bench-Leaderboard**](https://huggingface.co/spaces/Video-Reason/VBVR-Bench-Leaderboard).
-
 ## Data Structure
 The dataset is organized by domain and task generator. For example:
 
@@ -71,33 +87,40 @@ In-Domain_50/
       ground_truth.mp4
       prompt.txt
 ```
-Structure Description
-
-- In-Domain_50/Out-of-Domain_50:
-Evaluation splits indicating whether samples belong to in-domain or out-of-domain settings.
-
-- G-XXX_task-name_data-generator:
-A specific reasoning task category and its corresponding data generator.
-
-- 00000-00004:
-Individual sample instances.
-
-Each sample directory contains
-- first_frame.png: The initial frame of the video
-
-- final_frame.png: The final frame
+### Structure Description
 
-- ground_truth.mp4: The full video sequence
+- **In-Domain_50/Out-of-Domain_50**: Evaluation splits indicating whether samples belong to in-domain or out-of-domain settings.
+- **G-XXX_task-name_data-generator**: A specific reasoning task category and its corresponding data generator.
+- **00000-00004**: Individual sample instances.
 
-- prompt.txt: The textual reasoning question or instruction
+Each sample directory contains:
+- `first_frame.png`: The initial frame of the video
+- `final_frame.png`: The final frame
+- `ground_truth.mp4`: The full video sequence
+- `prompt.txt`: The textual reasoning question or instruction
 
 ## 🖊️ Citation
 
-```bib
+```bibtex
 @article{vbvr2026,
-title={A Very Big Video Reasoning Suite},
-author={Maijunxian Wang and Ruisi Wang and Juyi Lin and Ran Ji and Thaddäus Wiedemer and Qingying Gao and Dezhi Luo and Yaoyao Qian and Lianyu Huang and Zelong Hong and Jiahui Ge and Qianli Ma and Hang He and Yifan Zhou and Lingzi Guo and Lantao Mei and Jiachen Li and Hanwen Xing and Tianqi Zhao and Fengyuan Yu and Weihang Xiao and Yizheng Jiao and Jianheng Hou and Danyang Zhang and Pengcheng Xu and Boyang Zhong and Zehong Zhao and Gaoyun Fang and John Kitaoka and Yile Xu and Hua Xu and Kenton Blacutt and Tin Nguyen and Siyuan Song and Haoran Sun and Shaoyue Wen and Linyang He and Runming Wang and Yanzhi Wang and Mengyue Yang and Ziqiao Ma and Raphaël Millière and Freda Shi and Nuno Vasconcelos and Daniel Khashabi and Alan Yuille and Yilun Du and Ziming Liu and Bo Li and Dahua Lin and Ziwei Liu and Vikash Kumar and Yijiang Li and Lei Yang and Zhongang Cai and Hokin Deng},
+  title = {A Very Big Video Reasoning Suite},
+  author = {Wang, Maijunxian and Wang, Ruisi and Lin, Juyi and Ji, Ran and
+            Wiedemer, Thadd{\"{a}}us and Gao, Qingying and Luo, Dezhi and
+            Qian, Yaoyao and Huang, Lianyu and Hong, Zelong and Ge, Jiahui and
+            Ma, Qianli and He, Hang and Zhou, Yifan and Guo, Lingzi and
+            Mei, Lantao and Li, Jiachen and Xing, Hanwen and Zhao, Tianqi and
+            Yu, Fengyuan and Xiao, Weihang and Jiao, Yizheng and
+            Hou, Jianheng and Zhang, Danyang and Xu, Pengcheng and
+            Zhong, Boyang and Zhao, Zehong and Fang, Gaoyun and Kitaoka, John and
+            Xu, Yile and Xu, Hua and Blacutt, Kenton and Nguyen, Tin and
+            Song, Siyuan and Sun, Haoran and Wen, Shaoyue and He, Linyang and
+            Wang, Runming and Wang, Yanzhi and Yang, Mengyue and Ma, Ziqiao and
+            Milli{\`{e}}re, Rapha{\"{e}}l and Shi, Freda and Vasconcelos, Nuno and
+            Khashabi, Daniel and Yuille, Alan and Du, Yilun and Liu, Ziming and
+            Li, Bo and Lin, Dahua and Liu, Ziwei and Kumar, Vikash and Li, Yijiang and
+            Yang, Lei and Cai, Zhongang and Deng, Hokin},
 journal = {arXiv preprint arXiv:2602.20159},
-year = {2026}
+  year = {2026},
+  url = {https://arxiv.org/abs/2602.20159}
 }
 ```
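
For orientation, the per-sample layout described in the updated card maps naturally onto a small loader. A hedged, stdlib-only sketch (the mock directory tree, field names, and `collect_samples` helper are illustrative, not part of the official VBVR-EvalKit):

```python
import tempfile
from pathlib import Path

# The four files the card says each sample directory contains.
SAMPLE_FILES = ("first_frame.png", "final_frame.png", "ground_truth.mp4", "prompt.txt")

def collect_samples(root: Path) -> list[dict]:
    """Gather every sample directory under root that has the four expected files."""
    samples = []
    for prompt in sorted(root.rglob("prompt.txt")):
        d = prompt.parent
        if all((d / f).exists() for f in SAMPLE_FILES):
            samples.append({
                "split": d.parts[len(root.parts)],  # e.g. In-Domain_50
                "task": d.parent.name,              # e.g. G-XXX_task-name_data-generator
                "sample_id": d.name,                # e.g. 00000
                "prompt": prompt.read_text().strip(),
            })
    return samples

# Build a tiny mock tree with the layout from the card (illustrative only;
# the real benchmark ships 50 samples per split).
tmp = Path(tempfile.mkdtemp())
for split in ("In-Domain_50", "Out-of-Domain_50"):
    d = tmp / split / "G-001_task-name_data-generator" / "00000"
    d.mkdir(parents=True)
    for f in SAMPLE_FILES:
        (d / f).write_text("move the red block" if f == "prompt.txt" else "")

samples = collect_samples(tmp)
print(len(samples), samples[0]["split"], samples[0]["task"])
```

Keying each record on `(split, task, sample_id)` keeps generated videos aligned with the toolkit's expected inference-directory layout when scoring.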