wkun03 committed on
Commit c27e9dc · verified · 1 Parent(s): 257738d

Upload 3 files

Files changed (3):
  1. README.md +137 -3
  2. expert1.pth +3 -0
  3. expert2.pth +3 -0
README.md CHANGED
@@ -1,3 +1,137 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ tags:
+ - pytorch
+ ---
+
+ <a id="top"></a>
+ <div align="center">
+ <h1>🚀 ViSAGE @ CVPR-NTIRE Video Saliency Prediction Challenge 2026</h1>
+
+ <p>
+ <b>Kun Wang</b><sup>1</sup>&nbsp;
+ <b>Yupeng Hu</b><sup>1</sup>&nbsp;
+ <b>Zhiran Li</b><sup>1</sup>&nbsp;
+ <b>Hao Liu</b><sup>1</sup>&nbsp;
+ <b>Qianlong Xiang</b><sup>2,3,4</sup>&nbsp;
+ <b>Liqiang Nie</b><sup>2</sup>
+ </p>
+
+ <p>
+ <sup>1</sup>School of Software, Shandong University, Jinan, China<br>
+ <sup>2</sup>School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China<br>
+ <sup>3</sup>City University of Hong Kong<br>
+ <sup>4</sup>Shenzhen Loop Area Institute
+ </p>
+ </div>
+
+ This repository provides the official implementation, pre-trained model weights, and configuration files for **ViSAGE**, our entry to the NTIRE 2026 Challenge on Video Saliency Prediction (CVPRW 2026).
+
+ 🔗 **Paper:** [Accepted by CVPRW 2026](https://arxiv.org)
+ 🔗 **GitHub Repository:** [iLearn-Lab/CVPRW26-ViSAGE](https://github.com/iLearn-Lab/CVPRW26-ViSAGE.git)
+ 🔗 **Challenge Page:** [NTIRE 2026 VSP Challenge](https://www.codabench.org/competitions/12842/)
+
+ ---
+
+ <p align="center">
+ <video src="https://github.com/user-attachments/assets/a2dbabc0-9d8e-4f7a-8b16-c2d56af7b071" controls width="95%"></video>
+ </p>
+
+ ---
+
+ ## 📌 Model Information
+
+ ### 1. Model Name
+ **ViSAGE (Video Saliency with Adaptive Gated Experts)**
+
+ ### 2. Task Type & Applicable Tasks
+ - **Task Type:** Video Saliency Prediction (VSP) / Computer Vision
+ - **Applicable Tasks:** Robust and adaptive prediction of human visual attention (saliency maps) in dynamic video sequences.
+
+ ### 3. Project Introduction
+ Video saliency prediction requires capturing complex spatio-temporal dynamics and human visual priors. **ViSAGE** tackles this with a multi-expert ensemble framework.
+
+ > 💡 **Method Highlight:** The framework consists of a shared **InternVideo2 backbone** adapted via two-stage LoRA fine-tuning, alongside two specialized experts: one uses Temporal Modulation (for explicit spatial priors), the other Multi-Scale Fusion (for adaptive, data-driven perception). For robust performance, the **Ensemble Fusion Module** converts the expert outputs to logit space before averaging, which yields significantly more accurate predictions than simple saliency-map averaging.
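+
+ As a concrete illustration of the fusion rule, here is a minimal sketch of logit-space averaging (the function name and epsilon handling are illustrative; `ensemble.py` is the authoritative implementation):
+ ```python
+ import torch
+
+ def logit_ensemble(s1: torch.Tensor, s2: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
+     """Average two saliency maps in logit space rather than probability space."""
+     # Clamp away from {0, 1} so the logit transform stays finite.
+     s1 = s1.clamp(eps, 1 - eps)
+     s2 = s2.clamp(eps, 1 - eps)
+     logits = (torch.logit(s1) + torch.logit(s2)) / 2  # mean in logit space
+     return torch.sigmoid(logits)                      # map back to (0, 1)
+ ```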
+
+ ### 4. Training Data Source
+ - Dataset provided by the **NTIRE 2026 Video Saliency Prediction Challenge** (private test and validation sets).
+
+ ---
+
+ ## 🚀 Usage & Basic Inference
+
+ ### Step 1: Prepare the Environment
+ Clone the GitHub repository and set up the Conda environment:
+ ```bash
+ git clone https://github.com/iLearn-Lab/CVPRW26-ViSAGE.git
+ cd CVPRW26-ViSAGE
+ ```
+ ```bash
+ conda create -n visage python=3.10 -y
+ conda activate visage
+ pip install -r requirements.txt
+ ```
+
+ ### Step 2: Data & Pre-trained Weights Preparation
+ 1. **Challenge Data:** Use the provided script to extract frames from the source videos. The extracted frames are saved to `derived_fullfps` automatically.
+ *(⚠️ **Important:** Do not rename the output directory `derived_fullfps` unless you also update the path configs in all inference scripts.)*
+ ```bash
+ python video_to_frames.py
+ ```
+ 2. **ViSAGE Checkpoints:** Download our model checkpoints from [Hugging Face](https://huggingface.co/iLearn-Lab/CVPRW26-ViSAGE) (see the download sketch after this list).
+ 3. **InternVideo2 Backbone:** Download the pre-trained `InternVideo2-Stage2_6B-224p-f4` model from [Hugging Face](https://huggingface.co/OpenGVLab/InternVideo2-Stage2_6B-224p-f4) and clone the `InternVideo` repo:
+ ```bash
+ git clone https://github.com/OpenGVLab/InternVideo.git
+ ```
+ *(Update the pre-trained weight paths in `Expert1/inference.py` and `Expert2/inference.py` to match your local directories.)*
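+
+ The expert checkpoints can also be fetched programmatically with the standard `huggingface_hub` client; `hf_hub_download` returns the cached local path of each file:
+ ```python
+ import torch
+ from huggingface_hub import hf_hub_download
+
+ # Download both expert checkpoints from this model repository.
+ expert1_path = hf_hub_download(repo_id="iLearn-Lab/CVPRW26-ViSAGE", filename="expert1.pth")
+ expert2_path = hf_hub_download(repo_id="iLearn-Lab/CVPRW26-ViSAGE", filename="expert2.pth")
+
+ # Inspect a checkpoint on CPU; the exact key layout depends on how it was saved.
+ state = torch.load(expert1_path, map_location="cpu")
+ print(type(state))
+ ```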
+
+ ### Step 3: Run Inference & Ensemble
+
+ **1. Inference:** Generate predictions with both experts.
+ ```bash
+ python Expert1/inference.py
+ python Expert2/inference.py
+ ```
+ **2. Ensemble:** Merge the inference results of Expert 1 and Expert 2 in logit space.
+ ```bash
+ python ensemble.py
+ ```
+ **3. Format Check & Video Generation:** Validate the submission format and render the predicted saliency maps onto the source video frames.
+ ```bash
+ python check.py
+ python makevideos.py
+ ```
+
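+ The rendering step presumably blends each saliency map over its source frame as a heatmap. A minimal OpenCV sketch of that idea (the function name and colormap choice are illustrative, not necessarily what `makevideos.py` does):
+ ```python
+ import cv2
+ import numpy as np
+
+ def overlay_saliency(frame_path: str, saliency_path: str, alpha: float = 0.5) -> np.ndarray:
+     """Blend a predicted saliency map over its source frame as a heatmap."""
+     frame = cv2.imread(frame_path)                         # BGR source frame
+     sal = cv2.imread(saliency_path, cv2.IMREAD_GRAYSCALE)  # single-channel map
+     sal = cv2.resize(sal, (frame.shape[1], frame.shape[0]))
+     heat = cv2.applyColorMap(sal, cv2.COLORMAP_JET)        # grayscale -> color heatmap
+     return cv2.addWeighted(frame, 1.0 - alpha, heat, alpha, 0.0)
+ ```
+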
+ ### Step 4: Training (Optional)
+ If you wish to train the model from scratch, run the two-stage LoRA fine-tuning pipeline (a conceptual LoRA sketch follows the commands):
+ ```bash
+ python trainnew.py   # Stage 1
+ python trainnew2.py  # Stage 2
+ ```
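+
+ For readers unfamiliar with LoRA, the idea is to freeze the pre-trained backbone weights and train only a low-rank additive update. A minimal sketch of one adapted layer (the rank, scaling, and layer choice are illustrative, not the settings used in `trainnew.py`):
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class LoRALinear(nn.Module):
+     """Frozen base linear layer plus a trainable low-rank update: W x + (alpha/r) * B(A x)."""
+
+     def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
+         super().__init__()
+         self.base = base
+         for p in self.base.parameters():  # freeze the pre-trained weights
+             p.requires_grad = False
+         self.lora_a = nn.Linear(base.in_features, r, bias=False)
+         self.lora_b = nn.Linear(r, base.out_features, bias=False)
+         nn.init.zeros_(self.lora_b.weight)  # the low-rank update starts at zero
+         self.scale = alpha / r
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         return self.base(x) + self.scale * self.lora_b(self.lora_a(x))
+ ```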
+
+ ---
+
+ ## ⚠️ Limitations & Notes
+
+ **Disclaimer:** This framework and its pre-trained weights are intended for **academic research purposes only**.
+ - The model relies heavily on the InternVideo2 backbone; out-of-memory (OOM) errors may occur on GPUs with less than 24 GB of VRAM.
+ - Inference speed and performance may vary with the hardware used.
+
+ ---
+
+ ## 🤝 Acknowledgements & Contact
+
+ - **Contact:** If you have any questions or run into issues, feel free to open an issue or contact the author, Kun Wang, at `khylon.kun.wang@gmail.com`.
+
+ ---
+
+ ## 📝⭐️ Citation
+
+ If you find this project useful for your research, please consider citing:
+
+ ```bibtex
+ @inproceedings{ntire26visage,
+   title={{ViSAGE @ NTIRE 2026 Challenge on Video Saliency Prediction: Methods and Results}},
+   author={Wang, Kun and Hu, Yupeng and Li, Zhiran and Liu, Hao and Xiang, Qianlong and Nie, Liqiang},
+   booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
+   year={2026}
+ }
+ ```
expert1.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ecce3f629cc17a194cc47bf63fd67941e447b01ee5a5bdc9906edeb30ca7c4a1
+ size 766381375
expert2.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5e04506d205bd323ba051b3082fc0fefc9afe22eadc0a424df49c835a5fdecbe
+ size 812542151