| --- |
| license: mit |
| datasets: |
| - mjjung/ActivityNet-VTune |
| language: |
| - en |
| --- |
| |
| # TimeChat-7B-ActivityNet-VTune Model |
|
|
| ## Model details |
|
|
We trained [TimeChat](https://arxiv.org/abs/2312.02051) with VTune, an instruction-tuning method developed specifically to account for consistency.
|
|
For tuning, we used 10K training videos from ActivityNet-Captions together with 205K automatically generated annotations.
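As a minimal sketch of getting started, the snippet below downloads the checkpoint files and the training annotations from the Hugging Face Hub. It assumes the checkpoint is hosted under the repository id `mjjung/TimeChat-7B-ActivityNet-VTune` (inferred from this card's title, so please verify on the Hub); inference itself follows the TimeChat codebase.

```python
from huggingface_hub import snapshot_download
from datasets import load_dataset

# Repository id inferred from this card's title; verify it on the Hub.
ckpt_dir = snapshot_download(repo_id="mjjung/TimeChat-7B-ActivityNet-VTune")
print("checkpoint files downloaded to:", ckpt_dir)

# The 205K VTune annotations listed in this card's metadata.
annotations = load_dataset("mjjung/ActivityNet-VTune")
print(annotations)
```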
|
|
| ## Evaluation |
We evaluated the model on ActivityNet-CON and ActivityNet-Captions.
|
|
- ActivityNet-CON

| Metric | Value |
| |-----------------|-------------| |
| | Ground | 37.4 | |
| | R-Ground | 28.3 (75.6) | |
| | S-Ground | 10.6 (28.3) | |
| | H-Verify | 19.6 (52.3) | |
| | C-Verify | 19.5 (51.5) | |
|
|
- ActivityNet-Captions

| Metric | Value |
| |-----------------|---------| |
| | R@1 IoU=0.3 | 57.74 | |
| | R@1 IoU=0.5 | 41.05 | |
| | R@1 IoU=0.7 | 23.72 | |
| | mIoU | 40.89 | |
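
For reference, R@1 at a given IoU threshold counts a query as correct when the top-1 predicted moment overlaps the ground-truth moment with at least that temporal IoU, and mIoU averages the overlap across queries. Below is a minimal sketch of these standard metrics (not the authors' evaluation script; segments are assumed to be `(start, end)` pairs in seconds):

```python
def temporal_iou(pred, gt):
    """IoU between two (start, end) segments in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def recall_at_1(preds, gts, threshold):
    """Fraction of queries whose top-1 prediction reaches the IoU threshold."""
    hits = sum(temporal_iou(p, g) >= threshold for p, g in zip(preds, gts))
    return hits / len(gts)

# Toy example with two queries; thresholds match the table above.
preds = [(10.0, 25.0), (3.0, 9.0)]
gts = [(12.0, 24.0), (30.0, 40.0)]
for t in (0.3, 0.5, 0.7):
    print(f"R@1 IoU={t}: {recall_at_1(preds, gts, t):.2f}")
print("mIoU:", sum(temporal_iou(p, g) for p, g in zip(preds, gts)) / len(gts))
```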
|
|
For more information, see the [Paper](https://arxiv.org/abs/2411.12951) and [Code](https://github.com/minjoong507/consistency-of-video-llm).
|
|
| ## Citation |
If you find our research and code useful, please consider starring our repository and citing our paper:
|
|
| ``` |
| @article{jung2024consistency, |
| title={On the Consistency of Video Large Language Models in Temporal Comprehension}, |
| author={Jung, Minjoon and Xiao, Junbin and Zhang, Byoung-Tak and Yao, Angela}, |
| journal={arXiv preprint arXiv:2411.12951}, |
| year={2024} |
| } |
| ``` |