Improve dataset card for RMOT26: add metadata, paper, project, and code links

#2
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +30 -3
README.md CHANGED
@@ -1,3 +1,30 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ task_categories:
+ - video-text-to-text
+ ---
+
+ # RMOT26
+
+ RMOT26 is a large-scale benchmark for **Query-Driven Multi-Object Tracking**, introduced in the paper [QTrack: Query-Driven Reasoning for Multi-modal MOT](https://huggingface.co/papers/2603.13759).
+
+ - **Project Page:** [https://gaash-lab.github.io/QTrack/](https://gaash-lab.github.io/QTrack/)
+ - **Repository:** [https://github.com/gaash-lab/QTrack](https://github.com/gaash-lab/QTrack)
+ - **Paper:** [https://arxiv.org/abs/2603.13759](https://arxiv.org/abs/2603.13759)
+
+ ## Description
+
+ Multi-object tracking (MOT) has traditionally focused on estimating the trajectories of all objects in a video. RMOT26 introduces a query-driven tracking paradigm that formulates tracking as a spatiotemporal reasoning problem conditioned on natural-language queries.
+
+ Given a reference frame, a video sequence, and a textual query, the goal is to localize and track only the target(s) specified by the query while maintaining temporal coherence and identity consistency. RMOT26 features grounded queries and sequence-level splits to prevent identity leakage and enable robust evaluation of generalization.
+
+ ## Citation
+
+ ```bibtex
+ @article{ashraf2026qtrack,
+   title={QTrack: Query-Driven Reasoning for Multi-modal MOT},
+   author={Ashraf, Tajamul and Tariq, Tavaheed and Yadav, Sonia and Ul Riyaz, Abrar and Tak, Wasif and Abdar, Moloud and Bashir, Janibul},
+   journal={arXiv preprint arXiv:2603.13759},
+   year={2026}
+ }
+ ```
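
To make the task formulation in the card concrete (reference frame + video sequence + textual query → tracks for the queried target only), here is a minimal sketch of what one sample could look like in Python. All field names, the bounding-box convention, and the example values are hypothetical illustrations, not the actual RMOT26 schema.

```python
from dataclasses import dataclass, field

# Hypothetical sample layout for query-driven MOT; the real RMOT26
# files may use different field names and annotation formats.
@dataclass
class QueryMOTSample:
    video_id: str                       # sequence identifier
    query: str                          # natural-language query grounding the target(s)
    reference_frame: int                # index of the reference frame within the sequence
    frame_paths: list = field(default_factory=list)  # ordered frame image paths
    # per-frame annotations for queried targets only:
    # {frame_index: [(track_id, x, y, w, h), ...]}
    tracks: dict = field(default_factory=dict)

sample = QueryMOTSample(
    video_id="seq_0001",
    query="the red car turning left",
    reference_frame=0,
    frame_paths=[f"seq_0001/{i:06d}.jpg" for i in range(3)],
    tracks={0: [(1, 100, 50, 40, 80)], 1: [(1, 110, 52, 40, 80)]},
)
print(len(sample.frame_paths))  # 3
```

Note that `tracks` holds annotations only for the target(s) named in `query`, which is what distinguishes the query-driven setting from classic all-object MOT.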