---
license: mit
task_categories:
  - video-text-to-text
---

# RMOT26

RMOT26 is a large-scale benchmark for Query-Driven Multi-Object Tracking, introduced in the paper *QTrack: Query-Driven Reasoning for Multi-modal MOT*.

## Description

Multi-object tracking (MOT) has traditionally focused on estimating trajectories of all objects in a video. RMOT26 introduces a query-driven tracking paradigm that formulates tracking as a spatiotemporal reasoning problem conditioned on natural language queries.

Given a reference frame, a video sequence, and a textual query, the goal is to localize and track only the target(s) specified in the query while maintaining temporal coherence and identity consistency. RMOT26 features grounded queries and sequence-level splits to prevent identity leakage and enable robust evaluation of generalization.
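The task formulation above can be sketched as a small data structure. This is a hypothetical illustration of what a query-conditioned tracking sample might look like; the class names, field names, and annotation layout are assumptions for exposition and do not reflect the actual RMOT26 file format.

```python
from dataclasses import dataclass, field

@dataclass
class TrackedBox:
    """One grounded detection: a box for a queried target in one frame."""
    frame_index: int
    track_id: int          # identity must stay consistent across frames
    bbox_xyxy: tuple       # (x1, y1, x2, y2) in pixels

@dataclass
class QuerySample:
    """A query-driven tracking sample: video + language query + grounded tracks.

    Hypothetical schema for illustration only, not the RMOT26 format.
    """
    video_id: str
    query: str             # natural-language description of the target(s)
    reference_frame: int   # frame in which the query is grounded
    annotations: list = field(default_factory=list)

    def targets_in_frame(self, frame_index):
        """Return the queried target boxes visible in a given frame."""
        return [b for b in self.annotations if b.frame_index == frame_index]

# A query grounds a single target (track_id 7) across two frames;
# only that target is tracked, not every object in the scene.
sample = QuerySample(
    video_id="seq_0001",
    query="the red car turning left",
    reference_frame=0,
    annotations=[
        TrackedBox(0, 7, (100, 50, 180, 120)),
        TrackedBox(1, 7, (110, 52, 190, 122)),
    ],
)
```

Identity consistency here simply means the same `track_id` is reused for the same target in every frame where it appears.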

## Citation

```bibtex
@article{ashraf2026qtrack,
  title={QTrack: Query-Driven Reasoning for Multi-modal MOT},
  author={Ashraf, Tajamul and Tariq, Tavaheed and Yadav, Sonia and Ul Riyaz, Abrar and Tak, Wasif and Abdar, Moloud and Bashir, Janibul},
  journal={arXiv preprint arXiv:2603.13759},
  year={2026}
}
```