
UBnormal dataset | COSKAD | Original Repository


Skeletal version proposed in

Contracting Skeletal Kinematics for Human-Related Video Anomaly Detection
Alessandro Flaborea*, Guido D'Amely*, Stefano D'Arrigo*, Marco Aurelio Sterpa, Alessio Sampieri, Fabio Galasso

We propose HR-UBnormal as an extension of the original UBnormal dataset [1] with kinematic motion representations and a selected set of anomalies that relate only to human behaviors. First, AlphaPose [2] was used to extract the poses, and PoseFlow [3] was used to track the skeletons throughout each video. Then, we filtered out the non-human-related anomalies: we removed the sub-sequences in which the only anomalous object was not a person (e.g., a car) or in which the anomaly could not be detected from body poses alone (e.g., fire in the scene). As a result, the validation set is left unaltered, while 2.32% of the test set frames are eliminated.

Notes on the file name format

The UBnormal dataset annotated with the skeletal representation and its Human-Related version (HR) are released with the following directory structure:

UBnormal
|
|__________ hr_bool_masks
|           |
|           |__________ testing
|           |           |
|           |           |__________ test_frame_mask
|           |                       |_______________{scene_id}_{clip_id}.npy
|           |                       |_______________...
|           |                       |_______________{scene_id}_{clip_id}.npy
|           |
|           |__________ validating
|                       |
|                       |__________ test_frame_mask
|                                   |_______________{scene_id}_{clip_id}.npy
|                                   |_______________...
|                                   |_______________{scene_id}_{clip_id}.npy
|
|__________ training
|           |
|           |__________ trajectories
|                       |
|                       |_________{scene_id}_{clip_id}
|                                 |
|                                 |_________00001.csv
|                                 |_________...
|                                 |_________0000{n}.csv
|
|__________ testing
|           |
|           |__________ trajectories
|           |           |
|           |           |_________{scene_id}_{clip_id}
|           |                     |
|           |                     |_________00001.csv
|           |                     |_________...
|           |                     |_________0000{n}.csv
|           |
|           |__________ test_frame_mask
|                       |
|                       |_______________{scene_id}_{clip_id}.npy
|                       |_______________...
|                       |_______________{scene_id}_{clip_id}.npy
|
|__________ validating
            |
            |__________ trajectories
            |           |
            |           |_________{scene_id}_{clip_id}
            |                     |
            |                     |_________00001.csv
            |                     |_________...
            |                     |_________0000{n}.csv
            |
            |__________ test_frame_mask
                        |
                        |_______________{scene_id}_{clip_id}.npy
                        |_______________...
                        |_______________{scene_id}_{clip_id}.npy
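
For orientation, here is a minimal sketch of how a single clip can be read with numpy and pandas. The frame-level masks hold one entry per frame (1 = anomalous, 0 = normal); the exact column layout of the trajectory CSVs is not documented here, so the files are loaded as-is, and the scene/clip ids are placeholders (see the naming notes below):

```python
import glob
import os

import numpy as np
import pandas as pd

DATA_ROOT = "UBnormal"             # path to the extracted dataset
SCENE_ID, CLIP_ID = "101", "0101"  # placeholder ids, see the naming notes below

# Frame-level ground truth for a test clip: one entry per frame,
# 1 for anomalous frames and 0 for normal ones.
mask_path = os.path.join(DATA_ROOT, "testing", "test_frame_mask",
                         f"{SCENE_ID}_{CLIP_ID}.npy")
frame_mask = np.load(mask_path)
print(f"{frame_mask.size} frames, {int(frame_mask.sum())} annotated as anomalous")

# Per-skeleton trajectory CSVs for the same clip (one file per tracked person).
# header=None assumes the CSVs carry no header row; adjust if needed.
traj_dir = os.path.join(DATA_ROOT, "testing", "trajectories", f"{SCENE_ID}_{CLIP_ID}")
trajectories = {
    os.path.basename(path): pd.read_csv(path, header=None)
    for path in sorted(glob.glob(os.path.join(traj_dir, "*.csv")))
}
print(f"{len(trajectories)} tracked skeletons in clip {SCENE_ID}_{CLIP_ID}")
```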

In hr_bool_masks, the frames that were anomalous in the original version but whose anomaly did not involve any human being are toggled to 'normal', i.e., from 1 to 0.
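
Since frames are only ever toggled from 1 to 0, an HR mask is element-wise less than or equal to its original counterpart. A quick consistency check along these lines (the clip id is a placeholder):

```python
import os

import numpy as np

DATA_ROOT = "UBnormal"
CLIP = "101_0101"  # placeholder {scene_id}_{clip_id}

orig = np.load(os.path.join(DATA_ROOT, "testing", "test_frame_mask", f"{CLIP}.npy"))
hr = np.load(os.path.join(DATA_ROOT, "hr_bool_masks", "testing", "test_frame_mask", f"{CLIP}.npy"))

assert orig.shape == hr.shape
assert np.all(hr <= orig), "HR masks only remove anomalies, never add them"
toggled = int((orig == 1).sum() - (hr == 1).sum())
print(f"{toggled} frames toggled from anomalous to normal in {CLIP}")
```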

Regarding the file names: since our code expects scene_id and clip_id to be integers, and some file names in the original dataset were overloaded, the following mapping has been adopted:

  • scene_id = {c1c2c3}

    where c1 is the scene type ({'abnormal': 0, 'normal': 1}) and c2c3 is the scene number of the corresponding file in the original dataset.

  • clip_id = {c1c2c3c4}

    where c1c2 is the scenario number (i.e., the clip id) of the corresponding file in the original dataset, and c3c4 is the remaining part of the id, dubbed the version. Indeed, in the original dataset some videos have names such as:

  • normal_scene_1_scenario1_1
  • normal_scene_1_scenario1_10
  • abnormal_scene_9_scenario_1_fog
  • abnormal_scene_12_scenario_1_smoke

To preserve the information about the environment of the clip (e.g., fog, smoke, ...), this mapping has been adopted:

{'fog': 51, 'fire': 52, 'smoke': 53}
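
A small helper along these lines can decode a {scene_id}_{clip_id} file stem back into the attributes described above; the zero-padding used in the example call is an assumption made for illustration:

```python
# Decode a "{scene_id}_{clip_id}" file stem back into the original attributes.
SCENE_TYPE = {"0": "abnormal", "1": "normal"}
VERSION_TO_ENV = {"51": "fog", "52": "fire", "53": "smoke"}

def decode_name(stem: str) -> dict:
    scene_id, clip_id = stem.split("_")
    return {
        "scene_type": SCENE_TYPE[scene_id[0]],           # c1: 0 = abnormal, 1 = normal
        "scene_number": int(scene_id[1:]),               # c2c3: scene number in the original dataset
        "scenario_number": int(clip_id[:2]),             # c1c2: scenario (clip) number
        "version": clip_id[2:],                          # c3c4: version, possibly an environment code
        "environment": VERSION_TO_ENV.get(clip_id[2:]),  # fog / fire / smoke, if encoded
    }

# e.g., an abnormal scene 9, scenario 1, "fog" clip (padding assumed for illustration)
print(decode_name("009_0151"))
```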

Citation

If you find this dataset useful in your research, please consider citing:

@misc{flaborea2023contracting,
      title={Contracting Skeletal Kinematics for Human-Related Video Anomaly Detection}, 
      author={Alessandro Flaborea and Guido D'Amely and Stefano D'Arrigo and Marco Aurelio Sterpa and Alessio Sampieri and Fabio Galasso},
      year={2023},
      eprint={2301.09489},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@InProceedings{Acsintoae_CVPR_2022,
  author    = {Andra Acsintoae and Andrei Florescu and Mariana{-}Iuliana Georgescu and Tudor Mare and  Paul Sumedrea and Radu Tudor Ionescu and Fahad Shahbaz Khan and Mubarak Shah},
  title     = {UBnormal: New Benchmark for Supervised Open-Set Video Anomaly Detection},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2022},
}

References

[1] A. Acsintoae, A. Florescu, M.-I. Georgescu, T. Mare, P. Sumedrea, R. T. Ionescu, F. S. Khan, M. Shah, UBnormal: New benchmark for supervised open-set video anomaly detection, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 20143–20153.

[2] H.-S. Fang, S. Xie, Y.-W. Tai, C. Lu, RMPE: Regional multi-person pose estimation, in: ICCV, 2017, pp. 2334–2343.

[3] Y. Xiu, J. Li, H. Wang, Y. Fang, C. Lu, Pose Flow: Efficient online pose tracking, in: BMVC, 2018.