---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: name
    dtype: string
  - name: video_path
    dtype: string
  - name: count
    sequence:
      dtype: int64
  - name: fuzzy_action
    dtype: bool
  - name: complex_action
    dtype: bool
  splits:
  - name: test
    num_examples: 227
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test.jsonl
license: cc-by-4.0
task_categories:
- video-classification
- visual-question-answering
tags:
- video
- counting
- repetition-counting
- exercise
- benchmark
pretty_name: PushUpBench
size_categories:
- n<1K
---

# PushUpBench: Video Repetition Counting Benchmark

PushUpBench is a benchmark for evaluating vision-language models on their ability to count exercise repetitions in videos.

## Dataset Description

- **Total samples**: 227
- **Video format**: MP4
- **Task**: Count the number of repetitions of a specified exercise in a video

## Dataset Structure

Each sample contains:
- `id`: numeric sample identifier
- `name`: action description (e.g., "push ups", "leg lift", "knee to chest")
- `video_path`: filename of the video
- `count`: list of acceptable count values (some exercises have ambiguous boundaries)
- `fuzzy_action`: whether the action has ambiguous start/end boundaries
- `complex_action`: whether the action is compound/complex

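For illustration, here is what one record might look like and how the `count` list is meant to be used (the field values below are invented examples, not taken from the dataset):

```python
# Hypothetical record following the schema above (values are illustrative)
sample = {
    "id": 0,
    "name": "push ups",
    "video_path": "pushups_001.mp4",
    "count": [10, 11],        # both 10 and 11 are acceptable answers
    "fuzzy_action": True,     # start/end boundaries are ambiguous
    "complex_action": False,
}

# A prediction is exactly correct if it matches any acceptable count
prediction = 11
is_correct = prediction in sample["count"]
```

Because `fuzzy_action` videos have ambiguous rep boundaries, `count` holds every defensible answer rather than a single value.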
## Usage with lmms-eval

```bash
# Set the video directory
export PUSHUPBENCH_VIDEO_DIR=/path/to/videos

# Run evaluation
python -m lmms_eval \
    --model <model> \
    --tasks pushupbench \
    --batch_size 1 \
    --output_path results/
```

## Metrics

- **Exact Match**: the prediction matches any value in the ground-truth count list
- **MAE**: mean absolute error between the prediction and the primary ground truth
- **OBO**: off-by-one accuracy (the prediction is within 1 of any ground-truth value)
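
The three metrics can be sketched as follows. This is a minimal reading of the definitions above, with one assumption: the "primary" ground truth used for MAE is taken to be the first value in the `count` list.

```python
def exact_match(pred, counts):
    # Correct if the prediction equals any acceptable ground-truth count
    return pred in counts

def absolute_error(pred, counts):
    # Error against the primary ground truth (assumed: first list entry);
    # MAE is the mean of this value over all samples
    return abs(pred - counts[0])

def off_by_one(pred, counts):
    # Correct if the prediction is within 1 of any acceptable count
    return any(abs(pred - c) <= 1 for c in counts)

# Example: two acceptable counts due to an ambiguous final rep
counts = [10, 11]
print(exact_match(10, counts))    # True
print(absolute_error(12, counts)) # 2
print(off_by_one(12, counts))     # True (within 1 of 11)
```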