CaesarWang committed 748df47 (verified) · Parent(s): e8aab43

Update README.md (1 file changed, +176 −3)
---
language:
- en
size_categories:
- 100M<n<1B
---

# Dataset Card for Dataset Curation of 3DXTalker

## Dataset Description

- **Repository:** [Link to your GitHub/Project Page]
- **Paper:** [Link to your 3DXTalker or relevant paper]
- **Project Page:** [Link to your Project Page]

### Dataset Summary

This dataset is a large-scale, curated collection of talking-head videos built for tasks such as high-fidelity 3D talking-avatar generation, lip synchronization, and pose-dynamics modeling.

It aggregates and standardizes data from six prominent sources (**GRID, RAVDESS, MEAD, VoxCeleb2, HDTF, CelebV-HQ**), processed through a rigorous curation pipeline to ensure high quality in face alignment, resolution, and audio-visual synchronization. The collection covers diverse recording environments (lab vs. wild) and a wide range of subjects.

### Supported Tasks and Leaderboards

- **3D Talking Head Generation:** Synthesizing realistic talking-head videos from driving speech.
- **Audio-Driven Lip Synchronization:** Aligning lip movements precisely with the input speech.
- **Emotion Analysis & Synthesis:** Leveraging the emotional diversity of datasets such as RAVDESS and MEAD.
- **Audio-Driven Head Pose Synthesis:** Modeling natural head movements and orientation directly from the driving speech.

## Dataset Structure

```
trainset/
├── V0-GRID/                      # 6,570 sequences from the GRID corpus
│   ├── V0-s1-00001/
│   │   ├── audio.wav             # (N,) audio data
│   │   ├── cam.npy               # (T, 3) camera parameters
│   │   ├── detailcode.npy        # (T, 128) facial details
│   │   ├── envelope.npy          # (N,) audio envelope
│   │   ├── expcode.npy           # (T, 50) expression codes
│   │   ├── lightcode.npy         # (T, 9, 3) lighting
│   │   ├── metadata.pkl          # sequence metadata
│   │   ├── posecode.npy          # (T, 6) head pose
│   │   ├── refimg.npy            # (C, H, W) reference image
│   │   ├── shapecode.npy         # (T, 100) shape codes
│   │   └── texcode.npy           # (T, 50) texture codes
│   ├── V0-s1-00002/
│   │   └── ... (same 11 files)
│   ├── V0-s1-00003/
│   └── ... (6,570 total sequences)
├── V1-RAVDESS/                   # 583 sequences from the RAVDESS dataset
│   ├── V1-Song-Actor_01-00001/
│   │   └── ... (same 11 files)
│   ├── V1-Song-Actor_01-00002/
│   ├── V1-Speech-Actor_01-00001/
│   ├── V1-Speech-Actor_02-00001/
│   └── ... (583 total sequences)
├── V2-MEAD/                      # 1,939 sequences from the MEAD dataset
│   ├── V2-M003-angry-00001/
│   │   └── ... (same 11 files)
│   ├── V2-M003-angry-00002/
│   ├── V2-M003-happy-00001/
│   ├── V2-W009-sad-00001/
│   └── ... (1,939 total sequences)
├── V3-VoxCeleb2/                 # 1,296 sequences from VoxCeleb2
│   ├── {sequence_id}/
│   │   └── ... (same 11 files)
│   └── ... (1,296 total sequences)
├── V4-HDTF/                      # 350 sequences from the HDTF dataset
│   ├── {sequence_id}/
│   │   └── ... (same 11 files)
│   └── ... (350 total sequences)
└── V5-CelebV-HQ/                 # 768 sequences from the CelebV-HQ dataset
    ├── {sequence_id}/
    │   └── ... (same 11 files)
    └── ... (768 total sequences)
```

### Data Statistics

The dataset comprises **11,706** video samples in total, spanning approximately **67.4 hours** of talking-head footage. The data is categorized by recording environment (lab vs. wild) and covers varying resolutions and diverse subjects.

#### Detailed Statistics (from the Curation Pipeline)

| Dataset       | ID | Environment | Year | Raw Resolution | Size (samples) | Subjects | Total Duration (s) | Hours (h) | Avg. Duration (s/sample) |
|---------------|----|-------------|------|----------------|----------------|----------|--------------------|-----------|--------------------------|
| **GRID**      | V0 | Lab         | 2006 | 720 × 576      | 6,600          | 34       | 99,257.81          | 27.57     | 15.04                    |
| **RAVDESS**   | V1 | Lab         | 2018 | 1280 × 1024    | 613            | 24       | 10,071.88          | 2.80      | 16.43                    |
| **MEAD**      | V2 | Lab         | 2020 | 1920 × 1080    | 1,969          | 60       | 42,868.77          | 11.91     | 21.77                    |
| **VoxCeleb2** | V3 | Wild        | 2018 | 360P–720P      | 1,326          | 1k+      | 21,528.20          | 5.98      | 16.24                    |
| **HDTF**      | V4 | Wild        | 2021 | 720P–1080P     | 400            | 300+     | 55,452.08          | 15.40     | 138.63                   |
| **CelebV-HQ** | V5 | Wild        | 2022 | 512 × 512      | 798            | 700+     | 13,486.20          | 3.75      | 16.90                    |
### Data Splits

The dataset follows a strict training/testing split protocol to ensure fair evaluation. The test set is a balanced selection from each sub-dataset.

| Dataset       | ID | Total Size | Training Set | Test Set |
|---------------|----|------------|--------------|----------|
| **GRID**      | V0 | 6,600      | 6,570        | 30       |
| **RAVDESS**   | V1 | 613        | 583          | 30       |
| **MEAD**      | V2 | 1,969      | 1,939        | 30       |
| **VoxCeleb2** | V3 | 1,326      | 1,296        | 30       |
| **HDTF**      | V4 | 400        | 350          | 50       |
| **CelebV-HQ** | V5 | 798        | 768          | 30       |
| **Summary**   |    | **11,706** | **11,506**   | **200**  |

## Dataset Creation

### Curation Rationale

Raw in-the-wild videos (e.g., VoxCeleb2, CelebV-HQ) often contain background noise, diverse languages, or varying resolutions. This dataset is the result of the following curation pipeline, designed to ensure high-quality audio-visual consistency:

1. **Duration Filtering:** To facilitate temporal modeling, short clips from the lab datasets are concatenated into 10–20 s sequences, while wild samples shorter than 10 s are filtered out.
2. **Signal-to-Noise Ratio (SNR) Filtering:** Clips with strong background noise, music, or environmental interference are removed based on SNR thresholds to ensure clean audio features.
3. **Language Filtering:** Linguistic consistency is enforced by using **Whisper** to discard non-English samples and those with low detection confidence.
4. **Audio-Visual Sync Filtering:** **SyncNet** is used to eliminate clips with poor lip synchronization, abrupt scene cuts, or off-screen speakers (e.g., voice-overs).
5. **Resolution Normalization:** All videos are resized and center-cropped to a unified **512×512** resolution and re-encoded at **25 FPS** in standardized RGB to harmonize data from the diverse sources.

### Source Video Data

- **GRID:** Lab-recorded audio-visual sentence corpus (2006), 34 speakers.
- **RAVDESS:** Lab-recorded emotional speech and song (2018), 24 actors.
- **MEAD:** Lab-recorded multi-emotion talking-face dataset (2020), 60 actors.
- **VoxCeleb2:** In-the-wild speaker videos (2018), 1k+ subjects.
- **HDTF:** In-the-wild high-definition talking-face videos (2021), 300+ subjects.
- **CelebV-HQ:** In-the-wild high-quality celebrity video clips (2022), 700+ subjects.