CaesarWang committed ec40962 (verified, parent 16729f9): Update README.md

Files changed (1): README.md (+61, -0)
## Data Format Details

### File Overview

| File | Type | Shape | Description |
|------|------|-------|-------------|
| `audio.wav` | Audio | (N_samples,) | Original audio waveform |
| `cam.npy` | Parameters | (N_frames, 3) | Camera parameters (position/scale) |
| `detailcode.npy` | Parameters | (N_frames, 128) | Facial detail codes (wrinkles, fine features) |
| `envelope.npy` | Parameters | (N_audio_samples,) | Audio envelope/amplitude over time |
| `expcode.npy` | Parameters | (N_frames, 50) | FLAME expression parameters (50-dim) |
| `lightcode.npy` | Parameters | (N_frames, 9, 3) | Spherical harmonics lighting (9 coefficients × RGB) |
| `metadata.pkl` | Metadata | N/A | Sequence metadata (integer or dict) |
| `posecode.npy` | Parameters | (N_frames, 6) | Head pose (3) + jaw pose (3) |
| `refimg.npy` | Image | (3, 224, 224) | Reference image (RGB, 224×224 pixels) |
| `shapecode.npy` | Parameters | (N_frames, 100) | FLAME shape parameters (100-dim) |
| `texcode.npy` | Parameters | (N_frames, 50) | Texture codes (50-dim) |
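Given the shapes above, a per-sequence consistency check can be sketched as follows. This is a minimal sketch: in a real sequence each array would come from `np.load` on the corresponding `.npy` file; here synthetic zero arrays stand in so the snippet is self-contained.

```python
import numpy as np

# Trailing dimensions per parameter file, from the File Overview table.
# The leading dimension is always N_frames.
PARAM_SHAPES = {
    "expcode": (50,),      # FLAME expression
    "shapecode": (100,),   # FLAME shape
    "detailcode": (128,),  # facial detail codes
    "posecode": (6,),      # 3 head pose + 3 jaw pose
    "cam": (3,),           # camera position/scale
    "lightcode": (9, 3),   # SH lighting, 9 coefficients x RGB
    "texcode": (50,),      # texture codes
}

def check_sequence(params: dict) -> int:
    """Verify all parameter arrays match the expected trailing shapes
    and share the same N_frames; return N_frames."""
    n_frames = None
    for name, arr in params.items():
        expected = PARAM_SHAPES[name]
        assert arr.shape[1:] == expected, f"{name}: {arr.shape[1:]} != {expected}"
        if n_frames is None:
            n_frames = arr.shape[0]
        assert arr.shape[0] == n_frames, f"{name}: frame count mismatch"
    return n_frames

# Synthetic stand-in for a 2-second sequence at 25 FPS (50 frames).
demo = {name: np.zeros((50,) + shape) for name, shape in PARAM_SHAPES.items()}
print(check_sequence(demo))  # 50
```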
### Coordinate Systems and Conventions

- **FLAME model**: 3D Morphable Face Model with 5,023 vertices
- **Expression space**: 50-dimensional linear basis
- **Shape space**: 100-dimensional PCA space
- **Pose representation**: 3 head pose + 3 jaw pose parameters
- **Lighting**: 2nd-order spherical harmonics (9 coefficients per channel)
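As an illustration of the lighting convention, the 9 coefficients per channel correspond to the real spherical-harmonics basis up to order 2. The sketch below uses the standard basis constants; the exact ordering and normalization used when the `lightcode` values were extracted may differ, so treat this as an assumption, not the dataset's definitive convention.

```python
import numpy as np

def sh_basis(normal):
    """Real 2nd-order spherical-harmonics basis (9 values) at a unit normal."""
    x, y, z = normal
    return np.array([
        0.282095,                    # l=0
        0.488603 * y,                # l=1, m=-1
        0.488603 * z,                # l=1, m=0
        0.488603 * x,                # l=1, m=1
        1.092548 * x * y,            # l=2, m=-2
        1.092548 * y * z,            # l=2, m=-1
        0.315392 * (3 * z**2 - 1),   # l=2, m=0
        1.092548 * x * z,            # l=2, m=1
        0.546274 * (x**2 - y**2),    # l=2, m=2
    ])

def shade(lightcode_frame, normal):
    """RGB radiance for one frame's (9, 3) lighting code at a surface normal."""
    return sh_basis(normal) @ lightcode_frame  # -> (3,)

# Ambient-only light: only the l=0 (constant) coefficient is non-zero.
coeffs = np.zeros((9, 3))
coeffs[0] = 1.0
print(shade(coeffs, np.array([0.0, 0.0, 1.0])))  # ~[0.282, 0.282, 0.282]
```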
### Temporal Synchronization

- **Video frames**: 25 FPS (frames per second)
- **Audio samples**: 16,000 samples per second (16 kHz)
- All per-frame parameters (`expcode`, `shapecode`, `detailcode`, `posecode`, `cam`, `lightcode`, `texcode`) share the same `N_frames` dimension
- Audio and video are temporally aligned (frame 0 corresponds to the start of the audio)
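Because the frame rate and sample rate above divide evenly (16,000 / 25 = 640 samples per frame), mapping a video frame to its span of audio samples is a simple sketch:

```python
SAMPLE_RATE = 16_000   # audio samples per second
FPS = 25               # video frames per second
SAMPLES_PER_FRAME = SAMPLE_RATE // FPS  # 640

def frame_to_sample_range(frame_idx: int) -> tuple:
    """Half-open [start, end) range of audio samples covering one video frame,
    assuming frame 0 starts at audio sample 0."""
    start = frame_idx * SAMPLES_PER_FRAME
    return start, start + SAMPLES_PER_FRAME

print(frame_to_sample_range(0))   # (0, 640)
print(frame_to_sample_range(25))  # (16000, 16640) -- one second in
```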
### Data Statistics

The dataset comprises **11,706** total video samples, spanning approximately **67.4 hours** of self-talking footage. The data is categorized by environment (Lab vs. Wild) and includes varying resolutions and subject diversity.
 
- **HDTF:** https://huggingface.co/datasets/global-optima-research/HDTF
- **Celebv-HQ:** https://github.com/CelebV-HQ/CelebV-HQ/
## Citation

If you use this dataset, please cite the original source datasets:

- **GRID**: Cooke, M., et al. (2006). An audio-visual corpus for speech perception and automatic speech recognition.
- **RAVDESS**: Livingstone, S. R., & Russo, F. A. (2018). The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS).
- **MEAD**: Wang, K., et al. (2020). MEAD: A Large-scale Audio-visual Dataset for Emotional Talking-face Generation.
- **VoxCeleb2**: Chung, J. S., et al. (2018). VoxCeleb2: Deep Speaker Recognition.
- **HDTF**: Zhang, Z., et al. (2021). Flow-guided One-shot Talking Face Generation with a High-resolution Audio-visual Dataset.
- **CelebV-HQ**: Zhu, H., et al. (2022). CelebV-HQ: A Large-Scale Video Facial Attributes Dataset.

And the EMOCA model used for parameter extraction:

- **EMOCA**: Danecek, R., et al. (2022). EMOCA: Emotion Driven Monocular Face Capture and Animation.
## License

Please refer to the original dataset licenses:

- GRID: Research use only
- RAVDESS: CC BY-NC-SA 4.0
- MEAD, VoxCeleb2, HDTF, CelebV-HQ: Check the respective dataset licenses
## Notes

- Sequence numbers are not necessarily contiguous (some sequences may be missing due to quality filtering or processing failures)
- File counts per sequence are consistent (11 files per sequence)
- This is a processed/derived dataset: original videos are not included, only extracted parameters
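Since sequence numbers can be non-contiguous but every valid sequence should hold exactly the 11 files listed in the File Overview, a download can be sanity-checked by scanning for complete sequence directories. This assumes sequences are stored as one subdirectory per sequence under a root folder (the layout shown earlier in this card); adjust the path handling if your copy differs.

```python
from pathlib import Path

# The 11 files expected in every complete sequence directory.
EXPECTED_FILES = {
    "audio.wav", "cam.npy", "detailcode.npy", "envelope.npy",
    "expcode.npy", "lightcode.npy", "metadata.pkl", "posecode.npy",
    "refimg.npy", "shapecode.npy", "texcode.npy",
}

def complete_sequences(root):
    """Yield sequence directories under `root` that contain all 11 expected files."""
    for d in sorted(Path(root).iterdir()):
        if d.is_dir() and EXPECTED_FILES <= {p.name for p in d.iterdir()}:
            yield d
```

Iterating `complete_sequences(root)` then gives only the usable sequences, silently skipping directories broken by partial downloads or filtering.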