---
license: cc-by-4.0
task_categories:
- text-to-speech
language:
- ur
size_categories:
- 10K<n<100K
---
# Urdu TTS Dataset

A high-quality Urdu speech dataset for Text-to-Speech (TTS) model training,
built from YouTube audiobooks and narrations.

## Dataset Statistics

| Property | Value |
|---|---|
| Language | Urdu (ur) |
| Total Samples | 50,000 |
| Total Duration | 70 hours |
| Sample Rate | 22050 Hz |
| Audio Format | WAV (16-bit PCM, mono) |

## Columns

| Column | Type | Description |
|---|---|---|
| `audio` | Audio | WAV file at 22050 Hz |
| `transcript` | string | Raw Urdu transcription (Whisper) |
| `normalized_text` | string | Cleaned and normalized Urdu text |
| `audio_id` | string | Unique segment identifier |
| `duration_sec` | float | Clip duration in seconds |
| `snr_db` | float | Estimated signal-to-noise ratio in dB |
| `source_file` | string | Source YouTube video filename |
| `language` | string | Always `"ur"` |

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("{repo_id}")
print(dataset["train"][0])
```

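The metadata columns make it easy to subset the data before training. Below is a minimal sketch of a quality filter; the `keep` helper and its thresholds are illustrative, not part of the dataset:

```python
def keep(row, min_dur=2.0, max_dur=12.0, min_snr=20.0):
    """Keep clips in a mid-length range with a reasonable SNR estimate.
    Thresholds are illustrative; tune them for your TTS training setup."""
    return min_dur <= row["duration_sec"] <= max_dur and row["snr_db"] >= min_snr

# The predicate works on plain dicts shaped like dataset rows:
rows = [
    {"audio_id": "seg_0001", "duration_sec": 5.2, "snr_db": 28.4},
    {"audio_id": "seg_0002", "duration_sec": 1.1, "snr_db": 31.0},  # too short
    {"audio_id": "seg_0003", "duration_sec": 8.0, "snr_db": 12.5},  # too noisy
]
clean = [r for r in rows if keep(r)]  # keeps only seg_0001
```

The same predicate can be passed directly to `datasets.Dataset.filter`, e.g. `dataset["train"].filter(keep)`.
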
## Pipeline

1. YouTube audio download (yt-dlp)
2. Voice activity detection (VAD) segmentation into 1.5–15 sec clips
3. SNR-based quality filtering
4. Urdu transcription (OpenAI Whisper medium/large)
5. Text normalization and deduplication
6. Incremental shard upload to the Hugging Face Hub

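To illustrate step 3, here is a rough sketch of an energy-based SNR estimate. This is a simplification under assumed behavior; the pipeline's actual estimator is not specified in this card:

```python
import numpy as np

def estimate_snr_db(wav: np.ndarray, frame_len: int = 2048) -> float:
    """Rough energy-based SNR estimate: treat the quietest 10% of frames
    as the noise floor and the loudest 10% as signal. A simplification of
    whatever estimator the real pipeline uses."""
    n = len(wav) // frame_len
    frames = wav[: n * frame_len].reshape(n, frame_len)
    energy = np.sort(frames.astype(np.float64).var(axis=1))
    k = max(1, n // 10)
    noise = energy[:k].mean() + 1e-12   # avoid division by zero on silence
    signal = energy[-k:].mean()
    return 10.0 * np.log10(signal / noise)
```

Segments scoring below a chosen threshold would then be dropped before transcription.
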
## License

CC BY 4.0. Please respect the rights of the original content creators.