---
license: cc-by-nc-4.0
task_categories:
- automatic-speech-recognition
language:
- en
tags:
- non-native
- pronunciation
- speech
- pronunciation assessment
- phoneme
pretty_name: EpaDB
size_categories:
- 1K<n<10K
configs:
- config_name: default
  data_files:
  - split: train
    path: train.json
  - split: test
    path: test.json
---

# EpaDB: English Pronunciation by Argentinians

## Dataset Summary

EpaDB is a speech database intended for research on pronunciation scoring. The corpus contains recordings of 50 native Spanish speakers from Argentina (25 male, 25 female) reading phrases in English. Each speaker recorded 64 short phrases containing sounds that are difficult for this population to pronounce, adding up to roughly 3.5 hours of speech.

## Supported Tasks

- **Pronunciation Assessment** – predict utterance-level global scores or phoneme-level correct/incorrect labels.
- **Phone Recognition** – predict phoneme sequences.
- **Phone-level Error Detection** – classify each phone as an insertion, deletion, distortion, substitution, or correct.
- **Alignment Analysis** – use the MFA timings to study forced-alignment quality or to refine pronunciation models.

## Languages

- L2 utterances: English
- Speaker L1: Spanish

## Dataset Structure

### Data Instances

Each JSON entry describes one utterance:

- Phone sequences for the reference transcription (`reference`) and the annotators (`annot_1`, optional `annot_2`).
- Phone-level labels (`label_1`, `label_2`) and derived `error_type` categories.
- MFA start/end timestamps per phone (`start_mfa`, `end_mfa`).
- Per-utterance global scores (`global_1`, `global_2`) and propagated speaker levels (`level_1`, `level_2`).
- Speaker metadata (`speaker_id`, `gender`).
- Audio metadata (`duration`, `sample_rate`, `wav_path`) plus the waveform itself.
- Orthographic transcription of the reference sentence (`transcription`).

### Data Fields

| Field | Type | Description |
|-------|------|-------------|
| `utt_id` | string | Unique utterance identifier (e.g., `spkr28_1`). |
| `speaker_id` | string | Speaker identifier. |
| `sentence_id` | string | Reference sentence ID (matches `reference_transcriptions.txt`). |
| `phone_ids` | sequence[string] | Unique phone identifiers per utterance. |
| `reference` | sequence[string] | Reference phones, chosen to match the pronunciation the speaker most plausibly aimed for. |
| `annot_1` | sequence[string] | Annotator 1 phones (`-` marks deletions). |
| `annot_2` | sequence[string] | Annotator 3 phones when available, empty otherwise. |
| `label_1` | sequence[string] | Annotator 1 phone labels (`"1"` correct, `"0"` incorrect). |
| `label_2` | sequence[string] | Annotator 3 phone labels when present. |
| `error_type` | sequence[string] | Derived categories: `correct`, `insertion`, `deletion`, `distortion`, `substitution`. |
| `start_mfa` | sequence[float] | Phone start times (seconds). |
| `end_mfa` | sequence[float] | Phone end times (seconds). |
| `global_1` | float or null | Annotator 1 utterance-level score (1–4). |
| `global_2` | float or null | Annotator 3 score when available. |
| `level_1` | string or null | Speaker-level proficiency tier from annotator 1 (`"A"`/`"B"`). |
| `level_2` | string or null | Speaker tier from annotator 3. |
| `gender` | string or null | Speaker gender (`"M"`/`"F"`). |
| `duration` | float | Utterance duration in seconds (after resampling to 16 kHz). |
| `sample_rate` | int | Sample rate in Hz (16,000). |
| `audio` | string | Waveform filename (`<utt_id>.wav`). |
| `transcription` | string or null | Reference sentence text. |

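The per-phone fields above are parallel sequences, so utterance-level statistics fall out of simple iteration. A minimal sketch (the entry below is invented for illustration; only the field names and label conventions come from the table above):

```python
# A hypothetical entry in the shape described above (values illustrative only).
entry = {
    "utt_id": "spkr28_1",
    "reference": ["DH", "AH", "K", "AE", "T"],
    "annot_1":   ["DH", "AH", "-", "AE", "T"],   # "-" marks a deletion
    "label_1":   ["1",  "1",  "0", "1",  "1"],   # "1" correct, "0" incorrect
    "error_type": ["correct", "correct", "deletion", "correct", "correct"],
}

def phone_error_rate(entry):
    """Fraction of phones that annotator 1 marked as incorrect ("0")."""
    labels = entry["label_1"]
    return sum(1 for lab in labels if lab == "0") / len(labels)

def count_error_types(entry):
    """Tally the derived error_type categories for one utterance."""
    counts = {}
    for e in entry["error_type"]:
        counts[e] = counts.get(e, 0) + 1
    return counts

print(phone_error_rate(entry))    # 0.2
print(count_error_types(entry))   # {'correct': 4, 'deletion': 1}
```

The same loop works unchanged over `annot_2`/`label_2` for utterances that annotator 3 labeled.
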
### Data Splits

| Split | # Examples |
|-------|------------|
| train | 1,903 |
| test | 1,263 |

### Notes

- When annotator 3 did not label an utterance, the related fields (`annot_2`, `label_2`, `global_2`, `level_2`) are absent or set to null.
- Error types come from simple heuristics contrasting the MFA reference phones with annotator 1's labels.
- Waveforms were resampled to 16 kHz with `ffmpeg` during manifest generation.
- Forced alignments and annotations were merged to produce enriched per-speaker/partition CSV files.
- Global scores are averaged per speaker to derive the `level_*` tiers (`A` if the mean is ≥ 3, `B` otherwise).

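The tier rule in the last note can be sketched in a few lines (the speaker IDs and scores below are invented; only the `A`/`B` threshold comes from the notes):

```python
from statistics import mean

# Hypothetical per-utterance global scores (1-4 scale) for two speakers.
scores_by_speaker = {
    "spkr01": [3.5, 3.0, 4.0],
    "spkr02": [2.0, 2.5, 3.0],
}

def speaker_level(scores):
    """Tier rule from the notes: "A" if the mean score is >= 3, else "B"."""
    return "A" if mean(scores) >= 3 else "B"

levels = {spk: speaker_level(s) for spk, s in scores_by_speaker.items()}
print(levels)  # {'spkr01': 'A', 'spkr02': 'B'}
```
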
## Licensing

- Audio and annotations: CC BY-NC 4.0 (non-commercial use allowed with attribution).

## Citation

```
@inproceedings{vidal2019epadb,
  title     = {EpaDB: a database for development of pronunciation assessment systems},
  author    = {Vidal, Jazmin and Ferrer, Luciana and Brambilla, Leonardo},
  booktitle = {Proc. Interspeech},
  pages     = {589--593},
  year      = {2019}
}
```

## Usage

Install the `datasets` library and load the dataset:

```python
from datasets import load_dataset

ds = load_dataset("hashmin/epadb", split="train")
```

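Once an example is loaded, the MFA timestamps can be turned into sample indices for slicing a phone out of the waveform. A small sketch, assuming timestamps in seconds and 16 kHz audio as stated in the fields table (the timestamps below are invented):

```python
SAMPLE_RATE = 16_000  # per the fields table, all audio is resampled to 16 kHz

def phone_sample_span(start_s, end_s, sample_rate=SAMPLE_RATE):
    """Convert MFA start/end times (seconds) to [start, end) sample indices."""
    return int(round(start_s * sample_rate)), int(round(end_s * sample_rate))

# Hypothetical start_mfa/end_mfa values for one phone.
start, end = phone_sample_span(0.25, 0.40)
print(start, end)  # 4000 6400
```

The resulting indices can be used directly to slice the decoded audio array, e.g. `samples[start:end]`.
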
## Acknowledgements

The database is an effort of the Speech Lab at the Laboratorio de Inteligencia Artificial Aplicada of the Universidad de Buenos Aires, and was partially funded by a Google Latin America Research Award in 2018.