VibeVoice ASR – Kazakh
Model Description
This is VibeVoice ASR fine-tuned on Kazakh using the ISSAI KSC2 Structured dataset (~1,200 hours of diverse Kazakh speech). Fine-tuning was performed with LoRA (Low-Rank Adaptation), and the adapter weights were merged into the base model for efficient inference. The model achieves 22% WER on the ISSAI KSC2 test set.
The base VibeVoice ASR model had no prior Kazakh knowledge. This fine-tuned version produces punctuated and capitalized Kazakh transcriptions.
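The "merged for efficient inference" step above can be sketched with plain linear algebra: LoRA trains a low-rank update B·A next to a frozen weight W, and merging folds that update into W once, so inference needs a single matmul instead of two paths. A minimal sketch with illustrative dimensions (not the real model's shapes or target modules):

```python
import numpy as np

# Illustrative sizes only; real LoRA adapters target specific projection layers.
d_out, d_in, r, alpha = 6, 4, 2, 4
scale = alpha / r  # standard LoRA scaling factor

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((r, d_in))      # trained LoRA down-projection
B = rng.standard_normal((d_out, r))     # trained LoRA up-projection

# Merge: fold the low-rank update into the base weight once, offline.
W_merged = W + scale * (B @ A)

# At inference, one matmul with W_merged equals base path + adapter path.
x = rng.standard_normal(d_in)
assert np.allclose(W_merged @ x, W @ x + scale * (B @ (A @ x)))
```

In practice this merge is what libraries like PEFT do when collapsing an adapter into the base checkpoint; after merging, the model loads and runs like an ordinary full model with no adapter overhead.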
Training Dataset
InflexionLab/ISSAI-KSC2-Structured – an enhanced version of the ISSAI KSC2 corpus with punctuation and capitalization restored using Gemma 27B. Covers 6 domains: TV News, Crowdsourced, Parliament, Talkshow, Podcasts, and Radio.
Evaluation Results
Evaluated on the KSC2 Test split (9,351 samples). The base model column reflects the unmodified microsoft/VibeVoice-ASR with no Kazakh training.
| Domain | WER (Base) | WER (Fine-tuned) | CER (Base) | CER (Fine-tuned) |
|---|---|---|---|---|
| TV News | 232.03% | 10.95% | 171.66% | 3.27% |
| Crowdsourced | 257.27% | 12.00% | 192.02% | 3.28% |
| Parliament | 178.99% | 15.01% | 130.68% | 7.45% |
| Talkshow | 531.58% | 25.86% | 390.86% | 11.71% |
| Podcasts | 395.54% | 31.68% | 289.77% | 15.14% |
| Radio | 351.42% | 56.68% | 255.52% | 32.53% |
| Overall | 295.08% | ~22% | 213.33% | ~9.6% |
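The base-model WERs above exceed 100% because WER is word-level edit distance divided by reference length, so a hypothesis with many insertions (e.g. hallucinated text in an unseen language) can accumulate more errors than the reference has words. A minimal sketch of the metric (standard Levenshtein WER, not the exact evaluation script used here):

```python
# Minimal WER sketch: word-level Levenshtein distance / reference word count.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

# One substituted word in a four-word reference -> 25% WER.
assert wer("бұл қазақ тілі мысалы", "бұл қазақ тіл мысалы") == 0.25
```

CER is computed the same way over characters instead of words, which is why it sits well below WER in the table: a single wrong suffix counts as one whole word error but only one or two character errors.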
Model tree for InflexionLab/VibeVoice-ASR-Kazakh
- Base model: microsoft/VibeVoice-ASR
- Training dataset: InflexionLab/ISSAI-KSC2-Structured