Best audio length/how to deal with long audios
Hi, my product listens to people speaking for long stretches (typically 10-30 min). What would be the best way to pass this kind of audio? E.g.:
- break the audio down into 30s chunks, send them to the model, then somehow reconcile the different outputs into a single one at the end?
- send the whole audio?
- something else?
Is audio longer than 30s better than audio shorter than 30s?
Thanks a lot!
Hi @belariow! For audio up to at least a few minutes, the model gives better results (higher sensitivity and specificity) than using just 30 seconds or less. In our experience the best way to handle this is to break the audio into 30-second chunks, feed them to the model (serially or in parallel) with quantize=False, and average the scores before quantizing.
We have limited experience with audio samples in the 10-30 min range, but we have some evidence that sensitivity and specificity can become imbalanced when averaging over such time scales while keeping the same thresholds used for shorter streams. This may have to do with asymmetric tails of the score distribution. If you have the opportunity to calibrate the model on such long-averaged streams, please let us know the results.
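For anyone implementing this, the chunk-then-average recipe can be sketched roughly like this. Note the model call is stood in by a generic `score_fn`, and the sample rate, chunk length, and threshold are placeholder assumptions, not values from the model's actual API:

```python
import numpy as np

SAMPLE_RATE = 16000   # assumed sample rate; use whatever your model expects
CHUNK_SECONDS = 30    # chunk length suggested in the reply above
THRESHOLD = 0.5       # hypothetical quantization threshold

def chunk_audio(audio, sr=SAMPLE_RATE, chunk_s=CHUNK_SECONDS):
    """Split a 1-D waveform into fixed-length chunks (last one may be shorter)."""
    step = sr * chunk_s
    return [audio[i:i + step] for i in range(0, len(audio), step)]

def score_long_audio(audio, score_fn, threshold=THRESHOLD):
    """Score each chunk unquantized (quantize=False), average, quantize once.

    `score_fn` stands in for the model call: it takes one chunk and
    returns a raw (unquantized) score in [0, 1].
    """
    chunks = chunk_audio(audio)
    raw_scores = [score_fn(c) for c in chunks]  # serial here; parallel also works
    mean_score = float(np.mean(raw_scores))
    # Quantize only the averaged score, not each chunk's score.
    return mean_score >= threshold, mean_score
```

For example, a 90-second recording yields three chunks, and the final label comes from a single threshold comparison on the averaged score, which is what keeps per-chunk noise from flipping individual chunk labels.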