PAVO-Bench: 50K-Turn Benchmark for ASR-LLM-TTS Pipeline Routing
Code: github.com/vnmoorthy/pavo-bench · Paper: TMLR 2026 (under review) · Authors: NarasingaMoorthy VeiluKanthaPerumal (UPenn), Mohammed Imthathullah (Google)
```bash
pip install git+https://github.com/vnmoorthy/pavo-bench.git
```
Headline results (vs fixed-cloud baseline, 50,000 voice turns)
| Metric | Result | Significance |
|---|---|---|
| P95 end-to-end latency (H100, LibriSpeech) | −10.3% (−167 ms) | p = 2×10⁻⁶ |
| Median latency | −34% | |
| Energy per turn | −71% | |
| Coherence-failure rate | 7.1% → 0.9% (7.9× reduction) | hard-constraint masking, +110 ms median cost |
| Meta-controller size | 85,041 parameters | — |
| Meta-controller training | 106 seconds on A100 | — |
The empirical contribution is a two-regime coupling structure (sharp factual-accuracy cliff + gradual semantic degradation) characterized over n = 5,430 measurements across two hardware platforms (H100, Apple M3) and three LLM families (Llama 3.1 8B, Mistral 7B, Gemma2 2B).
Description
PAVO-Bench evaluates ASR-LLM-TTS voice pipeline routing decisions. It provides 50,000 turns of benchmark data designed to measure how well different pipeline configurations balance latency, quality, cost, and energy when routing spoken-language queries through cascaded ASR, LLM, and TTS components.
The benchmark is organized into three tiers plus a component-level ablation. All results were produced on real GPU hardware.
Dataset Files
Tier 1 — Unit-Level Validation
| File | Description |
|---|---|
| `tier1_statistical_results.json` | Statistical reproducibility across 5 trials × 1,000 turns each (seeds 42, 123, 456, 789, 1024). |
| `tier1_coupling_results.json` | Coupling-cliff calibration — LLM quality degradation vs ASR word-error rate (WER 0–20%); see the loading sketch below. |
| `tier1_llm_latency_results.json` | Latency profile for llama3.1:8b across short / medium / long generation contexts. |
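A minimal sketch of reading the coupling file and printing quality versus WER. It assumes only the field names visible in the released JSON (`experiment`, `model`, and a `results` map whose entries carry `wer_pct`, `mean_quality`, and `degradation_pct`); adjust the keys if your copy differs:

```python
# Sketch: inspect the coupling cliff in tier1_coupling_results.json.
# The field names (results.*.{wer_pct, mean_quality, degradation_pct}) are
# assumptions taken from the released JSON; adjust them if they differ.
import json

from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="vnmoorthy/pavo-bench",
    filename="tier1_coupling_results.json",
    repo_type="dataset",
)
with open(path) as f:
    coupling = json.load(f)

print(coupling["experiment"], "on", coupling["model"])
for entry in sorted(coupling["results"].values(), key=lambda e: e["wer_pct"]):
    print(f"WER {entry['wer_pct']:>2}%: "
          f"mean quality {entry['mean_quality']:.3f} "
          f"({entry['degradation_pct']:+.1f}% vs clean)")
```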
Tier 2 — Integration-Level Evaluation
| File | Description |
|---|---|
| `tier2_e2e_results.json` | End-to-end pipeline measurements (cloud_premium, ondevice_fast, hybrid_balanced, pavo_adaptive) on 200 LibriSpeech samples. |
| `tier2_cross_dataset_results.json` | Cross-dataset ASR (LibriSpeech + FLEURS) for whisper-large-v3 and whisper-tiny. |
| `tier2_noise_robustness_results.json` | ASR robustness at SNR 5–30 dB plus clean baseline. |
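The Tier 2 files are plain JSON; a quick, schema-agnostic way to see what each one contains before parsing further (only the filenames above are assumed):

```python
# Sketch: download each Tier 2 results file and list its top-level structure.
# Nothing beyond the filenames in the table above is assumed.
import json

from huggingface_hub import hf_hub_download

for fname in [
    "tier2_e2e_results.json",
    "tier2_cross_dataset_results.json",
    "tier2_noise_robustness_results.json",
]:
    path = hf_hub_download(
        repo_id="vnmoorthy/pavo-bench", filename=fname, repo_type="dataset"
    )
    with open(path) as f:
        data = json.load(f)
    summary = list(data) if isinstance(data, dict) else f"{len(data)} records"
    print(f"{fname}: {summary}")
```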
Tier 3 — Scale Evaluation
| File | Description |
|---|---|
| `tier3_50k_summary.json` | Summary statistics for the 50K-turn dataset (40K train / 10K test split, complexity 1–5). |
| `tier3_scaling_results.json` | Per-model latency benchmarks for simple / medium / complex queries. |
Component Analysis
| File | Description |
|---|---|
| `component_ablation_results.json` | PAVO-Full vs PAVO-NoCoupling, Always-Cloud, Always-OnDevice, etc. |
Usage
```python
from huggingface_hub import hf_hub_download
import json

path = hf_hub_download(
    repo_id="vnmoorthy/pavo-bench",
    filename="tier3_50k_summary.json",
    repo_type="dataset",
)
print(json.load(open(path)))
```
Or via the pip package:
```python
from pavo_bench import load_dataset, PretrainedPAVORouter, benchmark_router

turns = load_dataset(split="test")
pavo = PretrainedPAVORouter.from_released()
print(benchmark_router(pavo, turns))
```
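The per-trial values stored alongside each aggregate also make it easy to sanity-check the reported statistics. A minimal sketch, assuming the `{values, mean, std, ci_95_lower, ci_95_upper}` layout of `tier1_statistical_results.json`; the interval convention used here (normal approximation over the population standard deviation) is an assumption and may differ from the released numbers:

```python
# Sketch: recompute the 95% interval for one metric in
# tier1_statistical_results.json from its per-trial values and compare it
# with the stored bounds. Key names follow the released file; the CI
# convention (1.96 * std / sqrt(n), population std) is an assumption.
import json
import math

from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="vnmoorthy/pavo-bench",
    filename="tier1_statistical_results.json",
    repo_type="dataset",
)
with open(path) as f:
    report = json.load(f)

block = report["pavo_latency"]           # per-trial end-to-end latencies (ms)
values = block["values"]
n = len(values)
mean = sum(values) / n
std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
half_width = 1.96 * std / math.sqrt(n)   # 95% normal-approximation interval

print(f"recomputed: [{mean - half_width:.2f}, {mean + half_width:.2f}]")
print(f"stored:     [{block['ci_95_lower']:.2f}, {block['ci_95_upper']:.2f}]")
```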
Citation
```bibtex
@article{veilukanthaperumal2026pavo,
  title   = {PAVO: Pipeline-Aware Voice Orchestration with Demand-Conditioned Inference Routing},
  author  = {VeiluKanthaPerumal, NarasingaMoorthy and Imthathullah, Mohammed},
  journal = {Transactions on Machine Learning Research},
  year    = {2026}
}
```
License
CC-BY 4.0