Dataset Preview

The full dataset viewer is not available; only a preview of the rows is shown. Dataset generation fails with a DatasetGenerationCastError: the CSV files under task_datasets/ carry an extra index column ({'Unnamed: 0'}) that dataset.csv lacks, so the data files do not all have the same columns. The fix is to either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).

Sample rows (truncated preview):

| method | model_name | gpu_model | number_gpus | tokens_per_sample | batch_size | version | dataset_tokens_per_second | train_runtime |
|---|---|---|---|---|---|---|---|---|
| full | granite-13b-v2 | NVIDIA-A100-SXM4-80GB | 2 | 512 | 2 | v2.1.0 | 1058.730442 | 1980.8177 |
| full | granite-13b-v2 | NVIDIA-A100-SXM4-80GB | 2 | 512 | 4 | v2.1.0 | 1736.711148 | 1207.5422 |
| full | granite-13b-v2 | NVIDIA-A100-SXM4-80GB | 2 | 512 | 8 | v2.1.0 | 2573.452825 | 814.9176 |
| full | granite-13b-v2 | NVIDIA-A100-SXM4-80GB | 4 | 1024 | 8 | v2.1.0 | 6173.969376 | 679.3529 |
| full | granite-13b-v2 | NVIDIA-A100-SXM4-80GB | 8 | 2048 | 16 | v2.1.0 | 14177.147075 | 591.6993 |
| full | granite-3b-code-base-128k | NVIDIA-A100-SXM4-80GB | 1 | 512 | 1 | v2.1.0 | 3073.674312 | 682.2948 |
| full | granite-3b-code-base-128k | NVIDIA-A100-SXM4-80GB | 2 | 1024 | 4 | v2.1.0 | 8699.724052 | 482.1192 |
| full | granite-3b-code-base-128k | NVIDIA-A100-SXM4-80GB | 2 | 2048 | 4 | v2.1.0 | 9945.140898 | 843.4881 |

End of preview.

LLM Fine-Tuning Performance Benchmark Dataset

Dataset Summary

This dataset contains performance benchmarks for Large Language Model (LLM) fine-tuning across various hardware and software configurations. It includes throughput measurements (tokens per second) for 959 valid configurations, collected over 1000 GPU hours on a Kubernetes cluster. The dataset is designed for research on predictive performance modeling, specifically for evaluating methods that handle Categorical Configuration Space Expansion (CCSE), which occurs when new values are introduced for categorical variables.

Research Purpose: This dataset enables evaluation of predictive model-building approaches when the configuration space expands with new categorical values (e.g., new LLMs, GPU types, fine-tuning methods, or software versions).

Dataset Description

Overview

LLM fine-tuning is compute- and memory-intensive. This benchmark measures throughput across a configuration space with 7 variables (4 categorical, 3 numerical):

Categorical Variables:

  • LLM: llama2-7b, granite-13b-v2, granite-3b-code-base-128k
  • Method: Full fine-tuning, LoRA (Low-Rank Adaptation)
  • GPU: NVIDIA A100-80GB, NVIDIA L40S-48GB
  • Version: v2.0.0, v2.1.0 (software stack versions)

Numerical Variables:

  • #GPUs: 1, 2, 4, 8
  • Batch Size: 1, 2, 4, 8, 16, 32, 64, 128
  • Tokens per Sample: 512, 1024, 2048, 4096, 8192

The full configuration space contains 3840 possible combinations. After excluding invalid configurations (batch size not divisible by #GPUs, memory constraints, hardware availability), 959 valid configurations were benchmarked.
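
As a rough illustration of how the space is constructed, the sketch below enumerates the cross product and applies the batch-size divisibility rule; the memory and hardware-availability exclusions depend on measurements and are not reproduced here, so the count it prints is an upper bound on the 959 valid configurations.

```python
# Sketch: enumerate the full 3,840-point configuration space and apply the
# one exclusion rule that is purely arithmetic (batch size divisible by #GPUs).
from itertools import product

llms = ["llama2-7b", "granite-13b-v2", "granite-3b-code-base-128k"]
methods = ["full", "lora"]
gpus = ["NVIDIA-A100-SXM4-80GB", "NVIDIA-L40S-48GB"]
versions = ["v2.0.0", "v2.1.0"]
n_gpus = [1, 2, 4, 8]
batch_sizes = [1, 2, 4, 8, 16, 32, 64, 128]
tokens_per_sample = [512, 1024, 2048, 4096, 8192]

space = list(product(llms, methods, gpus, versions, n_gpus, batch_sizes, tokens_per_sample))
assert len(space) == 3840

arithmetically_valid = [c for c in space if c[5] % c[4] == 0]  # batch_size % number_gpus
print(len(arithmetically_valid))
```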

Data Collection

Data has been obtained with ado, the accelerated discovery orchestrator, a platform for executing computational experiments at scale and analysing their results. Specifically, the SFTTrainer actuator was used to collect the data on IBM Research infrastructure.

  • Compute Time: 1011 GPU hours (computed from train_runtime * number_gpus; see the sketch below)
  • Methodology: Each configuration was run for a single epoch over a synthetic dataset to measure its throughput
  • Metric: Throughput = (total dataset tokens processed) / (epoch duration in seconds)
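
A minimal sketch of that bookkeeping, assuming dataset.csv has already been downloaded (see the loading example below):

```python
import pandas as pd

df = pd.read_csv("dataset.csv")

# Total compute: train_runtime is seconds per epoch, so runtime * GPUs,
# summed over all runs and converted to hours, gives the GPU-hour figure.
gpu_hours = (df["train_runtime"] * df["number_gpus"]).sum() / 3600
print(f"{gpu_hours:.0f} GPU hours")  # ~1011 per the card

# The throughput metric implies total tokens = throughput * epoch duration.
total_tokens = df["dataset_tokens_per_second"] * df["train_runtime"]
```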

Dataset Structure

Main Dataset

The primary dataset file is dataset.csv, which contains all 959 benchmarked configurations.
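
Because the Hub's automatic configuration groups every CSV together (and trips over the extra index column in the task files, as noted above), downloading the file directly is the most reliable route; a minimal sketch:

```python
# Minimal loading sketch using huggingface_hub to fetch the single file.
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="ibm-research/LLM_Fine-Tuning_Performance",
    filename="dataset.csv",
    repo_type="dataset",
)
df = pd.read_csv(path)
print(df.shape)  # expected: (959, 9)
```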

Task-Specific Datasets

The task_datasets/ directory contains CSV files for 18 specific benchmark tasks, organized by the categorical variable causing the configuration space expansion:

Naming Convention: {variable}_{generalization}_{target}.csv (a parsing sketch follows the list below)

  • variable: gpu, method, model, version
  • generalization: least (generalized), most (specialized)
  • target: specific value being predicted (e.g., g3b for granite-3b, l7b for llama2-7b)
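
A small helper for mapping file names back to task descriptions (a sketch; note that some generalized files, e.g. gpu_least.csv, omit the target suffix):

```python
# Hypothetical helper: split a task_datasets/ file name into its parts.
def parse_task_name(filename: str) -> dict:
    parts = filename.removesuffix(".csv").split("_")
    return {
        "variable": parts[0],        # gpu, method, model, or version
        "generalization": parts[1],  # least (generalized) or most (specialized)
        "target": parts[2] if len(parts) > 2 else None,
    }

print(parse_task_name("model_most_l7b.csv"))
# {'variable': 'model', 'generalization': 'most', 'target': 'l7b'}
```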

Data Fields

| Field | Type | Description |
|---|---|---|
| method | string | Fine-tuning method: "full" or "lora" |
| model_name | string | LLM model: "llama2-7b", "granite-13b-v2", or "granite-3b-code-base-128k" |
| gpu_model | string | GPU type: "NVIDIA-A100-SXM4-80GB" or "NVIDIA-L40S-48GB" |
| number_gpus | float | Number of GPUs: 1.0, 2.0, 4.0, or 8.0 |
| tokens_per_sample | float | Tokens per training sample: 512.0, 1024.0, 2048.0, 4096.0, or 8192.0 |
| batch_size | float | Training batch size: 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, or 128.0 |
| version | string | Foundation Model Stack version: "v2.0.0" or "v2.1.0" |
| dataset_tokens_per_second | float | Target variable: throughput in tokens/second |
| train_runtime | float | Training runtime in seconds for one epoch |
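
Note that the integral configuration values are stored as float64; a small normalization sketch (assuming df from the loading example above) casts them for cleaner grouping keys:

```python
# Optional: the GPU count, token length, and batch size are integral values
# stored as floats, so cast them before using them as group keys.
int_cols = ["number_gpus", "tokens_per_sample", "batch_size"]
df[int_cols] = df[int_cols].astype(int)
print(df.groupby(["model_name", "gpu_model"]).size())
```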

Benchmark Tasks

The dataset supports 18 distinct prediction tasks for evaluating model-building methods under Categorical Configuration Space Expansion (CCSE); a sketch of how a source/target split is constructed follows the list below. Tasks are categorized by:

  1. Variable causing expansion: LLM, GPU, Method, or Version
  2. Generalization level:
    • Generalized (†): Source space includes all values of other categorical variables
    • Specialized (★): Source space restricted to specific combinations
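
For example, the generalized LLM-expansion split with llama2-7b as the unseen value can be built directly from dataset.csv (a minimal sketch; the sizes match the first row of the task table below):

```python
# Sketch of a generalized (†) LLM-expansion task: the source space covers the
# two granite models across all other variables; the target is llama2-7b.
import pandas as pd

df = pd.read_csv("dataset.csv")

source = df[df["model_name"].isin(["granite-13b-v2", "granite-3b-code-base-128k"])]
target = df[df["model_name"] == "llama2-7b"]
print(len(source), len(target))  # 614, 345
```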

LLM Expansion Tasks (6 tasks)

| Source Space | Target | Source Size | Target Size |
|---|---|---|---|
| {granite-13b, granite-3b}, *, *, * | llama2-7b | 614 | 345 |
| {granite-3b, llama2-7b}, *, *, * | granite-13b | 713 | 246 |
| {llama2-7b, granite-13b}, *, *, * | granite-3b | 614 | 345 |
| {granite-13b, granite-3b}, LoRA, A100, v2.1 | llama2-7b | 206 | 110 |
| {granite-3b, llama2-7b}, LoRA, A100, v2.1 | granite-13b | 220 | 96 |
| {llama2-7b, granite-13b}, LoRA, A100, v2.1 | granite-3b | 206 | 110 |

GPU Expansion Tasks (4 tasks)

| Source Space | Target | Source Size | Target Size |
|---|---|---|---|
| *, LoRA, A100, v2.1.0 | L40S | 316 | 203 |
| llama2-7b, LoRA, A100, v2.1 | L40S | 110 | 74 |
| granite-13b, LoRA, A100, v2.1 | L40S | 96 | 55 |
| granite-3b, LoRA, A100, v2.1 | L40S | 110 | 74 |

Method Expansion Tasks (4 tasks)

| Source Space | Target | Source Size | Target Size |
|---|---|---|---|
| *, LoRA, A100, v2.1.0 | Full | 316 | 264 |
| llama2-7b, LoRA, A100, v2.1 | Full | 110 | 101 |
| granite-13b, LoRA, A100, v2.1 | Full | 96 | 54 |
| granite-3b, LoRA, A100, v2.1 | Full | 110 | 110 |

Version Expansion Tasks (4 tasks)

| Source Space | Target | Source Size | Target Size |
|---|---|---|---|
| *, LoRA, A100, v2.1.0 | v2.0 | 316 | 174 |
| llama2-7b, LoRA, A100, v2.1 | v2.0 | 110 | 60 |
| granite-13b, LoRA, A100, v2.1 | v2.0 | 96 | 40 |
| granite-3b, LoRA, A100, v2.1 | v2.0 | 110 | 74 |

Note: * indicates the entire domain is present in the source space.

Considerations for Using the Data

Research Context

This dataset is being used for research purposes to evaluate predictive modeling methods, particularly:

  • Transfer learning approaches
  • Performance prediction models
  • Handling categorical configuration space expansion
  • Sample-efficient model building strategies

Data Characteristics

  1. Hardware-Specific: Results are specific to NVIDIA A100-80GB and L40S-48GB GPUs
  2. Software-Specific: Measurements were taken with specific software stack versions (v2.0.0, v2.1.0; the version field)
  3. Invalid Configurations Excluded:
    • Configurations where batch_size is not divisible by number_gpus
    • Configurations exceeding GPU memory limits
  4. Synthetic Dataset: Throughput measured using synthetic training data
  5. Single Epoch: Measurements represent single-pass throughput, not full training convergence

Citation Information

If you use this dataset in your research, please cite:

@misc{lotito2026finetuning,
  title={LLM Fine-Tuning Performance Benchmark Dataset},
  author={Lotito, Daniele and Venugopal, Srikumar and 
          Vassiliadis, Vassilis and Pinto, Christian and 
          Pomponio, Alessandro and Johnston, Michael},
  howpublished={Hugging Face Datasets},
  url = {https://huggingface.co/datasets/ibm-research/LLM_Fine-Tuning_Performance/},
  year={2026}
}