This dataset contains curated subsets of various thermal stability measurements.
Useful for training models to predict various thermal stability metrics, or evaluating stability effects of mutations.
## Quickstart Usage

### Install HuggingFace Datasets package

Each subset can be loaded into Python using the Hugging Face [datasets](https://huggingface.co/docs/datasets/index) library.

First, from the command line, install the `datasets` library:

```shell
$ pip install datasets
```

then, from within Python, import the library:

```python
>>> import datasets
```

### Load Dataset

Load one of the `RosettaCommons/FireProtDB2` subsets:

```python
>>> mutation_dg = datasets.load_dataset('RosettaCommons/FireProtDB2', name='mutation_dg')
Downloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 11.7M/11.7M [00:00<00:00, 13.8MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 1.89M/1.89M [00:00<00:00, 6.71MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 1.88M/1.88M [00:00<00:00, 3.74MB/s]
Generating train split: 100%|████████████████████████████████████████████████████████████████████████████| 410591/410591 [00:00<00:00, 1268242.15 examples/s]
Generating validation split: 100%|█████████████████████████████████████████████████████████████████████████| 50465/50465 [00:00<00:00, 1280516.59 examples/s]
Generating test split: 100%|███████████████████████████████████████████████████████████████████████████████| 50115/50115 [00:00<00:00, 1557480.33 examples/s]
```

and the dataset is loaded as a `datasets.DatasetDict` whose `train`, `validation`, and `test` splits are each a `datasets.arrow_dataset.Dataset`:

```python
>>> mutation_dg
DatasetDict({
    train: Dataset({
        features: ['sequence_length', 'protein_name', 'organism', 'uniprotkb', 'ec_number', 'interpro', 'pmid', 'doi', 'publication_year', 'source_dataset', 'referencing_dataset', 'wwpdb_raw', 'ddg', 'domainome_ddg', 'dg', 'dh', 'dhvh', 'tm', 'dtm', 'exp_temperature', 'fitness', 'ph', 'buffer_norm', 'method_norm', 'measure_norm', 'stabilizing', 'buffer_raw', 'buffer_conc_raw', 'ion_raw', 'ion_conc_raw', 'state', 'pdb_id', 'pdb_ids', 'wt_residue', 'position', 'mut_residue', 'mutation', 'sequence', 'sequence_len_uniprot', 'sequence_length_num', 'length_match', 'protein_id', 'cluster_id', 'split'],
        num_rows: 410591
    })
    validation: Dataset({
        features: ['sequence_length', 'protein_name', 'organism', 'uniprotkb', 'ec_number', 'interpro', 'pmid', 'doi', 'publication_year', 'source_dataset', 'referencing_dataset', 'wwpdb_raw', 'ddg', 'domainome_ddg', 'dg', 'dh', 'dhvh', 'tm', 'dtm', 'exp_temperature', 'fitness', 'ph', 'buffer_norm', 'method_norm', 'measure_norm', 'stabilizing', 'buffer_raw', 'buffer_conc_raw', 'ion_raw', 'ion_conc_raw', 'state', 'pdb_id', 'pdb_ids', 'wt_residue', 'position', 'mut_residue', 'mutation', 'sequence', 'sequence_len_uniprot', 'sequence_length_num', 'length_match', 'protein_id', 'cluster_id', 'split'],
        num_rows: 50465
    })
    test: Dataset({
        features: ['sequence_length', 'protein_name', 'organism', 'uniprotkb', 'ec_number', 'interpro', 'pmid', 'doi', 'publication_year', 'source_dataset', 'referencing_dataset', 'wwpdb_raw', 'ddg', 'domainome_ddg', 'dg', 'dh', 'dhvh', 'tm', 'dtm', 'exp_temperature', 'fitness', 'ph', 'buffer_norm', 'method_norm', 'measure_norm', 'stabilizing', 'buffer_raw', 'buffer_conc_raw', 'ion_raw', 'ion_conc_raw', 'state', 'pdb_id', 'pdb_ids', 'wt_residue', 'position', 'mut_residue', 'mutation', 'sequence', 'sequence_len_uniprot', 'sequence_length_num', 'length_match', 'protein_id', 'cluster_id', 'split'],
        num_rows: 50115
    })
})
```

which is a column-oriented (Apache Arrow) format. Each split can be accessed directly, converted into a `pandas.DataFrame`, or written out to `parquet`, e.g.

```python
>>> mutation_dg['train'].data.column('protein_name')
>>> mutation_dg['train'].to_pandas()
>>> mutation_dg['train'].to_parquet("dataset.parquet")
```

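Once a split is converted to pandas, standard DataFrame operations apply. Below is a minimal sketch using a small hand-made frame in place of the real `to_pandas()` output; the column names `mutation`, `ddg`, and `stabilizing` come from the feature list above, but the values are invented for illustration:

```python
import pandas as pd

# Toy stand-in for mutation_dg['train'].to_pandas(); the real frame
# carries the full feature list shown above. All values are invented.
df = pd.DataFrame({
    "mutation":    ["A23G", "L45P", "V10I"],
    "ddg":         [-1.2, 2.3, -0.4],   # free-energy change of the mutation
    "stabilizing": [True, False, True],
})

# Keep only mutations flagged as stabilizing and summarize their ddg
stab = df[df["stabilizing"]]
print(len(stab))                      # prints 2
print(round(stab["ddg"].mean(), 3))   # prints -0.8
```

The same pattern extends to the other features, e.g. grouping by `protein_id` or `cluster_id` before computing per-protein statistics.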
## Dataset Structure

Subsets included are: