title: string (lengths 1–290)
body: string (lengths 0–228k, nullable)
html_url: string (lengths 46–51)
comments: list
pull_request: dict
number: int64 (values 1–5.59k)
is_pull_request: bool (2 classes)
adding capes
Adding a parallel corpus of theses and dissertation abstracts in Portuguese and English from CAPES: https://sites.google.com/view/felipe-soares/datasets#h.p_kxOR6EhHm2a6
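A minimal loading sketch for the finished dataset (the dataset id `capes` and the `translation` feature layout are assumptions based on how other parallel corpora in the library are exposed, not statements from the PR):
```python
from datasets import load_dataset

# Hypothetical usage once the PR is merged; id and field names are assumed.
ds = load_dataset("capes", split="train")

# Parallel corpora in the library usually expose one language pair per example.
example = ds[0]
print(example["translation"]["en"])
print(example["translation"]["pt"])
```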
https://github.com/huggingface/datasets/pull/1307
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1307", "html_url": "https://github.com/huggingface/datasets/pull/1307", "diff_url": "https://github.com/huggingface/datasets/pull/1307.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1307.patch", "merged_at": "2020-12-09T15:27:45" }
1,307
true
add W&I + LOCNESS dataset (BEA-2019 workshop shared task on GEC)
- **Name:** W&I + LOCNESS dataset (from the BEA-2019 workshop shared task on GEC) - **Description:** https://www.cl.cam.ac.uk/research/nl/bea2019st/#data - **Paper:** https://www.aclweb.org/anthology/W19-4406/ - **Motivation:** This is a recent dataset (actually two in one) for grammatical error correction and is used for benchmarking in this field of NLP. ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [x] Fill the `_DESCRIPTION` and `_CITATION` variables - [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [x] Generate the metadata file `dataset_infos.json` for all configurations - [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [x] Both tests for the real data and the dummy data pass.
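A rough usage sketch for the resulting dataset (the id `wi_locness` and the per-corpus config names are assumptions; the script was eventually merged via the clean PR referenced in the comments):
```python
from datasets import load_dataset

# Assumed id and config: the shared task ships W&I and LOCNESS as separate corpora.
wi = load_dataset("wi_locness", "wi", split="train")
print(wi[0])
```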
https://github.com/huggingface/datasets/pull/1306
[ "I created a clean PR where I also incorporated the suggested changes here: https://github.com/huggingface/datasets/pull/1449\r\n" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1306", "html_url": "https://github.com/huggingface/datasets/pull/1306", "diff_url": "https://github.com/huggingface/datasets/pull/1306.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1306.patch", "merged_at": null }
1,306
true
[README] Added Windows command to enable slow tests
The Windows command to run slow tests has caused issues, so this adds a functional Windows command.
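The exact command is not reproduced in this description; as a rough, shell-agnostic equivalent (assuming the test suite gates slow tests on a `RUN_SLOW` environment variable, which is what the Unix one-liner sets inline):
```python
import os
import pytest

# Setting the flag from Python sidesteps the Windows-specific syntax for
# inline environment variables that caused the original issue (assumption:
# the suite checks RUN_SLOW to enable slow dataset tests).
os.environ["RUN_SLOW"] = "1"
pytest.main(["tests/test_dataset_common.py", "-k", "test_load_real_dataset"])
```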
https://github.com/huggingface/datasets/pull/1305
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1305", "html_url": "https://github.com/huggingface/datasets/pull/1305", "diff_url": "https://github.com/huggingface/datasets/pull/1305.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1305.patch", "merged_at": "2020-12-08T13:56:32" }
1,305
true
adding eitb_parcc
Adding EiTB-ParCC: Parallel Corpus of Comparable News http://opus.nlpl.eu/EiTB-ParCC.php
https://github.com/huggingface/datasets/pull/1304
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1304", "html_url": "https://github.com/huggingface/datasets/pull/1304", "diff_url": "https://github.com/huggingface/datasets/pull/1304.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1304.patch", "merged_at": "2020-12-09T18:02:03" }
1,304
true
adding opus_openoffice
Adding OPUS OpenOffice (http://opus.nlpl.eu/OpenOffice.php): 8 languages, 28 bitexts.
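A loading sketch for one of the 28 bitexts; the config name below is an assumed language-pair identifier, not taken from the PR:
```python
from datasets import load_dataset

# Assumed language-pair config for one bitext.
ds = load_dataset("opus_openoffice", "en_GB-fr", split="train")
print(ds[0]["translation"])
```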
https://github.com/huggingface/datasets/pull/1303
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1303", "html_url": "https://github.com/huggingface/datasets/pull/1303", "diff_url": "https://github.com/huggingface/datasets/pull/1303.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1303.patch", "merged_at": "2020-12-10T09:37:10" }
1,303
true
Add Danish NER dataset
https://github.com/huggingface/datasets/pull/1302
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1302", "html_url": "https://github.com/huggingface/datasets/pull/1302", "diff_url": "https://github.com/huggingface/datasets/pull/1302.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1302.patch", "merged_at": "2020-12-10T09:35:26" }
1,302
true
arxiv dataset added
**Adding the arXiv dataset**: arXiv dataset and metadata of 1.7M+ scholarly papers across STEM. Dataset link: https://www.kaggle.com/Cornell-University/arxiv
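Because the source lives on Kaggle, loading will presumably go through a manually downloaded copy; a hedged sketch (the dataset id, the `data_dir` requirement, and the directory name are assumptions):
```python
from datasets import load_dataset

# Assumption: the Kaggle metadata JSON was downloaded by hand into ./arxiv_data
# and the loading script picks it up via data_dir.
ds = load_dataset("arxiv_dataset", data_dir="./arxiv_data", split="train")
print(ds[0]["title"])
print(ds[0]["abstract"][:200])
```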
https://github.com/huggingface/datasets/pull/1301
[ "Readme added\r\n", "@lhoestq is it looking alright ? " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1301", "html_url": "https://github.com/huggingface/datasets/pull/1301", "diff_url": "https://github.com/huggingface/datasets/pull/1301.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1301.patch", "merged_at": "2020-12-09T18:05:16" }
1,301
true
added dutch_social
WIP, as some tests did not pass. πŸ‘ŽπŸΌ
https://github.com/huggingface/datasets/pull/1300
[ "Closing this since a new pull request has been made. " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1300", "html_url": "https://github.com/huggingface/datasets/pull/1300", "diff_url": "https://github.com/huggingface/datasets/pull/1300.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1300.patch", "merged_at": null }
1,300
true
can't load "german_legal_entity_recognition" dataset
FileNotFoundError: Couldn't find file locally at german_legal_entity_recognition/german_legal_entity_recognition.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/german_legal_entity_recognition/german_legal_entity_recognition.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/german_legal_entity_recognition/german_legal_entity_recognition.py
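As explained in the comments below, the script had not been released yet at the time of the report, so the workaround is to install `datasets` from the master branch and retry the same call:
```python
# Workaround from the thread: install the library from source first, e.g.
#   pip install git+https://github.com/huggingface/datasets.git@master
from datasets import load_dataset

# Same call as in the report; it only resolves once the script ships with the library.
dataset = load_dataset("german_legal_entity_recognition")
print(dataset)
```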
https://github.com/huggingface/datasets/issues/1299
[ "Please if you could tell me more about the error? \r\n\r\n1. Please check the directory you've been working on\r\n2. Check for any typos", "> Please if you could tell me more about the error?\r\n> \r\n> 1. Please check the directory you've been working on\r\n> 2. Check for any typos\r\n\r\nError happens during the execution of this line:\r\ndataset = load_dataset(\"german_legal_entity_recognition\")\r\n\r\nAlso, when I try to open mentioned links via Opera I have errors \"404: Not Found\" and \"This XML file does not appear to have any style information associated with it. The document tree is shown below.\" respectively.", "Hello @nataly-obr, the `german_legal_entity_recognition` dataset has not yet been released (it is part of the coming soon v2 release).\r\n\r\nYou can still access it now if you want, but you will need to install `datasets` via the master branch:\r\n`pip install git+https://github.com/huggingface/datasets.git@master`\r\n\r\nPlease let me know if it solves the issue :) " ]
null
1,299
false
Add OPUS Ted Talks 2013
https://github.com/huggingface/datasets/pull/1298
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1298", "html_url": "https://github.com/huggingface/datasets/pull/1298", "diff_url": "https://github.com/huggingface/datasets/pull/1298.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1298.patch", "merged_at": "2020-12-16T16:57:49" }
1,298
true
OPUS Ted Talks 2013
https://github.com/huggingface/datasets/pull/1297
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1297", "html_url": "https://github.com/huggingface/datasets/pull/1297", "diff_url": "https://github.com/huggingface/datasets/pull/1297.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1297.patch", "merged_at": null }
1,297
true
The Snips Built In Intents 2016 dataset.
This PR proposes to add the Snips.ai built-in intents dataset. The first configuration added covers the intent labels only, but the dataset also includes entity slots that may be added as alternate configurations in the future.
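A short usage sketch for the intent-label configuration described above (the dataset id and field names are assumptions, not taken from the PR):
```python
from datasets import load_dataset

# Assumed id and fields: one utterance per example with an intent class label.
ds = load_dataset("snips_built_in_intents", split="train")
print(ds.features["label"].names)      # the built-in intent names
print(ds[0]["text"], "->", ds[0]["label"])
```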
https://github.com/huggingface/datasets/pull/1296
[ "It is not clear how to automatically add the dummy data if the source data is a more complex json format. Should I manually take a fraction of the source data and include it as dummy data?", "Will tag the dataset and update the dataset card." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1296", "html_url": "https://github.com/huggingface/datasets/pull/1296", "diff_url": "https://github.com/huggingface/datasets/pull/1296.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1296.patch", "merged_at": null }
1,296
true
add hrenwac_para
https://github.com/huggingface/datasets/pull/1295
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1295", "html_url": "https://github.com/huggingface/datasets/pull/1295", "diff_url": "https://github.com/huggingface/datasets/pull/1295.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1295.patch", "merged_at": "2020-12-11T17:42:20" }
1,295
true
adding opus_euconst
Adding EUconst, a parallel corpus collected from the European Constitution: 21 languages, 210 bitexts.
https://github.com/huggingface/datasets/pull/1294
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1294", "html_url": "https://github.com/huggingface/datasets/pull/1294", "diff_url": "https://github.com/huggingface/datasets/pull/1294.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1294.patch", "merged_at": "2020-12-08T18:41:22" }
1,294
true
add hrenwac_para
https://github.com/huggingface/datasets/pull/1293
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1293", "html_url": "https://github.com/huggingface/datasets/pull/1293", "diff_url": "https://github.com/huggingface/datasets/pull/1293.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1293.patch", "merged_at": null }
1,293
true
arXiv dataset added
https://github.com/huggingface/datasets/pull/1292
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1292", "html_url": "https://github.com/huggingface/datasets/pull/1292", "diff_url": "https://github.com/huggingface/datasets/pull/1292.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1292.patch", "merged_at": null }
1,292
true
adding pubmed_qa dataset
PubMed QA dataset: PQA-L(abeled) 1k, PQA-U(nlabeled) 61.2k, PQA-A(rtificially labeled) 211.3k
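A loading sketch for one of the three subsets listed above (the config names are assumed to mirror the PQA-L / PQA-U / PQA-A naming):
```python
from datasets import load_dataset

# Assumed config name for the ~1k expert-labeled subset.
pqa_l = load_dataset("pubmed_qa", "pqa_labeled", split="train")
print(pqa_l[0]["question"])
print(pqa_l[0]["final_decision"])  # field name assumed: yes / no / maybe
```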
https://github.com/huggingface/datasets/pull/1291
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1291", "html_url": "https://github.com/huggingface/datasets/pull/1291", "diff_url": "https://github.com/huggingface/datasets/pull/1291.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1291.patch", "merged_at": "2020-12-09T08:54:50" }
1,291
true
imdb dataset cannot be downloaded
Hi, please find below the error I get when loading the imdb train split. Thanks. `>>> datasets.load_dataset("imdb", split="train")` errors: ``` cahce dir /idiap/temp/rkarimi/cache_home_1/datasets cahce dir /idiap/temp/rkarimi/cache_home_1/datasets Downloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown size, total: 207.28 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/imdb/plain_text/1.0.0/90099cb476936b753383ba2ae6ab2eae419b2e87f71cd5189cb9c8e5814d12a3... cahce dir /idiap/temp/rkarimi/cache_home_1/datasets cahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 558, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 73, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=7486451, num_examples=5628, dataset_name='imdb')}] ```
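As suggested in the comments, the first thing to try is emptying the partially written cache (the path comes from the log above) and retrying; the thread ultimately reports the error going away after upgrading to `datasets` 1.1.3. A sketch of the cache-clearing attempt:
```python
import shutil
from datasets import load_dataset

# Remove the incomplete imdb cache that triggers NonMatchingSplitsSizesError,
# then retry the download (path taken from the reporter's log).
shutil.rmtree("/idiap/temp/rkarimi/cache_home_1/datasets/imdb", ignore_errors=True)
dataset = load_dataset("imdb", split="train")
```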
https://github.com/huggingface/datasets/issues/1290
[ "Hi @rabeehk , I am unable to reproduce your problem locally.\r\nCan you try emptying the cache (removing the content of `/idiap/temp/rkarimi/cache_home_1/datasets`) and retry ?", "Hi,\r\nthanks, I did remove the cache and still the same error here\r\n\r\n```\r\n>>> a = datasets.load_dataset(\"imdb\", split=\"train\")\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\nDownloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown size, total: 207.28 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/imdb/plain_text/1.0.0/90099cb476936b753383ba2ae6ab2eae419b2e87f71cd5189cb9c8e5814d12a3...\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads\r\nTraceback (most recent call last): \r\n File \"<stdin>\", line 1, in <module>\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py\", line 558, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py\", line 73, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=4902716, num_examples=3680, dataset_name='imdb')}]\r\n```\r\n\r\ndatasets version\r\n```\r\ndatasets 1.1.2 <pip>\r\ntensorflow-datasets 4.1.0 <pip>\r\n\r\n```", "resolved with moving to version 1.1.3" ]
null
1,290
false
Jigsaw toxicity classification dataset added
The dataset requires manually downloading data from Kaggle.
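Since the data has to be fetched from Kaggle by hand, loading presumably goes through `data_dir`; a hedged sketch (the dataset id and directory layout are assumptions):
```python
from datasets import load_dataset

# Assumption: the Kaggle archive was unzipped into ./jigsaw_data beforehand.
ds = load_dataset("jigsaw_toxicity_pred", data_dir="./jigsaw_data", split="train")
print(ds[0])
```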
https://github.com/huggingface/datasets/pull/1289
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1289", "html_url": "https://github.com/huggingface/datasets/pull/1289", "diff_url": "https://github.com/huggingface/datasets/pull/1289.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1289.patch", "merged_at": null }
1,289
true
Add CodeSearchNet corpus dataset
This PR adds the CodeSearchNet corpus proxy dataset for semantic code search: https://github.com/github/CodeSearchNet I have had a few issues, mentioned below. Would appreciate some help on how to solve them. ## Issues generating dataset card Is there something wrong with my declaration of the dataset features ? ``` features=datasets.Features( { "repository_name": datasets.Value("string"), "func_path_in_repository": datasets.Value("string"), "func_name": datasets.Value("string"), "whole_func_string": datasets.Value("string"), "language": datasets.Value("string"), "func_code_string": datasets.Value("string"), "func_code_tokens": datasets.Sequence(datasets.Value("string")), "func_documentation_string": datasets.Value("string"), "func_documentation_tokens": datasets.Sequence(datasets.Value("string")), "split_name": datasets.Value("string"), "func_code_url": datasets.Value("string"), # TODO - add licensing info in the examples } ), ``` When running the streamlite app for tagging the dataset on my machine, I get the following error : ![image](https://user-images.githubusercontent.com/33657802/101469132-9ed12c80-3944-11eb-94ff-2d9c1d0ea080.png) ## Issues with dummy data Due to the unusual structure of the data, I have been unable to generate dummy data automatically. I tried to generate it manually, but pytests fail when using the manually-generated dummy data ! Pytests work fine when using the real data. ``` ============================================================================================== test session starts ============================================================================================== platform linux -- Python 3.7.9, pytest-6.1.2, py-1.9.0, pluggy-0.13.1 plugins: xdist-2.1.0, forked-1.3.0 collected 1 item tests/test_dataset_common.py F [100%] =================================================================================================== FAILURES ==================================================================================================== ________________________________________________________________________ LocalDatasetTest.test_load_dataset_all_configs_code_search_net _________________________________________________________________________ self = <tests.test_dataset_common.LocalDatasetTest testMethod=test_load_dataset_all_configs_code_search_net>, dataset_name = 'code_search_net' @slow def test_load_dataset_all_configs(self, dataset_name): configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True) > self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True) tests/test_dataset_common.py:237: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ tests/test_dataset_common.py:198: in check_load_dataset self.parent.assertTrue(len(dataset[split]) > 0) E AssertionError: False is not true --------------------------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------------------------- Downloading and preparing dataset code_search_net/all (download: 1.00 MiB, generated: 1.00 MiB, post-processed: Unknown size, total: 2.00 MiB) to /tmp/tmppx78sj24/code_search_net/all/1.0.0... Dataset code_search_net downloaded and prepared to /tmp/tmppx78sj24/code_search_net/all/1.0.0. Subsequent calls will reuse this data. 
--------------------------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------------------------- ... (irrelevant info - Deprecation warnings) ============================================================================================ short test summary info ============================================================================================ FAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_code_search_net - AssertionError: False is not true ========================================================================================= 1 failed, 4 warnings in 3.00s ======================================================================================== ``` ## Note : Data structure in S3 The data is stored on S3, and organized by programming languages. It is stored in the following repository structure: ``` . β”œβ”€β”€ <language_name> # e.g. python β”‚Β Β  └── final β”‚Β Β  └── jsonl β”‚Β Β  β”œβ”€β”€ test β”‚Β Β  β”‚Β Β  └── <language_name>_test_0.jsonl.gz β”‚Β Β  β”œβ”€β”€ train β”‚Β Β  β”‚Β Β  β”œβ”€β”€ <language_name>_train_0.jsonl.gz β”‚Β Β  β”‚Β Β  β”œβ”€β”€ <language_name>_train_1.jsonl.gz β”‚Β Β  β”‚Β Β  β”œβ”€β”€ ... β”‚Β Β  β”‚Β Β  └── <language_name>_train_n.jsonl.gz β”‚Β Β  └── valid β”‚Β Β  └── <language_name>_valid_0.jsonl.gz β”œβ”€β”€ <language_name>_dedupe_definitions_v2.pkl └── <language_name>_licenses.pkl ```
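For reference, given the per-language S3 layout above, a per-language config (plus an `all` config) is the natural interface; a usage sketch built on the feature names declared in the PR:
```python
from datasets import load_dataset

# Config name "python" assumed from the per-language layout; "all" covers every language.
ds = load_dataset("code_search_net", "python", split="train")

sample = ds[0]
print(sample["func_name"])
print(sample["func_documentation_string"])
print(sample["func_code_string"][:200])
```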
https://github.com/huggingface/datasets/pull/1288
[ "@lhoestq ready for a second review" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1288", "html_url": "https://github.com/huggingface/datasets/pull/1288", "diff_url": "https://github.com/huggingface/datasets/pull/1288.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1288.patch", "merged_at": "2020-12-09T17:05:27" }
1,288
true
'iwslt2017-ro-nl', cannot be downloaded
Hi I am trying `>>> datasets.load_dataset("iwslt2017", 'iwslt2017-ro-nl', split="train")` getting this error thank you for your help ``` cahce dir /idiap/temp/rkarimi/cache_home_1/datasets cahce dir /idiap/temp/rkarimi/cache_home_1/datasets Downloading and preparing dataset iwsl_t217/iwslt2017-ro-nl (download: 314.07 MiB, generated: 39.92 MiB, post-processed: Unknown size, total: 354.00 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/iwsl_t217/iwslt2017-ro-nl/1.0.0/cca6935a0851a8ceac1202a62c958738bdfa23c57a51bc52ac1c5ebd2aa172cd... cahce dir /idiap/temp/rkarimi/cache_home_1/datasets cahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/iwslt2017/cca6935a0851a8ceac1202a62c958738bdfa23c57a51bc52ac1c5ebd2aa172cd/iwslt2017.py", line 118, in _split_generators dl_dir = dl_manager.download_and_extract(MULTI_URL) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 216, in map_nested return function(data_struct) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 477, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz ```
https://github.com/huggingface/datasets/issues/1287
[ "the same issue with datasets.load_dataset(\"iwslt2017\", 'iwslt2017-en-nl', split=split), ..... ", "even with setting master like the following command, still remains \r\n\r\ndatasets.load_dataset(\"iwslt2017\", 'iwslt2017-en-nl', split=\"train\", script_version=\"master\")\r\n", "Looks like the data has been moved from its original location to google drive\r\n\r\nNew url: https://drive.google.com/u/0/uc?id=12ycYSzLIG253AFN35Y6qoyf9wtkOjakp&export=download", "Fixed by #4481 " ]
null
1,287
false
[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted
Hi I am getting this error when evaluating on wmt16-ro-en using finetune_trainer.py of huggingface repo. thank for your help {'epoch': 20.0} 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 20/20 [00:16<00:00, 1.22it/s] 12/08/2020 10:41:19 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs/experiment/joint/finetune/lr-2e-5 12/08/2020 10:41:24 - INFO - __main__ - {'wmt16-en-ro': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 1998), 'qnli': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 5462), 'scitail': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 1303)} 12/08/2020 10:41:24 - INFO - __main__ - *** Evaluate *** 12/08/2020 10:41:24 - INFO - seq2seq.utils.utils - using task specific params for wmt16-en-ro: {'max_length': 300, 'num_beams': 4} 12/08/2020 10:41:24 - INFO - seq2seq.trainers.trainer - ***** Running Evaluation ***** 12/08/2020 10:41:24 - INFO - seq2seq.trainers.trainer - Num examples = 1998 12/08/2020 10:41:24 - INFO - seq2seq.trainers.trainer - Batch size = 64 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 32/32 [00:37<00:00, 1.19s/it][libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted
https://github.com/huggingface/datasets/issues/1286
[ "I remember also getting the same issue for several other translation datasets like all the iwslt2017 group, this is blokcing me and I really need to fix it and I was wondering if you have an idea on this. @lhoestq thanks,. ", "maybe there is an empty line or something inside these datasets? could you tell me why this is happening? thanks ", "I just checked and the wmt16 en-ro doesn't have empty lines\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"wmt16\", \"ro-en\", split=\"train\")\r\nlen(d) # 610320\r\nlen(d.filter(lambda x: len(x[\"translation\"][\"en\"].strip()) > 0)) # 610320\r\nlen(d.filter(lambda x: len(x[\"translation\"][\"ro\"].strip()) > 0)) # 610320\r\n# also tested for split=\"validation\" and \"test\"\r\n```\r\n\r\nCan you open an issue on the `transformers` repo ? also cc @sgugger ", "Hi @lhoestq \r\nI am not really sure which part is causing this, to me this is more related to dataset library as this is happening for some of the datassets below please find the information to reprodcue the bug, this is really blocking me and I appreciate your help\r\n\r\n\r\n## Environment info\r\n- `transformers` version: 3.5.1\r\n- Platform: GPU\r\n- Python version: 3.7 \r\n- PyTorch version (GPU?): 1.0.4\r\n- Tensorflow version (GPU?): - \r\n- Using GPU in script?: - \r\n- Using distributed or parallel set-up in script?: - \r\n\r\n### Who can help\r\n tokenizers: @mfuntowicz\r\n Trainer: @sgugger\r\n TextGeneration: @TevenLeScao \r\n nlp datasets: [different repo](https://github.com/huggingface/nlp)\r\n rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)\r\n examples/seq2seq: @patil-suraj\r\n\r\n## Information\r\nHi\r\nI am testing seq2seq model with T5 on different datasets and this is always getting the following bug, this is really blocking me as this fails for many datasets. could you have a look please? 
thanks \r\n\r\n```\r\n[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): \r\nterminate called after throwing an instance of 'google::protobuf::FatalException'\r\n what(): CHECK failed: (index) >= (0): \r\nAborted\r\n\r\n```\r\n\r\nTo reproduce the error please run on 1 GPU:\r\n```\r\ngit clone git@github.com:rabeehk/debug-seq2seq.git\r\npython setup.py develop \r\ncd seq2seq \r\npython finetune_t5_trainer.py temp.json\r\n\r\n```\r\n\r\nFull output of the program:\r\n\r\n```\r\n(internship) rkarimi@vgnh008:/idiap/user/rkarimi/dev/debug-seq2seq/seq2seq$ python finetune_t5_trainer.py temp.json \r\n2020-12-12 15:38:16.234542: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n2020-12-12 15:38:16.234598: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n12/12/2020 15:38:32 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: False\r\n12/12/2020 15:38:32 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(output_dir='outputs/test', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, evaluate_during_training=False, evaluation_strategy=<EvaluationStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=64, per_device_eval_batch_size=64, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=0.01, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=2, max_steps=-1, warmup_steps=500, logging_dir='runs/Dec12_15-38-32_vgnh008', logging_first_step=True, logging_steps=200, save_steps=200, save_total_limit=1, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=200, dataloader_num_workers=0, past_index=-1, run_name='outputs/test', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, label_smoothing=0.1, sortish_sampler=False, predict_with_generate=True, adafactor=False, encoder_layerdrop=None, decoder_layerdrop=None, dropout=None, attention_dropout=None, lr_scheduler='linear', fixed_length_emb=None, encoder_projection=None, encoder_pooling=None, projection_length=None, only_projection_bottleneck=False, concat_projection_token=False, gcs_bucket='ruse-xcloud-bucket', temperature=10, train_adapters=True, do_finetune=True, parametric_task_embedding=False, eval_output_dir='outputs/finetune-adapter/test-n-1-lr-1e-02-e-20')\r\nSome weights of T5ForConditionalGeneration were not initialized from the model checkpoint at t5-small and are newly initialized: ['encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 
'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.0.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.0.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 
'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.1.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.1.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 
'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.2.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.2.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.3.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 
'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.3.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.4.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 
'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.4.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.5.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 
'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.5.layer.1.adapter_controller.post_layer_norm.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.0.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.0.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 
'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.1.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.1.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 
'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.2.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.2.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 
'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.3.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.3.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 
'decoder.block.4.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.4.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.4.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.5.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 
'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.5.layer.2.adapter_controller.post_layer_norm.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140079090376272 acquired on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140079090376272 released on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\nUsing custom data configuration default\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140082549312272 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140082549312272 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140082549365648 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nReusing dataset boolq (/idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534)\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140082549365648 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nLoading cached processed dataset at /idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534/cache-6810ece2a440c3be.arrow\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 acquired on 
/idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 released on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\nUsing custom data configuration default\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549365200 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nReusing dataset boolq (/idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534)\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549365200 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nLoading cached processed dataset at /idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534/cache-9a2822394a3a4e34.arrow\r\n12/12/2020 15:38:45 - INFO - seq2seq.metrics.metrics - selected metric <function build_compute_metrics_fn.<locals>.classification_metrics at 0x7f66b464cc20> for task boolq\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - ***** Running training *****\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Num examples = 10\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Num Epochs = 2\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Instantaneous batch size per device = 64\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 64\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Gradient Accumulation steps = 1\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Total optimization steps = 2\r\n{'loss': 529.79443359375, 'learning_rate': 2e-05, 'epoch': 1.0} \r\n100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 2.37it/s]12/12/2020 15:38:46 - INFO - seq2seq.trainers.trainer - \r\n\r\nTraining completed. 
Do not forget to share your model on huggingface.co/models =)\r\n\r\n\r\n{'epoch': 2.0} \r\n100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 2.43it/s]\r\n12/12/2020 15:38:46 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs/test\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929680 acquired on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929680 released on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\nUsing custom data configuration default\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929360 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929360 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079085355216 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nReusing dataset boolq (/idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534)\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079085355216 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nLoading cached processed dataset at /idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534/cache-164dd1d57e9fa69a.arrow\r\n12/12/2020 15:38:59 - INFO - seq2seq.metrics.metrics - selected metric <function build_compute_metrics_fn.<locals>.classification_metrics at 0x7f66b40c67a0> for task boolq\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - ***** Running training *****\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Num examples = 1\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Num Epochs = 2\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Instantaneous batch size per device = 64\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Total train batch size (w. 
parallel, distributed & accumulation) = 64\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Gradient Accumulation steps = 1\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Total optimization steps = 2\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Continuing training from checkpoint, will skip to saved global_step\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Continuing training from epoch 2\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Continuing training from global step 2\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Will skip the first 0 steps in the first epoch\r\n 0%| | 0/2 [00:00<?, ?it/s]12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - \r\n\r\nTraining completed. Do not forget to share your model on huggingface.co/models =)\r\n\r\n\r\n{'epoch': 2.0} \r\n 0%| | 0/2 [00:00<?, ?it/s]\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs/finetune-adapter/test-n-1-lr-1e-02-e-20/boolq\r\n12/12/2020 15:39:07 - INFO - seq2seq.utils.utils - using task specific params for boolq: {'max_length': 3}\r\n12/12/2020 15:39:07 - INFO - seq2seq.trainers.trainer - ***** Running Evaluation *****\r\n12/12/2020 15:39:07 - INFO - seq2seq.trainers.trainer - Num examples = 3269\r\n12/12/2020 15:39:07 - INFO - seq2seq.trainers.trainer - Batch size = 64\r\n100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 52/52 [00:12<00:00, 4.86it/s][libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): \r\nterminate called after throwing an instance of 'google::protobuf::FatalException'\r\n what(): CHECK failed: (index) >= (0): \r\nAborted\r\n```\r\n\r\n\r\n\r\n", "solved see https://github.com/huggingface/transformers/issues/9079?_pjax=%23js-repo-pjax-container ", "Hii please follow me" ]
null
1,286
false
boolq does not work
Hi I am getting this error when trying to load boolq, thanks for your help

ts_boolq_default_0.1.0_2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11.lock

Traceback (most recent call last):
  File "finetune_t5_trainer.py", line 274, in <module>
    main()
  File "finetune_t5_trainer.py", line 147, in main
    for task in data_args.tasks]
  File "finetune_t5_trainer.py", line 147, in <listcomp>
    for task in data_args.tasks]
  File "/remote/idiap.svm/user.active/rkarimi/dev/ruse/seq2seq/tasks/tasks.py", line 58, in get_dataset
    dataset = self.load_dataset(split=split)
  File "/remote/idiap.svm/user.active/rkarimi/dev/ruse/seq2seq/tasks/tasks.py", line 54, in load_dataset
    return datasets.load_dataset(self.task.name, split=split)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
    ignore_verifications=ignore_verifications,
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators
    downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 149, in download_custom
    custom_download(url, path)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 516, in copy_v2
    compat.path_to_bytes(src), compat.path_to_bytes(dst), overwrite)
tensorflow.python.framework.errors_impl.AlreadyExistsError: file already exists
https://github.com/huggingface/datasets/issues/1285
[ "here is the minimal code to reproduce\r\n\r\n`datasets>>> datasets.load_dataset(\"boolq\", \"train\")\r\n\r\nthe errors\r\n\r\n```\r\n`cahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\nUsing custom data configuration train\r\nDownloading and preparing dataset boolq/train (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /idiap/temp/rkarimi/cache_home_1/datasets/boolq/train/0.1.0/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11...\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py\", line 531, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \" /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py\", line 74, in _split_generators\r\n downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py\", line 149, in download_custom\r\n custom_download(url, path)\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py\", line 516, in copy_v2\r\n compat.path_to_bytes(src), compat.path_to_bytes(dst), overwrite)\r\n\r\n\r\n\r\n```", "This has been fixed by #881 \r\nthis fix will be available in the next release soon.\r\n\r\nIf you don't want to wait for the release you can actually load the latest version of boolq by specifying `script_version=\"master\"` in `load_dataset`", "thank you this solved this issue, for now seems to work, thanks " ]
null
1,285
false
Update coqa dataset url
`datasets.stanford.edu` is invalid.
https://github.com/huggingface/datasets/pull/1284
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1284", "html_url": "https://github.com/huggingface/datasets/pull/1284", "diff_url": "https://github.com/huggingface/datasets/pull/1284.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1284.patch", "merged_at": "2020-12-08T18:19:09" }
1,284
true
Add dutch book review dataset
- Name: Dutch Book Review Dataset (DBRD)
- Description: The DBRD (pronounced dee-bird) dataset contains over 110k book reviews along with associated binary sentiment polarity labels and is intended as a benchmark for sentiment classification in Dutch.
- Paper: https://arxiv.org/abs/1910.00896
- Data: https://github.com/benjaminvdb/DBRD
- Motivation: A large (real-life) dataset of Dutch book reviews and sentiment polarity (positive/negative), based on the associated rating.

Checks
- [x] Create the dataset script /datasets/dbrd/dbrd.py using the template
- [x] Fill the _DESCRIPTION and _CITATION variables
- [x] Implement _info(), _split_generators() and _generate_examples()
- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.
- [x] Generate the metadata file dataset_infos.json for all configurations
- [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card README.md using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
https://github.com/huggingface/datasets/pull/1283
[ "> Really cool thanks !\r\n> \r\n> I left some (minor) comments\r\n\r\nThank you for your comments! πŸ‘ I went ahead and improved the dataset card using your suggestions and some tweaks of my own. I hope you like it! πŸ˜„" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1283", "html_url": "https://github.com/huggingface/datasets/pull/1283", "diff_url": "https://github.com/huggingface/datasets/pull/1283.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1283.patch", "merged_at": "2020-12-09T17:25:25" }
1,283
true
add thaiqa_squad
Example format is a little different from SQuAD since `thaiqa` always has one answer per question, so I added a check to convert answers to lists if they are not already one, to future-proof additional questions that might have multiple answers. `thaiqa_squad` is an open-domain, extractive question answering dataset (4,000 questions in `train` and 74 questions in `dev`) in [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, originally created by [NECTEC](https://www.nectec.or.th/en/) from Wikipedia articles and adapted to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format by [PyThaiNLP](https://github.com/PyThaiNLP/).
https://github.com/huggingface/datasets/pull/1282
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1282", "html_url": "https://github.com/huggingface/datasets/pull/1282", "diff_url": "https://github.com/huggingface/datasets/pull/1282.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1282.patch", "merged_at": "2020-12-08T18:36:18" }
1,282
true
adding hybrid_qa
Adding HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data https://github.com/wenhuchen/HybridQA
https://github.com/huggingface/datasets/pull/1281
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1281", "html_url": "https://github.com/huggingface/datasets/pull/1281", "diff_url": "https://github.com/huggingface/datasets/pull/1281.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1281.patch", "merged_at": "2020-12-08T18:07:00" }
1,281
true
disaster response messages dataset
https://github.com/huggingface/datasets/pull/1280
[ "I have added the Readme.md as well, the PR is ready for review. \r\n\r\nThank you ", "Hi @lhoestq I have updated the code and files. Please if you could check once.\r\n\r\nThank you" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1280", "html_url": "https://github.com/huggingface/datasets/pull/1280", "diff_url": "https://github.com/huggingface/datasets/pull/1280.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1280.patch", "merged_at": "2020-12-09T16:21:57" }
1,280
true
added para_pat
Dataset link : https://figshare.com/articles/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632 Working on README.md currently
https://github.com/huggingface/datasets/pull/1279
[ "Updated with Translation feature type. Working on dataset tags and README", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1279", "html_url": "https://github.com/huggingface/datasets/pull/1279", "diff_url": "https://github.com/huggingface/datasets/pull/1279.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1279.patch", "merged_at": "2020-12-14T13:41:17" }
1,279
true
Craigslist bargains
`craigslist_bargains` dataset from [here](https://worksheets.codalab.org/worksheets/0x453913e76b65495d8b9730d41c7e0a0c/)
https://github.com/huggingface/datasets/pull/1278
[ "Seeing this in the CircleCI builds, this is what I was originally getting before I started messing around with the download URLS to try to fix this:\r\n\r\n`FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpwvji917g/extracted/d6185140afb24ad8fee67392100a478269cba286b0d88915a137fdf88872de14/dummy_data/train__VARIABLE_MISUSE__SStuB.txt-00001-of-00300'`\r\n\r\nCould this be because of the files in my `dummy_data.zip`? I had to manually create it, and it looked like the test was looking for the following files, so I created the `.zip` with this structure:\r\n\r\n```\r\nArchive: dummy_data.zip\r\n creating: dummy_data/\r\n inflating: dummy_data/blobtest \r\n inflating: dummy_data/parsed.jsontrain \r\n inflating: dummy_data/parsed.jsonvalidation \r\n```", "Going to close this out and link to a new (cleaner) PR" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1278", "html_url": "https://github.com/huggingface/datasets/pull/1278", "diff_url": "https://github.com/huggingface/datasets/pull/1278.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1278.patch", "merged_at": null }
1,278
true
add One Million Posts Corpus
- **Name:** One Million Posts Corpus
- **Description:** The β€œOne Million Posts” corpus is an annotated data set consisting of user comments posted to an Austrian newspaper website (in German language).
- **Paper:** https://dl.acm.org/doi/10.1145/3077136.3080711
- **Data:** https://github.com/OFAI/million-post-corpus
- **Motivation:** Big German (real-life) dataset containing different annotations around forum moderation with expert annotations.

### Checkbox

- [X] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [X] Fill the `_DESCRIPTION` and `_CITATION` variables
- [X] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [X] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [X] Generate the metadata file `dataset_infos.json` for all configurations
- [X] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [X] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [X] Both tests for the real data and the dummy data pass.
https://github.com/huggingface/datasets/pull/1276
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1276", "html_url": "https://github.com/huggingface/datasets/pull/1276", "diff_url": "https://github.com/huggingface/datasets/pull/1276.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1276.patch", "merged_at": "2020-12-11T18:28:18" }
1,276
true
Yoruba GV NER added
I just added Yoruba GV NER dataset from this paper https://www.aclweb.org/anthology/2020.lrec-1.335/
https://github.com/huggingface/datasets/pull/1275
[ "Thank you. Okay, I will add the dataset card." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1275", "html_url": "https://github.com/huggingface/datasets/pull/1275", "diff_url": "https://github.com/huggingface/datasets/pull/1275.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1275.patch", "merged_at": null }
1,275
true
oclar-dataset
The Opinion Corpus for Lebanese Arabic Reviews (OCLAR) is usable for Arabic sentiment classification on reviews, including hotels, restaurants, shops, and others: [homepage](http://archive.ics.uci.edu/ml/datasets/Opinion+Corpus+for+Lebanese+Arabic+Reviews+%28OCLAR%29#)
https://github.com/huggingface/datasets/pull/1274
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1274", "html_url": "https://github.com/huggingface/datasets/pull/1274", "diff_url": "https://github.com/huggingface/datasets/pull/1274.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1274.patch", "merged_at": "2020-12-09T15:36:08" }
1,274
true
Created wiki_movies dataset.
First PR (ever). Hopefully this movies dataset is useful to others!
https://github.com/huggingface/datasets/pull/1273
[ "looks like your PR includes changes about many other files than the ones for wiki_movies\r\n\r\nCan you create another branch and another PR please ?", "I'm happy to. What's the best way to do that (sorry, I'm new to PRs etc.)?", "Sure !\r\n\r\nFirst please save your new dataset files somewhere.\r\nThen you can do in this order:\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit push\r\ngit checkout -b my-new-branch-name\r\n```\r\nThis will create a new branch from the updated master branch.\r\nThen you can re-add your files and commit + push them\r\n\r\nOnce it's done you should be able to create a new PR using your new branch :) ", "Done!", "closing in favor of #1485 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1273", "html_url": "https://github.com/huggingface/datasets/pull/1273", "diff_url": "https://github.com/huggingface/datasets/pull/1273.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1273.patch", "merged_at": null }
1,273
true
Psc
https://github.com/huggingface/datasets/pull/1272
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1272", "html_url": "https://github.com/huggingface/datasets/pull/1272", "diff_url": "https://github.com/huggingface/datasets/pull/1272.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1272.patch", "merged_at": null }
1,272
true
SMS Spam Dataset
Hi :) I added this [SMS Spam Dataset](http://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection)
https://github.com/huggingface/datasets/pull/1271
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1271", "html_url": "https://github.com/huggingface/datasets/pull/1271", "diff_url": "https://github.com/huggingface/datasets/pull/1271.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1271.patch", "merged_at": "2020-12-08T17:42:19" }
1,271
true
add DFKI SmartData Corpus
- **Name:** DFKI SmartData Corpus
- **Description:** DFKI SmartData Corpus is a dataset of 2598 German-language documents which has been annotated with fine-grained geo-entities, such as streets, stops and routes, as well as standard named entity types.
- **Paper:** https://www.dfki.de/fileadmin/user_upload/import/9427_lrec_smartdata_corpus.pdf
- **Data:** https://github.com/DFKI-NLP/smartdata-corpus
- **Motivation:** Contains fine-grained NER labels for German.

### Checkbox

- [X] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [X] Fill the `_DESCRIPTION` and `_CITATION` variables
- [X] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [X] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [X] Generate the metadata file `dataset_infos.json` for all configurations
- [X] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [X] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [X] Both tests for the real data and the dummy data pass.
https://github.com/huggingface/datasets/pull/1270
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1270", "html_url": "https://github.com/huggingface/datasets/pull/1270", "diff_url": "https://github.com/huggingface/datasets/pull/1270.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1270.patch", "merged_at": "2020-12-08T17:41:23" }
1,270
true
Adding OneStopEnglish corpus dataset
This PR adds OneStopEnglish Corpus containing texts classified into reading levels (elementary, intermediate, advance) for automatic readability assessment and text simplification. Link to the paper: https://www.aclweb.org/anthology/W18-0535.pdf
https://github.com/huggingface/datasets/pull/1269
[ "Hi @lhoestq, thanks for the review.\r\nI have made all the changes, PTAL! :) " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1269", "html_url": "https://github.com/huggingface/datasets/pull/1269", "diff_url": "https://github.com/huggingface/datasets/pull/1269.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1269.patch", "merged_at": "2020-12-09T15:33:53" }
1,269
true
new pr for Turkish NER
https://github.com/huggingface/datasets/pull/1268
[ "Can you run `make style` to fix the code format ?\r\n\r\nAlso it looks like the file `file_downloaded/TWNERTC_TC_Coarse Grained NER_DomainIndependent_NoiseReduction.zip/TWNERTC_TC_Coarse Grained NER_DomainIndependent_NoiseReduction.DUMP` is missing inside the dummy_data.zip\r\n\r\n\r\n(note that `TWNERTC_TC_Coarse Grained NER_DomainIndependent_NoiseReduction.zip` is a directory name, not an actual zip file)", "Hi Quentin, thank you for your patience with me. I've fixed the preprocessing pipeline, got this very weird error that Yacine told me to push. I've pushed it and after I'll find out that it will work, I will have my final pr on styling.", "looks like you removed the dataset script file in your latest commit, is it expected ?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1268", "html_url": "https://github.com/huggingface/datasets/pull/1268", "diff_url": "https://github.com/huggingface/datasets/pull/1268.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1268.patch", "merged_at": "2020-12-09T13:45:05" }
1,268
true
Has part
https://github.com/huggingface/datasets/pull/1267
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1267", "html_url": "https://github.com/huggingface/datasets/pull/1267", "diff_url": "https://github.com/huggingface/datasets/pull/1267.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1267.patch", "merged_at": "2020-12-11T18:25:42" }
1,267
true
removing unzipped hansards dummy data
which were added by mistake
https://github.com/huggingface/datasets/pull/1266
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1266", "html_url": "https://github.com/huggingface/datasets/pull/1266", "diff_url": "https://github.com/huggingface/datasets/pull/1266.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1266.patch", "merged_at": "2020-12-07T17:32:28" }
1,266
true
Add CovidQA dataset
This PR adds CovidQA, a question answering dataset specifically designed for COVID-19, built by hand from knowledge gathered from Kaggle’s COVID-19 Open Research Dataset Challenge. Link to the paper: https://arxiv.org/pdf/2004.11339.pdf Link to the homepage: https://covidqa.ai
https://github.com/huggingface/datasets/pull/1265
[ "It seems to share the same name as this dataset: https://openreview.net/forum?id=JENSKEEzsoU", "> It seems to share the same name as this dataset: https://openreview.net/forum?id=JENSKEEzsoU\r\n\r\nyou're right it can be confusing. I'll add the organization/research group for clarity: `covid_qa_castorini`. I added the dataset you shared as `covid_qa_deepset` in another PR (#1182) ", "Thanks for avoiding the name collision !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1265", "html_url": "https://github.com/huggingface/datasets/pull/1265", "diff_url": "https://github.com/huggingface/datasets/pull/1265.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1265.patch", "merged_at": "2020-12-08T17:02:26" }
1,265
true
enriched webnlg dataset rebase
Rebase of #1206 !
https://github.com/huggingface/datasets/pull/1264
[ "I've removed the `en` within `de` and reciprocally; but I don't think I will be able to thin it more than this. (Edit: ignore the close, I missclicked !)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1264", "html_url": "https://github.com/huggingface/datasets/pull/1264", "diff_url": "https://github.com/huggingface/datasets/pull/1264.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1264.patch", "merged_at": "2020-12-09T17:00:27" }
1,264
true
Added kannada news headlines classification dataset.
Manual download of a Kaggle dataset. Mostly followed the same process as ms_terms.
https://github.com/huggingface/datasets/pull/1263
[ "Hi! Let me know if any more comments! Will fix it! :-)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1263", "html_url": "https://github.com/huggingface/datasets/pull/1263", "diff_url": "https://github.com/huggingface/datasets/pull/1263.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1263.patch", "merged_at": "2020-12-09T18:01:31" }
1,263
true
Adding msr_genomics_kbcomp dataset
https://github.com/huggingface/datasets/pull/1262
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1262", "html_url": "https://github.com/huggingface/datasets/pull/1262", "diff_url": "https://github.com/huggingface/datasets/pull/1262.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1262.patch", "merged_at": null }
1,262
true
Add Google Sentence Compression dataset
For more information: https://www.aclweb.org/anthology/D13-1155.pdf
https://github.com/huggingface/datasets/pull/1261
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1261", "html_url": "https://github.com/huggingface/datasets/pull/1261", "diff_url": "https://github.com/huggingface/datasets/pull/1261.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1261.patch", "merged_at": "2020-12-08T17:01:59" }
1,261
true
Added NewsPH Raw Dataset
Added the raw version of the NewsPH dataset, which was used to automatically generate the NewsPH-NLI corpus. Dataset of news articles in Filipino from mainstream Philippine news sites on the internet. Can be used as a language modeling dataset or to reproduce the NewsPH-NLI dataset. Paper: https://arxiv.org/abs/2010.11574 Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
https://github.com/huggingface/datasets/pull/1260
[ "looks like this PR has changes to many files other than the ones for `NewsPH`\r\n\r\nCan you create another branch and another PR please ?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1260", "html_url": "https://github.com/huggingface/datasets/pull/1260", "diff_url": "https://github.com/huggingface/datasets/pull/1260.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1260.patch", "merged_at": null }
1,260
true
Add KorQPair dataset
This PR adds a [Korean paired question dataset](https://github.com/songys/Question_pair) containing labels indicating whether two questions in a given pair are semantically identical. This dataset was used to evaluate the performance of [KoGPT2](https://github.com/SKT-AI/KoGPT2#subtask-evaluations) on a phrase detection downstream task.
https://github.com/huggingface/datasets/pull/1259
[ "dummy data is missing", "Hey @cceyda, thanks for pointing that out. I thought I'd added it, but seems like that wasn't the case. Just pushed a new commit with the dummy data." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1259", "html_url": "https://github.com/huggingface/datasets/pull/1259", "diff_url": "https://github.com/huggingface/datasets/pull/1259.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1259.patch", "merged_at": "2020-12-08T15:11:41" }
1,259
true
arXiv dataset added
https://github.com/huggingface/datasets/pull/1258
[ "Need help" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1258", "html_url": "https://github.com/huggingface/datasets/pull/1258", "diff_url": "https://github.com/huggingface/datasets/pull/1258.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1258.patch", "merged_at": null }
1,258
true
Add Swahili news classification dataset
Add Swahili news classification dataset
https://github.com/huggingface/datasets/pull/1257
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1257", "html_url": "https://github.com/huggingface/datasets/pull/1257", "diff_url": "https://github.com/huggingface/datasets/pull/1257.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1257.patch", "merged_at": "2020-12-08T14:44:19" }
1,257
true
adding LiMiT dataset
Adding LiMiT: The Literal Motion in Text Dataset https://github.com/ilmgut/limit_dataset
https://github.com/huggingface/datasets/pull/1256
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1256", "html_url": "https://github.com/huggingface/datasets/pull/1256", "diff_url": "https://github.com/huggingface/datasets/pull/1256.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1256.patch", "merged_at": "2020-12-08T14:42:51" }
1,256
true
[doc] nlp/viewer ➑️datasets/viewer
cc @srush
https://github.com/huggingface/datasets/pull/1255
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1255", "html_url": "https://github.com/huggingface/datasets/pull/1255", "diff_url": "https://github.com/huggingface/datasets/pull/1255.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1255.patch", "merged_at": "2020-12-08T17:17:53" }
1,255
true
Added WikiText-TL-39
This PR adds the WikiText-TL-39 Filipino Language Modeling dataset. Paper: https://arxiv.org/abs/1907.00409 Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
https://github.com/huggingface/datasets/pull/1254
[ "looks like this PR also includes changes about another dataset `covid_qa_deepset`\r\n\r\nCould you create another branch and another PR that only includes the changes for the wikitext-tl-39 dataset ?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1254", "html_url": "https://github.com/huggingface/datasets/pull/1254", "diff_url": "https://github.com/huggingface/datasets/pull/1254.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1254.patch", "merged_at": null }
1,254
true
add thainer
ThaiNER (v1.3) is a 6,456-sentence named entity recognition dataset created from expanding the 2,258-sentence [unnamed dataset](http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/). It is used to train NER taggers in [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp). The NER tags are annotated by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/) for 2,258 sentences and the rest by [@wannaphong](https://github.com/wannaphong/). The POS tags are done by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s `perceptron` engine trained on `orchid_ud`. [@wannaphong](https://github.com/wannaphong/) is now the only maintainer of this dataset.
https://github.com/huggingface/datasets/pull/1253
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1253", "html_url": "https://github.com/huggingface/datasets/pull/1253", "diff_url": "https://github.com/huggingface/datasets/pull/1253.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1253.patch", "merged_at": "2020-12-08T14:44:49" }
1,253
true
Add Naver sentiment movie corpus
Supersedes #1168 > This PR adds the [Naver sentiment movie corpus](https://github.com/e9t/nsmc), a dataset containing Korean movie reviews from Naver, the most commonly used search engine in Korea. This dataset is often used to benchmark models on Korean NLP tasks, as seen in [this paper](https://www.aclweb.org/anthology/2020.lrec-1.199.pdf).
https://github.com/huggingface/datasets/pull/1252
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1252", "html_url": "https://github.com/huggingface/datasets/pull/1252", "diff_url": "https://github.com/huggingface/datasets/pull/1252.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1252.patch", "merged_at": "2020-12-08T14:21:37" }
1,252
true
Add Wiki Atomic Edits Dataset (43M edits)
https://github.com/huggingface/datasets/pull/1251
[ "@lhoestq fixed :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1251", "html_url": "https://github.com/huggingface/datasets/pull/1251", "diff_url": "https://github.com/huggingface/datasets/pull/1251.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1251.patch", "merged_at": "2020-12-14T10:05:00" }
1,251
true
added Nergrit dataset
Nergrit Corpus is a dataset collection for Indonesian Named Entity Recognition, Statement Extraction, and Sentiment Analysis. This PR is only for the Named Entity Recognition.
https://github.com/huggingface/datasets/pull/1250
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1250", "html_url": "https://github.com/huggingface/datasets/pull/1250", "diff_url": "https://github.com/huggingface/datasets/pull/1250.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1250.patch", "merged_at": "2020-12-08T14:33:29" }
1,250
true
Add doc2dial dataset
### Doc2dial: A Goal-Oriented Document-Grounded Dialogue Dataset v0.9

Once complete this will add the [Doc2dial](https://doc2dial.github.io/data.html) dataset from the generic data sets list.
https://github.com/huggingface/datasets/pull/1249
[ "It not always practical to use nested `Sequence`. If you have troubles with sequence you can use lists instead. \r\n\r\nFor example\r\n```python\r\n\r\nfeatures=datasets.Features(\r\n {\r\n \"dial_id\": datasets.Value(\"string\"),\r\n \"doc_id\": datasets.Value(\"string\"),\r\n \"domain\": datasets.Value(\"string\"),\r\n \"turns\": [\r\n {\r\n \"turn_id\": datasets.Value(\"int32\"),\r\n \"role\": datasets.Value(\"string\"),\r\n \"da\": datasets.Value(\"string\"),\r\n \"reference\": [\r\n {\r\n \"keys\" : datasets.Value(\"string\"),\r\n \"values\": datasets.Value(\"string\"), \r\n }\r\n\r\n ],\r\n \"utterance\": datasets.Value(\"string\"),\r\n }\r\n ],\r\n }\r\n),\r\n```\r\n\r\nthis way `turns` will be a list of dict, and the \"reference\" key of `turns` will be a list of dict as well", "No problem thanks for all your help getting this to the final stages! Added .gitignore, removed .lock and applied the changes you asked for." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1249", "html_url": "https://github.com/huggingface/datasets/pull/1249", "diff_url": "https://github.com/huggingface/datasets/pull/1249.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1249.patch", "merged_at": "2020-12-14T16:17:14" }
1,249
true
Update step-by-step guide about the dataset cards
Small update in the step-by-step guide about the dataset cards to indicate it can be created and completed while exploring the dataset.
https://github.com/huggingface/datasets/pull/1248
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1248", "html_url": "https://github.com/huggingface/datasets/pull/1248", "diff_url": "https://github.com/huggingface/datasets/pull/1248.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1248.patch", "merged_at": "2020-12-07T13:19:23" }
1,248
true
Adding indonlu dataset
IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for Bahasa Indonesia. It contains 12 datasets.
https://github.com/huggingface/datasets/pull/1247
[ "looks like this PR includes changes about many files other than the ones for IndoNLU\r\nCould you create another branch and another PR please ?", "> looks like this PR includes changes about many files other than the ones for IndoNLU\r\n> Could you create another branch and another PR please ?\r\n\r\nOkay I'll make it" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1247", "html_url": "https://github.com/huggingface/datasets/pull/1247", "diff_url": "https://github.com/huggingface/datasets/pull/1247.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1247.patch", "merged_at": null }
1,247
true
arXiv dataset added
https://github.com/huggingface/datasets/pull/1246
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1246", "html_url": "https://github.com/huggingface/datasets/pull/1246", "diff_url": "https://github.com/huggingface/datasets/pull/1246.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1246.patch", "merged_at": null }
1,246
true
Add Google Turkish Treebank Dataset
null
https://github.com/huggingface/datasets/pull/1245
[ "Thanks for your contribution, @abhishekkrthakur. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1245", "html_url": "https://github.com/huggingface/datasets/pull/1245", "diff_url": "https://github.com/huggingface/datasets/pull/1245.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1245.patch", "merged_at": null }
1,245
true
arxiv dataset added
https://github.com/huggingface/datasets/pull/1244
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1244", "html_url": "https://github.com/huggingface/datasets/pull/1244", "diff_url": "https://github.com/huggingface/datasets/pull/1244.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1244.patch", "merged_at": null }
1,244
true
Add Google Noun Verb Dataset
null
https://github.com/huggingface/datasets/pull/1243
[ "Thanks for your contribution, @abhishekkrthakur. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1243", "html_url": "https://github.com/huggingface/datasets/pull/1243", "diff_url": "https://github.com/huggingface/datasets/pull/1243.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1243.patch", "merged_at": null }
1,243
true
adding bprec
https://github.com/huggingface/datasets/pull/1242
[ "looks like this PR includes changes to many files other than the ones related to bprec\r\nCan you create another branch and another PR please ?", "> looks like this PR includes changes to many files other than the ones related to bprec\r\n> Can you create another branch and another PR please ?\r\n\r\nYes, I realized I messed this one up, learning my way :) I'll close this one and open another hopefully clean PR :) Thanks!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1242", "html_url": "https://github.com/huggingface/datasets/pull/1242", "diff_url": "https://github.com/huggingface/datasets/pull/1242.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1242.patch", "merged_at": null }
1,242
true
Opus elhuyar dataset for MT task having languages pair in Spanish to Basque
Opus Elhuyar dataset for the MT task, with the Spanish-to-Basque language pair. More info: http://opus.nlpl.eu/Elhuyar.php
https://github.com/huggingface/datasets/pull/1241
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1241", "html_url": "https://github.com/huggingface/datasets/pull/1241", "diff_url": "https://github.com/huggingface/datasets/pull/1241.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1241.patch", "merged_at": "2020-12-09T15:12:48" }
1,241
true
Multi Domain Sentiment Analysis Dataset (MDSA)
null
https://github.com/huggingface/datasets/pull/1240
[ "can you also run `make style` to format the code ?", "I'll come back to this one in sometime :) @lhoestq ", "Also if you would use `xml.etree.ElementTree` to parse the XML it would be awesome, because right now you're using an external dependency `xmltodict `", "> Also if you would use xml.etree.ElementTree to parse the XML it would be awesome, because right now you're using an external dependency xmltodict\r\n\r\nIts pseudo xml so elementtree fails. xmltodict seems to be working quite good for this. do we have examples of pseudo xml datasets?", "for the other pseudo xml the text is parsed manually", "Can you add `xmltodict` to the test dependencies in setup.py please to fix the CI please ?", "Also can you add the dataset card with the tags and run `make style` ?", "Hi :) have you had a chance to fix the test dependency and apply `make style` ?\r\n\r\nFeel fee to ping me when it's ready for a review", "Thanks for your contribution, @abhishekkrthakur. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1240", "html_url": "https://github.com/huggingface/datasets/pull/1240", "diff_url": "https://github.com/huggingface/datasets/pull/1240.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1240.patch", "merged_at": null }
1,240
true
add yelp_review_full dataset
This corresponds to the Yelp-5 requested in https://github.com/huggingface/datasets/issues/353
https://github.com/huggingface/datasets/pull/1239
[ "Moved to https://github.com/huggingface/datasets/pull/1315" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1239", "html_url": "https://github.com/huggingface/datasets/pull/1239", "diff_url": "https://github.com/huggingface/datasets/pull/1239.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1239.patch", "merged_at": null }
1,239
true
adding poem_sentiment
Adding poem_sentiment dataset. https://github.com/google-research-datasets/poem-sentiment
https://github.com/huggingface/datasets/pull/1238
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1238", "html_url": "https://github.com/huggingface/datasets/pull/1238", "diff_url": "https://github.com/huggingface/datasets/pull/1238.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1238.patch", "merged_at": "2020-12-09T16:02:45" }
1,238
true
Add AmbigQA dataset
# AmbigQA: Answering Ambiguous Open-domain Questions Dataset Adding the [AmbigQA](https://nlp.cs.washington.edu/ambigqa/) dataset as part of the sprint πŸŽ‰ (from Open dataset list for Dataset sprint) Added both the light and full versions (as seen on the dataset homepage) The json format changes based on the value of one 'type' field, so I set the unavailable field to an empty list. This is explained in the README -> Data Fields ```py train_light_dataset = load_dataset('./datasets/ambig_qa',"light",split="train") val_light_dataset = load_dataset('./datasets/ambig_qa',"light",split="validation") train_full_dataset = load_dataset('./datasets/ambig_qa',"full",split="train") val_full_dataset = load_dataset('./datasets/ambig_qa',"full",split="validation") for example in train_light_dataset: for i,t in enumerate(example['annotations']['type']): if t =='singleAnswer': # use the example['annotations']['answer'][i] # example['annotations']['qaPairs'][i] - > is [] print(example['annotations']['answer'][i]) else: # use the example['annotations']['qaPairs'][i] # example['annotations']['answer'][i] - > is [] print(example['annotations']['qaPairs'][i]) ``` - [x] All tests passed - [x] Added dummy data - [x] Added data card (as much as I could)
https://github.com/huggingface/datasets/pull/1237
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1237", "html_url": "https://github.com/huggingface/datasets/pull/1237", "diff_url": "https://github.com/huggingface/datasets/pull/1237.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1237.patch", "merged_at": "2020-12-08T13:38:52" }
1,237
true
Opus Finlex dataset for the Finnish-Swedish language pair
Added the Opus Finlex dataset for the Finnish-Swedish language pair. More info: http://opus.nlpl.eu/Finlex.php
https://github.com/huggingface/datasets/pull/1236
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1236", "html_url": "https://github.com/huggingface/datasets/pull/1236", "diff_url": "https://github.com/huggingface/datasets/pull/1236.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1236.patch", "merged_at": "2020-12-08T13:30:33" }
1,236
true
Wino bias
The PR will fail CircleCI tests because the dataset requires manual data loading. This is a fresh PR because of the messed-up history of the previous one.
https://github.com/huggingface/datasets/pull/1235
[ "Closing this PR because of messed up history and opening another one after discussion with Quentin Lhoest.\r\n" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1235", "html_url": "https://github.com/huggingface/datasets/pull/1235", "diff_url": "https://github.com/huggingface/datasets/pull/1235.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1235.patch", "merged_at": null }
1,235
true
Added ade_corpus_v2, with 3 configs for the relation extraction and classification tasks
Adverse Drug Reaction Data: added the ADE-Corpus-V2 dataset with configs for the different tasks covered by the data. (A hedged loading sketch follows this record.)
https://github.com/huggingface/datasets/pull/1234
[ "@lhoestq I have added the tags they are in separate files for 3 different configs", "@lhoestq thanks for the review I added your suggested changes.", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1234", "html_url": "https://github.com/huggingface/datasets/pull/1234", "diff_url": "https://github.com/huggingface/datasets/pull/1234.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1234.patch", "merged_at": "2020-12-14T17:49:14" }
1,234
true
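A minimal sketch of how the three configs described in the `ade_corpus_v2` record above could be loaded. The config names are assumptions inferred from the PR description, not verified against the merged script.

```python
from datasets import load_dataset

# Config names are assumptions based on the PR description; the merged
# script may spell them differently.
classification = load_dataset("ade_corpus_v2", "Ade_corpus_v2_classification", split="train")
drug_effect = load_dataset("ade_corpus_v2", "Ade_corpus_v2_drug_ade_relation", split="train")
drug_dosage = load_dataset("ade_corpus_v2", "Ade_corpus_v2_drug_dosage_relation", split="train")

print(classification[0])  # a sentence plus a binary ADE label
print(drug_effect[0])     # a sentence plus drug / effect annotations
```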
Add Curiosity Dialogs Dataset
Add Facebook [Curiosity Dialogs](https://github.com/facebookresearch/curiosity) Dataset.
https://github.com/huggingface/datasets/pull/1233
[ "@lhoestq I tried manually creating the dummy files. But unfortunately it was raising an error during testing the dummy data (regarding JSON parsing).\r\n\r\nThe JSONs are pretty big so I cannot actually open it without crashing the text editor.\r\n\r\n Do you have any suggestions?", "@lhoestq I have made all the changes you mentioned." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1233", "html_url": "https://github.com/huggingface/datasets/pull/1233", "diff_url": "https://github.com/huggingface/datasets/pull/1233.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1233.patch", "merged_at": "2020-12-09T14:50:29" }
1,233
true
Add Grail QA dataset
For more information: https://dki-lab.github.io/GrailQA/
https://github.com/huggingface/datasets/pull/1232
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1232", "html_url": "https://github.com/huggingface/datasets/pull/1232", "diff_url": "https://github.com/huggingface/datasets/pull/1232.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1232.patch", "merged_at": "2020-12-08T13:03:19" }
1,232
true
Add Urdu Sentiment Corpus (USC)
@lhoestq opened a clean PR containing only relevant files. old PR #1140
https://github.com/huggingface/datasets/pull/1231
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1231", "html_url": "https://github.com/huggingface/datasets/pull/1231", "diff_url": "https://github.com/huggingface/datasets/pull/1231.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1231.patch", "merged_at": "2020-12-07T16:43:23" }
1,231
true
Add Urdu fake news dataset
@lhoestq opened a clean PR containing only relevant files. old PR #1125
https://github.com/huggingface/datasets/pull/1230
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1230", "html_url": "https://github.com/huggingface/datasets/pull/1230", "diff_url": "https://github.com/huggingface/datasets/pull/1230.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1230.patch", "merged_at": "2020-12-07T16:57:54" }
1,230
true
Muchocine - Spanish movie reviews dataset
https://github.com/huggingface/datasets/pull/1229
[ "Hi @mapmeld !\r\nhave you had a chance to take a look at my suggestions ?\r\n\r\nFeel free to ping me if you have questions or when you're ready for a review", "@lhoestq unfortunately I don't have any more information about where the dataset comes from", "It's fine, you can just add the sections titles back and leave the content with `[More Information Needed]`\r\n\r\n", "added missing sections, updated the Python code βœ… " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1229", "html_url": "https://github.com/huggingface/datasets/pull/1229", "diff_url": "https://github.com/huggingface/datasets/pull/1229.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1229.patch", "merged_at": "2020-12-21T10:09:09" }
1,229
true
add opus_100 dataset
This PR will add [opus100 dataset](http://opus.nlpl.eu/opus-100.php).
https://github.com/huggingface/datasets/pull/1228
[ "done." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1228", "html_url": "https://github.com/huggingface/datasets/pull/1228", "diff_url": "https://github.com/huggingface/datasets/pull/1228.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1228.patch", "merged_at": "2020-12-09T14:53:59" }
1,228
true
readme: remove link to Google's responsible AI practices
...maybe we'll find a company that reallly stands behind responsible AI practices ;)
https://github.com/huggingface/datasets/pull/1227
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1227", "html_url": "https://github.com/huggingface/datasets/pull/1227", "diff_url": "https://github.com/huggingface/datasets/pull/1227.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1227.patch", "merged_at": "2020-12-06T23:20:41" }
1,227
true
Add menyo_20k_mt dataset
Add menyo_20k_mt dataset
https://github.com/huggingface/datasets/pull/1226
[ "looks like your PR includes changes about many other files than the ones for menyo 20k mt\r\nCan you create another branch and another PR please ?", "Yes, I will" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1226", "html_url": "https://github.com/huggingface/datasets/pull/1226", "diff_url": "https://github.com/huggingface/datasets/pull/1226.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1226.patch", "merged_at": null }
1,226
true
Add Winobias dataset
Pardon me for different commits with same message. There were conflicts after I rebased master while simultaneously pushing my changes to local repo, hence the duplicate entries.
https://github.com/huggingface/datasets/pull/1225
[ "Will make another pull request with cleaner history" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1225", "html_url": "https://github.com/huggingface/datasets/pull/1225", "diff_url": "https://github.com/huggingface/datasets/pull/1225.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1225.patch", "merged_at": null }
1,225
true
adding conceptnet5
Adding the ConceptNet5 and OMCS txt files used to create the conceptnet5 dataset. ConceptNet5 is a common-sense dataset. More info can be found here: https://github.com/commonsense/conceptnet5/wiki
https://github.com/huggingface/datasets/pull/1224
[ "Thank you. I'll make those changes. but I'm having problems trying to push my changes to my fork\r\n", "Hi, I've removed the TODO, and added a README.md. How do I push these changes?\r\n", "Also, what docstring are you recommending?\r\n", "> Hi, I've removed the TODO, and added a README.md. How do I push these changes?\r\n\r\nyou can just commit and push your changes to the same branch as your first commit.", "@ghomasHudson I've tried it but still getting code quality error. I've removed all blank lines, etc. required by flake8. Don't know what else to do", "> @ghomasHudson I've tried it but still getting code quality error. I've removed all blank lines, etc. required by flake8. Don't know what else to do\r\n\r\nDid you run `make style` before committing? When I run it, it fixes some things (e.g. Splitting line 96 which is currently too long).", "I think @yjernite is looking into this. I did \"make style\" but nothing happens", "looks like your PR includes changes about many other files than the ones related to conceptnet5\r\n\r\ncould you create another branch and another PR please ?", "@lhoestq I'm not sure what I did wrong. What did I push that wasn't conceptnet5? How do I see this?\r\n\r\n did this\r\n\r\nmake style\r\nflake8 datasets\r\ngit add datasets/<your_dataset_name>\r\ngit commit\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit pull\r\ngit push -u origin conceptnet5", "Thanks for rebasing and force push :) ", "Yeah! Thank you @lhoestq, @ghomasHudson and @yjernite !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1224", "html_url": "https://github.com/huggingface/datasets/pull/1224", "diff_url": "https://github.com/huggingface/datasets/pull/1224.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1224.patch", "merged_at": "2020-12-09T14:37:17" }
1,224
true
πŸ‡ΈπŸ‡ͺ Added Swedish Reviews dataset for sentiment classification in Sw…
perhaps: @lhoestq πŸ€—
https://github.com/huggingface/datasets/pull/1223
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1223", "html_url": "https://github.com/huggingface/datasets/pull/1223", "diff_url": "https://github.com/huggingface/datasets/pull/1223.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1223.patch", "merged_at": "2020-12-08T10:54:56" }
1,223
true
Add numeric fused head dataset
Adding the [NFH: Numeric Fused Head](https://nlp.biu.ac.il/~lazary/fh/) dataset. Everything looks sensible and I've included both the identification and resolution tasks. I haven't personally used this dataset in my research so am unable to specify what the default configuration / supervised keys should be. I've filled out the basic info on the model card to the best of my knowledge but it's a little tricky to understand exactly what the fields represent. Dataset author: @yanaiela
https://github.com/huggingface/datasets/pull/1222
[ "> Thanks for adding this @ghomasHudson!\r\n> I added some comments for some of the fields.\r\n> \r\n> Also, I'm not sure about this since I haven't used the library yet, but maybe it's worth adding the identification and resolution as two separate datasets?\r\n\r\nThanks for replying @yanaiela - I hope this will make your dataset more accessible to a wider audience - I've added the changes to the model card you suggested.\r\n\r\nIn terms of the identification and resolution tasks, I've currently added them as separate `splits` in huggingface/datasets so you can load identification like this:\r\n\r\n```\r\nimport datasets\r\ndataset = datasets.load_dataset(\"numeric_fused_head\", \"identification\")\r\nprint(dataset[\"train\"][0])\r\n>> {\"tokens\": [\"The\", \"quick\", \"brown\", \"fox\",....], \"start_index\": 11, \"end_index\": 12, \"label\": 0}\r\n```\r\nAnd resolution like this:\r\n\r\n```\r\nimport datasets\r\ndataset = datasets.load_dataset(\"numeric_fused_head\", \"resolution\")\r\nprint(dataset[\"train\"][0])\r\n>> {\"tokens\": [\"The\", \"quick\", \"brown\", \"fox\",....], \"head\": [\"AGE\"], \"anchors_indices\": [12], ...}\r\n```", "I hope so too, thanks!\r\n\r\nRe the splits, that makes sense to me." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1222", "html_url": "https://github.com/huggingface/datasets/pull/1222", "diff_url": "https://github.com/huggingface/datasets/pull/1222.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1222.patch", "merged_at": "2020-12-08T11:17:55" }
1,222
true
Add HKCanCor
This PR adds the [Hong Kong Cantonese Corpus](http://compling.hss.ntu.edu.sg/hkcancor/), by [Luke and Wong 2015](http://compling.hss.ntu.edu.sg/hkcancor/data/LukeWong_Hong-Kong-Cantonese-Corpus.pdf). The dummy data included here was manually created, as the original dataset uses a xml-like format (see a copy hosted [here](https://github.com/fcbond/hkcancor/blob/master/sample/d1_v.txt) for example) that requires a few processing steps.
https://github.com/huggingface/datasets/pull/1221
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1221", "html_url": "https://github.com/huggingface/datasets/pull/1221", "diff_url": "https://github.com/huggingface/datasets/pull/1221.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1221.patch", "merged_at": "2020-12-09T16:34:18" }
1,221
true
add Korean HateSpeech dataset
https://github.com/huggingface/datasets/pull/1220
[ "It looks like you forgot to `make style` (I forget it a lot too 🀦 )\r\n+ add dummy data", "hi @cceyda πŸ‘‹, thanks for the hint! it looks like i've run into some other errors though in `_split_generators` or `_generate_examples`. do you have any idea of what's wrong here? πŸ˜…", "I get the same errors on another pr too, so it probably has something to do with circleci, waiting on help.", "the `RemoteDatasetTest ` error on the CI is fixed on master so it's fine", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1220", "html_url": "https://github.com/huggingface/datasets/pull/1220", "diff_url": "https://github.com/huggingface/datasets/pull/1220.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1220.patch", "merged_at": "2020-12-08T11:05:42" }
1,220
true
Add Korean NER dataset
Supersedes #1177 > This PR adds the [Korean named entity recognition dataset](https://github.com/kmounlp/NER). This dataset has been used in many downstream tasks, such as training [KoBERT](https://github.com/SKTBrain/KoBERT) for NER, as seen in this [KoBERT-CRF implementation](https://github.com/eagle705/pytorch-bert-crf-ner).
https://github.com/huggingface/datasets/pull/1219
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1219", "html_url": "https://github.com/huggingface/datasets/pull/1219", "diff_url": "https://github.com/huggingface/datasets/pull/1219.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1219.patch", "merged_at": "2020-12-08T10:25:33" }
1,219
true
Add WMT20 MLQE 3 shared tasks
3 tasks for the WMT 20 MLQE shared tasks -> 3 different datasets (I re-created #1137 because it was too messy). Note that at line 199 of `task3.py`, I used `logging.warning` to report some missing data in the train set.
https://github.com/huggingface/datasets/pull/1218
[ "Thanks for the comments Quentin!\r\nI integrated them", "It should be ok now!\r\nSorry I wasn't attentive enough.\r\n(tests are currently failing, I understand it's from other datasets)", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1218", "html_url": "https://github.com/huggingface/datasets/pull/1218", "diff_url": "https://github.com/huggingface/datasets/pull/1218.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1218.patch", "merged_at": "2020-12-15T15:27:29" }
1,218
true
adding DataCommons fact checking
Adding the data from: https://datacommons.org/factcheck/ Had to cheat a bit with the dummy data, as the test doesn't recognize `.txt.gz`: I had to rename the uncompressed files with a `.gz` extension manually, without actually compressing them. (A small renaming sketch follows this record.)
https://github.com/huggingface/datasets/pull/1217
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1217", "html_url": "https://github.com/huggingface/datasets/pull/1217", "diff_url": "https://github.com/huggingface/datasets/pull/1217.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1217.patch", "merged_at": "2020-12-16T16:22:48" }
1,217
true
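A small sketch of the manual dummy-data workaround described in the record above, i.e. giving uncompressed text files a `.gz` suffix without actually compressing them. The directory path is hypothetical and only illustrates the idea.

```python
from pathlib import Path

# Hypothetical dummy-data folder; adjust to the actual layout of the dataset.
dummy_dir = Path("datasets/datacommons_factcheck/dummy/1.0.0/dummy_data")

for txt_file in dummy_dir.glob("*.txt"):
    # foo.txt -> foo.txt.gz, without gzip-compressing the content, so the
    # dummy-data loader takes the same code path as the real .txt.gz files.
    txt_file.rename(txt_file.with_name(txt_file.name + ".gz"))
```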
Add limit
This PR adds [LiMiT](https://github.com/ilmgut/limit_dataset), a dataset for literal motion classification/extraction by [Manotas et al., 2020](https://www.aclweb.org/anthology/2020.findings-emnlp.88.pdf).
https://github.com/huggingface/datasets/pull/1216
[ "My bad, didn't see this on the open dataset list. Closing this since it overlaps with PR#1256" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1216", "html_url": "https://github.com/huggingface/datasets/pull/1216", "diff_url": "https://github.com/huggingface/datasets/pull/1216.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1216.patch", "merged_at": null }
1,216
true
Add irc disentanglement
Added files for the IRC disentanglement dataset. I was unable to test the dummy data as a result of VPN/proxy issues.
https://github.com/huggingface/datasets/pull/1215
[ "looks like this PR includes changes about many files other than the ones for irc_disentanglement\r\n\r\nCould you please create a new branch and create another PR please ?", "closing in favor of #1586 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1215", "html_url": "https://github.com/huggingface/datasets/pull/1215", "diff_url": "https://github.com/huggingface/datasets/pull/1215.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1215.patch", "merged_at": null }
1,215
true
adding medical-questions-pairs dataset
This dataset consists of 3048 similar and dissimilar medical question pairs hand-generated and labeled by Curai's doctors. Dataset: https://github.com/curai/medical-question-pair-dataset Paper: https://drive.google.com/file/d/1CHPGBXkvZuZc8hpr46HeHU6U6jnVze-s/view
https://github.com/huggingface/datasets/pull/1214
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1214", "html_url": "https://github.com/huggingface/datasets/pull/1214", "diff_url": "https://github.com/huggingface/datasets/pull/1214.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1214.patch", "merged_at": "2020-12-09T14:42:53" }
1,214
true
add taskmaster3
Adding Taskmaster-3 dataset https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020. The dataset structure is almost the same as the original dataset, with these two changes: 1. In the original dataset, each `apis` has an `args` field which is a `dict` with variable keys, which represent the name and value of the args. Here it is converted to a `list` of `dict` with keys `arg_name` and `arg_value`. For example ```python args = {"name.movie": "Mulan", "name.theater": "Mountain AMC 16"} ``` becomes ```python [ { "arg_name": "name.movie", "arg_value": "Mulan" }, { "arg_name": "name.theater", "arg_value": "Mountain AMC 16" } ] ``` 2. Each `apis` has a `response` which is also a `dict` with variable keys representing the response name/type and its value. As above, it is converted to a `list` of `dict` with keys `response_name` and `response_value`. (A minimal conversion sketch follows this record.)
https://github.com/huggingface/datasets/pull/1213
[ "(you were unlucky, my rule of thumb for reducing the dummy data is to check whether they're above 50KB and you're at 52KB ^^')", "> (you were unlucky, my rule of thumb for reducing the dummy data is to check whether they're above 50KB and you're at 52KB ^^')\r\n\r\nOops :(\r\n\r\nThanks for the suggestion, will reduce the size" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1213", "html_url": "https://github.com/huggingface/datasets/pull/1213", "diff_url": "https://github.com/huggingface/datasets/pull/1213.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1213.patch", "merged_at": "2020-12-09T11:00:29" }
1,213
true
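A minimal sketch of the `args`/`response` flattening described in the Taskmaster-3 record above. The helper names are hypothetical and this is not the PR's actual code, just an illustration of the described conversion.

```python
def flatten_args(args):
    """Turn a variable-key args dict into a list of {arg_name, arg_value} records."""
    return [{"arg_name": k, "arg_value": v} for k, v in args.items()]


def flatten_response(response):
    """Same flattening for the variable-key response dict."""
    return [{"response_name": k, "response_value": v} for k, v in response.items()]


# Example taken from the record above.
print(flatten_args({"name.movie": "Mulan", "name.theater": "Mountain AMC 16"}))
# [{'arg_name': 'name.movie', 'arg_value': 'Mulan'},
#  {'arg_name': 'name.theater', 'arg_value': 'Mountain AMC 16'}]
```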
Add Sanskrit Classic texts in datasets
https://github.com/huggingface/datasets/pull/1212
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1212", "html_url": "https://github.com/huggingface/datasets/pull/1212", "diff_url": "https://github.com/huggingface/datasets/pull/1212.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1212.patch", "merged_at": "2020-12-07T19:04:08" }
1,212
true
Add large spanish corpus
Adds a collection of Spanish corpora that can be useful for pretraining language models. Following a nice suggestion from @yjernite, we provide the user with three main ways to preprocess / load: * the whole corpus (17GB!) * one specific sub-corpus * the whole corpus, but returned as a single split. This is useful if you want to cache the whole preprocessing step once and interact with individual sub-corpora. See the dataset card for more details. (A hedged loading sketch follows this record.) Ready for review!
https://github.com/huggingface/datasets/pull/1211
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1211", "html_url": "https://github.com/huggingface/datasets/pull/1211", "diff_url": "https://github.com/huggingface/datasets/pull/1211.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1211.patch", "merged_at": "2020-12-09T13:36:36" }
1,211
true
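A hedged sketch of two of the loading modes described in the `large_spanish_corpus` record above. The config names ("DGT" and "combined") are assumptions and may differ from the ones in the merged script.

```python
from datasets import load_dataset

# One specific sub-corpus; "DGT" is an assumed sub-corpus config name.
dgt = load_dataset("large_spanish_corpus", "DGT", split="train")

# The whole collection returned as a single split, handy for caching the
# preprocessing step once; "combined" is an assumed config name.
combined = load_dataset("large_spanish_corpus", "combined", split="train")
```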
Add XSUM Hallucination Annotations Dataset
Adding Google [XSum Hallucination Annotations](https://github.com/google-research-datasets/xsum_hallucination_annotations) dataset.
https://github.com/huggingface/datasets/pull/1210
[ "@lhoestq All necessary modifications have been done." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1210", "html_url": "https://github.com/huggingface/datasets/pull/1210", "diff_url": "https://github.com/huggingface/datasets/pull/1210.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1210.patch", "merged_at": "2020-12-16T16:57:11" }
1,210
true
[AfriBooms] Dataset exists already
When trying to add "AfriBooms": https://docs.google.com/spreadsheets/d/12ShVow0M6RavnzbBEabm5j5dv12zBaf0y-niwEPPlo4/edit#gid=1386399609 I noticed that the dataset exists already as a config of Universal Dependencies (universal_dependencies.py). I checked and the data exactly matches so that the new data link does not give any new data. This PR improves the config's description a bit by linking to the paper.
https://github.com/huggingface/datasets/pull/1209
[ "It's so cool seeing all these datasets fly by and see how they are still of interest. I did my internship at the research group of Liesbeth Augustinus et al. They're a very kind group of people!", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1209", "html_url": "https://github.com/huggingface/datasets/pull/1209", "diff_url": "https://github.com/huggingface/datasets/pull/1209.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1209.patch", "merged_at": "2020-12-07T16:52:23" }
1,209
true
Add HKCanCor
(Apologies, didn't manage the branches properly and the PR got too messy. Going to open a new PR with everything in order)
https://github.com/huggingface/datasets/pull/1208
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1208", "html_url": "https://github.com/huggingface/datasets/pull/1208", "diff_url": "https://github.com/huggingface/datasets/pull/1208.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1208.patch", "merged_at": null }
1,208
true
Add msr_genomics_kbcomp Dataset
https://github.com/huggingface/datasets/pull/1207
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1207", "html_url": "https://github.com/huggingface/datasets/pull/1207", "diff_url": "https://github.com/huggingface/datasets/pull/1207.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1207.patch", "merged_at": null }
1,207
true