title: string (length 1–290)
body: string (length 0–228k)
html_url: string (length 46–51)
comments: list
pull_request: dict
number: int64 (1–5.59k)
is_pull_request: bool (2 classes)
adding ted_talks_iwslt
UPDATE 2 (2nd Jan): Wrote a long writeup on the Slack channel. I don't think this approach is correct: it creates 109*108 language pairs, and running `pytest` took more than 40 hours and was still going. So I'm working on a different approach where the number of configs equals the number of languages (a sketch of the two config layouts follows this record), and I will make a new pull request with that. UPDATE: This requires manually downloading the dataset. This is a draft version.
https://github.com/huggingface/datasets/pull/1608
[ "Closing this with reference to the new approach #1676 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1608", "html_url": "https://github.com/huggingface/datasets/pull/1608", "diff_url": "https://github.com/huggingface/datasets/pull/1608.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1608.patch", "merged_at": null }
1,608
true
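For context on the config-count argument in the PR description above, here is a minimal sketch of the two layouts; the builder-config names and the three-language list are placeholders, not the actual `ted_talks_iwslt` script:

```python
# Minimal sketch (not the actual dataset script): contrast one config per
# language pair with one config per language. The language list is a placeholder.
import datasets

_LANGUAGES = ["en", "de", "fr"]  # stand-in for the full 109-language list

# One config per ordered pair -> 109 * 108 configs in the real dataset,
# which is why the test suite ran for more than 40 hours.
PAIR_CONFIGS = [
    datasets.BuilderConfig(name=f"{src}_{tgt}", version=datasets.Version("1.1.0"))
    for src in _LANGUAGES
    for tgt in _LANGUAGES
    if src != tgt
]

# One config per language -> only len(_LANGUAGES) configs, the approach
# pursued in the follow-up pull request.
LANGUAGE_CONFIGS = [
    datasets.BuilderConfig(name=lang, version=datasets.Version("1.1.0"))
    for lang in _LANGUAGES
]
```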
modified tweets hate speech detection
https://github.com/huggingface/datasets/pull/1607
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1607", "html_url": "https://github.com/huggingface/datasets/pull/1607", "diff_url": "https://github.com/huggingface/datasets/pull/1607.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1607.patch", "merged_at": "2020-12-21T16:08:48" }
1,607
true
added Semantic Scholar Open Research Corpus
I picked up this dataset, the [Semantic Scholar Open Research Corpus](https://allenai.org/data/s2orc), but it contains 6000 files to be downloaded. I tried the current code with 100 files and it worked fine (took ~15GB of space). The full 6000 files would occupy ~900GB, which I don't have. Can someone from the HF team with that much disk space help me generate dataset_infos and dummy_data?
https://github.com/huggingface/datasets/pull/1606
[ "I think we'll need the complete dataset_infos.json to create the YAML tags. I ran the script again with 100 files after going through your comments and it was occupying ~16 GB of space. So in total it should take ~960GB, and I don't have that much storage available. Also, I'll have to download the whole dataset to generate the dummy data, right?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1606", "html_url": "https://github.com/huggingface/datasets/pull/1606", "diff_url": "https://github.com/huggingface/datasets/pull/1606.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1606.patch", "merged_at": "2021-02-03T09:30:59" }
1,606
true
Navigation version breaking
Hi, when navigating the docs (Chrome, Ubuntu) (e.g. on this page: https://huggingface.co/docs/datasets/loading_metrics.html#using-a-custom-metric-script) the version control dropdown has the wrong string displayed as the current version: ![image](https://user-images.githubusercontent.com/3007947/102632187-02cad080-414f-11eb-813b-28f3c8d80def.png) **Edit:** this actually happens _only_ if you open a link to a concrete subsection. IMO, the best way to fix this without getting too deep into the intricacies of retrieving version numbers from the URL would be to change [this](https://github.com/huggingface/datasets/blob/master/docs/source/_static/js/custom.js#L112) line to: ``` let label = (version in versionMapping) ? version : stableVersion ``` which delegates the check to the (already maintained) keys of the version mapping dictionary & should be more robust. There's a similar ternary expression [here](https://github.com/huggingface/datasets/blob/master/docs/source/_static/js/custom.js#L97) which should also fail in this case. I'd also suggest swapping this [block](https://github.com/huggingface/datasets/blob/master/docs/source/_static/js/custom.js#L80-L90) to `string.contains(version) for version in versionMapping`, which might be more robust. I'd add a PR myself but I'm by no means competent in JS :) I also have a side question wrt. docs versioning: I'm trying to make docs for a project which are versioned like your dropdown versioning. I was wondering how you handle storage of multiple doc versions on your server? Do you update what `https://huggingface.co/docs/datasets` points to for every stable release & manually create new folders for each released version? So far I'm building & publishing (scp-ing) the docs to the server with a GitHub action, which works well for a single version, but I would ideally need to reorder the public files when a new release is triggered.
https://github.com/huggingface/datasets/issues/1605
[ "Not relevant for our current docs :)." ]
null
1,605
false
Add tests for the download functions ?
AFAIK the download functions in `DownloadManager` are not tested yet. It could be good to add some tests to ensure the behavior is as expected (a sketch of such a test follows this record).
https://github.com/huggingface/datasets/issues/1604
[ "We have some tests now for it under `tests/test_download_manager.py`." ]
null
1,604
false
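A hedged sketch of what such a pytest test could look like; it assumes `datasets.DownloadManager` can be instantiated without arguments and that `download()` resolves a local path through the cache, so no real network access is needed:

```python
# Sketch of a possible test, assuming DownloadManager() needs no arguments and
# that download() accepts a local path (avoiding network access in CI).
import os

from datasets import DownloadManager


def test_download_returns_existing_path(tmp_path):
    src = tmp_path / "data.txt"
    src.write_text("hello")

    dl_manager = DownloadManager()
    local_path = dl_manager.download(str(src))

    assert os.path.exists(local_path)
```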
Add retries to HTTP requests
## What does this PR do? Adding retries to HTTP GET & HEAD requests when they fail with a `ConnectTimeout` exception. The "canonical" way to do this is to use [urllib3's Retry class](https://urllib3.readthedocs.io/en/latest/reference/urllib3.util.html#urllib3.util.Retry) and wrap it in an [HTTPAdapter](https://requests.readthedocs.io/en/master/api/#requests.adapters.HTTPAdapter). That seems a bit overkill to me, plus it forces us to use the `requests.Session` object. I prefer this simpler implementation (a simplified sketch follows this record). I'm open to remarks and suggestions @lhoestq @yjernite Fixes #1102
https://github.com/huggingface/datasets/pull/1603
[ "merging this one then :) " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1603", "html_url": "https://github.com/huggingface/datasets/pull/1603", "diff_url": "https://github.com/huggingface/datasets/pull/1603.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1603.patch", "merged_at": "2020-12-22T15:34:06" }
1,603
true
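The simplified sketch below illustrates the retry idea described in the PR body; it is not the merged implementation, and the helper name and back-off values are made up:

```python
# Illustrative retry loop around GET/HEAD requests: retry only when a
# ConnectTimeout is raised, then give up after a few attempts.
import time

import requests


def request_with_retry(method, url, max_retries=3, backoff=1.0, **kwargs):
    for attempt in range(1, max_retries + 1):
        try:
            return requests.request(method, url, **kwargs)
        except requests.exceptions.ConnectTimeout:
            if attempt == max_retries:
                raise
            time.sleep(backoff * attempt)


# response = request_with_retry("GET", "https://example.com", timeout=10.0)
```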
second update of id_newspapers_2018
The feature "url" is currently wrongly set to data["date"]; this PR fixes it to data["url"]. I also added an additional POC.
https://github.com/huggingface/datasets/pull/1602
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1602", "html_url": "https://github.com/huggingface/datasets/pull/1602", "diff_url": "https://github.com/huggingface/datasets/pull/1602.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1602.patch", "merged_at": "2020-12-22T10:41:14" }
1,602
true
second update of the id_newspapers_2018
The feature "url" is currently wrongly set to data["date"]; this PR fixes it to data["url"]. I also added an additional POC.
https://github.com/huggingface/datasets/pull/1601
[ "I'm closing this PR since it's based on a week-old repo. I will create a new one." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1601", "html_url": "https://github.com/huggingface/datasets/pull/1601", "diff_url": "https://github.com/huggingface/datasets/pull/1601.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1601.patch", "merged_at": null }
1,601
true
AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
The following code fails with "'DatasetDict' object has no attribute 'train_test_split'" - am I doing something wrong? ``` from datasets import load_dataset dataset = load_dataset('csv', data_files='data.txt') dataset = dataset.train_test_split(test_size=0.1) ``` > AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
https://github.com/huggingface/datasets/issues/1600
[ "Hi @david-waterworth!\r\n\r\nAs indicated in the error message, `load_dataset(\"csv\")` returns a `DatasetDict` object, which is mapping of `str` to `Dataset` objects. I believe in this case the behavior is to return a `train` split with all the data.\r\n`train_test_split` is a method of the `Dataset` object, so you will need to do something like this:\r\n```python\r\ndataset_dict = load_dataset(`'csv', data_files='data.txt')\r\ndataset = dataset_dict['split name, eg train']\r\ndataset.train_test_split(test_size=0.1)\r\n```\r\n\r\nPlease let me know if this helps. 🙂 ", "Thanks, that's working - the same issue also tripped me up with training. \r\n\r\nI also agree https://github.com/huggingface/datasets/issues/767 would be a useful addition. ", "Closing this now", "> ```python\r\n> dataset_dict = load_dataset(`'csv', data_files='data.txt')\r\n> dataset = dataset_dict['split name, eg train']\r\n> dataset.train_test_split(test_size=0.1)\r\n> ```\r\n\r\nI am getting error like\r\nKeyError: 'split name, eg train'\r\nCould you please tell me how to solve this?", "dataset = load_dataset('csv', data_files=['files/datasets/dataset.csv'])\r\ndataset = dataset['train']\r\ndataset = dataset.train_test_split(test_size=0.1)" ]
null
1,600
false
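A working version of the snippet above, following the fix given in the comments: `load_dataset("csv", ...)` returns a `DatasetDict` with a single `train` split, and `train_test_split` is a method of `Dataset`:

```python
from datasets import load_dataset

dataset_dict = load_dataset("csv", data_files="data.txt")  # returns a DatasetDict
dataset = dataset_dict["train"]                            # select the split first
dataset = dataset.train_test_split(test_size=0.1)          # yields a DatasetDict with train/test
```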
add Korean Sarcasm Dataset
https://github.com/huggingface/datasets/pull/1599
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1599", "html_url": "https://github.com/huggingface/datasets/pull/1599", "diff_url": "https://github.com/huggingface/datasets/pull/1599.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1599.patch", "merged_at": "2020-12-23T17:25:59" }
1,599
true
made suggested changes in fake-news-english
https://github.com/huggingface/datasets/pull/1598
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1598", "html_url": "https://github.com/huggingface/datasets/pull/1598", "diff_url": "https://github.com/huggingface/datasets/pull/1598.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1598.patch", "merged_at": "2020-12-18T09:43:57" }
1,598
true
adding hate-speech-and-offensive-language
https://github.com/huggingface/datasets/pull/1597
[ "made suggested changes and opened PR https://github.com/huggingface/datasets/pull/1628" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1597", "html_url": "https://github.com/huggingface/datasets/pull/1597", "diff_url": "https://github.com/huggingface/datasets/pull/1597.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1597.patch", "merged_at": null }
1,597
true
made suggested changes to hate-speech-and-offensive-language
https://github.com/huggingface/datasets/pull/1596
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1596", "html_url": "https://github.com/huggingface/datasets/pull/1596", "diff_url": "https://github.com/huggingface/datasets/pull/1596.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1596.patch", "merged_at": null }
1,596
true
Logiqa en
LogiQA in English.
https://github.com/huggingface/datasets/pull/1595
[ "I'm getting an error when I try to create the dummy data:\r\n```python\r\naclifton@pop-os:~/data/hf_datasets_sprint/datasets$ python datasets-cli dummy_data ./datasets/logiqa_en/ --auto_generate \r\n2021-01-07 10:50:12.024791: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory\r\n2021-01-07 10:50:12.024814: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\nUsing custom data configuration default\r\nCouldn't generate dummy file 'datasets/dummy/1.1.0/dummy_data/master.zip/LogiQA-dataset-master/README.md'. Ignore that if this file is not useful for dummy data.\r\nDummy data generation done but dummy data test failed since splits ['train', 'test', 'validation'] have 0 examples for config 'default''.\r\nAutomatic dummy data generation failed for some configs of './datasets/logiqa_en/'\r\n```", "Hi ! Sorry for the delay\r\n\r\nTo fix your issue for the dummy data you must increase the number of lines that will be kept to generate the dummy files. By default it's 5, and as you need at least 8 lines here to yield one example you must increase this.\r\n\r\nYou can increase the number of lines to 32 for example by doing\r\n```\r\ndatasets-cli dummy_data ./datasets/logica_en --auto_generate --n_lines 32\r\n```\r\n\r\nAlso it looks like there are changes about other datasets in this PR (imppres). Can you fix that ? You may need to create another branch and another PR.", "To fix the branch issue, I went ahead and made a backup of the dataset then deleted my local copy of my fork of `datasets`. I then followed the [detailed guide](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md) from the beginning to reclone the fork and start a new branch. \r\n\r\nHowever, when it came time to create the dummy data I got the following error:\r\n```python\r\naclifton@pop-os:~/data/hf_datasets_sprint/datasets$ datasets-cli dummy_data ./datasets/logiqa_en --auto_generate --n_lines 32\r\n2021-02-03 11:23:23.145885: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory\r\n2021-02-03 11:23:23.145914: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\nUsing custom data configuration default\r\nCouldn't generate dummy file 'datasets/logiqa_en/dummy/1.1.0/dummy_data/master.zip/LogiQA-dataset-master/README.md'. 
Ignore that if this file is not useful for dummy data.\r\nTraceback (most recent call last):\r\n File \"/home/aclifton/anaconda3/bin/datasets-cli\", line 36, in <module>\r\n service.run()\r\n File \"/home/aclifton/anaconda3/lib/python3.7/site-packages/datasets/commands/dummy_data.py\", line 317, in run\r\n keep_uncompressed=self._keep_uncompressed,\r\n File \"/home/aclifton/anaconda3/lib/python3.7/site-packages/datasets/commands/dummy_data.py\", line 355, in _autogenerate_dummy_data\r\n dataset_builder._prepare_split(split_generator)\r\n File \"/home/aclifton/anaconda3/lib/python3.7/site-packages/datasets/builder.py\", line 905, in _prepare_split\r\n example = self.info.features.encode_example(record)\r\n File \"/home/aclifton/anaconda3/lib/python3.7/site-packages/datasets/features.py\", line 799, in encode_example\r\n return encode_nested_example(self, example)\r\n File \"/home/aclifton/anaconda3/lib/python3.7/site-packages/datasets/features.py\", line 710, in encode_nested_example\r\n (k, encode_nested_example(sub_schema, sub_obj)) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)\r\n File \"/home/aclifton/anaconda3/lib/python3.7/site-packages/datasets/features.py\", line 710, in <genexpr>\r\n (k, encode_nested_example(sub_schema, sub_obj)) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)\r\n File \"/home/aclifton/anaconda3/lib/python3.7/site-packages/datasets/features.py\", line 737, in encode_nested_example\r\n return schema.encode_example(obj)\r\n File \"/home/aclifton/anaconda3/lib/python3.7/site-packages/datasets/features.py\", line 522, in encode_example\r\n example_data = self.str2int(example_data)\r\n File \"/home/aclifton/anaconda3/lib/python3.7/site-packages/datasets/features.py\", line 481, in str2int\r\n output.append(self._str2int[str(value)])\r\nKeyError: \"Some Cantonese don't like chili, so some southerners don't like chili.\"\r\n```", "Hi ! The error happens when the script is verifying that the generated dummy data work fine with the dataset script.\r\nApparently it fails because the text `\"Some Cantonese don't like chili, so some southerners don't like chili.\"` was given in a field that is a ClassLabel feature (probably the `answer` field), while it actually expects \"a\", \"b\", \"c\" or \"d\". Can you fix the script so that it returns the expected labels for this field instead of the text ?\r\n\r\nAlso it would be awesome to rename this field `answerKey` instead of `answer` to have the same column names as the other multiple-choice-QA datasets in the library :) ", "Ok getting closer! I got the dummy data to work. 
However I am now getting the following error:\r\n```python\r\naclifton@pop-os:~/data/hf_datasets_sprint/datasets$ RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_logiqa_en\r\n===================================================================== test session starts ======================================================================\r\nplatform linux -- Python 3.7.6, pytest-5.3.5, py-1.8.1, pluggy-0.13.1\r\nrootdir: /home/aclifton/data/hf_datasets_sprint/datasets\r\nplugins: astropy-header-0.1.2, xdist-2.1.0, doctestplus-0.5.0, forked-1.3.0, hypothesis-5.5.4, arraydiff-0.3, remotedata-0.3.2, openfiles-0.4.0\r\ncollected 0 items / 1 error \r\n\r\n============================================================================ ERRORS ============================================================================\r\n________________________________________________________ ERROR collecting tests/test_dataset_common.py _________________________________________________________\r\nImportError while importing test module '/home/aclifton/data/hf_datasets_sprint/datasets/tests/test_dataset_common.py'.\r\nHint: make sure your test modules/packages have valid Python names.\r\nTraceback:\r\ntests/test_dataset_common.py:42: in <module>\r\n from datasets.packaged_modules import _PACKAGED_DATASETS_MODULES\r\nE ModuleNotFoundError: No module named 'datasets.packaged_modules'\r\n----------------------------------------------------------------------- Captured stderr ------------------------------------------------------------------------\r\n2021-02-10 11:06:14.345510: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory\r\n2021-02-10 11:06:14.345551: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n======================================================================= warnings summary =======================================================================\r\n/home/aclifton/anaconda3/lib/python3.7/site-packages/tensorflow/python/autograph/utils/testing.py:21\r\n /home/aclifton/anaconda3/lib/python3.7/site-packages/tensorflow/python/autograph/utils/testing.py:21: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\n\r\n/home/aclifton/anaconda3/lib/python3.7/site-packages/apache_beam/typehints/typehints.py:693\r\n /home/aclifton/anaconda3/lib/python3.7/site-packages/apache_beam/typehints/typehints.py:693: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working\r\n if not isinstance(type_params, collections.Iterable):\r\n\r\n/home/aclifton/anaconda3/lib/python3.7/site-packages/apache_beam/typehints/typehints.py:532\r\n /home/aclifton/anaconda3/lib/python3.7/site-packages/apache_beam/typehints/typehints.py:532: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working\r\n if not isinstance(type_params, (collections.Sequence, set)):\r\n\r\n/home/aclifton/anaconda3/lib/python3.7/site-packages/elasticsearch/compat.py:38\r\n /home/aclifton/anaconda3/lib/python3.7/site-packages/elasticsearch/compat.py:38: DeprecationWarning: Using or importing the ABCs 
from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working\r\n from collections import Mapping\r\n\r\n-- Docs: https://docs.pytest.org/en/latest/warnings.html\r\n================================================================= 4 warnings, 1 error in 2.74s =================================================================\r\nERROR: not found: /home/aclifton/data/hf_datasets_sprint/datasets/tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_logiqa_en\r\n(no name '/home/aclifton/data/hf_datasets_sprint/datasets/tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_logiqa_en' in any of [<Module test_dataset_common.py>])\r\n\r\n```", "Hi ! It looks like the version of `datasets` that is installed in your environment doesn't match the version of `datasets` you're using for the tests. Can you try uninstalling datasets and reinstall it again ?\r\n```\r\npip uninstall datasets -y\r\npip install -e .\r\n```", "Closer still!\r\n```python\r\naclifton@pop-os:~/data/hf_datasets_sprint/datasets$ git commit\r\n[logiqa_en 2664fe7f] fixed several issues with logiqa_en.\r\n 4 files changed, 324 insertions(+)\r\n create mode 100644 datasets/logiqa_en/README.md\r\n create mode 100644 datasets/logiqa_en/dataset_infos.json\r\n create mode 100644 datasets/logiqa_en/dummy/1.1.0/dummy_data.zip\r\n create mode 100644 datasets/logiqa_en/logiqa_en.py\r\naclifton@pop-os:~/data/hf_datasets_sprint/datasets$ git fetch upstream\r\nremote: Enumerating objects: 1, done.\r\nremote: Counting objects: 100% (1/1), done.\r\nremote: Total 1 (delta 0), reused 0 (delta 0), pack-reused 0\r\nUnpacking objects: 100% (1/1), 590 bytes | 590.00 KiB/s, done.\r\nFrom https://github.com/huggingface/datasets\r\n 6e114a0c..318b09eb master -> upstream/master\r\naclifton@pop-os:~/data/hf_datasets_sprint/datasets$ git rebase upstream/master \r\nerror: cannot rebase: You have unstaged changes.\r\nerror: Please commit or stash them.\r\naclifton@pop-os:~/data/hf_datasets_sprint/datasets$ git push -u origin logiqa_en\r\nUsername for 'https://github.com': aclifton314\r\nPassword for 'https://aclifton314@github.com': \r\nTo https://github.com/aclifton314/datasets\r\n ! [rejected] logiqa_en -> logiqa_en (non-fast-forward)\r\nerror: failed to push some refs to 'https://github.com/aclifton314/datasets'\r\nhint: Updates were rejected because the tip of your current branch is behind\r\nhint: its remote counterpart. Integrate the remote changes (e.g.\r\nhint: 'git pull ...') before pushing again.\r\nhint: See the 'Note about fast-forwards' in 'git push --help' for details.\r\n```", "Thanks for your contribution, @aclifton314. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1595", "html_url": "https://github.com/huggingface/datasets/pull/1595", "diff_url": "https://github.com/huggingface/datasets/pull/1595.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1595.patch", "merged_at": null }
1,595
true
connection error
Hi, I am hitting this error, thanks ``` > Traceback (most recent call last): File "finetune_t5_trainer.py", line 379, in <module> main() File "finetune_t5_trainer.py", line 208, in main if training_args.do_eval or training_args.evaluation_strategy != EvaluationStrategy.NO File "finetune_t5_trainer.py", line 207, in <dictcomp> for task in data_args.eval_tasks} File "/workdir/seq2seq/data/tasks.py", line 70, in get_dataset dataset = self.load_dataset(split=split) File "/workdir/seq2seq/data/tasks.py", line 66, in load_dataset return datasets.load_dataset(self.task.name, split=split, script_version="master") File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 487, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/master/datasets/boolq/boolq.py el/0 I1217 01:11:33.898849 354161 main shadow.py:210 Current job status: FINISHED ```
https://github.com/huggingface/datasets/issues/1594
[ "This happen quite often when they are too many concurrent requests to github.\r\n\r\ni can understand it’s a bit cumbersome to handle on the user side. Maybe we should try a few times in the lib (eg with timeout) before failing, what do you think @lhoestq ?", "Yes currently there's no retry afaik. We should add retries", "Retries were added in #1603 :) \r\nIt will be available in the next release", "Hi @lhoestq thank you for the modification, I will use`script_version=\"master\"` for now :), to my experience, also setting timeout to a larger number like 3*60 which I normally use helps a lot on this.\r\n" ]
null
1,594
false
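A user-side workaround sketch along the lines of the comments above; the helper is hypothetical and not part of the library. It retries `load_dataset` a few times and pins the loading script to the master branch via the `script_version="master"` option mentioned in the thread:

```python
# Hypothetical retry helper around load_dataset: ConnectionError is the builtin
# exception raised by the library when the script URL cannot be reached.
import time

import datasets


def load_dataset_with_retry(name, retries=3, wait=10, **kwargs):
    for attempt in range(retries):
        try:
            return datasets.load_dataset(name, script_version="master", **kwargs)
        except ConnectionError:
            if attempt == retries - 1:
                raise
            time.sleep(wait)


# boolq = load_dataset_with_retry("boolq", split="train")
```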
Access to key in DatasetDict map
It is possible that we want to do different things in the `map` function (and possibly other functions too) of a `DatasetDict`, depending on the key. I understand that `DatasetDict.map` is a really thin wrapper around `Dataset.map`, so it is easy to implement this functionality directly in client code. Still, it'd be nice if there were a flag, similar to `with_indices`, that lets the callable know the key inside the `DatasetDict` (a small helper sketching this follows this record).
https://github.com/huggingface/datasets/issues/1593
[ "Indeed that would be cool\r\n\r\nAlso FYI right now the easiest way to do this is\r\n```python\r\ndataset_dict[\"train\"] = dataset_dict[\"train\"].map(my_transform_for_the_train_set)\r\ndataset_dict[\"test\"] = dataset_dict[\"test\"].map(my_transform_for_the_test_set)\r\n```", "I don't feel like adding an extra param for this simple usage makes sense, considering how many args `map` already has. \r\n\r\n(Feel free to re-open this issue if you don't agree with me)", "I still think this is useful, since it's common that the data processing is different for training/dev/testing. And I don't know if the fact that `map` currently takes many arguments is a good reason not to support a useful feature." ]
null
1,593
false
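A small user-side helper sketching the requested behavior; the flag itself does not exist in the library, and the helper name and example column are made up:

```python
# Apply a function to each split of a DatasetDict, passing the split name
# ("train", "test", ...) to the callable alongside each example.
from datasets import DatasetDict


def map_with_key(dataset_dict, function, **map_kwargs):
    return DatasetDict({
        key: dataset.map(lambda example, _key=key: function(example, _key), **map_kwargs)
        for key, dataset in dataset_dict.items()
    })


# processed = map_with_key(dsets, lambda example, split: {"text": f"[{split}] {example['text']}"})
```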
IWSLT-17 Link Broken
``` FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz ```
https://github.com/huggingface/datasets/issues/1591
[ "Sorry, this is a duplicate of #1287. Not sure why it didn't come up when I searched `iwslt` in the issues list.", "Closing this since its a duplicate" ]
null
1,591
false
Add helper to resolve namespace collision
Many projects use a module called `datasets`; however, this is incompatible with Hugging Face `datasets`. It would be great if there were some helper or similar function to resolve such a common conflict (an importlib-based workaround is sketched after this record).
https://github.com/huggingface/datasets/issues/1590
[ "Do you have an example?", "I was thinking about using something like [importlib](https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly) to over-ride the collision. \r\n\r\n**Reason requested**: I use the [following template](https://github.com/jramapuram/ml_base/) repo where I house all my datasets as a submodule.", "Alternatively huggingface could consider some submodule type structure like:\r\n\r\n`import huggingface.datasets`\r\n`import huggingface.transformers`\r\n\r\n`datasets` is a very common module in ML and should be an end-user decision and not scope all of python ¯\\_(ツ)_/¯ \r\n", "That's a interesting option indeed. We'll think about it.", "It also wasn't initially obvious to me that the samples which contain `import datasets` were in fact importing a huggingface library (in fact all the huggingface imports are very generic - transformers, tokenizers, datasets...)" ]
null
1,590
false
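A sketch of the importlib workaround mentioned in the comments; the file path and module name are hypothetical, and it assumes the project directory is not on `sys.path` ahead of site-packages:

```python
# Load a project-local module that happens to be called `datasets` under a
# different name, so it no longer shadows the Hugging Face `datasets` package.
import importlib.util

spec = importlib.util.spec_from_file_location(
    "my_datasets", "/path/to/project/datasets/__init__.py"
)
my_datasets = importlib.util.module_from_spec(spec)
spec.loader.exec_module(my_datasets)

import datasets  # now resolves to the Hugging Face library
```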
Update doc2dial.py
Added a data loader for the machine reading comprehension task proposed in the Doc2Dial EMNLP 2020 paper.
https://github.com/huggingface/datasets/pull/1589
[ "Thanks for adding the `doc2dial_rc` config :) \r\n\r\nIt looks like you're missing the dummy data for this config though. Could you add them please ?\r\nAlso to fix the CI you'll need to format the code with `make style`" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1589", "html_url": "https://github.com/huggingface/datasets/pull/1589", "diff_url": "https://github.com/huggingface/datasets/pull/1589.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1589.patch", "merged_at": null }
1,589
true
Modified hind encorp
Description added, unnecessary comments removed from the .py file, and README.md reformatted. @lhoestq for #1584
https://github.com/huggingface/datasets/pull/1588
[ "welcome, awesome " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1588", "html_url": "https://github.com/huggingface/datasets/pull/1588", "diff_url": "https://github.com/huggingface/datasets/pull/1588.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1588.patch", "merged_at": "2020-12-16T17:20:28" }
1,588
true
Add nq_open question answering dataset
This PR is a copy of #1506, due to the messed-up git history in that PR.
https://github.com/huggingface/datasets/pull/1587
[ "@SBrandeis all checks passing" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1587", "html_url": "https://github.com/huggingface/datasets/pull/1587", "diff_url": "https://github.com/huggingface/datasets/pull/1587.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1587.patch", "merged_at": "2020-12-17T16:07:10" }
1,587
true
added irc disentangle dataset
added irc disentanglement dataset
https://github.com/huggingface/datasets/pull/1586
[ "@lhoestq sorry, this was the only way I was able to fix the pull request ", "@lhoestq Thank you for the feedback. I wondering whether I should be passing an 'id' field in the dictionary since the 'connections' reference the 'id' of the linked messages. This 'id' would just be the same as the id_ that is in the yielded tuple.", "Yes indeed it would be cool to have the ids in the dictionary. This way the dataset can be shuffled and all without losing information about the connections. Can you add it if you don't mind ?", "Thanks :) could you also add the ids in the dictionary since they're useful for the connection links ?", "Thanks !\r\nAlso it looks like the dummy_data.zip were regenerated and are now back to being too big (300KB each).\r\nCan you reduce their sizes ? You can actually just revert to the ones you had before the last commit" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1586", "html_url": "https://github.com/huggingface/datasets/pull/1586", "diff_url": "https://github.com/huggingface/datasets/pull/1586.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1586.patch", "merged_at": "2021-01-29T10:28:53" }
1,586
true
FileNotFoundError for `amazon_polarity`
Version: `datasets==v1.1.3` ### Reproduction ```python from datasets import load_dataset data = load_dataset("amazon_polarity") ``` crashes with ```bash FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amazon_polarity/amazon_polarity.py ``` and ```bash FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/amazon_polarity/amazon_polarity.py ``` and ```bash FileNotFoundError: Couldn't find file locally at amazon_polarity/amazon_polarity.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amazon_polarity/amazon_polarity.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/amazon_polarity/amazon_polarity.py ```
https://github.com/huggingface/datasets/issues/1585
[ "Hi @phtephanx , the `amazon_polarity` dataset has not been released yet. It will be available in the coming soon v2of `datasets` :) \r\n\r\nYou can still access it now if you want, but you will need to install datasets via the master branch:\r\n`pip install git+https://github.com/huggingface/datasets.git@master`" ]
null
1,585
false
Load hind encorp
Code reformatted and well documented; YAML tags added.
https://github.com/huggingface/datasets/pull/1584
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1584", "html_url": "https://github.com/huggingface/datasets/pull/1584", "diff_url": "https://github.com/huggingface/datasets/pull/1584.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1584.patch", "merged_at": null }
1,584
true
Update metrics docstrings.
#1478 Correcting the argument descriptions for metrics. Let me know if there are any issues.
https://github.com/huggingface/datasets/pull/1583
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1583", "html_url": "https://github.com/huggingface/datasets/pull/1583", "diff_url": "https://github.com/huggingface/datasets/pull/1583.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1583.patch", "merged_at": "2020-12-18T18:39:06" }
1,583
true
Adding wiki lingua dataset as new branch
Adding the dataset as a new branch, as advised here: #1470
https://github.com/huggingface/datasets/pull/1582
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1582", "html_url": "https://github.com/huggingface/datasets/pull/1582", "diff_url": "https://github.com/huggingface/datasets/pull/1582.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1582.patch", "merged_at": "2020-12-17T18:06:45" }
1,582
true
Installing datasets and transformers in a tensorflow docker image throws Permission Error on 'import transformers'
I am using a docker container, based on latest tensorflow-gpu image, to run transformers and datasets (4.0.1 and 1.1.3 respectively - Dockerfile attached below). Importing transformers throws a Permission Error to access `/.cache`: ``` $ docker run --gpus=all --rm -it -u $(id -u):$(id -g) -v $(pwd)/data:/root/data -v $(pwd):/root -v $(pwd)/models/:/root/models -v $(pwd)/saved_models/:/root/saved_models -e "HOST_HOSTNAME=$(hostname)" hf-error:latest /bin/bash ________ _______________ ___ __/__________________________________ ____/__ /________ __ __ / _ _ \_ __ \_ ___/ __ \_ ___/_ /_ __ /_ __ \_ | /| / / _ / / __/ / / /(__ )/ /_/ / / _ __/ _ / / /_/ /_ |/ |/ / /_/ \___//_/ /_//____/ \____//_/ /_/ /_/ \____/____/|__/ You are running this container as user with ID 1000 and group 1000, which should map to the ID and group for your user on the Docker host. Great! tf-docker /root > python Python 3.6.9 (default, Oct 8 2020, 12:12:24) [GCC 8.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import transformers 2020-12-15 23:53:21.165827: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.6/dist-packages/transformers/__init__.py", line 22, in <module> from .integrations import ( # isort:skip File "/usr/local/lib/python3.6/dist-packages/transformers/integrations.py", line 5, in <module> from .trainer_utils import EvaluationStrategy File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_utils.py", line 25, in <module> from .file_utils import is_tf_available, is_torch_available, is_torch_tpu_available File "/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py", line 88, in <module> import datasets # noqa: F401 File "/usr/local/lib/python3.6/dist-packages/datasets/__init__.py", line 26, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 40, in <module> from .arrow_reader import ArrowReader File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 31, in <module> from .utils import cached_path, logging File "/usr/local/lib/python3.6/dist-packages/datasets/utils/__init__.py", line 20, in <module> from .download_manager import DownloadManager, GenerateMode File "/usr/local/lib/python3.6/dist-packages/datasets/utils/download_manager.py", line 25, in <module> from .file_utils import HF_DATASETS_CACHE, cached_path, get_from_cache, hash_url_to_filename File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 118, in <module> os.makedirs(HF_MODULES_CACHE, exist_ok=True) File "/usr/lib/python3.6/os.py", line 210, in makedirs makedirs(head, mode, exist_ok) File "/usr/lib/python3.6/os.py", line 210, in makedirs makedirs(head, mode, exist_ok) File "/usr/lib/python3.6/os.py", line 220, in makedirs mkdir(name, mode) PermissionError: [Errno 13] Permission denied: '/.cache' ``` I've pinned the problem to `RUN pip install datasets`, and by commenting it you can actually import transformers correctly. Another workaround I've found is creating the directory and giving permissions to it directly on the Dockerfile. 
``` FROM tensorflow/tensorflow:latest-gpu-jupyter WORKDIR /root EXPOSE 80 EXPOSE 8888 EXPOSE 6006 ENV SHELL /bin/bash ENV PATH="/root/.local/bin:${PATH}" ENV CUDA_CACHE_PATH="/root/cache/cuda" ENV CUDA_CACHE_MAXSIZE="4294967296" ENV TFHUB_CACHE_DIR="/root/cache/tfhub" RUN pip install --upgrade pip RUN apt update -y && apt upgrade -y RUN pip install transformers #Installing datasets will throw the error, try commenting and rebuilding RUN pip install datasets #Another workaround is creating the directory and give permissions explicitly #RUN mkdir /.cache #RUN chmod 777 /.cache ```
https://github.com/huggingface/datasets/issues/1581
[ "Thanks for reporting !\r\nYou can override the directory in which cache file are stored using for example\r\n```\r\nENV HF_HOME=\"/root/cache/hf_cache_home\"\r\n```\r\n\r\nThis way both `transformers` and `datasets` will use this directory instead of the default `.cache`", "Great, thanks. I didn't see documentation about than ENV variable, looks like an obvious solution. ", "> Thanks for reporting !\r\n> You can override the directory in which cache file are stored using for example\r\n> \r\n> ```\r\n> ENV HF_HOME=\"/root/cache/hf_cache_home\"\r\n> ```\r\n> \r\n> This way both `transformers` and `datasets` will use this directory instead of the default `.cache`\r\n\r\ncan we disable caching directly?", "Hi ! Unfortunately no since we need this directory to load datasets.\r\nWhen you load a dataset, it downloads the raw data files in the cache directory inside <cache_dir>/downloads. Then it builds the dataset and saves it as arrow data inside <cache_dir>/<dataset_name>.\r\n\r\nHowever you can specify the directory of your choice, and it can be a temporary directory if you want to clean everything up at one point.", "I'm closing this to keep issues a bit cleaner" ]
null
1,581
false
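A minimal sketch of the workaround from the comments, assuming the unwritable default cache is the only problem: point `HF_HOME` at a writable directory before the libraries are imported (the Dockerfile equivalent `ENV HF_HOME=...` is shown in the thread above):

```python
# Set HF_HOME before importing transformers/datasets, because their cache
# directories are resolved at import time; any writable path works here.
import os

os.environ["HF_HOME"] = "/root/cache/hf_cache_home"

import datasets
import transformers
```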
made suggested changes in diplomacy_detection.py
https://github.com/huggingface/datasets/pull/1580
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1580", "html_url": "https://github.com/huggingface/datasets/pull/1580", "diff_url": "https://github.com/huggingface/datasets/pull/1580.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1580.patch", "merged_at": "2020-12-16T10:27:52" }
1,580
true
Adding CLIMATE-FEVER dataset
This PR requests the addition of the CLIMATE-FEVER dataset: a dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute or do not give enough information to validate the claim, totalling 7,675 claim-evidence pairs. The dataset features challenging claims that relate multiple facets and disputed cases of claims where both supporting and refuting evidence are present. More information can be found at: - Homepage: <http://climatefever.ai> - Paper: <https://arxiv.org/abs/2012.00614>
https://github.com/huggingface/datasets/pull/1579
[ "I `git rebase`ed my branch to `upstream/master` as suggested in point 7 of <https://huggingface.co/docs/datasets/share_dataset.html> and subsequently used `git pull` to be able to push to my remote branch. However, I think this messed up the history.\r\n\r\nPlease let me know if I should create a clean new PR with my changes.\r\n\r\nUpdate: I also fixed the dataset name in the Dataset Card.", "Dear @SBrandeis , @lhoestq . I am not sure how to fix the PR with respect to the additional files that are currently included in the commits. Could you provide me with an example? Otherwise I would be happy to close/re-open another PR. Please let me know if anything is missing for the review.", "Hi @tdiggelm, thanks for the contribution! This dataset is really awesome.\r\nI believe creating a new branch from master and opening a new PR with your changes is the simplest option since no review has been done yet. Feel free to ping us when it's done.", "> Hi @tdiggelm, thanks for the contribution! This dataset is really awesome.\r\n> I believe creating a new branch from master and opening a new PR with your changes is the simplest option since no review has been done yet. Feel free to ping us when it's done.\r\n\r\nThank you very much for your quick reply! Will do ASAP and ping you when done.", "closing in favor of #1623" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1579", "html_url": "https://github.com/huggingface/datasets/pull/1579", "diff_url": "https://github.com/huggingface/datasets/pull/1579.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1579.patch", "merged_at": null }
1,579
true
update multiwozv22 checksums
A file was updated on the GitHub repo for the dataset.
https://github.com/huggingface/datasets/pull/1578
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1578", "html_url": "https://github.com/huggingface/datasets/pull/1578", "diff_url": "https://github.com/huggingface/datasets/pull/1578.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1578.patch", "merged_at": "2020-12-15T17:06:29" }
1,578
true
Add comet metric
Hey! I decided to add our new Crosslingual Optimized Metric for Evaluation of Translation (COMET) to the list of available metrics. COMET was [presented at EMNLP20](https://www.aclweb.org/anthology/2020.emnlp-main.213/) and it is, so far, the highest performing metric on the WMT19 benchmark. We also participated in the [WMT20 Metrics shared task](http://www.statmt.org/wmt20/pdf/2020.wmt-1.101.pdf), where once again COMET was validated as a top-performing metric. I hope that this metric will help researchers and industry workers better validate their MT systems in the future 🤗 ! Cheers, Ricardo
https://github.com/huggingface/datasets/pull/1577
[ "I also thought a bit about the fact that \"sources\" can't be added to the batch.. but changing that would require a lot more changes. And I agree that the idea of adding them as part of the references is not ideal. Conceptually they are not references.\r\n\r\nI would keep it like this for now.. And in the future, work on a more consistent batch interface." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1577", "html_url": "https://github.com/huggingface/datasets/pull/1577", "diff_url": "https://github.com/huggingface/datasets/pull/1577.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1577.patch", "merged_at": "2021-01-14T13:33:10" }
1,577
true
Remove the contributors section
Sourcerer is down.
https://github.com/huggingface/datasets/pull/1576
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1576", "html_url": "https://github.com/huggingface/datasets/pull/1576", "diff_url": "https://github.com/huggingface/datasets/pull/1576.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1576.patch", "merged_at": "2020-12-15T12:53:46" }
1,576
true
Hind_Encorp all done
https://github.com/huggingface/datasets/pull/1575
[ "ALL TEST PASSED locally @yjernite ", "@rahul-art kindly run the following from the datasets folder \r\n\r\n```\r\nmake style \r\nflake8 datasets\r\n\r\n```\r\n", "@skyprince999 I did that before it says all done \r\n", "I did that again it gives the same output all done and then I synchronized my changes with this branch ", "@lhoestq i did all the changes you suggested but at the time of load_dataset it is giving me error\r\n`**`datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=76591256, num_examples=1, dataset_name='hind_encorp'), 'recorded': SplitInfo(name='train', num_bytes=78945714, num_examples=273885, dataset_name='hind_encorp')}]`**`", "\r\n\r\n\r\nI cloned the branch and it seems to work fine at my end. try to clear the cache - \r\n\r\n```\r\nrm -rf /home/ubuntu/.cache/huggingface/datasets/\r\nrm -rf /home/ubuntu/.cache/huggingface/modules/datasets_modules//datasets/\r\n```\r\nBut the dataset has only one record. Is that correct? \r\n![image](https://user-images.githubusercontent.com/9033954/102331376-c7929b00-3fb0-11eb-8a6c-81b2cf47bc2a.png)\r\n", "> @lhoestq i did all the changes you suggested but at the time of load_dataset it is giving me error\r\n> `**`datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=76591256, num_examples=1, dataset_name='hind_encorp'), 'recorded': SplitInfo(name='train', num_bytes=78945714, num_examples=273885, dataset_name='hind_encorp')}]`**`\r\n\r\nYou can ignore this error by adding `ignore_verifications=True` to `load_dataset`.\r\n\r\nThis error is raised because you're loading a dataset that you've already loaded once in the past. Therefore the library does some verifications to make sure it's generated the same way. \r\n\r\nHowever since you've done changes in the dataset script you should ignore these verifications.\r\n\r\nYou can regenerate the dataset_infos.json with\r\n```\r\ndatasets-cli test ./datasets/hindi_encorp --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\n\r\n> I cloned the branch and it seems to work fine at my end. try to clear the cache -\r\n> \r\n> ```\r\n> rm -rf /home/ubuntu/.cache/huggingface/datasets/\r\n> rm -rf /home/ubuntu/.cache/huggingface/modules/datasets_modules//datasets/\r\n> ```\r\n> \r\n> But the dataset has only one record. Is that correct?\r\n> ![image](https://user-images.githubusercontent.com/9033954/102331376-c7929b00-3fb0-11eb-8a6c-81b2cf47bc2a.png)\r\n\r\nYes the current parsing is wrong, I've already given @rahul-art some suggestions and it looks like it works way better now (num_examples=273885).\r\n\r\nThanks for fixing the parsing @rahul-art !\r\nFeel free to commit and push your changes once it's ready :) ", "i ran the command you provided datasets-cli test ./datasets/hindi_encorp --save_infos --all_configs --ignore_verifications \r\nbut now its giving this error @lhoestq \r\n\r\nFileNotFoundError: Couldn't find file locally at ./datasets/hindi_encorp/hindi_encorp.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/./datasets/hindi_encorp/hindi_encorp.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/./datasets/hindi_encorp/hindi_encorp.py.\r\nIf the dataset was added recently, you may need to to pass script_version=\"master\" to find the loading script on the master branch.\r\n", "whoops I meant `hind_encorp` instead of `hindi_encorp` sorry", "@lhoestq all changes have done successfully in this PR #1584", "Ok thanks ! 
closing this one in favor of #1584 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1575", "html_url": "https://github.com/huggingface/datasets/pull/1575", "diff_url": "https://github.com/huggingface/datasets/pull/1575.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1575.patch", "merged_at": null }
1,575
true
Diplomacy detection 3
https://github.com/huggingface/datasets/pull/1574
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1574", "html_url": "https://github.com/huggingface/datasets/pull/1574", "diff_url": "https://github.com/huggingface/datasets/pull/1574.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1574.patch", "merged_at": null }
1,574
true
adding dataset for diplomacy detection-2
https://github.com/huggingface/datasets/pull/1573
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1573", "html_url": "https://github.com/huggingface/datasets/pull/1573", "diff_url": "https://github.com/huggingface/datasets/pull/1573.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1573.patch", "merged_at": null }
1,573
true
add Gnad10 dataset
reference [PR#1317](https://github.com/huggingface/datasets/pull/1317)
https://github.com/huggingface/datasets/pull/1572
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1572", "html_url": "https://github.com/huggingface/datasets/pull/1572", "diff_url": "https://github.com/huggingface/datasets/pull/1572.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1572.patch", "merged_at": "2020-12-16T16:52:30" }
1,572
true
Fixing the KILT tasks to match our current standards
This introduces a few changes to the Knowledge Intensive Language Tasks (KILT) benchmark to bring it more in line with our current datasets, including adding the (minimal) dataset card and having one config per sub-task.
https://github.com/huggingface/datasets/pull/1571
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1571", "html_url": "https://github.com/huggingface/datasets/pull/1571", "diff_url": "https://github.com/huggingface/datasets/pull/1571.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1571.patch", "merged_at": "2020-12-14T23:07:41" }
1,571
true
Documentation for loading CSV datasets misleads the user
The documentation for loading CSV datasets misleads the user into thinking that setting `quote_char` to False will disable quoting. There are two problems here: i) `quote_char` is misspelled; it must be `quotechar`; ii) the documentation should also mention `quoting` (a short example follows this record).
https://github.com/huggingface/datasets/pull/1570
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1570", "html_url": "https://github.com/huggingface/datasets/pull/1570", "diff_url": "https://github.com/huggingface/datasets/pull/1570.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1570.patch", "merged_at": "2020-12-21T13:47:09" }
1,570
true
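A hedged example of the corrected arguments, assuming the CSV builder forwards `quotechar` and `quoting` to the underlying pandas reader (the file name is a placeholder):

```python
import csv

from datasets import load_dataset

dataset = load_dataset(
    "csv",
    data_files="data.csv",   # placeholder file
    quotechar='"',           # correct spelling: quotechar, not quote_char
    quoting=csv.QUOTE_NONE,  # this is what actually disables quoting
)
```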
added un_ga dataset
Hi :hugs:, This is a PR for the [United Nations General Assembly resolutions: A six-language parallel corpus](http://opus.nlpl.eu/UN.php) dataset, with the changes suggested in #1330.
https://github.com/huggingface/datasets/pull/1569
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1569", "html_url": "https://github.com/huggingface/datasets/pull/1569", "diff_url": "https://github.com/huggingface/datasets/pull/1569.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1569.patch", "merged_at": "2020-12-15T15:28:58" }
1,569
true
Added the dataset clickbait_news_bg
There was a problem with my [previous PR 1445](https://github.com/huggingface/datasets/pull/1445) after rebasing, so I'm copying the dataset code into a new branch and submitting a new PR.
https://github.com/huggingface/datasets/pull/1568
[ "Hi @tsvm Great work! \r\nSince you have raised a clean PR could you close the earlier one - #1445 ? \r\n", "> Hi @tsvm Great work!\r\n> Since you have raised a clean PR could you close the earlier one - #1445 ?\r\n\r\nDone." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1568", "html_url": "https://github.com/huggingface/datasets/pull/1568", "diff_url": "https://github.com/huggingface/datasets/pull/1568.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1568.patch", "merged_at": "2020-12-15T18:28:56" }
1,568
true
[wording] Update Readme.md
Make the features of the library clearer.
https://github.com/huggingface/datasets/pull/1567
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1567", "html_url": "https://github.com/huggingface/datasets/pull/1567", "diff_url": "https://github.com/huggingface/datasets/pull/1567.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1567.patch", "merged_at": "2020-12-15T12:54:06" }
1,567
true
Add Microsoft Research Sequential Question Answering (SQA) Dataset
For more information: https://msropendata.com/datasets/b25190ed-0f59-47b1-9211-5962858142c2
https://github.com/huggingface/datasets/pull/1566
[ "I proposed something a few weeks ago in #898 (un-merged) but I think that the way that @mattbui added the dataset in the present PR is smarter and simpler should replace my PR #898.\r\n\r\n(Narrator voice: *And it was around that time that Thomas realized that the community was now a lot smarter than him and he should hand-over the library he had started with @lhoestq to the community and stop pretending he knew everything about it.*)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1566", "html_url": "https://github.com/huggingface/datasets/pull/1566", "diff_url": "https://github.com/huggingface/datasets/pull/1566.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1566.patch", "merged_at": "2020-12-15T15:24:22" }
1,566
true
Create README.md
https://github.com/huggingface/datasets/pull/1565
[ "@ManuelFay thanks you so much for adding a dataset card, this is such a cool contribution!\r\n\r\nThis looks like it uses an old template for the card we've moved things around a bit and we have an app you should be using to get the tags and the structure of the Data Fields paragraph :) Would you mind moving your text to the newer format (we're also asking contributors to keep the full template structure, even if some sections still have [More Information Needed] for the time being)\r\n\r\nHere's the link to the instructions:\r\nhttps://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\r\n\r\nOut of curiosity, what was your landing point for filling out the card? Did you follow the \"Update on Github\" when navigating the datasets? Trying to make the instructions as clear as possible :) ", "@yjernite \r\n\r\nPerfect, I'll follow the instructions when I have a bit more time tomorrow ! I was actually browsing the new contributions after the dataset sprint and realized most of the \"old\" datasets were not tagged, so I just copied and pasted the readme from another dataset and was not aware there was precise instructions... Will fix !\r\n\r\nBTW, amazing job with the retriBert work, I used the contrastive + in-batch negative quite a bit for various projects. Probably neither the time nor place to talk about that but I was curious as to why, in your original work, you prefered using a simple projection in the last layer to differentiate the question vs answer embedding, rather than allowing for bias in the dense layer or even just to fine-tune 2 different embedders for question + answer ? ", "Cool! Looking forward to the next version!\r\n\r\nQuick answer for retriBERT is that I expected a simple projection to generalize better and more importantly only having to store the gradients for the proj means training with larger batches :) If you want to keep chatting about it, feel free to send me an email!", "Hi @ManuelFay ! \r\nIf you're still interested in completing the FQuAD dataset card, note that we've generated one that is pre-filled.\r\nTherefore feel free to complete it with the content you already have in your README.md.\r\nThis would be awesome ! And thanks again for your contribution :)", "Yo @lhoestq , just not sure about the tag table at the top, I used @yjernite eli5 template so hope it's okay ! Also want to signal the streamlit app for dataset tagging has a weird behavior with the size categories when filling in the form. \r\n\r\nThanks to you guys for doing that and sorry about the time it took, i completely forgot about it ! \r\n" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1565", "html_url": "https://github.com/huggingface/datasets/pull/1565", "diff_url": "https://github.com/huggingface/datasets/pull/1565.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1565.patch", "merged_at": "2021-03-25T14:01:49" }
1,565
true
added saudinewsnet
I'm having issues creating the dummy data. I'm still investigating how to fix it. I'll close the PR if I can't find a solution.
https://github.com/huggingface/datasets/pull/1564
[ "Hi @abdulelahsm - This is an interesting dataset! But there are multiple issues with the PR. Some of them are listed below: \r\n- default builder config is not defined. There should be atleast one builder config \r\n- URL is incorrectly constructed so the data files are not being downloaded \r\n- dataset_info.json file was not created\r\n\r\nPlease have a look at some existing merged datasets to get a reference on building the data loader. If you are still stuck, reach out. \r\n", "@skyprince999 I totally agree. Thx for the feedback!", "Hi @abdulelahsm ! Thanks for adding this one :) \r\nyou don't actually have to add builder configurations if you don't need them. It's fine as it is now\r\n\r\nAnd as @skyprince999 noticed, the current URLs don't work. to download files.\r\nYou can use this one for example for the first batch instead:\r\nhttps://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-07-21.zip\r\n\r\nFeel free to ping me if you have questions or if you're ready for a review :) ", "@lhoestq Hey, I tried using the first batch instead, the data was downloaded but I got this error, not sure why it can't find the path?\r\n\r\nfor content, I ran ``` \"./datasets/saudinewsnet/test.py\"```\r\n\r\nwhich is a local test I'm running for the dataset, it contains the following code\r\n\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndata = load_dataset(\"./datasets/saudinewsnet\", split= \"train\")\r\n\r\nprint(data)\r\n\r\nprint(data[1])\r\n```\r\n\r\nthis is the error I got \r\n\r\n```\r\n2020-12-18 21:45:39.403908: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory\r\n2020-12-18 21:45:39.403953: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\nUsing custom data configuration default\r\nDownloading and preparing dataset saudi_news_net/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/mesfas/.cache/huggingface/datasets/saudi_news_net/default/0.0.0/62ece5ef0a991415352d4b1efac681d75b5b3404064fd4f6a1d659499dab18f4...\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.42M/3.42M [00:03<00:00, 1.03MB/s]\r\nTraceback (most recent call last):\r\n File \"/home/mesfas/opensource/datasets/src/datasets/builder.py\", line 604, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/mesfas/opensource/datasets/src/datasets/builder.py\", line 902, in _prepare_split\r\n for key, record in utils.tqdm(\r\n File \"/home/mesfas/environments/ar_res_reviews/lib/python3.8/site-packages/tqdm/std.py\", line 1133, in __iter__\r\n for obj in iterable:\r\n File \"/home/mesfas/.cache/huggingface/modules/datasets_modules/datasets/saudinewsnet/62ece5ef0a991415352d4b1efac681d75b5b3404064fd4f6a1d659499dab18f4/saudinewsnet.py\", line 108, in _generate_examples\r\n with open(filepath, encoding=\"utf-8\").read() as f:\r\nIsADirectoryError: [Errno 21] Is a directory: '/home/mesfas/.cache/huggingface/datasets/downloads/extracted/314fd983aa07d3dada9429911a805270c3285f48759d3584a1343c2d86260765'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File 
\"./datasets/saudinewsnet/test.py\", line 3, in <module>\r\n data = load_dataset(\"./datasets/saudinewsnet\", split= \"train\")\r\n File \"/home/mesfas/opensource/datasets/src/datasets/load.py\", line 607, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/mesfas/opensource/datasets/src/datasets/builder.py\", line 526, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/mesfas/opensource/datasets/src/datasets/builder.py\", line 606, in _download_and_prepare\r\n raise OSError(\r\nOSError: Cannot find data file. \r\nOriginal error:\r\n[Errno 21] Is a directory: '/home/mesfas/.cache/huggingface/datasets/downloads/extracted/314fd983aa07d3dada9429911a805270c3285f48759d3584a1343c2d86260765'\r\n```\r\n\r\n\r\nthis is the split code \r\n\r\n```\r\n def _split_generators(self, dl_manager):\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n my_urls = _URL\r\n datadir = dl_manager.download_and_extract(my_urls)\r\n return [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN,\r\n # These kwargs will be passed to _generate_examples\r\n gen_kwargs={\r\n \"filepath\": datadir,\r\n \"split\": \"train\"\r\n },\r\n ),\r\n ]\r\n```\r\nand this is how I'm generating the examples\r\n\r\n```\r\n def _generate_examples(self, filepath, split):\r\n \r\n #logging.info(\"generating examples from = %s\", filepath)\r\n with open(filepath, encoding=\"utf-8\") as f:\r\n articles = json.load(f)\r\n for article in articles:\r\n title = article.get(\"title\", \"\").strip()\r\n source = article.get(\"source\", \"\").strip()\r\n date = article.get(\"date_extracted\", \"\").strip()\r\n link = article.get(\"url\", \"\").strip()\r\n author = article.get(\"author\", \"\").strip()\r\n content = article.get(\"content\", \"\").strip()\r\n\r\n yield id_, {\r\n \"title\": title,\r\n \"source\": source,\r\n \"date\": date,\r\n \"link\": link,\r\n \"author\": author,\r\n \"content\": content\r\n }\r\n```", "What's `_URL` ?\r\n\r\nIt looks like you are downloading an archive.\r\nTherefore you may need to get to the file path using `filepath = os.path.join(datadir, \"actual_file_name_inside_the_downloaded_archive\")`", "@lhoestq you were 100% right. Thank you. All fixed", "@lhoestq ping!", "@lhoestq added the remaining 17 batches and modified the readme.md to reflect that + resolved the camel case comment", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1564", "html_url": "https://github.com/huggingface/datasets/pull/1564", "diff_url": "https://github.com/huggingface/datasets/pull/1564.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1564.patch", "merged_at": "2020-12-22T09:51:04" }
1,564
true
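A minimal Python sketch of the fix discussed in the thread above — joining the extracted directory with the concrete file inside the archive. The `_URL` value comes from the thread; the inner file name `2015-07-21.json` is an assumption made only for illustration, and the method body is shown outside its builder class for brevity.

```python
import os

import datasets

# URL taken from the thread above; the inner file name below is assumed for illustration.
_URL = "https://github.com/parallelfold/SaudiNewsNet/raw/master/dataset/2015-07-21.zip"


def _split_generators(self, dl_manager):
    # download_and_extract returns the directory the archive was extracted into,
    # not the data file itself, so the file name must be joined explicitly.
    datadir = dl_manager.download_and_extract(_URL)
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            gen_kwargs={
                "filepath": os.path.join(datadir, "2015-07-21.json"),  # assumed inner file name
                "split": "train",
            },
        )
    ]
```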
adding tmu-gfm-dataset
Adding TMU-GFM-Dataset for Grammatical Error Correction. https://github.com/tmu-nlp/TMU-GFM-Dataset A dataset for GEC metrics with manual evaluations of grammaticality, fluency, and meaning preservation for system outputs. More detail about the creation of the dataset can be found in [Yoshimura et al. (2020)](https://www.aclweb.org/anthology/2020.coling-main.573.pdf).
https://github.com/huggingface/datasets/pull/1563
[ "@lhoestq Thank you for your code review! I think I could do the necessary corrections. Could you please check it again when you have time?", "Thank you for merging!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1563", "html_url": "https://github.com/huggingface/datasets/pull/1563", "diff_url": "https://github.com/huggingface/datasets/pull/1563.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1563.patch", "merged_at": "2020-12-21T10:07:13" }
1,563
true
Add dataset COrpus of Urdu News TExt Reuse (COUNTER).
https://github.com/huggingface/datasets/pull/1562
[ "Just a small revision from simon's review: 20KB for the dummy_data.zip is fine, you can keep them this way.", "Also the CI is failing because of an error `tests/test_file_utils.py::TempSeedTest::test_tensorflow` that is not related to your dataset and is fixed on master. You can ignore it", "merging since the Ci is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1562", "html_url": "https://github.com/huggingface/datasets/pull/1562", "diff_url": "https://github.com/huggingface/datasets/pull/1562.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1562.patch", "merged_at": "2020-12-21T13:14:46" }
1,562
true
Lama
This is the LAMA dataset for probing facts and common sense from language models. See https://github.com/facebookresearch/LAMA for more details.
https://github.com/huggingface/datasets/pull/1561
[ "Let me know why the pyarrow test is failing. For one of the config \"trex\", I had to load an initial datafile for a dictionary which is used to augment the rest of the datasets. In the dummy data, the dictionary file was truncated so I had to fudge that. I'm not sure if that is the issue.\r\n", "@ontocord it just needs a rerun and it will be good to go.", "THanks @tanmoyio. How do I do a rerun?", "@ontocord contributor can’t rerun it, the maintainers will rerun it, it may take lil bit of time as there are so many PRs left to be reviewed and merged ", "@lhoestq not sure why it is failing. i've made all modifications. ", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1561", "html_url": "https://github.com/huggingface/datasets/pull/1561", "diff_url": "https://github.com/huggingface/datasets/pull/1561.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1561.patch", "merged_at": "2020-12-28T09:51:47" }
1,561
true
Adding the BrWaC dataset
Adding the BrWaC dataset, a large corpus of Portuguese language texts
https://github.com/huggingface/datasets/pull/1560
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1560", "html_url": "https://github.com/huggingface/datasets/pull/1560", "diff_url": "https://github.com/huggingface/datasets/pull/1560.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1560.patch", "merged_at": "2020-12-18T15:56:55" }
1,560
true
adding dataset card information to CONTRIBUTING.md
Added a documentation line and a link to the full sprint guide in the "How to add a dataset" section, a section on how to contribute to the dataset card of an existing dataset, and a thank-you note at the end :hugs:
https://github.com/huggingface/datasets/pull/1559
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1559", "html_url": "https://github.com/huggingface/datasets/pull/1559", "diff_url": "https://github.com/huggingface/datasets/pull/1559.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1559.patch", "merged_at": "2020-12-14T17:55:03" }
1,559
true
Adding Igbo NER data
This PR adds the Igbo NER dataset. Data: https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_ner
https://github.com/huggingface/datasets/pull/1558
[ "Thanks for the PR @purvimisal. \r\n\r\nFew comments below. ", "Hi, @lhoestq Thank you for the review. I have made all the changes. PTAL! ", "the CI error is not related to your dataset, merging" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1558", "html_url": "https://github.com/huggingface/datasets/pull/1558", "diff_url": "https://github.com/huggingface/datasets/pull/1558.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1558.patch", "merged_at": "2020-12-21T14:38:20" }
1,558
true
HindEncorp again commited
https://github.com/huggingface/datasets/pull/1557
[ "Yes this has the right files!!!\r\n\r\nI'll close the previous one then :) \r\n\r\nNow to pass the tests, you will need to:\r\n- `make style` and run `flake8 datasets` from your repository root directory\r\n- fix the dummy data\r\n\r\nDid you generate the dummy data with the auto-generation tool (see the guide) or manually?", "manually with the tool, it is not able to create", "Cool, in that case you need to pay special attention to the directory structure given to you by the tool, most failures are because the files are in the wrong directory or at the wrong level :) \r\n\r\nAlso, make sure that the tests pass locally before pushing to the branch, it should help you get the structure right ;) ", "yes I have give proper directory structure datasets/hind_encorp/dummy/0.0.0/dummy_data.zip but in my dummy_data.zip only 1 file hind_encorp.plaintext is present because the dataset I got has only 1 file with both English and Hindi languages on 1 file itself may be this is causing issue", "Looks like the name of the file is the issue here: you have a file called `hindencorp05.plaintext`, but it should be called `hindencorp05.plaintext.gz%3Fsequence%3D3%26isAllowed%3Dy`. You just have to rename it to pass the test:\r\n```\r\ncd datasets/hind_encorp/dummy/0.0.0\r\nrm -rf dummy_data\r\nunzip dummy_data.zip\r\nrm dummy_data.zip\r\nmv dummy_data/hindencorp05.plaintext \"dummy_data/hindencorp05.plaintext.gz%3Fsequence%3D3%26isAllowed%3Dy\"\r\nzip -r dummy_data.zip dummy_data \r\n```\r\n\r\nThen **once you pass the tests locally** you just have to remember to `make style` and `flake8 datasets` to pass the style checks, and you should be good to go :hugs: \r\n\r\nFor reference, here are the instructions given by the tool:\r\n```\r\n$ python datasets-cli dummy_data datasets/hind_encorp/\r\n2020-12-14 13:16:26.824828: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory\r\n2020-12-14 13:16:26.824846: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\nUsing custom data configuration default\r\n\r\n==============================DUMMY DATA INSTRUCTIONS==============================\r\n- In order to create the dummy data for , please go into the folder 'datasets/hind_encorp/dummy/0.0.0' with `cd datasets/hind_encorp/dummy/0.0.0` . \r\n\r\n- Please create the following dummy data files 'dummy_data/hindencorp05.plaintext.gz%3Fsequence%3D3%26isAllowed%3Dy' from the folder 'datasets/hind_encorp/dummy/0.0.0'\r\n\r\n- For each of the splits 'train', make sure that one or more of the dummy data files provide at least one example \r\n\r\n- If the method `_generate_examples(...)` includes multiple `open()` statements, you might have to create other files in addition to 'dummy_data/hindencorp05.plaintext.gz%3Fsequence%3D3%26isAllowed%3Dy'. 
In this case please refer to the `_generate_examples(...)` method \r\n\r\n-After all dummy data files are created, they should be zipped recursively to 'dummy_data.zip' with the command `zip -r dummy_data.zip dummy_data/` \r\n\r\n-You can now delete the folder 'dummy_data' with the command `rm -r dummy_data` \r\n\r\n- To get the folder 'dummy_data' back for further changes to the dummy data, simply unzip dummy_data.zip with the command `unzip dummy_data.zip` \r\n\r\n- Make sure you have created the file 'dummy_data.zip' in 'datasets/hind_encorp/dummy/0.0.0' \r\n===================================================================================\r\n```\r\n", "all test passed locally. my new PR #1575 ", "Closing this one in favor of #1575 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1557", "html_url": "https://github.com/huggingface/datasets/pull/1557", "diff_url": "https://github.com/huggingface/datasets/pull/1557.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1557.patch", "merged_at": null }
1,557
true
add bswac
https://github.com/huggingface/datasets/pull/1556
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1556", "html_url": "https://github.com/huggingface/datasets/pull/1556", "diff_url": "https://github.com/huggingface/datasets/pull/1556.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1556.patch", "merged_at": "2020-12-18T15:14:27" }
1,556
true
Added Opus TedTalks
Dataset : http://opus.nlpl.eu/TedTalks.php
https://github.com/huggingface/datasets/pull/1555
[ "@lhoestq I saw some common changes you made on the other PR's (Similar Opus Datasets). I fixed those changes here. Can you please review it once ? \r\nThanks.", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1555", "html_url": "https://github.com/huggingface/datasets/pull/1555", "diff_url": "https://github.com/huggingface/datasets/pull/1555.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1555.patch", "merged_at": "2020-12-18T09:44:43" }
1,555
true
Opus CAPES added
Dataset : http://opus.nlpl.eu/CAPES.php
https://github.com/huggingface/datasets/pull/1554
[ "@lhoestq I saw some common changes you made on the other PR's (Similar Opus Datasets). I fixed those changes here. Can you please review it once ? \r\nThanks.", "Hi @rkc007 , thanks for the contribution.\r\nUnfortunately, the CAPES dataset has already been added here: #1307\r\nI'm closing the PR ", "@lhoestq FYI" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1554", "html_url": "https://github.com/huggingface/datasets/pull/1554", "diff_url": "https://github.com/huggingface/datasets/pull/1554.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1554.patch", "merged_at": null }
1,554
true
added air_dialogue
UPDATE2 (3797ce5): Updated for multi-configs. UPDATE (7018082): manually created the dummy datasets; all tests passed locally; pushed it to origin/master. DRAFT VERSION (57fdb20): (_no longer draft_) Uploaded the air_dialogue dataset. dummy_data creation was failing locally, since the original downloaded file has some nested folders. Pushing it since the tests with real data passed. Will re-check & update by manually creating some dummy_data.
https://github.com/huggingface/datasets/pull/1553
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1553", "html_url": "https://github.com/huggingface/datasets/pull/1553", "diff_url": "https://github.com/huggingface/datasets/pull/1553.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1553.patch", "merged_at": "2020-12-23T11:20:39" }
1,553
true
Added OPUS ParaCrawl
Dataset : http://opus.nlpl.eu/ParaCrawl.php
https://github.com/huggingface/datasets/pull/1552
[ "@lhoestq I saw some common changes you made on the other PR's (Similar Opus Datasets). I fixed those changes here. Can you please review it once ? \r\nThanks.", "@rkc007 @lhoestq just noticed a dataset named para_crawl has been added a long time ago: #91.", "They're not exactly the same so it's ok to have both.\r\n\r\nEspecially the `para_crawl` that already exists only uses the text from the ParaCrawl release 4.", "Could you regenerate the dataset_infos.json @rkc007 please ?\r\nIt looks like it has some issues due to the dataset class name change", "@SBrandeis Thank you for suggesting changes. I made the changes you suggested. \r\n\r\n@lhoestq I generated `dataset_infos.json` again. I ran both tests(Dummy & Real data) and it passed. Can you please review it again?", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1552", "html_url": "https://github.com/huggingface/datasets/pull/1552", "diff_url": "https://github.com/huggingface/datasets/pull/1552.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1552.patch", "merged_at": "2020-12-21T09:50:25" }
1,552
true
Monero
Biomedical Romanian dataset :)
https://github.com/huggingface/datasets/pull/1551
[ "Hi @iliemihai - you need to add the Readme file! Otherwise seems good. \r\nAlso don't forget to run `make style` & `flake8 datasets` locally, from the datasets folder", "@skyprince999 I will add the README.d for it. Thank you :D ", "Thanks for your contribution, @iliemihai. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1551", "html_url": "https://github.com/huggingface/datasets/pull/1551", "diff_url": "https://github.com/huggingface/datasets/pull/1551.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1551.patch", "merged_at": null }
1,551
true
Add offensive langauge dravidian dataset
https://github.com/huggingface/datasets/pull/1550
[ "Thanks much!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1550", "html_url": "https://github.com/huggingface/datasets/pull/1550", "diff_url": "https://github.com/huggingface/datasets/pull/1550.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1550.patch", "merged_at": "2020-12-18T14:25:30" }
1,550
true
Generics kb new branch
The dataset needs manual downloads, so I have created dummy data as well, but pytest on both real and dummy data is failing. I have completed the README, tags, and other required items. I will create the metadata JSON once the tests pass. Opening a PR while working with Yacine Jernite to resolve my pytest issues.
https://github.com/huggingface/datasets/pull/1549
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1549", "html_url": "https://github.com/huggingface/datasets/pull/1549", "diff_url": "https://github.com/huggingface/datasets/pull/1549.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1549.patch", "merged_at": "2020-12-21T13:55:09" }
1,549
true
Fix `🤗Datasets` - `tfds` differences link + a few aesthetics
https://github.com/huggingface/datasets/pull/1548
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1548", "html_url": "https://github.com/huggingface/datasets/pull/1548", "diff_url": "https://github.com/huggingface/datasets/pull/1548.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1548.patch", "merged_at": "2020-12-15T12:55:27" }
1,548
true
Adding PolEval2019 Machine Translation Task dataset
Facing an error with pytest on the real training data. The dummy data test is passing. The README still has to be updated.
https://github.com/huggingface/datasets/pull/1547
[ "**NOTE:**\r\n\r\n- Train and Dev: Manually downloaded (auto download is repeatedly giving `ConnectionError` for one of the files), Test: Auto Download\r\n- Dummy test is passing\r\n- The json file has been created with hard-coded paths for the manual downloads _(hardcoding has been removed from the final uploaded script)_\r\n- datasets-cli is still **failing** . It is not picking the right directory for the config. For instance, my folder structure is as below:\r\n ```\r\n ~/Downloads/Data/\r\n |--- English-to-Polish\r\n |--- (corresponding files) \r\n |--- Russian-Polish\r\n |--- (corresponding files) \r\n```\r\n\r\nWhen ru-pl is selected, ideally it has to search in Russian-Polish folder, but it is searching in '/Downloads/Data/' folder and hence getting a FileNotFound error.\r\n\r\nThe command run is \r\n`python datasets-cli test datasets/poleval2019_mt/ --save_infos --all_configs --data_dir ~/Downloads/Data/\r\n`\r\n", "Hi !\r\nThanks for the changes :)\r\n\r\nThe only error left is the dummy data. Since we changed for standard downloads instead of manual downloads its structure changed. Fortunately you can auto-generate the dummy data with this command:\r\n\r\n```\r\ndatasets-cli dummy_data ./datasets/poleval2019_mt --auto_generate --match_text_files \"*\"\r\n```\r\n\r\nCan you regenerate the dummy data using this command please ?", "Thank you for the help @lhoestq !! I was generating the dummy dataset in a wrong way! That _--match_text_files \"*\"_ did the trick! Now all the tests have passed! :-)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1547", "html_url": "https://github.com/huggingface/datasets/pull/1547", "diff_url": "https://github.com/huggingface/datasets/pull/1547.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1547.patch", "merged_at": "2020-12-21T16:13:21" }
1,547
true
Add persian ner dataset
Adding the following dataset: https://github.com/HaniehP/PersianNER
https://github.com/huggingface/datasets/pull/1546
[ "HI @SBrandeis. Thanks for all the comments - very helpful. I realised that the tests had failed and had been trying to figure out what was causing them to do so. All the tests pass when I run the load_real_dataset test however when I run `RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_persian_ner` I get the below error. One thing to note is that the automated dummy data file generation failed when I tried to run it so I manually created the dummy data and ensured that the last line in the file was an empty line as per your comments. Would appreciate your thoughts on what might be causing this:\r\n\r\n```\r\n__________________________________________________ LocalDatasetTest.test_load_dataset_all_configs_persian_ner __________________________________________________\r\n\r\nself = <tests.test_dataset_common.LocalDatasetTest testMethod=test_load_dataset_all_configs_persian_ner>, dataset_name = 'persian_ner'\r\n\r\n @slow\r\n def test_load_dataset_all_configs(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True)\r\n\r\ntests/test_dataset_common.py:237: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:198: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n--------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------\r\nDownloading and preparing dataset persian_ner/fold1 (download: 1.00 MiB, generated: 1.00 MiB, post-processed: Unknown size, total: 2.00 MiB) to /var/folders/nk/yp5_m5c95cnc0cm_vbd7h7g80000gn/T/tmpzh495aac/persian_ner/fold1/1.1.0...\r\nDataset persian_ner downloaded and prepared to /var/folders/nk/yp5_m5c95cnc0cm_vbd7h7g80000gn/T/tmpzh495aac/persian_ner/fold1/1.1.0. 
Subsequent calls will reuse this data.\r\n--------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------\r\n \r\n======================================================================= warnings summary =======================================================================\r\nenv/lib/python3.7/site-packages/tensorflow/python/autograph/utils/testing.py:21\r\n /Users/karimfoda/Documents/STUDIES/PYTHON/DATASETS/env/lib/python3.7/site-packages/tensorflow/python/autograph/utils/testing.py:21: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\n\r\nenv/lib/python3.7/site-packages/apache_beam/typehints/typehints.py:693\r\n /Users/karimfoda/Documents/STUDIES/PYTHON/DATASETS/env/lib/python3.7/site-packages/apache_beam/typehints/typehints.py:693: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\r\n if not isinstance(type_params, collections.Iterable):\r\n\r\nenv/lib/python3.7/site-packages/apache_beam/typehints/typehints.py:532\r\n /Users/karimfoda/Documents/STUDIES/PYTHON/DATASETS/env/lib/python3.7/site-packages/apache_beam/typehints/typehints.py:532: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\r\n if not isinstance(type_params, (collections.Sequence, set)):\r\n\r\nenv/lib/python3.7/site-packages/elasticsearch/compat.py:38\r\n /Users/karimfoda/Documents/STUDIES/PYTHON/DATASETS/env/lib/python3.7/site-packages/elasticsearch/compat.py:38: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\r\n from collections import Mapping\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/warnings.html\r\n=================================================================== short test summary info ====================================================================\r\nFAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_persian_ner - AssertionError: False is not true\r\n```", "Thanks @SBrandeis. It turns out the error was because I had to manually increase the n_lines variable to get the dummy data generation to cover at least one example. Should all be working okay now.", "Great, thanks!\r\nIt looks good to me, I'll let @lhoestq take over" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1546", "html_url": "https://github.com/huggingface/datasets/pull/1546", "diff_url": "https://github.com/huggingface/datasets/pull/1546.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1546.patch", "merged_at": "2020-12-23T09:53:03" }
1,546
true
add hrwac
https://github.com/huggingface/datasets/pull/1545
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1545", "html_url": "https://github.com/huggingface/datasets/pull/1545", "diff_url": "https://github.com/huggingface/datasets/pull/1545.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1545.patch", "merged_at": "2020-12-18T13:35:17" }
1,545
true
Added Wiki Summary Dataset
Wiki Summary: Dataset extracted from Persian Wikipedia into the form of articles and highlights. Link: https://github.com/m3hrdadfi/wiki-summary
https://github.com/huggingface/datasets/pull/1544
[ "@lhoestq why my tests are not running?", "Maybe an issue with CircleCI, let me try to make them run", "The CI error `tests/test_file_utils.py::TempSeedTest::test_tensorflow` is not related to this dataset and is fixed on master, you can ignore it", "what I need to do now", "Now the delimiter of the csv reader is fixed, thanks :) \r\n\r\nI just added a comment suggesting to try using actual URLS instead of a manual download if possible.\r\nThis would make things more convenient for the users. Can you try using the `dl_manager` to download the train/dev/test csv files instead of requiring manual download ?", "Also pinging @m3hrdadfi , since I just noticed that there's already a dataset script that was created 3 weeks ago for this dataset here: https://github.com/m3hrdadfi/wiki-summary/tree/master/datasets/wiki_summary_persian", "@lhoestq I am getting this error while generating the dummy data\r\n![Screenshot (181)](https://user-images.githubusercontent.com/33005287/102628819-50a40080-4170-11eb-9e96-efce74b45ff4.png)\r\n", "Can you try by adding the flag `--match_text_files \"*\"` ?", "now it worked", "@lhoestq pytest on dummy data passed, but on real data raising this issue\r\n![Screenshot (196)](https://user-images.githubusercontent.com/33005287/102630784-fa848c80-4172-11eb-9f7e-e5a58dcf7abe.png)\r\nhow to resolve it\r\n", "I see ! This is because the library did some verification to make sure it downloads the same files as in the first time you ran the `datasets-cli test` command with `--save_infos`. Since we're now downloading files, the verification fails. \r\n\r\nTo fix that you just need to regenerate the dataset_infos.json file:\r\n```\r\ndatasets-cli test ./datasets/wiki_summary --save_infos --all_configs --ignore_verifications\r\n```", "@lhoestq I have modified everything and It worked fine, dont know why it is not passing the tests ", "Awesome thank you !\r\n\r\nThe CI error `tests/test_file_utils.py::TempSeedTest::test_tensorflow` is not related to your dataset and is fixed on master.\r\nYou can ignore it :) ", "@lhoestq anything left to do ?", "The dataset script is all good now ! The dummy data and the dataset_infos.json file are good too :) ", "@lhoestq yay, thanks for helping me out , ", "merging since the CI is fixed on master", "@tanmoyio @lhoestq \r\n\r\nThank you both!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1544", "html_url": "https://github.com/huggingface/datasets/pull/1544", "diff_url": "https://github.com/huggingface/datasets/pull/1544.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1544.patch", "merged_at": "2020-12-18T16:17:18" }
1,544
true
adding HindEncorp
adding Hindi Wikipedia corpus
https://github.com/huggingface/datasets/pull/1543
[ "@lhoestq I have created a new PR by reforking and creating a new branch ", "@rahul-art unfortunately this didn't quite work, here's how you can try again:\r\n- `git checkout master` to go back to the main branch\r\n- `git pull upstream master` to make it up to date\r\n- `git checkout -b add_hind_encorp` to create a new branch\r\n\r\nThen add the dataset script, `README.md`, `dummy_data.zip`, and `dataset_infos.json` to the tracked files for the branch with `git add` (please add all of these files individually, NOT the whole directory as we don't want the other data files)\r\nThen after you have passed the style checks and the local tests, do:\r\n- `git commit . -m initial_commit`\r\n- `git push --set-upstream origin add_hind_encorp`\r\n\r\nThen you can go to this branch on the WebApp and open a new PR", "@yjernite #1557 created new PR" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1543", "html_url": "https://github.com/huggingface/datasets/pull/1543", "diff_url": "https://github.com/huggingface/datasets/pull/1543.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1543.patch", "merged_at": null }
1,543
true
fix typo readme
https://github.com/huggingface/datasets/pull/1542
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1542", "html_url": "https://github.com/huggingface/datasets/pull/1542", "diff_url": "https://github.com/huggingface/datasets/pull/1542.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1542.patch", "merged_at": "2020-12-13T17:16:40" }
1,542
true
connection issue while downloading data
Hi I am running my codes on google cloud, and I am getting this error resulting in the failure of the codes when trying to download the data, could you assist me to solve this? also as a temporary solution, could you tell me how I can increase the number of retries and timeout to at least let the models run for now. thanks ``` Traceback (most recent call last): File "finetune_t5_trainer.py", line 361, in <module> main() File "finetune_t5_trainer.py", line 269, in main add_prefix=False if training_args.train_adapters else True) File "/workdir/seq2seq/data/tasks.py", line 70, in get_dataset dataset = self.load_dataset(split=split) File "/workdir/seq2seq/data/tasks.py", line 306, in load_dataset return datasets.load_dataset('glue', 'cola', split=split) File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 263, in prepare_module head_hf_s3(path, filename=name, dataset=dataset) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 200, in head_hf_s3 return http_head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset)) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 403, in http_head url, proxies=proxies, headers=headers, cookies=cookies, allow_redirects=allow_redirects, timeout=timeout File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 104, in head return request('head', url, **kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 542, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 655, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/adapters.py", line 504, in send raise ConnectTimeout(e, request=request) requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/glue/glue.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f47db511e80>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)')) ```
https://github.com/huggingface/datasets/issues/1541
[ "could you tell me how I can avoid download, by pre-downloading the data first, put them in a folder so the code does not try to redownload? could you tell me the path to put the downloaded data, and how to do it? thanks\r\n@lhoestq ", "Does your instance have an internet connection ?\r\n\r\nIf you don't have an internet connection you'll need to have the dataset on the instance disk.\r\nTo do so first download the dataset on another machine using `load_dataset` and then you can save it in a folder using `my_dataset.save_to_disk(\"path/to/folder\")`. Once the folder is copied on your instance you can reload the dataset with `datasets.load_from_disk(\"path/to/folder\")`" ]
null
1,541
false
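A minimal Python sketch of the offline workflow suggested in the comments above: download the dataset once on a machine with internet access, copy the saved folder to the instance, and reload it from disk. The folder name `glue_cola_local` is an arbitrary choice for this sketch.

```python
from datasets import load_dataset, load_from_disk

# On a machine with internet access: download once and serialize to disk.
dataset = load_dataset("glue", "cola")
dataset.save_to_disk("glue_cola_local")

# After copying the folder to the offline instance: reload without any network calls.
dataset = load_from_disk("glue_cola_local")
print(dataset["train"][0])
```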
added TTC4900: A Benchmark Data for Turkish Text Categorization dataset
This PR adds the TTC4900 dataset, a Turkish Text Categorization dataset by me and @basakbuluz. Homepage: [https://www.kaggle.com/savasy/ttc4900](https://www.kaggle.com/savasy/ttc4900) Point of Contact: [Savaş Yıldırım](mailto:savasy@gmail.com) / @savasy
https://github.com/huggingface/datasets/pull/1540
[ "@lhoestq, can you help with creating dummy_data?\r\n", "Hi @yavuzKomecoglu did you manage to build the dummy data ?", "> Hi @yavuzKomecoglu did you manage to build the dummy data ?\r\n\r\nHi, sorry for the return. I've created dummy_data.zip manually.", "> Nice thank you !\r\n> \r\n> Before we merge can you fill the two sections of the dataset card I suggested ?\r\n> And also remove one remaining print statement\r\n\r\nI updated your suggestions. Thank you very much for your support.", "I think you accidentally pushed the readme of another dataset (name_to_nation).\r\nI removed it so you have to `git pull`\r\n\r\nBecause of that I guess your changes about the ttc4900 was not included.\r\nFeel free to ping me once they're added\r\n\r\n\r\n", "> I think you accidentally pushed the readme of another dataset (name_to_nation).\r\n> I removed it so you have to `git pull`\r\n> \r\n> Because of that I guess your changes about the ttc4900 was not included.\r\n> Feel free to ping me once they're added\r\n\r\nI did `git pull` and updated readme **ttc4900**.", "merging since the Ci is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1540", "html_url": "https://github.com/huggingface/datasets/pull/1540", "diff_url": "https://github.com/huggingface/datasets/pull/1540.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1540.patch", "merged_at": "2020-12-18T10:09:01" }
1,540
true
Added Wiki Asp dataset
Hello, I have added Wiki Asp dataset. Please review the PR.
https://github.com/huggingface/datasets/pull/1539
[ "> Awesome thank you !\r\n> \r\n> I just left one comment.\r\n> \r\n> Also it looks like the dummy_data.zip files are quite big (around 500KB each)\r\n> Can you try to reduce their sizes please ? Ideally they should be <20KB each\r\n> \r\n> To do so feel free to take a look inside them and in the jsonl files only keep 1 or 2 samples instead of 5 and also remove big chunks of text to only keep a few passages.\r\n\r\nThanks, I have updated the dummy data to keep each domain <20/30KB.", "> > Awesome thank you !\r\n> > I just left one comment.\r\n> > Also it looks like the dummy_data.zip files are quite big (around 500KB each)\r\n> > Can you try to reduce their sizes please ? Ideally they should be <20KB each\r\n> > To do so feel free to take a look inside them and in the jsonl files only keep 1 or 2 samples instead of 5 and also remove big chunks of text to only keep a few passages.\r\n> \r\n> Thanks, I have updated the dummy data to keep each domain <20/30KB.\r\n\r\nLooks like this branch has other commits. I will open a new PR with suggested changes.", "opened a new PR #1612 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1539", "html_url": "https://github.com/huggingface/datasets/pull/1539", "diff_url": "https://github.com/huggingface/datasets/pull/1539.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1539.patch", "merged_at": null }
1,539
true
tweets_hate_speech_detection
https://github.com/huggingface/datasets/pull/1538
[ "Hi @lhoestq I have added this new dataset for tweet's hate speech detection. \r\n\r\nPlease if u could review it. \r\n\r\nThank you", "Hi @darshan-gandhi have you add a chance to take a look at my suggestions ?\r\n\r\nFeel free to ping me when you're ready for the final review", "Closing in favor of #1607" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1538", "html_url": "https://github.com/huggingface/datasets/pull/1538", "diff_url": "https://github.com/huggingface/datasets/pull/1538.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1538.patch", "merged_at": null }
1,538
true
added ohsumed
UPDATE2: PR passed all tests. Now waiting for review. UPDATE: pushed a new version; fingers crossed that it completes all the tests! :) If it passes all tests then it's no longer a draft version. This is a draft version.
https://github.com/huggingface/datasets/pull/1537
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1537", "html_url": "https://github.com/huggingface/datasets/pull/1537", "diff_url": "https://github.com/huggingface/datasets/pull/1537.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1537.patch", "merged_at": "2020-12-17T18:28:16" }
1,537
true
Add Hippocorpus Dataset
https://github.com/huggingface/datasets/pull/1536
[ "> Before we merge can you try to reduce the size of the dummy_data.zip file ?\r\n> \r\n> To do so feel free to only keep a few lines of the csv files ans also remove unnecessary chunks of texts (for example keep only the first sentences of a story).\r\n\r\nHi @lhoestq, I have reduced the size of the dummy_data.zip file by making the necessary changes you had suggested. ", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1536", "html_url": "https://github.com/huggingface/datasets/pull/1536", "diff_url": "https://github.com/huggingface/datasets/pull/1536.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1536.patch", "merged_at": "2020-12-15T13:40:11" }
1,536
true
Adding Igbo monolingual dataset
This PR adds the Igbo Monolingual dataset. Data: https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_monoling Paper: https://arxiv.org/abs/2004.00648
https://github.com/huggingface/datasets/pull/1535
[ "@lhoestq Thank you for the review. I have made all the changes you mentioned. PTAL! " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1535", "html_url": "https://github.com/huggingface/datasets/pull/1535", "diff_url": "https://github.com/huggingface/datasets/pull/1535.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1535.patch", "merged_at": "2020-12-21T14:39:48" }
1,535
true
adding dataset for diplomacy detection
https://github.com/huggingface/datasets/pull/1534
[ "Requested changes made and new PR submitted here: https://github.com/huggingface/datasets/pull/1580 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1534", "html_url": "https://github.com/huggingface/datasets/pull/1534", "diff_url": "https://github.com/huggingface/datasets/pull/1534.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1534.patch", "merged_at": null }
1,534
true
add id_panl_bppt, a parallel corpus for en-id
Parallel Text Corpora for English - Indonesian
https://github.com/huggingface/datasets/pull/1533
[ "Hi @lhoestq, thanks for the review. I will have a look and update it accordingly.", "Strange error message :-)\r\n\r\n```\r\n> tf_context = tf.python.context.context() # eager mode context\r\nE AttributeError: module 'tensorflow' has no attribute 'python'\r\n```\r\n" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1533", "html_url": "https://github.com/huggingface/datasets/pull/1533", "diff_url": "https://github.com/huggingface/datasets/pull/1533.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1533.patch", "merged_at": "2020-12-21T10:40:36" }
1,533
true
adding hate-speech-and-offensive-language
https://github.com/huggingface/datasets/pull/1532
[ "made suggested changes and a new PR created here : https://github.com/huggingface/datasets/pull/1597" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1532", "html_url": "https://github.com/huggingface/datasets/pull/1532", "diff_url": "https://github.com/huggingface/datasets/pull/1532.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1532.patch", "merged_at": null }
1,532
true
adding hate-speech-and-offensive-language
https://github.com/huggingface/datasets/pull/1531
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1531", "html_url": "https://github.com/huggingface/datasets/pull/1531", "diff_url": "https://github.com/huggingface/datasets/pull/1531.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1531.patch", "merged_at": null }
1,531
true
add indonlu benchmark datasets
The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for the Indonesian language. There are 12 datasets in IndoNLU. This is a new clean PR from [#1322](https://github.com/huggingface/datasets/pull/1322)
https://github.com/huggingface/datasets/pull/1530
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1530", "html_url": "https://github.com/huggingface/datasets/pull/1530", "diff_url": "https://github.com/huggingface/datasets/pull/1530.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1530.patch", "merged_at": "2020-12-16T11:11:43" }
1,530
true
Ro sent
Movie reviews dataset for the Romanian language.
https://github.com/huggingface/datasets/pull/1529
[ "Hi @iliemihai, it looks like this PR holds changes from your previous PR #1493 .\r\nWould you mind removing them from the branch please ?", "@SBrandeis I am sorry. Yes I will remove them. Thank you :D ", "Hi @lhoestq @SBrandeis @iliemihai\r\n\r\nIs this still in progress or can I take over this one?\r\n\r\nThanks,\r\nGunjan", "Hi,\r\nWhile trying to add this dataset, I found some potential issues. \r\nThe homepage mentioned is : https://github.com/katakonst/sentiment-analysis-tensorflow/tree/master/datasets/ro/, where the dataset is different from the URLs: https://raw.githubusercontent.com/dumitrescustefan/Romanian-Transformers/examples/examples/sentiment_analysis/ro/train.csv. It is unclear which dataset is \"correct\". I checked the total examples (train+test) in both places and they do not match.", "We should use the data from dumitrescustefan and set the homepage to his repo IMO, since he's first author of the paper of the dataset.", "Hi @lhoestq,\r\n\r\nCool, I'll get working on it.\r\n\r\nThanks", "Hi @lhoestq, \r\n\r\nThis PR can be closed.", "Closing in favor of #2011 \r\nThanks again for adding it !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1529", "html_url": "https://github.com/huggingface/datasets/pull/1529", "diff_url": "https://github.com/huggingface/datasets/pull/1529.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1529.patch", "merged_at": null }
1,529
true
initial commit for Common Crawl Domain Names
https://github.com/huggingface/datasets/pull/1528
[ "Thank you :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1528", "html_url": "https://github.com/huggingface/datasets/pull/1528", "diff_url": "https://github.com/huggingface/datasets/pull/1528.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1528.patch", "merged_at": "2020-12-18T10:22:32" }
1,528
true
Add : Conv AI 2 (Messed up original PR)
@lhoestq Sorry, I messed up the previous 2 PRs -> https://github.com/huggingface/datasets/pull/1462 -> https://github.com/huggingface/datasets/pull/1383, so I created a new one. Also, everything is fixed in this PR. Can you please review it? Thanks in advance.
https://github.com/huggingface/datasets/pull/1527
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1527", "html_url": "https://github.com/huggingface/datasets/pull/1527", "diff_url": "https://github.com/huggingface/datasets/pull/1527.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1527.patch", "merged_at": "2020-12-13T19:14:24" }
1,527
true
added Hebrew thisworld corpus
added corpus from https://thisworld.online/, https://github.com/thisworld1/thisworld.online
https://github.com/huggingface/datasets/pull/1526
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1526", "html_url": "https://github.com/huggingface/datasets/pull/1526", "diff_url": "https://github.com/huggingface/datasets/pull/1526.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1526.patch", "merged_at": "2020-12-18T10:47:30" }
1,526
true
Adding a second branch for Atomic to fix git errors
Adding the Atomic common sense dataset. See https://homes.cs.washington.edu/~msap/atomic/
https://github.com/huggingface/datasets/pull/1525
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1525", "html_url": "https://github.com/huggingface/datasets/pull/1525", "diff_url": "https://github.com/huggingface/datasets/pull/1525.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1525.patch", "merged_at": "2020-12-28T15:51:11" }
1,525
true
ADD: swahili dataset for language modeling
Add a corpus for Swahili language modelling. All tests passed locally. README updated with all information available.
https://github.com/huggingface/datasets/pull/1524
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1524", "html_url": "https://github.com/huggingface/datasets/pull/1524", "diff_url": "https://github.com/huggingface/datasets/pull/1524.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1524.patch", "merged_at": "2020-12-17T16:37:16" }
1,524
true
Add eHealth Knowledge Discovery dataset
This Spanish dataset can be used to mine knowledge from unstructured health texts. In particular, for: - Entity recognition - Relation extraction
https://github.com/huggingface/datasets/pull/1523
[ "Thank you very much for your review @lewtun ! \r\n\r\nI've updated the script metadata, created the README and fixed the two details you commented.\r\n\r\nReady for another review! 🤗 ", "I've updated the task tag as we discussed and also added a couple of lines of code to make sure I include all the available examples.\r\n\r\nThank you again!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1523", "html_url": "https://github.com/huggingface/datasets/pull/1523", "diff_url": "https://github.com/huggingface/datasets/pull/1523.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1523.patch", "merged_at": "2020-12-17T16:48:56" }
1,523
true
Add semeval 2020 task 11
Adding in propaganda detection task (task 11) from Sem Eval 2020
https://github.com/huggingface/datasets/pull/1522
[ "@SBrandeis : Thanks for the feedback! Just updated to use context manager for the `open`s and removed the placeholder text from the `README`!", "Great, thanks @ZacharySBrown !\r\nFailing tests seem to be unrelated to your changes, merging the current master branch into yours should fix them.\r\n" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1522", "html_url": "https://github.com/huggingface/datasets/pull/1522", "diff_url": "https://github.com/huggingface/datasets/pull/1522.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1522.patch", "merged_at": "2020-12-15T16:48:52" }
1,522
true
Atomic
This is the ATOMIC common sense dataset. More info can be found here: * README.md still to be created.
https://github.com/huggingface/datasets/pull/1521
[ "I had to create a new PR to fix git errors. See: https://github.com/huggingface/datasets/pull/1525\r\n\r\nI'm closing this PR. " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1521", "html_url": "https://github.com/huggingface/datasets/pull/1521", "diff_url": "https://github.com/huggingface/datasets/pull/1521.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1521.patch", "merged_at": null }
1,521
true
ru_reviews dataset adding
RuReviews: An Automatically Annotated Sentiment Analysis Dataset for Product Reviews in Russian
https://github.com/huggingface/datasets/pull/1520
[ "Hi @lhoestq \r\n\r\nI have added the readme as well \r\n\r\nPlease do have a look at it when suitable ", "Chatted with @darshan-gandhi on Slack about parsing examples into a separate text and sentiment field", "Thanks for your contribution, @darshan-gandhi. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1520", "html_url": "https://github.com/huggingface/datasets/pull/1520", "diff_url": "https://github.com/huggingface/datasets/pull/1520.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1520.patch", "merged_at": null }
1,520
true
Initial commit for AQuaMuSe
There is an issue in the generation of the dummy data. Tests on real data have passed locally.
https://github.com/huggingface/datasets/pull/1519
[ "@yjernite Thank you for your help, generating the dummy data 🤗 Having that all the tests have passed 👍🏻", "merging since the CI is fixed on master", "Thank you :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1519", "html_url": "https://github.com/huggingface/datasets/pull/1519", "diff_url": "https://github.com/huggingface/datasets/pull/1519.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1519.patch", "merged_at": "2020-12-17T17:03:30" }
1,519
true
Add twi text
Add Twi texts
https://github.com/huggingface/datasets/pull/1518
[ "Hii please follow me", "thank you" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1518", "html_url": "https://github.com/huggingface/datasets/pull/1518", "diff_url": "https://github.com/huggingface/datasets/pull/1518.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1518.patch", "merged_at": "2020-12-13T18:53:37" }
1,518
true
Kd conv smangrul
https://github.com/huggingface/datasets/pull/1517
[ "Hii please follow me", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1517", "html_url": "https://github.com/huggingface/datasets/pull/1517", "diff_url": "https://github.com/huggingface/datasets/pull/1517.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1517.patch", "merged_at": "2020-12-16T14:56:14" }
1,517
true
adding wrbsc
https://github.com/huggingface/datasets/pull/1516
[ "@lhoestq thanks for the comments! Should be fixed in the latest commit, I assume the CI errors are unrelated. ", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1516", "html_url": "https://github.com/huggingface/datasets/pull/1516", "diff_url": "https://github.com/huggingface/datasets/pull/1516.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1516.patch", "merged_at": "2020-12-18T09:41:33" }
1,516
true
Add yoruba text
Adding Yoruba text C3
https://github.com/huggingface/datasets/pull/1515
[ "closing since #1379 got merged" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1515", "html_url": "https://github.com/huggingface/datasets/pull/1515", "diff_url": "https://github.com/huggingface/datasets/pull/1515.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1515.patch", "merged_at": null }
1,515
true
how to get all the options of a property in datasets
Hi, could you tell me how I can get all the unique options of a property of a dataset? For instance, in the case of boolq, if the user wants to know which unique labels it has, is there a way to access the unique labels without fetching all the training-data labels and then forming a set? Thanks.
https://github.com/huggingface/datasets/issues/1514
[ "In a dataset, labels correspond to the `ClassLabel` feature that has the `names` property that returns string represenation of the integer classes (or `num_classes` to get the number of different classes).", "I think the `features` attribute of the dataset object is what you are looking for:\r\n```\r\n>>> dataset.features\r\n{'sentence1': Value(dtype='string', id=None),\r\n 'sentence2': Value(dtype='string', id=None),\r\n 'label': ClassLabel(num_classes=2, names=['not_equivalent', 'equivalent'], names_file=None, id=None),\r\n 'idx': Value(dtype='int32', id=None)\r\n}\r\n>>> dataset.features[\"label\"].names\r\n['not_equivalent', 'equivalent']\r\n```\r\n\r\nFor reference: https://huggingface.co/docs/datasets/exploring.html" ]
null
1,514
false
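A short Python sketch expanding on the answer above, using the same `glue`/`mrpc` example from the comments: `ClassLabel` features expose their label names directly, and `Dataset.unique()` covers plain columns without building a set by hand.

```python
from datasets import load_dataset

dataset = load_dataset("glue", "mrpc", split="train")

# For ClassLabel columns, the label names are stored in the feature itself —
# no need to scan the data and build a set.
label_feature = dataset.features["label"]
print(label_feature.names)        # ['not_equivalent', 'equivalent']
print(label_feature.num_classes)  # 2

# For any column, Dataset.unique() returns the distinct stored values.
print(dataset.unique("label"))    # e.g. [1, 0]
```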
app_reviews_by_users
Software Applications User Reviews
https://github.com/huggingface/datasets/pull/1513
[ "Hi @lhoestq \r\n\r\nI have added the readme file as well, please if you could check it once \r\n\r\nThank you " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1513", "html_url": "https://github.com/huggingface/datasets/pull/1513", "diff_url": "https://github.com/huggingface/datasets/pull/1513.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1513.patch", "merged_at": "2020-12-14T20:45:24" }
1,513
true
Add Hippocorpus Dataset
https://github.com/huggingface/datasets/pull/1512
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1512", "html_url": "https://github.com/huggingface/datasets/pull/1512", "diff_url": "https://github.com/huggingface/datasets/pull/1512.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1512.patch", "merged_at": null }
1,512
true
poleval cyberbullying
https://github.com/huggingface/datasets/pull/1511
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1511", "html_url": "https://github.com/huggingface/datasets/pull/1511", "diff_url": "https://github.com/huggingface/datasets/pull/1511.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1511.patch", "merged_at": "2020-12-17T16:19:58" }
1,511
true
Add Dataset for (qa_srl)Question-Answer Driven Semantic Role Labeling
- Added tags, Readme file - Added code changes
https://github.com/huggingface/datasets/pull/1510
[ "Hii please follow me", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1510", "html_url": "https://github.com/huggingface/datasets/pull/1510", "diff_url": "https://github.com/huggingface/datasets/pull/1510.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1510.patch", "merged_at": "2020-12-17T16:06:22" }
1,510
true
Added dataset Makhzan
Need help with the dummy data.
https://github.com/huggingface/datasets/pull/1509
[ "The only CI error comes from \r\n```\r\nFAILED tests/test_file_utils.py::TempSeedTest::test_tensorflow\r\n```\r\n\r\nwhich is not related to your PR and is fixed on master.\r\n\r\nYou can ignore it", "@lhoestq I've made the changes. Please review and merge. \r\n\r\nI have a similar PR https://github.com/huggingface/datasets/pull/1562 for another dataset. I'll incorporate your comment about sorting and reducing dummy dataset size there.", "The CI raises an error `FAILED tests/test_file_utils.py::TempSeedTest::test_tensorflow` but it's not related to this dataset.\r\nThis issue is fixed on master", "You did all the work ;) thanks\r\n\r\nmerging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1509", "html_url": "https://github.com/huggingface/datasets/pull/1509", "diff_url": "https://github.com/huggingface/datasets/pull/1509.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1509.patch", "merged_at": "2020-12-16T15:04:51" }
1,509
true
Fix namedsplit docs
Fixes a broken link and `DatasetInfoMixin.split`'s docstring.
https://github.com/huggingface/datasets/pull/1508
[ "Hii please follow me", "Thanks @mariosasko!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1508", "html_url": "https://github.com/huggingface/datasets/pull/1508", "diff_url": "https://github.com/huggingface/datasets/pull/1508.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1508.patch", "merged_at": "2020-12-15T12:57:48" }
1,508
true