Columns:
- title: string (length 1 to 290)
- body: string (length 0 to 228k)
- html_url: string (length 46 to 51)
- comments: list
- pull_request: dict
- number: int64 (1 to 5.59k)
- is_pull_request: bool (2 classes)
Adding Enriched WebNLG dataset
This pull request adds the `en` and `de` versions of the [Enriched WebNLG](https://github.com/ThiagoCF05/webnlg) dataset.
https://github.com/huggingface/datasets/pull/1206
[ "Nice :) \r\n\r\ncould you add the tags and also remove all the dummy data files that are not zipped ? The diff currently shows 800 files changes xD", "Aaaaand it's rebase time - the new one is at #1264 !", "closing this one since a new PR was created" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1206", "html_url": "https://github.com/huggingface/datasets/pull/1206", "diff_url": "https://github.com/huggingface/datasets/pull/1206.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1206.patch", "merged_at": null }
1,206
true
add lst20 with manual download
Passed locally: ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_lst20 ``` Not sure how to test: ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_lst20 ``` ``` LST20 Corpus is a dataset for Thai language processing developed by National Electronics and Computer Technology Center (NECTEC), Thailand. It offers five layers of linguistic annotation: word boundaries, POS tagging, named entities, clause boundaries, and sentence boundaries. At a large scale, it consists of 3,164,002 words, 288,020 named entities, 248,181 clauses, and 74,180 sentences, and it is annotated with 16 distinct POS tags. All 3,745 documents are also annotated with one of 15 news genres. Given its sheer size, this dataset is considered large enough for developing joint neural models for NLP. Manual download at https://aiforthai.in.th/corpus.php ```
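For reference, a minimal sketch of how a manual-download dataset like this is typically loaded once the corpus has been obtained from https://aiforthai.in.th/corpus.php; the local path is a placeholder, and using `data_dir` to point `load_dataset` at a manually downloaded copy is an assumption about the final loading script rather than something stated in this PR.

```python
from datasets import load_dataset

# Assumption: the LST20 corpus was downloaded manually and extracted locally;
# `data_dir` points the loading script at that local copy.
lst20 = load_dataset("lst20", data_dir="path/to/LST20_Corpus")

# Inspect one annotated example (word boundaries, POS, NE, clause/sentence boundaries).
print(lst20["train"][0])
```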
https://github.com/huggingface/datasets/pull/1205
[ "The pytest suite doesn't allow manual downloads so we just make sure that the `datasets-cli test` command to run without errors instead", "@lhoestq Changes made. Thank you for the review. I've made some same mistakes for https://github.com/huggingface/datasets/pull/1253 too. Will fix them before review." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1205", "html_url": "https://github.com/huggingface/datasets/pull/1205", "diff_url": "https://github.com/huggingface/datasets/pull/1205.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1205.patch", "merged_at": "2020-12-09T16:33:10" }
1,205
true
adding meta_woz dataset
https://github.com/huggingface/datasets/pull/1204
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1204", "html_url": "https://github.com/huggingface/datasets/pull/1204", "diff_url": "https://github.com/huggingface/datasets/pull/1204.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1204.patch", "merged_at": "2020-12-16T15:05:24" }
1,204
true
Add Neural Code Search Dataset
https://github.com/huggingface/datasets/pull/1203
[ "> Really good thanks !\r\n> \r\n> I left a few comments\r\n\r\nThanks, resolved them :) ", "looks like this PR includes changes about many other files than the ones for Code Search\r\n\r\ncan you create another branch and another PR please ?", "> looks like this PR includes changes about many other files than the ones for Code Search\r\n> \r\n> can you create another branch and another PR please ?\r\n\r\nOkay sure" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1203", "html_url": "https://github.com/huggingface/datasets/pull/1203", "diff_url": "https://github.com/huggingface/datasets/pull/1203.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1203.patch", "merged_at": null }
1,203
true
Medical question pairs
This dataset consists of 3048 similar and dissimilar medical question pairs hand-generated and labeled by Curai's doctors. Dataset : https://github.com/curai/medical-question-pair-dataset Paper : https://drive.google.com/file/d/1CHPGBXkvZuZc8hpr46HeHU6U6jnVze-s/view **No splits added**
https://github.com/huggingface/datasets/pull/1202
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1202", "html_url": "https://github.com/huggingface/datasets/pull/1202", "diff_url": "https://github.com/huggingface/datasets/pull/1202.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1202.patch", "merged_at": null }
1,202
true
adding medical-questions-pairs
https://github.com/huggingface/datasets/pull/1201
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1201", "html_url": "https://github.com/huggingface/datasets/pull/1201", "diff_url": "https://github.com/huggingface/datasets/pull/1201.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1201.patch", "merged_at": null }
1,201
true
Update ADD_NEW_DATASET.md
Windows needs special treatment again: unfortunately, adding `torch` to the requirements does not work well (it crashes the installation). Users should first install torch manually and then continue with the other commands. This issue arises all the time when adding torch as a dependency, but because so many novice users seem to participate in adding datasets, it may be useful to add an explicit note for Windows users so that they do not run into issues.
https://github.com/huggingface/datasets/pull/1200
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1200", "html_url": "https://github.com/huggingface/datasets/pull/1200", "diff_url": "https://github.com/huggingface/datasets/pull/1200.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1200.patch", "merged_at": "2020-12-07T08:32:39" }
1,200
true
Turkish NER dataset, script works fine, couldn't generate dummy data
I've written the script (Turkish_NER.py) for the dataset. The dataset is a zip inside another zip, and it is extracted as a .DUMP file. However, after preprocessing I only get an .arrow file. After running the script with no error messages, I get the dataset's .arrow file, the LICENSE, and dataset_info.json.
https://github.com/huggingface/datasets/pull/1199
[ "the .DUMP file looks like a txt with one example per line so adding `--match_text_files *.DUMP --n_lines 50` to the dummy generation command might work .", "We can close this PR since a new PR was open at #1268 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1199", "html_url": "https://github.com/huggingface/datasets/pull/1199", "diff_url": "https://github.com/huggingface/datasets/pull/1199.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1199.patch", "merged_at": null }
1,199
true
Add ALT
ALT dataset -- https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/
https://github.com/huggingface/datasets/pull/1198
[ "the `RemoteDatasetTest ` erros in the CI are fixed on master so it's fine", "used `Translation ` feature type and fixed few typos as you suggested.", "Sorry, I made a mistake. please see new PR here. https://github.com/huggingface/datasets/pull/1436" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1198", "html_url": "https://github.com/huggingface/datasets/pull/1198", "diff_url": "https://github.com/huggingface/datasets/pull/1198.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1198.patch", "merged_at": null }
1,198
true
add taskmaster-2
Adding taskmaster-2 dataset. https://github.com/google-research-datasets/Taskmaster/tree/master/TM-2-2020
https://github.com/huggingface/datasets/pull/1197
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1197", "html_url": "https://github.com/huggingface/datasets/pull/1197", "diff_url": "https://github.com/huggingface/datasets/pull/1197.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1197.patch", "merged_at": "2020-12-07T15:22:43" }
1,197
true
Add IWSLT'15 English-Vietnamese machine translation Data
Preprocessed dataset from the IWSLT'15 English-Vietnamese machine translation task, from https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/
https://github.com/huggingface/datasets/pull/1196
[ "Thanks ! feel free to ping me once you've added the tags in the dataset card :) ", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1196", "html_url": "https://github.com/huggingface/datasets/pull/1196", "diff_url": "https://github.com/huggingface/datasets/pull/1196.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1196.patch", "merged_at": "2020-12-11T18:26:51" }
1,196
true
addition of py_ast
The dataset consists of parsed ASTs that were used to train and evaluate the DeepSyn tool. The Python programs are collected from GitHub repositories by removing duplicate files, removing project forks (copies of other existing repositories), keeping only programs that parse and have at most 30,000 nodes in the AST, and aiming to remove obfuscated files.
https://github.com/huggingface/datasets/pull/1195
[ "Hi @reshinthadithyan !\r\n\r\nAs mentioned on the Slack, it would be better in this case to parse the file lines into the following feature structure:\r\n```python\r\n\"ast\": datasets.Sequence(\r\n {\r\n \"type\": datasets.Value(\"string\"),\r\n \"value\": datasets.Value(\"string\"),\r\n \"children\": datasets.Sequence(datasets.Value(\"int32\")),\r\n },\r\n)\r\n```\r\n\r\nHere are a few more things to fix before we can move forward:\r\n- the class name needs to be the CamelCase equivalent of the script name, so here it will have to be `PyAst`\r\n- the `README.md` needs to have the tags at the top\r\n- The homepage/info list at the top should be in the same format as the template (added a suggestion)\r\n- You should add the dataset tags and field description to the README as described here: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\r\n\r\nGood luck, let us know if you need any help!", "Hello @yjernite, changes have been made as we talked. Hope this would suffice. Thanks. Feel free to point out any room to improvement.", "Good progress! Here's what still needs to be done:\r\n- first, you need to rebase to master for the tests to pass :)\r\n- the information in your `Data Fields` paragraph should go into `Data Instances`. Data fields should describe the fields one by one, as in e.g. https://github.com/huggingface/datasets/tree/master/datasets/eli5#data-fields\r\n- you still need to add the YAML tags obtained with the tagging app\r\n\r\nShould be good to go after that!", "Hello @yjernite, changes as talked are being done.", "Looks like this PR includes changes about many other files than the ones for py_ast\r\n\r\nCould you create another branch and another PR please ?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1195", "html_url": "https://github.com/huggingface/datasets/pull/1195", "diff_url": "https://github.com/huggingface/datasets/pull/1195.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1195.patch", "merged_at": null }
1,195
true
Add msr_text_compression
Add [MSR Abstractive Text Compression Dataset](https://msropendata.com/datasets/f8ce2ec9-7fbd-48f7-a8bb-2d2279373563)
https://github.com/huggingface/datasets/pull/1194
[ "the `RemoteDatasetTest ` error in the CI is fixed on master so it's fine" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1194", "html_url": "https://github.com/huggingface/datasets/pull/1194", "diff_url": "https://github.com/huggingface/datasets/pull/1194.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1194.patch", "merged_at": "2020-12-09T10:53:45" }
1,194
true
add taskmaster-1
Adding Taskmaster-1 dataset https://github.com/google-research-datasets/Taskmaster/tree/master/TM-1-2019
https://github.com/huggingface/datasets/pull/1193
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1193", "html_url": "https://github.com/huggingface/datasets/pull/1193", "diff_url": "https://github.com/huggingface/datasets/pull/1193.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1193.patch", "merged_at": "2020-12-07T15:08:39" }
1,193
true
Add NewsPH_NLI dataset
This PR adds the NewsPH-NLI Dataset, the first benchmark dataset for sentence entailment in the low-resource Filipino language. It is constructed by exploiting the structure of news articles and contains 600,000 premise-hypothesis pairs in a 70-15-15 split for training, validation, and testing. Link to the paper: https://arxiv.org/pdf/2010.11574.pdf Link to the dataset/repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
https://github.com/huggingface/datasets/pull/1192
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1192", "html_url": "https://github.com/huggingface/datasets/pull/1192", "diff_url": "https://github.com/huggingface/datasets/pull/1192.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1192.patch", "merged_at": "2020-12-07T15:39:43" }
1,192
true
Added Translator Human Parity Data For a Chinese-English news transla…
…tion system from Open dataset list for Dataset sprint, Microsoft Datasets tab.
https://github.com/huggingface/datasets/pull/1191
[ "Can you run `make style` to format the code and fix the CI please ?", "> Can you run `make style` to format the code and fix the CI please ?\r\n\r\nI ran `make style` before this PR and just a few minutes ago. No changes to the code. Not sure why the CI is failing.", "Also, I attempted to see if I can get the source Chinese sentences from `wmt17` dataset. But this call `data = load_dataset('wmt17', \"zh-en\")` failed with this error: `FileNotFoundError: Couldn't find file at https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-zh.tar.gz`. I think it should be possible and fairly straightforward to get the pairing source sentences from it. I just can not test it right now.", "The `RemoteDatasetTest ` errors in the CI are fixed on master so it's fine", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1191", "html_url": "https://github.com/huggingface/datasets/pull/1191", "diff_url": "https://github.com/huggingface/datasets/pull/1191.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1191.patch", "merged_at": "2020-12-09T13:22:45" }
1,191
true
Add Fake News Detection in Filipino dataset
This PR adds the Fake News Filipino Dataset, a low-resource fake news detection corpus in Filipino. It contains 3,206 expertly labeled news samples, half of which are real and half of which are fake. Link to the paper: http://www.lrec-conf.org/proceedings/lrec2020/index.html Link to the dataset/repo: https://github.com/jcblaisecruz02/Tagalog-fake-news
https://github.com/huggingface/datasets/pull/1190
[ "Hi! I'm the author of this paper (surprised to see our datasets have been added already).\r\n\r\nThat paper link only leads to the conference index, here's a link to the actual paper: https://www.aclweb.org/anthology/2020.lrec-1.316/\r\n\r\nWould it be fine if I also edited your gsheet entry to reflect this change?", "Hi Jan, please go ahead and update. I see you are also in the sprint slack channel. Let me know if what else needs updating. Thanks.\r\n" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1190", "html_url": "https://github.com/huggingface/datasets/pull/1190", "diff_url": "https://github.com/huggingface/datasets/pull/1190.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1190.patch", "merged_at": "2020-12-07T15:39:27" }
1,190
true
Add Dengue dataset in Filipino
This PR adds the Dengue Dataset, a benchmark dataset for low-resource multiclass classification, with 4,015 training, 500 testing, and 500 validation examples labeled across five classes. Each sample can belong to multiple classes. The data was collected as tweets. Link to the paper: https://ieeexplore.ieee.org/document/8459963 Link to the dataset/repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
https://github.com/huggingface/datasets/pull/1189
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1189", "html_url": "https://github.com/huggingface/datasets/pull/1189", "diff_url": "https://github.com/huggingface/datasets/pull/1189.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1189.patch", "merged_at": "2020-12-07T15:38:58" }
1,189
true
adding hind_encorp dataset
adding Hindi_Encorp05 dataset
https://github.com/huggingface/datasets/pull/1188
[ "help needed in dummy data", "extension of the file is .plaintext so dummy data generation is failing\r\n", "you can add the `--match_text_file \"*.plaintext\"` flag when generating the dummy data\r\n\r\nalso it looks like the PR is empty, is this expected ?", "yes it is expected because I made all my changes in PR #1186 then I again run code and open PR #1188 to see if this time test passes or not only so there is no code change from #1186 to #1188 \r\ni tried --match_text_file \"*.plaintext\" this time it is also not generating dummy data don't know why", "well this PR includes no code change at all, can you make sure you added your changes in this one ?", "feel free to ping me when you have added the files so I can take a look and help you with the dummy data", "how to do that i dont know did i have to open new PR\r\n", " actually all my changes are visible in #1186 but don't know how to show same changes here", "these are a the which i did in #1186 and same in #1188 \r\n![1](https://user-images.githubusercontent.com/56379013/101646577-b4864500-3a5d-11eb-8a5a-91b1b441040a.png)\r\n![2](https://user-images.githubusercontent.com/56379013/101646965-32e2e700-3a5e-11eb-94d9-276e602c6ded.png)\r\n![4](https://user-images.githubusercontent.com/56379013/101646989-38d8c800-3a5e-11eb-92bb-d9c4cb2c3595.png)\r\n![5](https://user-images.githubusercontent.com/56379013/101647017-41c99980-3a5e-11eb-87cf-5268e79df19d.png)\r\n![6](https://user-images.githubusercontent.com/56379013/101647038-48581100-3a5e-11eb-8d05-f67834fcaa7b.png)\r\n\r\n![8](https://user-images.githubusercontent.com/56379013/101647080-55750000-3a5e-11eb-8455-8936a35b35c2.png)\r\n![9](https://user-images.githubusercontent.com/56379013/101647084-55750000-3a5e-11eb-988e-ae87f0b252a0.png)\r\n![10](https://user-images.githubusercontent.com/56379013/101647182-6f164780-3a5e-11eb-8af3-f0b0186483c9.png)\r\n![11](https://user-images.githubusercontent.com/56379013/101647230-7c333680-3a5e-11eb-9aeb-2b4ce65965e0.png)\r\n![13](https://user-images.githubusercontent.com/56379013/101647257-848b7180-3a5e-11eb-871c-2fd77b047320.png)\r\n![14](https://user-images.githubusercontent.com/56379013/101647268-89502580-3a5e-11eb-9e2a-b9f7ff1fc95e.png)\r\nthese same codes are in both #1186 and #1188 so because it is already present from PR #1186 because of that it is showing zeor code change in #1188 because it is already present from #1186 how i can show or highlight those changes\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "well for me https://github.com/huggingface/datasets/pull/1188/files is blank", "This PR tries to merge the master branch of you fork into this repo, however I can't find changes with your files inside your master branch.\r\n\r\nMaybe you can fork again the repo and try to create another PR ?", "@lhoestq i opened a new pr #1438 but this time it fails many circl ci tests", "Closing this one since a new PR was created" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1188", "html_url": "https://github.com/huggingface/datasets/pull/1188", "diff_url": "https://github.com/huggingface/datasets/pull/1188.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1188.patch", "merged_at": null }
1,188
true
Added AQUA-RAT (Algebra Question Answering with Rationales) Dataset
https://github.com/huggingface/datasets/pull/1187
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1187", "html_url": "https://github.com/huggingface/datasets/pull/1187", "diff_url": "https://github.com/huggingface/datasets/pull/1187.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1187.patch", "merged_at": "2020-12-07T15:37:12" }
1,187
true
all test passed
need help creating dummy data
https://github.com/huggingface/datasets/pull/1186
[ "looks like this PR includes changes to 5000 files\r\ncould you create a new branch and a new PR ?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1186", "html_url": "https://github.com/huggingface/datasets/pull/1186", "diff_url": "https://github.com/huggingface/datasets/pull/1186.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1186.patch", "merged_at": null }
1,186
true
Add Hate Speech Dataset in Filipino
This PR adds the Hate Speech Dataset, a text classification dataset in Filipino consisting of 10k tweets (training set) labeled as hate speech or non-hate speech, released with 4,232 validation and 4,232 testing samples. The tweets were collected during the 2016 Philippine presidential elections. Link to the paper: https://pcj.csp.org.ph/index.php/pcj/issue/download/29/PCJ%20V14%20N1%20pp1-14%202019 Link to the dataset/repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
https://github.com/huggingface/datasets/pull/1185
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1185", "html_url": "https://github.com/huggingface/datasets/pull/1185", "diff_url": "https://github.com/huggingface/datasets/pull/1185.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1185.patch", "merged_at": "2020-12-07T15:35:33" }
1,185
true
Add Adversarial SQuAD dataset
# Adversarial SQuAD Adding the Adversarial [SQuAD](https://github.com/robinjia/adversarial-squad) dataset as part of the sprint 🎉 This dataset adds adversarial sentences to a subset of the SQuAD dataset's dev examples. How to get the original SQuAD example id is explained in readme->Data Instances. The whole data is intended for use in evaluation (though it could of course also be used for training if one wants), so there is no classical train/val/test split, but a split based on the number of adversaries added. There are 2 splits of this dataset: - AddSent: Has up to five candidate adversarial sentences that don't answer the question but have a lot of words in common with the question. This adversary does not query the model in any way. - AddOneSent: Similar to AddSent, but just one candidate sentence is picked at random. This adversary does not query the model in any way. (The AddAny and AddCommon datasets mentioned in the paper are dynamically generated based on the model's output distribution and thus are not included here.) The failing test looks like some unrelated timeout issue and will probably clear if rerun. - [x] All tests passed - [x] Added dummy data - [x] Added data card (as much as I could)
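A minimal usage sketch for the two configurations described above; the dataset name `squad_adversarial` is an assumption, not something stated in this PR.

```python
from datasets import load_dataset

# Assumption: the loading script is registered as "squad_adversarial";
# the config names follow the PR description (AddSent / AddOneSent).
add_sent = load_dataset("squad_adversarial", "AddSent")
add_one_sent = load_dataset("squad_adversarial", "AddOneSent")

# The data is meant for evaluation, so there is no classical train/val/test split.
print(add_sent)
print(add_one_sent)
```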
https://github.com/huggingface/datasets/pull/1184
[ "the CI error was just a connection error due to all the activity on the repo this week ^^'\r\nI re-ran it so it should be good now", "I hadn't realized the problem with the dummies since it had passed without errors.\r\nSuggestion: maybe we can show the user a warning based on the generated dummy size.", "Thanks for changing to configs ! Looks all good now :) \r\n\r\nBefore we merge, can you re-lighten the dummy data please if you don't mind ? The idea is to have them weigh only a few KB (currently it's 50KB each). Feel free to remove any unnecessary files or chunk of text", "(also you can ignore the `RemoteDatasetTest ` CI errors, they're fixed on master )", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1184", "html_url": "https://github.com/huggingface/datasets/pull/1184", "diff_url": "https://github.com/huggingface/datasets/pull/1184.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1184.patch", "merged_at": "2020-12-16T16:12:58" }
1,184
true
add mkb dataset
This PR will add Mann Ki Baat dataset (parallel data for Indian languages).
https://github.com/huggingface/datasets/pull/1183
[ "Could you update the languages tags before we merge @VasudevGupta7 ?", "done.", "thanks !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1183", "html_url": "https://github.com/huggingface/datasets/pull/1183", "diff_url": "https://github.com/huggingface/datasets/pull/1183.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1183.patch", "merged_at": "2020-12-09T09:38:50" }
1,183
true
ADD COVID-QA dataset
This PR adds the COVID-QA dataset, a question answering dataset consisting of 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19. Link to the paper: https://openreview.net/forum?id=JENSKEEzsoU Link to the dataset/repo: https://github.com/deepset-ai/COVID-QA
https://github.com/huggingface/datasets/pull/1182
[ "merging since the CI is fixed on master", "Wow, thanks for including this dataset from my side as well!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1182", "html_url": "https://github.com/huggingface/datasets/pull/1182", "diff_url": "https://github.com/huggingface/datasets/pull/1182.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1182.patch", "merged_at": "2020-12-07T14:23:27" }
1,182
true
added emotions detection in arabic dataset
Dataset for emotion detection in Arabic text. More info: https://github.com/AmrMehasseb/Emotional-Tone
https://github.com/huggingface/datasets/pull/1181
[ "Hi @abdulelahsm did you manage to fix your issue ?\r\nFeel free to ping me if you have questions or if you're ready for a review", "@lhoestq fixed it! ready to merge. I hope haha", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1181", "html_url": "https://github.com/huggingface/datasets/pull/1181", "diff_url": "https://github.com/huggingface/datasets/pull/1181.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1181.patch", "merged_at": "2020-12-21T09:53:51" }
1,181
true
Add KorQuAD v2 Dataset
# The Korean Question Answering Dataset v2 Adding the [KorQuAD](https://korquad.github.io/) v2 dataset as part of the sprint 🎉 This dataset is very similar to SQuAD and is an extension of [squad_kor_v1](https://github.com/huggingface/datasets/pull/1178), which is why I added it as `squad_kor_v2`. - Crowd-generated questions and answers (one answer per question) for Wikipedia articles. Unlike v1, it includes the HTML structure and markup, which makes it a different enough dataset (ids are not shared between v1 and v2 either). - [x] All tests passed - [x] Added dummy data - [x] Added data card (as much as I could) Edit: 🤦 looks like the squad_kor_v1 commit sneaked in here too
https://github.com/huggingface/datasets/pull/1180
[ "looks like this PR also includes the changes for the V1\r\nCould you only include the files of the V2 ?", "hmm I have made the dummy data lighter retested on local and it passed not sure why it fails here?", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1180", "html_url": "https://github.com/huggingface/datasets/pull/1180", "diff_url": "https://github.com/huggingface/datasets/pull/1180.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1180.patch", "merged_at": "2020-12-16T16:10:30" }
1,180
true
Small update to the doc: add flatten_indices in doc
Small update to the doc: add flatten_indices in doc
https://github.com/huggingface/datasets/pull/1179
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1179", "html_url": "https://github.com/huggingface/datasets/pull/1179", "diff_url": "https://github.com/huggingface/datasets/pull/1179.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1179.patch", "merged_at": "2020-12-07T13:42:56" }
1,179
true
Add KorQuAD v1 Dataset
# The Korean Question Answering Dataset Adding the [KorQuAD](https://korquad.github.io/KorQuad%201.0/) v1 dataset as part of the sprint 🎉 This dataset is very similar to SQuAD, which is why I added it as `squad_kor_v1`. There is also a v2, which I added [here](https://github.com/huggingface/datasets/pull/1180). - Crowd-generated questions and answers (one answer per question) for Wikipedia articles. - [x] All tests passed - [x] Added dummy data - [x] Added data card (as much as I could)
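A minimal usage sketch for the name given above; the split layout is assumed to mirror SQuAD's train/validation structure and is not stated in this PR.

```python
from datasets import load_dataset

# The PR registers the dataset as `squad_kor_v1`; the exact split names are an assumption here.
korquad_v1 = load_dataset("squad_kor_v1")
print(korquad_v1)
print(korquad_v1["train"][0]["question"])
```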
https://github.com/huggingface/datasets/pull/1178
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1178", "html_url": "https://github.com/huggingface/datasets/pull/1178", "diff_url": "https://github.com/huggingface/datasets/pull/1178.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1178.patch", "merged_at": "2020-12-07T13:41:37" }
1,178
true
Add Korean NER dataset
This PR adds the [Korean named entity recognition dataset](https://github.com/kmounlp/NER). This dataset has been used in many downstream tasks, such as training [KoBERT](https://github.com/SKTBrain/KoBERT) for NER, as seen in this [KoBERT-CRF implementation](https://github.com/eagle705/pytorch-bert-crf-ner).
https://github.com/huggingface/datasets/pull/1177
[ "Closed via #1219 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1177", "html_url": "https://github.com/huggingface/datasets/pull/1177", "diff_url": "https://github.com/huggingface/datasets/pull/1177.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1177.patch", "merged_at": null }
1,177
true
Add OpenPI Dataset
Add the OpenPI Dataset by AI2 (AllenAI)
https://github.com/huggingface/datasets/pull/1176
[ "Hi @Bharat123rox ! It looks like some of the dummy data is broken or missing. Did you auto-generate it? Does the local test pass for you?", "@yjernite requesting you to have a look as to why the tests are failing only on Windows, there seems to be a backslash error somewhere, could it be the result of `os.path.join` and what should be the fix for this?", "This is the `black` output locally:\r\n```\r\n(datasets_env) datasets (openpi) > black --check --line-length 119 --target-version py36 datasets/openpi/\r\nAll done! ✨ 🍰 ✨\r\n1 file would be left unchanged.\r\n```", "Can you check your version of black (should be `20.8b1`) and run `make style again`? (And don't forget to rebase before pushing ;) )\r\n\r\nThe other test was a time-out error so should be good on the next commit", "Thanks @yjernite the CI tests finally passed!!", "Hi @Bharat123rox did you manage to join the different config into one using the IDs ?\r\n\r\nFeel free to ping me when you're ready for the next review :) ", "> Hi @Bharat123rox did you manage to join the different config into one using the IDs ?\n> \n> Feel free to ping me when you're ready for the next review :) \n\nNot yet @lhoestq still working on this! Meanwhile please review #1507 where I added the SelQA dataset :)", "Ok ! Let me review SelQA then :) \r\nThanks for your help !", "Apologies for the very late response. Here is the openpi dataset file with a single file per partition after merging `id_answers, answers.jsonl, question.jsonl , question_metadata.jsonl`\r\n\r\nhttps://github.com/allenai/openpi-dataset/blob/main/data/gold-v1.1/dev.jsonl", "Nice thank you @nikett !", "Hi @Bharat123rox , when you get a chance, please feel free to use the dataset from the repo ( [Link](https://github.com/allenai/openpi-dataset/blob/main/data/gold-v1.1/dev.jsonl) ) . Please let me know if any file is missing! Thank you ", "Hi @Bharat123rox are you working on this? ", "@nikett Sorry I'm no longer working on this as I'm out of time for it, please feel free to raise a new PR for this\r\n\r\n", "We are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest to create this dataset there. Please, feel free to tell us if you need some help." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1176", "html_url": "https://github.com/huggingface/datasets/pull/1176", "diff_url": "https://github.com/huggingface/datasets/pull/1176.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1176.patch", "merged_at": null }
1,176
true
added ReDial dataset
Updating README. Dataset link: https://redialdata.github.io/website/datasheet
https://github.com/huggingface/datasets/pull/1175
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1175", "html_url": "https://github.com/huggingface/datasets/pull/1175", "diff_url": "https://github.com/huggingface/datasets/pull/1175.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1175.patch", "merged_at": "2020-12-07T13:21:43" }
1,175
true
Add Universal Morphologies
Adding UniMorph universal morphology annotations for 110 languages, phew!!! One lemma per row with all possible forms and annotations. https://unimorph.github.io/
https://github.com/huggingface/datasets/pull/1174
[ "Sorry for the delay, changed the default language to \"ady\" (first alphabetical) and only downloading the relevant files for each config (dataset_infos is till 918KB though)", "Thanks for merging it ! Looks all good\r\n\r\nLooks like I didn't reply to your last message, sorry about that.\r\nFeel free to ping me when this happens :) " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1174", "html_url": "https://github.com/huggingface/datasets/pull/1174", "diff_url": "https://github.com/huggingface/datasets/pull/1174.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1174.patch", "merged_at": "2021-01-26T16:41:48" }
1,174
true
add wikipedia biography dataset
My first PR containing the Wikipedia biographies dataset. I have followed all the steps in the [guide](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). It passes all the tests.
https://github.com/huggingface/datasets/pull/1173
[ "Does anyone know why am I getting this \"Some checks were not successful\" message? For the _code_quality_ one, I have successfully run the flake8 command.", "Ok, I need to update the README.md, but don't know if that will fix the errors", "Hi @ACR0S , thanks for adding the dataset!\r\n\r\nIt looks like `black` is throwing the code quality error: you need to run `make style` with the latest version of `black` (`black --version` should return 20.8b1)\r\n\r\nWe also added a requirement to specify encodings when using the python `open` function (line 163 in the current version of your script)\r\n\r\nFinally, you will need to add the tags and field descriptions to the README as described here https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\r\n\r\nLet us know if you have any further questions!", "Also, please leave the full template of the readme with the `[More Information Needed]` paragraphs: you don't have to fill them out now but it will make it easier for us to go back to later :) ", "Thank you for your help, @yjernite! I have updated everything (finally run the _make style_, added the tags, the ecoding to the _open_ function and put back the empty fields in the README). Hope it works now! :)", "LGTM!", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1173", "html_url": "https://github.com/huggingface/datasets/pull/1173", "diff_url": "https://github.com/huggingface/datasets/pull/1173.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1173.patch", "merged_at": "2020-12-07T11:13:14" }
1,173
true
Add proto_qa dataset
Added dataset tags as required.
https://github.com/huggingface/datasets/pull/1172
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1172", "html_url": "https://github.com/huggingface/datasets/pull/1172", "diff_url": "https://github.com/huggingface/datasets/pull/1172.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1172.patch", "merged_at": "2020-12-07T11:12:24" }
1,172
true
Add imdb Urdu Reviews dataset.
Added the imdb Urdu reviews dataset. More info about the dataset over <a href="https://github.com/mirfan899/Urdu">here</a>.
https://github.com/huggingface/datasets/pull/1171
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1171", "html_url": "https://github.com/huggingface/datasets/pull/1171", "diff_url": "https://github.com/huggingface/datasets/pull/1171.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1171.patch", "merged_at": "2020-12-07T11:11:16" }
1,171
true
Fix path handling for Windows
https://github.com/huggingface/datasets/pull/1170
[ "@lhoestq here's the fix!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1170", "html_url": "https://github.com/huggingface/datasets/pull/1170", "diff_url": "https://github.com/huggingface/datasets/pull/1170.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1170.patch", "merged_at": "2020-12-07T10:47:23" }
1,170
true
Add Opus fiskmo dataset for Finnish and Swedish for MT task
Adding fiskmo, a massive parallel corpus for Finnish and Swedish. For more info: http://opus.nlpl.eu/fiskmo.php
https://github.com/huggingface/datasets/pull/1169
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1169", "html_url": "https://github.com/huggingface/datasets/pull/1169", "diff_url": "https://github.com/huggingface/datasets/pull/1169.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1169.patch", "merged_at": "2020-12-07T11:04:11" }
1,169
true
Add Naver sentiment movie corpus
This PR adds the [Naver sentiment movie corpus](https://github.com/e9t/nsmc), a dataset containing Korean movie reviews from Naver, the most commonly used search engine in Korea. This dataset is often used to benchmark models on Korean NLP tasks, as seen in [this paper](https://www.aclweb.org/anthology/2020.lrec-1.199.pdf).
https://github.com/huggingface/datasets/pull/1168
[ "Closed via #1252 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1168", "html_url": "https://github.com/huggingface/datasets/pull/1168", "diff_url": "https://github.com/huggingface/datasets/pull/1168.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1168.patch", "merged_at": null }
1,168
true
❓ On-the-fly tokenization with datasets, tokenizers, and torch Datasets and Dataloaders
Hi there, I have a question regarding "on-the-fly" tokenization. This question was elicited by reading the "How to train a new language model from scratch using Transformers and Tokenizers" [here](https://huggingface.co/blog/how-to-train). Towards the end there is this sentence: "If your dataset is very large, you can opt to load and tokenize examples on the fly, rather than as a preprocessing step". I've tried coming up with a solution that would combine both `datasets` and `tokenizers`, but did not manage to find a good pattern. I guess the solution would entail wrapping a dataset into a Pytorch dataset. As a concrete example from the [docs](https://huggingface.co/transformers/custom_datasets.html) ```python import torch class SquadDataset(torch.utils.data.Dataset): def __init__(self, encodings): # instead of doing this beforehand, I'd like to do tokenization on the fly self.encodings = encodings def __getitem__(self, idx): return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} def __len__(self): return len(self.encodings.input_ids) train_dataset = SquadDataset(train_encodings) ``` How would one implement this with "on-the-fly" tokenization exploiting the vectorized capabilities of tokenizers? ---- Edit: I have come up with this solution. It does what I want, but I feel it's not very elegant ```python class CustomPytorchDataset(Dataset): def __init__(self): self.dataset = some_hf_dataset(...) self.tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased") def __getitem__(self, batch_idx): instance = self.dataset[text_col][batch_idx] tokenized_text = self.tokenizer(instance, truncation=True, padding=True) return tokenized_text def __len__(self): return len(self.dataset) @staticmethod def collate_fn(batch): # batch is a list, however it will always contain 1 item because we should not use the # batch_size argument as batch_size is controlled by the sampler return {k: torch.tensor(v) for k, v in batch[0].items()} torch_ds = CustomPytorchDataset() # NOTE: batch_sampler returns list of integers and since here we have SequentialSampler # it returns: [1, 2, 3], [4, 5, 6], etc. - check calling `list(batch_sampler)` batch_sampler = BatchSampler(SequentialSampler(torch_ds), batch_size=3, drop_last=True) # NOTE: no `batch_size` as now the it is controlled by the sampler! dl = DataLoader(dataset=torch_ds, sampler=batch_sampler, collate_fn=torch_ds.collate_fn) ```
https://github.com/huggingface/datasets/issues/1167
[ "We're working on adding on-the-fly transforms in datasets.\r\nCurrently the only on-the-fly functions that can be applied are in `set_format` in which we transform the data in either numpy/torch/tf tensors or pandas.\r\nFor example\r\n```python\r\ndataset.set_format(\"torch\")\r\n```\r\napplies `torch.Tensor` to the dataset entries on-the-fly.\r\n\r\nWe plan to extend this to user-defined formatting transforms.\r\nFor example\r\n```python\r\ndataset.set_format(transform=tokenize)\r\n```\r\n\r\nWhat do you think ?" ]
null
1,167
false
Opus montenegrinsubs
Opus MontenegrinSubs - language pair en-me. More info: http://opus.nlpl.eu/MontenegrinSubs.php
https://github.com/huggingface/datasets/pull/1166
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1166", "html_url": "https://github.com/huggingface/datasets/pull/1166", "diff_url": "https://github.com/huggingface/datasets/pull/1166.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1166.patch", "merged_at": "2020-12-07T11:02:49" }
1,166
true
Add ar rest reviews
Added restaurant reviews in Arabic for sentiment analysis tasks.
https://github.com/huggingface/datasets/pull/1165
[ "Copy-pasted from the Slack discussion:\r\nthe annotation and language creators should be found , not unknown\r\nthe example should go under the \"Data Instances\" paragraph, not \"Data fields\"\r\ncan you remove the abstract from the citation and add it to the dataset description? More people will see that", "@yjernite done! thanks for the feedback", "@lhoestq not sure why it's failing tests now, I only changed cosmetics", "You can ignores these errors\r\n```\r\n\r\n=========================== short test summary info ===========================\r\nFAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_ajgt_twitter_ar\r\nFAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_chr_en\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_ajgt_twitter_ar\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_chr_en\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_great_code\r\n```\r\n\r\nthey're fixed on master", "Feel free to ping me for the final review once you managed to change to ClassLabel :) ", "Hey @lhoestq I was able to fix it !! I think the same errors appeared on circleCI and now it's hopefully ready to be merged?", "@lhoestq done! thanks for your review ", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1165", "html_url": "https://github.com/huggingface/datasets/pull/1165", "diff_url": "https://github.com/huggingface/datasets/pull/1165.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1165.patch", "merged_at": "2020-12-21T17:06:23" }
1,165
true
Add DaNe dataset
https://github.com/huggingface/datasets/pull/1164
[ "Thanks, this looks great!\r\n\r\nFor the code quality test, it looks like `flake8` is throwing the error, so you can tun `flake8 datasets` locally and fix the errors it points out until it passes" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1164", "html_url": "https://github.com/huggingface/datasets/pull/1164", "diff_url": "https://github.com/huggingface/datasets/pull/1164.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1164.patch", "merged_at": null }
1,164
true
Added memat : Xhosa-English parallel corpora
Added memat: Xhosa-English parallel corpus. For more info: http://opus.nlpl.eu/memat.php
https://github.com/huggingface/datasets/pull/1163
[ "The `RemoteDatasetTest` CI fail is fixed on master so it's fine", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1163", "html_url": "https://github.com/huggingface/datasets/pull/1163", "diff_url": "https://github.com/huggingface/datasets/pull/1163.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1163.patch", "merged_at": "2020-12-07T10:40:24" }
1,163
true
Add Mocha dataset
More information: https://allennlp.org/mocha
https://github.com/huggingface/datasets/pull/1162
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1162", "html_url": "https://github.com/huggingface/datasets/pull/1162", "diff_url": "https://github.com/huggingface/datasets/pull/1162.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1162.patch", "merged_at": "2020-12-07T10:09:39" }
1,162
true
Linguisticprobing
Adding the linguistic probing datasets from "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties" (https://www.aclweb.org/anthology/P18-1198/).
https://github.com/huggingface/datasets/pull/1161
[ "Thanks for your contribution, @sileod.\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nAs you already created this dataset under your organization namespace (https://huggingface.co/datasets/metaeval/linguisticprobing), I think we can safely close this PR.\r\n\r\nWe would suggest you add a dataset card with the YAML tags, to make it searchable and discoverable.\r\n\r\nPlease, feel free to tell us if you need some help." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1161", "html_url": "https://github.com/huggingface/datasets/pull/1161", "diff_url": "https://github.com/huggingface/datasets/pull/1161.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1161.patch", "merged_at": null }
1,161
true
adding TabFact dataset
Adding TabFact: A Large-scale Dataset for Table-based Fact Verification. https://github.com/wenhuchen/Table-Fact-Checking - The tables are stored as individual csv files, so we need to download 16,573 🤯 csv files. As a result the `datasets_infos.json` file is huge (6.62 MB). - The original dataset has a nested structure where a table is one example and each table has multiple statements; the structure is flattened here so that each statement is one example.
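A rough sketch of the flattening step described above; the field names and the shape of the raw data are illustrative assumptions, not the actual loading script.

```python
# Illustrative only: assume each raw entry maps a table id to its statements and labels.
raw = {
    "table_0001.csv": {"statements": ["statement a", "statement b"], "labels": [1, 0]},
    "table_0002.csv": {"statements": ["statement c"], "labels": [1]},
}

def flatten(raw_data):
    # Yield one example per statement, repeating the table id for each of its statements.
    example_id = 0
    for table_id, entry in raw_data.items():
        for statement, label in zip(entry["statements"], entry["labels"]):
            yield example_id, {"table_id": table_id, "statement": statement, "label": label}
            example_id += 1

for idx, example in flatten(raw):
    print(idx, example)
```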
https://github.com/huggingface/datasets/pull/1160
[ "FYI you guys are on GitHub's homepage 😍\r\n\r\n<img width=\"1589\" alt=\"Screenshot 2020-12-09 at 12 34 28\" src=\"https://user-images.githubusercontent.com/326577/101624883-a0ecc700-39e8-11eb-8a97-11af0d036536.png\">\r\n", "Yeayy 😍 🔥" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1160", "html_url": "https://github.com/huggingface/datasets/pull/1160", "diff_url": "https://github.com/huggingface/datasets/pull/1160.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1160.patch", "merged_at": "2020-12-09T09:12:40" }
1,160
true
Add Roman Urdu dataset
This PR adds the [Roman Urdu dataset](https://archive.ics.uci.edu/ml/datasets/Roman+Urdu+Data+Set#).
https://github.com/huggingface/datasets/pull/1159
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1159", "html_url": "https://github.com/huggingface/datasets/pull/1159", "diff_url": "https://github.com/huggingface/datasets/pull/1159.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1159.patch", "merged_at": "2020-12-07T09:59:03" }
1,159
true
Add BBC Hindi NLI Dataset
# Dataset Card for BBC Hindi NLI Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - Homepage: https://github.com/midas-research/hindi-nli-data - Paper: "https://www.aclweb.org/anthology/2020.aacl-main.71" - Point of Contact: https://github.com/midas-research/hindi-nli-data ### Dataset Summary - Dataset for Natural Language Inference in the Hindi language. The BBC Hindi Dataset consists of textual-entailment pairs. - Each row of the dataset is made up of 4 columns - Premise, Hypothesis, Label and Topic. - Context and Hypothesis are written in Hindi while Entailment_Label is in English. - Entailment_label is of 2 types - entailed and not-entailed. - The dataset can be used to train models for Natural Language Inference tasks in the Hindi language. [More Information Needed] ### Supported Tasks and Leaderboards - Natural Language Inference for Hindi ### Languages The dataset is in Hindi. ## Dataset Structure - Data is structured in TSV format. - Train and Test sets are in separate files. ### Data Instances An example of 'train' looks as follows. ``` {'hypothesis': 'यह खबर की सूचना है|', 'label': 'entailed', 'premise': 'गोपनीयता की नीति', 'topic': '1'} ``` ### Data Fields - Each row contains 4 columns - Premise, Hypothesis, Label and Topic. ### Data Splits - Train: 15553 - Valid: 2581 - Test: 2593 ## Dataset Creation - We employ a recasting technique from Poliak et al. (2018a,b) to convert publicly available BBC Hindi news text classification datasets in Hindi and pose them as TE problems. - In this recasting process, we build template hypotheses for each class in the label taxonomy. - Then, we pair the original annotated sentence with each of the template hypotheses to create TE samples. - For more information on the recasting process, refer to the paper "https://www.aclweb.org/anthology/2020.aacl-main.71". ### Source Data The source dataset for the recasting process is the BBC Hindi Headlines Dataset (https://github.com/NirantK/hindi2vec/releases/tag/bbc-hindi-v0.1). #### Initial Data Collection and Normalization - The BBC Hindi News Classification Dataset contains 4,335 Hindi news headlines tagged across 14 categories: India, Pakistan, news, International, entertainment, sport, science, China, learning english, social, southasia, business, institutional, multimedia. - We processed this dataset to combine two sets of relevant but low-prevalence classes. - Namely, we merged the samples from Pakistan, China, international, and southasia as one class called international. - Likewise, we also merged samples from news, business, social, learning english, and institutional as news.
- Lastly, we also removed the class multimedia because there were very few samples. #### Who are the source language producers? Pls refer to this paper: "https://www.aclweb.org/anthology/2020.aacl-main.71" ### Annotations #### Annotation process Annotation process has been described in Dataset Creation Section. #### Who are the annotators? Annotation is done automatically. ### Personal and Sensitive Information No Personal and Sensitive Information is mentioned in the Datasets. ## Considerations for Using the Data Pls refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71 ### Discussion of Biases Pls refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71 ### Other Known Limitations No other known limitations ## Additional Information Pls refer to this link: https://github.com/midas-research/hindi-nli-data ### Dataset Curators It is written in the repo : https://github.com/avinsit123/hindi-nli-data that - This corpus can be used freely for research purposes. - The paper listed below provide details of the creation and use of the corpus. If you use the corpus, then please cite the paper. - If interested in commercial use of the corpus, send email to midas@iiitd.ac.in. - If you use the corpus in a product or application, then please credit the authors and Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus. - Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications. - Rather than redistributing the corpus, please direct interested parties to this page - Please feel free to send us an email: - with feedback regarding the corpus. - with information on how you have used the corpus. - if interested in having us analyze your data for natural language inference. - if interested in a collaborative research project. ### Licensing Information Copyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi). Pls contact authors for any information on the dataset. ### Citation Information ``` @inproceedings{uppal-etal-2020-two, title = "Two-Step Classification using Recasted Data for Low Resource Settings", author = "Uppal, Shagun and Gupta, Vivek and Swaminathan, Avinash and Zhang, Haimin and Mahata, Debanjan and Gosangi, Rakesh and Shah, Rajiv Ratn and Stent, Amanda", booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing", month = dec, year = "2020", address = "Suzhou, China", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.aacl-main.71", pages = "706--719", abstract = "An NLP model{'}s ability to reason should be independent of language. Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English. To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets. 
Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.", } ```
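A minimal usage sketch based on the card above; the dataset name `bbc_hindi_nli` is an assumption inferred from the PR title, and the field names follow the example instance shown in the card.

```python
from datasets import load_dataset

# Assumption: the dataset is registered as "bbc_hindi_nli".
bbc_nli = load_dataset("bbc_hindi_nli")
example = bbc_nli["train"][0]
print(example["premise"], example["hypothesis"], example["label"], example["topic"])
```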
https://github.com/huggingface/datasets/pull/1158
[ "Hi @avinsit123 !\r\nDid you manage to rename the dataset and apply the suggestion I mentioned for the data fields ?\r\nFeel free to ping me when you're ready for a review :) ", "Hi @avinsit123 ! Have you had a chance to take a look at my suggestions ?\r\nLet me know if you have questions or if I can help", "@lhoestq sorry I completely forgot about this pr. I will complete it ASAP.", "@lhoestq I have fixed the code to resolve all your comments. Pls do check. I also don't seem to know why the CI tests are failing as I ran all the tests in CONTRIBUTING.md on my local pc and they passed.", "@lhoestq thanks for ur patient review :) . I also wish to add similar 3 more NLI hindi datasets. Hope to do within this week.", "@lhoestq would this be merged to master?", "Yes of course ;)\r\nmerging now !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1158", "html_url": "https://github.com/huggingface/datasets/pull/1158", "diff_url": "https://github.com/huggingface/datasets/pull/1158.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1158.patch", "merged_at": "2021-02-05T09:48:31" }
1,158
true
Add dataset XhosaNavy English -Xhosa
Add dataset XhosaNavy English -Xhosa More info : http://opus.nlpl.eu/XhosaNavy.php
https://github.com/huggingface/datasets/pull/1157
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1157", "html_url": "https://github.com/huggingface/datasets/pull/1157", "diff_url": "https://github.com/huggingface/datasets/pull/1157.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1157.patch", "merged_at": "2020-12-07T09:11:33" }
1,157
true
add telugu-news corpus
Adding Telugu News Corpus to datasets.
https://github.com/huggingface/datasets/pull/1156
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1156", "html_url": "https://github.com/huggingface/datasets/pull/1156", "diff_url": "https://github.com/huggingface/datasets/pull/1156.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1156.patch", "merged_at": "2020-12-07T09:08:48" }
1,156
true
Add BSD
This PR adds BSD, the Japanese-English business dialogue corpus by [Rikters et al., 2020](https://www.aclweb.org/anthology/D19-5204.pdf).
https://github.com/huggingface/datasets/pull/1155
[ "Glad to have more Japanese data! Couple of comments:\r\n- the abbreviation might confuse some people as there is also an OPUS BSD corpus, would you mind renaming it as `bsd_ja_en`?\r\n- `flake8` is throwing some errors, you can run it locally (`flake8 datasets`) and fix what it tells you until it's happy :)\r\n- We're not using `os.path.join` for URLs as it's unstable across systems (introduces backslashes on Windows). Can you write the URLs explicitly instead?\r\n\r\nThanks!", "Fantastic, looks great!", "> Fantastic, looks great!\r\n\r\nThanks for your help @yjernite, really appreciate it!", "The RemoteDatasetTest is fixed on master so it's fine", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1155", "html_url": "https://github.com/huggingface/datasets/pull/1155", "diff_url": "https://github.com/huggingface/datasets/pull/1155.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1155.patch", "merged_at": "2020-12-07T09:27:46" }
1,155
true
Opus sardware
Added Opus sardware dataset for machine translation English to Sardinian. for more info : http://opus.nlpl.eu/sardware.php
https://github.com/huggingface/datasets/pull/1154
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1154", "html_url": "https://github.com/huggingface/datasets/pull/1154", "diff_url": "https://github.com/huggingface/datasets/pull/1154.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1154.patch", "merged_at": "2020-12-05T17:05:45" }
1,154
true
Adding dataset for proto_qa in huggingface datasets library
Added dataset for ProtoQA: A Question Answering Dataset for Prototypical Common-Sense Reasoning. Followed all steps for adding a new dataset.
https://github.com/huggingface/datasets/pull/1153
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1153", "html_url": "https://github.com/huggingface/datasets/pull/1153", "diff_url": "https://github.com/huggingface/datasets/pull/1153.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1153.patch", "merged_at": null }
1,153
true
hindi discourse analysis dataset commit
https://github.com/huggingface/datasets/pull/1152
[ "That's a great dataset to have! We need a couple more things to be good to go:\r\n- you should `make style` and `flake8 datasets` before pushing to make the code quality check happy :) \r\n- the dataset will need some dummy data which you should be able to auto-generate and test locally: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#automatically-add-code-metadata\r\n- there's some good information in your current README, but we need the format to follow the template [here](https://github.com/huggingface/datasets/blob/master/templates/README.md) and to have YAML tags at the top, as described in the guide: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\r\n\r\nLEt us know if you need any help!", "Hi @yjernite \r\nI was successfully able to generate the dataset_info.json file using the command \r\npython datasets-cli test datasets/<your-dataset-folder> --save_infos --all_configs\r\n\r\nBut unfortunately, could not generate the dummy data\r\n\r\nWhile running the command \r\npython datasets-cli dummy_data datasets/<your-dataset-folder> --auto_generate\r\nI got an error as \r\n\r\nValueError: Couldn't parse columns ['0', '1', '2', '3', '4', ......, '9982']. Maybe specify which json field must be used to read the data with --json_field <my_field>.\r\n\r\nThe thing is the dataset I am trying to upload is of the format \r\n{\r\n '0': {'Story_no': 15, 'Sentence': ' गाँठ से साढ़े तीन रुपये लग गये, जो अब पेट में जाकर खनकते भी नहीं! जो तेरी करनी मालिक! ” “इसमें मालिक की क्या करनी है? ”', 'Discourse Mode': 'Dialogue'},\r\n '1': {'Story_no': story_no, 'Sentence': sentence, 'Discourse Mode': discourse_mode},\r\n .......,\r\n '9982': {'Story_no': story_no, 'Sentence': sentence, 'Discourse Mode': discourse_mode}\r\n}\r\n\r\nCan you please suggest any errors I am making in the _generate_examples method?\r\n\r\nThanks!", "The dummy data generator doesn't support this kind of json format yet.\r\nCan you create the dummy data manually please ? 
You can get the instructions by running the \r\n```\r\ndatasets-cli dummy_data ./datasets/dataset_name\r\n```\r\ncommand.", "Hi, I created the dummy data manually but the tests are still failing it seems.\r\nCan you suggest the format of JSON which is supported by dummy data generator?\r\nI will have to modify my _generate_examples method accordingly.\r\nPlease advice on the same.\r\nThanks much.\r\n", "Can you run `make style` to format the code for the CI please ?\r\n\r\nAlso about the dummy data, here is how to generate them:\r\n\r\nWe need a dummy_data.zip file in ./datasets/hindiDiscourse/dummy/1.0.0 (or replace hindiDiscourse by hindi_discourse since we have to rename the folder anyway)\r\nTo create the zip file, first go in this directory and create a folder named dummy_data.\r\nThen inside the dummy_data folder create a file `discourse_dataset.json` and fill it with something like 5 examples.\r\nFinally zip the dummy_data folder to end up with the dummy_data.zip file\r\n\r\nOnce it's done you can check if the dummy data test passes with \r\n```\r\nRUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_hindi_discourse\r\n```\r\n\r\nIf it passes you can then remove the dummy_data folder to keep only the dummy_data.zip file", "Hi @duttahritwik did you manage to make the dummy data ?\r\nFeel free to ping me if you have questions or if we can help", "The error `tests/test_file_utils.py::TempSeedTest::test_tensorflow` just appeared because of tensorflow's update.\r\nOnce it's fixed on master we'll be free to merge this one", "Ci is green on master :) ", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1152", "html_url": "https://github.com/huggingface/datasets/pull/1152", "diff_url": "https://github.com/huggingface/datasets/pull/1152.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1152.patch", "merged_at": "2020-12-14T19:44:48" }
1,152
true
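As a side note on the JSON layout described in the thread above (a top-level dict mapping string indices `'0'`, `'1'`, ... to example dicts), a loader can simply iterate over the items. The snippet below is a minimal sketch, not the actual `hindi_discourse` script; the output field names on the left are illustrative assumptions, while the keys on the right are the ones quoted in the discussion.

```python
import json


def generate_examples(filepath):
    """Minimal sketch: the file maps string indices ('0', '1', ...) to example dicts."""
    with open(filepath, encoding="utf-8") as f:
        data = json.load(f)
    for id_, row in data.items():
        # Assumed output feature names; the source keys come from the thread above.
        yield id_, {
            "story_number": row["Story_no"],
            "sentence": row["Sentence"],
            "discourse_mode": row["Discourse Mode"],
        }
```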
adding psc dataset
https://github.com/huggingface/datasets/pull/1151
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1151", "html_url": "https://github.com/huggingface/datasets/pull/1151", "diff_url": "https://github.com/huggingface/datasets/pull/1151.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1151.patch", "merged_at": "2020-12-09T11:38:41" }
1,151
true
adding dyk dataset
https://github.com/huggingface/datasets/pull/1150
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1150", "html_url": "https://github.com/huggingface/datasets/pull/1150", "diff_url": "https://github.com/huggingface/datasets/pull/1150.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1150.patch", "merged_at": "2020-12-05T16:52:19" }
1,150
true
Fix typo in the comment in _info function
https://github.com/huggingface/datasets/pull/1149
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1149", "html_url": "https://github.com/huggingface/datasets/pull/1149", "diff_url": "https://github.com/huggingface/datasets/pull/1149.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1149.patch", "merged_at": "2020-12-05T16:19:26" }
1,149
true
adding polemo2 dataset
https://github.com/huggingface/datasets/pull/1148
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1148", "html_url": "https://github.com/huggingface/datasets/pull/1148", "diff_url": "https://github.com/huggingface/datasets/pull/1148.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1148.patch", "merged_at": "2020-12-05T16:51:38" }
1,148
true
Vinay/add/telugu books
Real data tests are failing as this dataset needs to be manually downloaded
https://github.com/huggingface/datasets/pull/1147
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1147", "html_url": "https://github.com/huggingface/datasets/pull/1147", "diff_url": "https://github.com/huggingface/datasets/pull/1147.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1147.patch", "merged_at": "2020-12-05T16:36:03" }
1,147
true
Add LINNAEUS
https://github.com/huggingface/datasets/pull/1146
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1146", "html_url": "https://github.com/huggingface/datasets/pull/1146", "diff_url": "https://github.com/huggingface/datasets/pull/1146.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1146.patch", "merged_at": "2020-12-05T16:35:53" }
1,146
true
Add Species-800
https://github.com/huggingface/datasets/pull/1145
[ "thanks @lhoestq ! I probably need to do the same change in the `SplitGenerator`s (lines 107, 110 and 113). I'll open a new PR for that", "Yes indeed ! Good catch 👍 \r\nFeel free to open a PR and ping me", "Hi , theres a issue pulling species_800 dataset , throws google drive error \r\n\r\nerror: \r\n\r\n```\r\nraise ConnectionError(f\"Couldn't reach {url} ({repr(head_error)})\")\r\nConnectionError: Couldn't reach https://drive.google.com/u/0/uc?id=1OletxmPYNkz2ltOr9pyT0b0iBtUWxslh&export=download/ (ReadTimeout(ReadTimeoutError(\"HTTPSConnectionPool(host='drive.google.com', port=443): Read timed out. (read timeout=10)\")))\r\n```\r\ncode: \r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"species_800\")\r\n```", "Hi @obonyojimmy! I am running the same commands and they work for me. Did you check your internet connection?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1145", "html_url": "https://github.com/huggingface/datasets/pull/1145", "diff_url": "https://github.com/huggingface/datasets/pull/1145.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1145.patch", "merged_at": "2020-12-05T16:35:01" }
1,145
true
Add JFLEG
This PR adds [JFLEG](https://www.aclweb.org/anthology/E17-2037/), an English grammatical error correction benchmark. The tests were successful on real data, although it would be great if I could get some guidance on the **dummy data**. Basically, **for each source sentence there are 4 possible gold standard target sentences**. The original dataset comprises files in a flat structure, labelled by split and then by source/target (e.g., dev.src, dev.ref0, ..., dev.ref3). I'm not sure of the best way to add this. I imagine I can treat each distinct source-target pair as its own split? But having so many copies of the source sentence feels redundant, and it would make it less convenient for end-users who might want to access multiple gold standard targets simultaneously.
https://github.com/huggingface/datasets/pull/1144
[ "Hi @j-chim ! You're right it does feel redundant: your option works better, but I'd even suggest having the references in a Sequence feature, which you can declare as:\r\n```\r\n\t features=datasets.Features(\r\n {\r\n \"sentence\": datasets.Value(\"string\"),\r\n \"corrections\": datasets.Sequence(datasets.Value(\"string\")),\r\n }\r\n ),\r\n```\r\n\r\nTo create the dummy data, you just need to tell the generator which files it should use, which you can do with:\r\n`python datasets-cli dummy_data datasets/<your-dataset-folder> --auto_generate --match_text_files \"train*,dev*,test*\"`\r\n", "Many thanks for this @yjernite! I've incorporated your feedback and sorted out the dummy data." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1144", "html_url": "https://github.com/huggingface/datasets/pull/1144", "diff_url": "https://github.com/huggingface/datasets/pull/1144.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1144.patch", "merged_at": "2020-12-06T18:16:04" }
1,144
true
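Following the reference-handling suggestion in the JFLEG thread above, each source sentence can carry all four references in a single `Sequence` field, avoiding duplicated sources. Below is a minimal sketch of that schema and of one yielded row; the example sentences are made up for illustration and are not taken from the dataset.

```python
import datasets

# Schema suggested in the review thread: one source sentence, a list of corrections.
features = datasets.Features(
    {
        "sentence": datasets.Value("string"),
        "corrections": datasets.Sequence(datasets.Value("string")),
    }
)

# One yielded example bundles the source sentence with its four gold references.
example = {
    "sentence": "He go to school by the bus .",
    "corrections": [
        "He goes to school by bus .",
        "He goes to school on the bus .",
        "He went to school by bus .",
        "He goes to school by the bus .",
    ],
}
```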
Add the Winograd Schema Challenge
Adds the Winograd Schema Challenge, including configs for the more canonical wsc273 as well as wsc285 with 12 new examples. - https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html The data format was a bit of a nightmare but I think I got it to a workable format.
https://github.com/huggingface/datasets/pull/1143
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1143", "html_url": "https://github.com/huggingface/datasets/pull/1143", "diff_url": "https://github.com/huggingface/datasets/pull/1143.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1143.patch", "merged_at": "2020-12-09T09:32:34" }
1,143
true
Fix PerSenT
New PR for dataset PerSenT
https://github.com/huggingface/datasets/pull/1142
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1142", "html_url": "https://github.com/huggingface/datasets/pull/1142", "diff_url": "https://github.com/huggingface/datasets/pull/1142.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1142.patch", "merged_at": "2020-12-14T13:39:34" }
1,142
true
Add GitHub version of ETH Py150 Corpus
Add the redistributable version of **ETH Py150 Corpus**
https://github.com/huggingface/datasets/pull/1141
[ "The `RemoteDatasetTest` is fixed on master so it's fine", "thanks for rebasing :)\r\n\r\nCI is green now, merging" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1141", "html_url": "https://github.com/huggingface/datasets/pull/1141", "diff_url": "https://github.com/huggingface/datasets/pull/1141.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1141.patch", "merged_at": "2020-12-07T10:00:24" }
1,141
true
Add Urdu Sentiment Corpus (USC).
Added Urdu Sentiment Corpus. More details about the dataset over <a href="https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus">here</a>.
https://github.com/huggingface/datasets/pull/1140
[ "@lhoestq have made the suggested changes in the README file.", "@lhoestq Created a new PR #1231 with only the relevant files.\r\nclosing this one :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1140", "html_url": "https://github.com/huggingface/datasets/pull/1140", "diff_url": "https://github.com/huggingface/datasets/pull/1140.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1140.patch", "merged_at": null }
1,140
true
Add ReFreSD dataset
This PR adds the **ReFreSD dataset**. The original data is hosted [on this github repo](https://github.com/Elbria/xling-SemDiv) and we use the `REFreSD_rationale` file to expose all the data.

Need feedback on:
- I couldn't generate the dummy data. The file we download is a tsv file but has no extension, which I suppose is the problem. I'm sure there is a simple trick to make this work.
- The feature names.
- I don't know if it's better to stick to the classic `sentence1`, `sentence2` or to `sentence_en`, `sentence_fr` to be more explicit.
- There is a binary label (called `label`, no problem here) and a 3-class label called `#3_labels` in the original tsv. I changed it to `all_labels`, but I'm sure there is a better name.
- The rationales are lists of integers, extracted as a string at first. I wonder what the best way to treat them is, any ideas? Also, I couldn't manage to make a `Sequence` of `int8`, but I'm sure I've missed something simple.

Thanks in advance
https://github.com/huggingface/datasets/pull/1139
[ "Cool dataset! Replying in-line:\r\n\r\n> This PR adds the **ReFreSD dataset**.\r\n> The original data is hosted [on this github repo](https://github.com/Elbria/xling-SemDiv) and we use the `REFreSD_rationale` to expose all the data.\r\n> \r\n> Need feedback on:\r\n> \r\n> * I couldn't generate the dummy data. The file we download is a tsv file, but without extension, I suppose this is the problem. I'm sure there is a simple trick to make this work.\r\n\r\nyou can use `--match_text_files` in the dummy data generation:\r\n`python datasets-cli dummy_data datasets/refresd --auto_generate --match_text_files \"REFreSD_rationale\"`\r\n\r\n> * The feature names.\r\n> \r\n> * I don't know if it's better to stick to the classic `sentence1`, `sentence2` or to `sentence_en`, `sentence_fr` to be more explicit.\r\n\r\nIt would actually be even better to use the `Translation` feature here to replace best:\r\n`\"sentence_pair\": datasets.Translation(languages=['en', 'fr']),`\r\n\r\nThen during `_generate_examples` this filed should look like\"\r\n`{\"sentence_pair\": {\"fr\": french, \"en\": english}}`\r\n\r\n> * There is a binary label (called `label`, no problem here), and a 3-class label called `#3_labels` in the original tsv. I changed it to `all_labels` but I'm sure there is better.\r\nLooks good!\r\n\r\n> * The rationales are lists of integers, extracted as a string at first. I wonder what's the best way to treat them, any idea? Also, I couldn't manage to make a `Sequence` of `int8` but I'm sure I've missed something simple.\r\n\r\nHaving the feature declared as `\"rationale_en\": datasets.Sequence(datasets.Value(\"int32\"))` should work\r\n\r\n> \r\n> Thanks in advance\r\n\r\nHope that helps you out! Don't forget to `make style`, rebase from master, and run all the tests before pushing again! You will also need to add a `README.md` as described in the guide:\r\nhttps://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card", "Thanks a lot for the answer, that does help a lot !\r\nI opened a PR for a License in the original repo so I was waiting for that for the model card. If there is no news on Monday, I'll add it without License. ", "Looks good! It looks like it might need a rebase to pass the tests. Once you do that, should be good to go!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1139", "html_url": "https://github.com/huggingface/datasets/pull/1139", "diff_url": "https://github.com/huggingface/datasets/pull/1139.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1139.patch", "merged_at": "2020-12-16T16:01:18" }
1,139
true
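For readers following the feature discussion in the ReFreSD thread above, here is a minimal sketch of the schema suggested there (a `Translation` pair plus integer rationale sequences). The `sentence_pair`, `all_labels` and `rationale_en` names follow the thread; the remaining fields, the use of plain strings for the labels, and the example values are assumptions rather than the final script.

```python
import datasets

features = datasets.Features(
    {
        # One English/French pair per example, as suggested in the review.
        "sentence_pair": datasets.Translation(languages=["en", "fr"]),
        # Declared as plain strings here for simplicity; the real script may use ClassLabel.
        "label": datasets.Value("string"),
        "all_labels": datasets.Value("string"),
        # Rationales are lists of integers.
        "rationale_en": datasets.Sequence(datasets.Value("int32")),
        "rationale_fr": datasets.Sequence(datasets.Value("int32")),
    }
)

# During _generate_examples, the translation field is filled with a plain dict:
example = {
    "sentence_pair": {"en": "An English sentence.", "fr": "Une phrase en anglais."},
    "label": "divergent",
    "all_labels": "unrelated",
    "rationale_en": [0, 0, 1, 1],
    "rationale_fr": [0, 1, 1, 0],
}
```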
updated after the class name update
@lhoestq <---
https://github.com/huggingface/datasets/pull/1138
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1138", "html_url": "https://github.com/huggingface/datasets/pull/1138", "diff_url": "https://github.com/huggingface/datasets/pull/1138.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1138.patch", "merged_at": "2020-12-05T15:43:32" }
1,138
true
add wmt mlqe 2020 shared task
First commit for Shared task 1 (wmt_mlqw_task1) of WMT20 MLQE (quality estimation of machine translation). Note that I copied the tags in the README for only one (of the 7 configurations): `en-de`. There is one configuration for each pair of languages.
https://github.com/huggingface/datasets/pull/1137
[ "re-created in #1218 because this was too messy" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1137", "html_url": "https://github.com/huggingface/datasets/pull/1137", "diff_url": "https://github.com/huggingface/datasets/pull/1137.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1137.patch", "merged_at": null }
1,137
true
minor change in description in paws-x.py and updated dataset_infos
https://github.com/huggingface/datasets/pull/1136
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1136", "html_url": "https://github.com/huggingface/datasets/pull/1136", "diff_url": "https://github.com/huggingface/datasets/pull/1136.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1136.patch", "merged_at": "2020-12-06T18:02:57" }
1,136
true
added paws
Updating README and tags for dataset card in a while
https://github.com/huggingface/datasets/pull/1135
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1135", "html_url": "https://github.com/huggingface/datasets/pull/1135", "diff_url": "https://github.com/huggingface/datasets/pull/1135.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1135.patch", "merged_at": "2020-12-09T17:17:13" }
1,135
true
adding xquad-r dataset
https://github.com/huggingface/datasets/pull/1134
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1134", "html_url": "https://github.com/huggingface/datasets/pull/1134", "diff_url": "https://github.com/huggingface/datasets/pull/1134.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1134.patch", "merged_at": "2020-12-05T16:50:47" }
1,134
true
Adding XQUAD-R Dataset
https://github.com/huggingface/datasets/pull/1133
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1133", "html_url": "https://github.com/huggingface/datasets/pull/1133", "diff_url": "https://github.com/huggingface/datasets/pull/1133.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1133.patch", "merged_at": null }
1,133
true
Add Urdu Sentiment Corpus (USC).
Added Urdu Sentiment Corpus. More details about the dataset over <a href="https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus">here</a>.
https://github.com/huggingface/datasets/pull/1132
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1132", "html_url": "https://github.com/huggingface/datasets/pull/1132", "diff_url": "https://github.com/huggingface/datasets/pull/1132.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1132.patch", "merged_at": null }
1,132
true
Adding XQUAD-R Dataset
https://github.com/huggingface/datasets/pull/1131
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1131", "html_url": "https://github.com/huggingface/datasets/pull/1131", "diff_url": "https://github.com/huggingface/datasets/pull/1131.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1131.patch", "merged_at": null }
1,131
true
adding discovery
https://github.com/huggingface/datasets/pull/1130
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1130", "html_url": "https://github.com/huggingface/datasets/pull/1130", "diff_url": "https://github.com/huggingface/datasets/pull/1130.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1130.patch", "merged_at": "2020-12-14T13:03:14" }
1,130
true
Adding initial version of cord-19 dataset
Initial version only reading the metadata in CSV.

### Checklist:
- [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template
- [x] Fill the _DESCRIPTION and _CITATION variables
- [x] Implement _infos(), _split_generators() and _generate_examples()
- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.
- [x] Generate the metadata file dataset_infos.json for all configurations
- [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card README.md using the template and at least fill the tags
- [x] Both tests for the real data and the dummy data pass.

### TODO:
- [x] add more metadata
- [x] add full text
- [x] add pre-computed document embedding
https://github.com/huggingface/datasets/pull/1129
[ "Hi @ggdupont !\r\nHave you had a chance to take a look at my suggestions ?\r\nFeel free to ping me if you have questions or when you're ready for a review", "> Hi @ggdupont !\r\n> Have you had a chance to take a look at my suggestions ?\r\n> Feel free to ping me if you have questions or when you're ready for a review\r\n\r\nYes I did, just busy period (and no time on weekend right now ;-) )", "With some delay, reduced the dummy data and had t rebase", "Thanks !\r\n\r\nIt looks like the rebase messed up the github diff for this PR (2.000+ files changed)\r\nCould you create another branch and another PR please ?", "Cleaned PR: https://github.com/huggingface/datasets/pull/1850" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1129", "html_url": "https://github.com/huggingface/datasets/pull/1129", "diff_url": "https://github.com/huggingface/datasets/pull/1129.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1129.patch", "merged_at": null }
1,129
true
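To make the checklist above concrete, the skeleton below shows the three builder methods it refers to (note the method is actually named `_info`, singular). This is a generic sketch only: the class name, feature names and download URL are placeholders, not the real CORD-19 script.

```python
import csv

import datasets


class Cord19Sketch(datasets.GeneratorBasedBuilder):
    """Hypothetical minimal builder reading a metadata CSV; names and URL are placeholders."""

    def _info(self):
        return datasets.DatasetInfo(
            description="CORD-19 metadata (abstracts only), sketch version",
            features=datasets.Features(
                {"title": datasets.Value("string"), "abstract": datasets.Value("string")}
            ),
        )

    def _split_generators(self, dl_manager):
        # Placeholder URL; the real script downloads the AllenAI metadata archive.
        path = dl_manager.download_and_extract("https://example.com/cord19/metadata.csv")
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": path})]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for idx, row in enumerate(csv.DictReader(f)):
                yield idx, {"title": row["title"], "abstract": row["abstract"]}
```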
Add xquad-r dataset
https://github.com/huggingface/datasets/pull/1128
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1128", "html_url": "https://github.com/huggingface/datasets/pull/1128", "diff_url": "https://github.com/huggingface/datasets/pull/1128.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1128.patch", "merged_at": null }
1,128
true
Add wikiqaar dataset
Arabic Wiki Question Answering Corpus.
https://github.com/huggingface/datasets/pull/1127
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1127", "html_url": "https://github.com/huggingface/datasets/pull/1127", "diff_url": "https://github.com/huggingface/datasets/pull/1127.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1127.patch", "merged_at": "2020-12-07T16:39:41" }
1,127
true
Adding babi dataset
Adding the English version of bAbI. Samples are taken from ParlAI for consistency with the main users at the moment. Supersedes #945 (problem with the rebase) and addresses the issues mentioned in the review (dummy data are smaller now and code comments are fixed).
https://github.com/huggingface/datasets/pull/1126
[ "This is ok now @lhoestq\r\n\r\nI've included the tweak to `dummy_data` to only use the data transmitted to `_generate_examples` by default (it only do that if it can find at least one path to an existing file in the `gen_kwargs` and this can be unactivated with a flag).\r\n\r\nShould I extract it in another PR or is it ok like this?", "Nice !\r\nCould you add the dummy data generation trick in another PR ?\r\nI think we can also extend it to make it work not only with data files paths but also with data directories (sometimes it's one of the parent directory that is passed to gen_kwargs, not the actual path to the file).\r\nThis will help a lot to make the dummy data lighter !", "This PR can be closed due to #2053 @lhoestq\r\n\r\n" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1126", "html_url": "https://github.com/huggingface/datasets/pull/1126", "diff_url": "https://github.com/huggingface/datasets/pull/1126.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1126.patch", "merged_at": null }
1,126
true
Add Urdu fake news dataset.
Added Urdu fake news dataset. More information about the dataset can be found <a href="https://github.com/MaazAmjad/Datasets-for-Urdu-news">here</a>.
https://github.com/huggingface/datasets/pull/1125
[ "@lhoestq looks like a lot of files were updated... shall I create a new PR?", "Hi @chaitnayabasava ! you can try rebasing and see if that fixes the number of files changed, otherwise please do open a new PR with only the relevant files and close this one :) ", "Created a new PR #1230.\r\nclosing this one :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1125", "html_url": "https://github.com/huggingface/datasets/pull/1125", "diff_url": "https://github.com/huggingface/datasets/pull/1125.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1125.patch", "merged_at": null }
1,125
true
Add Xitsonga Ner
Clean Xitsonga Ner PR
https://github.com/huggingface/datasets/pull/1124
[ "looks like this PR includes changes about many files other than the ones related to xitsonga NER\r\n\r\ncould you create another branch and another PR please ?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1124", "html_url": "https://github.com/huggingface/datasets/pull/1124", "diff_url": "https://github.com/huggingface/datasets/pull/1124.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1124.patch", "merged_at": null }
1,124
true
adding cdt dataset
https://github.com/huggingface/datasets/pull/1123
[ "the `ms_terms` formatting CI fails is fixed on master", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1123", "html_url": "https://github.com/huggingface/datasets/pull/1123", "diff_url": "https://github.com/huggingface/datasets/pull/1123.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1123.patch", "merged_at": "2020-12-04T17:05:56" }
1,123
true
Add Urdu fake news.
Added Urdu fake news dataset. More information about the dataset can be found <a href="https://github.com/MaazAmjad/Datasets-for-Urdu-news">here</a>.
https://github.com/huggingface/datasets/pull/1122
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1122", "html_url": "https://github.com/huggingface/datasets/pull/1122", "diff_url": "https://github.com/huggingface/datasets/pull/1122.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1122.patch", "merged_at": null }
1,122
true
adding cdt dataset
https://github.com/huggingface/datasets/pull/1121
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1121", "html_url": "https://github.com/huggingface/datasets/pull/1121", "diff_url": "https://github.com/huggingface/datasets/pull/1121.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1121.patch", "merged_at": null }
1,121
true
Add conda environment activation
Added activation of Conda environment before installing.
https://github.com/huggingface/datasets/pull/1120
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1120", "html_url": "https://github.com/huggingface/datasets/pull/1120", "diff_url": "https://github.com/huggingface/datasets/pull/1120.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1120.patch", "merged_at": "2020-12-04T16:40:57" }
1,120
true
Add Google Great Code Dataset
https://github.com/huggingface/datasets/pull/1119
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1119", "html_url": "https://github.com/huggingface/datasets/pull/1119", "diff_url": "https://github.com/huggingface/datasets/pull/1119.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1119.patch", "merged_at": "2020-12-06T17:33:13" }
1,119
true
Add Tashkeela dataset
Arabic Vocalized Words Dataset.
https://github.com/huggingface/datasets/pull/1118
[ "Sorry @lhoestq for the trouble, sometime I forget to change the names :/", "> Sorry @lhoestq for the trouble, sometime I forget to change the names :/\r\n\r\nhaha it's ok ;)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1118", "html_url": "https://github.com/huggingface/datasets/pull/1118", "diff_url": "https://github.com/huggingface/datasets/pull/1118.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1118.patch", "merged_at": "2020-12-04T15:46:50" }
1,118
true
Fix incorrect MRQA train+SQuAD URL
Fix issue #1115
https://github.com/huggingface/datasets/pull/1117
[ "Thanks ! could you regenerate the dataset_infos.json file ?\r\n\r\n```\r\ndatasets-cli test ./datasets/mrqa --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\nalso cc @VictorSanh ", "Oooops, good catch @jimmycode ", "> Thanks ! could you regenerate the dataset_infos.json file ?\r\n> \r\n> ```\r\n> datasets-cli test ./datasets/mrqa --save_infos --all_configs --ignore_verifications\r\n> ```\r\n> \r\n> also cc @VictorSanh\r\n\r\nUpdated the `dataset_infos.json` file." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1117", "html_url": "https://github.com/huggingface/datasets/pull/1117", "diff_url": "https://github.com/huggingface/datasets/pull/1117.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1117.patch", "merged_at": "2020-12-06T17:14:10" }
1,117
true
add dbpedia_14 dataset
This dataset corresponds to the DBpedia dataset requested in https://github.com/huggingface/datasets/issues/353.
https://github.com/huggingface/datasets/pull/1116
[ "Thanks for the review. \r\nCheers!", "Hi @hfawaz, this week we are doing the 🤗 `datasets` sprint (see some details [here](https://discuss.huggingface.co/t/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library/2176)).\r\n\r\nNothing more to do on your side but it means that if you register on the thread I linked above, you can have some goodies for the present dataset that you have already added (and a special goodie if you want to spend more time and add 2 other datasets as well).\r\n\r\nIf you want to join, just tell me (or post on the thread on the HuggingFace forum: https://discuss.huggingface.co/t/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library/2176)", "Hello @thomwolf \r\nThanks for the feedback and for this invitation, indeed I would be glad to join you guys (you can add me). \r\nI will see if I have the time to implement a couple of datasets. \r\nCheers! ", "@hfawaz invited you to the slack with your uha email.\r\n\r\nCheck your spam folder if you can't find the invitation :)", "Oh thanks, but can you invite me on my gmail: hassanismailfawaz@gmail.com \r\nUHA is my old organization, I haven't had the time to update my online profiles yet.\r\nThank you " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1116", "html_url": "https://github.com/huggingface/datasets/pull/1116", "diff_url": "https://github.com/huggingface/datasets/pull/1116.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1116.patch", "merged_at": "2020-12-05T15:36:23" }
1,116
true
Incorrect URL for MRQA SQuAD train subset
https://github.com/huggingface/datasets/blob/4ef4c8f8b7a60e35c6fa21115fca9faae91c9f74/datasets/mrqa/mrqa.py#L53

The URL for the `train+SQuAD` subset of MRQA points to the dev set instead of the train set. It should be `https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/SQuAD.jsonl.gz`.
https://github.com/huggingface/datasets/issues/1115
[ "good catch !" ]
null
1,115
false
Add sesotho ner corpus
Clean Sesotho PR
https://github.com/huggingface/datasets/pull/1114
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1114", "html_url": "https://github.com/huggingface/datasets/pull/1114", "diff_url": "https://github.com/huggingface/datasets/pull/1114.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1114.patch", "merged_at": "2020-12-04T15:02:07" }
1,114
true
add qed
adding QED: Dataset for Explanations in Question Answering https://github.com/google-research-datasets/QED https://arxiv.org/abs/2009.06354
https://github.com/huggingface/datasets/pull/1113
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1113", "html_url": "https://github.com/huggingface/datasets/pull/1113", "diff_url": "https://github.com/huggingface/datasets/pull/1113.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1113.patch", "merged_at": "2020-12-05T15:41:57" }
1,113
true
Initial version of cord-19 dataset from AllenAI with only the abstract
Initial version only reading the metadata in CSV.

### Checklist:
- [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template
- [x] Fill the _DESCRIPTION and _CITATION variables
- [x] Implement _infos(), _split_generators() and _generate_examples()
- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.
- [x] Generate the metadata file dataset_infos.json for all configurations
- [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card README.md using the template and at least fill the tags
- [ ] Both tests for the real data and the dummy data pass.

### TODO:
- [ ] add more metadata
- [ ] add full text
- [ ] add pre-computed document embedding
https://github.com/huggingface/datasets/pull/1112
[ "too ugly, I'll make a clean one" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1112", "html_url": "https://github.com/huggingface/datasets/pull/1112", "diff_url": "https://github.com/huggingface/datasets/pull/1112.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1112.patch", "merged_at": null }
1,112
true
Add Siswati Ner corpus
Clean Siswati PR
https://github.com/huggingface/datasets/pull/1111
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1111", "html_url": "https://github.com/huggingface/datasets/pull/1111", "diff_url": "https://github.com/huggingface/datasets/pull/1111.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1111.patch", "merged_at": "2020-12-04T14:43:00" }
1,111
true
Using a feature named "_type" fails with certain operations
A column named `_type` leads to a `TypeError: unhashable type: 'dict'` for certain operations:

```python
from datasets import Dataset, concatenate_datasets

ds = Dataset.from_dict({"_type": ["whatever"]}).map()
concatenate_datasets([ds])  # or simply Dataset(ds._data)
```

Context: We are using datasets to persist data coming from elasticsearch to feed to our pipeline, and elasticsearch has a `_type` field, hence the strange name of the column. Not sure if you wish to support this specific column name, but if you do I would be happy to try a fix and provide a PR. I already had a look into it and I think the culprit is the `datasets.features.generate_from_dict` function. It uses the hard-coded `_type` string to figure out if it reached the end of the nested feature object from a serialized dict. Best wishes and keep up the awesome work!
https://github.com/huggingface/datasets/issues/1110
[ "Thanks for reporting !\r\n\r\nIndeed this is a keyword in the library that is used to encode/decode features to a python dictionary that we can save/load to json.\r\nWe can probably change `_type` to something that is less likely to collide with user feature names.\r\nIn this case we would want something backward compatible though.\r\n\r\nFeel free to try a fix and open a PR, and to ping me if I can help :) " ]
null
1,110
false
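Until the feature (de)serialization issue above is resolved, one simple stopgap is to avoid the colliding column name entirely. This is only a suggested workaround, not a fix from the maintainers; the replacement name `es_type` is arbitrary.

```python
from datasets import Dataset, concatenate_datasets

# Rename the Elasticsearch "_type" field before building the Dataset, so that the
# serialized feature dict never contains a user column literally named "_type".
records = {"es_type": ["whatever"]}

ds = Dataset.from_dict(records).map()
concatenate_datasets([ds])  # no longer raises TypeError, since no column is named "_type"
```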
add woz_dialogue
Adding Wizard-of-Oz task oriented dialogue dataset https://github.com/nmrksic/neural-belief-tracker/tree/master/data/woz https://arxiv.org/abs/1604.04562
https://github.com/huggingface/datasets/pull/1109
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1109", "html_url": "https://github.com/huggingface/datasets/pull/1109", "diff_url": "https://github.com/huggingface/datasets/pull/1109.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1109.patch", "merged_at": "2020-12-05T15:40:18" }
1,109
true
Add Sepedi NER corpus
Finally a clean PR for Sepedi
https://github.com/huggingface/datasets/pull/1108
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1108", "html_url": "https://github.com/huggingface/datasets/pull/1108", "diff_url": "https://github.com/huggingface/datasets/pull/1108.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1108.patch", "merged_at": "2020-12-04T14:39:00" }
1,108
true
Add arsentd_lev dataset
Add The Arabic Sentiment Twitter Dataset for Levantine dialect (ArSenTD-LEV) Paper: [ArSentD-LEV: A Multi-Topic Corpus for Target-based Sentiment Analysis in Arabic Levantine Tweets](https://arxiv.org/abs/1906.01830) Homepage: http://oma-project.com/
https://github.com/huggingface/datasets/pull/1107
[ "thanks ! can you also regenerate the dataset_infos.json file please ?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1107", "html_url": "https://github.com/huggingface/datasets/pull/1107", "diff_url": "https://github.com/huggingface/datasets/pull/1107.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1107.patch", "merged_at": "2020-12-05T15:38:09" }
1,107
true