title: string (length 1 to 290)
body: string (length 0 to 228k)
html_url: string (length 46 to 51)
comments: list
pull_request: dict
number: int64 (1 to 5.59k)
is_pull_request: bool (2 classes)
Add Tweet Eval Dataset
https://github.com/huggingface/datasets/pull/1407
[ "Hi @lhoestq,\r\n\r\nSeeing that it has been almost two months to this draft, I'm willing to take this forward if you and @abhishekkrthakur don't mind. :)", "Hi @gchhablani !\r\nSure if @abhishekkrthakur doesn't mind\r\nThanks for your help :)", "Please feel free :) ", "Hi @lhoestq, @abhishekkrthakur \r\n\r\nI believe this can be closed. Merged in #1829." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1407", "html_url": "https://github.com/huggingface/datasets/pull/1407", "diff_url": "https://github.com/huggingface/datasets/pull/1407.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1407.patch", "merged_at": null }
1,407
true
Add Portuguese Hate Speech dataset
Binary Portuguese Hate Speech dataset from [this paper](https://www.aclweb.org/anthology/W19-3510/).
https://github.com/huggingface/datasets/pull/1406
[ "@lhoestq done! (The failing tests don't seem to be related)", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1406", "html_url": "https://github.com/huggingface/datasets/pull/1406", "diff_url": "https://github.com/huggingface/datasets/pull/1406.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1406.patch", "merged_at": "2020-12-14T16:22:20" }
1,406
true
Adding TaPaCo Dataset with README.md
https://github.com/huggingface/datasets/pull/1405
[ "We want to keep the repo as light as possible so that it doesn't take ages to clone, that's why we ask for small dummy data files (especially when there are many of them). Let me know if you have questions or if we can help you on this", "Hello @lhoestq , made the changes as you suggested and pushed, please review. By default, the dummy data was generated the way it was by the dummy data auto generate command. Thank you." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1405", "html_url": "https://github.com/huggingface/datasets/pull/1405", "diff_url": "https://github.com/huggingface/datasets/pull/1405.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1405.patch", "merged_at": "2020-12-13T19:11:18" }
1,405
true
Add Acronym Identification Dataset
https://github.com/huggingface/datasets/pull/1404
[ "fixed @lhoestq " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1404", "html_url": "https://github.com/huggingface/datasets/pull/1404", "diff_url": "https://github.com/huggingface/datasets/pull/1404.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1404.patch", "merged_at": "2020-12-14T13:12:00" }
1,404
true
Add dataset clickbait_news_bg
Adding a new dataset - clickbait_news_bg
https://github.com/huggingface/datasets/pull/1403
[ "Closing this pull request, will submit a new one for this dataset." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1403", "html_url": "https://github.com/huggingface/datasets/pull/1403", "diff_url": "https://github.com/huggingface/datasets/pull/1403.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1403.patch", "merged_at": null }
1,403
true
adding covid-tweets-japanese (again)
I mistakenly used git rebase because I was in a hurry to fix things. However, I didn't fully consider the use of git reset, so I unintentionally ended up closing PR (#1367) altogether. Sorry about that. I'll make a new PR.
https://github.com/huggingface/datasets/pull/1402
[ "README.md is not created yet. I'll add it soon.", "Thank you for your detailed code review! It's so helpful.\r\nI'll reflect them to the code in 24 hours.\r\n\r\nYou may have told me in Slack (I cannot find the conversation log though I've looked through threads), but I'm sorry it seems I'm still misunderstanding how to get YAML from the tagger.\r\nI'm now asking on Slack if I am looking at the tagger the wrong way.", "One more thing I'd like to ask.\r\nShould I make changes by myself, or can I use the \"Commit suggestion\" feature?\r\nI'm new to this feature and I don't know how the rules work in this repository, so I'd like to ask just in case.", "Thank you very much for merging!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1402", "html_url": "https://github.com/huggingface/datasets/pull/1402", "diff_url": "https://github.com/huggingface/datasets/pull/1402.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1402.patch", "merged_at": "2020-12-13T17:47:36" }
1,402
true
Add reasoning_bg
Adding reading comprehension dataset for Bulgarian language
https://github.com/huggingface/datasets/pull/1401
[ "Hi @saradhix have you had the chance to reduce the size of the dummy data ?\r\n\r\nFeel free to ping me when it's done so we can merge :) ", "@lhoestq I have reduced the size of the dummy data manually and pushed the changes.", "The CI errors are not related to your dataset.\r\nThey're fixed on master, you can ignore them", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1401", "html_url": "https://github.com/huggingface/datasets/pull/1401", "diff_url": "https://github.com/huggingface/datasets/pull/1401.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1401.patch", "merged_at": "2020-12-17T16:50:42" }
1,401
true
Add European Union Education and Culture Translation Memory (EAC-TM) dataset
Adding the EAC Translation Memory dataset : https://ec.europa.eu/jrc/en/language-technologies/eac-translation-memory
https://github.com/huggingface/datasets/pull/1400
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1400", "html_url": "https://github.com/huggingface/datasets/pull/1400", "diff_url": "https://github.com/huggingface/datasets/pull/1400.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1400.patch", "merged_at": "2020-12-14T13:06:47" }
1,400
true
Add HoVer Dataset
HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification https://arxiv.org/abs/2011.03088
https://github.com/huggingface/datasets/pull/1399
[ "@lhoestq all comments addressed :) ", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1399", "html_url": "https://github.com/huggingface/datasets/pull/1399", "diff_url": "https://github.com/huggingface/datasets/pull/1399.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1399.patch", "merged_at": "2020-12-14T10:57:22" }
1,399
true
Add Neural Code Search Dataset
https://github.com/huggingface/datasets/pull/1398
[ "@lhoestq Refactored into new branch, please review :) ", "The `RemoteDatasetTest ` errors in the CI are fixed on master so it's fine", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1398", "html_url": "https://github.com/huggingface/datasets/pull/1398", "diff_url": "https://github.com/huggingface/datasets/pull/1398.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1398.patch", "merged_at": "2020-12-09T18:02:27" }
1,398
true
datasets card-creator link added
The dataset card creator link has been added: https://huggingface.co/datasets/card-creator/
https://github.com/huggingface/datasets/pull/1397
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1397", "html_url": "https://github.com/huggingface/datasets/pull/1397", "diff_url": "https://github.com/huggingface/datasets/pull/1397.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1397.patch", "merged_at": null }
1,397
true
initial commit for MultiReQA for second PR
Since the last PR (#1349) had some issues passing the tests, a new PR has been created.
https://github.com/huggingface/datasets/pull/1396
[ "Subsequent [PR #1426 ](https://github.com/huggingface/datasets/pull/1426) since this PR has uploaded other files along with the MultiReQA dataset.", "closing this one since a new PR has been created" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1396", "html_url": "https://github.com/huggingface/datasets/pull/1396", "diff_url": "https://github.com/huggingface/datasets/pull/1396.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1396.patch", "merged_at": null }
1,396
true
Add WikiSource Dataset
https://github.com/huggingface/datasets/pull/1395
[ "@lhoestq fixed :) " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1395", "html_url": "https://github.com/huggingface/datasets/pull/1395", "diff_url": "https://github.com/huggingface/datasets/pull/1395.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1395.patch", "merged_at": "2020-12-14T10:24:13" }
1,395
true
Add OfisPublik Dataset
https://github.com/huggingface/datasets/pull/1394
[ "@lhoestq fixed :) " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1394", "html_url": "https://github.com/huggingface/datasets/pull/1394", "diff_url": "https://github.com/huggingface/datasets/pull/1394.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1394.patch", "merged_at": "2020-12-14T10:23:29" }
1,394
true
Add script_version suggestion when dataset/metric not found
Adds a helpful prompt to the error message when a dataset/metric is not found, suggesting the user might need to pass `script_version="master"` if the dataset was added recently. The whole error looks like:

> Couldn't find file locally at blah/blah.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1/metrics/blah/blah.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/metrics/blah/blah.py. If the dataset was added recently, you may need to pass script_version="master" to find the loading script on the master branch.

A usage sketch of the suggested call follows this entry.
https://github.com/huggingface/datasets/pull/1393
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1393", "html_url": "https://github.com/huggingface/datasets/pull/1393", "diff_url": "https://github.com/huggingface/datasets/pull/1393.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1393.patch", "merged_at": "2020-12-10T18:17:05" }
1,393
true
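A minimal sketch of the suggested call, assuming a `datasets` release from around the time of this PR where `load_dataset` accepts `script_version` (newer releases renamed the argument to `revision`); the dataset name below is a placeholder:

```python
from datasets import load_dataset

# Load a dataset whose loading script only exists on the master branch so far.
# "demo_dataset" is a placeholder name; `script_version` was later renamed `revision`.
dataset = load_dataset("demo_dataset", script_version="master")
```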
Add KDE4 Dataset
https://github.com/huggingface/datasets/pull/1392
[ "@lhoestq fixed :) " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1392", "html_url": "https://github.com/huggingface/datasets/pull/1392", "diff_url": "https://github.com/huggingface/datasets/pull/1392.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1392.patch", "merged_at": "2020-12-14T10:22:32" }
1,392
true
Add MultiParaCrawl Dataset
https://github.com/huggingface/datasets/pull/1391
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1391", "html_url": "https://github.com/huggingface/datasets/pull/1391", "diff_url": "https://github.com/huggingface/datasets/pull/1391.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1391.patch", "merged_at": "2020-12-10T18:39:44" }
1,391
true
Add SPC Dataset
https://github.com/huggingface/datasets/pull/1390
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1390", "html_url": "https://github.com/huggingface/datasets/pull/1390", "diff_url": "https://github.com/huggingface/datasets/pull/1390.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1390.patch", "merged_at": "2020-12-14T11:13:52" }
1,390
true
add amazon polarity dataset
This corresponds to the Amazon binary (polarity) dataset requested in https://github.com/huggingface/datasets/issues/353
https://github.com/huggingface/datasets/pull/1389
[ "`amazon_polarity` is probably a subset of `amazon_us_reviews` but I am not entirely sure about that.\r\nI guess `amazon_polarity` will help in reproducing results of papers using this dataset since even if it is a subset from `amazon_us_reviews`, it is not trivial how to extract `amazon_polarity` from `amazon_us_reviews`, especially since `amazon_us_reviews` was released after `amazon_polarity`. ", "do you know what the problem would be ? should I pull the master before ? @lhoestq ", "The error just appeared on master. I will try to fix it today.\r\nYou can ignore them since it's not related to the dataset you added", "merging since the CI is fixed on master", "Great thanks for the help. " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1389", "html_url": "https://github.com/huggingface/datasets/pull/1389", "diff_url": "https://github.com/huggingface/datasets/pull/1389.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1389.patch", "merged_at": "2020-12-11T11:41:01" }
1,389
true
hind_encorp
resubmit of hind_encorp file changes
https://github.com/huggingface/datasets/pull/1388
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1388", "html_url": "https://github.com/huggingface/datasets/pull/1388", "diff_url": "https://github.com/huggingface/datasets/pull/1388.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1388.patch", "merged_at": null }
1,388
true
Add LIAR dataset
Add LIAR dataset from [“Liar, Liar Pants on Fire”: A New Benchmark Dataset for Fake News Detection](https://www.aclweb.org/anthology/P17-2067/).
https://github.com/huggingface/datasets/pull/1387
[ "@lhoestq done! The failing testes don't seem to be related, it seems to be a connection issue, if I understand it correctly.", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1387", "html_url": "https://github.com/huggingface/datasets/pull/1387", "diff_url": "https://github.com/huggingface/datasets/pull/1387.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1387.patch", "merged_at": "2020-12-14T16:23:59" }
1,387
true
Add RecipeNLG Dataset (manual download)
https://github.com/huggingface/datasets/pull/1386
[ "@lhoestq yes. I asked the authors for direct link but unfortunately we need to fill a form (captcha)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1386", "html_url": "https://github.com/huggingface/datasets/pull/1386", "diff_url": "https://github.com/huggingface/datasets/pull/1386.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1386.patch", "merged_at": "2020-12-10T16:58:21" }
1,386
true
add best2009
`best2009` is a Thai word-tokenization dataset from encyclopedia, novels, news and articles by [NECTEC](https://www.nectec.or.th/) (148,995/2,252 lines of train/test). It was created for [BEST 2010: Word Tokenization Competition](https://thailang.nectec.or.th/archive/indexa290.html?q=node/10). The test set answers are not provided publicly.
https://github.com/huggingface/datasets/pull/1385
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1385", "html_url": "https://github.com/huggingface/datasets/pull/1385", "diff_url": "https://github.com/huggingface/datasets/pull/1385.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1385.patch", "merged_at": "2020-12-14T10:59:08" }
1,385
true
Add News Commentary Dataset
https://github.com/huggingface/datasets/pull/1384
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1384", "html_url": "https://github.com/huggingface/datasets/pull/1384", "diff_url": "https://github.com/huggingface/datasets/pull/1384.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1384.patch", "merged_at": "2020-12-10T16:54:07" }
1,384
true
added conv ai 2
Dataset : https://github.com/DeepPavlov/convai/tree/master/2018
https://github.com/huggingface/datasets/pull/1383
[ "@lhoestq Thank you for the suggestions. I added the changes to the branch and seems after rebasing it to master, all the commits previous commits got added. Should I create a new PR or should I keep this one only ? ", "closing this one in favor of #1527 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1383", "html_url": "https://github.com/huggingface/datasets/pull/1383", "diff_url": "https://github.com/huggingface/datasets/pull/1383.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1383.patch", "merged_at": null }
1,383
true
adding UNPC
Adding United Nations Parallel Corpus http://opus.nlpl.eu/UNPC.php
https://github.com/huggingface/datasets/pull/1382
[ "merging since the CI just had a connection error" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1382", "html_url": "https://github.com/huggingface/datasets/pull/1382", "diff_url": "https://github.com/huggingface/datasets/pull/1382.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1382.patch", "merged_at": "2020-12-09T17:53:06" }
1,382
true
Add twi text c3
Added Twi texts for training embeddings and language models based on the paper https://www.aclweb.org/anthology/2020.lrec-1.335/
https://github.com/huggingface/datasets/pull/1381
[ "looks like this PR includes changes about other datasets\r\n\r\nCan you only include the changes related to twi text c3 please ?", "Hi @lhoestq , I have removed the unnecessary files. Can you please confirm?", "You might need to either find a way to go back to the commit before it changes 389 files or create a new branch.", "okay, I have created another branch, see the latest pull https://github.com/huggingface/datasets/pull/1518 @cstorm125 ", "Hii please follow me", "Closing this one in favor of #1518" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1381", "html_url": "https://github.com/huggingface/datasets/pull/1381", "diff_url": "https://github.com/huggingface/datasets/pull/1381.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1381.patch", "merged_at": null }
1,381
true
Add Tatoeba Dataset
https://github.com/huggingface/datasets/pull/1380
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1380", "html_url": "https://github.com/huggingface/datasets/pull/1380", "diff_url": "https://github.com/huggingface/datasets/pull/1380.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1380.patch", "merged_at": "2020-12-10T16:54:27" }
1,380
true
Add yoruba text c3
Added Yoruba texts for training embeddings and language models based on the paper https://www.aclweb.org/anthology/2020.lrec-1.335/
https://github.com/huggingface/datasets/pull/1379
[ "looks like this PR includes changes about other datasets\r\n", "Thanks for the review. I'm a bit confused how to remove the files. Every time I add a new branch name using the following commands:\r\n\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit checkout -b a-descriptive-name-for-my-changes\r\n\r\nand push to the origin, this issue occurs", "Can you try to create the branch from the master branch of your fork ?\r\n\r\nfirst update your master branch:\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit push\r\n```\r\n\r\nthen create a new one:\r\n```\r\ngit checkout -b my-new-branch\r\n```", "I think you were still having the files because you were creating the new branch from a branch in which you've committed the files, instead of creating the new branch from the master branch", "Got it, will correct that. Thanks", "@lhoestq , I have removed the unnecessary files. Looks like I still have one error. How do I resolve this?", "> @lhoestq , I have removed the unnecessary files. Looks like I still have one error. How do I resolve this?\r\n\r\nI think it's connection error on piqa dataset. Can you try triggering the test again? I usually resolve similar issues with:\r\n```\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit push -u -f origin your_branch_name\r\n```", "thank you @cstorm125 ", "I have created another pull request for this https://github.com/huggingface/datasets/pull/1515 @cstorm125 @lhoestq ", "Hii please follow me", "merging since the CI is fixed on master", "Great, thanks a lot" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1379", "html_url": "https://github.com/huggingface/datasets/pull/1379", "diff_url": "https://github.com/huggingface/datasets/pull/1379.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1379.patch", "merged_at": "2020-12-13T18:37:32" }
1,379
true
Add FACTCK.BR dataset
This PR adds [FACTCK.BR](https://github.com/jghm-f/FACTCK.BR) dataset from [FACTCK.BR: a new dataset to study fake news](https://dl.acm.org/doi/10.1145/3323503.3361698).
https://github.com/huggingface/datasets/pull/1378
[ "@lhoestq done!", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1378", "html_url": "https://github.com/huggingface/datasets/pull/1378", "diff_url": "https://github.com/huggingface/datasets/pull/1378.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1378.patch", "merged_at": "2020-12-15T15:34:11" }
1,378
true
adding marathi-wiki dataset
Adding marathi-wiki-articles dataset.
https://github.com/huggingface/datasets/pull/1377
[ "Can you make it a draft PR until you've added the dataset please ? @ekdnam ", "Done", "Thanks for your contribution, @ekdnam. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1377", "html_url": "https://github.com/huggingface/datasets/pull/1377", "diff_url": "https://github.com/huggingface/datasets/pull/1377.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1377.patch", "merged_at": null }
1,377
true
Add SETimes Dataset
https://github.com/huggingface/datasets/pull/1376
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1376", "html_url": "https://github.com/huggingface/datasets/pull/1376", "diff_url": "https://github.com/huggingface/datasets/pull/1376.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1376.patch", "merged_at": "2020-12-10T16:11:56" }
1,376
true
Add OPUS EMEA Dataset
https://github.com/huggingface/datasets/pull/1375
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1375", "html_url": "https://github.com/huggingface/datasets/pull/1375", "diff_url": "https://github.com/huggingface/datasets/pull/1375.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1375.patch", "merged_at": "2020-12-10T16:11:08" }
1,375
true
Add OPUS Tilde Model Dataset
https://github.com/huggingface/datasets/pull/1374
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1374", "html_url": "https://github.com/huggingface/datasets/pull/1374", "diff_url": "https://github.com/huggingface/datasets/pull/1374.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1374.patch", "merged_at": "2020-12-10T16:11:28" }
1,374
true
Add OPUS ECB Dataset
https://github.com/huggingface/datasets/pull/1373
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1373", "html_url": "https://github.com/huggingface/datasets/pull/1373", "diff_url": "https://github.com/huggingface/datasets/pull/1373.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1373.patch", "merged_at": "2020-12-10T15:25:54" }
1,373
true
Add OPUS Books Dataset
https://github.com/huggingface/datasets/pull/1372
[ "@lhoestq done" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1372", "html_url": "https://github.com/huggingface/datasets/pull/1372", "diff_url": "https://github.com/huggingface/datasets/pull/1372.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1372.patch", "merged_at": "2020-12-14T09:56:27" }
1,372
true
Adding Scielo
Adding Scielo: Parallel corpus of full-text articles in Portuguese, English and Spanish from SciELO https://sites.google.com/view/felipe-soares/datasets#h.p_92uSCyAjWSRB
https://github.com/huggingface/datasets/pull/1371
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1371", "html_url": "https://github.com/huggingface/datasets/pull/1371", "diff_url": "https://github.com/huggingface/datasets/pull/1371.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1371.patch", "merged_at": "2020-12-09T17:53:37" }
1,371
true
Add OPUS PHP Dataset
https://github.com/huggingface/datasets/pull/1370
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1370", "html_url": "https://github.com/huggingface/datasets/pull/1370", "diff_url": "https://github.com/huggingface/datasets/pull/1370.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1370.patch", "merged_at": "2020-12-10T15:37:24" }
1,370
true
Use passed --cache_dir for modules cache
When the `--cache_dir` arg is passed:

```shell
python datasets-cli test datasets/<my-dataset-folder> --save_infos --all_configs --cache_dir <my-cache-dir>
```

it is not used for caching the modules, which are cached in the default location at `.cache/huggingface/modules`. With this fix, the modules will be cached at `<my-cache-dir>/modules` instead. A sketch of redirecting the modules cache, as discussed in the comments, follows this entry.
https://github.com/huggingface/datasets/pull/1369
[ "I have a question: why not using a tmp dir instead, like the DummyDataGeneratorDownloadManager does?", "Hi @lhoestq, I am trying to understand better the logic...\r\n\r\nWhy do we have a `dynamic_module_path` besides the modules cache path?\r\n```python\r\nDYNAMIC_MODULES_PATH = os.path.join(HF_MODULES_CACHE, \"datasets_modules\")\r\n```\r\nMoreover, 2 subdirectories (for datasets and for metrics) were created inside it:\r\n```python\r\nDATASETS_PATH = os.path.join(DYNAMIC_MODULES_PATH, \"datasets\")\r\nMETRICS_PATH = os.path.join(DYNAMIC_MODULES_PATH, \"metrics\")\r\n```", "Hi :) \r\nThe modules cache path is the path added to `sys.path`.\r\nTherefore inside we need to have a folder that is going to be a package: `datasets_modules`.\r\nThis package will contain dynamic modules, i.e. datasets and metrics modules added on-the-fly.\r\nThen we have two sub-modules `datasets_modules.datasets` and `datasets_modules.metrics`.\r\n\r\nMaybe we can make things more explicit in the code with some comments explaining the structure, and maybe better variable naming as well..\r\n\r\nAlso I wanted to say that I started to work on offline loading of modules in #1726 and actually it lead to do similar changes to what you did to control the path where modules are stored.", "Hi @lhoestq, I see...\r\n\r\nIndeed I was also creating a draft for test_load, to clarify the expected behavior... ;)\r\n\r\nSo, for the command line:\r\n```sh\r\npython datasets-cli test datasets/<my-dataset-folder> --save_infos --all_configs --cache_dir <my-cache-dir>\r\n```\r\nthe `cache_dir` argument refers to dataset cache dir. We do not have control over the modules cache dir, but we would like to have. And if I understand well, you suggest adding another argument `dynamic_module_path`. Am I right?", "> So, for the command line:\r\n> \r\n> ```shell\r\n> python datasets-cli test datasets/<my-dataset-folder> --save_infos --all_configs --cache_dir <my-cache-dir>\r\n> ```\r\n> \r\n> the `cache_dir` argument refers to dataset cache dir. We do not have control over the modules cache dir, but we would like to have. And if I understand well, you suggest adding another argument `dynamic_module_path`. Am I right?\r\n\r\nYes the cache_dir is used to download files and also so save the dataset arrow files.\r\nThis is indeed different from the path for dynamic modules.\r\n\r\nI suggested to have `dynamic_module_path` as a parameter but actually this is the parent directory `hf_modules_cache` that we would need (it's the one that is passed to `init_dynamic_modules ` that we need to add to `sys.path`).\r\n\r\nCurrently it's already possible to override it using the env variable `HF_MODULES_CACHE` but we can imagine having it as a parameter as well.\r\n\r\nThis way the user controls both the `cache_dir` and the `hf_modules_cache` which are the two places used by the library to read/write stuff.\r\n\r\n", "I think #1726 is going to be merged pretty soon. Maybe can work on this as soon as it's merged to avoid doing the same things twice and to avoid conflicts ?", "I agree. Indeed I took some of your code in one of my last commit, to try to implement the logic you described." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1369", "html_url": "https://github.com/huggingface/datasets/pull/1369", "diff_url": "https://github.com/huggingface/datasets/pull/1369.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1369.patch", "merged_at": null }
1,369
true
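The review comments above mention that the modules location can already be overridden with the `HF_MODULES_CACHE` environment variable. A minimal sketch of that workaround, assuming the variable is read when `datasets` is imported (paths and the data file are placeholders):

```python
import os

# Redirect the dynamic-modules cache before importing `datasets`.
os.environ["HF_MODULES_CACHE"] = "/my-cache-dir/modules"

from datasets import load_dataset  # imported after setting the env var

# The data/arrow cache is controlled separately via `cache_dir`.
ds = load_dataset("text", data_files="my_file.txt", cache_dir="/my-cache-dir")
```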
Re-adding narrativeqa dataset
An update of #309.
https://github.com/huggingface/datasets/pull/1368
[ "@lhoestq I think I've fixed the dummy data - it finally passes! I'll add the model card now.", "@lhoestq - pretty happy with it now", "> Awesome thank you !\r\n> \r\n> Could you try to reduce the size of the dummy_data.zip file before we merge ? (it's 300KB right now)\r\n> \r\n> To do so feel free to take a look inside it and remove all the unnecessary files and chunks of text, to only keep a few examples. The idea is to have a zip file that is only a few KB\r\n\r\nAh, it only contains 1 example for each split. I think the problem is that I include an entire story (like in the full dataset). We can probably get away with a summarised version.", "> Nice thank you, can you make it even lighter if possible ? Something round 10KB would be awesone\r\n> We try to keep the repo light so that it doesn't take ages to clone. So we have to make sure the dummy data are as small as possible for every single dataset.\r\n\r\nHave trimmed a little more out of each example now." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1368", "html_url": "https://github.com/huggingface/datasets/pull/1368", "diff_url": "https://github.com/huggingface/datasets/pull/1368.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1368.patch", "merged_at": null }
1,368
true
adding covid-tweets-japanese
Adding COVID-19 Japanese Tweets Dataset as part of the sprint. Testing with dummy data is not working (the file is said to not exist). Sorry for the incomplete PR.
https://github.com/huggingface/datasets/pull/1367
[ "I think it's because the file you download uncompresses into a file and not a folder so `--autogenerate` couldn't create dummy data for you. See in your dummy_data.zip if there is a file there. If not, manually create your dummy data and compress them to dummy_data.zip.", "@cstorm125 Thank you for the comment! \r\nAs you point out, it seems my code has something wrong about downloading and uncompressing the file.\r\nHowever, my manually created dummy data seems to contain a file of the required format.\r\n\r\nOn Colaboratory,\r\n`!unzip /content/datasets/datasets/covid_tweets_japanese/dummy/1.1.0/dummy_data.zip`\r\nreturns:\r\n\r\n```\r\nArchive: /content/datasets/datasets/covid_tweets_japanese/dummy/1.1.0/dummy_data.zip\r\n creating: content/datasets/datasets/covid_tweets_japanese/dummy/1.1.0/dummy_data/\r\n extracting: content/datasets/datasets/covid_tweets_japanese/dummy/1.1.0/dummy_data/data.csv.bz2 \r\n```\r\n\r\nThe original data is `data.csv.bz2`, and I had a very hard time dealing with uncompressing bzip2.\r\nI think I could handle it, but there may be problems remain." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1367", "html_url": "https://github.com/huggingface/datasets/pull/1367", "diff_url": "https://github.com/huggingface/datasets/pull/1367.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1367.patch", "merged_at": null }
1,367
true
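The discussion above centers on handling the bz2-compressed `data.csv.bz2` file. A minimal standard-library sketch of reading such a file (not the actual loading script; the path is a placeholder):

```python
import bz2
import csv

# Open the compressed CSV in text mode and iterate over its rows.
with bz2.open("data.csv.bz2", mode="rt", encoding="utf-8") as f:
    reader = csv.reader(f)
    header = next(reader)  # first row holds the column names
    for row in reader:
        record = dict(zip(header, row))
        # ... process `record`, e.g. yield it from _generate_examples
```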
Adding Hope EDI dataset
https://github.com/huggingface/datasets/pull/1366
[ "@lhoestq Have addressed your comments. Please review. Thanks." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1366", "html_url": "https://github.com/huggingface/datasets/pull/1366", "diff_url": "https://github.com/huggingface/datasets/pull/1366.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1366.patch", "merged_at": "2020-12-14T14:27:57" }
1,366
true
Add Mkqa dataset
# MKQA: Multilingual Knowledge Questions & Answers Dataset

Adding the [MKQA](https://github.com/apple/ml-mkqa) dataset as part of the sprint 🎉 There are no official data splits, so I added just a `train` split. Differences from the original:

- the answer:type field is a ClassLabel (I thought it might be possible to train on this as a label for categorizing questions)
- the answer:entity field has a default value of empty string '' (since this key is not available for all examples in the original)
- the answer:alias field has a default value of []

- [x] All tests passed
- [x] Added dummy data
- [x] Added data card (as much as I could)

An illustrative feature-layout sketch follows this entry.
https://github.com/huggingface/datasets/pull/1365
[ "the `RemoteDatasetTest ` error pf the CI is fixed on master so it's fine", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1365", "html_url": "https://github.com/huggingface/datasets/pull/1365", "diff_url": "https://github.com/huggingface/datasets/pull/1365.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1365.patch", "merged_at": "2020-12-10T15:37:56" }
1,365
true
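An illustrative sketch of the feature layout described in the PR body, using the `datasets` feature types; this is not the actual mkqa script, and the field names and class-label inventory are assumptions:

```python
from datasets import ClassLabel, Features, Sequence, Value

features = Features(
    {
        "query": Value("string"),
        "answers": Sequence(
            {
                # ClassLabel for the answer type, as described above
                "type": ClassLabel(
                    names=["entity", "long_answer", "unanswerable", "date",
                           "number", "number_with_unit", "short_phrase", "binary"]
                ),
                "entity": Value("string"),             # defaults to "" when absent
                "text": Value("string"),
                "aliases": Sequence(Value("string")),  # defaults to []
            }
        ),
    }
)
```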
Narrative QA (Manual Download Stories) Dataset
Narrative QA with manual download for stories.
https://github.com/huggingface/datasets/pull/1364
[ "Hi ! Maybe we can rename it `narrativeqa_manual` to make it explicit that this one requires manual download contrary to `narrativeqa` ?\r\nIt's important to have this one as well, in case the `narrativeqa` one suffers from download issues (checksums or dead links for example).\r\n\r\nYou can also copy the dataset card from `narrativeqa` and add the dummy data as well", "Thanks @lhoestq will do all this and submit a request in the coming days. 😊 ", "Closing this as another pull request is already done. " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1364", "html_url": "https://github.com/huggingface/datasets/pull/1364", "diff_url": "https://github.com/huggingface/datasets/pull/1364.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1364.patch", "merged_at": null }
1,364
true
Adding OPUS MultiUN
Adding UnMulti http://www.euromatrixplus.net/multi-un/
https://github.com/huggingface/datasets/pull/1363
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1363", "html_url": "https://github.com/huggingface/datasets/pull/1363", "diff_url": "https://github.com/huggingface/datasets/pull/1363.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1363.patch", "merged_at": "2020-12-09T17:54:19" }
1,363
true
adding opus_infopankki
Adding opus_infopankki http://opus.nlpl.eu/infopankki-v1.php
https://github.com/huggingface/datasets/pull/1362
[ "Thanks Quentin !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1362", "html_url": "https://github.com/huggingface/datasets/pull/1362", "diff_url": "https://github.com/huggingface/datasets/pull/1362.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1362.patch", "merged_at": "2020-12-09T18:13:48" }
1,362
true
adding bprec
Brand-Product Relation Extraction Corpora in Polish
https://github.com/huggingface/datasets/pull/1361
[ "@lhoestq I think this is ready for review, I assume the errors (connection) are unrelated to the PR :) ", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1361", "html_url": "https://github.com/huggingface/datasets/pull/1361", "diff_url": "https://github.com/huggingface/datasets/pull/1361.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1361.patch", "merged_at": "2020-12-16T17:04:44" }
1,361
true
add wisesight1000
`wisesight1000` contains Thai social media texts randomly drawn from the full `wisesight-sentiment`, tokenized by human annotators. 250 samples are drawn from each of the labels `neg` (negative), `neu` (neutral), `pos` (positive), and `q` (question). Some texts are removed because they look like spam. Because these samples are representative of real-world content, we believe having these annotated samples will allow the community to robustly evaluate tokenization algorithms.
https://github.com/huggingface/datasets/pull/1360
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1360", "html_url": "https://github.com/huggingface/datasets/pull/1360", "diff_url": "https://github.com/huggingface/datasets/pull/1360.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1360.patch", "merged_at": "2020-12-10T14:28:41" }
1,360
true
Add JNLPBA
https://github.com/huggingface/datasets/pull/1359
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1359", "html_url": "https://github.com/huggingface/datasets/pull/1359", "diff_url": "https://github.com/huggingface/datasets/pull/1359.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1359.patch", "merged_at": "2020-12-10T14:24:36" }
1,359
true
Add spider dataset
This PR adds the Spider dataset, a large-scale complex and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students. The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases. Dataset website: https://yale-lily.github.io/spider Paper link: https://www.aclweb.org/anthology/D18-1425/
https://github.com/huggingface/datasets/pull/1358
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1358", "html_url": "https://github.com/huggingface/datasets/pull/1358", "diff_url": "https://github.com/huggingface/datasets/pull/1358.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1358.patch", "merged_at": "2020-12-10T15:12:31" }
1,358
true
Youtube caption corrections
This PR adds a new dataset of YouTube captions, errors, and corrections. This dataset was created in just the last week, inspired by this sprint!
https://github.com/huggingface/datasets/pull/1357
[ "Sorry about forgetting flake8.\r\nRather than use up the circleci resources on a new push with only formatting changes, I will wait to push until the results from all tests finish and/or any feedback comes in... probably tomorrow for me.", "\r\nSo... my normal work is with mercurial and seem to have clearly forked this up using git... :(\r\n\r\nWhat I did is after calling:\r\n```\r\ngit fetch upstream\r\ngit rebase upstream/master\r\n```\r\n\r\nI then I attempt to pull in my most recent changes UI commit changes based on @lhoestq's feedback with:\r\n```\r\ngit pull\r\n``` \r\n... which I now suspect undid the above fetch and rebase. Will look into fixing later today when I have more time. Sorry!\r\n", "My dummy data seems quite large as a single row is composed of tokens/labels for an entire youtube video, with at least one row required for each file, which in this case 1 file per 13 youtube channels.\r\n\r\nTo make it smaller I passed `--n_lines 1` to reduce about 5x.\r\n\r\nI then manually reduced size of the particularly long youtube lectures to get the size to about 30KB. However, after recompressing into a zip, and running dummy data test I got the following error:\r\n`FAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_youtube_caption_corrections - OSError: Cannot find data file. `, despite file being there, which I haven't had a chance yet to debug.", "I wrote a small script to generate a smaller json file for the dummy_data, with the hope that I could resolve the pytest error noted above (in case related to a manual typo I could have introduce), however the test contains to fail locally... here's to hoping it can pass on remote!", "Sorry for delayed comments here. Last commit made two changes:\r\n- Increased the valency of the labels from just True/False to more categories to describe the various types of diffs encountered. This required some rewrite of the README\r\n- Reduced the number of remote files to be downloaded from 13 to 4, by combining all 13 of the channel-specific files together, and the splitting them up in a way to meet Github file size requirements. This also reduces size of the dummy-data.", "@lhoestq, thank you for the great feedback, especially given how busy you guys are now! \r\n\r\nI checked out GitHub release tags and looks cool. I have added the version tag to the url, instead of the commit sha as originally suggested, with the hope that it serves the same purpose of pinning the content to this url. Please let me know if I have misunderstood.\r\n\r\nIn regard to dynamically changing the number of files downloaded by first downloading a JSON listing the files, I love that idea. But I am a little confused, as I was thinking that any changes to the dataset itself would require a new PR with an updated `dataset_infos.json`, e.g. `num_examples` would increase. \r\n\r\nIf the purpose of this is not to permit dynamic (without a PR needed) growth of the number of files, but instead to provide stability to the consumers of the dataset, maybe I continued use the release tags, maintaining access to old releases could serve this purpose? I am still learning about these release tags... ", "For dynamic datasets, i.e. datasets that evolve over time, we support custom configurations: they are configurations that are not part of the BUILDER_CONFIGS or in the dataset_infos.json\r\n\r\nFor example for wikipedia, you can use the latest wiki dump by specifying `date=` inside `load_dataset()`. 
A configuration is created on the fly for this date and is used to build the dataset using the latest data.\r\n\r\nTherefore we don't need to have PRs to update the script for each wikipedia release.\r\n\r\nOne downside though is that we don't have metadata in advance such as the size of the dataset.\r\n\r\nI think this could be a nice addition for the youtube caption dataset in the future to be have a system of releases and be able to load the version we want easily. What do you think ?", "\r\n\r\n\r\n\r\n> For dynamic datasets, i.e. datasets that evolve over time, we support custom configurations: they are configurations that are not part of the BUILDER_CONFIGS or in the dataset_infos.json\r\n> \r\n \r\n> I think this could be a nice addition for the youtube caption dataset in the future to be have a system of releases and be able to load the version we want easily. What do you think ?\r\n\r\nThank you for the suggestion! This sounds great! I will take a look at the some datasets that do this, and would love to give it a try in the future, if I continue to grow the captions dataset in a meaningful way. \r\n\r\nAppreciate all the help on this. It has been a really great experience for me. :)", "Excited to merge! And sorry to be such a github n00b, but from what I've quickly read, I don't 'Close pull request', but rather the next steps are action taken on your end... Please let me know if there is some action to be taken at my end first. :/", "Alright merging this one then :) " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1357", "html_url": "https://github.com/huggingface/datasets/pull/1357", "diff_url": "https://github.com/huggingface/datasets/pull/1357.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1357.patch", "merged_at": "2020-12-15T18:12:56" }
1,357
true
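The maintainer comment above describes on-the-fly configurations for dynamic datasets, using `wikipedia` with a `date=` argument as the example. A hedged sketch of that pattern (the date is a placeholder, and at the time processing a raw dump also required an Apache Beam runner):

```python
from datasets import load_dataset

# Keyword arguments that don't match a predefined config build a custom one.
wiki = load_dataset("wikipedia", language="en", date="20201201",
                    beam_runner="DirectRunner")
```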
Add StackOverflow StackSample dataset
This PR adds the StackOverflow StackSample dataset from Kaggle (https://www.kaggle.com/stackoverflow/stacksample). I ran through all of the steps; however, since my dataset requires manually downloading the data, I was unable to run pytest on the real dataset (the dummy-data pytest passed).
https://github.com/huggingface/datasets/pull/1356
[ "@lhoestq Thanks for the review and suggestions! I've added your comments and pushed the changes. I'm having issues with the dummy data still. When I run the dummy data test\r\n\r\n```bash\r\nRUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_so_stacksample\r\n```\r\nI get this error: \r\n\r\n```\r\n___________________________________________ LocalDatasetTest.test_load_dataset_all_configs_so_stacksample ____________________________________________\r\n\r\nself = <tests.test_dataset_common.LocalDatasetTest testMethod=test_load_dataset_all_configs_so_stacksample>, dataset_name = 'so_stacksample'\r\n\r\n @slow\r\n def test_load_dataset_all_configs(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True)\r\n\r\ntests/test_dataset_common.py:237: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:198: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n\r\nFAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_so_stacksample - AssertionError: False is not true\r\n```\r\n\r\nI tried formatting the data similar to other datasets, but I think I don't have my csv's in the zip folder with the proper name. I also ran the command that's supposed to outline the exact steps I need to perform to get them into the correct format, but I followed them and they don't seem to be working still :/. Any help would be greatly appreciated!\r\n", "Ok I found the issue with the dummy data.\r\nIt's currently failing because it's not generating a single example using the dummy csv file.\r\nThat's because there's only only line in the dummy csv file, and this line is skipped using the `next()` call used to ignore the headers of the csv.\r\n\r\nTo fix the dummy data you must add headers to the dummy csv files.", "Also can you make sure that all the original CSV files have headers ? i.e. check that their first line is just the column names", "> Ok I found the issue with the dummy data.\r\n> It's currently failing because it's not generating a single example using the dummy csv file.\r\n> That's because there's only only line in the dummy csv file, and this line is skipped using the `next()` call used to ignore the headers of the csv.\r\n> \r\n> To fix the dummy data you must add headers to the dummy csv files.\r\n\r\nOh man, I bamboozled myself! Thank you @lhoestq for catching that! I've updated the dummy csv's to include headers and also confirmed that they all have headers, so I am not throwing away any information with that `next()` call. When I run the test locally for the dummy data it passes, so hopefully it is good to go :D", "merging since the Ci is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1356", "html_url": "https://github.com/huggingface/datasets/pull/1356", "diff_url": "https://github.com/huggingface/datasets/pull/1356.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1356.patch", "merged_at": "2020-12-21T14:48:21" }
1,356
true
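The dummy-data failure discussed above comes from a `next()` call that consumes the first CSV row as a header, so header-less dummy files yield zero examples. A minimal sketch of that pattern (not the actual `so_stacksample` script):

```python
import csv

def generate_examples(filepath):
    """Yield (id, example) pairs from a CSV whose first row is the header."""
    with open(filepath, encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader)  # header row -- dummy data must include it too
        for idx, row in enumerate(reader):
            yield idx, dict(zip(header, row))
```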
Addition of py_ast dataset
@lhoestq as discussed in PR #1195
https://github.com/huggingface/datasets/pull/1355
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1355", "html_url": "https://github.com/huggingface/datasets/pull/1355", "diff_url": "https://github.com/huggingface/datasets/pull/1355.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1355.patch", "merged_at": "2020-12-09T16:19:48" }
1,355
true
Add TweetQA dataset
This PR adds the TweetQA dataset, the first dataset for QA on social media data by leveraging news media and crowdsourcing. Paper: https://arxiv.org/abs/1907.06292 Repository: https://tweetqa.github.io/
https://github.com/huggingface/datasets/pull/1354
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1354", "html_url": "https://github.com/huggingface/datasets/pull/1354", "diff_url": "https://github.com/huggingface/datasets/pull/1354.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1354.patch", "merged_at": "2020-12-10T15:10:30" }
1,354
true
New instruction for how to generate dataset_infos.json
Add additional instructions for how to generate dataset_infos.json for manual-download datasets. Information courtesy of Taimur Ibrahim from the Slack channel.
https://github.com/huggingface/datasets/pull/1353
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1353", "html_url": "https://github.com/huggingface/datasets/pull/1353", "diff_url": "https://github.com/huggingface/datasets/pull/1353.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1353.patch", "merged_at": "2020-12-10T13:45:15" }
1,353
true
change url for prachathai67k to internet archive
`prachathai67k` is currently downloaded via Git LFS from the PyThaiNLP GitHub repository. Since the file is quite large (~250MB), I moved the URL to archive.org in order to prevent rate-limit issues.
https://github.com/huggingface/datasets/pull/1352
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1352", "html_url": "https://github.com/huggingface/datasets/pull/1352", "diff_url": "https://github.com/huggingface/datasets/pull/1352.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1352.patch", "merged_at": "2020-12-10T13:42:17" }
1,352
true
added craigslist_bargains
`craigslist_bargains` dataset from [here](https://worksheets.codalab.org/worksheets/0x453913e76b65495d8b9730d41c7e0a0c/) (cleaned-up version of #1278)
https://github.com/huggingface/datasets/pull/1351
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1351", "html_url": "https://github.com/huggingface/datasets/pull/1351", "diff_url": "https://github.com/huggingface/datasets/pull/1351.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1351.patch", "merged_at": "2020-12-10T14:14:34" }
1,351
true
add LeNER-Br dataset
Adding the LeNER-Br dataset, a Portuguese language dataset for named entity recognition
https://github.com/huggingface/datasets/pull/1350
[ "I don't know what happened, my first commit passed on all checks, but after just a README.md update one of the scripts failed, is it normal? 😕 ", "Looks like a flaky connection error, I've launched a re-run, it should be fine :)", "The RemoteDatasetTest error in the CI is just a connection error, we can ignore it", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1350", "html_url": "https://github.com/huggingface/datasets/pull/1350", "diff_url": "https://github.com/huggingface/datasets/pull/1350.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1350.patch", "merged_at": "2020-12-10T14:11:33" }
1,350
true
initial commit for MultiReQA
Added MultiReQA, which is a dataset containing the sentence boundary annotation from eight publicly available QA datasets including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA.
https://github.com/huggingface/datasets/pull/1349
[ "looks like this dataset includes changes about many other files than the ones for multi_re_qa\r\n\r\nCan you create another branch and another PR please ?", "> looks like this dataset includes changes about many other files than the ones for multi_re_qa\r\n> \r\n> Can you create another branch and another PR please ?\r\n\r\nSure I will do that. Thank you." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1349", "html_url": "https://github.com/huggingface/datasets/pull/1349", "diff_url": "https://github.com/huggingface/datasets/pull/1349.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1349.patch", "merged_at": null }
1,349
true
add Yoruba NER dataset
Added Yoruba GV dataset based on this paper
https://github.com/huggingface/datasets/pull/1348
[ "Thank you. Okay, other pull requests only have one dataset", "The `RemoteDatasetTest` error in the CI is just a connection error, we can ignore it", "merging since the CI is fixed on master", "Thank you very much" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1348", "html_url": "https://github.com/huggingface/datasets/pull/1348", "diff_url": "https://github.com/huggingface/datasets/pull/1348.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1348.patch", "merged_at": "2020-12-10T14:09:43" }
1,348
true
Add spanish billion words corpus
Add an unannotated Spanish corpus of nearly 1.5 billion words, compiled from different resources from the web.
https://github.com/huggingface/datasets/pull/1347
[ "Thank you for your feedback! I've reduced the dummy data size to 2KB.\r\n\r\nI had to rebase to fix `RemoteDatasetTest` fails, sorry about the 80 commits. \r\nI could create a new clean PR if you prefer.", "I have seen that in similar cases you have suggested to other contributors to create another branch and another PR, so I will do that.", "Yes thank you !", "The new PR is #1476 :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1347", "html_url": "https://github.com/huggingface/datasets/pull/1347", "diff_url": "https://github.com/huggingface/datasets/pull/1347.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1347.patch", "merged_at": null }
1,347
true
Add MultiBooked dataset
Add dataset.
https://github.com/huggingface/datasets/pull/1346
[ "There' still an issue with the dummy data, let me take a look" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1346", "html_url": "https://github.com/huggingface/datasets/pull/1346", "diff_url": "https://github.com/huggingface/datasets/pull/1346.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1346.patch", "merged_at": "2020-12-15T17:02:08" }
1,346
true
First commit of NarrativeQA Dataset
Added the NarrativeQA dataset, with a manual download option that uses the original scripts provided by the authors.
https://github.com/huggingface/datasets/pull/1345
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1345", "html_url": "https://github.com/huggingface/datasets/pull/1345", "diff_url": "https://github.com/huggingface/datasets/pull/1345.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1345.patch", "merged_at": null }
1,345
true
Add hausa ner corpus
Added Hausa VOA NER data
https://github.com/huggingface/datasets/pull/1344
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1344", "html_url": "https://github.com/huggingface/datasets/pull/1344", "diff_url": "https://github.com/huggingface/datasets/pull/1344.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1344.patch", "merged_at": null }
1,344
true
Add LiveQA
This PR adds LiveQA, the Chinese real-time/timeline-based QA task by [Liu et al., 2020](https://arxiv.org/pdf/2010.00526.pdf).
https://github.com/huggingface/datasets/pull/1343
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1343", "html_url": "https://github.com/huggingface/datasets/pull/1343", "diff_url": "https://github.com/huggingface/datasets/pull/1343.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1343.patch", "merged_at": "2020-12-14T09:40:28" }
1,343
true
[yaml] Fix metadata according to pre-specified scheme
@lhoestq @yjernite
https://github.com/huggingface/datasets/pull/1342
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1342", "html_url": "https://github.com/huggingface/datasets/pull/1342", "diff_url": "https://github.com/huggingface/datasets/pull/1342.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1342.patch", "merged_at": "2020-12-09T15:37:26" }
1,342
true
added references to online data card creator to all guides
We can now use the wonderful online form for dataset cards created by @evrardts
https://github.com/huggingface/datasets/pull/1341
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1341", "html_url": "https://github.com/huggingface/datasets/pull/1341", "diff_url": "https://github.com/huggingface/datasets/pull/1341.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1341.patch", "merged_at": "2020-12-08T21:36:11" }
1,341
true
:fist: ¡Viva la Independencia!
Adds the Catalonia Independence Corpus for stance-detection of Tweets. Ready for review!
https://github.com/huggingface/datasets/pull/1340
[ "I've added the changes / fixes - ready for a second pass :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1340", "html_url": "https://github.com/huggingface/datasets/pull/1340", "diff_url": "https://github.com/huggingface/datasets/pull/1340.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1340.patch", "merged_at": "2020-12-14T10:36:01" }
1,340
true
hate_speech_18 initial commit
https://github.com/huggingface/datasets/pull/1339
[ "> Nice thanks !\r\n> \r\n> Can you rename the dataset folder and the dataset script name `hate_speech18` instead of `hate_speech_18` to follow the snake case convention we're using ?\r\n> \r\n> Also it looks like the dummy_data.zip file is quite big (almost 4MB).\r\n> Can you try to reduce its size ?\r\n> \r\n> To do so feel free to take a look inside it and remove all the unnecessary files or chunks of texts. The idea is to only keep a few examples\r\n\r\nDone, thanks! ", "Re-opened in https://github.com/huggingface/datasets/pull/1486" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1339", "html_url": "https://github.com/huggingface/datasets/pull/1339", "diff_url": "https://github.com/huggingface/datasets/pull/1339.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1339.patch", "merged_at": null }
1,339
true
Add GigaFren Dataset
https://github.com/huggingface/datasets/pull/1338
[ "@lhoestq fixed" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1338", "html_url": "https://github.com/huggingface/datasets/pull/1338", "diff_url": "https://github.com/huggingface/datasets/pull/1338.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1338.patch", "merged_at": "2020-12-14T10:03:46" }
1,338
true
Add spanish billion words
Add an unannotated corpus of the Spanish language of nearly 1.5 billion words, compiled from different resources from the web. The dataset needs 10 GB (download: 1.89 GiB, generated: 8.34 GiB, post-processed: Unknown size, total: 10.22 GiB). The tests using dummy data pass, but my laptop isn't able to run them on the real data (I left it running for over 8 hours and it didn't finish).
https://github.com/huggingface/datasets/pull/1337
[ "The tests failed because of ```RemoteDatasetTest``` so I tried ```git rebase``` and messed everything up. I've made a new clean PR (#1347)." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1337", "html_url": "https://github.com/huggingface/datasets/pull/1337", "diff_url": "https://github.com/huggingface/datasets/pull/1337.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1337.patch", "merged_at": null }
1,337
true
Add dataset Yoruba BBC Topic Classification
Added the new dataset Yoruba BBC Topic Classification. Contains a loading script as well as a dataset card including YAML tags.
https://github.com/huggingface/datasets/pull/1336
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1336", "html_url": "https://github.com/huggingface/datasets/pull/1336", "diff_url": "https://github.com/huggingface/datasets/pull/1336.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1336.patch", "merged_at": "2020-12-10T11:27:41" }
1,336
true
Added Bianet dataset
Hi :hugs:, This is a PR for the [Bianet: A parallel news corpus in Turkish, Kurdish and English; Source](http://opus.nlpl.eu/Bianet.php) dataset.
https://github.com/huggingface/datasets/pull/1335
[ "merging since the Ci is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1335", "html_url": "https://github.com/huggingface/datasets/pull/1335", "diff_url": "https://github.com/huggingface/datasets/pull/1335.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1335.patch", "merged_at": "2020-12-14T10:00:55" }
1,335
true
Add QED Amara Dataset
https://github.com/huggingface/datasets/pull/1334
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1334", "html_url": "https://github.com/huggingface/datasets/pull/1334", "diff_url": "https://github.com/huggingface/datasets/pull/1334.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1334.patch", "merged_at": "2020-12-10T11:15:57" }
1,334
true
Add Tanzil Dataset
https://github.com/huggingface/datasets/pull/1333
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1333", "html_url": "https://github.com/huggingface/datasets/pull/1333", "diff_url": "https://github.com/huggingface/datasets/pull/1333.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1333.patch", "merged_at": "2020-12-10T11:14:43" }
1,333
true
Add Open Subtitles Dataset
https://github.com/huggingface/datasets/pull/1332
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1332", "html_url": "https://github.com/huggingface/datasets/pull/1332", "diff_url": "https://github.com/huggingface/datasets/pull/1332.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1332.patch", "merged_at": "2020-12-10T11:13:18" }
1,332
true
First version of the new dataset hausa_voa_topics
Contains a loading script as well as a dataset card including YAML tags.
https://github.com/huggingface/datasets/pull/1331
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1331", "html_url": "https://github.com/huggingface/datasets/pull/1331", "diff_url": "https://github.com/huggingface/datasets/pull/1331.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1331.patch", "merged_at": "2020-12-10T11:09:53" }
1,331
true
added un_ga dataset
Hi :hugs:, This is a PR for the [United nations general assembly resolutions: A six-language parallel corpus](http://opus.nlpl.eu/UN.php) dataset.
https://github.com/huggingface/datasets/pull/1330
[ "Looks like this PR includes changes about many other files than the ones for un_ga\r\n\r\nCan you create another branch an another PR please ?", "@lhoestq, Thank you for suggestions. I have made the changes and raised the new PR https://github.com/huggingface/datasets/pull/1569. " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1330", "html_url": "https://github.com/huggingface/datasets/pull/1330", "diff_url": "https://github.com/huggingface/datasets/pull/1330.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1330.patch", "merged_at": null }
1,330
true
Add yoruba ner corpus
https://github.com/huggingface/datasets/pull/1329
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1329", "html_url": "https://github.com/huggingface/datasets/pull/1329", "diff_url": "https://github.com/huggingface/datasets/pull/1329.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1329.patch", "merged_at": null }
1,329
true
Added the NewsPH Raw dataset and corresponding dataset card
This PR adds the original NewsPH dataset, which is used to auto-generate the NewsPH-NLI dataset. Reopened as a new PR since the previous one had problems. Paper: https://arxiv.org/abs/2010.11574 Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
https://github.com/huggingface/datasets/pull/1328
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1328", "html_url": "https://github.com/huggingface/datasets/pull/1328", "diff_url": "https://github.com/huggingface/datasets/pull/1328.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1328.patch", "merged_at": "2020-12-10T11:04:34" }
1,328
true
Add msr_genomics_kbcomp dataset
https://github.com/huggingface/datasets/pull/1327
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1327", "html_url": "https://github.com/huggingface/datasets/pull/1327", "diff_url": "https://github.com/huggingface/datasets/pull/1327.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1327.patch", "merged_at": "2020-12-08T18:18:06" }
1,327
true
TEP: Tehran English-Persian parallel corpus
TEP: Tehran English-Persian parallel corpus. More info: http://opus.nlpl.eu/TEP.php
https://github.com/huggingface/datasets/pull/1326
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1326", "html_url": "https://github.com/huggingface/datasets/pull/1326", "diff_url": "https://github.com/huggingface/datasets/pull/1326.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1326.patch", "merged_at": "2020-12-10T11:25:17" }
1,326
true
Add humicroedit dataset
Pull request for adding humicroedit dataset
https://github.com/huggingface/datasets/pull/1325
[ "Updated the commit with the generated yaml tags", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1325", "html_url": "https://github.com/huggingface/datasets/pull/1325", "diff_url": "https://github.com/huggingface/datasets/pull/1325.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1325.patch", "merged_at": "2020-12-17T17:59:09" }
1,325
true
❓ Sharing ElasticSearch indexed dataset
Hi there, First of all, thank you very much for this amazing library. Datasets have become my preferred data structure for basically everything I am currently doing. **Question:** I'm working with a dataset and I have an Elasticsearch container running at localhost:9200. I added an Elasticsearch index and I was wondering: (1) how can I know where it has been saved? (2) how can I share the indexed dataset with others? (A hedged sketch of the typical indexing calls follows this entry.) I tried to dig into the docs, but could not find anything about that. Thank you very much for your help. Best, Pietro Edit: apologies for the wrong label
https://github.com/huggingface/datasets/issues/1324
[ "Hello @pietrolesci , I am not sure to understand what you are trying to do here.\r\n\r\nIf you're looking for ways to save a dataset on disk, you can you the `save_to_disk` method:\r\n```python\r\n>>> import datasets\r\n>>> loaded_dataset = datasets.load(\"dataset_name\")\r\n>>> loaded_dataset.save_to_disk(\"/path/on/your/disk\")\r\n```\r\n\r\nThe saved dataset can later be retrieved using:\r\n```python\r\n>>> loaded_dataset = datasets.Dataset.load_from_disk(\"/path/on/your/disk\")\r\n```\r\n\r\nAlso, I'd recommend posting your question directly in the issue section of the [elasticsearch repo](https://github.com/elastic/elasticsearch)", "Hi @SBrandeis,\n\nThanks a lot for picking up my request. \n\nMaybe I can clarify my use-case with a bit of context. Say I have the IMDb dataset. I create an ES index on it. Now I can save and reload the dataset from disk normally. Once I reload the dataset, it is easy to retrieve the ES index on my machine. I was wondering: is there a way I can share the (now) indexed version of the IMDb dataset with my colleagues without requiring them to re-index it?\n\nThanks a lot in advance for your consideration.\n\nBest,\n\nPietro", "Thanks for the clarification.\r\n\r\nI am not familiar with ElasticSearch, but if I understand well you're trying to migrate your data along with the ES index.\r\nMy advice would be to check out ES documentation, for instance, this might help you: https://www.elastic.co/guide/en/cloud/current/ec-migrate-data.html\r\n\r\nLet me know if it helps" ]
null
1,324
false
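A minimal, hedged sketch of how an Elasticsearch index is typically attached and re-attached with `datasets` (the method and parameter names here are assumptions to be checked against the library documentation). The key point for the question above is that the index lives in the Elasticsearch cluster rather than in the dataset's Arrow files, so sharing it amounts to giving colleagues access to the same cluster host and index name.

```python
# Hedged sketch: method and parameter names are assumptions, not verified against
# a specific `datasets` release.
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

# Build the index inside the Elasticsearch cluster at localhost:9200.
# The index is stored by the cluster, not inside the dataset's files.
ds.add_elasticsearch_index(
    "text", host="localhost", port=9200, es_index_name="imdb_train_text"
)

# Anyone with access to the same cluster can re-attach the existing index
# without re-indexing, then query it.
ds2 = load_dataset("imdb", split="train")
ds2.load_elasticsearch_index(
    "text", host="localhost", port=9200, es_index_name="imdb_train_text"
)
scores, retrieved = ds2.get_nearest_examples("text", "a wonderful movie", k=5)
```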
Add CC-News dataset of English language articles
Adds the [CC-News](https://commoncrawl.org/2016/10/news-dataset-available/) dataset. It contains 708,241 English-language news articles. Although each article has a language field, these tags are not reliable. I've used the spaCy language detection [pipeline](https://spacy.io/universe/project/spacy-langdetect) to confirm that the article language is indeed English (a hedged language-check sketch follows this entry). The prepared dataset is temporarily hosted on my private Google Storage [bucket](https://storage.googleapis.com/hf_datasets/cc_news.tar.gz). We can move it to HF storage and update this PR before merging.
https://github.com/huggingface/datasets/pull/1323
[ "@vblagoje nice work, please add the README.md file and it would be ready", "@lhoestq @tanmoyio @yjernite please have a look at the dataset card. Don't forget that the dataset is still hosted on my private gs bucket and should eventually be moved to the HF bucket", "I will move the files soon and ping you when it's done and with the new URLs :) ", "Hi !\r\n\r\nI just moved the file to a HF bucket. It's available at https://storage.googleapis.com/huggingface-nlp/datasets/cc_news/cc_news.tar.gz\r\n\r\nSorry for the delay ^^'", "@lhoestq no worries, updated PR with the new URL and rebased to master\r\n" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1323", "html_url": "https://github.com/huggingface/datasets/pull/1323", "diff_url": "https://github.com/huggingface/datasets/pull/1323.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1323.patch", "merged_at": "2021-02-01T16:55:49" }
1,323
true
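A hedged illustration of the kind of language check described in the entry above, using the standalone `langdetect` package instead of the spaCy pipeline the author mentions; the filtering logic and sample data are assumptions, not the author's actual preprocessing code.

```python
# Hedged illustration (not the author's preprocessing code): keep only articles
# whose detected language is English, using the standalone `langdetect` package.
from langdetect import DetectorFactory, detect

DetectorFactory.seed = 0  # make detection deterministic across runs

def is_english(text: str) -> bool:
    try:
        return detect(text) == "en"
    except Exception:  # empty or undecidable text raises a detection error
        return False

articles = [
    {"title": "Local team wins", "text": "The home side secured a late victory on Sunday."},
    {"title": "Noticias", "text": "El equipo local consiguió una victoria sobre la hora."},
]
english_articles = [a for a in articles if is_english(a["text"])]
```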
add indonlu benchmark datasets
The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for the Indonesian language. There are 12 datasets in IndoNLU.
https://github.com/huggingface/datasets/pull/1322
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1322", "html_url": "https://github.com/huggingface/datasets/pull/1322", "diff_url": "https://github.com/huggingface/datasets/pull/1322.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1322.patch", "merged_at": null }
1,322
true
added dutch_social
The Dutch social media tweets dataset, which has a total of more than 210k tweets in the Dutch language. These tweets have been machine-annotated with sentiment scores (`label` feature) as well as `industry` and `hisco_codes`. It can be used for sentiment analysis, multi-label classification, and entity tagging.
https://github.com/huggingface/datasets/pull/1321
[ "@lhoestq \r\nUpdated the `dummy_data.zip `(<10kb)I had to reduce it to just a few samples. \r\nTrain-Test-Dev (20-5-5 samples) \r\n\r\nBut the push also added changes from other PRs (probably because of a rebase!) So the files changed tab shows 466 files were changed! \r\n", "Thanks ! The dummy data are all good now :) \r\n\r\nLooks like this PR includes changes to many other files than the ones for dutch_social now.\r\n\r\nCan you create another branch and another PR please ?", "> \r\n> Can you create another branch and another PR please ?\r\n@lhoestq \r\n\r\nI did a rebase. Now it doesn't include the other files. Does that help? \r\n\r\n", "Yes thanks !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1321", "html_url": "https://github.com/huggingface/datasets/pull/1321", "diff_url": "https://github.com/huggingface/datasets/pull/1321.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1321.patch", "merged_at": "2020-12-16T10:14:17" }
1,321
true
Added the WikiText-TL39 dataset and corresponding card
This PR adds the WikiText-TL-39 Filipino Language Modeling dataset. Opened a new pull request since there were problems with the earlier one. Paper: https://arxiv.org/abs/1907.00409 Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
https://github.com/huggingface/datasets/pull/1320
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1320", "html_url": "https://github.com/huggingface/datasets/pull/1320", "diff_url": "https://github.com/huggingface/datasets/pull/1320.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1320.patch", "merged_at": "2020-12-10T11:24:52" }
1,320
true
adding wili-2018 language identification dataset
https://github.com/huggingface/datasets/pull/1319
[ "@lhoestq Not sure what happened, I just changed the py file but it is showing some TensorFlow error now.", "You can ignore it.\r\nIt's caused by the Tensorflow update that happened 30min ago. They added breaking changes.\r\nI'm working on a fix on the master branch right now\r\n", "oh okay, btw I have made the required change for reading the CSV, I think it should be fine now, please take a look at it when you have some time.", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1319", "html_url": "https://github.com/huggingface/datasets/pull/1319", "diff_url": "https://github.com/huggingface/datasets/pull/1319.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1319.patch", "merged_at": "2020-12-14T21:20:32" }
1,319
true
ethos first commit
Ethos passed all the tests except for this one: `RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<your-dataset-name>` It fails with this error: E OSError: Cannot find data file. E Original error: E [Errno 2] No such file or directory:
https://github.com/huggingface/datasets/pull/1318
[ "> Nice thanks !\r\n> \r\n> I left a few comments\r\n> \r\n> Also it looks like this PR includes changes about other files than the ones for ethos\r\n> \r\n> Can you create another branch and another PR please ?\r\n\r\n@lhoestq Should I close this PR? The new one is the: #1453", "You can create another PR and close this one if you don't mind", "> You can create another PR and close this one if you don't mind\r\n\r\nPerfect! You should see the #1453 PR for the fixed version! Thanks" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1318", "html_url": "https://github.com/huggingface/datasets/pull/1318", "diff_url": "https://github.com/huggingface/datasets/pull/1318.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1318.patch", "merged_at": null }
1,318
true
add 10k German News Article Dataset
https://github.com/huggingface/datasets/pull/1317
[ "You can just create another branch from master on your fork and create another PR:\r\n\r\nfirst update your master branch\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit push\r\n```\r\n\r\nthen create a new branch\r\n```\r\ngit checkout -b my-new-branch-name\r\n```\r\n\r\nThen you can add, commit and push the gnad10 files and open a new PR", "closing in favor of #1572 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1317", "html_url": "https://github.com/huggingface/datasets/pull/1317", "diff_url": "https://github.com/huggingface/datasets/pull/1317.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1317.patch", "merged_at": null }
1,317
true
Allow GitHub releases as dataset source
# Summary Providing a GitHub release URL to `DownloadManager.download()` currently throws a `ConnectionError: Couldn't reach [DOWNLOAD_URL]`. This PR fixes this problem by adding an exception for GitHub releases in `datasets.utils.file_utils.get_from_cache()`. # Reproduce ``` import datasets url = 'http://github.com/benjaminvdb/DBRD/releases/download/v3.0/DBRD_v3.tgz' result = datasets.utils.file_utils.get_from_cache(url) # Returns: ConnectionError: Couldn't reach http://github.com/benjaminvdb/DBRD/releases/download/v3.0/DBRD_v3.tgz ``` # Cause GitHub releases return an HTTP status 403 (FOUND), indicating that the request is being redirected (to AWS S3, in this case). `get_from_cache()` checks whether the status is 200 (OK) or if it is part of two exceptions (Google Drive or Firebase), otherwise the mentioned error is thrown. # Solution Just like the exceptions for Google Drive and Firebase, add a condition for GitHub release URLs that return the HTTP status 403. If this is the case, continue normally (a hedged sketch of this check follows this entry).
https://github.com/huggingface/datasets/pull/1316
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1316", "html_url": "https://github.com/huggingface/datasets/pull/1316", "diff_url": "https://github.com/huggingface/datasets/pull/1316.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1316.patch", "merged_at": "2020-12-10T10:12:00" }
1,316
true
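A minimal sketch of the exception described in the entry above. This is an illustration of the idea only, not the actual `datasets.utils.file_utils` code: the helper name, the URL marker, and the exact set of accepted status codes are assumptions.

```python
# Hedged sketch of the idea, not the real datasets implementation: a GitHub release
# URL that answers a HEAD request with a non-200 status should still be treated as
# reachable instead of raising ConnectionError.
import requests

GITHUB_RELEASE_MARKER = "/releases/download/"  # hypothetical marker for release assets

def response_is_usable(url: str, response: requests.Response) -> bool:
    """Decide whether a HEAD response means the URL can be downloaded."""
    if response.status_code == 200:
        return True
    # GitHub release assets hand the request off to external storage (e.g. AWS S3)
    # instead of answering 200 directly, so accept the redirect-style statuses
    # (and the 403 reported in this PR) for such URLs.
    if "github.com" in url and GITHUB_RELEASE_MARKER in url:
        return response.status_code in (301, 302, 303, 307, 308, 403)
    return False

url = "http://github.com/benjaminvdb/DBRD/releases/download/v3.0/DBRD_v3.tgz"
head = requests.head(url, allow_redirects=False)
if not response_is_usable(url, head):
    raise ConnectionError(f"Couldn't reach {url}")
```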
add yelp_review_full
This corresponds to the Yelp-5 requested in https://github.com/huggingface/datasets/issues/353 I included the dataset card.
https://github.com/huggingface/datasets/pull/1315
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1315", "html_url": "https://github.com/huggingface/datasets/pull/1315", "diff_url": "https://github.com/huggingface/datasets/pull/1315.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1315.patch", "merged_at": "2020-12-09T15:55:48" }
1,315
true
Add snips built in intents 2016 12
This PR proposes to add the Snips.ai built-in intents dataset. The first configuration added is for the intent labels only, but the dataset includes entity slots that may in the future be added as alternate configurations.
https://github.com/huggingface/datasets/pull/1314
[ "It is not clear how to automatically add the dummy data if the source data is a more complex json format. Should I manually take a fraction of the source data and include it as dummy data?\r\n", "Added a fraction of the real data as dummy data.", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1314", "html_url": "https://github.com/huggingface/datasets/pull/1314", "diff_url": "https://github.com/huggingface/datasets/pull/1314.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1314.patch", "merged_at": "2020-12-14T09:59:06" }
1,314
true
Add HateSpeech Corpus for Polish
This PR adds a HateSpeech Corpus for Polish, containing offensive language examples. - **Homepage:** http://zil.ipipan.waw.pl/HateSpeech - **Paper:** http://www.qualitativesociologyreview.org/PL/Volume38/PSJ_13_2_Troszynski_Wawer.pdf
https://github.com/huggingface/datasets/pull/1313
[ "@lhoestq Do you think using the ClassLabel is correct if we don't know the meaning of them?", "Once we find out the meanings we can still add them to the dataset card", "Feel free to ping me when the PR is ready for the final review" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1313", "html_url": "https://github.com/huggingface/datasets/pull/1313", "diff_url": "https://github.com/huggingface/datasets/pull/1313.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1313.patch", "merged_at": "2020-12-16T16:48:45" }
1,313
true
Jigsaw toxicity pred
Requires manually downloading the data from Kaggle (a hedged loading sketch follows this entry).
https://github.com/huggingface/datasets/pull/1312
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1312", "html_url": "https://github.com/huggingface/datasets/pull/1312", "diff_url": "https://github.com/huggingface/datasets/pull/1312.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1312.patch", "merged_at": null }
1,312
true
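A hedged sketch of how a manual-download dataset like this one is typically loaded once the Kaggle files are on disk; the dataset name and the expected file layout below are assumptions, not taken from this PR.

```python
# Hedged sketch: the dataset name and expected directory layout are assumptions.
from datasets import load_dataset

# After downloading and unzipping the Kaggle files (e.g. train.csv, test.csv,
# test_labels.csv) into a local folder, point load_dataset at that folder:
ds = load_dataset(
    "jigsaw_toxicity_pred",               # assumed dataset name on the Hub
    data_dir="/path/to/jigsaw_toxicity",  # folder with the manually downloaded files
)
print(ds)
```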
Add OPUS Bible Corpus (102 Languages)
https://github.com/huggingface/datasets/pull/1311
[ "@lhoestq done" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1311", "html_url": "https://github.com/huggingface/datasets/pull/1311", "diff_url": "https://github.com/huggingface/datasets/pull/1311.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1311.patch", "merged_at": "2020-12-09T15:30:56" }
1,311
true
Add OffensEval-TR 2020 Dataset
This PR adds the OffensEval-TR 2020 dataset, which is a Turkish offensive language corpus by me and @basakbuluz. The corpus consists of randomly sampled tweets annotated in a similar way to [OffensEval](https://sites.google.com/site/offensevalsharedtask/) and [GermEval](https://projects.fzai.h-da.de/iggsa/). - **Homepage:** [offensive-turkish](https://coltekin.github.io/offensive-turkish/) - **Paper:** [A Corpus of Turkish Offensive Language on Social Media](https://coltekin.github.io/offensive-turkish/troff.pdf) - **Point of Contact:** [Çağrı Çöltekin](ccoltekin@sfs.uni-tuebingen.de)
https://github.com/huggingface/datasets/pull/1310
[ "@lhoestq, can you please review this PR? ", "> Awesome thank you !\r\n\r\nThanks for the small fixes @lhoestq ", "@coltekin, we have added the data set that you created an article that says \"Turkish Attack Language Community in Social Media\", HuggingFace dataset update sprint for you. We added Sprint quickly for a short time. I hope you welcome it too. The dataset is accessible at https://huggingface.co/datasets/offenseval2020_tr. ", "Thank you for the heads up. I am not familiar with the terminology above (no idea what a sprint is), but I am happy that you found the data useful. Please feel free to distribute/use it as you see fit.\r\n\r\nThe OffensEval version you included in your data set has only binary labels. There is also a version [here](https://coltekin.github.io/offensive-turkish/troff-v1.0.tsv.gz) which also includes fine-grained labels similar to the OffensEval English data set - Just in case it would be of interest.\r\n\r\nIf you have questions about the data set, or need more information please let me know." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1310", "html_url": "https://github.com/huggingface/datasets/pull/1310", "diff_url": "https://github.com/huggingface/datasets/pull/1310.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1310.patch", "merged_at": "2020-12-09T16:02:06" }
1,310
true
Add SAMSum Corpus dataset
Did not spend much time writing the README, might update later. Copied the description and some stuff from tensorflow_datasets https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/summarization/samsum.py
https://github.com/huggingface/datasets/pull/1309
[ "also to fix the check_code_quality CI you have to remove the imports of the unused `csv` and `os`", "@lhoestq Thanks for the review! I have done what you asked, README is also updated. 🤗 \r\nThe CI fails because of the added dependency. I have never used circleCI before, so I am curious how will you solve that?", "I just added `py7zr` to our test dependencies", "merging since the CI is fixed on master", "Thanks! 🤗 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1309", "html_url": "https://github.com/huggingface/datasets/pull/1309", "diff_url": "https://github.com/huggingface/datasets/pull/1309.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1309.patch", "merged_at": "2020-12-14T10:20:55" }
1,309
true
Add Wiki Lingua Dataset
Hello, This is my first PR. I have added the Wiki Lingua dataset along with a dataset card to the best of my knowledge. There was one hiccup though: I was unable to create dummy data because the data is in pkl format (a hedged dummy-data sketch follows this entry). From the documentation, I see that: ```At the moment it supports data files in the following format: txt, csv, tsv, jsonl, json, xml```
https://github.com/huggingface/datasets/pull/1308
[ "I am done adding the dataset. Requesting to review and advise.", "looks like this PR has changes about many other files than the ones for WIki Lingua \r\n\r\nCan you create another branch and another PR please ?", "Any reason to have english as the default config over the other languages ?", "> looks like this PR has changes about many other files than the ones for WIki Lingua\r\n> \r\n> Can you create another branch and another PR please ?\r\n\r\nOk, I will create another branch and submit a fresh PR.", "> Any reason to have english as the default config over the other languages ?\r\n\r\nThe data for all other languages has a direct reference to English article. Thus, I kept English as default.", "closing in favor of #1470 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1308", "html_url": "https://github.com/huggingface/datasets/pull/1308", "diff_url": "https://github.com/huggingface/datasets/pull/1308.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1308.patch", "merged_at": null }
1,308
true
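Since the automatic dummy-data tool does not cover pkl files, here is a hedged sketch of how a tiny dummy pickle could be hand-built from the real data; the file names, paths, and the assumption that the pickled object is a dict are for illustration only.

```python
# Hedged sketch: file names, paths, and the dict assumption are illustrative only.
import pickle

# Load the full pickled data (assumed here to be a dict keyed by article URL) ...
with open("wikilingua/english.pkl", "rb") as f:
    full_data = pickle.load(f)

# ... keep only a handful of records ...
small = dict(list(full_data.items())[:3])

# ... and re-pickle them as the small file that goes inside dummy_data.zip.
with open("dummy_data/english.pkl", "wb") as f:
    pickle.dump(small, f)
```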