| Column | Type | Stats |
|---|---|---|
| title | string | lengths 1–290 |
| body | string | lengths 0–228k |
| html_url | string | lengths 46–51 |
| comments | list | — |
| pull_request | dict | — |
| number | int64 | 1–5.59k |
| is_pull_request | bool | 2 classes |
Add Urdu fake news
Added Urdu fake news dataset. More information about the dataset can be found [here](https://github.com/MaazAmjad/Datasets-for-Urdu-news).
https://github.com/huggingface/datasets/pull/1106
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1106", "html_url": "https://github.com/huggingface/datasets/pull/1106", "diff_url": "https://github.com/huggingface/datasets/pull/1106.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1106.patch", "merged_at": null }
1,106
true
add xquad_r dataset
https://github.com/huggingface/datasets/pull/1105
[ "looks like this PR includes changes in many files than the ones for xquad_r, could you create a new branch and a new PR ?", "Sure, I will close this then.\r\n" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1105", "html_url": "https://github.com/huggingface/datasets/pull/1105", "diff_url": "https://github.com/huggingface/datasets/pull/1105.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1105.patch", "merged_at": null }
1,105
true
add TLC
Added TLC dataset
https://github.com/huggingface/datasets/pull/1104
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1104", "html_url": "https://github.com/huggingface/datasets/pull/1104", "diff_url": "https://github.com/huggingface/datasets/pull/1104.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1104.patch", "merged_at": "2020-12-04T14:29:23" }
1,104
true
Add support to download kaggle datasets
We can use the API key.
https://github.com/huggingface/datasets/issues/1103
[ "Hey, I think this is great idea. Any plan to integrate kaggle private datasets loading to `datasets`?" ]
null
1,103
false
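A minimal sketch of what such support could look like, using the official `kaggle` package and its API-key flow. The dataset slug below is a hypothetical placeholder, and this is only an illustration of the idea, not the proposed download-manager integration:

```python
import kaggle

# authenticate() reads the API key from ~/.kaggle/kaggle.json
kaggle.api.authenticate()

# "owner/dataset-name" is a made-up slug used for illustration
kaggle.api.dataset_download_files("owner/dataset-name", path=".", unzip=True)
```

In principle the same call would cover private datasets, since the key carries the user's permissions.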
Add retries to download manager
https://github.com/huggingface/datasets/issues/1102
[]
null
1,102
false
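A rough sketch of the kind of retry logic the request asks for, written here as a plain `requests` wrapper with exponential backoff. The function name and defaults are made up for illustration; the actual download-manager change may look quite different:

```python
import time
import requests

def get_with_retries(url, max_retries=3, backoff=2.0):
    """Fetch a URL, retrying transient failures with exponential backoff."""
    for attempt in range(max_retries):
        try:
            response = requests.get(url, timeout=10)
            response.raise_for_status()
            return response
        except requests.RequestException:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the original error
            time.sleep(backoff ** attempt)
```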
Add Wikicorpus dataset
Add dataset.
https://github.com/huggingface/datasets/pull/1101
[ "@lhoestq done! ;)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1101", "html_url": "https://github.com/huggingface/datasets/pull/1101", "diff_url": "https://github.com/huggingface/datasets/pull/1101.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1101.patch", "merged_at": "2020-12-09T18:13:09" }
1,101
true
Urdu fake news
Added the Bend the Truth Urdu fake news dataset. More information [here](https://github.com/MaazAmjad/Datasets-for-Urdu-news).
https://github.com/huggingface/datasets/pull/1100
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1100", "html_url": "https://github.com/huggingface/datasets/pull/1100", "diff_url": "https://github.com/huggingface/datasets/pull/1100.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1100.patch", "merged_at": null }
1,100
true
Add tamilmixsentiment data
https://github.com/huggingface/datasets/pull/1099
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1099", "html_url": "https://github.com/huggingface/datasets/pull/1099", "diff_url": "https://github.com/huggingface/datasets/pull/1099.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1099.patch", "merged_at": "2020-12-05T16:48:33" }
1,099
true
Add ToTTo Dataset
Adds a brand new table-to-text dataset: https://github.com/google-research-datasets/ToTTo
https://github.com/huggingface/datasets/pull/1098
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1098", "html_url": "https://github.com/huggingface/datasets/pull/1098", "diff_url": "https://github.com/huggingface/datasets/pull/1098.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1098.patch", "merged_at": "2020-12-04T13:38:19" }
1,098
true
Add MSRA NER labels
Fixes #940
https://github.com/huggingface/datasets/pull/1097
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1097", "html_url": "https://github.com/huggingface/datasets/pull/1097", "diff_url": "https://github.com/huggingface/datasets/pull/1097.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1097.patch", "merged_at": "2020-12-04T13:31:58" }
1,097
true
FIX matinf link in ADD_NEW_DATASET.md
https://github.com/huggingface/datasets/pull/1096
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1096", "html_url": "https://github.com/huggingface/datasets/pull/1096", "diff_url": "https://github.com/huggingface/datasets/pull/1096.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1096.patch", "merged_at": "2020-12-04T14:25:35" }
1,096
true
Add TupleInf Open IE Dataset
For more information: https://allenai.org/data/tuple-ie
https://github.com/huggingface/datasets/pull/1095
[ "Errors are in the CI are not related to this PR (RemoteDatasetError)\r\nthe CI is fixed on master so it's fine ", "@lhoestq Added the dataset card. Please let me know if more information needs to be added." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1095", "html_url": "https://github.com/huggingface/datasets/pull/1095", "diff_url": "https://github.com/huggingface/datasets/pull/1095.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1095.patch", "merged_at": "2020-12-04T15:40:54" }
1,095
true
add urdu fake news dataset
Added Urdu fake news dataset. The dataset can be found [here](https://github.com/MaazAmjad/Datasets-for-Urdu-news).
https://github.com/huggingface/datasets/pull/1094
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1094", "html_url": "https://github.com/huggingface/datasets/pull/1094", "diff_url": "https://github.com/huggingface/datasets/pull/1094.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1094.patch", "merged_at": null }
1,094
true
Add NCBI Disease Corpus dataset
https://github.com/huggingface/datasets/pull/1093
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1093", "html_url": "https://github.com/huggingface/datasets/pull/1093", "diff_url": "https://github.com/huggingface/datasets/pull/1093.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1093.patch", "merged_at": "2020-12-04T11:15:12" }
1,093
true
Add Coached Conversation Preference Dataset
Adding [Coached Conversation Preference Dataset](https://research.google/tools/datasets/coached-conversational-preference-elicitation/)
https://github.com/huggingface/datasets/pull/1092
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1092", "html_url": "https://github.com/huggingface/datasets/pull/1092", "diff_url": "https://github.com/huggingface/datasets/pull/1092.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1092.patch", "merged_at": "2020-12-04T13:49:50" }
1,092
true
Add Google wellformed query dataset
This pull request adds the Google wellformed_query dataset. Dataset link: https://github.com/google-research-datasets/query-wellformedness
https://github.com/huggingface/datasets/pull/1091
[ "hope this works.." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1091", "html_url": "https://github.com/huggingface/datasets/pull/1091", "diff_url": "https://github.com/huggingface/datasets/pull/1091.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1091.patch", "merged_at": "2020-12-06T17:43:02" }
1,091
true
add thaisum
ThaiSum, a large-scale corpus for Thai text summarization obtained from several online news websites, namely Thairath, ThaiPBS, Prachathai, and The Standard. This dataset consists of over 350,000 article and summary pairs written by journalists. We evaluate the performance of various existing summarization models on the ThaiSum dataset and analyse the characteristics of the dataset to present its difficulties.
https://github.com/huggingface/datasets/pull/1090
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1090", "html_url": "https://github.com/huggingface/datasets/pull/1090", "diff_url": "https://github.com/huggingface/datasets/pull/1090.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1090.patch", "merged_at": "2020-12-04T11:16:06" }
1,090
true
add sharc_modified
Adding modified ShARC dataset https://github.com/nikhilweee/neural-conv-qa
https://github.com/huggingface/datasets/pull/1089
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1089", "html_url": "https://github.com/huggingface/datasets/pull/1089", "diff_url": "https://github.com/huggingface/datasets/pull/1089.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1089.patch", "merged_at": "2020-12-04T10:31:44" }
1,089
true
add xquad_r dataset
https://github.com/huggingface/datasets/pull/1088
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1088", "html_url": "https://github.com/huggingface/datasets/pull/1088", "diff_url": "https://github.com/huggingface/datasets/pull/1088.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1088.patch", "merged_at": null }
1,088
true
Add Big Patent dataset
* More info on the dataset: https://evasharma.github.io/bigpatent/
* There's another raw version of the dataset available from tfds. However, they're quite large so I don't have the resources to fully test all the configs for that version yet. We'll try to add it later.
https://github.com/huggingface/datasets/pull/1087
[ "@lhoestq reduced the dummy data size to around 19MB in total and added the dataset card.", "@lhoestq so I ended up removing all the nested JSON objects in the gz datafile and keep only one object with minimal content: `{\"publication_number\": \"US-8230922-B2\", \"abstract\": \"dummy abstract\", \"application_number\": \"US-201113163519-A\", \"description\": \"dummy description\"}`. \r\n\r\nThey're reduced to 35KB in total (2.5KB per domain and 17.5KB for all domains), hopefully, they're small enough." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1087", "html_url": "https://github.com/huggingface/datasets/pull/1087", "diff_url": "https://github.com/huggingface/datasets/pull/1087.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1087.patch", "merged_at": "2020-12-06T17:20:59" }
1,087
true
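Based on the comment above, a sketch of how such a minimal gzipped dummy file could be produced. Only the record itself comes from the comment; the file name is arbitrary:

```python
import gzip
import json

# the single minimal record quoted in the comment above
record = {
    "publication_number": "US-8230922-B2",
    "abstract": "dummy abstract",
    "application_number": "US-201113163519-A",
    "description": "dummy description",
}

with gzip.open("dummy_data.gz", "wt", encoding="utf-8") as f:
    f.write(json.dumps(record))
```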
adding cdt dataset
- **Name:** *Cyberbullying Detection Task*
- **Description:** *The Cyberbullying Detection task was part of the 2019 edition of the PolEval competition. The goal is to predict if a given Twitter message contains cyberbullying (harmful) content.*
- **Data:** *https://github.com/ptaszynski/cyberbullying-Polish*
- **Motivation:** *The KLEJ benchmark (Kompleksowa Lista Ewaluacji Językowych) is a set of nine evaluation tasks for Polish language understanding.*
https://github.com/huggingface/datasets/pull/1086
[ "> Thanks for adding this one !\r\n> \r\n> I left a few comments\r\n> \r\n> after the change you'll need to regenerate the dataset_infos.json file as well\r\n\r\ndataset_infos.json regenerated", "looks like this PR includes changes to many files other that the ones for CDT\r\ncould you create another branch and another PR please ?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1086", "html_url": "https://github.com/huggingface/datasets/pull/1086", "diff_url": "https://github.com/huggingface/datasets/pull/1086.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1086.patch", "merged_at": null }
1,086
true
add mutual friends conversational dataset
Mutual friends dataset (WIP). TODO:
- scenario_kbs (bug with pyarrow conversion)
- download from codalab (checksums bug)
https://github.com/huggingface/datasets/pull/1085
[ "Ready for review" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1085", "html_url": "https://github.com/huggingface/datasets/pull/1085", "diff_url": "https://github.com/huggingface/datasets/pull/1085.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1085.patch", "merged_at": "2020-12-16T15:58:30" }
1,085
true
adding cdsc dataset
- **Name**: *cdsc (domains: cdsc-e & cdsc-r)*
- **Description**: *Polish CDSCorpus consists of 10K Polish sentence pairs which are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish. The dataset was presented at ACL 2017. Please refer to Wróblewska and Krasnowska-Kieraś (2017) for a detailed description of the resource.*
- **Data**: *http://2019.poleval.pl/index.php/tasks/*
- **Motivation**: *The KLEJ benchmark (Kompleksowa Lista Ewaluacji Językowych) is a set of nine evaluation tasks for Polish language understanding.*
https://github.com/huggingface/datasets/pull/1084
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1084", "html_url": "https://github.com/huggingface/datasets/pull/1084", "diff_url": "https://github.com/huggingface/datasets/pull/1084.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1084.patch", "merged_at": "2020-12-04T10:41:26" }
1,084
true
Add the multilingual Exams dataset
https://github.com/mhardalov/exams-qa
`multilingual` configs have all languages mixed together; `crosslingual` mixes the languages for the test set but separates them for train and dev, so I've made one config per language for the train/dev data and one config with the joint test set.
https://github.com/huggingface/datasets/pull/1083
[ "Will slim down the dummy files in the morning" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1083", "html_url": "https://github.com/huggingface/datasets/pull/1083", "diff_url": "https://github.com/huggingface/datasets/pull/1083.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1083.patch", "merged_at": "2020-12-04T17:12:00" }
1,083
true
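If the configs follow the naming in the PR description, usage would presumably look like the following. The config names here are assumptions, not verified against the merged script:

```python
from datasets import load_dataset

# all languages mixed together
multilingual = load_dataset("exams", "multilingual")

# hypothetical per-language crosslingual config for Bulgarian train/dev data
crosslingual_bg = load_dataset("exams", "crosslingual_bg")
```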
Myanmar news dataset
Add a news topic classification dataset in the Myanmar / Burmese language. This data was collected in 2017 by Aye Hninn Khine and published on GitHub with a GPL license: https://github.com/ayehninnkhine/MyanmarNewsClassificationSystem
https://github.com/huggingface/datasets/pull/1082
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1082", "html_url": "https://github.com/huggingface/datasets/pull/1082", "diff_url": "https://github.com/huggingface/datasets/pull/1082.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1082.patch", "merged_at": "2020-12-04T10:13:38" }
1,082
true
Add Knowledge-Enhanced Language Model Pre-training (KELM)
Adds the KELM dataset.
- Webpage/repo: https://github.com/google-research-datasets/KELM-corpus
- Paper: https://arxiv.org/pdf/2010.12688.pdf
https://github.com/huggingface/datasets/pull/1081
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1081", "html_url": "https://github.com/huggingface/datasets/pull/1081", "diff_url": "https://github.com/huggingface/datasets/pull/1081.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1081.patch", "merged_at": "2020-12-04T16:36:28" }
1,081
true
Add WikiANN NER dataset
This PR adds the full set of 176 languages from the balanced train/dev/test splits of WikiANN / PAN-X from: https://github.com/afshinrahimi/mmner
Until now, only 40 of these languages were available in `datasets` as part of the XTREME benchmark.
Courtesy of the dataset author, we can now download this dataset from a Dropbox URL without needing a manual download anymore 🥳, so at some point it would be worth updating the PAN-X subset of XTREME as well 😄
Link to a gist with some snippets for producing dummy data: https://gist.github.com/lewtun/5b93294ab6dbcf59d1493dbe2cfd6bb9
P.S. @yjernite I think I was confused about needing to generate a set of YAML tags per config, so I ended up just adding a single one in the README.
https://github.com/huggingface/datasets/pull/1080
[ "Dataset card added, so ready for review!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1080", "html_url": "https://github.com/huggingface/datasets/pull/1080", "diff_url": "https://github.com/huggingface/datasets/pull/1080.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1080.patch", "merged_at": "2020-12-06T17:18:55" }
1,080
true
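With one config per language, loading a single language would presumably look like this. "en" is just an example code; any of the 176 language configs should work the same way:

```python
from datasets import load_dataset

# load the balanced English train/dev/test splits of WikiANN / PAN-X
wikiann_en = load_dataset("wikiann", "en")
print(wikiann_en["train"][0])
```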
nkjp-ner
- **Name:** *nkjp-ner*
- **Description:** *The NKJP-NER is based on a human-annotated part of NKJP. We extracted sentences with named entities of exactly one type. The task is to predict the type of the named entity.*
- **Data:** *https://klejbenchmark.com/tasks/*
- **Motivation:** *The KLEJ benchmark (Kompleksowa Lista Ewaluacji Językowych) is a set of nine evaluation tasks for Polish language understanding.*
https://github.com/huggingface/datasets/pull/1079
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1079", "html_url": "https://github.com/huggingface/datasets/pull/1079", "diff_url": "https://github.com/huggingface/datasets/pull/1079.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1079.patch", "merged_at": "2020-12-04T09:42:06" }
1,079
true
add AJGT dataset
Arabic Jordanian General Tweets.
https://github.com/huggingface/datasets/pull/1078
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1078", "html_url": "https://github.com/huggingface/datasets/pull/1078", "diff_url": "https://github.com/huggingface/datasets/pull/1078.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1078.patch", "merged_at": "2020-12-04T09:55:15" }
1,078
true
Added glucose dataset
This PR adds the [Glucose](https://github.com/ElementalCognition/glucose) dataset.
https://github.com/huggingface/datasets/pull/1077
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1077", "html_url": "https://github.com/huggingface/datasets/pull/1077", "diff_url": "https://github.com/huggingface/datasets/pull/1077.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1077.patch", "merged_at": "2020-12-04T09:55:52" }
1,077
true
quac quac / coin coin
Add QuAC (Question Answering in Context). I linearized most of the dictionaries to lists. Referenced the authors' datasheet for the dataset card. 🦆🦆🦆 Coin coin
https://github.com/huggingface/datasets/pull/1076
[ "pan" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1076", "html_url": "https://github.com/huggingface/datasets/pull/1076", "diff_url": "https://github.com/huggingface/datasets/pull/1076.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1076.patch", "merged_at": "2020-12-04T09:15:20" }
1,076
true
adding cleaned version of E2E NLG
Found at: https://github.com/tuetschek/e2e-cleaning
https://github.com/huggingface/datasets/pull/1075
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1075", "html_url": "https://github.com/huggingface/datasets/pull/1075", "diff_url": "https://github.com/huggingface/datasets/pull/1075.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1075.patch", "merged_at": "2020-12-03T19:43:56" }
1,075
true
Swedish MT STS-B
Added a Swedish machine-translated version of the well-known STS-B corpus
https://github.com/huggingface/datasets/pull/1074
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1074", "html_url": "https://github.com/huggingface/datasets/pull/1074", "diff_url": "https://github.com/huggingface/datasets/pull/1074.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1074.patch", "merged_at": "2020-12-03T20:44:28" }
1,074
true
Add DialogRE dataset
Adding the [DialogRE](https://github.com/nlpdata/dialogre) dataset, Version 2. All tests passed successfully.
https://github.com/huggingface/datasets/pull/1073
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1073", "html_url": "https://github.com/huggingface/datasets/pull/1073", "diff_url": "https://github.com/huggingface/datasets/pull/1073.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1073.patch", "merged_at": "2020-12-04T13:41:51" }
1,073
true
actually uses the previously declared VERSION on the configs in the template
https://github.com/huggingface/datasets/pull/1072
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1072", "html_url": "https://github.com/huggingface/datasets/pull/1072", "diff_url": "https://github.com/huggingface/datasets/pull/1072.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1072.patch", "merged_at": "2020-12-03T19:35:46" }
1,072
true
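For context, the template pattern the fix targets looks roughly like this. This is a schematic sketch, not the actual template file: the class-level `VERSION` is declared once and the configs now reference it instead of hard-coding a version:

```python
import datasets

class MyDataset(datasets.GeneratorBasedBuilder):  # hypothetical builder name
    VERSION = datasets.Version("1.1.0")

    # after the fix, configs reuse the declared VERSION
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(
            name="default",
            version=VERSION,
            description="Default configuration",
        ),
    ]
```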
add xlrd to test package requirements
Adds the `xlrd` package to the test requirements to handle scripts that use `pandas` to load Excel files
https://github.com/huggingface/datasets/pull/1071
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1071", "html_url": "https://github.com/huggingface/datasets/pull/1071", "diff_url": "https://github.com/huggingface/datasets/pull/1071.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1071.patch", "merged_at": "2020-12-03T18:47:15" }
1,071
true
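The failure mode this guards against is simple: without `xlrd` installed, `pandas.read_excel` raises an `ImportError` at test time. A minimal illustration, with a hypothetical file name:

```python
import pandas as pd

# at the time of this PR, pandas delegated Excel parsing to xlrd,
# so this line fails with an ImportError if xlrd is missing
df = pd.read_excel("some_dataset.xls")  # hypothetical file name
```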
add conv_ai
Adding ConvAI dataset https://github.com/DeepPavlov/convai/tree/master/2017
https://github.com/huggingface/datasets/pull/1070
[ "This one will make @thomwolf reminisce ;)", "Merging." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1070", "html_url": "https://github.com/huggingface/datasets/pull/1070", "diff_url": "https://github.com/huggingface/datasets/pull/1070.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1070.patch", "merged_at": "2020-12-04T06:44:34" }
1,070
true
Test
https://github.com/huggingface/datasets/pull/1069
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1069", "html_url": "https://github.com/huggingface/datasets/pull/1069", "diff_url": "https://github.com/huggingface/datasets/pull/1069.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1069.patch", "merged_at": null }
1,069
true
Add Pubmed (citation + abstract) dataset (2020).
null
https://github.com/huggingface/datasets/pull/1068
[ "LGTM! ftp addition looks fine but maybe have a look @thomwolf ?", "It's not finished yet, I need to run the tests on the full dataset (it was running this weekend, there is an error somewhere deep)\r\n", "@yjernite Ready for review !\r\n@thomwolf \r\n\r\nSo I tried to follow closely the original format that means I still had to drop information (namely tags on elements are simply discarded for now but they don't seem to carry critical information).\r\nSome elements are also discarded they tend to not come up often:\r\n - The most notable is Author affiliation, which seems to be all over the place in terms of what it look meaning it's hard to actually get a consistent format.\r\n - Journal is the same, all the elements in there can be wildly different so I decided to drop it for now instead of trying to figure out a way to have a common presentation. (the DOI and medline ID are present so it can be recovered).\r\n\r\nI think this PR could go as it but we probably should add a way to get easier information to use with a config.\r\nFor instance `{\"title\": \"string\", \"abstract\": \"string\", \"authors\": List[str], \"substances\": List[str]}` maybe ? (substances for instance is a tricky one, some substances only have an international identifier, some have simply a common name, some both)\r\n\r\nIt's relatively easy to do I think it's mostly discarding other fields and renaming some deep structure into a flat one.", "Look ok to me but @lhoestq is the master on the Download Manager side" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1068", "html_url": "https://github.com/huggingface/datasets/pull/1068", "diff_url": "https://github.com/huggingface/datasets/pull/1068.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1068.patch", "merged_at": "2020-12-23T09:52:07" }
1,068
true
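The flat config floated in the discussion above could be expressed as `datasets` features roughly like this. This is a sketch of the proposal only; it was not part of the merged PR:

```python
import datasets

# assumed Features for the flat config suggested in the comments
features = datasets.Features(
    {
        "title": datasets.Value("string"),
        "abstract": datasets.Value("string"),
        "authors": datasets.Sequence(datasets.Value("string")),
        "substances": datasets.Sequence(datasets.Value("string")),
    }
)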
add xquad-r dataset
https://github.com/huggingface/datasets/pull/1067
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1067", "html_url": "https://github.com/huggingface/datasets/pull/1067", "diff_url": "https://github.com/huggingface/datasets/pull/1067.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1067.patch", "merged_at": null }
1,067
true
Add ChrEn
Adding the Cherokee-English machine translation dataset from https://github.com/ZhangShiyue/ChrEn
https://github.com/huggingface/datasets/pull/1066
[ "I just saw your PR actually ^^", "> I just saw your PR actually ^^\r\n\r\nSomehow that still doesn't work, lmk if you have any ideas.", "Did you rebase from master ?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1066", "html_url": "https://github.com/huggingface/datasets/pull/1066", "diff_url": "https://github.com/huggingface/datasets/pull/1066.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1066.patch", "merged_at": "2020-12-03T21:49:39" }
1,066
true
add xquad-r dataset
https://github.com/huggingface/datasets/pull/1065
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1065", "html_url": "https://github.com/huggingface/datasets/pull/1065", "diff_url": "https://github.com/huggingface/datasets/pull/1065.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1065.patch", "merged_at": null }
1,065
true
Links with a 302 redirect are not supported
I have an issue adding this download link: https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz
It might be because it is not a direct link (it returns 302 and redirects to AWS, which returns 403 for HEAD requests).
```
r.head("https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz", allow_redirects=True)
# <Response [403]>
```
https://github.com/huggingface/datasets/issues/1064
[ "Hi !\r\nThis kind of links is now supported by the library since #1316", "> Hi !\r\n> This kind of links is now supported by the library since #1316\r\n\r\nI updated links in TLC datasets to be the github links in this pull request \r\n https://github.com/huggingface/datasets/pull/1737\r\n\r\nEverything works now. Thank you." ]
null
1,064
false
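The failure is specific to HEAD: the redirected S3 URL rejects HEAD with 403 but serves GET normally, so probing with a streamed GET is one possible workaround. This is a sketch of the idea, not necessarily what #1316 implemented:

```python
import requests

url = "https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz"

# HEAD on the redirect target returns 403...
print(requests.head(url, allow_redirects=True).status_code)  # 403

# ...but a streamed GET follows the 302 and succeeds
response = requests.get(url, allow_redirects=True, stream=True)
print(response.status_code)  # 200
```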
Add the UD Treebank
This PR adds the 183 datasets in 104 languages of the UD Treebank.
https://github.com/huggingface/datasets/pull/1063
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1063", "html_url": "https://github.com/huggingface/datasets/pull/1063", "diff_url": "https://github.com/huggingface/datasets/pull/1063.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1063.patch", "merged_at": "2020-12-04T15:51:45" }
1,063
true
Add KorNLU dataset
Added Korean NLU datasets. The link to the dataset can be found [here](https://github.com/kakaobrain/KorNLUDatasets) and the paper can be found [here](https://arxiv.org/abs/2004.03289).
**Note**: The MNLI tsv file is broken, so this code currently excludes the file. Please suggest an alternative if any, @lhoestq
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data
https://github.com/huggingface/datasets/pull/1062
[ "Nice thank you !\r\nCan you regenerate the dataset_infos.json file ? Since we changed the features we must update it\r\n\r\nThen I think we'll be good to merge :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1062", "html_url": "https://github.com/huggingface/datasets/pull/1062", "diff_url": "https://github.com/huggingface/datasets/pull/1062.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1062.patch", "merged_at": "2020-12-04T11:05:19" }
1,062
true
add labr dataset
Arabic Book Reviews dataset.
https://github.com/huggingface/datasets/pull/1061
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1061", "html_url": "https://github.com/huggingface/datasets/pull/1061", "diff_url": "https://github.com/huggingface/datasets/pull/1061.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1061.patch", "merged_at": "2020-12-03T18:25:44" }
1,061
true
Fix squad V2 metric script
The current squad v2 metric doesn't work with the squad (v1 or v2) datasets. The script is copied from `squad_evaluate` in transformers, which requires the labels (with multiple answers) to be like this:
```
references = [{'id': 'a', 'answers': [ {'text': 'Denver Broncos', 'answer_start': 177}, {'text': 'Denver Broncos', 'answer_start': 177} ]}]
```
while the dataset had references like this:
```
references = [{'id': 'a', 'answers': {'text': ['Denver Broncos', 'Denver Broncos'], 'answer_start': [177, 177]} }]
```
Using one or the other format fails with the current squad v2 metric:
```
from datasets import load_metric
metric = load_metric("squad_v2")
predictions = [{'id': 'a', 'prediction_text': 'Denver Broncos', 'no_answer_probability': 0.0}]
references = [{'id': 'a', 'answers': [ {'text': 'Denver Broncos', 'answer_start': 177}, {'text': 'Denver Broncos', 'answer_start': 177} ]}]
metric.compute(predictions=predictions, references=references)
```
fails, as does
```
from datasets import load_metric
metric = load_metric("squad_v2")
predictions = [{'id': 'a', 'prediction_text': 'Denver Broncos', 'no_answer_probability': 0.0}]
references = [{'id': 'a', 'answers': {'text': ['Denver Broncos', 'Denver Broncos'], 'answer_start': [177, 177]} }]
metric.compute(predictions=predictions, references=references)
```
This is because arrow reformats the references behind the scenes. With this PR (tested locally), both of the snippets above work and return proper results.
https://github.com/huggingface/datasets/pull/1060
[ "The script with changes is used and tested in [#8924](https://github.com/huggingface/transformers/pull/8924). It gives the same results as the old `evaluate_squad` function when used on the same predictions.", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1060", "html_url": "https://github.com/huggingface/datasets/pull/1060", "diff_url": "https://github.com/huggingface/datasets/pull/1060.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1060.patch", "merged_at": "2020-12-22T15:02:19" }
1,060
true
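For reference, the gap between the two layouts can be bridged mechanically. A minimal sketch of such a conversion follows; this helper is illustrative, not the code merged in the PR:

```python
def to_squad_evaluate_format(ref):
    """Convert a dataset-style reference (dict of lists) into the
    list-of-dicts layout that `squad_evaluate` expects."""
    answers = ref["answers"]
    return {
        "id": ref["id"],
        "answers": [
            {"text": text, "answer_start": start}
            for text, start in zip(answers["text"], answers["answer_start"])
        ],
    }

ref = {"id": "a", "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}
print(to_squad_evaluate_format(ref))
# {'id': 'a', 'answers': [{'text': 'Denver Broncos', 'answer_start': 177}]}
```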
Add TLC
Added TLC dataset
https://github.com/huggingface/datasets/pull/1059
[ "I have reduced the size of the dummy file and added README sections as you suggested. ", "I have a little issue to run the test. It seems there is no failed case in my machine. ", "Thanks !\r\nIt looks like the PR includes changes to many other files than the ones of `tlc`, can you create another branch and another PR ?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1059", "html_url": "https://github.com/huggingface/datasets/pull/1059", "diff_url": "https://github.com/huggingface/datasets/pull/1059.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1059.patch", "merged_at": null }
1,059
true
added paws-x dataset
Added paws-x dataset. Updating README and tags in the dataset card in a while
https://github.com/huggingface/datasets/pull/1058
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1058", "html_url": "https://github.com/huggingface/datasets/pull/1058", "diff_url": "https://github.com/huggingface/datasets/pull/1058.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1058.patch", "merged_at": "2020-12-04T13:46:05" }
1,058
true
Adding TamilMixSentiment
https://github.com/huggingface/datasets/pull/1057
[ "looks like this pr incldues changes about many other files than the ones for tamilMixSentiment, could you create another branch and another PR ?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1057", "html_url": "https://github.com/huggingface/datasets/pull/1057", "diff_url": "https://github.com/huggingface/datasets/pull/1057.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1057.patch", "merged_at": null }
1,057
true
Add deal_or_no_dialog
Add deal_or_no_dialog dataset.
GitHub: https://github.com/facebookresearch/end-to-end-negotiator
Paper: [Deal or No Deal? End-to-End Learning for Negotiation Dialogues](https://arxiv.org/abs/1706.05125)
https://github.com/huggingface/datasets/pull/1056
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1056", "html_url": "https://github.com/huggingface/datasets/pull/1056", "diff_url": "https://github.com/huggingface/datasets/pull/1056.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1056.patch", "merged_at": "2020-12-03T18:13:45" }
1,056
true
Add hebrew-sentiment
hebrew-sentiment dataset is ready! (including tests, tags etc)
https://github.com/huggingface/datasets/pull/1055
[ "@elronbandel it looks like something went wrong with the renaming, as the old files are still in the PR. Can you `git rm datasets/hebrew-sentiment` ?", "merging since the CI is fixed on master", "This is the old version of the data.\r\nHere is the fixed version.\r\nhttps://github.com/OnlpLab/Hebrew-Sentiment-Data\r\n\r\nI hope I would find time to open a PR. I think it supposed to be only to change the data path ", "Cool ! Sure feel free to open a PR if you have some time :) and feel free to ping me for review or if you have questions" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1055", "html_url": "https://github.com/huggingface/datasets/pull/1055", "diff_url": "https://github.com/huggingface/datasets/pull/1055.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1055.patch", "merged_at": "2020-12-04T11:24:16" }
1,055
true
Add dataset - SemEval 2014 - Task 1
Adding the dataset of SemEval 2014 Task 1. Found the dataset under the shared Google Sheet > Recurring Task Datasets.
Task homepage: https://alt.qcri.org/semeval2014/task1
Thank you!
https://github.com/huggingface/datasets/pull/1054
[ "Added the dataset card.\r\nRequesting another review." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1054", "html_url": "https://github.com/huggingface/datasets/pull/1054", "diff_url": "https://github.com/huggingface/datasets/pull/1054.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1054.patch", "merged_at": "2020-12-04T00:52:43" }
1,054
true
Fix dataset URL and file names, and add column name in "Social Bias Frames" dataset
# Why I did
When I used the "social_bias_frames" dataset in this library, I got 404 errors. So, I fixed this error and some other problems that I faced using the dataset.
# What I did
* Modify the dataset URL
* Modify the dataset file names
* Add a "dataSource" column

Thank you!
https://github.com/huggingface/datasets/pull/1053
[ "Thanks a lot, looks good!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1053", "html_url": "https://github.com/huggingface/datasets/pull/1053", "diff_url": "https://github.com/huggingface/datasets/pull/1053.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1053.patch", "merged_at": "2020-12-03T13:42:26" }
1,053
true
add sharc dataset
This PR adds the ShARC dataset. More info: https://sharc-data.github.io/index.html
https://github.com/huggingface/datasets/pull/1052
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1052", "html_url": "https://github.com/huggingface/datasets/pull/1052", "diff_url": "https://github.com/huggingface/datasets/pull/1052.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1052.patch", "merged_at": "2020-12-03T14:09:54" }
1,052
true
Add Facebook SimpleQuestionV2
Add simple questions v2: https://research.fb.com/downloads/babi/
https://github.com/huggingface/datasets/pull/1051
[ "I think @thomwolf may also be working on this one as part of the Babi benchmark in #945 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1051", "html_url": "https://github.com/huggingface/datasets/pull/1051", "diff_url": "https://github.com/huggingface/datasets/pull/1051.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1051.patch", "merged_at": "2020-12-03T17:31:58" }
1,051
true
Add GoEmotions
Adds the GoEmotions dataset, a nice emotion classification dataset with 27 (multi-)label annotations on Reddit comments. Includes both a large raw version and a narrowed version with predefined train/test/val splits, which I've included as separate configs, with the latter as the default.
- Webpage/repo: https://github.com/google-research/google-research/tree/master/goemotions
- Paper: https://arxiv.org/abs/2005.00547
https://github.com/huggingface/datasets/pull/1050
[ "Whoops, didn't mean for that to be merged yet (my bad). I'm reaching out to the authors since we'd like their feedback on the best way to have the `author` field anonymized or removed. Will send a patch once they get back to me." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1050", "html_url": "https://github.com/huggingface/datasets/pull/1050", "diff_url": "https://github.com/huggingface/datasets/pull/1050.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1050.patch", "merged_at": "2020-12-03T17:30:08" }
1,050
true
Add siswati ner corpus
https://github.com/huggingface/datasets/pull/1049
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1049", "html_url": "https://github.com/huggingface/datasets/pull/1049", "diff_url": "https://github.com/huggingface/datasets/pull/1049.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1049.patch", "merged_at": null }
1,049
true
Adding NCHLT dataset
https://repo.sadilar.org/handle/20.500.12185/7/discover?filtertype_0=database&filtertype_1=title&filter_relational_operator_1=contains&filter_relational_operator_0=equals&filter_1=&filter_0=Monolingual+Text+Corpora%3A+Annotated&filtertype=project&filter_relational_operator=equals&filter=NCHLT+Text+II
https://github.com/huggingface/datasets/pull/1048
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1048", "html_url": "https://github.com/huggingface/datasets/pull/1048", "diff_url": "https://github.com/huggingface/datasets/pull/1048.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1048.patch", "merged_at": "2020-12-04T13:29:56" }
1,048
true
Add KorNLU
Added Korean NLU datasets. The link to the dataset can be found [here](https://github.com/kakaobrain/KorNLUDatasets) and the paper can be found [here](https://arxiv.org/abs/2004.03289).
**Note**: The MNLI tsv file is broken, so this code currently excludes the file. Please suggest an alternative if any, @lhoestq
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data
https://github.com/huggingface/datasets/pull/1047
[ "the CI error about `social_bias_frames` is fixed on master so it's fine", "created new [PR](https://github.com/huggingface/datasets/pull/1062)", "looks like this PR includes many changes to other files that the ones related to KorNLU\r\nCould you create another branch and another PR please ?", "Wow crazy timing", "hahahaha" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1047", "html_url": "https://github.com/huggingface/datasets/pull/1047", "diff_url": "https://github.com/huggingface/datasets/pull/1047.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1047.patch", "merged_at": null }
1,047
true
Dataset.map() turns tensors into lists?
I apply `Dataset.map()` to a function that returns a dict of torch tensors (like a tokenizer from the transformers repo). However, in the mapped dataset, these tensors have turned into lists!
```
import datasets
import torch
from datasets import load_dataset

print("version datasets", datasets.__version__)

dataset = load_dataset("snli", split='train[0:50]')

def tokenizer_fn(example):
    # actually uses a tokenizer which does something like:
    return {'input_ids': torch.tensor([[0, 1, 2]])}

print("First item in dataset:\n", dataset[0])
tokenized = tokenizer_fn(dataset[0])
print("Tokenized hyp:\n", tokenized)

dataset_tok = dataset.map(tokenizer_fn, batched=False, remove_columns=['label', 'premise', 'hypothesis'])
print("Tokenized using map:\n", dataset_tok[0])
print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids']))
```
The output is:
```
version datasets 1.1.3
Reusing dataset snli (/home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c)
First item in dataset:
 {'premise': 'A person on a horse jumps over a broken down airplane.', 'hypothesis': 'A person is training his horse for a competition.', 'label': 1}
Tokenized hyp:
 {'input_ids': tensor([[0, 1, 2]])}
Loading cached processed dataset at /home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c/cache-fe38f449fe9ac46f.arrow
Tokenized using map:
 {'input_ids': [[0, 1, 2]]}
<class 'torch.Tensor'> <class 'list'>
```
Or am I doing something wrong?
https://github.com/huggingface/datasets/issues/1046
[ "A solution is to have the tokenizer return a list instead of a tensor, and then use `dataset_tok.set_format(type = 'torch')` to convert that list into a tensor. Still not sure if bug.", "It is expected behavior, you should set the format to `\"torch\"` as you mentioned to get pytorch tensors back.\r\nBy default datasets returns pure python objects." ]
null
1,046
false
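Putting the workaround from the comments together: return plain lists from the mapped function and opt into tensors afterwards with `set_format`. A minimal sketch of the suggested usage:

```python
from datasets import load_dataset

dataset = load_dataset("snli", split="train[0:50]")

def tokenizer_fn(example):
    # return plain Python lists instead of tensors
    return {"input_ids": [[0, 1, 2]]}

dataset_tok = dataset.map(
    tokenizer_fn,
    batched=False,
    remove_columns=["label", "premise", "hypothesis"],
)

# convert to torch tensors at access time
dataset_tok.set_format(type="torch")
print(type(dataset_tok[0]["input_ids"]))  # <class 'torch.Tensor'>
```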
Add xitsonga ner corpus
https://github.com/huggingface/datasets/pull/1045
[ "Look like this PR includes changes to many other files than the ones related to xitsonga NER.\r\nCould you create another branch and another PR please ?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1045", "html_url": "https://github.com/huggingface/datasets/pull/1045", "diff_url": "https://github.com/huggingface/datasets/pull/1045.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1045.patch", "merged_at": null }
1,045
true
Add AMTTL Chinese Word Segmentation Dataset
https://github.com/huggingface/datasets/pull/1044
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1044", "html_url": "https://github.com/huggingface/datasets/pull/1044", "diff_url": "https://github.com/huggingface/datasets/pull/1044.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1044.patch", "merged_at": "2020-12-03T17:13:13" }
1,044
true
Add TSAC: Tunisian Sentiment Analysis Corpus
GitHub: https://github.com/fbougares/TSAC
Paper: https://www.aclweb.org/anthology/W17-1307/
https://github.com/huggingface/datasets/pull/1043
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1043", "html_url": "https://github.com/huggingface/datasets/pull/1043", "diff_url": "https://github.com/huggingface/datasets/pull/1043.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1043.patch", "merged_at": "2020-12-03T13:32:24" }
1,043
true
Add Big Patent dataset
- More info on the dataset: https://evasharma.github.io/bigpatent/
- There's another raw version of the dataset available from tfds. However, they're quite large so I don't have the resources to fully test all the configs for that version yet. We'll try to add it later.
- ~Currently, there are no dummy data for this dataset yet as I'm facing some problems with generating them. I'm trying to add them later.~
https://github.com/huggingface/datasets/pull/1042
[ "Looks like this PR include changes about many other files than the ones related to big patent.\r\nCould you create another branch and another PR ?", "@lhoestq Just created a new PR here: https://github.com/huggingface/datasets/pull/1087" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1042", "html_url": "https://github.com/huggingface/datasets/pull/1042", "diff_url": "https://github.com/huggingface/datasets/pull/1042.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1042.patch", "merged_at": null }
1,042
true
Add SuperGLUE metric
Adds a new metric for the SuperGLUE benchmark (similar to the GLUE benchmark metric).
https://github.com/huggingface/datasets/pull/1041
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1041", "html_url": "https://github.com/huggingface/datasets/pull/1041", "diff_url": "https://github.com/huggingface/datasets/pull/1041.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1041.patch", "merged_at": "2021-02-23T18:02:12" }
1,041
true
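Presumably the new metric is used like the existing GLUE one, with a per-task config. The config name and toy values below are assumptions for illustration:

```python
from datasets import load_metric

# "boolq" is assumed here as an example SuperGLUE task config
metric = load_metric("super_glue", "boolq")
results = metric.compute(predictions=[0, 1, 1], references=[0, 1, 0])
print(results)  # e.g. {'accuracy': 0.666...}
```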
Add UN Universal Declaration of Human Rights (UDHR)
Universal declaration of human rights with translations in 464 languages and dialects.
- UN page: https://www.ohchr.org/EN/UDHR/Pages/UDHRIndex.aspx
- Raw data source: https://unicode.org/udhr/index.html
Each instance of the dataset corresponds to one translation of the document. Since there's only one instance per language (and because there are 500 languages, the dummy data would be messy), I opted to just include them all under the same single config. I wasn't able to find any kind of license, so I just copied the copyright notice. I was pretty careful generating the language tags, so they _should_ all be correct & consistent BCP-47 codes per the docs.
https://github.com/huggingface/datasets/pull/1040
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1040", "html_url": "https://github.com/huggingface/datasets/pull/1040", "diff_url": "https://github.com/huggingface/datasets/pull/1040.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1040.patch", "merged_at": "2020-12-03T19:20:11" }
1,040
true
Update ADD NEW DATASET
This PR adds a couple of details on cloning/rebasing the repo.
https://github.com/huggingface/datasets/pull/1039
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1039", "html_url": "https://github.com/huggingface/datasets/pull/1039", "diff_url": "https://github.com/huggingface/datasets/pull/1039.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1039.patch", "merged_at": "2020-12-03T09:18:09" }
1,039
true
add med_hop
This PR adds the MedHop dataset from the QAngaroo multi-hop reading comprehension datasets. More info: http://qangaroo.cs.ucl.ac.uk/index.html
https://github.com/huggingface/datasets/pull/1038
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1038", "html_url": "https://github.com/huggingface/datasets/pull/1038", "diff_url": "https://github.com/huggingface/datasets/pull/1038.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1038.patch", "merged_at": "2020-12-03T16:52:23" }
1,038
true
Fix docs indentation issues
Replace tabs with spaces.
https://github.com/huggingface/datasets/pull/1037
[ "is this an issue ?", "Yes @lhoestq, look at the docs site. For example, in https://huggingface.co/docs/datasets/add_dataset.html, look at the indentation in the code block under the sentence:\r\n> Here are the features of the SQuAD dataset for instance, which is taken from the squad dataset loading script:" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1037", "html_url": "https://github.com/huggingface/datasets/pull/1037", "diff_url": "https://github.com/huggingface/datasets/pull/1037.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1037.patch", "merged_at": "2020-12-22T16:01:14" }
1,037
true
Add PerSenT
Added [Person's SentimenT](https://stonybrooknlp.github.io/PerSenT/) dataset.
https://github.com/huggingface/datasets/pull/1036
[ "looks like this PR contains changes in many other files than the ones for PerSenT\r\ncan you create another branch and another PR ?", "closing since #1142 was merged" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1036", "html_url": "https://github.com/huggingface/datasets/pull/1036", "diff_url": "https://github.com/huggingface/datasets/pull/1036.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1036.patch", "merged_at": null }
1,036
true
add wiki_hop
This PR adds the WikiHop dataset from the QAngaroo multi-hop reading comprehension datasets. More info: http://qangaroo.cs.ucl.ac.uk/index.html
https://github.com/huggingface/datasets/pull/1035
[ "Also the dummy data files are quite big (500KB)\r\nIf you could reduce that that would be nice (just look at the files inside and remove unecessary chunks of texts)\r\nin general dummy data are just a few KB and we suggest to not get higher than 50KB\r\n\r\nHaving light dummy data makes the repo faster to clone" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1035", "html_url": "https://github.com/huggingface/datasets/pull/1035", "diff_url": "https://github.com/huggingface/datasets/pull/1035.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1035.patch", "merged_at": "2020-12-03T16:41:12" }
1,035
true
add scb_mt_enth_2020
## scb-mt-en-th-2020: A Large English-Thai Parallel Corpus
The primary objective of our work is to build a large-scale English-Thai dataset for machine translation. We construct an English-Thai machine translation dataset with over 1 million segment pairs, curated from various sources, namely news, Wikipedia articles, SMS messages, task-based dialogs, web-crawled data and government documents. The methodology for gathering data, building parallel texts and removing noisy sentence pairs is presented in a reproducible manner. We train machine translation models based on this dataset. Our models' performance is comparable to that of the Google Translation API (as of May 2020) for Thai-English, and outperforms Google when the Open Parallel Corpus (OPUS) is included in the training data, for both Thai-English and English-Thai translation. The dataset, pre-trained models, and source code to reproduce our work are available for public use.
https://github.com/huggingface/datasets/pull/1034
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1034", "html_url": "https://github.com/huggingface/datasets/pull/1034", "diff_url": "https://github.com/huggingface/datasets/pull/1034.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1034.patch", "merged_at": "2020-12-03T16:57:23" }
1,034
true
Add support for ".txm" format
In dummy data generation, add support for the XML-like ".txm" file format. Also support filenames with an additional compression extension: ".txm.gz".
https://github.com/huggingface/datasets/pull/1033
[ "Neat! Looks like you need a rebase and then should be good to go :) ", "Done, @yjernite, @lhoestq.", "If you agree, we could merge this.", "Hi ! yes sure :) can you just merge master into this branch before we merge ?", "Done @lhoestq " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1033", "html_url": "https://github.com/huggingface/datasets/pull/1033", "diff_url": "https://github.com/huggingface/datasets/pull/1033.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1033.patch", "merged_at": "2021-02-21T19:47:11" }
1,033
true
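The extension handling described above boils down to peeling off a possible ".gz" suffix before testing for ".txm". A standalone sketch of that logic, not the actual dummy-data code:

```python
import os

def is_txm_file(filename):
    """Return True for ".txm" files, including compressed ".txm.gz" ones."""
    root, ext = os.path.splitext(filename)
    if ext == ".gz":  # strip the compression extension first
        root, ext = os.path.splitext(root)
    return ext == ".txm"

print(is_txm_file("corpus.txm"))     # True
print(is_txm_file("corpus.txm.gz"))  # True
print(is_txm_file("corpus.xml.gz"))  # False
```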
IIT B English to Hindi machine translation dataset
Adding the IIT Bombay English-Hindi Corpus dataset. More info: http://www.cfilt.iitb.ac.in/iitb_parallel/
https://github.com/huggingface/datasets/pull/1032
[ "Please note that this dataset is actually behind a form that one needs to fill. However, the link is direct. I'm not sure what should the approach be in this case.", "also pinging @thomwolf \r\nThe dataset webpage returns a form when trying to download the dataset (form here : http://www.cfilt.iitb.ac.in/iitb_parallel/dataset.html).\r\nHowever the url we get with the form can be used for the dataset script.\r\nShould we ask the authors or use the urls this way ?", "> also pinging @thomwolf\r\n> The dataset webpage returns a form when trying to download the dataset (form here : http://www.cfilt.iitb.ac.in/iitb_parallel/dataset.html).\r\n> However the url we get with the form can be used for the dataset script.\r\n> Should we ask the authors or use the urls this way ?\r\n\r\nI had discussion on this with @thomwolf . We have already sent email to author of this dataset.", "Hi @spatil6 !\r\nAny news from the authors ?", "IIT B folks will add this dataset to repo." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1032", "html_url": "https://github.com/huggingface/datasets/pull/1032", "diff_url": "https://github.com/huggingface/datasets/pull/1032.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1032.patch", "merged_at": null }
1,032
true
add crows_pairs
This PR adds the CrowS-Pairs dataset. More info: https://github.com/nyu-mll/crows-pairs/ and https://arxiv.org/pdf/2010.00133.pdf
https://github.com/huggingface/datasets/pull/1031
[ "looks good now :) wdyt @yjernite ?", "Looks good to merge for me, can edit the dataset card later if required. Merging" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1031", "html_url": "https://github.com/huggingface/datasets/pull/1031", "diff_url": "https://github.com/huggingface/datasets/pull/1031.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1031.patch", "merged_at": "2020-12-03T18:29:39" }
1,031
true
allegro_reviews dataset
- **Name:** *allegro_reviews*
- **Description:** *Allegro Reviews is a sentiment analysis dataset, consisting of 11,588 product reviews written in Polish and extracted from Allegro.pl - a popular e-commerce marketplace. Each review contains at least 50 words and has a rating on a scale from one (negative review) to five (positive review).*
- **Data:** *https://github.com/allegro/klejbenchmark-allegroreviews*
- **Motivation:** *The KLEJ benchmark (Kompleksowa Lista Ewaluacji Językowych) is a set of nine evaluation tasks for Polish language understanding.*
https://github.com/huggingface/datasets/pull/1030
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1030", "html_url": "https://github.com/huggingface/datasets/pull/1030", "diff_url": "https://github.com/huggingface/datasets/pull/1030.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1030.patch", "merged_at": "2020-12-03T16:34:46" }
1,030
true
Add PEC
A persona-based empathetic conversation dataset.
https://github.com/huggingface/datasets/pull/1029
[ "I'm a bit frustrated now to get this right.", "Hey @zhongpeixiang!\r\nReally nice addition here!\r\n\r\nDid you officially joined the sprint by posting [on the forum thread](https://discuss.huggingface.co/t/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library/2176) and joining our slack?\r\nI can't seem to find you there! Should I add you directly with your gmail address?", "> Hey @zhongpeixiang!\r\n> Really nice addition here!\r\n> \r\n> Did you officially joined the sprint by posting [on the forum thread](https://discuss.huggingface.co/t/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library/2176) and joining our slack?\r\n> I can't seem to find you there! Should I add you directly with your gmail address?\r\n\r\nThank you for the invitation. This initiative is awesome. Sadly I’m occupied by my thesis writing this month. Good luck 🤗", "As you want @zhongpeixiang (I was maybe not clear but that just mean that by posting on the forum thread that you participated in the current event you will get a special gift (a tee-shirt) for the contribution that you have already done here :-) Nothing more to do)", "> As you want @zhongpeixiang (I was maybe not clear but that just mean that by posting on the forum thread that you participated in the current event you will get a special gift (a tee-shirt) for the contribution that you have already done here :-) Nothing more to do)\r\n\r\nOh, I misunderstood the post. I'm glad to join." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1029", "html_url": "https://github.com/huggingface/datasets/pull/1029", "diff_url": "https://github.com/huggingface/datasets/pull/1029.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1029.patch", "merged_at": "2020-12-03T16:15:06" }
1,029
true
Add ASSET dataset for text simplification evaluation
Adding the ASSET dataset from https://github.com/facebookresearch/asset One config for the simplification data, one for the human ratings of quality. The README.md borrows from that written by @juand-r
https://github.com/huggingface/datasets/pull/1028
[ "Nice, thanks @yjernite !!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1028", "html_url": "https://github.com/huggingface/datasets/pull/1028", "diff_url": "https://github.com/huggingface/datasets/pull/1028.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1028.patch", "merged_at": "2020-12-03T16:34:37" }
1,028
true
Hi
## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://github.com/huggingface/datasets/issues/1027
[]
null
1,027
false
Add Sesotho Ner
https://github.com/huggingface/datasets/pull/1025
[ "looks like this PR include changes to other files (sepedi)\r\ncould you try to only include the files related to the addition of sesotho ner ?", "I think i need to clean up my local repo. I am committing everything a fresh after sepedi", "Feel free to ping me when yuo have a clean PR and it's ready to review :)", "closing in favor of #1114 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1025", "html_url": "https://github.com/huggingface/datasets/pull/1025", "diff_url": "https://github.com/huggingface/datasets/pull/1025.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1025.patch", "merged_at": null }
1,025
true
Add ZEST: ZEroShot learning from Task descriptions
Adds the ZEST dataset on zero-shot learning from task descriptions, from AI2. - Webpage: https://allenai.org/data/zest - Paper: https://arxiv.org/abs/2011.08115 The nature of this dataset made the supported task tags tricky, so any feedback would be welcome @yjernite. Also let me know if you think we should have an `other-task-generalization` tag or something like that...
https://github.com/huggingface/datasets/pull/1024
[ "Looks good to me, we can ping the authors for more info later. And yes apply `other-task` labels liberally, we can sort them out later :) \r\n\r\nLooks ready to merge when you're ready @joeddav " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1024", "html_url": "https://github.com/huggingface/datasets/pull/1024", "diff_url": "https://github.com/huggingface/datasets/pull/1024.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1024.patch", "merged_at": "2020-12-03T16:09:14" }
1,024
true
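To get a feel for the task-description framing, a small exploratory sketch that deliberately assumes nothing about the field names:

```python
from datasets import load_dataset

# ZEST pairs natural-language task descriptions with contexts and answers.
zest = load_dataset("zest", split="train")

# Peek at a truncated view of one example to discover the schema.
print({key: str(value)[:60] for key, value in zest[0].items()})
```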
Add Schema Guided Dialogue dataset
This PR adds the Schema Guided Dialogue dataset created for the DSTC8 challenge - https://github.com/google-research-datasets/dstc8-schema-guided-dialogue A bit simpler than MultiWOZ; the only tricky part was the sequences of dictionaries that had to be linearized. There is a config for the data proper, and a config for the schemas.
https://github.com/huggingface/datasets/pull/1023
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1023", "html_url": "https://github.com/huggingface/datasets/pull/1023", "diff_url": "https://github.com/huggingface/datasets/pull/1023.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1023.patch", "merged_at": "2020-12-03T01:18:01" }
1,023
true
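A minimal sketch for the dialogue config described above; both the dataset id `schema_guided_dstc8` and the config name `"dialogues"` are assumptions, not verified against the script:

```python
from datasets import load_dataset

# The id and config name are guesses based on the PR description.
dialogues = load_dataset("schema_guided_dstc8", "dialogues", split="train")

# Per the note above, turns arrive as linearized parallel sequences
# rather than nested dictionaries; the features show the flattening.
print(dialogues.features)
```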
add MRQA
MRQA (shared task 2019): out-of-distribution generalization framed as extractive question answering. The dataset is the concatenation (of subsets) of existing QA datasets processed to match the SQuAD format.
https://github.com/huggingface/datasets/pull/1022
[ "THanks!\r\nDone!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1022", "html_url": "https://github.com/huggingface/datasets/pull/1022", "diff_url": "https://github.com/huggingface/datasets/pull/1022.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1022.patch", "merged_at": "2020-12-04T00:34:24" }
1,022
true
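Because every source corpus is normalized to one SQuAD-style schema, slicing by origin is straightforward. A sketch assuming a `subset` column records which source dataset each example came from:

```python
from datasets import load_dataset

# Load the concatenated training set, then keep one source corpus.
mrqa = load_dataset("mrqa", split="train")
squad_portion = mrqa.filter(lambda ex: ex["subset"] == "SQuAD")
print(len(squad_portion), "examples originate from SQuAD")
```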
Add Gutenberg time references dataset
This PR adds the gutenberg_time dataset: https://arxiv.org/abs/2011.04124
https://github.com/huggingface/datasets/pull/1021
[ "Description: \"A clean data resource containing all explicit time references in a dataset of 52,183 novels whose full text is available via Project Gutenberg and the Hathi Trust Digital Library 2.\" > This is just the Gutenberg part.\r\n\r\nAlso, the paragraph at the top of the file would make a good Dataset Summary in the README :) " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1021", "html_url": "https://github.com/huggingface/datasets/pull/1021", "diff_url": "https://github.com/huggingface/datasets/pull/1021.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1021.patch", "merged_at": "2020-12-03T10:33:38" }
1,021
true
Add Setswana NER
https://github.com/huggingface/datasets/pull/1020
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1020", "html_url": "https://github.com/huggingface/datasets/pull/1020", "diff_url": "https://github.com/huggingface/datasets/pull/1020.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1020.patch", "merged_at": "2020-12-03T14:56:14" }
1,020
true
Add caWaC dataset
Add dataset.
https://github.com/huggingface/datasets/pull/1019
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1019", "html_url": "https://github.com/huggingface/datasets/pull/1019", "diff_url": "https://github.com/huggingface/datasets/pull/1019.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1019.patch", "merged_at": "2020-12-03T14:47:09" }
1,019
true
Add Sepedi NER
This is a new branch created for this dataset.
https://github.com/huggingface/datasets/pull/1018
[ "Sorry for this. I deleted sepedi_ner_corpus as per your earlier advise. Let me check. " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1018", "html_url": "https://github.com/huggingface/datasets/pull/1018", "diff_url": "https://github.com/huggingface/datasets/pull/1018.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1018.patch", "merged_at": null }
1,018
true
Specify file encoding
If not specified, Python uses the system default, which on Windows is not "utf-8".
https://github.com/huggingface/datasets/pull/1017
[ "Thanks!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1017", "html_url": "https://github.com/huggingface/datasets/pull/1017", "diff_url": "https://github.com/huggingface/datasets/pull/1017.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1017.patch", "merged_at": "2020-12-03T00:44:25" }
1,017
true
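For context, the failure mode this PR fixes, illustrated with a hypothetical `train.txt` file:

```python
# Without an explicit encoding, open() falls back to the platform default
# (e.g. cp1252 on Windows), which silently garbles UTF-8 dataset files.
with open("train.txt") as f:  # platform-dependent, fragile
    _ = f.read()

# The fix applied in the PR: always pass the encoding explicitly.
with open("train.txt", encoding="utf-8") as f:  # deterministic everywhere
    _ = f.read()
```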
Add CLINC150 dataset
Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
https://github.com/huggingface/datasets/pull/1016
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1016", "html_url": "https://github.com/huggingface/datasets/pull/1016", "diff_url": "https://github.com/huggingface/datasets/pull/1016.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1016.patch", "merged_at": "2020-12-03T10:32:04" }
1,016
true
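A loading sketch for the record above; the dataset id `clinc_oos`, the `"plus"` config, and the `intent` column name are all assumptions borrowed from the upstream repo's naming:

```python
from datasets import load_dataset

# CLINC150 covers 150 in-scope intents plus out-of-scope queries.
clinc = load_dataset("clinc_oos", "plus", split="train")

# The intent label space should include an out-of-scope class.
print(clinc.features["intent"])
```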
add hard dataset
Hotel reviews in the Arabic language.
https://github.com/huggingface/datasets/pull/1015
[ "Thanks @sumanthd17 that fixed it. " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1015", "html_url": "https://github.com/huggingface/datasets/pull/1015", "diff_url": "https://github.com/huggingface/datasets/pull/1015.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1015.patch", "merged_at": "2020-12-03T15:03:54" }
1,015
true
Add SciTLDR Dataset (Take 2)
Adds the SciTLDR dataset by AI2. Added the `README.md` card with tags to the best of my knowledge. Multi-target summaries or TLDRs of scientific documents. Continued from #986.
https://github.com/huggingface/datasets/pull/1014
[ "@lhoestq please review this PR when you get free", "If the CI fails just because of `RemoteDatasetTest` errors it's fine, they're fixed on master", "> If the CI fails just because of `RemoteDatasetTest` errors it's fine, they're fixed on master\r\n\r\nThe same 3 tests are failing again :(\r\n```\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_norwegian_ner\r\n```", "One trick if you want to add more datasets to avoid these errors : you can just rebase the master branch of your fork from the master branch of the repo. Then each time you make a new branch from master on your fork, it will include the fix for these errors", "> One trick if you want to add more datasets to avoid these errors : you can just rebase the master branch of your fork from the master branch of the repo. Then each time you make a new branch from master on your fork, it will include the fix for these errors\r\n\r\nYes, I almost always do that, but somehow seems even this branch got old 😓 \r\nI also do the following if I directly create a new branch locally: `git checkout -b <branchname> upstream/master` so it stays up-to date irrespective of my fork, still don't know how this crept in again", "Merging this one since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1014", "html_url": "https://github.com/huggingface/datasets/pull/1014", "diff_url": "https://github.com/huggingface/datasets/pull/1014.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1014.patch", "merged_at": "2020-12-02T18:37:58" }
1,014
true
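The multi-target aspect is the interesting part here: each paper can carry several reference TLDRs. A sketch assuming an `"Abstract"` config and a `target` column holding the reference list:

```python
from datasets import load_dataset

# Config and column names are assumptions, not verified against the script.
scitldr = load_dataset("scitldr", "Abstract", split="train")
paper = scitldr[0]
print(len(paper["target"]), "reference TLDRs for this paper")
```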
Adding CS restaurants dataset
This PR adds the CS restaurants dataset; this is a re-opening of a previous PR with a chaotic commit history.
https://github.com/huggingface/datasets/pull/1013
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1013", "html_url": "https://github.com/huggingface/datasets/pull/1013", "diff_url": "https://github.com/huggingface/datasets/pull/1013.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1013.patch", "merged_at": "2020-12-02T18:25:19" }
1,013
true
Adding Evidence Inference Data:
http://evidence-inference.ebm-nlp.com/download/ https://arxiv.org/pdf/2005.04177.pdf
https://github.com/huggingface/datasets/pull/1012
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1012", "html_url": "https://github.com/huggingface/datasets/pull/1012", "diff_url": "https://github.com/huggingface/datasets/pull/1012.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1012.patch", "merged_at": "2020-12-03T15:04:46" }
1,012
true
Add Bilingual Corpus of Arabic-English Parallel Tweets
Added Bilingual Corpus of Arabic-English Parallel Tweets. The link to the dataset can be found [here](https://alt.qcri.org/wp-content/uploads/2020/08/Bilingual-Corpus-of-Arabic-English-Parallel-Tweets.zip) and the paper can be found [here](https://www.aclweb.org/anthology/2020.bucc-1.3.pdf) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
https://github.com/huggingface/datasets/pull/1011
[ "IMO, the problem with this dataset is that it is not really a text/nlp dataset. These are just collections of tweet ids. So, ultimately, one needs to crawl twitter to get the actual text.", "That's true.\r\n\r\n", "at least it's clear in the description that one needs to collect the tweets : \r\n```\r\nThis resource is a result of a generic method for collecting parallel tweets.\r\n```", "Looks like this is failing for other datasets. Should I rebase it and push again?\r\nAlso rebasing and pushing is reflecting changes in many other files (ultimately forcing me to open a new branch and a new PR) any way to avoid this?", "No let me merge this one directly, it's fine", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1011", "html_url": "https://github.com/huggingface/datasets/pull/1011", "diff_url": "https://github.com/huggingface/datasets/pull/1011.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1011.patch", "merged_at": "2020-12-04T14:44:33" }
1,011
true
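As the discussion points out, the corpus ships tweet ID pairs rather than text, so a hydration pass against the Twitter API is unavoidable. A rough sketch of that step; the `fetch_tweet` helper is hypothetical, and the dataset id and column names are guesses:

```python
from datasets import load_dataset

def fetch_tweet(tweet_id: str) -> str:
    """Hypothetical stand-in for a Twitter API lookup."""
    return f"<text of tweet {tweet_id}>"  # replace with a real API call

# Dataset id and column names below are assumptions.
pairs = load_dataset("tweets_ar_en_parallel", split="train")
hydrated = pairs.map(lambda ex: {
    "arabic_text": fetch_tweet(ex["arabic_tweet_id"]),
    "english_text": fetch_tweet(ex["english_tweet_id"]),
})
```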
Add NoReC: Norwegian Review Corpus
https://github.com/huggingface/datasets/pull/1010
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1010", "html_url": "https://github.com/huggingface/datasets/pull/1010", "diff_url": "https://github.com/huggingface/datasets/pull/1010.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1010.patch", "merged_at": "2021-02-18T14:47:28" }
1,010
true
Adding C3 dataset: the first free-form multiple-Choice Chinese machine reading Comprehension dataset.
https://github.com/nlpdata/c3 https://arxiv.org/abs/1904.09679
https://github.com/huggingface/datasets/pull/1009
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1009", "html_url": "https://github.com/huggingface/datasets/pull/1009", "diff_url": "https://github.com/huggingface/datasets/pull/1009.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1009.patch", "merged_at": "2020-12-03T13:16:29" }
1,009
true
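A loading sketch for C3; the `"mixed"` config name (for the document-based half, alongside a dialogue-based one) is an assumption from the upstream repo's file naming:

```python
from datasets import load_dataset

# Expect a passage, a question, candidate choices, and the answer.
c3 = load_dataset("c3", "mixed", split="train")
print(c3[0].keys())
```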
Adding C3 dataset: the first free-form multiple-Choice Chinese machine reading Comprehension dataset. https://github.com/nlpdata/c3 https://arxiv.org/abs/1904.09679
null
https://github.com/huggingface/datasets/pull/1008
[ "Dupe of #1009 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1008", "html_url": "https://github.com/huggingface/datasets/pull/1008", "diff_url": "https://github.com/huggingface/datasets/pull/1008.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1008.patch", "merged_at": null }
1,008
true
Include license file in source distribution
It would be helpful to include the license file in the source distribution.
https://github.com/huggingface/datasets/pull/1007
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1007", "html_url": "https://github.com/huggingface/datasets/pull/1007", "diff_url": "https://github.com/huggingface/datasets/pull/1007.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1007.patch", "merged_at": "2020-12-02T17:58:05" }
1,007
true
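A quick way to verify the fix after building a source distribution; the archive name is a placeholder for whatever `python setup.py sdist` produced in `dist/`:

```python
import tarfile

# Assert that the built sdist actually ships the license file.
with tarfile.open("dist/datasets-1.1.3.tar.gz", "r:gz") as sdist:
    assert any(name.endswith("LICENSE") for name in sdist.getnames())
```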