id (int64) | number (int64) | title (string) | body (string) | state (string) | html_url (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | user (dict) | labels (list) | is_pull_request (bool) | comments (list)
|---|---|---|---|---|---|---|---|---|---|---|---|---|
755,936,327 | 1,034 | add scb_mt_enth_2020 | ## scb-mt-en-th-2020: A Large English-Thai Parallel Corpus
The primary objective of our work is to build a large-scale English-Thai dataset for machine translation.
We construct an English-Thai machine translation dataset with over 1 million segment pairs, curated from various sources,
namely news, Wikipedia artic... | closed | https://github.com/huggingface/datasets/pull/1034 | 2020-12-03T07:13:49 | 2020-12-03T16:57:23 | 2020-12-03T16:57:23 | {
"login": "cstorm125",
"id": 15519308,
"type": "User"
} | [] | true | [] |
755,921,927 | 1,033 | Add support for ".txm" format | In dummy data generation, add support for the XML-like ".txm" file format.
Also support filenames with additional compression extension: ".txm.gz". | closed | https://github.com/huggingface/datasets/pull/1033 | 2020-12-03T06:52:08 | 2021-02-21T19:47:11 | 2021-02-21T19:47:11 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
755,858,785 | 1,032 | IIT B English to Hindi machine translation dataset | Adding IIT Bombay English-Hindi Corpus dataset
More info: http://www.cfilt.iitb.ac.in/iitb_parallel/ | closed | https://github.com/huggingface/datasets/pull/1032 | 2020-12-03T05:18:45 | 2021-01-10T08:44:51 | 2021-01-10T08:44:15 | {
"login": "spatil6",
"id": 6419011,
"type": "User"
} | [] | true | [] |
755,844,004 | 1,031 | add crows_pairs | This PR adds CrowS-Pairs datasets.
More info:
https://github.com/nyu-mll/crows-pairs/
https://arxiv.org/pdf/2010.00133.pdf | closed | https://github.com/huggingface/datasets/pull/1031 | 2020-12-03T05:05:11 | 2020-12-03T18:29:52 | 2020-12-03T18:29:39 | {
"login": "patil-suraj",
"id": 27137566,
"type": "User"
} | [] | true | [] |
755,777,438 | 1,030 | allegro_reviews dataset | - **Name:** *allegro_reviews*
- **Description:** *Allegro Reviews is a sentiment analysis dataset, consisting of 11,588 product reviews written in Polish and extracted from Allegro.pl - a popular e-commerce marketplace. Each review contains at least 50 words and has a rating on a scale from one (negative review) to fi... | closed | https://github.com/huggingface/datasets/pull/1030 | 2020-12-03T03:11:39 | 2020-12-04T10:56:29 | 2020-12-03T16:34:47 | {
"login": "abecadel",
"id": 1654113,
"type": "User"
} | [] | true | [] |
755,767,616 | 1,029 | Add PEC | A persona-based empathetic conversation dataset. | closed | https://github.com/huggingface/datasets/pull/1029 | 2020-12-03T02:46:08 | 2020-12-04T10:58:19 | 2020-12-03T16:15:06 | {
"login": "zhongpeixiang",
"id": 11826803,
"type": "User"
} | [] | true | [] |
755,712,854 | 1,028 | Add ASSET dataset for text simplification evaluation | Adding the ASSET dataset from https://github.com/facebookresearch/asset
One config for the simplification data, one for the human ratings of quality.
The README.md borrows from that written by @juand-r | closed | https://github.com/huggingface/datasets/pull/1028 | 2020-12-03T00:28:29 | 2020-12-17T10:03:06 | 2020-12-03T16:34:37 | {
"login": "yjernite",
"id": 10469459,
"type": "User"
} | [] | true | [] |
755,695,420 | 1,027 | Hi | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | closed | https://github.com/huggingface/datasets/issues/1027 | 2020-12-02T23:47:14 | 2020-12-03T16:42:41 | 2020-12-03T16:42:41 | {
"login": "suemori87",
"id": 75398394,
"type": "User"
} | [] | false | [] |
755,689,195 | 1,026 | Lío o | ````l`````````
```
O
```
`````
Ño
```
````
``` | closed | https://github.com/huggingface/datasets/issues/1026 | 2020-12-02T23:32:25 | 2020-12-03T16:42:47 | 2020-12-03T16:42:47 | {
"login": "ghost",
"id": 10137,
"type": "User"
} | [] | false | [] |
755,673,371 | 1,025 | Add Sesotho Ner | closed | https://github.com/huggingface/datasets/pull/1025 | 2020-12-02T23:00:15 | 2020-12-16T16:27:03 | 2020-12-16T16:27:02 | {
"login": "yvonnegitau",
"id": 7923902,
"type": "User"
} | [] | true | [] | |
755,664,113 | 1,024 | Add ZEST: ZEroShot learning from Task descriptions | Adds the ZEST dataset on zero-shot learning from task descriptions from AI2.
- Webpage: https://allenai.org/data/zest
- Paper: https://arxiv.org/abs/2011.08115
The nature of this dataset made the supported task tags tricky, so feedback would be welcome @yjernite. Also let me know if you think we shoul... | closed | https://github.com/huggingface/datasets/pull/1024 | 2020-12-02T22:41:20 | 2020-12-03T19:21:00 | 2020-12-03T16:09:15 | {
"login": "joeddav",
"id": 9353833,
"type": "User"
} | [] | true | [] |
755,655,752 | 1,023 | Add Schema Guided Dialogue dataset | This PR adds the Schema Guided Dialogue dataset created for the DSTC8 challenge
- https://github.com/google-research-datasets/dstc8-schema-guided-dialogue
A bit simpler than MultiWOZ, the only tricky thing is the sequence of dictionaries that had to be linearized. There is a config for the data proper, and a config... | closed | https://github.com/huggingface/datasets/pull/1023 | 2020-12-02T22:26:01 | 2020-12-03T01:18:01 | 2020-12-03T01:18:01 | {
"login": "yjernite",
"id": 10469459,
"type": "User"
} | [] | true | [] |
755,651,377 | 1,022 | add MRQA | MRQA (shared task 2019)
out of distribution generalization
Framed as extractive question answering
Dataset is the concatenation (of subsets) of existing QA datasets processed to match the SQuAD format | closed | https://github.com/huggingface/datasets/pull/1022 | 2020-12-02T22:17:56 | 2020-12-04T00:34:26 | 2020-12-04T00:34:25 | {
"login": "VictorSanh",
"id": 16107619,
"type": "User"
} | [] | true | [] |
755,644,559 | 1,021 | Add Gutenberg time references dataset | This PR adds the gutenberg_time dataset: https://arxiv.org/abs/2011.04124 | closed | https://github.com/huggingface/datasets/pull/1021 | 2020-12-02T22:05:26 | 2020-12-03T10:33:39 | 2020-12-03T10:33:38 | {
"login": "TevenLeScao",
"id": 26709476,
"type": "User"
} | [] | true | [] |
755,601,450 | 1,020 | Add Setswana NER | closed | https://github.com/huggingface/datasets/pull/1020 | 2020-12-02T20:52:07 | 2020-12-03T14:56:14 | 2020-12-03T14:56:14 | {
"login": "yvonnegitau",
"id": 7923902,
"type": "User"
} | [] | true | [] | |
755,582,090 | 1,019 | Add caWaC dataset | Add dataset. | closed | https://github.com/huggingface/datasets/pull/1019 | 2020-12-02T20:18:55 | 2020-12-03T14:47:09 | 2020-12-03T14:47:09 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
755,570,882 | 1,018 | Add Sepedi NER | This is a new branch created for this dataset | closed | https://github.com/huggingface/datasets/pull/1018 | 2020-12-02T20:01:05 | 2020-12-03T21:47:03 | 2020-12-03T21:46:38 | {
"login": "yvonnegitau",
"id": 7923902,
"type": "User"
} | [] | true | [] |
755,558,175 | 1,017 | Specify file encoding | If not specified, Python uses the system default encoding, which on Windows is not "utf-8". | closed | https://github.com/huggingface/datasets/pull/1017 | 2020-12-02T19:40:45 | 2020-12-03T00:44:25 | 2020-12-03T00:44:25 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
755,521,862 | 1,016 | Add CLINC150 dataset | Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | closed | https://github.com/huggingface/datasets/pull/1016 | 2020-12-02T18:44:30 | 2020-12-03T10:32:04 | 2020-12-03T10:32:04 | {
"login": "sumanthd17",
"id": 28291870,
"type": "User"
} | [] | true | [] |
755,508,841 | 1,015 | add hard dataset | Hotel reviews in the Arabic language. | closed | https://github.com/huggingface/datasets/pull/1015 | 2020-12-02T18:27:36 | 2020-12-03T15:03:54 | 2020-12-03T15:03:54 | {
"login": "zaidalyafeai",
"id": 15667714,
"type": "User"
} | [] | true | [] |
755,505,851 | 1,014 | Add SciTLDR Dataset (Take 2) | Adds the SciTLDR Dataset by AI2
Added the `README.md` card with tags to the best of my knowledge
Multi-target summaries or TLDRs of Scientific Documents
Continued from #986 | closed | https://github.com/huggingface/datasets/pull/1014 | 2020-12-02T18:22:50 | 2020-12-02T18:55:10 | 2020-12-02T18:37:58 | {
"login": "bharatr21",
"id": 13381361,
"type": "User"
} | [] | true | [] |
755,493,075 | 1,013 | Adding CS restaurants dataset | This PR adds the CS restaurants dataset; this is a re-opening of a previous PR with a chaotic commit history. | closed | https://github.com/huggingface/datasets/pull/1013 | 2020-12-02T18:02:30 | 2020-12-02T18:25:20 | 2020-12-02T18:25:19 | {
"login": "TevenLeScao",
"id": 26709476,
"type": "User"
} | [] | true | [] |
755,485,658 | 1,012 | Adding Evidence Inference Data: | http://evidence-inference.ebm-nlp.com/download/
https://arxiv.org/pdf/2005.04177.pdf | closed | https://github.com/huggingface/datasets/pull/1012 | 2020-12-02T17:51:35 | 2020-12-03T15:04:46 | 2020-12-03T15:04:46 | {
"login": "Narsil",
"id": 204321,
"type": "User"
} | [] | true | [] |
755,463,726 | 1,011 | Add Bilingual Corpus of Arabic-English Parallel Tweets | Added Bilingual Corpus of Arabic-English Parallel Tweets. The link to the dataset can be found [here](https://alt.qcri.org/wp-content/uploads/2020/08/Bilingual-Corpus-of-Arabic-English-Parallel-Tweets.zip) and the paper can be found [here](https://www.aclweb.org/anthology/2020.bucc-1.3.pdf)
- [x] Followed the instru... | closed | https://github.com/huggingface/datasets/pull/1011 | 2020-12-02T17:20:02 | 2020-12-04T14:45:10 | 2020-12-04T14:44:33 | {
"login": "sumanthd17",
"id": 28291870,
"type": "User"
} | [] | true | [] |
755,432,143 | 1,010 | Add NoReC: Norwegian Review Corpus | closed | https://github.com/huggingface/datasets/pull/1010 | 2020-12-02T16:38:29 | 2021-02-18T14:47:29 | 2021-02-18T14:47:28 | {
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
} | [] | true | [] | |
755,384,433 | 1,009 | Adding C3 dataset: the first free-form multiple-Choice Chinese machine reading Comprehension dataset. | https://github.com/nlpdata/c3
https://arxiv.org/abs/1904.09679 | closed | https://github.com/huggingface/datasets/pull/1009 | 2020-12-02T15:40:36 | 2020-12-03T13:16:30 | 2020-12-03T13:16:29 | {
"login": "Narsil",
"id": 204321,
"type": "User"
} | [] | true | [] |
755,372,798 | 1,008 | Adding C3 dataset: the first free-form multiple-Choice Chinese machine reading Comprehension dataset. https://github.com/nlpdata/c3 https://arxiv.org/abs/1904.09679 | null | closed | https://github.com/huggingface/datasets/pull/1008 | 2020-12-02T15:28:05 | 2020-12-02T15:40:55 | 2020-12-02T15:40:55 | {
"login": "Narsil",
"id": 204321,
"type": "User"
} | [] | true | [] |
755,364,078 | 1,007 | Include license file in source distribution | It would be helpful to include the license file in the source distribution. | closed | https://github.com/huggingface/datasets/pull/1007 | 2020-12-02T15:17:43 | 2020-12-02T17:58:05 | 2020-12-02T17:58:05 | {
"login": "synapticarbors",
"id": 589279,
"type": "User"
} | [] | true | [] |
755,362,766 | 1,006 | add yahoo_answers_topics | This PR adds the Yahoo Answers topic classification dataset. | closed | https://github.com/huggingface/datasets/pull/1006 | 2020-12-02T15:16:13 | 2020-12-03T16:44:38 | 2020-12-02T18:01:32 | {
More info:
https://github.com/LC-John/Yahoo-Answers-Topic-Classification-Dataset
cc @joeddav, @yjernite | closed | https://github.com/huggingface/datasets/pull/1006 | 2020-12-02T15:16:13 | 2020-12-03T16:44:38 | 2020-12-02T18:01:32 | {
"login": "patil-suraj",
"id": 27137566,
"type": "User"
} | [] | true | [] |
755,337,255 | 1,005 | Adding Autshumato South african langages: | https://repo.sadilar.org/handle/20.500.12185/7/discover?filtertype=database&filter_relational_operator=equals&filter=Multilingual+Text+Corpora%3A+Aligned | closed | https://github.com/huggingface/datasets/pull/1005 | 2020-12-02T14:47:33 | 2020-12-03T13:13:30 | 2020-12-03T13:13:30 | {
"login": "Narsil",
"id": 204321,
"type": "User"
} | [] | true | [] |
755,325,368 | 1,004 | how large datasets are handled under the hood | Hi
I want to use multiple large datasets with a map-style dataloader, where they cannot fit into memory. Could you tell me how you handle the datasets under the hood? Do you bring everything into memory for map-style datasets, or is there some sharding under the hood so data is brought into memory only when necessary, than... | closed | https://github.com/huggingface/datasets/issues/1004 | 2020-12-02T14:32:40 | 2022-10-05T12:13:29 | 2022-10-05T12:13:29 | {
"login": "rabeehkarimimahabadi",
"id": 73364383,
"type": "User"
} | [] | false | [] |
755,310,318 | 1,003 | Add multi_x_science_sum | Add Multi-XScience Dataset.
github repo: https://github.com/yaolu/Multi-XScience
paper: [Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles](https://arxiv.org/abs/2010.14235) | closed | https://github.com/huggingface/datasets/pull/1003 | 2020-12-02T14:14:01 | 2020-12-02T17:39:05 | 2020-12-02T17:39:05 | {
"login": "moussaKam",
"id": 28675016,
"type": "User"
} | [] | true | [] |
755,309,758 | 1,002 | Adding Medal: MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining | null | closed | https://github.com/huggingface/datasets/pull/1002 | 2020-12-02T14:13:17 | 2020-12-07T16:58:03 | 2020-12-03T13:14:33 | {
"login": "Narsil",
"id": 204321,
"type": "User"
} | [] | true | [] |
755,309,071 | 1,001 | Adding Medal: MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining | null | closed | https://github.com/huggingface/datasets/pull/1001 | 2020-12-02T14:12:30 | 2020-12-02T14:13:12 | 2020-12-02T14:13:12 | {
"login": "Narsil",
"id": 204321,
"type": "User"
} | [] | true | [] |
755,292,066 | 1,000 | UM005: Urdu <> English Translation Dataset | Adds Urdu-English dataset for machine translation: http://ufal.ms.mff.cuni.cz/umc/005-en-ur/ | closed | https://github.com/huggingface/datasets/pull/1000 | 2020-12-02T13:51:35 | 2020-12-04T15:34:30 | 2020-12-04T15:34:29 | {
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
} | [] | true | [] |
755,246,786 | 999 | add generated_reviews_enth | `generated_reviews_enth` is created as part of [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf) for a machine translation task. This dataset (referred to as `generated_reviews_yn` in [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf)) consists of English product reviews generated by [CTRL](https://arxiv.org/abs/1... | closed | https://github.com/huggingface/datasets/pull/999 | 2020-12-02T12:50:43 | 2020-12-03T11:17:28 | 2020-12-03T11:17:28 | {
"login": "cstorm125",
"id": 15519308,
"type": "User"
} | [] | true | [] |
755,235,356 | 998 | adding yahoo_answers_qa | Adding Yahoo Answers QA dataset.
More info:
https://ciir.cs.umass.edu/downloads/nfL6/ | closed | https://github.com/huggingface/datasets/pull/998 | 2020-12-02T12:33:54 | 2020-12-02T13:45:40 | 2020-12-02T13:26:06 | {
"login": "patil-suraj",
"id": 27137566,
"type": "User"
} | [] | true | [] |
755,185,517 | 997 | Microsoft CodeXGlue | Datasets from https://github.com/microsoft/CodeXGLUE
This contains 13 datasets:
code_x_glue_cc_clone_detection_big_clone_bench
code_x_glue_cc_clone_detection_poj_104
code_x_glue_cc_cloze_testing_all
code_x_glue_cc_cloze_testing_maxmin
code_x_glue_cc_code_completion_line
code_x_glue_cc_code_completion_token
... | closed | https://github.com/huggingface/datasets/pull/997 | 2020-12-02T11:21:18 | 2021-06-08T13:42:25 | 2021-06-08T13:42:24 | {
"login": "madlag",
"id": 272253,
"type": "User"
} | [] | true | [] |
755,176,084 | 996 | NotADirectoryError while loading the CNN/Dailymail dataset |
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------... | closed | https://github.com/huggingface/datasets/issues/996 | 2020-12-02T11:07:56 | 2022-02-17T14:13:39 | 2022-02-17T14:13:39 | {
"login": "arc-bu",
"id": 75367920,
"type": "User"
} | [] | false | [] |
755,175,199 | 995 | added dataset circa | Dataset Circa added. Only README.md and dataset card left | closed | https://github.com/huggingface/datasets/pull/995 | 2020-12-02T11:06:39 | 2020-12-04T10:58:16 | 2020-12-03T09:39:37 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [] | true | [] |
755,146,834 | 994 | Add Sepedi ner corpus | closed | https://github.com/huggingface/datasets/pull/994 | 2020-12-02T10:30:07 | 2020-12-03T10:19:14 | 2020-12-02T18:20:08 | {
"login": "yvonnegitau",
"id": 7923902,
"type": "User"
} | [] | true | [] | |
755,135,768 | 993 | Problem downloading amazon_reviews_multi | Thanks for adding the dataset.
After trying to load the dataset, I am getting the following error:
`ConnectionError: Couldn't reach https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json
`
I used the following code to load the dataset:
`load_dataset(
dataset_name,
... | closed | https://github.com/huggingface/datasets/issues/993 | 2020-12-02T10:15:57 | 2022-10-05T12:21:34 | 2022-10-05T12:21:34 | {
"login": "hfawaz",
"id": 29229602,
"type": "User"
} | [] | false | [] |
755,124,963 | 992 | Add CAIL 2018 dataset | closed | https://github.com/huggingface/datasets/pull/992 | 2020-12-02T10:01:40 | 2020-12-02T16:49:02 | 2020-12-02T16:49:01 | {
"login": "JetRunner",
"id": 22514219,
"type": "User"
} | [] | true | [] | |
755,117,902 | 991 | Adding farsi_news dataset (https://github.com/sci2lab/Farsi-datasets) | null | closed | https://github.com/huggingface/datasets/pull/991 | 2020-12-02T09:52:19 | 2020-12-03T11:01:26 | 2020-12-03T11:01:26 | {
"login": "Narsil",
"id": 204321,
"type": "User"
} | [] | true | [] |
755,097,798 | 990 | Add E2E NLG | Adding the E2E NLG dataset.
More info here : http://www.macs.hw.ac.uk/InteractionLab/E2E/
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_genera... | closed | https://github.com/huggingface/datasets/pull/990 | 2020-12-02T09:25:12 | 2020-12-03T13:08:05 | 2020-12-03T13:08:04 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
755,079,394 | 989 | Fix SV -> NO | This PR fixes the small typo as seen in #956 | closed | https://github.com/huggingface/datasets/pull/989 | 2020-12-02T08:59:59 | 2020-12-02T09:18:21 | 2020-12-02T09:18:14 | {
"login": "jplu",
"id": 959590,
"type": "User"
} | [] | true | [] |
755,069,159 | 988 | making sure datasets are not loaded in memory and distributed training of them | Hi
I am dealing with large-scale datasets which I need to train on in a distributed fashion. I used the shard function to divide the dataset across the cores, without any sampler, but this does not work for distributed training and does not become any faster than 1 TPU core. 1) How can I make sure data is not loaded into memory? 2) in cas... | closed | https://github.com/huggingface/datasets/issues/988 | 2020-12-02T08:45:15 | 2022-10-05T13:00:42 | 2022-10-05T13:00:42 | {
"login": "rabeehk",
"id": 6278280,
"type": "User"
} | [] | false | [] |
755,059,469 | 987 | Add OPUS DOGC dataset | closed | https://github.com/huggingface/datasets/pull/987 | 2020-12-02T08:30:32 | 2020-12-04T13:27:41 | 2020-12-04T13:27:41 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] | |
755,047,470 | 986 | Add SciTLDR Dataset | Adds the SciTLDR Dataset by AI2
Added README card with tags to the best of my knowledge
Multi-target summaries or TLDRs of Scientific Documents | closed | https://github.com/huggingface/datasets/pull/986 | 2020-12-02T08:11:16 | 2020-12-02T18:37:22 | 2020-12-02T18:02:59 | {
"login": "bharatr21",
"id": 13381361,
"type": "User"
} | [] | true | [] |
755,020,564 | 985 | Add GAP dataset | GAP dataset
Gender bias coreference resolution | closed | https://github.com/huggingface/datasets/pull/985 | 2020-12-02T07:25:11 | 2022-10-06T14:11:52 | 2020-12-02T16:16:32 | {
"login": "VictorSanh",
"id": 16107619,
"type": "User"
} | [] | true | [] |
755,009,916 | 984 | committing Whoa file | closed | https://github.com/huggingface/datasets/pull/984 | 2020-12-02T07:07:46 | 2020-12-02T16:15:29 | 2020-12-02T15:40:58 | {
"login": "StulosDunamos",
"id": 75356780,
"type": "User"
} | [] | true | [] | |
754,966,620 | 983 | add mc taco | MC-TACO
Temporal commonsense knowledge | closed | https://github.com/huggingface/datasets/pull/983 | 2020-12-02T05:54:55 | 2020-12-02T15:37:47 | 2020-12-02T15:37:46 | {
"login": "VictorSanh",
"id": 16107619,
"type": "User"
} | [] | true | [] |
754,946,337 | 982 | add prachathai67k take2 | I decided it would be faster to create a new pull request instead of fixing the rebase issues.
continuing from https://github.com/huggingface/datasets/pull/954
| closed | https://github.com/huggingface/datasets/pull/982 | 2020-12-02T05:12:01 | 2020-12-02T10:18:11 | 2020-12-02T10:18:11 | {
"login": "cstorm125",
"id": 15519308,
"type": "User"
} | [] | true | [] |
754,937,612 | 981 | add wisesight_sentiment take2 | Take 2, since last time the rebase issues were taking me too much time to fix as opposed to just opening a new one. | closed | https://github.com/huggingface/datasets/pull/981 | 2020-12-02T04:50:59 | 2020-12-02T10:37:13 | 2020-12-02T10:37:13 | {
"login": "cstorm125",
"id": 15519308,
"type": "User"
} | [] | true | [] |
754,899,301 | 980 | Wongnai - Thai reviews dataset | 40,000 reviews, previously released on GitHub ( https://github.com/wongnai/wongnai-corpus ) with an LGPL license, and on a closed Kaggle competition ( https://www.kaggle.com/c/wongnai-challenge-review-rating-prediction/ ) | closed | https://github.com/huggingface/datasets/pull/980 | 2020-12-02T03:20:08 | 2020-12-02T15:34:41 | 2020-12-02T15:30:05 | {
"login": "mapmeld",
"id": 643918,
"type": "User"
} | [] | true | [] |
754,893,337 | 979 | [WIP] Add multi woz | This PR adds version 2.2 of the Multi-domain Wizard of OZ dataset: https://github.com/budzianowski/multiwoz/tree/master/data/MultiWOZ_2.2
It was a pretty big chunk of work to figure out the structure, so I still have to add the description to the README.md
On the plus side the structure is broadly similar to that... | closed | https://github.com/huggingface/datasets/pull/979 | 2020-12-02T03:05:42 | 2020-12-02T16:07:16 | 2020-12-02T16:07:16 | {
"login": "yjernite",
"id": 10469459,
"type": "User"
} | [] | true | [] |
754,854,478 | 978 | Add code refinement | ### OVERVIEW
Millions of open-source projects with numerous bug fixes are available in code repositories. This proliferation of software development histories can be leveraged to learn how to fix common programming bugs. Code refinement aims to automatically fix bugs in the code, which can contribute to reducing t... | closed | https://github.com/huggingface/datasets/pull/978 | 2020-12-02T01:29:58 | 2020-12-07T01:52:58 | 2020-12-07T01:52:58 | {
"login": "reshinthadithyan",
"id": 36307201,
"type": "User"
} | [] | true | [] |
754,839,594 | 977 | Add ROPES dataset | ROPES dataset
Reasoning over paragraph effects in situations - testing a system's ability to apply knowledge from a passage of text to a new situation. The task is framed into a reading comprehension task following squad-style extractive qa.
One thing to note: labels of the test set are hidden (leaderboard submiss... | closed | https://github.com/huggingface/datasets/pull/977 | 2020-12-02T00:52:10 | 2020-12-02T10:58:36 | 2020-12-02T10:58:35 | {
"login": "VictorSanh",
"id": 16107619,
"type": "User"
} | [] | true | [] |
754,826,146 | 976 | Arabic pos dialect | A README.md and loading script for the Arabic POS Dialect dataset. The README is missing the sections on personal information, biases, and limitations, as it would probably be better for those to be filled by someone who can read the contents of the dataset and is familiar with Arabic NLP. | closed | https://github.com/huggingface/datasets/pull/976 | 2020-12-02T00:21:13 | 2020-12-09T17:30:32 | 2020-12-09T17:30:32 | {
"login": "mcmillanmajora",
"id": 26722925,
"type": "User"
} | [] | true | [] |
754,823,701 | 975 | add MeTooMA dataset | This PR adds the #MeToo MA dataset. It presents multi-label data points for tweets mined in the backdrop of the #MeToo movement. The dataset includes data points in the form of Tweet ids and appropriate labels. Please refer to the accompanying paper for detailed information regarding annotation, collection, and guideli... | closed | https://github.com/huggingface/datasets/pull/975 | 2020-12-02T00:15:55 | 2020-12-02T10:58:56 | 2020-12-02T10:58:55 | {
"login": "akash418",
"id": 23264033,
"type": "User"
} | [] | true | [] |
754,811,185 | 974 | Add MeTooMA Dataset | closed | https://github.com/huggingface/datasets/pull/974 | 2020-12-01T23:44:01 | 2020-12-01T23:57:58 | 2020-12-01T23:57:58 | {
"login": "akash418",
"id": 23264033,
"type": "User"
} | [] | true | [] | |
754,807,963 | 973 | Adding The Microsoft Terminology Collection dataset. | closed | https://github.com/huggingface/datasets/pull/973 | 2020-12-01T23:36:23 | 2020-12-04T15:25:44 | 2020-12-04T15:12:46 | {
"login": "leoxzhao",
"id": 7915719,
"type": "User"
} | [] | true | [] | |
754,787,314 | 972 | Add Children's Book Test (CBT) dataset | Add the Children's Book Test (CBT) from Facebook (Hill et al. 2016).
Sentence completion given a few sentences as context from a children's book. | closed | https://github.com/huggingface/datasets/pull/972 | 2020-12-01T22:53:26 | 2021-03-19T11:30:03 | 2021-03-19T11:30:03 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [] | true | [] |
754,784,041 | 971 | add piqa | Physical Interaction: Question Answering (commonsense)
https://yonatanbisk.com/piqa/ | closed | https://github.com/huggingface/datasets/pull/971 | 2020-12-01T22:47:04 | 2020-12-02T09:58:02 | 2020-12-02T09:58:01 | {
"login": "VictorSanh",
"id": 16107619,
"type": "User"
} | [] | true | [] |
754,697,489 | 970 | Add SWAG | Commonsense NLI -> https://rowanzellers.com/swag/ | closed | https://github.com/huggingface/datasets/pull/970 | 2020-12-01T20:21:05 | 2020-12-02T09:55:16 | 2020-12-02T09:55:15 | {
"login": "VictorSanh",
"id": 16107619,
"type": "User"
} | [] | true | [] |
754,681,940 | 969 | Add wiki auto dataset | This PR adds the WikiAuto sentence simplification dataset
https://github.com/chaojiang06/wiki-auto
This is also a prospective GEM task, hence the README.md | closed | https://github.com/huggingface/datasets/pull/969 | 2020-12-01T19:58:11 | 2020-12-02T16:19:14 | 2020-12-02T16:19:14 | {
"login": "yjernite",
"id": 10469459,
"type": "User"
} | [] | true | [] |
754,659,015 | 968 | ADD Afrikaans NER | Afrikaans NER corpus | closed | https://github.com/huggingface/datasets/pull/968 | 2020-12-01T19:23:03 | 2020-12-02T09:41:28 | 2020-12-02T09:41:28 | {
"login": "yvonnegitau",
"id": 7923902,
"type": "User"
} | [] | true | [] |
754,578,988 | 967 | Add CS Restaurants dataset | This PR adds the Czech restaurants dataset for Czech NLG. | closed | https://github.com/huggingface/datasets/pull/967 | 2020-12-01T17:17:37 | 2020-12-02T17:57:44 | 2020-12-02T17:57:25 | {
"login": "TevenLeScao",
"id": 26709476,
"type": "User"
} | [] | true | [] |
754,558,686 | 966 | Add CLINC150 Dataset | Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | closed | https://github.com/huggingface/datasets/pull/966 | 2020-12-01T16:50:13 | 2020-12-02T18:45:43 | 2020-12-02T18:45:30 | {
"login": "sumanthd17",
"id": 28291870,
"type": "User"
} | [] | true | [] |
754,553,169 | 965 | Add CLINC150 Dataset | Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | closed | https://github.com/huggingface/datasets/pull/965 | 2020-12-01T16:43:00 | 2020-12-01T16:51:16 | 2020-12-01T16:49:15 | {
"login": "sumanthd17",
"id": 28291870,
"type": "User"
} | [] | true | [] |
754,474,660 | 964 | Adding the WebNLG dataset | This PR adds data from the WebNLG challenge, with one configuration per release and challenge iteration.
More information can be found [here](https://webnlg-challenge.loria.fr/)
Unfortunately, the data itself comes from a pretty large number of small XML files, so the dummy data ends up being quite large (8.4 MB ... | closed | https://github.com/huggingface/datasets/pull/964 | 2020-12-01T15:05:23 | 2020-12-02T17:34:05 | 2020-12-02T17:34:05 | {
"login": "yjernite",
"id": 10469459,
"type": "User"
} | [] | true | [] |
754,451,234 | 963 | add CODAH dataset | Adding CODAH dataset.
More info:
https://github.com/Websail-NU/CODAH | closed | https://github.com/huggingface/datasets/pull/963 | 2020-12-01T14:37:05 | 2020-12-02T13:45:58 | 2020-12-02T13:21:25 | {
"login": "patil-suraj",
"id": 27137566,
"type": "User"
} | [] | true | [] |
754,441,428 | 962 | Add Danish Political Comments Dataset | closed | https://github.com/huggingface/datasets/pull/962 | 2020-12-01T14:28:32 | 2020-12-03T10:31:55 | 2020-12-03T10:31:54 | {
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
} | [] | true | [] | |
754,434,398 | 961 | sample multiple datasets | Hi
I am dealing with multiple datasets, and I need a dataloader over them with the condition that in each batch the samples come from only one of the datasets. My main question is:
- I need to have a way to sample the datasets first with some weights, let's say 2x dataset1, 1x dataset2. Could you point me to how I c... | closed | https://github.com/huggingface/datasets/issues/961 | 2020-12-01T14:20:02 | 2024-06-17T08:23:20 | 2023-07-20T14:08:57 | {
"login": "rabeehk",
"id": 6278280,
"type": "User"
} | [] | false | [] |
754,422,710 | 960 | Add code to automate parts of the dataset card | Most parts of the "Dataset Structure" section can be generated automatically. This PR adds some code to do so. | closed | https://github.com/huggingface/datasets/pull/960 | 2020-12-01T14:04:51 | 2023-09-24T09:50:38 | 2021-04-26T07:56:01 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
754,418,610 | 959 | Add Tunizi Dataset | closed | https://github.com/huggingface/datasets/pull/959 | 2020-12-01T13:59:39 | 2020-12-03T14:21:41 | 2020-12-03T14:21:40 | {
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
} | [] | true | [] | |
754,404,095 | 958 | dataset(ncslgr): add initial loading script | clean #789 | closed | https://github.com/huggingface/datasets/pull/958 | 2020-12-01T13:41:17 | 2020-12-07T16:35:39 | 2020-12-07T16:35:39 | {
"login": "AmitMY",
"id": 5757359,
"type": "User"
} | [] | true | [] |
754,380,073 | 957 | Isixhosa ner corpus | closed | https://github.com/huggingface/datasets/pull/957 | 2020-12-01T13:08:36 | 2020-12-01T18:14:58 | 2020-12-01T18:14:58 | {
"login": "yvonnegitau",
"id": 7923902,
"type": "User"
} | [] | true | [] | |
754,368,378 | 956 | Add Norwegian NER | This PR adds the [Norwegian NER](https://github.com/ljos/navnkjenner) dataset.
I have added the `conllu` package as a test dependency. This is required to properly parse the `.conllu` files. | closed | https://github.com/huggingface/datasets/pull/956 | 2020-12-01T12:51:02 | 2020-12-02T08:53:11 | 2020-12-01T18:09:21 | {"login": "jplu", "id": 959590, "type": "User"} | [] | true | [] |
754,367,291 | 955 | Added PragmEval benchmark | closed | https://github.com/huggingface/datasets/pull/955 | 2020-12-01T12:49:15 | 2020-12-04T10:43:32 | 2020-12-03T09:36:47 | {"login": "sileod", "id": 9168444, "type": "User"} | [] | true | [] |
754,362,012 | 954 | add prachathai67k | `prachathai-67k`: News Article Corpus and Multi-label Text Classificdation from Prachathai.com
The prachathai-67k dataset was scraped from the news site Prachathai.
We filtered out those articles with less than 500 characters of body text, mostly images and cartoons.
It contains 67,889 articles wtih 12 curated tags ... | closed | https://github.com/huggingface/datasets/pull/954 | 2020-12-01T12:40:55 | 2020-12-02T05:12:11 | 2020-12-02T04:43:52 | {"login": "cstorm125", "id": 15519308, "type": "User"} | [] | true | [] |
754,359,942 | 953 | added health_fact dataset | Added dataset Explainable Fact-Checking for Public Health Claims (dataset_id: health_fact) | closed | https://github.com/huggingface/datasets/pull/953 | 2020-12-01T12:37:44 | 2020-12-01T23:11:33 | 2020-12-01T23:11:33 | {"login": "bhavitvyamalik", "id": 19718818, "type": "User"} | [] | true | [] |
754,357,270 | 952 | Add orange sum | Add OrangeSum a french abstractive summarization dataset.
Paper: [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) | closed | https://github.com/huggingface/datasets/pull/952 | 2020-12-01T12:33:34 | 2020-12-01T15:44:00 | 2020-12-01T15:44:00 | {"login": "moussaKam", "id": 28675016, "type": "User"} | [] | true | [] |
754,349,979 | 951 | Prachathai67k | Add `prachathai-67k`: News Article Corpus and Multi-label Text Classificdation from Prachathai.com
The `prachathai-67k` dataset was scraped from the news site [Prachathai](prachathai.com). We filtered out those articles with less than 500 characters of body text, mostly images and cartoons. It contains 67,889 articl... | closed | https://github.com/huggingface/datasets/pull/951 | 2020-12-01T12:21:52 | 2020-12-01T12:29:53 | 2020-12-01T12:28:26 | {"login": "cstorm125", "id": 15519308, "type": "User"} | [] | true | [] |
754,318,686 | 950 | Support .xz file format | Add support to extract/uncompress files in .xz format. | closed | https://github.com/huggingface/datasets/pull/950 | 2020-12-01T11:34:48 | 2020-12-01T13:39:18 | 2020-12-01T13:39:18 | {"login": "albertvillanova", "id": 8515462, "type": "User"} | [] | true | [] |
754,317,777 | 949 | Add GermaNER Dataset | closed | https://github.com/huggingface/datasets/pull/949 | 2020-12-01T11:33:31 | 2020-12-03T14:06:41 | 2020-12-03T14:06:40 | {"login": "abhishekkrthakur", "id": 1183441, "type": "User"} | [] | true | [] |
754,306,260 | 948 | docs(ADD_NEW_DATASET): correct indentation for script | closed | https://github.com/huggingface/datasets/pull/948 | 2020-12-01T11:17:38 | 2020-12-01T11:25:18 | 2020-12-01T11:25:18 | {"login": "AmitMY", "id": 5757359, "type": "User"} | [] | true | [] |
754,286,658 | 947 | Add europeana newspapers | This PR adds the [Europeana newspapers](https://github.com/EuropeanaNewspapers/ner-corpora) dataset. | closed | https://github.com/huggingface/datasets/pull/947 | 2020-12-01T10:52:18 | 2020-12-02T09:42:35 | 2020-12-02T09:42:09 | {"login": "jplu", "id": 959590, "type": "User"} | [] | true | [] |
754,278,632 | 946 | add PEC dataset | A persona-based empathetic conversation dataset published at EMNLP 2020. | closed | https://github.com/huggingface/datasets/pull/946 | 2020-12-01T10:41:41 | 2020-12-03T02:47:14 | 2020-12-03T02:47:14 | {"login": "zhongpeixiang", "id": 11826803, "type": "User"} | [] | true | [] |
754,273,920 | 945 | Adding Babi dataset - English version | Adding the English version of bAbI.
Samples are taken from ParlAI for consistency with the main users at the moment. | closed | https://github.com/huggingface/datasets/pull/945 | 2020-12-01T10:35:36 | 2020-12-04T15:43:05 | 2020-12-04T15:42:54 | {"login": "thomwolf", "id": 7353373, "type": "User"} | [] | true | [] |
754,228,947 | 944 | Add German Legal Entity Recognition Dataset | closed | https://github.com/huggingface/datasets/pull/944 | 2020-12-01T09:38:22 | 2020-12-03T13:06:56 | 2020-12-03T13:06:55 | {"login": "abhishekkrthakur", "id": 1183441, "type": "User"} | [] | true | [] |
754,192,491 | 943 | The FLUE Benchmark | This PR adds the [FLUE](https://github.com/getalp/Flaubert/tree/master/flue) benchmark which is a set of different datasets to evaluate models for French content.
Two datasets are missing, the French Treebank that we can use only for research purpose and we are not allowed to distribute, and the Word Sense disambigu... | closed | https://github.com/huggingface/datasets/pull/943 | 2020-12-01T09:00:50 | 2020-12-01T15:24:38 | 2020-12-01T15:24:30 | {"login": "jplu", "id": 959590, "type": "User"} | [] | true | [] |
754,162,318 | 942 | D | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | closed | https://github.com/huggingface/datasets/issues/942 | 2020-12-01T08:17:10 | 2020-12-03T16:42:53 | 2020-12-03T16:42:53 | {"login": "CryptoMiKKi", "id": 74238514, "type": "User"} | [] | false | [] |
754,141,321 | 941 | Add People's Daily NER dataset | closed | https://github.com/huggingface/datasets/pull/941 | 2020-12-01T07:48:53 | 2020-12-02T18:42:43 | 2020-12-02T18:42:41 | {"login": "JetRunner", "id": 22514219, "type": "User"} | [] | true | [] |
754,010,753 | 940 | Add MSRA NER dataset | closed | https://github.com/huggingface/datasets/pull/940 | 2020-12-01T05:02:11 | 2020-12-04T09:29:40 | 2020-12-01T07:25:53 | {"login": "JetRunner", "id": 22514219, "type": "User"} | [] | true | [] |
753,965,405 | 939 | add wisesight_sentiment | Add `wisesight_sentiment` Social media messages in Thai language with sentiment label (positive, neutral, negative, question)
Model Card:
---
YAML tags:
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- th
licenses:
- cc0-1.0
multilinguality:
- monolingual
size_categories:... | closed | https://github.com/huggingface/datasets/pull/939 | 2020-12-01T03:06:39 | 2020-12-02T04:52:38 | 2020-12-02T04:35:51 | {"login": "cstorm125", "id": 15519308, "type": "User"} | [] | true | [] |
753,940,979 | 938 | V-1.0.0 of isizulu_ner_corpus | closed | https://github.com/huggingface/datasets/pull/938 | 2020-12-01T02:04:32 | 2020-12-01T23:34:36 | 2020-12-01T23:34:36 | {"login": "yvonnegitau", "id": 7923902, "type": "User"} | [] | true | [] |
753,921,078 | 937 | Local machine/cluster Beam Datasets example/tutorial | Hi,
I'm wondering if https://huggingface.co/docs/datasets/beam_dataset.html has an non-GCP or non-Dataflow version example/tutorial? I tried to migrate it to run on DirectRunner and SparkRunner, however, there were way too many runtime errors that I had to fix during the process, and even so I wasn't able to get eit... | closed | https://github.com/huggingface/datasets/issues/937 | 2020-12-01T01:11:43 | 2024-03-15T16:05:14 | 2024-03-15T16:05:14 | {"login": "shangw-nvidia", "id": 66387198, "type": "User"} | [] | false | [] |
753,915,603 | 936 | Added HANS parses and categories | This pull request adds HANS missing information: the sentence parses, as well as the heuristic category. | closed | https://github.com/huggingface/datasets/pull/936 | 2020-12-01T00:58:16 | 2020-12-01T13:19:41 | 2020-12-01T13:19:40 | {"login": "TevenLeScao", "id": 26709476, "type": "User"} | [] | true | [] |
753,863,055 | 935 | add PIB dataset | This pull request will add PIB dataset. | closed | https://github.com/huggingface/datasets/pull/935 | 2020-11-30T22:55:43 | 2020-12-01T23:17:11 | 2020-12-01T23:17:11 | {"login": "thevasudevgupta", "id": 53136577, "type": "User"} | [] | true | [] |