Column schema (type, with observed min/max):
id: int64 (599M to 3.26B)
number: int64 (1 to 7.7k)
title: string (lengths 1 to 290)
body: string (lengths 0 to 228k)
state: string (2 values)
html_url: string (lengths 46 to 51)
created_at: timestamp[s] (2020-04-14 10:18:02 to 2025-07-23 08:04:53)
updated_at: timestamp[s] (2020-04-27 16:04:17 to 2025-07-23 18:53:44)
closed_at: timestamp[s] (2020-04-14 12:01:40 to 2025-07-23 16:44:42)
user: dict
labels: list (lengths 0 to 4)
is_pull_request: bool (2 classes)
comments: list (lengths 0 to 0)
1,227,592,826
4,290
Update paper link in medmcqa dataset card
Updating readme in medmcqa dataset.
closed
https://github.com/huggingface/datasets/pull/4290
2022-05-06T08:52:51
2022-09-30T11:51:28
2022-09-30T11:49:07
{ "login": "monk1337", "id": 17107749, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,226,821,732
4,288
Add missing `faiss` import to fix https://github.com/huggingface/datasets/issues/4287
This PR fixes the issue recently mentioned in https://github.com/huggingface/datasets/issues/4287 🤗
closed
https://github.com/huggingface/datasets/pull/4288
2022-05-05T15:21:49
2022-05-10T12:55:06
2022-05-10T12:09:48
{ "login": "alvarobartt", "id": 36760800, "type": "User" }
[]
true
[]
1,226,806,652
4,287
"NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None
## Describe the bug When using `datasets` to calculate the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is triggered when trying to calculate those on a device (GPU), so `.add_faiss_index(..., device=0)` fails with that exception. All that assuming that `datasets` is properly...
closed
https://github.com/huggingface/datasets/issues/4287
2022-05-05T15:09:45
2022-05-10T13:53:19
2022-05-10T13:53:19
{ "login": "alvarobartt", "id": 36760800, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,226,758,621
4,286
Add Lahnda language tag
This language is present in [Wikimedia's WIT](https://huggingface.co/datasets/wikimedia/wit_base) dataset.
closed
https://github.com/huggingface/datasets/pull/4286
2022-05-05T14:34:20
2022-05-10T12:10:04
2022-05-10T12:02:38
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,226,374,831
4,285
Update LexGLUE README.md
Update the leaderboard based on the latest results presented in the ACL 2022 version of the article.
closed
https://github.com/huggingface/datasets/pull/4285
2022-05-05T08:36:50
2022-05-05T13:39:04
2022-05-05T13:33:35
{ "login": "iliaschalkidis", "id": 1626984, "type": "User" }
[]
true
[]
1,226,200,727
4,284
Issues in processing very large datasets
## Describe the bug I'm trying to add a feature called "subgraph" to CNN/DM dataset (modifications on run_summarization.py of Huggingface Transformers script) --- I'm not quite sure if I'm doing it the right way, though--- but the main problem appears when the training starts where the error ` [OSError: [Errno 12] Can...
closed
https://github.com/huggingface/datasets/issues/4284
2022-05-05T05:01:09
2023-07-25T15:12:38
2023-07-25T15:12:38
{ "login": "sajastu", "id": 10419055, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,225,686,988
4,283
Fix filesystem docstring
This PR untangles the `S3FileSystem` docstring so the [parameters](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes#parameters) are properly displayed.
closed
https://github.com/huggingface/datasets/pull/4283
2022-05-04T17:42:42
2022-05-06T16:32:02
2022-05-06T06:22:17
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[]
true
[]
1,225,616,545
4,282
Don't do unnecessary list type casting to avoid replacing None values by empty lists
In certain cases, `None` values are replaced by empty lists when casting feature types. It happens every time you cast an array of nested lists like [None, [0, 1, 2, 3]] to a different type (to change the integer precision for example). In this case you'd get [[], [0, 1, 2, 3]] for example. This issue comes from PyA...
closed
https://github.com/huggingface/datasets/pull/4282
2022-05-04T16:37:01
2022-05-06T10:43:58
2022-05-06T10:37:00
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
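The PR above (#4282) describes `None` entries in nested lists being silently replaced by empty lists during a type cast. As a rough, pure-Python picture of the *desired* behavior (this is an illustration, not PyArrow's or the library's actual cast logic; `cast_nested` is a hypothetical helper):

```python
# Illustrative sketch: a nested-list cast must preserve None entries
# instead of turning them into empty lists.

def cast_nested(values, target_type):
    # None rows stay None; non-None rows have their elements converted
    return [None if v is None else [target_type(x) for x in v] for v in values]

print(cast_nested([None, [0, 1, 2, 3]], float))  # [None, [0.0, 1.0, 2.0, 3.0]]
```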
1,225,556,939
4,281
Remove a copy-paste sentence in dataset cards
Remove the following copy-paste sentence from dataset cards: ``` We show detailed information for up to 5 configurations of the dataset. ```
closed
https://github.com/huggingface/datasets/pull/4281
2022-05-04T15:41:55
2022-05-06T08:38:03
2022-05-04T18:33:16
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,225,446,844
4,280
Add missing features to commonsense_qa dataset
Fix partially #4275.
closed
https://github.com/huggingface/datasets/pull/4280
2022-05-04T14:24:26
2022-05-06T14:23:57
2022-05-06T14:16:46
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,225,300,273
4,279
Update minimal PyArrow version warning
Update the minimal PyArrow version warning (should've been part of #4250).
closed
https://github.com/huggingface/datasets/pull/4279
2022-05-04T12:26:09
2022-05-05T08:50:58
2022-05-05T08:43:47
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,225,122,123
4,278
Add missing features to openbookqa dataset for additional config
Fix partially #4276.
closed
https://github.com/huggingface/datasets/pull/4278
2022-05-04T09:22:50
2022-05-06T13:13:20
2022-05-06T13:06:01
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,225,002,286
4,277
Enable label alignment for token classification datasets
This PR extends the `Dataset.align_labels_with_mapping()` method to support alignment of label mappings between datasets and models for token classification (e.g. NER). Example of usage: ```python from datasets import load_dataset ner_ds = load_dataset("conll2003", split="train") # returns [3, 0, 7, 0, 0, 0,...
closed
https://github.com/huggingface/datasets/pull/4277
2022-05-04T07:15:16
2022-05-06T15:42:15
2022-05-06T15:36:31
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
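The label-alignment idea behind PR #4277 can be sketched in plain Python: each integer label id in the dataset is mapped through its string name into the model's id space. The names `dataset_id2label` and `model_label2id` below are illustrative stand-ins, not the library's internals:

```python
# Hypothetical sketch of label alignment for token classification:
# remap each dataset label id, via its string name, to the model's id.

def align_labels(examples, dataset_id2label, model_label2id):
    return [
        [model_label2id[dataset_id2label[i]] for i in seq]
        for seq in examples
    ]

dataset_id2label = {0: "O", 1: "B-PER", 2: "I-PER"}
model_label2id = {"O": 2, "B-PER": 0, "I-PER": 1}

aligned = align_labels([[0, 1, 2, 0]], dataset_id2label, model_label2id)
print(aligned)  # [[2, 0, 1, 2]]
```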
1,224,949,252
4,276
OpenBookQA has missing and inconsistent field names
## Describe the bug OpenBookQA implementation is inconsistent with the original dataset. We need to: 1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format. 2. Add missing additional fields: - 'fact1': row['fact1'], - 'humanScore': row['humanSc...
closed
https://github.com/huggingface/datasets/issues/4276
2022-05-04T05:51:52
2022-10-11T17:11:53
2022-10-05T13:50:03
{ "login": "vblagoje", "id": 458335, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,224,943,414
4,275
CommonSenseQA has missing and inconsistent field names
## Describe the bug In short, CommonSenseQA implementation is inconsistent with the original dataset. More precisely, we need to: 1. Add the dataset matching "id" field. The current dataset, instead, regenerates monotonically increasing id. 2. The [“question”][“stem”] field is flattened into "question". We sh...
open
https://github.com/huggingface/datasets/issues/4275
2022-05-04T05:38:59
2022-05-04T11:41:18
null
{ "login": "vblagoje", "id": 458335, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,224,740,303
4,274
Add API code examples for IterableDataset
This PR adds API code examples for `IterableDataset` and `IterableDatasetDicts`.
closed
https://github.com/huggingface/datasets/pull/4274
2022-05-03T22:44:17
2022-05-04T16:29:32
2022-05-04T16:22:04
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
true
[]
1,224,681,036
4,273
leaderboard info added for TNE
null
closed
https://github.com/huggingface/datasets/pull/4273
2022-05-03T21:35:41
2022-05-05T13:25:24
2022-05-05T13:18:13
{ "login": "yanaiela", "id": 8031035, "type": "User" }
[]
true
[]
1,224,635,660
4,272
Fix typo in logging docs
This PR fixes #4271.
closed
https://github.com/huggingface/datasets/pull/4272
2022-05-03T20:47:57
2022-05-04T15:42:27
2022-05-04T06:58:36
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[]
true
[]
1,224,404,403
4,271
A typo in docs of datasets.disable_progress_bar
## Describe the bug in the docs of V2.1.0 datasets.disable_progress_bar, we should replace "enable" with "disable".
closed
https://github.com/huggingface/datasets/issues/4271
2022-05-03T17:44:56
2022-05-04T06:58:35
2022-05-04T06:58:35
{ "login": "jiangwangyi", "id": 39762734, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,224,244,460
4,270
Fix style in openbookqa dataset
CI in PR: - #4259 was green, but after merging it to master, a code quality error appeared.
closed
https://github.com/huggingface/datasets/pull/4270
2022-05-03T15:21:34
2022-05-06T08:38:06
2022-05-03T16:20:52
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,223,865,145
4,269
Add license and point of contact to big_patent dataset
Update metadata of big_patent dataset with: - license - point of contact
closed
https://github.com/huggingface/datasets/pull/4269
2022-05-03T09:24:07
2022-05-06T08:38:09
2022-05-03T11:16:19
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,223,331,964
4,268
error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
## Describe the bug Error generated when attempting to download dataset ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered") ``` ## Expected results A clear and concise description of the expected results...
closed
https://github.com/huggingface/datasets/issues/4268
2022-05-02T20:34:25
2022-05-06T15:53:30
2022-05-03T11:23:48
{ "login": "i-am-neo", "id": 102043285, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,223,214,275
4,267
Replace data URL in SAMSum dataset within the same repository
Replace data URL with one in the same repository.
closed
https://github.com/huggingface/datasets/pull/4267
2022-05-02T18:38:08
2022-05-06T08:38:13
2022-05-02T19:03:49
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,223,116,436
4,266
Add HF Speech Bench to Librispeech Dataset Card
Adds the HF Speech Bench to Librispeech Dataset Card in place of the Papers With Code Leaderboard. Should improve usage and visibility of this leaderboard! Wondering whether this can also be done for [Common Voice 7](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) and [8](https://huggingface.co/dat...
closed
https://github.com/huggingface/datasets/pull/4266
2022-05-02T16:59:31
2022-05-05T08:47:20
2022-05-05T08:40:09
{ "login": "sanchit-gandhi", "id": 93869735, "type": "User" }
[]
true
[]
1,222,723,083
4,263
Rename imagenet2012 -> imagenet-1k
On the Hugging Face Hub, users refer to imagenet2012 (from #4178 ) as imagenet-1k in their model tags. To correctly link models to imagenet, we should rename this dataset `imagenet-1k`. Later we can add `imagenet-21k` as a new dataset if we want. Once this one is merged we can delete the `imagenet2012` dataset...
closed
https://github.com/huggingface/datasets/pull/4263
2022-05-02T10:26:21
2022-05-02T17:50:46
2022-05-02T16:32:57
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,222,130,749
4,262
Add YAML tags to Dataset Card rotten tomatoes
The dataset card for the rotten tomatoes / MR movie review dataset had some missing YAML tags. Hopefully, this also improves the visibility of this dataset now that paperswithcode and huggingface link to each other.
closed
https://github.com/huggingface/datasets/pull/4262
2022-05-01T11:59:08
2022-05-03T14:27:33
2022-05-03T14:20:35
{ "login": "mo6zes", "id": 10004251, "type": "User" }
[]
true
[]
1,221,883,779
4,261
data leakage in `webis/conclugen` dataset
## Describe the bug Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results. Furthermore, all splits contain duplicate samples. ## Steps to reproduce the bug ```pyth...
closed
https://github.com/huggingface/datasets/issues/4261
2022-04-30T17:43:37
2022-05-03T06:04:26
2022-05-03T06:04:26
{ "login": "xflashxx", "id": 54585776, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,221,830,292
4,260
Add mr_polarity movie review sentiment classification
Add the MR (Movie Review) dataset. The original dataset contains sentences from Rotten Tomatoes labeled as either "positive" or "negative". Homepage: [https://www.cs.cornell.edu/people/pabo/movie-review-data/](https://www.cs.cornell.edu/people/pabo/movie-review-data/) paperswithcode: [https://paperswithcode.com/d...
closed
https://github.com/huggingface/datasets/pull/4260
2022-04-30T13:19:33
2022-04-30T14:16:25
2022-04-30T14:16:25
{ "login": "mo6zes", "id": 10004251, "type": "User" }
[]
true
[]
1,221,768,025
4,259
Fix bug in choices labels in openbookqa dataset
This PR fixes the Bug in the openbookqa dataset as mentioned in this issue #3550. Fix #3550. cc. @lhoestq @mariosasko
closed
https://github.com/huggingface/datasets/pull/4259
2022-04-30T07:41:39
2022-05-04T06:31:31
2022-05-03T15:14:21
{ "login": "manandey", "id": 6687858, "type": "User" }
[]
true
[]
1,221,637,727
4,258
Fix/start token mask issue and update documentation
This PR fixes a couple of bugs: 1) the perplexity was calculated with a 0 in the attention mask for the start token, which was causing incorrectly high perplexity scores; 2) the documentation was not updated.
closed
https://github.com/huggingface/datasets/pull/4258
2022-04-29T22:42:44
2022-05-02T16:33:20
2022-05-02T16:26:12
{ "login": "TristanThrush", "id": 20826878, "type": "User" }
[]
true
[]
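The general mechanism behind the perplexity fix in #4258 is that per-token losses only contribute to the average where the attention mask is 1, and the averaged loss is then exponentiated. A minimal, generic sketch (not the library's actual implementation; `masked_perplexity` and the example loss values are hypothetical):

```python
import math

# Masked perplexity: average negative log-likelihood over positions
# whose attention mask is 1, then exponentiate.

def masked_perplexity(token_nlls, attention_mask):
    losses = [nll for nll, m in zip(token_nlls, attention_mask) if m == 1]
    return math.exp(sum(losses) / len(losses))

nlls = [5.0, 1.0, 1.0, 1.0]  # first position carries an inflated loss
print(masked_perplexity(nlls, [0, 1, 1, 1]))  # mask excludes position 0
print(masked_perplexity(nlls, [1, 1, 1, 1]))  # mask includes all positions
```

Which positions the mask includes changes the score directly, which is why a wrong mask bit on the start token distorts the result.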
1,221,393,137
4,257
Create metric card for Mahalanobis Distance
Proposing a metric card to better explain how Mahalanobis distance works (last one for now :sweat_smile:)
closed
https://github.com/huggingface/datasets/pull/4257
2022-04-29T18:37:27
2022-05-02T14:50:18
2022-05-02T14:43:24
{ "login": "sashavor", "id": 14205986, "type": "User" }
[]
true
[]
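The quantity the Mahalanobis metric card (#4257) explains can be written down in a few lines. This is a minimal sketch of the definition, not the metric's implementation; it takes the *inverse* covariance matrix `vi` as given:

```python
import math

# Mahalanobis distance of point x from mean mu with inverse covariance VI:
#     d(x) = sqrt((x - mu)^T . VI . (x - mu))

def mahalanobis(x, mu, vi):
    d = [a - b for a, b in zip(x, mu)]
    q = sum(d[i] * vi[i][j] * d[j]
            for i in range(len(d)) for j in range(len(d)))
    return math.sqrt(q)

# With the identity as inverse covariance it reduces to Euclidean distance:
identity = [[1, 0], [0, 1]]
print(mahalanobis([3, 4], [0, 0], identity))  # 5.0
```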
1,221,379,625
4,256
Create metric card for MSE
Proposing a metric card for Mean Squared Error
closed
https://github.com/huggingface/datasets/pull/4256
2022-04-29T18:21:22
2022-05-02T14:55:42
2022-05-02T14:48:47
{ "login": "sashavor", "id": 14205986, "type": "User" }
[]
true
[]
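The metric documented by the MSE card (#4256) is simple enough to state inline. A minimal sketch of the definition (not the library's `_compute` code):

```python
# Mean squared error: average of squared differences between
# references and predictions.

def mean_squared_error(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

print(mean_squared_error([1.0, 2.0, 3.0], [1.0, 1.0, 5.0]))  # (0 + 1 + 4) / 3
```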
1,221,142,899
4,255
No google drive URL for pubmed_qa
I hosted the data files in https://huggingface.co/datasets/pubmed_qa. This is allowed because the data is under the MIT license. cc @stas00
closed
https://github.com/huggingface/datasets/pull/4255
2022-04-29T15:55:46
2022-04-29T16:24:55
2022-04-29T16:18:56
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,220,204,395
4,254
Replace data URL in SAMSum dataset and support streaming
This PR replaces data URL in SAMSum dataset: - original host (arxiv.org) does not allow HTTP Range requests - we have hosted the data on the Hub (license: CC BY-NC-ND 4.0) Moreover, it implements support for streaming. Fix #4146. Related to: #4236. CC: @severo
closed
https://github.com/huggingface/datasets/pull/4254
2022-04-29T08:21:43
2022-05-06T08:38:16
2022-04-29T16:26:09
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,219,286,408
4,253
Create metric cards for mean IOU
Proposing a metric card for mIoU :rocket: Sorry for spamming you with review requests, @albertvillanova! :hugs:
closed
https://github.com/huggingface/datasets/pull/4253
2022-04-28T20:58:27
2022-04-29T17:44:47
2022-04-29T17:38:06
{ "login": "sashavor", "id": 14205986, "type": "User" }
[]
true
[]
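The mean IoU the card in #4253 documents averages, over classes, the ratio of intersection to union between predicted and reference pixel sets. A hedged, pure-Python sketch of that definition over flattened masks (not the metric's actual implementation, which works on image arrays):

```python
# Mean IoU over segmentation classes: per class,
# IoU = |pred ∩ ref| / |pred ∪ ref|, then average over classes present.

def mean_iou(pred, ref, num_classes):
    ious = []
    for c in range(num_classes):
        p = {i for i, v in enumerate(pred) if v == c}
        r = {i for i, v in enumerate(ref) if v == c}
        if p | r:  # skip classes absent from both masks
            ious.append(len(p & r) / len(p | r))
    return sum(ious) / len(ious)

# Two classes over a flattened 4-pixel mask: IoUs are 1/2 and 2/3.
print(mean_iou([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2))
```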
1,219,151,100
4,252
Creating metric card for MAE
Initial proposal for MAE metric card
closed
https://github.com/huggingface/datasets/pull/4252
2022-04-28T19:04:33
2022-04-29T16:59:11
2022-04-29T16:52:30
{ "login": "sashavor", "id": 14205986, "type": "User" }
[]
true
[]
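For the MAE card proposed in #4252, the definition is the absolute-error analogue of MSE. A minimal sketch:

```python
# Mean absolute error: average of absolute differences between
# references and predictions.

def mean_absolute_error(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

print(mean_absolute_error([1.0, 2.0, 3.0], [2.0, 2.0, 1.0]))  # (1 + 0 + 2) / 3 = 1.0
```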
1,219,116,354
4,251
Metric card for the XTREME-S dataset
Proposing a metric card for the XTREME-S dataset :hugs:
closed
https://github.com/huggingface/datasets/pull/4251
2022-04-28T18:32:19
2022-04-29T16:46:11
2022-04-29T16:38:46
{ "login": "sashavor", "id": 14205986, "type": "User" }
[]
true
[]
1,219,093,830
4,250
Bump PyArrow Version to 6
Fixes #4152. This PR updates the PyArrow version to 6 in setup.py and in the CI job files .circleci/config.yaml and .github/workflows/benchmarks.yaml. This will fix the ArrayND error which exists in pyarrow 5.
closed
https://github.com/huggingface/datasets/pull/4250
2022-04-28T18:10:50
2022-05-04T09:36:52
2022-05-04T09:29:46
{ "login": "dnaveenr", "id": 17746528, "type": "User" }
[]
true
[]
1,218,524,424
4,249
Support streaming XGLUE dataset
Support streaming XGLUE dataset. Fix #4247. CC: @severo
closed
https://github.com/huggingface/datasets/pull/4249
2022-04-28T10:27:23
2022-05-06T08:38:21
2022-04-28T16:08:03
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,218,460,444
4,248
conll2003 dataset loads original data.
## Describe the bug I load `conll2003` dataset to use refined data like [this](https://huggingface.co/datasets/conll2003/viewer/conll2003/train) preview, but it is original data that contains `'-DOCSTART- -X- -X- O'` text. Is this a bug or should I use another dataset_name like `lhoestq/conll2003` ? ## Steps to...
closed
https://github.com/huggingface/datasets/issues/4248
2022-04-28T09:33:31
2022-07-18T07:15:48
2022-07-18T07:15:48
{ "login": "sue991", "id": 26458611, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,218,320,882
4,247
The data preview of XGLUE
It seems that something is wrong with the data preview of XGLUE.
closed
https://github.com/huggingface/datasets/issues/4247
2022-04-28T07:30:50
2022-04-29T08:23:28
2022-04-28T16:08:03
{ "login": "czq1999", "id": 49108847, "type": "User" }
[]
false
[]
1,218,320,293
4,246
Support to load dataset with TSV files by passing only dataset name
This PR implements support to load a dataset (w/o script) containing TSV files by passing only the dataset name (no need to pass `sep='\t'`): ```python ds = load_dataset("dataset/name") ``` The refactoring allows for future builder kwargs customizations based on file extension. Related to #4238.
closed
https://github.com/huggingface/datasets/pull/4246
2022-04-28T07:30:15
2022-05-06T08:38:28
2022-05-06T08:14:07
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,217,959,400
4,245
Add code examples for DatasetDict
This PR adds code examples for `DatasetDict` in the API reference :)
closed
https://github.com/huggingface/datasets/pull/4245
2022-04-27T22:52:22
2022-04-29T18:19:34
2022-04-29T18:13:03
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
true
[]
1,217,732,221
4,244
task id update
Changed multi-input text classification to a task id instead of a category.
closed
https://github.com/huggingface/datasets/pull/4244
2022-04-27T18:28:14
2022-05-04T10:43:53
2022-05-04T10:36:37
{ "login": "nazneenrajani", "id": 3278583, "type": "User" }
[]
true
[]
1,217,689,909
4,243
WIP: Initial shades loading script and readme
null
closed
https://github.com/huggingface/datasets/pull/4243
2022-04-27T17:45:43
2022-10-03T09:36:35
2022-10-03T09:36:35
{ "login": "shayne-longpre", "id": 69018523, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,217,665,960
4,242
Update auth when mirroring datasets on the hub
We don't need to use extraHeaders for rate limits anymore. Anyway, extraHeaders was not working with Git LFS because it was passing the wrong auth to S3.
closed
https://github.com/huggingface/datasets/pull/4242
2022-04-27T17:22:31
2022-04-27T17:37:04
2022-04-27T17:30:42
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,217,423,686
4,241
NonMatchingChecksumError when attempting to download GLUE
## Describe the bug I am trying to download the GLUE dataset from the NLP module but get an error (see below). ## Steps to reproduce the bug ```python import nlp nlp.__version__ # '0.2.0' nlp.load_dataset('glue', name="rte", download_mode="force_redownload") ``` ## Expected results I expect the dataset to ...
closed
https://github.com/huggingface/datasets/issues/4241
2022-04-27T14:14:21
2022-04-28T07:45:27
2022-04-28T07:45:27
{ "login": "drussellmrichie", "id": 9650729, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,217,287,594
4,240
Fix yield for crd3
Modified the `_generate_examples` function to consider all the turns for a chunk id as a single example Modified the features accordingly ``` "turns": [ { "names": datasets.features.Sequence(datasets.Value("string")), "utterances": ...
closed
https://github.com/huggingface/datasets/pull/4240
2022-04-27T12:31:36
2022-04-29T12:41:41
2022-04-29T12:41:41
{ "login": "shanyas10", "id": 21066979, "type": "User" }
[]
true
[]
1,217,269,689
4,239
Small fixes in ROC AUC docs
The list of use cases did not render on GitHub with the prepended spacing. Additionally, some typos were fixed.
closed
https://github.com/huggingface/datasets/pull/4239
2022-04-27T12:15:50
2022-05-02T13:28:57
2022-05-02T13:22:03
{ "login": "wschella", "id": 9478856, "type": "User" }
[]
true
[]
1,217,168,123
4,238
Dataset caching policy
## Describe the bug I cannot clean cache of my datasets files, despite I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error ``` [/usr/local/lib/python3.7/dist-packages/d...
closed
https://github.com/huggingface/datasets/issues/4238
2022-04-27T10:42:11
2022-04-27T16:29:25
2022-04-27T16:28:50
{ "login": "loretoparisi", "id": 163333, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,217,121,044
4,237
Common Voice 8 doesn't show datasets viewer
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
closed
https://github.com/huggingface/datasets/issues/4237
2022-04-27T10:05:20
2022-05-10T12:17:05
2022-05-10T12:17:04
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,217,115,691
4,236
Replace data URL in big_patent dataset and support streaming
This PR replaces the Google Drive URL with our Hub one, once the data owners have approved to host their data on the Hub. Moreover, this PR makes the dataset streamable. Fix #4217.
closed
https://github.com/huggingface/datasets/pull/4236
2022-04-27T10:01:13
2022-06-10T08:10:55
2022-05-02T18:21:15
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,216,952,640
4,235
How to load VERY LARGE dataset?
### System Info ```shell I am using transformer trainer while meeting the issue. The trainer requests torch.utils.data.Dataset as input, which loads the whole dataset into the memory at once. Therefore, when the dataset is too large to load, there's nothing I can do except using IterDataset, which loads samples of da...
closed
https://github.com/huggingface/datasets/issues/4235
2022-04-27T07:50:13
2023-07-25T15:07:57
2023-07-25T15:07:57
{ "login": "CaoYiqingT", "id": 45160643, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,216,818,846
4,234
Autoeval config
Added autoeval config to imdb as pilot
closed
https://github.com/huggingface/datasets/pull/4234
2022-04-27T05:32:10
2022-05-06T13:20:31
2022-05-05T18:20:58
{ "login": "nazneenrajani", "id": 3278583, "type": "User" }
[]
true
[]
1,216,665,044
4,233
Autoeval
null
closed
https://github.com/huggingface/datasets/pull/4233
2022-04-27T01:32:09
2022-04-27T05:29:30
2022-04-27T01:32:23
{ "login": "nazneenrajani", "id": 3278583, "type": "User" }
[]
true
[]
1,216,659,444
4,232
adding new tag to tasks.json and modified for existing datasets
null
closed
https://github.com/huggingface/datasets/pull/4232
2022-04-27T01:21:09
2022-05-03T14:23:56
2022-05-03T14:16:39
{ "login": "nazneenrajani", "id": 3278583, "type": "User" }
[]
true
[]
1,216,651,960
4,231
Fix invalid url to CC-Aligned dataset
The CC-Aligned dataset URL has changed to https://data.statmt.org/cc-aligned/; the old address http://www.statmt.org/cc-aligned/ is no longer valid.
closed
https://github.com/huggingface/datasets/pull/4231
2022-04-27T01:07:01
2022-05-16T17:01:13
2022-05-16T16:53:12
{ "login": "juntang-zhuang", "id": 44451229, "type": "User" }
[]
true
[]
1,216,643,661
4,230
Why the `conll2003` dataset on huggingface only contains the `en` subset? Where is the German data?
![image](https://user-images.githubusercontent.com/37113676/165416606-96b5db18-b16c-4b6b-928c-de8620fd943e.png) But on huggingface datasets: ![image](https://user-images.githubusercontent.com/37113676/165416649-8fd77980-ca0d-43f0-935e-f398ba8323a4.png) Where is the German data?
closed
https://github.com/huggingface/datasets/issues/4230
2022-04-27T00:53:52
2023-07-25T15:10:15
2023-07-25T15:10:15
{ "login": "beyondguo", "id": 37113676, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,216,638,968
4,229
new task tag
multi-input-text-classification tag for classification datasets that take more than one input
closed
https://github.com/huggingface/datasets/pull/4229
2022-04-27T00:47:08
2022-04-27T00:48:28
2022-04-27T00:48:17
{ "login": "nazneenrajani", "id": 3278583, "type": "User" }
[]
true
[]
1,216,523,043
4,228
new task tag
multi-input-text-classification tag for classification datasets that take more than one input
closed
https://github.com/huggingface/datasets/pull/4228
2022-04-26T22:00:33
2022-04-27T00:48:31
2022-04-27T00:46:31
{ "login": "nazneenrajani", "id": 3278583, "type": "User" }
[]
true
[]
1,216,455,316
4,227
Add f1 metric card, update docstring in py file
null
closed
https://github.com/huggingface/datasets/pull/4227
2022-04-26T20:41:03
2022-05-03T12:50:23
2022-05-03T12:43:33
{ "login": "emibaylor", "id": 27527747, "type": "User" }
[]
true
[]
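The F1 score documented by the card in #4227 is the harmonic mean of precision and recall. A minimal binary-classification sketch of the definition (not the metric's `_compute` code, which delegates to scikit-learn):

```python
# Binary F1: harmonic mean of precision (tp / (tp + fp))
# and recall (tp / (tp + fn)).

def f1_score(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(f1_score([1, 1, 0, 0], [1, 0, 1, 0]))  # precision 0.5, recall 0.5 -> 0.5
```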
1,216,331,073
4,226
Add pearsonr mc, update functionality to match the original docs
- adds pearsonr metric card - adds ability to return p-value - p-value was mentioned in the original docs as a return value, but there was no option to return it. I updated the _compute function slightly to have an option to return the p-value.
closed
https://github.com/huggingface/datasets/pull/4226
2022-04-26T18:30:46
2022-05-03T17:09:24
2022-05-03T17:02:28
{ "login": "emibaylor", "id": 27527747, "type": "User" }
[]
true
[]
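The statistic behind the Pearson r card in #4226 can be sketched directly from its definition; the p-value the PR adds would, in practice, come from a statistics library rather than this hand-rolled version:

```python
import math

# Pearson correlation coefficient:
#     r = cov(x, y) / (std(x) * std(y))

def pearsonr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearsonr([1, 2, 3, 4], [2, 4, 6, 8]))  # ≈ 1.0 for a perfectly linear relationship
```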
1,216,213,464
4,225
autoeval config
add train eval index for autoeval
closed
https://github.com/huggingface/datasets/pull/4225
2022-04-26T16:38:34
2022-04-27T00:48:31
2022-04-26T22:00:26
{ "login": "nazneenrajani", "id": 3278583, "type": "User" }
[]
true
[]
1,216,209,667
4,224
autoeval config
add train eval index for autoeval
closed
https://github.com/huggingface/datasets/pull/4224
2022-04-26T16:35:19
2022-04-26T16:36:45
2022-04-26T16:36:45
{ "login": "nazneenrajani", "id": 3278583, "type": "User" }
[]
true
[]
1,216,107,082
4,223
Add Accuracy Metric Card
- adds accuracy metric card - updates docstring in accuracy.py - adds .json file with metric card and docstring information
closed
https://github.com/huggingface/datasets/pull/4223
2022-04-26T15:10:46
2022-05-03T14:27:45
2022-05-03T14:20:47
{ "login": "emibaylor", "id": 27527747, "type": "User" }
[]
true
[]
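The accuracy metric documented in #4223 is the fraction of predictions that exactly match the references. A minimal sketch of the definition:

```python
# Accuracy: proportion of exact matches between predictions and references.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

print(accuracy([0, 1, 2, 1], [0, 1, 1, 1]))  # 3 of 4 correct -> 0.75
```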
1,216,056,439
4,222
Fix description links in dataset cards
I noticed many links were not properly displayed (only text, no link) on the Hub because of wrong syntax, e.g.: https://huggingface.co/datasets/big_patent This PR fixes all description links in dataset cards.
closed
https://github.com/huggingface/datasets/pull/4222
2022-04-26T14:36:25
2022-05-06T08:38:38
2022-04-26T16:52:29
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,215,911,182
4,221
Dictionary Feature
Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which afaik doesn't fit very well into the values and structures supported by Value and Sequence. Is there any suggested workaround, or am I missing something? Thank you in advance.
closed
https://github.com/huggingface/datasets/issues/4221
2022-04-26T12:50:18
2022-04-29T14:52:19
2022-04-28T17:04:58
{ "login": "jordiae", "id": 2944532, "type": "User" }
[ { "name": "question", "color": "d876e3" } ]
false
[]
1,215,225,802
4,220
Altered faiss installation comment
null
closed
https://github.com/huggingface/datasets/pull/4220
2022-04-26T01:20:43
2022-05-09T17:29:34
2022-05-09T17:22:09
{ "login": "vishalsrao", "id": 36671559, "type": "User" }
[]
true
[]
1,214,934,025
4,219
Add F1 Metric Card
null
closed
https://github.com/huggingface/datasets/pull/4219
2022-04-25T19:14:56
2022-04-26T20:44:18
2022-04-26T20:37:46
{ "login": "emibaylor", "id": 27527747, "type": "User" }
[]
true
[]
1,214,748,226
4,218
Make code for image downloading from image urls cacheable
Fix #4199
closed
https://github.com/huggingface/datasets/pull/4218
2022-04-25T16:17:59
2022-04-26T17:00:24
2022-04-26T13:38:26
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,214,688,141
4,217
Big_Patent dataset broken
## Dataset viewer issue for '*big_patent*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/big_patent/viewer/all/train)* *Unable to view because it says FileNotFound; also cannot download it through the Python API.* Am I the one who added this dataset? No
closed
https://github.com/huggingface/datasets/issues/4217
2022-04-25T15:31:45
2022-05-26T06:29:43
2022-05-02T18:21:15
{ "login": "Matthew-Larsen", "id": 54189843, "type": "User" }
[ { "name": "hosted-on-google-drive", "color": "8B51EF" } ]
false
[]
1,214,614,029
4,216
Avoid recursion error in map if example is returned as dict value
I noticed this bug while answering [this question](https://discuss.huggingface.co/t/correct-way-to-create-a-dataset-from-a-csv-file/15686/11?u=mariosasko). This code replicates the bug: ```python from datasets import Dataset dset = Dataset.from_dict({"en": ["aa", "bb"], "fr": ["cc", "dd"]}) dset.map(lambda ex: ...
closed
https://github.com/huggingface/datasets/pull/4216
2022-04-25T14:40:32
2022-05-04T17:20:06
2022-05-04T17:12:52
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,214,579,162
4,215
Add `drop_last_batch` to `IterableDataset.map`
Addresses this comment: https://github.com/huggingface/datasets/pull/3801#pullrequestreview-901736921
closed
https://github.com/huggingface/datasets/pull/4215
2022-04-25T14:15:19
2022-05-03T15:56:07
2022-05-03T15:48:54
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
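The `drop_last_batch` option added in #4215 can be pictured with a plain batching generator: the trailing incomplete batch is skipped instead of yielded. This is an illustrative sketch of the behavior, not the library's `IterableDataset.map` internals:

```python
# Batching with an optional drop of the last, incomplete batch.

def batched(iterable, batch_size, drop_last_batch=False):
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch and not drop_last_batch:  # leftover items form a short batch
        yield batch

print(list(batched(range(5), 2)))                        # [[0, 1], [2, 3], [4]]
print(list(batched(range(5), 2, drop_last_batch=True)))  # [[0, 1], [2, 3]]
```

Dropping the short tail batch is useful when a batched map function requires fixed-size inputs.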
1,214,572,430
4,214
Skip checksum computation in Imagefolder by default
Avoids having to set `ignore_verifications=True` in `load_dataset("imagefolder", ...)` to skip checksum verification and speed up loading. The user can still pass `DownloadConfig(record_checksums=True)` to not skip this part.
closed
https://github.com/huggingface/datasets/pull/4214
2022-04-25T14:10:41
2022-05-03T15:28:32
2022-05-03T15:21:29
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,214,510,010
4,213
ETT time series dataset
Ready for review.
closed
https://github.com/huggingface/datasets/pull/4213
2022-04-25T13:26:18
2022-05-05T12:19:21
2022-05-05T12:10:35
{ "login": "kashif", "id": 8100, "type": "User" }
[]
true
[]
1,214,498,582
4,212
[Common Voice] Make sure bytes are correctly deleted if `path` exists
`path` should be set to the local path inside the audio feature if it exists, so that bytes can correctly be deleted.
closed
https://github.com/huggingface/datasets/pull/4212
2022-04-25T13:18:26
2022-04-26T22:54:28
2022-04-26T22:48:27
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
1,214,361,837
4,211
DatasetDict containing Datasets with different features when pushed to hub gets remapped features
Hi there, I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual features but if I `push_to_hub` and then `load_dataset`, the features are all the same. Dataset and code...
closed
https://github.com/huggingface/datasets/issues/4211
2022-04-25T11:22:54
2023-04-06T19:25:50
2022-05-20T15:15:30
{ "login": "pietrolesci", "id": 61748653, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,214,089,130
4,210
TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'
### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.10.0+cu111 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed ...
closed
https://github.com/huggingface/datasets/issues/4210
2022-04-25T07:28:42
2022-05-31T12:16:31
2022-05-31T12:16:31
{ "login": "loretoparisi", "id": 163333, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,213,716,426
4,208
Add CMU MoCap Dataset
Resolves #3457 Dataset Request : Add CMU Graphics Lab Motion Capture dataset [#3457](https://github.com/huggingface/datasets/issues/3457) This PR adds the CMU MoCap Dataset. The authors didn't respond even after multiple follow-ups, so I ended up crawling the website to get categories, subcategories and descrip...
closed
https://github.com/huggingface/datasets/pull/4208
2022-04-24T17:31:08
2022-10-03T09:38:24
2022-10-03T09:36:30
{ "login": "dnaveenr", "id": 17746528, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,213,604,615
4,207
[Minor edit] Fix typo in class name
Typo: `datasets.DatsetDict` -> `datasets.DatasetDict`
closed
https://github.com/huggingface/datasets/pull/4207
2022-04-24T09:49:37
2022-05-05T13:17:47
2022-05-05T13:17:47
{ "login": "cakiki", "id": 3664563, "type": "User" }
[]
true
[]
1,212,715,581
4,206
Add Nerval Metric
This PR adds readme.md and ner_val.py to metrics. Nerval is a Python package that helps evaluate NER models. It creates a classification report and a confusion matrix at the entity level.
closed
https://github.com/huggingface/datasets/pull/4206
2022-04-22T19:45:00
2023-07-11T09:34:56
2023-07-11T09:34:55
{ "login": "maridda", "id": 49372461, "type": "User" }
[ { "name": "transfer-to-evaluate", "color": "E3165C" } ]
true
[]
1,212,466,138
4,205
Fix `convert_file_size_to_int` for kilobits and megabits
Minor change to fully align this function with the recent change in Transformers (https://github.com/huggingface/transformers/pull/16891)
closed
https://github.com/huggingface/datasets/pull/4205
2022-04-22T14:56:21
2022-05-03T15:28:42
2022-05-03T15:21:48
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
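For context, the alignment with Transformers is about treating a lowercase trailing `b` as bits rather than bytes, so `"500Mb"` (megabits) is an eighth of `"500MB"` (megabytes). A simplified sketch of that convention (the exact suffix set and behavior of the real helper should be treated as an assumption here):

```python
def convert_file_size_to_int(size):
    """Convert "500MB"-style strings to bytes; a lowercase trailing "b" means bits."""
    if isinstance(size, int):
        return size
    units = {"GIB": 2**30, "MIB": 2**20, "KIB": 2**10, "GB": 10**9, "MB": 10**6, "KB": 10**3}
    for suffix, factor in units.items():
        if size.upper().endswith(suffix):
            int_size = int(size[: -len(suffix)]) * factor
            # "Gb"/"Mb"/"Kb" are gigabits/megabits/kilobits: 8 bits per byte.
            return int_size // 8 if size.endswith("b") else int_size
    raise ValueError(f"size {size!r} is not in a valid format")
```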
1,212,431,764
4,204
Add Recall Metric Card
What this PR mainly does: - add metric card for recall metric - update docs in recall python file Note: I've also included a .json file with all of the metric card information. I've started compiling the relevant information in this type of .json files, and then using a script I wrote to generate the formatted met...
closed
https://github.com/huggingface/datasets/pull/4204
2022-04-22T14:24:26
2022-05-03T13:23:23
2022-05-03T13:16:24
{ "login": "emibaylor", "id": 27527747, "type": "User" }
[]
true
[]
1,212,431,067
4,203
Add Precision Metric Card
What this PR mainly does: - add metric card for precision metric - update docs in precision python file Note: I've also included a .json file with all of the metric card information. I've started compiling the relevant information in this type of .json files, and then using a script I wrote to generate the formatt...
closed
https://github.com/huggingface/datasets/pull/4203
2022-04-22T14:23:48
2022-05-03T14:23:40
2022-05-03T14:16:46
{ "login": "emibaylor", "id": 27527747, "type": "User" }
[]
true
[]
1,212,326,288
4,202
Fix some type annotation in doc
null
closed
https://github.com/huggingface/datasets/pull/4202
2022-04-22T12:53:31
2022-04-22T15:03:00
2022-04-22T14:56:43
{ "login": "thomasw21", "id": 24695242, "type": "User" }
[]
true
[]
1,212,086,420
4,201
Update GH template for dataset viewer issues
Update template to use new issue forms instead. With this PR we can check if this new feature is useful for us. Once validated, we can update the other templates. CC: @severo
closed
https://github.com/huggingface/datasets/pull/4201
2022-04-22T09:34:44
2022-05-06T08:38:43
2022-04-26T08:45:55
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,211,980,110
4,200
Add to docs how to load from local script
This option was missing from the docs guide (it was only explained in the docstring of `load_dataset`). Although this is an infrequent use case, there might be some users interested in it. Related to #4192 CC: @stevhliu
closed
https://github.com/huggingface/datasets/pull/4200
2022-04-22T08:08:25
2022-05-06T08:39:25
2022-04-23T05:47:25
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,211,953,308
4,199
Cache miss during reload for datasets using image fetch utilities through map
## Describe the bug It looks like the results of a `.map` operation on a dataset miss the cache when you reload the script, and always run from scratch. In the same interpreter session, they are able to find the cache and reload it. But when you exit the interpreter and reload it, the downloading starts from scratch. ...
closed
https://github.com/huggingface/datasets/issues/4199
2022-04-22T07:47:08
2022-04-26T17:00:32
2022-04-26T13:38:26
{ "login": "apsdehal", "id": 3616806, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,211,456,559
4,198
There is no dataset
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
closed
https://github.com/huggingface/datasets/issues/4198
2022-04-21T19:19:26
2022-05-03T11:29:05
2022-04-22T06:12:25
{ "login": "wilfoderek", "id": 1625647, "type": "User" }
[]
false
[]
1,211,342,558
4,197
Add remove_columns=True
This should fix all the issues we have with in-place operations in mapping functions. This is crucial, as we currently do some weird things like: ``` def apply(batch): batch_size = len(batch["id"]) batch["text"] = ["potato" for _ in range(batch_size)] return {} # Columns are: {"id": int} dset.map(apply, bat...
closed
https://github.com/huggingface/datasets/pull/4197
2022-04-21T17:28:13
2023-09-24T10:02:32
2022-04-22T14:45:30
{ "login": "thomasw21", "id": 24695242, "type": "User" }
[]
true
[]
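The hazard described in this PR is that a mapped function can mutate its input batch in place while returning nothing, making the output columns unpredictable. A sketch of the proposed `remove_columns=True` semantics, where only the columns the function actually returns survive (`map_batch` is a hypothetical helper, not the real `Dataset.map`):

```python
import copy

def map_batch(batch, function, remove_columns=False):
    """Apply `function` to a deep copy of the batch so in-place edits cannot leak."""
    returned = function(copy.deepcopy(batch))
    if remove_columns:
        return returned  # keep only columns explicitly returned
    merged = dict(batch)
    merged.update(returned)
    return merged

def apply(batch):
    batch["text"] = ["potato" for _ in range(len(batch["id"]))]  # in-place edit
    return {}

# With remove_columns=True the stray in-place column never reaches the output.
print(map_batch({"id": [1, 2]}, apply, remove_columns=True))  # {}
```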
1,211,271,261
4,196
Embed image and audio files in `save_to_disk`
Following https://github.com/huggingface/datasets/pull/4184, currently a dataset saved using `save_to_disk` doesn't actually contain the bytes of the image or audio files. Instead it stores the path to your local files. Adding `embed_external_files` and setting it to True by default in `save_to_disk` would be kind of a b...
closed
https://github.com/huggingface/datasets/issues/4196
2022-04-21T16:25:18
2022-12-14T18:22:59
2022-12-14T18:22:59
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
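Concretely, "embedding" means replacing a `{"bytes": None, "path": ...}` record that points at a local file with one that carries the file's bytes, so the saved dataset is self-contained. A minimal sketch under that assumed encoding (the dict layout mirrors the Image/Audio features, but this helper is hypothetical):

```python
import os
import tempfile

def embed_external_file(value: dict) -> dict:
    """Read local file bytes into the record so it no longer depends on the filesystem."""
    if value.get("bytes") is None and value.get("path") and os.path.isfile(value["path"]):
        with open(value["path"], "rb") as f:
            value = {"bytes": f.read(), "path": os.path.basename(value["path"])}
    return value

with tempfile.NamedTemporaryFile(delete=False, suffix=".wav") as f:
    f.write(b"RIFF....")
    local_path = f.name
embedded = embed_external_file({"bytes": None, "path": local_path})
assert embedded["bytes"] == b"RIFF...."
os.remove(local_path)
```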
1,210,958,602
4,194
Support lists of multi-dimensional numpy arrays
Fix #4191. CC: @SaulLu
closed
https://github.com/huggingface/datasets/pull/4194
2022-04-21T12:22:26
2022-05-12T15:16:34
2022-05-12T15:08:40
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,210,734,701
4,193
Document save_to_disk and push_to_hub on images and audio files
Following https://github.com/huggingface/datasets/pull/4187, I explained in the documentation of `save_to_disk` and `push_to_hub` how they handle image and audio data.
closed
https://github.com/huggingface/datasets/pull/4193
2022-04-21T09:04:36
2022-04-22T09:55:55
2022-04-22T09:49:31
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,210,692,554
4,192
load_dataset can't load local dataset,Unable to find ...
Traceback (most recent call last): File "/home/gs603/ahf/pretrained/model.py", line 48, in <module> dataset = load_dataset("json",data_files="dataset/dataset_infos.json") File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1675, in load_dataset **config_kwa...
closed
https://github.com/huggingface/datasets/issues/4192
2022-04-21T08:28:58
2022-04-25T16:51:57
2022-04-22T07:39:53
{ "login": "ahf876828330", "id": 33253979, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,210,028,090
4,191
feat: create an `Array3D` column from a list of arrays of dimension 2
**Is your feature request related to a problem? Please describe.** It is possible to create an `Array2D` column from a list of arrays of dimension 1. Similarly, I think it might be nice to be able to create an `Array3D` column from a list of lists of arrays of dimension 1. To illustrate my proposal, let's take the...
closed
https://github.com/huggingface/datasets/issues/4191
2022-04-20T18:04:32
2022-05-12T15:08:40
2022-05-12T15:08:40
{ "login": "SaulLu", "id": 55560583, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
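The request amounts to letting a list of equal-shaped 2D arrays be treated as one 3D array. In NumPy terms this is just `np.stack` plus a shape check; a sketch of that check (an illustration of the idea, not the `Array3D` feature's internals):

```python
import numpy as np

def to_3d(list_of_2d):
    """Stack a list of equal-shaped 2D arrays into a single 3D array."""
    arrays = [np.asarray(a) for a in list_of_2d]
    if any(a.ndim != 2 or a.shape != arrays[0].shape for a in arrays):
        raise ValueError("all elements must be 2D arrays of the same shape")
    return np.stack(arrays)

volume = to_3d([np.zeros((2, 3)), np.ones((2, 3))])
print(volume.shape)  # (2, 2, 3)
```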
1,209,901,677
4,190
Deprecate `shard_size` in `push_to_hub` in favor of `max_shard_size`
This PR adds a `max_shard_size` param to `push_to_hub` and deprecates `shard_size` in favor of this new param to have a more descriptive name (a shard has at most the `shard_size` bytes in `push_to_hub`) for the param and to align the API with [Transformers](https://github.com/huggingface/transformers/blob/ff06b1779173...
closed
https://github.com/huggingface/datasets/pull/4190
2022-04-20T16:08:01
2022-04-22T13:58:25
2022-04-22T13:52:00
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
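With a byte-based `max_shard_size`, the number of shards follows directly from the dataset's serialized size. A sketch of that arithmetic (`number_of_shards` is a hypothetical helper; the real `push_to_hub` also accepts strings such as `"500MB"`):

```python
import math

def number_of_shards(dataset_nbytes: int, max_shard_size: int) -> int:
    """Smallest shard count such that every shard stays under max_shard_size."""
    return max(1, math.ceil(dataset_nbytes / max_shard_size))

# A 1.2 GB dataset with 500 MB shards needs 3 shards.
print(number_of_shards(1_200_000_000, 500_000_000))  # 3
```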
1,209,881,351
4,189
Document how to use FAISS index for special operations
Document how to use FAISS index for special operations, by accessing the index itself. Close #4029.
closed
https://github.com/huggingface/datasets/pull/4189
2022-04-20T15:51:56
2022-05-06T08:43:10
2022-05-06T08:35:52
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,209,740,957
4,188
Support streaming cnn_dailymail dataset
Support streaming cnn_dailymail dataset. Fix #3969. CC: @severo
closed
https://github.com/huggingface/datasets/pull/4188
2022-04-20T14:04:36
2022-05-11T13:39:06
2022-04-20T15:52:49
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,209,721,532
4,187
Don't duplicate data when encoding audio or image
Right now if you pass both the `bytes` and a local `path` for audio or image data, then the `bytes` are unnecessarily written in the Arrow file, while we could just keep the local `path`. This PR discards the `bytes` when the audio or image file exists locally. In particular it's common for audio datasets builder...
closed
https://github.com/huggingface/datasets/pull/4187
2022-04-20T13:50:37
2022-04-21T09:17:00
2022-04-21T09:10:47
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,209,463,599
4,186
Fix outdated docstring about default dataset config
null
closed
https://github.com/huggingface/datasets/pull/4186
2022-04-20T10:04:51
2022-04-22T12:54:44
2022-04-22T12:48:31
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]