Schema of the records below (dtypes and value ranges as reported by the dataset viewer):

| column | dtype | values / range |
|---|---|---|
| id | int64 | 599M – 3.26B |
| number | int64 | 1 – 7.7k |
| title | string | lengths 1 – 290 |
| body | string | lengths 0 – 228k |
| state | string | 2 classes |
| html_url | string | lengths 46 – 51 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 – 2025-07-23 08:04:53 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 – 2025-07-23 18:53:44 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 – 2025-07-23 16:44:42 |
| user | dict | nested object |
| labels | list | lengths 0 – 4 |
| is_pull_request | bool | 2 classes |
| comments | list | lengths 0 – 0 |
1,190,025,878
4,083
Add SacreBLEU Metric Card
null
closed
https://github.com/huggingface/datasets/pull/4083
2022-04-01T16:24:56
2022-04-12T20:45:00
2022-04-12T20:38:40
{ "login": "emibaylor", "id": 27527747, "type": "User" }
[]
true
[]
1,189,965,845
4,082
Add chrF(++) Metric Card
null
closed
https://github.com/huggingface/datasets/pull/4082
2022-04-01T15:32:12
2022-04-12T20:43:55
2022-04-12T20:38:06
{ "login": "emibaylor", "id": 27527747, "type": "User" }
[]
true
[]
1,189,916,472
4,081
Close parquet writer properly in `push_to_hub`
We don’t call writer.close(), which causes https://github.com/huggingface/datasets/issues/4077. It can happen that we upload the file before the writer is garbage collected and writes the footer. I fixed this by explicitly closing the parquet writer. Close https://github.com/huggingface/datasets/issues/4077.
closed
https://github.com/huggingface/datasets/pull/4081
2022-04-01T14:58:50
2022-07-14T19:22:06
2022-04-01T16:16:19
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
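The failure mode this PR fixes (uploading the file before the writer's footer has been written) is generic to footer-based container formats. A minimal stdlib sketch of the same bug, using `zipfile` in place of pyarrow's parquet writer — the function names here are illustrative, not from the `datasets` codebase:

```python
import io
import zipfile

def snapshot_without_close() -> bytes:
    # Mimics uploading before writer.close() runs: entries are written,
    # but the footer (the zip central directory) is only emitted on close.
    buf = io.BytesIO()
    zf = zipfile.ZipFile(buf, "w")
    zf.writestr("data.txt", "hello")
    return buf.getvalue()

def snapshot_with_close() -> bytes:
    # Explicitly closing first, as the PR does for the parquet writer.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("data.txt", "hello")
    return buf.getvalue()

def is_readable(payload: bytes) -> bool:
    try:
        with zipfile.ZipFile(io.BytesIO(payload)) as zf:
            return zf.read("data.txt") == b"hello"
    except zipfile.BadZipFile:
        return False
```

The unclosed snapshot fails to open for the same reason pyarrow reports "Parquet magic bytes not found in footer" in #4077: the trailing metadata was never flushed.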
1,189,667,296
4,080
NonMatchingChecksumError for downloading conll2012_ontonotesv5 dataset
## Steps to reproduce the bug ```python datasets.load_dataset("conll2012_ontonotesv5", "english_v12") ``` ## Actual results ``` Downloading builder script: 32.2kB [00:00, 9.72MB/s] Downloading metadata: 20.0kB [00:00, 10...
closed
https://github.com/huggingface/datasets/issues/4080
2022-04-01T11:34:28
2022-04-01T13:59:10
2022-04-01T13:59:10
{ "login": "richarddwang", "id": 17963619, "type": "User" }
[ { "name": "duplicate", "color": "cfd3d7" }, { "name": "dataset bug", "color": "2edb81" } ]
false
[]
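A `NonMatchingChecksumError` like the one above is raised when the hash of the downloaded payload differs from the hash recorded in the dataset's metadata. A stdlib-only sketch of that kind of check — the function name and the `None` handling are assumptions for illustration, not the `datasets` API:

```python
import hashlib
from typing import Optional

def verify_checksum(payload: bytes, expected_sha256: Optional[str]) -> bool:
    # No recorded checksum: nothing to compare against, so treat as passing.
    if expected_sha256 is None:
        return True
    # Compare the actual digest of the downloaded bytes to the recorded one.
    return hashlib.sha256(payload).hexdigest() == expected_sha256
```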
1,189,521,576
4,079
Increase max retries for GitHub datasets
As GitHub recurrently raises connectivity issues, this PR increases the number of max retries to request GitHub datasets, as previously done for GitHub metrics: - #4063 Note that this is a temporary solution, while we decide when and how to load GitHub datasets from the Hub: - #4059 Fix #2048 Related to: - ...
closed
https://github.com/huggingface/datasets/pull/4079
2022-04-01T09:34:03
2022-04-01T15:32:40
2022-04-01T15:27:11
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,189,513,572
4,078
Fix GithubMetricModuleFactory instantiation with None download_config
Recent PR: - #4063 introduced a potential bug if `GithubMetricModuleFactory` is instantiated with a None `download_config`. This PR adds instantiation tests and fixes that potential issue. CC: @lhoestq
closed
https://github.com/huggingface/datasets/pull/4078
2022-04-01T09:26:58
2022-04-01T14:44:51
2022-04-01T14:39:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,189,467,585
4,077
ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
## Describe the bug When uploading a relatively large image dataset of > 1GB, reloading doesn't work for me, even though pushing to the hub went just fine. Basically, I do: ``` from datasets import load_dataset dataset = load_dataset("imagefolder", data_files="path_to_my_files") dataset.push_to_hub("dat...
closed
https://github.com/huggingface/datasets/issues/4077
2022-04-01T08:49:13
2022-04-01T16:16:19
2022-04-01T16:16:19
{ "login": "NielsRogge", "id": 48327001, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,188,478,867
4,076
Add ROUGE Metric Card
Add ROUGE metric card. I've left the 'Values from popular papers' section empty for the time being because I don't know the summarization literature very well and am therefore not sure which paper(s) to pull from (note that the original rouge paper does not seem to present specific values, just correlations with hum...
closed
https://github.com/huggingface/datasets/pull/4076
2022-03-31T18:34:34
2022-04-12T20:43:45
2022-04-12T20:37:38
{ "login": "emibaylor", "id": 27527747, "type": "User" }
[]
true
[]
1,188,462,162
4,075
Add CCAgT dataset
## Adding a Dataset - **Name:** CCAgT dataset: Images of Cervical Cells with AgNOR Stain Technique - **Description:** The dataset contains 2540 images (1600x1200 where each pixel is 0.111μm×0.111μm) from three different slides, having at least one nucleus per image. These images are from fields belonging to a sample ...
closed
https://github.com/huggingface/datasets/issues/4075
2022-03-31T18:20:28
2022-07-06T19:03:42
2022-07-06T19:03:42
{ "login": "johnnv1", "id": 20444345, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "vision", "color": "bfdadc" } ]
false
[]
1,188,449,142
4,074
Error in google/xtreme_s dataset card
**Link:** https://huggingface.co/datasets/google/xtreme_s Not a big deal but Hungarian is considered an Eastern European language, together with Serbian, Slovak, Slovenian (all correctly categorized; Slovenia is mostly to the West of Hungary, by the way).
closed
https://github.com/huggingface/datasets/issues/4074
2022-03-31T18:07:45
2022-04-01T08:12:56
2022-04-01T08:12:56
{ "login": "wranai", "id": 1048544, "type": "User" }
[ { "name": "documentation", "color": "0075ca" }, { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,188,364,711
4,073
Create a metric card for Competition MATH
Proposing metric card for Competition MATH
closed
https://github.com/huggingface/datasets/pull/4073
2022-03-31T16:48:59
2022-04-01T19:02:39
2022-04-01T18:57:13
{ "login": "sashavor", "id": 14205986, "type": "User" }
[]
true
[]
1,188,266,410
4,072
Add installation instructions to image_process doc
This PR adds the installation instructions for the Image feature to the image process doc.
closed
https://github.com/huggingface/datasets/pull/4072
2022-03-31T15:29:37
2022-03-31T17:05:46
2022-03-31T17:00:19
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,187,587,683
4,071
Loading issue for xuyeliu/notebookCDG dataset
## Dataset viewer issue for '*xuyeliu/notebookCDG*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/xuyeliu/notebookCDG)* *Couldn't load the xuyeliu/notebookCDG with provided scripts: * ``` from datasets import load_dataset dataset = load_dataset("xuyeliu/notebookCDG/dataset_note...
closed
https://github.com/huggingface/datasets/issues/4071
2022-03-31T06:36:29
2022-03-31T08:17:01
2022-03-31T08:16:16
{ "login": "Jun-jie-Huang", "id": 46160972, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,186,810,205
4,070
Create metric card for seqeval
Proposing metric card for seqeval. Not sure which values to report for Popular papers though.
closed
https://github.com/huggingface/datasets/pull/4070
2022-03-30T18:08:01
2022-04-01T19:02:58
2022-04-01T18:57:25
{ "login": "sashavor", "id": 14205986, "type": "User" }
[]
true
[]
1,186,790,578
4,069
Add support for metadata files to `imagefolder`
This PR adds support for metadata files to `imagefolder` to add an ability to specify image fields other than `image` and `label`, which are inferred from the directory structure in the loaded dataset. To be parsed as an image metadata file, a file should be named `"info.csv"` and should have the following structure...
closed
https://github.com/huggingface/datasets/pull/4069
2022-03-30T17:47:51
2022-05-03T12:49:00
2022-05-03T12:42:16
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,186,765,422
4,068
Improve out of bounds error message
In 1.18.4 with https://github.com/huggingface/datasets/pull/3719 we introduced an error message for users using `select` with out of bounds indices. The message ended up being confusing for some users because it mentioned negative indices, which is not the main use case. I replaced it with a message that is very sim...
closed
https://github.com/huggingface/datasets/pull/4068
2022-03-30T17:22:10
2022-03-31T08:39:08
2022-03-31T08:33:57
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,186,731,905
4,067
Update datasets task tags to align tags with models
**Requires https://github.com/huggingface/datasets/pull/4066 to be merged first** Following https://github.com/huggingface/datasets/pull/4066 we need to update many dataset tags to use the new ones. This PR takes care of this and is quite big - feel free to review only certain tags if you don't want to spend too muc...
closed
https://github.com/huggingface/datasets/pull/4067
2022-03-30T16:49:32
2022-04-13T17:37:27
2022-04-13T17:31:11
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,186,728,104
4,066
Tasks alignment with models
I updated our `tasks.json` file with the new task taxonomy that is aligned with models. The rule that defines a task is the following: **Two tasks are different if and only if the steps of their pipelines** are different, i.e. if they can’t reasonably be implemented using the same coherent code (level of granular...
closed
https://github.com/huggingface/datasets/pull/4066
2022-03-30T16:45:56
2022-04-13T13:12:52
2022-04-08T12:20:00
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,186,722,478
4,065
Create metric card for METEOR
Proposing a metric card for METEOR
closed
https://github.com/huggingface/datasets/pull/4065
2022-03-30T16:40:30
2022-03-31T17:12:10
2022-03-31T17:07:50
{ "login": "sashavor", "id": 14205986, "type": "User" }
[]
true
[]
1,186,650,321
4,064
Contributing MedMCQA dataset
Adding MedMCQA dataset ( https://paperswithcode.com/dataset/medmcqa ) **Name**: MedMCQA **Description**: MedMCQA is a large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions. MedMCQA has more than 194k high-quality AIIMS & NEET PG entranc...
closed
https://github.com/huggingface/datasets/pull/4064
2022-03-30T15:42:47
2022-05-06T09:40:40
2022-05-06T08:42:56
{ "login": "monk1337", "id": 17107749, "type": "User" }
[]
true
[]
1,186,611,368
4,063
Increase max retries for GitHub metrics
As GitHub recurrently raises connectivity issues, this PR increases the number of max retries to request GitHub metrics. Related to: - #3134 Also related to: - #4059
closed
https://github.com/huggingface/datasets/pull/4063
2022-03-30T15:12:48
2022-03-31T14:42:52
2022-03-31T14:37:47
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,186,330,732
4,062
Loading mozilla-foundation/common_voice_7_0 dataset failed
## Describe the bug I wanted to load `mozilla-foundation/common_voice_7_0` dataset with `fi` language and `test` split from datasets on Colab/Kaggle notebook, but I am getting an error `JSONDecodeError: [Errno Expecting value] Not Found: 0` while loading it. The bug seems to affect other languages and splits too than ...
closed
https://github.com/huggingface/datasets/issues/4062
2022-03-30T11:39:41
2024-06-09T12:12:46
2022-03-31T08:18:04
{ "login": "aapot", "id": 19529125, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,186,317,071
4,061
Loading cnn_dailymail dataset failed
## Describe the bug I wanted to load cnn_dailymail dataset from huggingface datasets on jupyter lab, but I am getting an error ` NotADirectoryError:[Errno20] Not a directory ` while loading it. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0....
closed
https://github.com/huggingface/datasets/issues/4061
2022-03-30T11:29:02
2022-03-30T13:36:14
2022-03-30T13:36:14
{ "login": "Arij-Aladel", "id": 68355048, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "duplicate", "color": "cfd3d7" } ]
false
[]
1,186,281,033
4,060
Deprecate canonical Multilingual Librispeech
Deprecate canonical Multilingual Librispeech in favor of [the community one](https://huggingface.co/datasets/facebook/multilingual_librispeech) which supports streaming. However, there is a problem regarding new ASR template schema: since it's changed, I guess all community datasets that use this template do not wor...
closed
https://github.com/huggingface/datasets/pull/4060
2022-03-30T10:56:56
2022-04-01T12:54:05
2022-04-01T12:48:51
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
1,186,149,949
4,059
Load GitHub datasets from Hub
We have recurrently had connection errors when requesting GitHub because sometimes the site is not available. This PR requests the Hub instead, once all GitHub datasets are mirrored on the Hub. Fix #2048 Related to: - #4051 - #3210 - #2787 - #2075 - #2036
closed
https://github.com/huggingface/datasets/pull/4059
2022-03-30T09:21:56
2022-09-16T12:43:26
2022-09-16T12:40:43
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,185,611,600
4,058
Updated annotations for nli_tr dataset
This PR adds annotation tags for `nli_tr` dataset so that the dataset can be searchable wrt. relevant query parameters. The annotations in this PR are based on the existing annotations of `snli` and `multi_nli` datasets as `nli_tr` is a machine-generated extension of those datasets. This PR is intended only for u...
closed
https://github.com/huggingface/datasets/pull/4058
2022-03-29T23:46:59
2022-04-12T20:55:12
2022-04-12T10:37:22
{ "login": "e-budur", "id": 2246791, "type": "User" }
[]
true
[]
1,185,442,001
4,057
`load_dataset` consumes too much memory for audio + tar archives
## Description `load_dataset` consumes more and more memory until it's killed, even though it's made with a generator. I'm adding a loading script for a new dataset, made up of ~15s audio coming from a tar file. Tried setting `DEFAULT_WRITER_BATCH_SIZE = 1` as per the discussion in #741 but the problem persists. ...
closed
https://github.com/huggingface/datasets/issues/4057
2022-03-29T21:38:55
2022-08-16T10:22:55
2022-08-16T10:22:55
{ "login": "JFCeron", "id": 50839826, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,185,155,775
4,056
Unexpected behavior of _TempDirWithCustomCleanup
## Describe the bug This is not 100% a bug in `datasets`, but behavior that surprised me and I think this could be made more robust on the `datasets` side. When using `datasets.disable_caching()`, cache files are written to a temporary directory. This directory should be based on the environment variable TMPDIR. I ...
open
https://github.com/huggingface/datasets/issues/4056
2022-03-29T16:58:22
2022-03-30T15:08:04
null
{ "login": "JonasGeiping", "id": 22680696, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,184,976,292
4,055
[DO NOT MERGE] Test doc-builder
This is a test PR to ensure the changes in https://github.com/huggingface/doc-builder/pull/164 don't break anything in `datasets`
closed
https://github.com/huggingface/datasets/pull/4055
2022-03-29T14:39:02
2022-03-30T12:31:14
2022-03-30T12:25:52
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
1,184,575,368
4,054
Support float data types in pearsonr/spearmanr metrics
Fix #4053.
closed
https://github.com/huggingface/datasets/pull/4054
2022-03-29T09:29:10
2022-03-29T14:07:59
2022-03-29T14:02:20
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,184,500,378
4,053
Modify datatype from `int32` to `float` for pearsonr, spearmanr.
**Is your feature request related to a problem? Please describe.** - Now [Pearsonr](https://github.com/huggingface/datasets/blob/master/metrics/pearsonr/pearsonr.py) and [Spearmanr](https://github.com/huggingface/datasets/blob/master/metrics/spearmanr/spearmanr.py) both get input data as 'int32'. **Describe the ...
closed
https://github.com/huggingface/datasets/issues/4053
2022-03-29T08:27:41
2022-03-29T14:02:20
2022-03-29T14:02:20
{ "login": "woodywarhol9", "id": 86637320, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,184,447,977
4,052
metric = metric_cls( TypeError: 'NoneType' object is not callable
Hi, friend. I've hit a problem. When I run the code: `metric = load_metric('glue', 'rte')` it raises: `metric = metric_cls( TypeError: 'NoneType' object is not callable ` I don't know why. Thanks for your help!
closed
https://github.com/huggingface/datasets/issues/4052
2022-03-29T07:43:08
2022-03-29T14:06:01
2022-03-29T14:06:01
{ "login": "klyuhang9", "id": 39409233, "type": "User" }
[]
false
[]
1,184,400,179
4,051
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py
Hi, I've hit a problem. When I run the code: `dataset = load_dataset('glue','sst2')` it raises: ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py I don't know why; the URL opens fine when I view it in Google Chrome. Thanks for your...
closed
https://github.com/huggingface/datasets/issues/4051
2022-03-29T07:00:31
2022-05-08T07:27:32
2022-03-29T08:29:25
{ "login": "klyuhang9", "id": 39409233, "type": "User" }
[]
false
[]
1,184,346,501
4,050
Add RVL-CDIP dataset
Resolves #2762 Dataset Request : Add RVL-CDIP dataset [#2762](https://github.com/huggingface/datasets/issues/2762) This PR adds the RVL-CDIP dataset. The dataset contains Google Drive link for download and wasn't getting downloaded automatically, so I have provided manual_download_instructions. - I have added ...
closed
https://github.com/huggingface/datasets/pull/4050
2022-03-29T06:00:02
2022-04-22T09:55:07
2022-04-21T17:15:41
{ "login": "dnaveenr", "id": 17746528, "type": "User" }
[]
true
[]
1,183,832,893
4,049
Create metric card for the Code Eval metric
Creating initial Code Eval metric card
closed
https://github.com/huggingface/datasets/pull/4049
2022-03-28T18:34:23
2022-03-29T13:38:12
2022-03-29T13:32:50
{ "login": "sashavor", "id": 14205986, "type": "User" }
[]
true
[]
1,183,804,576
4,048
Split size error on `amazon_us_reviews` / `PC_v1_00` dataset
## Describe the bug When downloading this subset as of 3-28-2022 you will encounter a split size error after the dataset is extracted. The extracted dataset has roughly ~6m rows while the split expects <1m. Upon digging a little deeper, I downloaded the raw files from `https://s3.amazonaws.com/amazon-reviews-pds/t...
closed
https://github.com/huggingface/datasets/issues/4048
2022-03-28T18:12:04
2022-04-08T12:29:30
2022-04-08T12:29:30
{ "login": "trentonstrong", "id": 191985, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "good first issue", "color": "7057ff" } ]
false
[]
1,183,789,237
4,047
Dataset.unique(column: str) -> ArrowNotImplementedError
## Describe the bug I'm trying to use `unique()` function, but it fails ## Steps to reproduce the bug 1. Get dataset 2. Call `unique` 3. Error # Sample code to reproduce the bug ```python !pip show datasets from datasets import load_dataset dataset = load_dataset('wikiann', 'en') dataset['train'].col...
closed
https://github.com/huggingface/datasets/issues/4047
2022-03-28T17:59:32
2022-04-01T18:24:57
2022-04-01T18:24:57
{ "login": "orkenstein", "id": 1461936, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,183,723,360
4,046
Create metric card for XNLI
Proposing a metric card for XNLI
closed
https://github.com/huggingface/datasets/pull/4046
2022-03-28T16:57:58
2022-03-29T13:32:59
2022-03-29T13:27:30
{ "login": "sashavor", "id": 14205986, "type": "User" }
[]
true
[]
1,183,661,091
4,045
Fix CLI dummy data generation
PR: - #3868 broke the CLI dummy data generation. Fix #4044.
closed
https://github.com/huggingface/datasets/pull/4045
2022-03-28T16:09:15
2022-03-31T15:04:12
2022-03-31T14:59:06
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,183,658,942
4,044
CLI dummy data generation is broken
## Describe the bug We get a TypeError when running CLI dummy data generation: ```shell datasets-cli dummy_data datasets/<your-dataset-folder> --auto_generate ``` gives: ``` File ".../huggingface/datasets/src/datasets/commands/dummy_data.py", line 361, in _autogenerate_dummy_data dataset_builder._prepare_...
closed
https://github.com/huggingface/datasets/issues/4044
2022-03-28T16:07:37
2022-03-31T14:59:06
2022-03-31T14:59:06
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,183,624,475
4,043
Create metric card for CUAD
Proposing a CUAD metric card
closed
https://github.com/huggingface/datasets/pull/4043
2022-03-28T15:38:58
2022-03-29T15:20:56
2022-03-29T15:15:19
{ "login": "sashavor", "id": 14205986, "type": "User" }
[]
true
[]
1,183,599,461
4,041
Add support for IIIF in datasets
This is a feature request for support for IIIF in `datasets`. Apologies for the long issue. I have also used a different format to the usual feature request since I think that makes more sense but happy to use the standard template if preferred. ## What is [IIIF](https://iiif.io/)? IIIF (International Image Inte...
open
https://github.com/huggingface/datasets/issues/4041
2022-03-28T15:19:25
2022-04-05T18:20:53
null
{ "login": "davanstrien", "id": 8995957, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,183,468,927
4,039
Support streaming xcopa dataset
null
closed
https://github.com/huggingface/datasets/pull/4039
2022-03-28T13:45:55
2022-03-28T16:26:48
2022-03-28T16:21:46
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,183,189,827
4,038
[DO NOT MERGE] Test doc-builder with skipped installation feature
This PR is just for testing that we can build PR docs with changes made on the [`skip-install-for-real`](https://github.com/huggingface/doc-builder/tree/skip-install-for-real) branch of `doc-builder`.
closed
https://github.com/huggingface/datasets/pull/4038
2022-03-28T09:58:31
2023-09-24T10:01:05
2022-03-28T12:29:09
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
1,183,144,486
4,037
Error while building documentation
## Describe the bug Documentation building is failing: - https://github.com/huggingface/datasets/runs/5716300989?check_suite_focus=true ``` ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format. Unable to find datasets.filesystems.S3FileSystem...
closed
https://github.com/huggingface/datasets/issues/4037
2022-03-28T09:22:44
2022-03-28T10:01:52
2022-03-28T10:00:48
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,183,126,893
4,036
Fix building of documentation
Documentation building is failing: - https://github.com/huggingface/datasets/runs/5716300989?check_suite_focus=true ``` ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format. Unable to find datasets.filesystems.S3FileSystem in datasets. Make su...
closed
https://github.com/huggingface/datasets/pull/4036
2022-03-28T09:09:12
2023-09-24T09:55:34
2022-03-28T11:13:22
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,183,067,456
4,035
Add zero_division argument to precision and recall metrics
Fix #4025.
closed
https://github.com/huggingface/datasets/pull/4035
2022-03-28T08:19:14
2022-03-28T09:53:07
2022-03-28T09:53:06
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,183,033,285
4,034
Fix null checksum in xcopa dataset
null
closed
https://github.com/huggingface/datasets/pull/4034
2022-03-28T07:48:14
2022-03-28T08:06:14
2022-03-28T08:06:14
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,182,984,445
4,033
Fix checksum error in cats_vs_dogs dataset
A recent PR updated the metadata JSON file of the cats_vs_dogs dataset: - #3878 However, that new JSON file contains a None checksum. This PR fixes it. Fix #4032.
closed
https://github.com/huggingface/datasets/pull/4033
2022-03-28T07:01:25
2022-03-28T07:49:39
2022-03-28T07:44:24
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,182,595,697
4,032
can't download cats_vs_dogs dataset
## Describe the bug can't download cats_vs_dogs dataset. error: Checksums didn't match for dataset source files ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("cats_vs_dogs") ``` ## Expected results loaded successfully. ## Actual results NonMatchingCheck...
closed
https://github.com/huggingface/datasets/issues/4032
2022-03-27T17:05:39
2022-03-28T07:44:24
2022-03-28T07:44:24
{ "login": "RRaphaell", "id": 74569835, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,182,415,124
4,031
Cannot load the dataset conll2012_ontonotesv5
## Describe the bug Cannot load the dataset conll2012_ontonotesv5 ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset dataset = load_dataset('conll2012_ontonotesv5', 'english_v4', split="test") print(dataset) ``` ## Expected results The datasets s...
closed
https://github.com/huggingface/datasets/issues/4031
2022-03-27T07:38:23
2022-03-28T06:58:31
2022-03-28T06:31:18
{ "login": "cathyxl", "id": 8326473, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,182,157,056
4,030
Use a constant for the articles regex in SQuAD v2
The main reason for doing this is to be able to change the articles list if using another language, for example. It's not the most elegant solution but at least it makes the metric more extensible with no drawbacks. BTW, what could be the best way to make this more generic (i.e., SQuAD in other languages)? Maybe rec...
closed
https://github.com/huggingface/datasets/pull/4030
2022-03-26T23:06:30
2022-04-12T16:30:45
2022-04-12T11:00:24
{ "login": "bryant1410", "id": 3905501, "type": "User" }
[]
true
[]
1,181,057,011
4,029
Add FAISS .range_search() method for retrieving all texts from dataset above similarity threshold
**Is your feature request related to a problem? Please describe.** I would like to retrieve all texts from a dataset, which are semantically similar to a specific input text (query), above a certain (cosine) similarity threshold. My dataset is very large (Wikipedia), so I need to use Datasets and FAISS for this. I wou...
closed
https://github.com/huggingface/datasets/issues/4029
2022-03-25T17:31:33
2022-05-06T08:35:52
2022-05-06T08:35:52
{ "login": "MoritzLaurer", "id": 41862082, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,181,022,675
4,028
Fix docs on audio feature installation
This PR: - Removes the explicit installation of `librosa` (this is installed with `pip install datasets[audio]`) - Adds the warning for Linux users to manually install the non-Python package `libsndfile` - Explains that the installation of `torchaudio` is only necessary to support loading audio datasets containing MP...
closed
https://github.com/huggingface/datasets/pull/4028
2022-03-25T16:55:11
2022-03-31T16:20:47
2022-03-31T16:15:20
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,180,991,344
4,027
ElasticSearch Indexing example: TypeError: __init__() missing 1 required positional argument: 'scheme'
## Describe the bug I am following the example in the documentation for elastic search step by step (on google colab): https://huggingface.co/docs/datasets/faiss_es#elasticsearch ``` from datasets import load_dataset squad = load_dataset('crime_and_punish', split='train[:1000]') ``` When I run the line: `sq...
closed
https://github.com/huggingface/datasets/issues/4027
2022-03-25T16:22:28
2022-04-07T10:29:52
2022-03-28T07:58:56
{ "login": "MoritzLaurer", "id": 41862082, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "duplicate", "color": "cfd3d7" } ]
false
[]
1,180,968,774
4,026
Support streaming xtreme dataset for bucc18 config
Support streaming xtreme dataset for bucc18 config.
closed
https://github.com/huggingface/datasets/pull/4026
2022-03-25T16:00:40
2022-03-25T16:26:50
2022-03-25T16:21:52
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,180,963,105
4,025
Missing argument in precision/recall
**Is your feature request related to a problem? Please describe.** [`sklearn.metrics.precision_score`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html) accepts an argument `zero_division`, but it is not available in [precision Metric](https://github.com/huggingface/datasets/blob/...
closed
https://github.com/huggingface/datasets/issues/4025
2022-03-25T15:55:52
2022-03-28T09:53:06
2022-03-28T09:53:06
{ "login": "Dref360", "id": 8976546, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
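What the requested `zero_division` argument controls can be shown with a toy binary-precision implementation — a sketch mirroring the semantics of `sklearn.metrics.precision_score`, not the metric's actual code:

```python
def precision(preds, refs, positive=1, zero_division=0.0):
    # True positives over predicted positives for a binary task.
    tp = sum(1 for p, r in zip(preds, refs) if p == positive and r == positive)
    predicted_pos = sum(1 for p in preds if p == positive)
    if predicted_pos == 0:
        # Nothing was predicted positive, so the ratio is 0/0;
        # `zero_division` picks the value returned in that case.
        return zero_division
    return tp / predicted_pos
```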
1,180,951,817
4,024
Doc: image_process small tip
I've added a small tip in the `image_process` doc
closed
https://github.com/huggingface/datasets/pull/4024
2022-03-25T15:44:32
2022-03-31T15:35:35
2022-03-31T15:30:20
{ "login": "FrancescoSaverioZuppichini", "id": 15908060, "type": "User" }
[]
true
[]
1,180,840,399
4,023
Replace yahoo_answers_topics data url
I replaced the Google Drive URL of the dataset by the FastAI one, since we've had some issues with Google Drive.
closed
https://github.com/huggingface/datasets/pull/4023
2022-03-25T14:08:57
2022-03-28T10:12:56
2022-03-28T10:07:52
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,180,816,682
4,022
Replace dbpedia_14 data url
I replaced the Google Drive URL of the dataset by the FastAI one, since we've had some issues with Google Drive.
closed
https://github.com/huggingface/datasets/pull/4022
2022-03-25T13:47:21
2022-03-25T15:03:37
2022-03-25T14:58:49
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,180,805,092
4,021
Fix `map` remove_columns on empty dataset
On an empty dataset, the `remove_columns` parameter of `map` currently doesn't actually remove the columns: ```python >>> ds = datasets.load_dataset("glue", "rte") >>> ds_filtered = ds.filter(lambda x: x["label"] != -1) >>> ds_mapped = ds_filtered.map(lambda x: x, remove_columns=["label"]) >>> print(repr(ds_mapped...
closed
https://github.com/huggingface/datasets/pull/4021
2022-03-25T13:36:29
2022-03-29T13:41:31
2022-03-29T13:35:44
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,180,636,754
4,020
Replace amazon_polarity data URL
I replaced the Google Drive URL of the dataset by the FastAI one, since we've had some issues with Google Drive.
closed
https://github.com/huggingface/datasets/pull/4020
2022-03-25T10:50:57
2022-03-25T15:02:36
2022-03-25T14:57:41
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,180,628,293
4,019
Make yelp_polarity streamable
It was using `dl_manager.download_and_extract` on a TAR archive, which is not supported in streaming mode. I replaced this by `dl_manager.iter_archive`
closed
https://github.com/huggingface/datasets/pull/4019
2022-03-25T10:42:51
2022-03-25T15:02:19
2022-03-25T14:57:16
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
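`dl_manager.iter_archive` yields archive members sequentially instead of extracting the whole TAR up front, which is what makes streaming possible. A stdlib approximation of that behavior, assuming nothing about the real `datasets` internals (`build_tar` is only a helper for the demo):

```python
import io
import tarfile

def build_tar(files):
    # Pack a {name: bytes} mapping into an in-memory TAR archive.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, payload in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(payload)
            tar.addfile(info, io.BytesIO(payload))
    return buf.getvalue()

def iter_archive(stream):
    # "r|*" is tarfile's non-seeking stream mode: members must be consumed
    # in order, one at a time, which matches how streaming reads an archive.
    with tarfile.open(fileobj=stream, mode="r|*") as tar:
        for member in tar:
            if member.isfile():
                yield member.name, tar.extractfile(member).read()
```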
1,180,622,816
4,018
Replace yelp_review_full data url
I replaced the Google Drive URL of the Yelp review dataset by the FastAI one, since we've had some issues with Google Drive. Close https://github.com/huggingface/datasets/issues/4005
closed
https://github.com/huggingface/datasets/pull/4018
2022-03-25T10:37:18
2022-03-25T15:01:02
2022-03-25T14:56:10
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,180,595,160
4,017
Support streaming scan dataset
null
closed
https://github.com/huggingface/datasets/pull/4017
2022-03-25T10:11:28
2022-03-25T12:08:55
2022-03-25T12:03:52
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,180,557,828
4,016
Support streaming blimp dataset
null
closed
https://github.com/huggingface/datasets/pull/4016
2022-03-25T09:39:10
2022-03-25T11:19:18
2022-03-25T11:14:13
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,180,510,856
4,015
Can not correctly parse the classes with imagefolder
## Describe the bug I try to load my own image dataset with imagefolder, but the parsing of classes is incorrect. ## Steps to reproduce the bug I organized my dataset (ImageNet) in the following structure: ``` - imagenet/ - train/ - n01440764/ - ILSVRC2012_val_00000293.jpg ...
closed
https://github.com/huggingface/datasets/issues/4015
2022-03-25T08:51:17
2022-03-28T01:02:03
2022-03-25T09:27:56
{ "login": "YiSyuanChen", "id": 21264909, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,180,481,229
4,014
Support streaming id_clickbait dataset
null
closed
https://github.com/huggingface/datasets/pull/4014
2022-03-25T08:18:28
2022-03-25T08:58:31
2022-03-25T08:53:32
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,180,427,174
4,013
Cannot preview "hazal/Turkish-Biomedical-corpus-trM"
## Dataset viewer issue for '*hazal/Turkish-Biomedical-corpus-trM' **Link:** *https://huggingface.co/datasets/hazal/Turkish-Biomedical-corpus-trM* *I cannot see the dataset preview.* ``` Server Error Status code: 400 Exception: HTTPError Message: 403 Client Error: Forbidden for url: https://h...
closed
https://github.com/huggingface/datasets/issues/4013
2022-03-25T07:12:02
2022-04-04T08:05:01
2022-03-25T14:16:11
{ "login": "hazalturkmen", "id": 42860397, "type": "User" }
[]
false
[]
1,180,350,083
4,012
Rename wer to cer
The `wer` variable was changed to `cer` in the README file.
closed
https://github.com/huggingface/datasets/pull/4012
2022-03-25T05:06:05
2022-03-28T13:57:25
2022-03-28T13:57:25
{ "login": "pmgautam", "id": 28428143, "type": "User" }
[]
true
[]
1,179,885,965
4,011
Fix SQuAD v2 metric docs on `references` format
`references` is not a list of dictionaries but a dictionary whose values are lists.
closed
https://github.com/huggingface/datasets/pull/4011
2022-03-24T18:27:10
2023-07-11T09:35:46
2023-07-11T09:35:15
{ "login": "bryant1410", "id": 3905501, "type": "User" }
[ { "name": "transfer-to-evaluate", "color": "E3165C" } ]
true
[]
1,179,848,036
4,010
Fix None issue with Sequence of dict
`Features.encode_example` currently fails if it contains a sequence of dicts like `Sequence({"subcolumn": Value("int32")})` and `None` is passed instead of the dict. ```python File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 1310, in encode_example return encode_neste...
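A None-tolerant nested encoder can be sketched in plain Python — a simplified stand-in for the library's `encode_nested_example`, not the real code — by returning early on `None` instead of recursing into a missing value:

```python
def encode_nested(schema, obj):
    # Simplified sketch: schema is a dict of sub-schemas, a one-element
    # list (a sequence), or a leaf type; obj mirrors that shape.
    if obj is None:
        return None  # the fix: don't recurse into a missing value
    if isinstance(schema, dict):
        return {k: encode_nested(sub, obj.get(k)) for k, sub in schema.items()}
    if isinstance(schema, list):
        return [encode_nested(schema[0], item) for item in obj]
    return obj  # leaf value, assumed already encoded
```

With the early return, `None` is preserved at any nesting level instead of raising when the encoder tries to iterate over a missing dict.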
closed
https://github.com/huggingface/datasets/pull/4010
2022-03-24T17:58:59
2022-03-28T10:13:53
2022-03-28T10:08:40
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,179,658,611
4,009
AMI load_dataset error: sndfile library not found
## Describe the bug Getting error message when loading AMI dataset. ## Steps to reproduce the bug `python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])" ` ## Expected results A clear and concise description of the expected results. ## Actual r...
closed
https://github.com/huggingface/datasets/issues/4009
2022-03-24T15:13:38
2022-03-24T15:46:38
2022-03-24T15:17:29
{ "login": "i-am-neo", "id": 102043285, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,179,591,068
4,008
Support streaming daily_dialog dataset
null
closed
https://github.com/huggingface/datasets/pull/4008
2022-03-24T14:23:23
2022-03-24T15:29:01
2022-03-24T14:46:58
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,179,381,021
4,007
set_format does not work with multi dimension tensor
## Describe the bug set_format only transforms the last dimension of a multi-dimension list to a tensor ## Steps to reproduce the bug ```python import torch from datasets import Dataset ds = Dataset.from_dict({"A": [torch.rand((2, 2))]}) # ds = Dataset.from_dict({"A": [np.random.rand(2, 2)]}) # => same result...
closed
https://github.com/huggingface/datasets/issues/4007
2022-03-24T11:27:43
2022-03-30T07:28:57
2022-03-24T14:39:29
{ "login": "phihung", "id": 5902432, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,179,367,195
4,006
Use audio feature in ASR task template
The AutomaticSpeechRecognition task template is outdated: it still uses the file path column as input instead of the audio column. I changed that and updated all the datasets as well as the tests. The only community dataset that will need to be updated is `facebook/multilingual_librispeech`. It has almost zero us...
closed
https://github.com/huggingface/datasets/pull/4006
2022-03-24T11:15:22
2022-03-24T17:19:29
2022-03-24T16:48:02
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,179,365,663
4,005
Yelp not working
## Dataset viewer issue for '*name of the dataset*' **Link:** https://huggingface.co/datasets/yelp_review_full/viewer/yelp_review_full/train Doesn't work: ``` Server error Status code: 400 Exception: Error Message: line contains NULL ``` Am I the one who added this dataset? No A seemingly
closed
https://github.com/huggingface/datasets/issues/4005
2022-03-24T11:14:00
2022-03-25T14:59:57
2022-03-25T14:56:10
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
false
[]
1,179,320,795
4,004
ASSIN 2 dataset: replace broken Google Drive _URLS by links on github
Closes #4003. Fixes the checksum error. Replaces the Google Drive URLs with the files hosted here: [Multilingual Transformer Ensembles for Portuguese Natural Language Tasks](https://github.com/ruanchaves/assin)
closed
https://github.com/huggingface/datasets/pull/4004
2022-03-24T10:37:39
2022-03-28T14:01:46
2022-03-28T13:56:39
{ "login": "ruanchaves", "id": 14352388, "type": "User" }
[]
true
[]
1,179,286,877
4,003
ASSIN2 dataset checksum bug
## Describe the bug Checksum error after trying to load the [ASSIN 2 dataset](https://huggingface.co/datasets/assin2). `NonMatchingChecksumError` triggered by calling `load_dataset("assin2")`. Similar to #3952 , #3942 , #3941 , etc. ``` ----------------------------------------------------------------------...
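For context, a `NonMatchingChecksumError` means the downloaded bytes no longer hash to the value recorded in the dataset's metadata — the typical symptom of a file that moved or hit a Google Drive quota page. The check itself boils down to something like this (hypothetical helper name, stdlib only):

```python
import hashlib


def verify_checksum(data: bytes, expected_sha256: str) -> None:
    # Hash the downloaded bytes and compare against the recorded value.
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        raise ValueError(
            f"Checksum mismatch: expected {expected_sha256}, got {actual}"
        )
```

When the host starts serving different bytes (e.g. an HTML quota page instead of the archive), the comparison fails and the loader aborts rather than caching corrupt data.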
closed
https://github.com/huggingface/datasets/issues/4003
2022-03-24T10:08:50
2022-04-27T14:14:45
2022-03-28T13:56:39
{ "login": "ruanchaves", "id": 14352388, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,179,263,787
4,002
Support streaming conll2012_ontonotesv5 dataset
Use another URL with a single ZIP file (instead of the previous one with a ZIP file inside another ZIP file).
closed
https://github.com/huggingface/datasets/pull/4002
2022-03-24T09:49:56
2022-03-24T10:53:41
2022-03-24T10:48:47
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,179,231,418
4,001
How to use generate this multitask dataset for SQUAD? I am getting a value error.
## Dataset viewer issue for 'squad_multitask*' **Link:** https://huggingface.co/datasets/vershasaxena91/squad_multitask *short description of the issue* I am trying to generate the multitask dataset for the SQuAD dataset. However, it gives the error in the dataset explorer as well as on my local machine. I tried the comma...
closed
https://github.com/huggingface/datasets/issues/4001
2022-03-24T09:21:51
2022-03-26T09:48:21
2022-03-26T03:35:43
{ "login": "gsk1692", "id": 1963097, "type": "User" }
[]
false
[]
1,178,844,616
4,000
load_dataset error: sndfile library not found
## Describe the bug Can't load ami dataset ## Steps to reproduce the bug ``` python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])" ``` ## Expected results ## Actual results Downloading and preparing dataset ami/headset-single (download: 10.71...
closed
https://github.com/huggingface/datasets/issues/4000
2022-03-24T01:52:32
2022-03-25T17:53:33
2022-03-25T17:53:33
{ "login": "i-am-neo", "id": 102043285, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,178,685,280
3,999
Docs maintenance
This PR links some functions to the API reference. These functions previously only showed up in code format because the path to the actual API was incorrect.
closed
https://github.com/huggingface/datasets/pull/3999
2022-03-23T21:27:33
2022-03-30T17:01:45
2022-03-30T16:56:38
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
true
[]
1,178,631,986
3,998
Fix Audio.encode_example() when writing an array
Closes #3996
closed
https://github.com/huggingface/datasets/pull/3998
2022-03-23T20:32:13
2022-03-29T14:21:44
2022-03-29T14:16:13
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
1,178,566,568
3,997
Sync Features dictionaries
This PR adds a wrapper to the `Features` class to keep the secondary dict, `_column_requires_decoding`, aligned with the main dict (as discussed in https://github.com/huggingface/datasets/pull/3723#discussion_r806912731). A more elegant approach would be to subclass `UserDict` and override `__setitem__` and `__delit...
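The `UserDict` alternative mentioned above can be sketched as follows (a minimal illustration, not the actual `Features` code): overriding `__setitem__` and `__delitem__` keeps a derived secondary dict aligned with the main dict automatically.

```python
from collections import UserDict


class SyncedDict(UserDict):
    """Keeps a secondary dict of derived values aligned with the main dict."""

    def __init__(self, *args, derive=bool, **kwargs):
        # Set the secondary state first: UserDict.__init__ routes initial
        # items through __setitem__, which needs these attributes.
        self.derived = {}
        self._derive = derive
        super().__init__(*args, **kwargs)

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self.derived[key] = self._derive(value)

    def __delitem__(self, key):
        super().__delitem__(key)
        del self.derived[key]
```

Because every mutation goes through these two methods, the secondary dict can never drift out of sync the way an independently maintained `_column_requires_decoding` can.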
closed
https://github.com/huggingface/datasets/pull/3997
2022-03-23T19:23:51
2022-04-13T15:52:27
2022-04-13T15:46:19
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,178,415,905
3,996
Audio.encode_example() throws an error when writing example from array
## Describe the bug When trying to do `Audio().encode_example()` with preexisting array (see [this line](https://github.com/huggingface/datasets/blob/master/src/datasets/features/audio.py#L73)), `sf.write()` throws you an error: `TypeError: No format specified and unable to get format from file extension: <_io.BytesI...
closed
https://github.com/huggingface/datasets/issues/3996
2022-03-23T17:11:47
2022-03-29T14:16:13
2022-03-29T14:16:13
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,178,232,623
3,995
Close `PIL.Image` file handler in `Image.decode_example`
Closes the file handler of the PIL image object in `Image.decode_example` to avoid the `Too many open files` error. To pass [the image equality checks](https://app.circleci.com/pipelines/github/huggingface/datasets/10774/workflows/d56670e6-16bb-4c64-b601-a152c5acf5ed/jobs/65825) in CI, `Image.decode_example` calls `...
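The underlying pattern — read eagerly and close the handle, instead of letting a lazily opened file keep its descriptor alive per example — can be shown without PIL (illustrative helper, stdlib only):

```python
import io


def read_into_memory(path):
    # Read the whole file and close the OS handle immediately; callers
    # get an in-memory buffer, so no per-example file descriptor stays open.
    with open(path, "rb") as f:
        data = f.read()
    return io.BytesIO(data)
```

`PIL.Image.open` is lazy by design, so without an explicit close the descriptor survives until the pixel data is actually read — which is what exhausts the open-file limit when iterating over many examples.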
closed
https://github.com/huggingface/datasets/pull/3995
2022-03-23T14:51:48
2022-03-23T18:24:52
2022-03-23T18:19:27
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,178,211,138
3,994
Change audio column from string path to Audio feature in ASR task
Will fix #3990
closed
https://github.com/huggingface/datasets/pull/3994
2022-03-23T14:34:52
2022-03-23T15:43:43
2022-03-23T15:43:43
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
1,178,201,495
3,993
Streaming dataset + interleave + DataLoader hangs with multiple workers
## Describe the bug Interleaving multiple iterable datasets that use `load_dataset` on streaming mode hangs when passed to `torch.utils.data.DataLoader` with multiple workers. ## Steps to reproduce the bug ```python from datasets import interleave_datasets, load_dataset from torch.utils.data import DataLoader ...
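For reference, round-robin interleaving of iterables — the core of what `interleave_datasets` does, sketched here in plain Python with the "stop at the first exhausted source" strategy — is itself straightforward; the hang reported above comes from combining it with a multi-worker `DataLoader`, not from the interleaving logic:

```python
def interleave(*iterables):
    """Yield items round-robin; stop when any source runs out."""
    iterators = [iter(it) for it in iterables]
    while True:
        for it in iterators:
            try:
                yield next(it)
            except StopIteration:
                return
```

A generator like this is stateful and not trivially shareable: each `DataLoader` worker process gets its own copy unless the dataset explicitly shards work across workers, which is where streaming pipelines tend to deadlock or duplicate data.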
open
https://github.com/huggingface/datasets/issues/3993
2022-03-23T14:27:29
2023-02-28T14:14:24
null
{ "login": "jpilaul", "id": 614861, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,177,946,153
3,992
Image column is not decoded in map when using with with_transform
## Describe the bug Image column is not _decoded_ in **map** when using with `with_transform` ## Steps to reproduce the bug ```python from datasets import Image, Dataset def add_C(batch): batch["C"] = batch["A"] return batch ds = Dataset.from_dict({"A": ["image.png"]}).cast_column("A", Image()) ...
closed
https://github.com/huggingface/datasets/issues/3992
2022-03-23T10:51:13
2022-12-13T16:59:06
2022-12-13T16:59:06
{ "login": "phihung", "id": 5902432, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,177,362,901
3,991
Add Lung Image Database Consortium image collection (LIDC-IDRI) dataset
## Adding a Dataset - **Name:** *Lung Image Database Consortium image collection (LIDC-IDRI)* - **Description:** *Consists of diagnostic and lung cancer screening thoracic computed tomography (CT) scans with marked-up annotated lesions. It is a web-accessible international resource for development, training, and ev...
open
https://github.com/huggingface/datasets/issues/3991
2022-03-22T22:16:05
2022-03-23T12:57:16
null
{ "login": "omarespejel", "id": 4755430, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "vision", "color": "bfdadc" } ]
false
[]
1,176,976,247
3,990
Improve AutomaticSpeechRecognition task template
**Is your feature request related to a problem? Please describe.** [AutomaticSpeechRecognition task template](https://github.com/huggingface/datasets/blob/master/src/datasets/tasks/automatic_speech_recognition.py) is outdated as it uses the path to the audio file as an audio column instead of an Audio feature itself (I guess it...
closed
https://github.com/huggingface/datasets/issues/3990
2022-03-22T15:41:08
2022-03-23T17:12:40
2022-03-23T17:12:40
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,176,955,078
3,989
Remove old wikipedia leftovers
After updating the Wikipedia dataset, remove old Wikipedia leftovers from the docs.
closed
https://github.com/huggingface/datasets/pull/3989
2022-03-22T15:25:46
2022-03-31T15:35:26
2022-03-31T15:30:16
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,176,858,540
3,988
More consistent references in docs
Aligns the internal references with style discussed in https://github.com/huggingface/datasets/pull/3980. cc @stevhliu
closed
https://github.com/huggingface/datasets/pull/3988
2022-03-22T14:18:41
2022-03-22T17:06:32
2022-03-22T16:50:44
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,176,481,659
3,987
Fix Faiss custom_index device
Currently, if both `custom_index` and `device` are passed to `FaissIndex`, `device` is silently ignored. This PR fixes this by raising a ValueError if both arguments are passed. Alternatively, the `custom_index` could be transferred to the target `device`.
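The guard described in this PR amounts to a mutual-exclusion check on keyword arguments; a generic sketch (hypothetical names, not the actual `FaissIndex` signature):

```python
def build_index(custom_index=None, device=None):
    # Raise loudly instead of silently ignoring one of the two arguments.
    if custom_index is not None and device is not None:
        raise ValueError(
            "Pass either 'custom_index' or 'device', not both: "
            "a custom index already determines its own device."
        )
    # ... build or wrap the real index here; placeholder return for the sketch ...
    return custom_index if custom_index is not None else {"device": device}
```

Raising a `ValueError` makes the conflict visible at call time, which is usually preferable to silently dropping `device` — the alternative mentioned in the PR (transferring `custom_index` to the target device) would instead resolve the conflict rather than reject it.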
closed
https://github.com/huggingface/datasets/pull/3987
2022-03-22T09:11:24
2022-03-24T12:18:59
2022-03-24T12:14:12
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,176,429,565
3,986
Dataset loads indefinitely after modifying default cache path (~/.cache/huggingface)
## Describe the bug Dataset loads indefinitely after modifying the cache path (~/.cache/huggingface). If none of the environment variables are set, this custom dataset loads fine (json-based dataset with a custom dataset load script) ** Update: Transformer modules face the same issue as well during loading ## A clear ...
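For context, the cache location is resolved from environment variables with a home-directory fallback; a simplified sketch of that resolution (the real lookup chains several variables such as `HF_DATASETS_CACHE` and `HF_HOME`, so treat this as illustrative):

```python
import os
from pathlib import Path


def resolve_cache_dir(env_var="HF_DATASETS_CACHE"):
    # Use the override if set, otherwise fall back to the default location.
    override = os.environ.get(env_var)
    if override:
        return Path(override).expanduser()
    return Path.home() / ".cache" / "huggingface" / "datasets"
```

When an override points at a slow or permission-restricted filesystem, every cache lookup goes through it, which is one way "loads indefinitely" symptoms can appear after changing the default path.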
open
https://github.com/huggingface/datasets/issues/3986
2022-03-22T08:23:21
2023-03-06T16:55:04
null
{ "login": "kelvinAI", "id": 10686779, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,175,982,937
3,985
[image feature] Too many files open error when image feature is returned as a path
## Describe the bug PR in context: #3967. If I load the dataset in this PR (TextVQA), and do a simple list comprehension on the dataset, I get `Too many open files error`. This is happening due to the way we are loading the image feature when a str path is returned from the `_generate_examples`. Specifically at http...
closed
https://github.com/huggingface/datasets/issues/3985
2022-03-21T21:54:05
2022-03-23T18:19:27
2022-03-23T18:19:27
{ "login": "apsdehal", "id": 3616806, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,175,822,117
3,984
Local and automatic tests fail
## Describe the bug Running the tests from CircleCI on a PR or locally fails, even with no changes. Tests seem to fail on `test_metric_common.py` ## Steps to reproduce the bug ```shell git clone https://huggingface/datasets.git cd datasets ``` ```python python -m pip install -e . pytest ``` ## Expected...
closed
https://github.com/huggingface/datasets/issues/3984
2022-03-21T19:07:37
2023-07-25T15:18:40
2023-07-25T15:18:40
{ "login": "MarkusSagen", "id": 20767068, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,175,759,412
3,983
Infinitely attempting lock
I am trying to run one of the examples of the `transformers` repo, which makes use of `datasets`. Important to note is that I am trying to run this via a Databricks notebook, and all the files reside in the Databricks Filesystem (DBFS). ``` %sh python /dbfs/transformers/examples/pytorch/summarization/run_summariz...
closed
https://github.com/huggingface/datasets/issues/3983
2022-03-21T18:11:57
2024-05-09T08:24:34
2022-05-06T16:12:18
{ "login": "jyrr", "id": 11869652, "type": "User" }
[]
false
[]
1,175,478,099
3,982
Exclude Google Drive tests of the CI
These tests make the CI spam the Google Drive API, so the CI now gets banned by Google Drive very often. I think we can just skip these tests from the CI for now. In the future we could have a CI job that runs only once a day or once a week for such cases cc @albertvillanova @mariosasko @severo Close #3415 ...
closed
https://github.com/huggingface/datasets/pull/3982
2022-03-21T14:34:16
2022-03-31T16:38:02
2022-03-21T14:51:35
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]