id: int64 (599M – 3.26B)
number: int64 (1 – 7.7k)
title: string (lengths 1 – 290)
body: string (lengths 0 – 228k)
state: string (2 classes)
html_url: string (lengths 46 – 51)
created_at: timestamp[s] (2020-04-14 10:18:02 – 2025-07-23 08:04:53)
updated_at: timestamp[s] (2020-04-27 16:04:17 – 2025-07-23 18:53:44)
closed_at: timestamp[s] (2020-04-14 12:01:40 – 2025-07-23 16:44:42)
user: dict
labels: list (lengths 0 – 4)
is_pull_request: bool (2 classes)
comments: list (lengths 0 – 0)
1,271,112,497
4,492
Pin the revision in imagenet download links
Use the commit sha in the data file URLs of the imagenet-1k download script, in case we want to restructure the data files in the future. For example, we may split it into many more shards for better parallelism. cc @mariosasko
closed
https://github.com/huggingface/datasets/pull/4492
2022-06-14T17:15:17
2022-06-14T17:35:13
2022-06-14T17:25:45
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
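The idea in #4492 above can be sketched in plain Python: pin a Hub "resolve" URL to a commit sha rather than a branch, so later reorganization of the repository cannot break existing download links. The repo id, sha, and file path below are made-up placeholders, not the actual imagenet-1k values.

```python
# Hypothetical sketch of pinning a Hub data-file URL to a commit sha instead of
# a moving branch. The sha and path are placeholders for illustration only.

def resolve_url(repo_id: str, revision: str, path: str) -> str:
    """Build a huggingface.co 'resolve' URL pinned to a given revision."""
    return f"https://huggingface.co/datasets/{repo_id}/resolve/{revision}/{path}"

# Pinned to a specific commit: stable even if 'main' is later restructured.
pinned = resolve_url(
    "imagenet-1k",
    "1500f8c59b214ce459c0a593fbd160f1eeab9ce7",
    "data/train_images_0.tar.gz",
)
# Pinned to a branch: follows whatever 'main' currently contains.
floating = resolve_url("imagenet-1k", "main", "data/train_images_0.tar.gz")
```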
1,270,803,822
4,491
Dataset Viewer issue for Pavithree/test
### Link https://huggingface.co/datasets/Pavithree/test ### Description I have extracted the subset of original eli5 dataset found at hugging face. However, while loading the dataset It throws ArrowNotImplementedError: Unsupported cast from string to null using function cast_null error. Is there anything missi...
closed
https://github.com/huggingface/datasets/issues/4491
2022-06-14T13:23:10
2022-06-14T14:37:21
2022-06-14T14:34:33
{ "login": "Pavithree", "id": 23344465, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,270,719,074
4,490
Use `torch.nested_tensor` for arrays of varying length in torch formatter
Use `torch.nested_tensor` for arrays of varying length in `TorchFormatter`. The PyTorch API of nested tensors is in the prototype stage, so wait for it to become more mature.
open
https://github.com/huggingface/datasets/issues/4490
2022-06-14T12:19:40
2023-07-07T13:02:58
null
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
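For context on #4490 above: without nested tensors, a formatter that receives arrays of varying length must either return plain lists or pad everything to a common shape. A minimal padding sketch (a hypothetical helper, not the actual `TorchFormatter` code) shows the overhead that `torch.nested_tensor` would avoid:

```python
# Minimal sketch of padding ragged rows to a rectangle, the usual fallback
# when nested tensors are unavailable. Hypothetical helper for illustration.

def pad_ragged(rows, pad_value=0):
    """Pad variable-length rows to the length of the longest one."""
    width = max(len(r) for r in rows)
    return [list(r) + [pad_value] * (width - len(r)) for r in rows]

batch = [[1, 2, 3], [4], [5, 6]]
padded = pad_ragged(batch)  # [[1, 2, 3], [4, 0, 0], [5, 6, 0]]
```

A nested-tensor representation keeps each row at its own length instead of materializing the pad values.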
1,270,706,195
4,489
Add SV-Ident dataset
null
closed
https://github.com/huggingface/datasets/pull/4489
2022-06-14T12:09:00
2022-06-20T08:48:26
2022-06-20T08:37:27
{ "login": "e-tornike", "id": 20404466, "type": "User" }
[]
true
[]
1,270,613,857
4,488
Update PASS dataset version
Update the PASS dataset to version v3 (the newest one) from the [version history](https://github.com/yukimasano/PASS/blob/main/version_history.txt). PS: The older versions are not exposed as configs in the script because v1 was removed from Zenodo, and the same thing will probably happen to v2.
closed
https://github.com/huggingface/datasets/pull/4488
2022-06-14T10:47:14
2022-06-14T16:41:55
2022-06-14T16:32:28
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,270,525,163
4,487
Support streaming UDHR dataset
This PR: - Adds support for streaming UDHR dataset - Adds the BCP 47 language code as feature
closed
https://github.com/huggingface/datasets/pull/4487
2022-06-14T09:33:33
2022-06-15T05:09:22
2022-06-15T04:59:49
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,269,518,084
4,486
Add CCAgT dataset
As described in #4075, I could not generate the dummy data. Also, the data repository does not provide the split IDs, but I copied the functions that produce the correct data split. In summary, to have a better distribution, the data in this dataset should be separated based on the amount of NORs in each image.
closed
https://github.com/huggingface/datasets/pull/4486
2022-06-13T14:20:19
2022-07-04T14:37:03
2022-07-04T14:25:45
{ "login": "johnnv1", "id": 20444345, "type": "User" }
[]
true
[]
1,269,463,054
4,485
Fix cast to null
It currently fails with `ArrowNotImplementedError` instead of `TypeError` when one tries to cast an integer to the null type. Because of this, type inference breaks when one replaces null values with integers in `map` (it first tries to cast to the previous type before inferring the new type). Fix https://github.com/hug...
closed
https://github.com/huggingface/datasets/pull/4485
2022-06-13T13:44:32
2022-06-14T13:43:54
2022-06-14T13:34:14
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,269,383,811
4,484
Better ImportError message when a dataset script dependency is missing
When a dependency is missing for a dataset script, an ImportError message is shown, with a tip to install the missing dependencies. This message is not ideal at the moment: it may show duplicate dependencies, and is not very readable. I improved it from ``` ImportError: To be able to use bigbench, you need to insta...
closed
https://github.com/huggingface/datasets/pull/4484
2022-06-13T12:44:37
2022-07-08T14:30:44
2022-06-13T13:50:47
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,269,253,840
4,483
Dataset.map throws pyarrow.lib.ArrowNotImplementedError when converting from list of empty lists
## Describe the bug Dataset.map throws pyarrow.lib.ArrowNotImplementedError: Unsupported cast from int64 to null using function cast_null when converting from a type of 'empty lists' to 'lists with some type'. This appears to be due to the interaction of arrow internals and some assumptions made by datasets. T...
closed
https://github.com/huggingface/datasets/issues/4483
2022-06-13T10:47:52
2022-06-14T13:34:14
2022-06-14T13:34:14
{ "login": "sanderland", "id": 48946947, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
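The failure mode in #4483 above can be illustrated without Arrow: when a type-inference pass sees only empty lists, it can assign nothing better than a "null" item type, and a later batch with real values then requires an unsupported null-to-int cast. A hypothetical inference helper, not Arrow's actual implementation:

```python
# Sketch of why a column of empty lists is brittle for type inference:
# with no items to inspect, the inferred item type degenerates to "null".

def infer_item_type(list_column):
    """Return the type name of the first item found, or 'null' if none exist."""
    for row in list_column:
        for item in row:
            return type(item).__name__
    return "null"  # no items seen anywhere

assert infer_item_type([[], [], []]) == "null"
assert infer_item_type([[], [7], []]) == "int"
```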
1,269,237,447
4,482
Test that TensorFlow is not imported on startup
TF takes some time to be imported, and also uses some GPU memory. I just added a test to make sure that in the future it's never imported by default when ```python import datasets ``` is called. Right now this fails because `huggingface_hub` does import tensorflow (though this is fixed now on their `main` bra...
closed
https://github.com/huggingface/datasets/pull/4482
2022-06-13T10:33:49
2023-10-12T06:31:39
2023-10-11T09:11:56
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,269,187,792
4,481
Fix iwslt2017
The files were moved to google drive, I hosted them on the Hub instead (ok according to the license) I also updated the `datasets_infos.json`
closed
https://github.com/huggingface/datasets/pull/4481
2022-06-13T09:51:21
2022-10-26T09:09:31
2022-06-13T10:40:18
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,268,921,567
4,480
Bigbench tensorflow GPU dependency
## Describe the bug Loading bigbench ```py from datasets import load_dataset dataset = load_dataset("bigbench","swedish_to_german_proverbs") ``` tries to use the GPU and fails with OOM with the following error ``` Downloading and preparing dataset bigbench/swedish_to_german_proverbs (download: Unknown size, genera...
closed
https://github.com/huggingface/datasets/issues/4480
2022-06-13T05:24:06
2022-06-14T19:45:24
2022-06-14T19:45:23
{ "login": "cceyda", "id": 15624271, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
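One common workaround (not stated in the issue above, just a hedged suggestion) when an imported library eagerly grabs GPU memory is to hide the CUDA devices before the first import that pulls it in:

```python
import os

# Hide all CUDA devices from any library imported afterwards (e.g. TensorFlow,
# which bigbench pulls in), so loading the dataset cannot allocate GPU memory.
# This must run before the first `import tensorflow` anywhere in the process.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

# from datasets import load_dataset
# dataset = load_dataset("bigbench", "swedish_to_german_proverbs")
```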
1,268,558,237
4,479
Include entity positions as feature in ReCoRD
https://huggingface.co/datasets/super_glue/viewer/record/validation TLDR: We need to record entity positions, which are included in the source data but excluded by the loading script, to enable efficient and effective training for ReCoRD. Currently, the loading script ignores the entity positions ("entity_start",...
closed
https://github.com/huggingface/datasets/pull/4479
2022-06-12T11:56:28
2022-08-19T23:23:02
2022-08-19T13:23:48
{ "login": "richarddwang", "id": 17963619, "type": "User" }
[]
true
[]
1,268,358,213
4,478
Dataset slow during model training
## Describe the bug While migrating towards 🤗 Datasets, I encountered an odd performance degradation: training suddenly slows down dramatically. I train with an image dataset using Keras and execute a `to_tf_dataset` just before training. First, I have optimized my dataset following https://discuss.huggingface.co/...
open
https://github.com/huggingface/datasets/issues/4478
2022-06-11T19:40:19
2022-06-14T12:04:31
null
{ "login": "lehrig", "id": 9555494, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,268,308,986
4,477
Dataset Viewer issue for fgrezes/WIESP2022-NER
### Link _No response_ ### Description _No response_ ### Owner _No response_
closed
https://github.com/huggingface/datasets/issues/4477
2022-06-11T15:49:17
2022-07-18T13:07:33
2022-07-18T13:07:33
{ "login": "AshTayade", "id": 42551754, "type": "User" }
[]
false
[]
1,267,987,499
4,476
`to_pandas` doesn't take into account format.
**Is your feature request related to a problem? Please describe.** I have a large dataset that I need to convert part of to pandas to do some further analysis. Calling `to_pandas` directly on it is expensive. So I thought I could simply select the columns that I want and then call `to_pandas`. **Describe the solu...
closed
https://github.com/huggingface/datasets/issues/4476
2022-06-10T20:25:31
2022-06-15T17:41:41
2022-06-15T17:41:41
{ "login": "Dref360", "id": 8976546, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,267,798,451
4,475
Improve error message for missing packages from inside dataset script
Improve the error message for missing packages from inside a dataset script: With this change, the error message for missing packages for `bigbench` looks as follows: ``` ImportError: To be able to use bigbench, you need to install the following dependencies: - 'bigbench' using 'pip install "bigbench @ ht...
closed
https://github.com/huggingface/datasets/pull/4475
2022-06-10T16:59:36
2022-10-06T13:46:26
2022-06-13T13:16:43
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,267,767,541
4,474
[Docs] How to use with PyTorch page
Currently the docs about PyTorch are scattered around different pages, and we were missing a place to explain more in depth how to use and optimize a dataset for PyTorch. This PR is related to #4457 which is the TF counterpart :) cc @Rocketknight1 we can try to align both documentations contents now I think cc @s...
closed
https://github.com/huggingface/datasets/pull/4474
2022-06-10T16:25:49
2022-06-14T14:40:32
2022-06-14T14:04:33
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
true
[]
1,267,555,994
4,473
Add SST-2 dataset
Add SST-2 dataset. Currently it is part of GLUE benchmark. This PR adds it as a standalone dataset. CC: @julien-c
closed
https://github.com/huggingface/datasets/pull/4473
2022-06-10T13:37:26
2022-06-13T14:11:34
2022-06-13T14:01:09
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,267,488,523
4,472
Fix 401 error for unauthenticated requests to non-existing repos
The Hub now returns 401 instead of 404 for unauthenticated requests to non-existing repos. This PR adds support for the 401 error and fixes the CI failures on `master`.
closed
https://github.com/huggingface/datasets/pull/4472
2022-06-10T12:38:11
2022-06-10T13:05:11
2022-06-10T12:55:57
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,267,475,268
4,471
CI error with repo lhoestq/_dummy
## Describe the bug CI is failing because of repo "lhoestq/_dummy". See: https://app.circleci.com/pipelines/github/huggingface/datasets/12461/workflows/1b040b45-9578-4ab9-8c44-c643c4eb8691/jobs/74269 ``` requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/datasets/lhoest...
closed
https://github.com/huggingface/datasets/issues/4471
2022-06-10T12:26:06
2022-06-10T13:24:53
2022-06-10T13:24:53
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,267,470,051
4,470
Reorder returned validation/test splits in script template
null
closed
https://github.com/huggingface/datasets/pull/4470
2022-06-10T12:21:13
2022-06-10T18:04:10
2022-06-10T17:54:50
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,267,213,849
4,469
Replace data URLs in wider_face dataset once hosted on the Hub
This PR replaces the URLs of data files in Google Drive with our Hub ones, now that the data owners have agreed to host their data on the Hub. They also informed us that their dataset is licensed under CC BY-NC-ND.
closed
https://github.com/huggingface/datasets/pull/4469
2022-06-10T08:13:25
2022-06-10T16:42:08
2022-06-10T16:32:46
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,266,715,742
4,468
Generalize tutorials for audio and vision
This PR updates the tutorials to be more generalizable to all modalities. After reading the tutorials, a user should be able to load any type of dataset, know how to index into and slice a dataset, and do the most basic/common type of preprocessing (tokenization, resampling, applying transforms) depending on their data...
closed
https://github.com/huggingface/datasets/pull/4468
2022-06-09T22:00:44
2022-06-14T16:22:02
2022-06-14T16:12:00
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
true
[]
1,266,218,358
4,467
Transcript string 'null' converted to [None] by load_dataset()
## Issue I am training a Luxembourgish speech-recognition model in Colab with a custom dataset, including a dictionary of Luxembourgish words, for example the spoken numbers 0 to 9. When preparing the dataset with the script `ds_train1 = mydataset.map(prepare_dataset)` the following error was issued: ``` ...
closed
https://github.com/huggingface/datasets/issues/4467
2022-06-09T14:26:00
2023-07-04T02:18:39
2022-06-09T16:29:02
{ "login": "mbarnig", "id": 1360633, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,266,159,920
4,466
Optimize contiguous shard and select
Currently `.shard()` and `.select()` always create an indices mapping. However, if the requested data are contiguous, it is much faster to simply slice the Arrow table instead of building an indices mapping. In particular: - the shard/select operation will be much faster - reading speed will be much faster in t...
closed
https://github.com/huggingface/datasets/pull/4466
2022-06-09T13:45:39
2022-06-14T16:04:30
2022-06-14T15:54:45
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
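The optimization described in #4466 above hinges on a contiguity check: if the requested indices form an unbroken ascending range, the underlying table can be sliced directly (offset + length) instead of building an indices mapping. A hypothetical helper, not the actual `datasets` implementation:

```python
# Sketch of detecting when a selection is a contiguous slice.

def as_contiguous_slice(indices):
    """Return (offset, length) if indices are contiguous ascending, else None."""
    indices = list(indices)
    if not indices:
        return (0, 0)
    offset, length = indices[0], len(indices)
    if indices == list(range(offset, offset + length)):
        return (offset, length)
    return None

assert as_contiguous_slice(range(10, 15)) == (10, 5)  # sliceable
assert as_contiguous_slice([3, 5, 4]) is None          # needs an indices mapping
```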
1,265,754,479
4,465
Fix bigbench config names
Fix https://github.com/huggingface/datasets/issues/4462 in the case of bigbench
closed
https://github.com/huggingface/datasets/pull/4465
2022-06-09T08:06:19
2022-06-09T14:38:36
2022-06-09T14:29:19
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,265,682,931
4,464
Extend support for streaming datasets that use xml.dom.minidom.parse
This PR extends the support in streaming mode for datasets that use `xml.dom.minidom.parse`, by patching that function. This PR adds support for streaming datasets like "Yaxin/SemEval2015". Fix #4453.
closed
https://github.com/huggingface/datasets/pull/4464
2022-06-09T06:58:25
2022-06-09T08:43:24
2022-06-09T08:34:16
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,265,093,211
4,463
Use config_id to check split sizes instead of config name
Fix https://github.com/huggingface/datasets/issues/4462
closed
https://github.com/huggingface/datasets/pull/4463
2022-06-08T17:45:24
2023-09-24T10:03:00
2022-06-09T08:06:37
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,265,079,347
4,462
BigBench: NonMatchingSplitsSizesError when passing a dataset configuration parameter
As noticed in https://github.com/huggingface/datasets/pull/4125, when a dataset config class has a parameter that reduces the number of examples (e.g. named `max_examples`), then loading the dataset and passing `max_examples` raises `NonMatchingSplitsSizesError`. This is because it will check for the expected number ...
open
https://github.com/huggingface/datasets/issues/4462
2022-06-08T17:31:24
2022-07-05T07:39:55
null
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,264,800,451
4,461
AttributeError: module 'datasets' has no attribute 'load_dataset'
## Describe the bug I have pip installed datasets, but the package doesn't have these attributes: load_dataset, load_metric. ## Environment info - `datasets` version: 1.9.0 - Platform: Linux-5.13.0-44-generic-x86_64-with-debian-bullseye-sid - Python version: 3.6.13 - PyArrow version: 6.0.1
closed
https://github.com/huggingface/datasets/issues/4461
2022-06-08T13:59:20
2024-03-25T12:58:29
2022-06-08T14:41:00
{ "login": "AlexNLP", "id": 59248970, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,264,644,205
4,460
Drop Python 3.6 support
Remove the fallback imports/checks in the code needed for Python 3.6 and update the requirements/CI files. Also, use Python types for the NumPy dtype wherever possible to avoid deprecation warnings in newer NumPy versions.
closed
https://github.com/huggingface/datasets/pull/4460
2022-06-08T12:10:18
2022-07-26T19:16:39
2022-07-26T19:04:21
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,264,636,481
4,459
Add and fix language tags for udhr dataset
Related to #4362.
closed
https://github.com/huggingface/datasets/pull/4459
2022-06-08T12:03:42
2022-06-08T12:36:24
2022-06-08T12:27:13
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,263,531,911
4,457
First draft of the docs for TF + Datasets
I might cc a few of the other TF people to take a look when this is closer to being finished, but it's still a draft for now.
closed
https://github.com/huggingface/datasets/pull/4457
2022-06-07T16:06:48
2022-06-14T16:08:41
2022-06-14T15:59:08
{ "login": "Rocketknight1", "id": 12866554, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
true
[]
1,263,241,449
4,456
Workflow for Tabular data
Tabular data are treated very differently from data for NLP, audio, vision, etc., and therefore the workflow for tabular data in `datasets` is not ideal. For example, for tabular data it is common to use pandas/spark/dask to process the data, and then load the data into X and y (X is an array of features and y an arra...
open
https://github.com/huggingface/datasets/issues/4456
2022-06-07T12:48:22
2023-03-06T08:53:55
null
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "generic discussion", "color": "c5def5" } ]
false
[]
1,263,089,067
4,455
Update data URLs in fever dataset
As stated on their website, the data owners updated their URLs on 28/04/2022. This PR updates the data URLs. Fix #4452.
closed
https://github.com/huggingface/datasets/pull/4455
2022-06-07T10:40:54
2022-06-08T07:24:54
2022-06-08T07:16:17
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,262,674,973
4,454
Dataset Viewer issue for Yaxin/SemEval2015
### Link _No response_ ### Description The link could not be visited. ### Owner _No response_
closed
https://github.com/huggingface/datasets/issues/4454
2022-06-07T03:31:46
2022-06-07T11:53:11
2022-06-07T11:53:11
{ "login": "WithYouTo", "id": 18160852, "type": "User" }
[ { "name": "duplicate", "color": "cfd3d7" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,262,674,105
4,453
Dataset Viewer issue for Yaxin/SemEval2015
### Link _No response_ ### Description _No response_ ### Owner _No response_
closed
https://github.com/huggingface/datasets/issues/4453
2022-06-07T03:30:08
2022-06-09T08:34:16
2022-06-09T08:34:16
{ "login": "WithYouTo", "id": 18160852, "type": "User" }
[]
false
[]
1,262,529,654
4,452
Trying to load FEVER dataset results in NonMatchingChecksumError
## Describe the bug Trying to load the `fever` dataset fails with `datasets.utils.info_utils.NonMatchingChecksumError`. I tried with `download_mode="force_redownload"` but that did not fix the error. I also tried with `ignore_verification=True` but then that raised a `json.decoder.JSONDecodeError`. ## Steps to r...
closed
https://github.com/huggingface/datasets/issues/4452
2022-06-06T23:13:15
2022-12-15T13:36:40
2022-06-08T07:16:16
{ "login": "santhnm2", "id": 5347982, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,262,103,323
4,451
Use newer version of multi-news with fixes
Closes #4430.
closed
https://github.com/huggingface/datasets/pull/4451
2022-06-06T16:57:08
2022-06-07T17:40:01
2022-06-07T17:14:44
{ "login": "JohnGiorgi", "id": 8917831, "type": "User" }
[]
true
[]
1,261,878,324
4,450
Update README.md of fquad
null
closed
https://github.com/huggingface/datasets/pull/4450
2022-06-06T13:52:41
2022-06-06T14:51:49
2022-06-06T14:43:03
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,261,262,326
4,449
Rj
import android.content.DialogInterface; import android.database.Cursor; import android.os.Bundle; import android.view.View; import android.widget.ArrayAdapter; import android.widget.Button; import android.widget.EditText; import android.widget.Toast; import androidx.appcompat.app.AlertDialog; import androidx.appcompat...
closed
https://github.com/huggingface/datasets/issues/4449
2022-06-06T02:24:32
2022-06-06T15:44:50
2022-06-06T15:44:50
{ "login": "Aeckard45", "id": 87345839, "type": "User" }
[]
false
[]
1,260,966,129
4,448
New Preprocessing Feature - Deduplication [Request]
**Is your feature request related to a problem? Please describe.** Many large datasets are full of duplications and it has been shown that deduplicating datasets can lead to better performance while training, and more truthful evaluation at test-time. A feature that allows one to easily deduplicate a dataset can be...
open
https://github.com/huggingface/datasets/issues/4448
2022-06-05T05:32:56
2023-12-12T07:52:40
null
{ "login": "yuvalkirstain", "id": 57996478, "type": "User" }
[ { "name": "duplicate", "color": "cfd3d7" }, { "name": "enhancement", "color": "a2eeef" } ]
false
[]
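The deduplication feature requested in #4448 above can be sketched with exact hash-based matching in plain Python. A real `datasets` implementation would hash Arrow rows in batches and also likely offer near-duplicate detection; this only shows the core idea:

```python
# Exact-match deduplication sketch: hash each row's canonical JSON form and
# keep only the first occurrence. Illustrative only, not a library API.
import hashlib
import json

def deduplicate(rows):
    """Keep the first occurrence of each distinct row (dicts of JSON-able values)."""
    seen = set()
    unique = []
    for row in rows:
        key = hashlib.sha256(json.dumps(row, sort_keys=True).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(row)
    return unique

rows = [{"text": "a"}, {"text": "b"}, {"text": "a"}]
assert deduplicate(rows) == [{"text": "a"}, {"text": "b"}]
```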
1,260,041,805
4,447
Minor fixes/improvements in `scene_parse_150` card
Add `paperswithcode_id` and fix some links in the `scene_parse_150` card.
closed
https://github.com/huggingface/datasets/pull/4447
2022-06-03T15:22:34
2022-06-06T15:50:25
2022-06-06T15:41:37
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,260,028,995
4,446
Add missing kwargs to docstrings
null
closed
https://github.com/huggingface/datasets/pull/4446
2022-06-03T15:10:27
2022-06-03T16:10:09
2022-06-03T16:01:29
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,259,947,568
4,445
Fix missing args in docstring of load_dataset_builder
Currently, the docstring of `load_dataset_builder` only contains the first parameter `path` (no other): - https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/loading_methods#datasets.load_dataset_builder.path
closed
https://github.com/huggingface/datasets/pull/4445
2022-06-03T13:55:50
2022-06-03T14:35:32
2022-06-03T14:27:09
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,259,738,209
4,444
Fix kwargs in docstrings
To fix the rendering of `**kwargs` in docstrings, parentheses must be added afterwards. See: - huggingface/doc-builder/issues/235
closed
https://github.com/huggingface/datasets/pull/4444
2022-06-03T10:29:02
2022-06-03T11:01:28
2022-06-03T10:52:46
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,259,606,334
4,443
Dataset Viewer issue for openclimatefix/nimrod-uk-1km
### Link _No response_ ### Description _No response_ ### Owner _No response_
open
https://github.com/huggingface/datasets/issues/4443
2022-06-03T08:17:16
2023-09-25T12:15:08
null
{ "login": "ZYMXIXI", "id": 32382826, "type": "User" }
[]
false
[]
1,258,589,276
4,442
Dataset Viewer issue for amazon_polarity
### Link https://huggingface.co/datasets/amazon_polarity/viewer/amazon_polarity/test ### Description For some reason the train split is OK but the test split is not for this dataset: ``` Server error Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/cach...
closed
https://github.com/huggingface/datasets/issues/4442
2022-06-02T19:18:38
2022-06-07T18:50:37
2022-06-07T18:50:37
{ "login": "lewtun", "id": 26859204, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,258,568,656
4,441
Dataset Viewer issue for aeslc
### Link https://huggingface.co/datasets/aeslc ### Description The dataset viewer can't find `dataset_infos.json` in it's cache: ``` Server error Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/aeslc/eb8e30234cf9...
closed
https://github.com/huggingface/datasets/issues/4441
2022-06-02T18:57:12
2022-06-07T18:50:55
2022-06-07T18:50:55
{ "login": "lewtun", "id": 26859204, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,258,494,469
4,440
Update docs around audio and vision
As part of the strategy to center the docs around the different modalities, this PR updates the quickstart to include audio and vision examples. This improves the developer experience by making audio and vision content more discoverable, enabling users working in these modalities to also quickly get started without dig...
closed
https://github.com/huggingface/datasets/pull/4440
2022-06-02T17:42:03
2022-06-23T16:33:19
2022-06-23T16:23:02
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
true
[]
1,258,434,111
4,439
TIMIT won't load after manual download: Errors about files that don't exist
## Describe the bug I get the message from HuggingFace that it must be downloaded manually. From the URL provided in the message, I got to UPenn page for manual download. (UPenn apparently want $250? for the dataset??) ...So, ok, I obtained a copy from a friend and also a smaller version from Kaggle. But in both c...
closed
https://github.com/huggingface/datasets/issues/4439
2022-06-02T16:35:56
2022-06-03T08:44:17
2022-06-03T08:44:16
{ "login": "drscotthawley", "id": 13925685, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,258,255,394
4,438
Fix docstring of inspect_dataset
As pointed out by @sgugger: - huggingface/doc-builder/issues/235
closed
https://github.com/huggingface/datasets/pull/4438
2022-06-02T14:21:10
2022-06-02T16:40:55
2022-06-02T16:32:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,258,249,582
4,437
Add missing columns to `blended_skill_talk`
Adds the missing columns to `blended_skill_talk` to align the loading logic with [ParlAI](https://github.com/facebookresearch/ParlAI/blob/main/parlai/tasks/blended_skill_talk/build.py). Fix #4426
closed
https://github.com/huggingface/datasets/pull/4437
2022-06-02T14:16:26
2022-06-06T15:49:56
2022-06-06T15:41:25
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,257,758,834
4,436
Fix directory names for LDC data in timit_asr dataset
Related to: - #4422
closed
https://github.com/huggingface/datasets/pull/4436
2022-06-02T06:45:04
2022-06-02T09:32:56
2022-06-02T09:24:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,257,496,552
4,435
Load a local cached dataset that has been modified
## Describe the bug I have loaded a dataset as follows: ``` d = load_dataset("emotion", split="validation") ``` Afterwards I make some modifications to the dataset via a `map` call: ``` d.map(some_update_func, cache_file_name=modified_dataset) ``` This generates a cached version of the dataset on my local syst...
closed
https://github.com/huggingface/datasets/issues/4435
2022-06-02T01:51:49
2022-06-02T23:59:26
2022-06-02T23:59:18
{ "login": "mihail911", "id": 2789441, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,256,207,321
4,434
Fix dummy dataset generation script for handling nested types of _URLs
It seems that when users specify nested _URLs structures in their dataset script, an error is raised when generating the dummy dataset. I think the types of all elements in `dummy_data_dict.values()` should be checked because they may have different types. Linked to issue #4428 PS: I am not sure whether my co...
closed
https://github.com/huggingface/datasets/pull/4434
2022-06-01T14:53:15
2022-06-07T12:08:28
2022-06-07T09:24:09
{ "login": "silverriver", "id": 2529049, "type": "User" }
[]
true
[]
1,255,830,758
4,433
Fix script fetching and local path handling in `inspect_dataset` and `inspect_metric`
Fix #4348
closed
https://github.com/huggingface/datasets/pull/4433
2022-06-01T12:09:56
2022-06-09T10:34:54
2022-06-09T10:26:07
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,255,523,720
4,432
Fix builder docstring
Currently, the args of `DatasetBuilder` do not appear in the docs: https://huggingface.co/docs/datasets/v2.1.0/en/package_reference/builder_classes#datasets.DatasetBuilder
closed
https://github.com/huggingface/datasets/pull/4432
2022-06-01T09:45:30
2022-06-02T17:43:47
2022-06-02T17:35:15
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,254,618,948
4,431
Add personaldialog datasets
It seems that all tests pass
closed
https://github.com/huggingface/datasets/pull/4431
2022-06-01T01:20:40
2022-06-11T12:40:23
2022-06-11T12:31:16
{ "login": "silverriver", "id": 2529049, "type": "User" }
[]
true
[]
1,254,412,591
4,430
Add ability to load newer, cleaner version of Multi-News
**Is your feature request related to a problem? Please describe.** The [Multi-News dataloader points to the original version of the Multi-News dataset](https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/datasets/multi_news/multi_news.py#L47), but this has [known errors in it](https...
closed
https://github.com/huggingface/datasets/issues/4430
2022-05-31T21:00:44
2022-06-07T17:14:44
2022-06-07T17:14:44
{ "login": "JohnGiorgi", "id": 8917831, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,254,184,358
4,429
Update builder docstring for deprecated/added arguments
This PR updates the builder docstring with deprecated/added directives for arguments name/config_name. Follow up of: - #4414 - huggingface/doc-builder#233 First merge: - #4432
closed
https://github.com/huggingface/datasets/pull/4429
2022-05-31T17:37:25
2022-06-08T11:40:18
2022-06-08T11:31:45
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,254,092,818
4,428
Errors when building dummy data if you use nested _URLS
## Describe the bug When making dummy data with the `datasets-cli dummy_data` tool, an error will be raised if you use a nested _URLS in your dataset script. Traceback (most recent call last): File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 43, in <module> main() File "/hom...
closed
https://github.com/huggingface/datasets/issues/4428
2022-05-31T16:10:57
2022-06-07T09:24:09
2022-06-07T09:24:09
{ "login": "silverriver", "id": 2529049, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,253,959,313
4,427
Add HF.co for PRs/Issues for specific datasets
As in https://github.com/huggingface/transformers/pull/17485, issues and PRs for datasets under a namespace have to be on the HF Hub
closed
https://github.com/huggingface/datasets/pull/4427
2022-05-31T14:31:21
2022-06-01T12:37:42
2022-06-01T12:29:02
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,253,887,311
4,426
Add loading variable number of columns for different splits
**Is your feature request related to a problem? Please describe.** The original dataset `blended_skill_talk` consists of different sets of columns for the different splits: the (test/valid) splits have an additional data column `label_candidates` that the (train) split doesn't have. When loading such data, an exception occurs at ...
closed
https://github.com/huggingface/datasets/issues/4426
2022-05-31T13:40:16
2022-06-03T16:25:25
2022-06-03T16:25:25
{ "login": "DrMatters", "id": 22641583, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,253,641,604
4,425
Make extensions case-insensitive in timit_asr dataset
Related to #4422.
closed
https://github.com/huggingface/datasets/pull/4425
2022-05-31T10:10:04
2022-06-01T14:15:30
2022-06-01T14:06:51
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,253,542,488
4,424
Fix DuplicatedKeysError in timit_asr dataset
Fix #4422.
closed
https://github.com/huggingface/datasets/pull/4424
2022-05-31T08:47:45
2022-05-31T13:50:50
2022-05-31T13:42:31
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,253,326,023
4,423
Add new dataset MMChat
Hi, I am adding a new dataset MMChat. It seems that all tests are passed
closed
https://github.com/huggingface/datasets/pull/4423
2022-05-31T04:45:07
2022-06-11T12:40:52
2022-06-11T12:31:42
{ "login": "silverriver", "id": 2529049, "type": "User" }
[]
true
[]
1,253,146,511
4,422
Cannot load timit_asr data set
## Describe the bug I am trying to load the timit_asr data set. I have tried with a copy from the LDC, and a copy from deepai. In both cases they fail with a "duplicate key" error. With the LDC version I have to convert the file extensions all to upper-case before I can load it at all. ## Steps to reproduce the bug...
closed
https://github.com/huggingface/datasets/issues/4422
2022-05-30T22:00:22
2022-06-02T06:34:05
2022-05-31T13:42:31
{ "login": "bhaddow", "id": 992795, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,253,059,467
4,421
Add extractor for bzip2-compressed files
This change enables loading bzipped datasets, just like any other compressed dataset.
closed
https://github.com/huggingface/datasets/pull/4421
2022-05-30T19:19:40
2022-06-06T15:22:50
2022-06-06T15:22:50
{ "login": "osyvokon", "id": 2910707, "type": "User" }
[]
true
[]
1,252,739,239
4,420
Metric evaluation problems in multi-node, shared file system
## Describe the bug Metric evaluation fails in multi-node within a shared file system, because the master process cannot find the lock files from other nodes. (This issue was originally mentioned in the transformers repo https://github.com/huggingface/transformers/issues/17412) ## Steps to reproduce the bug 1. c...
closed
https://github.com/huggingface/datasets/issues/4420
2022-05-30T13:24:05
2023-07-11T09:33:18
2023-07-11T09:33:17
{ "login": "gullabi", "id": 40303490, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,252,652,896
4,419
Update `unittest` assertions over tuples from `assertEqual` to `assertTupleEqual`
**Is your feature request related to a problem? Please describe.** So this is more a readability improvement rather than a proposal, wouldn't it be better to use `assertTupleEqual` over the tuples rather than `assertEqual`? As `unittest` added that function in `v3.1`, as detailed at https://docs.python.org/3/library...
closed
https://github.com/huggingface/datasets/issues/4419
2022-05-30T12:13:18
2022-09-30T16:01:37
2022-09-30T16:01:37
{ "login": "alvarobartt", "id": 36760800, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,252,506,268
4,418
Add dataset MMChat
null
closed
https://github.com/huggingface/datasets/pull/4418
2022-05-30T10:10:40
2022-05-30T14:58:18
2022-05-30T14:58:18
{ "login": "silverriver", "id": 2529049, "type": "User" }
[]
true
[]
1,251,933,091
4,417
how to convert a dict generator into a huggingface dataset.
### Link _No response_ ### Description Hey there, I have used seqio to get a well distributed mixture of samples from multiple dataset. However the resultant output from seqio is a python generator dict, which I cannot produce back into huggingface dataset. The generator contains all the samples needed for ...
closed
https://github.com/huggingface/datasets/issues/4417
2022-05-29T16:28:27
2022-09-16T14:44:19
2022-09-16T14:44:19
{ "login": "StephennFernandes", "id": 32235549, "type": "User" }
[ { "name": "question", "color": "d876e3" } ]
false
[]
1,251,875,763
4,416
Add LCCC dataset
Hi, I am trying to add a new dataset lccc. All tests are passed.
closed
https://github.com/huggingface/datasets/pull/4416
2022-05-29T12:27:19
2022-06-15T10:28:59
2022-06-02T09:13:46
{ "login": "silverriver", "id": 2529049, "type": "User" }
[]
true
[]
1,251,002,981
4,415
Update `dataset_infos.json` with new split info in `Dataset.push_to_hub` to avoid verification error
Update `dataset_infos.json` when pushing splits one by one via `Dataset.push_to_hub` to avoid the splits verification error. TODO: ~~- [ ] handle token + `{Audio, Image}.embed_storage`~~ - [x] tests
closed
https://github.com/huggingface/datasets/pull/4415
2022-05-27T17:03:42
2022-06-07T12:42:25
2022-06-07T12:33:52
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,250,546,888
4,414
Rename DatasetBuilder config_name
This PR renames the DatasetBuilder keyword argument `name` to `config_name` so that: - it avoids confusion with the attribute `DatasetBuilder.name`, which is different - it aligns with the Dataset property name `config_name`, defined in `DatasetInfoMixin.config_name` Other simpler possibility could be to rename it...
closed
https://github.com/huggingface/datasets/pull/4414
2022-05-27T09:28:02
2022-05-31T15:07:21
2022-05-31T14:58:51
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,250,259,822
4,413
Dataset Viewer issue for ett
### Link https://huggingface.co/datasets/ett ### Description Timestamp is not JSON serializable. ``` Status code: 500 Exception: Status500Error Message: Type is not JSON serializable: Timestamp ``` ### Owner No
closed
https://github.com/huggingface/datasets/issues/4413
2022-05-27T02:12:35
2022-06-15T07:30:46
2022-06-15T07:30:46
{ "login": "dgcnz", "id": 24966039, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,249,490,179
4,412
Skip hidden files/directories in data files resolution and `iter_files`
Fix #4115
closed
https://github.com/huggingface/datasets/pull/4412
2022-05-26T12:10:28
2022-06-15T17:11:25
2022-06-01T13:04:16
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,249,462,390
4,411
Update `_format_columns` in `remove_columns`
As explained at #4398, when calling `dataset.add_faiss_index` under certain conditions when calling a sequence of operations `cast_column`, `map`, and `remove_columns`, it fails as it's trying to look for already removed columns. So on, after testing some possible fixes, it seems that setting the dataset format righ...
closed
https://github.com/huggingface/datasets/pull/4411
2022-05-26T11:40:06
2022-06-14T19:05:37
2022-06-14T16:01:56
{ "login": "alvarobartt", "id": 36760800, "type": "User" }
[]
true
[]
1,249,148,457
4,410
Remove Google Drive URL in spider dataset
The `spider` dataset is distributed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) license. Fix #4401.
closed
https://github.com/huggingface/datasets/pull/4410
2022-05-26T06:17:35
2022-05-26T06:48:42
2022-05-26T06:40:12
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,249,083,179
4,409
Update: add using pcm bytes (#4323)
first of all, please look #4323 why i can not use {"path","array","sampling_rate"} because sf.write(format="wav") and sf.read(BytesIO) is changed my pcm data value maybe, i think wav got header but, pcm is not. and variable naming, pcm data is "byte" type. so, "array" name is not fair i think so, i use scipy l...
closed
https://github.com/huggingface/datasets/pull/4409
2022-05-26T04:26:36
2022-07-07T13:27:29
2022-07-07T13:16:09
{ "login": "YooSungHyun", "id": 34292279, "type": "User" }
[]
true
[]
1,248,687,574
4,408
Update imagenet gate
null
closed
https://github.com/huggingface/datasets/pull/4408
2022-05-25T20:32:19
2022-05-25T20:45:11
2022-05-25T20:36:47
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,248,671,778
4,407
Dataset Viewer issue for conll2012_ontonotesv5
### Link https://huggingface.co/datasets/conll2012_ontonotesv5 ### Description Dataset viewer outage. ### Owner No
closed
https://github.com/huggingface/datasets/issues/4407
2022-05-25T20:18:33
2022-06-07T18:39:16
2022-06-07T18:39:16
{ "login": "jiangwangyi", "id": 39762734, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,248,626,622
4,406
Improve language tag for PIAF dataset
Hi, As pointed out by @lhoestq in this discussion (https://huggingface.co/datasets/asi/wikitext_fr/discussions/1), it is not yet possible to edit datasets outside of a namespace with the Hub PR feature and that you have to go through GitHub. This modification should allow better referencing since only the xx lan...
closed
https://github.com/huggingface/datasets/pull/4406
2022-05-25T19:41:55
2022-05-27T14:51:23
2022-05-27T14:51:23
{ "login": "lbourdois", "id": 58078086, "type": "User" }
[]
true
[]
1,248,574,087
4,405
[TypeError: Couldn't cast array of type] Cannot process dataset in v2.2.2
## Describe the bug I am trying to process the [conll2012_ontonotesv5](https://huggingface.co/datasets/conll2012_ontonotesv5) dataset in `datasets` v2.2.2 and am running into a type error when casting the features. ## Steps to reproduce the bug ```python import os from typing import ( List, Dict, ) f...
closed
https://github.com/huggingface/datasets/issues/4405
2022-05-25T18:56:43
2022-06-07T14:27:20
2022-06-07T14:27:20
{ "login": "jiangwangyi", "id": 39762734, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,248,572,899
4,404
Dataset should have a `.name` field
**Is your feature request related to a problem? Please describe.** If building pipelines that can evaluate on more than one dataset, it would be nice to be able to log results of things like `Evaluating on {dataset.name}` or `results for {dataset.name} are: {results}` Without some way of concisely identifying a dat...
closed
https://github.com/huggingface/datasets/issues/4404
2022-05-25T18:56:08
2022-09-13T15:09:30
2022-06-16T10:47:53
{ "login": "f4hy", "id": 36440, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,248,390,134
4,403
Uncomment logging deactivation for ArrowBasedBuilder
null
closed
https://github.com/huggingface/datasets/pull/4403
2022-05-25T16:46:15
2022-05-31T08:33:36
2022-05-31T08:25:02
{ "login": "thomasw21", "id": 24695242, "type": "User" }
[]
true
[]
1,248,078,067
4,402
Skip identical files in `push_to_hub` instead of overwriting
Skip identical files instead of overwriting them to save bandwidth and circumvent (user-side/server-side) errors, which can arise when working with large datasets due to long-lasting HTTP connections, by repeating calls to `push_to_hub` to resume an upload. To be able to check if an upload can be resumed, this PR mo...
closed
https://github.com/huggingface/datasets/pull/4402
2022-05-25T13:12:51
2022-05-25T15:16:36
2022-05-25T15:08:03
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,247,695,921
4,401
"NonMatchingChecksumError" when importing 'spider' dataset
## Describe the bug When importing 'spider' dataset [https://huggingface.co/datasets/spider] an error occurs ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('spider') ``` ## Expected results Dataset object ## Actual results NonMatchingChecksumError: Check...
closed
https://github.com/huggingface/datasets/issues/4401
2022-05-25T07:45:07
2022-05-26T06:40:12
2022-05-26T06:40:12
{ "login": "OmarAlaaeldein", "id": 81417777, "type": "User" }
[ { "name": "hosted-on-google-drive", "color": "8B51EF" } ]
false
[]
1,247,404,237
4,400
load dataset wikitext-2-raw-v1 failed. Could not reach wikitext-2-raw-v1.py.
## Describe the bug Could not reach wikitext-2-raw-v1.py ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("wikitext-2-raw-v1") ``` ## Expected results Download `wikitext-2-raw-v1` dataset successfully. ## Actual results ``` File "load_datasets.py", line 13, in <m...
closed
https://github.com/huggingface/datasets/issues/4400
2022-05-25T03:10:44
2022-10-24T06:10:27
2022-05-25T03:26:36
{ "login": "cailun01", "id": 20658907, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,246,948,299
4,399
LocalDatasetModuleFactoryWithoutScript extracts invalid builder name
## Describe the bug Trying to load a local dataset raises an error indicating that the config builder has to have a name. No error should be reported, since the call is completly valid. ## Steps to reproduce the bug ```python load_dataset("./data/some-dataset/", name="some-name") ``` ## Expected results The...
closed
https://github.com/huggingface/datasets/issues/4399
2022-05-24T18:03:01
2022-09-12T15:30:43
2022-09-12T15:30:43
{ "login": "apohllo", "id": 40543, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "good first issue", "color": "7057ff" } ]
false
[]
1,246,666,749
4,398
Calling `cast_column`/`remove_columns` and a sequence of `map` operations ends up making `faiss` fail with `ValueError`
First of all, sorry in advance for the unclear title, but this bug is weird to explain (at least for me), so I tried my best to summarize all the information in this issue. ## Describe the bug Calling a certain combination of operations over a 🤗 `Dataset` and then trying to calculate the `faiss` index with `.add...
closed
https://github.com/huggingface/datasets/issues/4398
2022-05-24T14:41:34
2022-06-14T16:01:56
2022-06-14T16:01:56
{ "login": "alvarobartt", "id": 36760800, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,246,597,632
4,397
Fix dependency on dill version
We had to make a hotfix by pinning dill: - #4380 because from version 0.3.5, our custom `save_function` pickling function was raising an exception: - #4379 This PR fixes this by implementing our custom `save_function` depending on the version of dill. CC: @anivegesana This PR needs first being merged: -...
closed
https://github.com/huggingface/datasets/pull/4397
2022-05-24T13:54:23
2022-10-26T08:45:37
2022-05-25T13:54:08
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,245,479,399
4,396
Fix URL in gem dataset for totto config
As commented in: - https://github.com/huggingface/datasets/issues/4386#issuecomment-1134902372 CC: @StevenTang1998
closed
https://github.com/huggingface/datasets/pull/4396
2022-05-23T17:16:12
2022-05-24T05:49:11
2022-05-24T05:41:00
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,245,436,486
4,395
Add Pascal VOC dataset
This PR adds the Pascal VOC dataset in the same way TFDS has it added. I believe we can iterate on this dataset and in future versions include more data, such as segmentation masks, but for now I think it is a good idea to just add it the same way as TFDS to get a solid first version out there.
closed
https://github.com/huggingface/datasets/pull/4395
2022-05-23T16:34:05
2023-09-24T09:37:05
2022-10-03T09:36:56
{ "login": "nateraw", "id": 32437151, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,245,221,657
4,394
trainer became extremely slow after reload dataset by `load_from_disk`
## Describe the bug Due to memory problem, I need to save my tokenized datasets locally by CPU and reload it by multi GPU for running training script. However, after I reload it by `load_from_disk` and start training, the speed is extremely slow. It says I need about 1500 hours with 8 A100 cards. Before this, I can ru...
open
https://github.com/huggingface/datasets/issues/4394
2022-05-23T14:04:37
2023-11-23T07:40:30
null
{ "login": "conan1024hao", "id": 50416856, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,244,876,662
4,393
Update CI deprecated legacy image
Now our CI still uses a deprecated legacy image: > You’re using a [deprecated Docker convenience image.](https://discuss.circleci.com/t/legacy-convenience-image-deprecation/41034) Upgrade to a next-gen Docker convenience image. This PR updates to next-generation convenience image. Related to: - #2955
closed
https://github.com/huggingface/datasets/pull/4393
2022-05-23T09:35:42
2022-05-23T10:08:28
2022-05-23T09:59:55
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,244,859,971
4,392
remove int documentation from logging docs
Removes the `int` documentation from the [logging section](https://huggingface.co/docs/datasets/package_reference/logging_methods#levels) of the docs.
closed
https://github.com/huggingface/datasets/pull/4392
2022-05-23T09:24:55
2022-05-23T15:16:55
2022-05-23T15:08:32
{ "login": "lvwerra", "id": 8264887, "type": "User" }
[]
true
[]