column           dtype               min                   max
id               int64               599M                  3.26B
number           int64               1                     7.7k
title            string (length)     1                     290
body             string (length)     0                     228k
state            string (2 classes)
html_url         string (length)     46                    51
created_at       timestamp[s]        2020-04-14 10:18:02   2025-07-23 08:04:53
updated_at       timestamp[s]        2020-04-27 16:04:17   2025-07-23 18:53:44
closed_at        timestamp[s]        2020-04-14 12:01:40   2025-07-23 16:44:42
user             dict
labels           list (length)       0                     4
is_pull_request  bool (2 classes)
comments         list (length)       0                     0
1,380,952,960
5,005
Release 2.5.0 breaks transformers CI
## Describe the bug As reported by @lhoestq: > see https://app.circleci.com/pipelines/github/huggingface/transformers/47634/workflows/b491886b-e66e-4edb-af96-8b459e72aa25/jobs/564563 this is used here: [https://github.com/huggingface/transformers/blob/3b19c0317b6909e2d7f11b5053895ac55[…]torch/speech-pretraining/ru...
closed
https://github.com/huggingface/datasets/issues/5005
2022-09-21T13:39:19
2022-09-21T14:11:57
2022-09-21T14:11:57
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,380,860,606
5,004
Remove license tag file and validation
As requested, we are removing the validation of the licenses from `datasets` because this is done on the Hub. Fix #4994. Related to: - #4926, which is removing all the validation from `datasets`
closed
https://github.com/huggingface/datasets/pull/5004
2022-09-21T12:35:14
2022-09-22T11:47:41
2022-09-22T11:45:46
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,380,617,353
5,003
Fix missing use_auth_token in streaming docstrings
This PRs fixes docstrings: - adds the missing `use_auth_token` param - updates syntax of param types - adds params to docstrings without them - fixes return/yield types - fixes syntax
closed
https://github.com/huggingface/datasets/pull/5003
2022-09-21T09:27:03
2022-09-21T16:24:01
2022-09-21T16:20:59
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,380,589,402
5,002
Dataset Viewer issue for loubnabnl/humaneval-x
### Link https://huggingface.co/datasets/loubnabnl/humaneval-x/viewer/ ### Description The dataset has subsets but the viewer gets stuck in the default subset even when I select another one (the data loading of the subsets works fine) ### Owner Yes
closed
https://github.com/huggingface/datasets/issues/5002
2022-09-21T09:06:17
2022-09-21T11:49:49
2022-09-21T11:49:49
{ "login": "loubnabnl", "id": 44069155, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,379,844,820
5,001
Support loading XML datasets
CC: @davanstrien
open
https://github.com/huggingface/datasets/pull/5001
2022-09-20T18:42:58
2024-05-22T22:13:25
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,379,709,398
5,000
Dataset Viewer issue for asapp/slue
### Link https://huggingface.co/datasets/asapp/slue/viewer/ ### Description Hi, I wonder how to get the dataset viewer of our slue dataset to work. Best, Felix ### Owner Yes
closed
https://github.com/huggingface/datasets/issues/5000
2022-09-20T16:45:45
2022-09-27T07:04:03
2022-09-21T07:24:07
{ "login": "fwu-asapp", "id": 56092571, "type": "User" }
[]
false
[]
1,379,610,030
4,999
Add EmptyDatasetError
examples: from the hub: ```python Traceback (most recent call last): File "playground/ttest.py", line 3, in <module> print(load_dataset("lhoestq/empty")) File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1686, in load_dataset **config_kwargs, File "/Users/quentinlhoest/Deskto...
closed
https://github.com/huggingface/datasets/pull/4999
2022-09-20T15:28:05
2022-09-21T12:23:43
2022-09-21T12:21:24
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
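The PR above introduces a dedicated error type for empty repositories, so the viewer can show a call to action instead of the traceback quoted in the body. A minimal sketch of the shape such an error takes (the real class in `datasets` carries more context than this, and the message wording here is illustrative):

```python
class EmptyDatasetError(FileNotFoundError):
    """Dedicated error for a dataset directory or repo with no data files.
    Subclassing FileNotFoundError keeps existing `except` clauses working."""

def resolve_data_files(files):
    # Raise the specific error instead of letting a generic failure bubble up.
    if not files:
        raise EmptyDatasetError("The directory doesn't contain any data file")
    return files

try:
    resolve_data_files([])
except EmptyDatasetError as err:
    message = str(err)
print(message)
```

Because it subclasses `FileNotFoundError`, callers can catch either the specific or the generic type.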
1,379,466,717
4,998
Don't add a tag on the Hub on release
Datasets with no namespace on the Hub have tags to redirect to the version of datasets where they come from. I’m about to remove them all because I think it looks bad/unexpected in the UI and it’s not actually useful. Therefore I'm also disabling tagging. Note that the CI job will be completely removed in https:/...
closed
https://github.com/huggingface/datasets/pull/4998
2022-09-20T13:54:57
2022-09-20T14:11:46
2022-09-20T14:08:54
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,379,430,711
4,997
Add support for parsing JSON files in array form
Support parsing JSON files in the array form (top-level object is an array). For simplicity, `json.load` is used for decoding. This means the entire file is loaded into memory. If requested, we can optimize this by introducing a param similar to `lines` in [`pandas.read_json`](https://pandas.pydata.org/docs/reference/a...
closed
https://github.com/huggingface/datasets/pull/4997
2022-09-20T13:31:26
2022-09-20T15:42:40
2022-09-20T15:40:06
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
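The trade-off described in the PR above, whole-file decoding for array-form JSON versus incremental decoding for line-delimited JSON, can be sketched in plain Python (the `lines` parameter name the body mentions is borrowed from `pandas.read_json`):

```python
import io
import json

# Array form: the top-level value is a JSON array, so json.load must
# read and decode the entire payload into memory at once.
array_form = io.StringIO('[{"a": 1}, {"a": 2}]')
rows_array = json.load(array_form)

# Lines form (JSON Lines): one object per line, decodable incrementally,
# which is what a `lines`-style parameter would toggle.
lines_form = io.StringIO('{"a": 1}\n{"a": 2}\n')
rows_jsonl = [json.loads(line) for line in lines_form if line.strip()]

print(rows_array == rows_jsonl)  # True: both forms yield the same records
```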
1,379,345,161
4,996
Dataset Viewer issue for Jean-Baptiste/wikiner_fr
### Link https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr ### Description ``` Error code: StreamingRowsError Exception: FileNotFoundError Message: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json' Tra...
closed
https://github.com/huggingface/datasets/issues/4996
2022-09-20T12:32:07
2022-09-27T12:35:44
2022-09-27T12:35:44
{ "login": "severo", "id": 1676121, "type": "User" }
[]
false
[]
1,379,108,482
4,995
Get a specific Exception when the dataset has no data
In the dataset viewer on the Hub (https://huggingface.co/datasets/glue/viewer), we would like (https://github.com/huggingface/moon-landing/issues/3882) to show a specific message when the repository lacks any data files. In that case, instead of showing a complex traceback, we want to show a call to action to help t...
closed
https://github.com/huggingface/datasets/issues/4995
2022-09-20T09:31:59
2022-09-21T12:21:25
2022-09-21T12:21:25
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,379,084,015
4,994
delete the hardcoded license list in `datasets`
> Feel free to delete the license list in `datasets` [...] > > Also FYI in #4926 I also removed all the validation steps anyway (language, license, types etc.) _Originally posted by @lhoestq in https://github.com/huggingface/datasets/issues/4930#issuecomment-1238401662_ > [...], in my opinion we can just delete...
closed
https://github.com/huggingface/datasets/issues/4994
2022-09-20T09:14:41
2022-09-22T11:45:47
2022-09-22T11:45:47
{ "login": "julien-c", "id": 326577, "type": "User" }
[]
false
[]
1,379,044,435
4,993
fix: avoid casting tuples after Dataset.map
This PR updates features.py to avoid casting tuples to lists when reading the results of Dataset.map as suggested by @lhoestq [here](https://github.com/huggingface/datasets/issues/4676#issuecomment-1187371367) in https://github.com/huggingface/datasets/issues/4676.
closed
https://github.com/huggingface/datasets/pull/4993
2022-09-20T08:45:16
2022-09-20T16:11:27
2022-09-20T13:08:29
{ "login": "szmoro", "id": 5697926, "type": "User" }
[]
true
[]
1,379,031,842
4,992
Support streaming iwslt2017 dataset
Support streaming iwslt2017 dataset. Once this PR is merged: - [x] Remove old ".tgz" data files from the Hub.
closed
https://github.com/huggingface/datasets/pull/4992
2022-09-20T08:35:41
2022-09-20T09:27:55
2022-09-20T09:15:24
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,378,898,752
4,991
Fix missing tags in dataset cards
Fix missing tags in dataset cards: - aeslc - empathetic_dialogues - event2Mind - gap - iwslt2017 - newsgroup - qa4mre - scicite This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833 - #4891 - #4896 - #4908 - #4921 - #4931 - ...
closed
https://github.com/huggingface/datasets/pull/4991
2022-09-20T06:42:07
2022-09-22T12:25:32
2022-09-20T07:37:30
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,378,120,806
4,990
"no-token" is passed to `huggingface_hub` when token is `None`
## Describe the bug In the 2 lines listed below, a token is passed to `huggingface_hub` to get information from a dataset. If no token is provided, a "no-token" string is passed. What is its purpose? If there is no real one, I would prefer the `None` value to be sent directly, to be handled by `huggingface_hub`. I fee...
closed
https://github.com/huggingface/datasets/issues/4990
2022-09-19T15:14:40
2022-09-30T09:16:00
2022-09-30T09:16:00
{ "login": "Wauplin", "id": 11801849, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,376,832,233
4,989
Running add_column() seems to corrupt existing sequence-type column info
I have a dataset that contains a column ("foo") that is a sequence type of length 4. So when I run .to_pandas() on it, the resulting dataframe correctly contains 4 columns - foo_0, foo_1, foo_2, foo_3. So the 1st row of the dataframe might look like: ds = load_dataset(...) df = ds.to_pandas() df: foo_0 | foo_1 ...
closed
https://github.com/huggingface/datasets/issues/4989
2022-09-17T17:42:05
2022-09-19T12:54:54
2022-09-19T12:54:54
{ "login": "derek-rocheleau", "id": 93728165, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,376,096,584
4,988
Add `IterableDataset.from_generator` to the API
We've just added `Dataset.from_generator` to the API. It would also be cool to add `IterableDataset.from_generator` to support creating an iterable dataset from a generator. cc @lhoestq
closed
https://github.com/huggingface/datasets/issues/4988
2022-09-16T15:19:41
2022-10-05T12:10:49
2022-10-05T12:10:49
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "good first issue", "color": "7057ff" } ]
false
[]
1,376,006,477
4,987
Embed image/audio data in dl_and_prepare parquet
Embed the bytes of the image or audio files in the Parquet files directly, instead of having a "path" that points to a local file. Indeed Parquet files are often used to share data or to be used by workers that may not have access to the local files.
closed
https://github.com/huggingface/datasets/pull/4987
2022-09-16T14:09:27
2022-09-16T16:24:47
2022-09-16T16:22:35
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,375,895,035
4,986
[doc] Fix broken snippet that had too many quotes
Hello! ### Pull request overview * Fix broken snippet in https://huggingface.co/docs/datasets/main/en/process that has too many quotes ### Details The snippet in question can be found here: https://huggingface.co/docs/datasets/main/en/process#map This screenshot shows the issue, there is a quote too many, caus...
closed
https://github.com/huggingface/datasets/pull/4986
2022-09-16T12:41:07
2022-09-16T22:12:21
2022-09-16T17:32:14
{ "login": "tomaarsen", "id": 37621491, "type": "User" }
[]
true
[]
1,375,807,768
4,985
Prefer split patterns from directories over split patterns from filenames
related to https://github.com/huggingface/datasets/issues/4895
closed
https://github.com/huggingface/datasets/pull/4985
2022-09-16T11:20:40
2022-11-02T11:54:28
2022-09-29T08:07:49
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
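The precedence the PR title describes can be illustrated with a dependency-free sketch: try a directory-based pattern first, and fall back to a looser filename-based pattern only when nothing matched. The patterns and file names below are hypothetical, not the ones `datasets` actually uses:

```python
import fnmatch

files = [
    "data/train/part0.csv",
    "data/test/part0.csv",
    "notes/train_log.txt",
]

# Directory-based pattern first; looser filename-based pattern as fallback.
dir_pattern = "data/train/*"
name_pattern = "*train*"

matches = fnmatch.filter(files, dir_pattern) or fnmatch.filter(files, name_pattern)
print(matches)  # only the file under data/train/ is selected
```

Without the precedence, `notes/train_log.txt` would also match the filename pattern and leak into the train split.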
1,375,690,330
4,984
docs: ✏️ add links to the Datasets API
I added some links to the Datasets API in the docs. See https://github.com/huggingface/datasets-server/pull/566 for a companion PR in the datasets-server. The idea is to improve the discovery of the API through the docs. I'm a bit shy about pasting a lot of links to the API in the docs, so it's minimal for now. I'm ...
closed
https://github.com/huggingface/datasets/pull/4984
2022-09-16T09:34:12
2022-09-16T13:10:14
2022-09-16T13:07:33
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
1,375,667,654
4,983
How to convert torch.utils.data.Dataset to huggingface dataset?
I looked through the huggingface dataset docs, and it seems that there is no official support function to convert `torch.utils.data.Dataset` to huggingface dataset. However, there is a way to convert huggingface dataset to `torch.utils.data.Dataset`, like below: ```python from datasets import Dataset data = [[1, 2]...
closed
https://github.com/huggingface/datasets/issues/4983
2022-09-16T09:15:10
2023-12-14T20:54:15
2022-09-20T11:23:43
{ "login": "DEROOCE", "id": 77595952, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
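The standard answer to the question above is to wrap the index-style dataset (anything defining `__len__`/`__getitem__`, like a `torch.utils.data.Dataset`) in a generator of example dicts, the shape `Dataset.from_generator` consumes. A dependency-free sketch of that adapter, with hypothetical class and helper names:

```python
class SquaresDataset:
    """Stand-in for a torch.utils.data.Dataset (defines __len__/__getitem__)."""
    def __len__(self):
        return 4

    def __getitem__(self, i):
        return {"x": i, "y": i * i}

def iter_examples(ds):
    """Yield one example dict per index; a generator function like this
    is what Dataset.from_generator accepts."""
    for i in range(len(ds)):
        yield ds[i]

examples = list(iter_examples(SquaresDataset()))
print(examples[2])  # {'x': 2, 'y': 4}
```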
1,375,604,693
4,982
Create dataset_infos.json with VALIDATION and TEST splits
The problem is described in that [issue](https://github.com/huggingface/datasets/issues/4895#issuecomment-1247975569). > When I try to create data_infos.json using datasets-cli test Peter.py --save_infos --all_configs I get an error: > ValueError: Unknown split "test". Should be one of ['train']. > > The data_i...
closed
https://github.com/huggingface/datasets/issues/4982
2022-09-16T08:21:19
2022-09-28T07:59:39
2022-09-28T07:59:39
{ "login": "skalinin", "id": 26695348, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,375,086,773
4,981
Can't create a dataset with `float16` features
## Describe the bug I can't create a dataset with `float16` features. I understand from the traceback that this is a `pyarrow` error, but I don't see anywhere in the `datasets` documentation about how to successfully do this. Is it actually supported? I've tried older versions of `pyarrow` as well with the same e...
open
https://github.com/huggingface/datasets/issues/4981
2022-09-15T21:03:24
2025-06-12T11:47:42
null
{ "login": "dconathan", "id": 15098095, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,374,868,083
4,980
Make `pyarrow` optional
**Is your feature request related to a problem? Please describe.** Is `pyarrow` really needed for every dataset? **Describe the solution you'd like** It is made optional. **Describe alternatives you've considered** Likely, no.
closed
https://github.com/huggingface/datasets/issues/4980
2022-09-15T17:38:03
2022-09-16T17:23:47
2022-09-16T17:23:47
{ "login": "KOLANICH", "id": 240344, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,374,820,758
4,979
Fix missing tags in dataset cards
Fix missing tags in dataset cards: - amazon_us_reviews - art - discofuse - indic_glue - ubuntu_dialogs_corpus This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833 - #4891 - #4896 - #4908 - #4921 - #4931
closed
https://github.com/huggingface/datasets/pull/4979
2022-09-15T16:51:03
2022-09-22T12:37:55
2022-09-15T17:12:09
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,374,271,504
4,978
Update IndicGLUE download links
null
closed
https://github.com/huggingface/datasets/pull/4978
2022-09-15T10:05:57
2022-09-15T22:00:20
2022-09-15T21:57:34
{ "login": "sumanthd17", "id": 28291870, "type": "User" }
[]
true
[]
1,372,962,157
4,977
Providing dataset size
**Is your feature request related to a problem? Please describe.** Especially for big datasets like [LAION](https://huggingface.co/datasets/laion/laion2B-en/), it's hard to know exactly the downloaded size (because there are many files and you don't have their exact size when downloaded). **Describe the solution yo...
open
https://github.com/huggingface/datasets/issues/4977
2022-09-14T13:09:27
2022-09-15T16:03:58
null
{ "login": "sashavor", "id": 14205986, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,372,322,382
4,976
Hope to adapt Python3.9 as soon as possible
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. **Describe the solution you'd like** A clear and concise description of what you want to happen. **Describe alternatives you've considered** A clear and concise description of any alternat...
open
https://github.com/huggingface/datasets/issues/4976
2022-09-14T04:42:22
2022-09-26T16:32:35
null
{ "login": "RedHeartSecretMan", "id": 74012141, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,371,703,691
4,975
Add `fn_kwargs` param to `IterableDataset.map`
Add the `fn_kwargs` parameter to `IterableDataset.map`. ("Resolves" https://discuss.huggingface.co/t/how-to-use-large-image-text-datasets-in-hugging-face-hub-without-downloading-for-free/22780/3)
closed
https://github.com/huggingface/datasets/pull/4975
2022-09-13T16:19:05
2023-05-05T16:53:43
2022-09-13T16:45:34
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,371,682,020
4,974
[GH->HF] Part 2: Remove all dataset scripts from github
Now that all the datasets live on the Hub we can remove the /datasets directory that contains all the dataset scripts of this repository - [x] Needs https://github.com/huggingface/datasets/pull/4973 to be merged first - [x] and PR to be enabled on the Hub for non-namespaced datasets
closed
https://github.com/huggingface/datasets/pull/4974
2022-09-13T16:01:12
2022-10-03T17:09:39
2022-10-03T17:07:32
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,371,600,074
4,973
[GH->HF] Load datasets from the Hub
Currently datasets with no namespace (e.g. squad, glue) are loaded from github. In this PR I changed this logic to use the Hugging Face Hub instead. This is the first step in removing all the dataset scripts in this repository related to discussions in https://github.com/huggingface/datasets/pull/4059 (I shoul...
closed
https://github.com/huggingface/datasets/pull/4973
2022-09-13T15:01:41
2023-09-24T10:06:02
2022-09-15T15:24:26
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,371,443,306
4,972
Fix map batched with torch output
Reported in https://discuss.huggingface.co/t/typeerror-when-applying-map-after-set-format-type-torch/23067/2 Currently it fails if one uses batched `map` and the map function returns a torch tensor. I fixed it for torch, tf, jax and pandas series.
closed
https://github.com/huggingface/datasets/pull/4972
2022-09-13T13:16:34
2022-09-20T09:42:02
2022-09-20T09:39:33
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
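The real patch handles torch, tf, jax and pandas series specifically; the duck-typed sketch below shows only the normalization idea, with the stdlib `array` type standing in for a tensor so the example has no framework dependency:

```python
from array import array

def normalize_column(values):
    """Convert tensor-like map outputs to plain Python lists before writing.
    numpy/torch/tf arrays all expose .tolist(); anything else is wrapped
    with list()."""
    if hasattr(values, "tolist"):
        return values.tolist()
    return list(values)

print(normalize_column(array("i", [1, 2, 3])))  # [1, 2, 3]
print(normalize_column((4, 5)))                 # [4, 5]
```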
1,370,319,516
4,971
Preserve non-`input_colums` in `Dataset.map` if `input_columns` are specified
Currently, if the `input_columns` list in `Dataset.map` is specified, the columns not in that list are dropped after the `map` transform. This makes the behavior inconsistent with `IterableDataset.map`. (It seems this issue was introduced by mistake in https://github.com/huggingface/datasets/pull/2246) Fix h...
closed
https://github.com/huggingface/datasets/pull/4971
2022-09-12T18:08:24
2022-09-13T13:51:08
2022-09-13T13:48:45
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
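The semantics this PR restores can be sketched without `datasets` at all: pass only `input_columns` to the function, then merge its output back over the full example so the other columns survive. Helper names here are hypothetical:

```python
def map_example(example, fn, input_columns):
    """Call fn with only the selected columns, but keep every other
    column of the example in the result (instead of dropping them)."""
    inputs = {k: example[k] for k in input_columns}
    return {**example, **fn(**inputs)}

ex = {"text": "hi", "label": 1}
mapped = map_example(ex, lambda text: {"n_chars": len(text)}, ["text"])
print(mapped)  # {'text': 'hi', 'label': 1, 'n_chars': 2}
```

The bug was equivalent to returning `fn(**inputs)` alone, which silently drops `label`.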
1,369,433,074
4,970
Support streaming nli_tr dataset
Support streaming nli_tr dataset. This PR removes legacy `codecs.open` and replaces it with `open` that supports passing encoding. Fix #3186.
closed
https://github.com/huggingface/datasets/pull/4970
2022-09-12T07:48:45
2022-09-12T08:45:04
2022-09-12T08:43:08
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,369,334,740
4,969
Fix data URL and metadata of vivos dataset
After contacting the authors of the VIVOS dataset to report that their data server is down, we have received a reply from Hieu-Thi Luong that their data is now hosted on Zenodo: https://doi.org/10.5281/zenodo.7068130 This PR updates their data URL and some metadata (homepage, citation and license). Fix #4936.
closed
https://github.com/huggingface/datasets/pull/4969
2022-09-12T06:12:34
2022-09-12T07:16:15
2022-09-12T07:14:19
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,369,312,877
4,968
Support streaming compguesswhat dataset
Support streaming `compguesswhat` dataset. Fix #3191.
closed
https://github.com/huggingface/datasets/pull/4968
2022-09-12T05:42:24
2022-09-12T08:00:06
2022-09-12T07:58:06
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,369,092,452
4,967
Strip "/" in local dataset path to avoid empty dataset name error
null
closed
https://github.com/huggingface/datasets/pull/4967
2022-09-11T23:09:16
2022-09-29T10:46:21
2022-09-12T15:30:38
{ "login": "apohllo", "id": 40543, "type": "User" }
[]
true
[]
1,368,661,002
4,965
[Apple M1] MemoryError: Cannot allocate write+execute memory for ffi.callback()
## Describe the bug I'm trying to run `cast_column("audio", Audio())` on Apple M1 Pro, but it seems that it doesn't work. ## Steps to reproduce the bug ```python import datasets dataset = load_dataset("csv", data_files="./train.csv")["train"] dataset = dataset.map(lambda x: {"audio": str(DATA_DIR / "audio" / ...
closed
https://github.com/huggingface/datasets/issues/4965
2022-09-10T15:55:49
2024-03-21T17:25:53
2023-07-21T14:45:50
{ "login": "hoangtnm", "id": 35718590, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,368,617,322
4,964
Column of arrays (2D+) are using unreasonably high memory
## Describe the bug When trying to store `Array2D, Array3D, etc` as column values in a dataset, accessing that column (or creating it, depending on how you create it; see code below) will cause more than a 10-fold increase in memory usage. ## Steps to reproduce the bug ```python from datasets import Dataset, Features, Array2D, ...
open
https://github.com/huggingface/datasets/issues/4964
2022-09-10T13:07:22
2022-09-22T18:29:22
null
{ "login": "vigsterkr", "id": 30353, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,368,201,188
4,963
Dataset without script does not support regular JSON data file
### Link https://huggingface.co/datasets/julien-c/label-studio-my-dogs ### Description <img width="1115" alt="image" src="https://user-images.githubusercontent.com/326577/189422048-7e9c390f-bea7-4521-a232-43f049ccbd1f.png"> ### Owner Yes
closed
https://github.com/huggingface/datasets/issues/4963
2022-09-09T18:45:33
2022-09-20T15:40:07
2022-09-20T15:40:07
{ "login": "julien-c", "id": 326577, "type": "User" }
[]
false
[]
1,368,155,365
4,962
Update setup.py
exclude broken version of fsspec. See the [related issue](https://github.com/huggingface/datasets/issues/4961)
closed
https://github.com/huggingface/datasets/pull/4962
2022-09-09T17:57:56
2022-09-12T14:33:04
2022-09-12T14:33:04
{ "login": "DCNemesis", "id": 3616964, "type": "User" }
[]
true
[]
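The design choice here is to exclude only the broken release rather than cap the whole version range. A toy sketch of that idea (the exact requirement specifier is an assumption, not quoted from the PR diff):

```python
# Exclude a single known-bad release instead of pinning an upper bound.
BROKEN = "2022.8.2"
requirement = f"fsspec[http]>=2021.11.1,!={BROKEN}"

def passes_exclusion(version: str) -> bool:
    """Toy check for the != clause only; real tools parse the full spec."""
    return version != BROKEN

print(passes_exclusion("2022.8.2"))  # the broken release is rejected
print(passes_exclusion("2022.7.1"))  # older releases still install
```

This way future fsspec releases install without another `datasets` patch.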
1,368,124,033
4,961
fsspec 2022.8.2 breaks xopen in streaming mode
## Describe the bug When fsspec 2022.8.2 is installed in your environment, xopen will prematurely close files, making streaming mode inoperable. ## Steps to reproduce the bug ```python import datasets data = datasets.load_dataset('MLCommons/ml_spoken_words', 'id_wav', split='train', streaming=True) ``` ...
closed
https://github.com/huggingface/datasets/issues/4961
2022-09-09T17:26:55
2022-09-12T17:45:50
2022-09-12T14:32:05
{ "login": "DCNemesis", "id": 3616964, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,368,035,159
4,960
BioASQ AttributeError: 'BuilderConfig' object has no attribute 'schema'
## Describe the bug I am trying to load a dataset from drive and running into an error. ## Steps to reproduce the bug ```python data_dir = "/Users/dlituiev/repos/datasets/bioasq/BioASQ-training9b" bioasq_task_b = load_dataset("aps/bioasq_task_b", data_dir=data_dir) ``` ## Actual results `AttributeError: ...
open
https://github.com/huggingface/datasets/issues/4960
2022-09-09T16:06:43
2022-09-13T08:51:03
null
{ "login": "DSLituiev", "id": 8426290, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,367,924,429
4,959
Fix data URLs of compguesswhat dataset
After we informed the `compguesswhat` dataset authors about an error with their data URLs, they have updated them: - https://github.com/CompGuessWhat/compguesswhat.github.io/issues/1 This PR updates their data URLs in our loading script. Related to: - #3191
closed
https://github.com/huggingface/datasets/pull/4959
2022-09-09T14:36:10
2022-09-09T16:01:34
2022-09-09T15:59:04
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,367,695,376
4,958
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.4.0/datasets/jsonl/jsonl.py
Hi, When I use load_dataset with local jsonl files, the error below occurs, and typing the link into the browser gives me `404: Not Found`. I downloaded the other `.py` files using the same method and they worked. It seems that the server is missing the appropriate file, or it is a problem with the code version. ``` C...
closed
https://github.com/huggingface/datasets/issues/4958
2022-09-09T11:29:55
2022-09-09T11:38:44
2022-09-09T11:38:44
{ "login": "hasakikiki", "id": 66322047, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,366,532,849
4,957
Add `Dataset.from_generator`
Add `Dataset.from_generator` to the API to allow creating datasets from data larger than RAM. The implementation relies on a packaged module not exposed in `load_dataset` to tie this method with `datasets`' caching mechanism. Closes https://github.com/huggingface/datasets/issues/4417
closed
https://github.com/huggingface/datasets/pull/4957
2022-09-08T15:08:25
2022-09-16T14:46:35
2022-09-16T14:44:18
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
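The "larger than RAM" claim in the PR above rests on consuming the generator in fixed-size chunks and flushing each chunk to disk (Arrow files, via the caching mechanism) before pulling the next one. A minimal pure-Python sketch of that chunking step:

```python
from itertools import islice

def batched(gen, batch_size):
    """Consume a generator in fixed-size chunks so only one chunk is in
    memory at a time -- the idea that lets a generator-backed dataset be
    written to disk incrementally."""
    it = iter(gen)
    while chunk := list(islice(it, batch_size)):
        yield chunk

gen = ({"id": i} for i in range(10))
sizes = [len(chunk) for chunk in batched(gen, 4)]
print(sizes)  # [4, 4, 2]
```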
1,366,475,160
4,956
Fix TF tests for 2.10
Fixes #4953
closed
https://github.com/huggingface/datasets/pull/4956
2022-09-08T14:39:10
2022-09-08T15:16:51
2022-09-08T15:14:44
{ "login": "Rocketknight1", "id": 12866554, "type": "User" }
[]
true
[]
1,366,382,314
4,955
Raise a more precise error when the URL is unreachable in streaming mode
See for example: - https://github.com/huggingface/datasets/issues/3191 - https://github.com/huggingface/datasets/issues/3186 It would help provide clearer information on the Hub and help the dataset maintainer solve the issue by themselves quicker. Currently: - https://huggingface.co/datasets/compguesswhat ...
open
https://github.com/huggingface/datasets/issues/4955
2022-09-08T13:52:37
2022-09-08T13:53:36
null
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,366,369,682
4,954
Pin TensorFlow temporarily
Temporarily fix TensorFlow until a permanent solution is found. Related to: - #4953
closed
https://github.com/huggingface/datasets/pull/4954
2022-09-08T13:46:15
2022-09-08T14:12:33
2022-09-08T14:10:03
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,366,356,514
4,953
CI test of TensorFlow is failing
## Describe the bug The following CI test fails: https://github.com/huggingface/datasets/runs/8246722693?check_suite_focus=true ``` FAILED tests/test_py_utils.py::TempSeedTest::test_tensorflow - AssertionError: ``` Details: ``` _________________________ TempSeedTest.test_tensorflow _________________________ [...
closed
https://github.com/huggingface/datasets/issues/4953
2022-09-08T13:39:29
2022-09-08T15:14:45
2022-09-08T15:14:45
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,366,354,604
4,952
Add test-datasets CI job
To avoid having too many conflicts in the datasets and metrics dependencies, I split the CI into test and test-catalog. test covers the core of the `datasets` lib, while test-catalog tests the dataset scripts and metrics scripts. This also makes `pip install -e .[dev]` much smaller for developers. WDYT ...
closed
https://github.com/huggingface/datasets/pull/4952
2022-09-08T13:38:30
2023-09-24T10:05:57
2022-09-16T13:25:48
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,365,954,814
4,951
Fix license information in qasc dataset card
This PR adds the license information to `qasc` dataset, once reported via GitHub by Tushar Khot, the dataset is licensed under CC BY 4.0: - https://github.com/allenai/qasc/issues/5
closed
https://github.com/huggingface/datasets/pull/4951
2022-09-08T10:04:39
2022-09-08T14:54:47
2022-09-08T14:52:05
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,365,458,633
4,950
Update Enwik8 broken link and information
The current enwik8 dataset link gives a 502 Bad Gateway error, which can be viewed on https://huggingface.co/datasets/enwik8 (click the dropdown to see the dataset preview; it will show the error). This corrects the links and JSON metadata, as well as adding a little more information about enwik8.
closed
https://github.com/huggingface/datasets/pull/4950
2022-09-08T03:15:00
2022-09-24T22:14:35
2022-09-08T14:51:00
{ "login": "mtanghu", "id": 54819091, "type": "User" }
[]
true
[]
1,365,251,916
4,949
Update enwik8 fixing the broken link
The current enwik8 dataset link gives a 502 Bad Gateway error, which can be viewed on https://huggingface.co/datasets/enwik8 (click the dropdown to see the dataset preview; it will show the error). This corrects the links and JSON metadata, as well as adding a little more information about enwik8.
closed
https://github.com/huggingface/datasets/pull/4949
2022-09-07T22:17:14
2022-09-08T03:14:04
2022-09-08T03:14:04
{ "login": "mtanghu", "id": 54819091, "type": "User" }
[]
true
[]
1,364,973,778
4,948
Fix minor typo in error message for missing imports
null
closed
https://github.com/huggingface/datasets/pull/4948
2022-09-07T17:20:51
2022-09-08T14:59:31
2022-09-08T14:57:15
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,364,967,957
4,947
Try to fix the Windows CI after TF update 2.10
null
closed
https://github.com/huggingface/datasets/pull/4947
2022-09-07T17:14:49
2023-09-24T10:05:38
2022-09-08T09:13:10
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,364,692,069
4,946
Introduce regex check when pushing as well
Closes https://github.com/huggingface/datasets/issues/4945 by adding a regex check when pushing to hub. Let me know if this is helpful and if it's the fix you would have in mind for the issue and I'm happy to contribute tests.
closed
https://github.com/huggingface/datasets/pull/4946
2022-09-07T13:45:58
2022-09-13T10:19:01
2022-09-13T10:16:34
{ "login": "LysandreJik", "id": 30755778, "type": "User" }
[]
true
[]
1,364,691,096
4,945
Push to hub can push splits that do not respect the regex
## Describe the bug The `push_to_hub` method can push splits that do not respect the regex check that is used for downloads. Therefore, splits may be pushed but never re-used, which can be painful if the split was done after runtime preprocessing. ## Steps to reproduce the bug ```python >>> from datasets import...
closed
https://github.com/huggingface/datasets/issues/4945
2022-09-07T13:45:17
2022-09-13T10:16:35
2022-09-13T10:16:35
{ "login": "LysandreJik", "id": 30755778, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
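The mismatch described in the issue above, a split-name check applied at download time but not at push time, comes down to a validator of roughly this shape (the exact regex is an assumption based on the 2022-era `datasets` source, not quoted from it):

```python
import re

# Split names must be word characters, optionally dot-separated
# (e.g. "train", "validation.en"); spaces and punctuation are rejected.
_split_re = re.compile(r"^\w+(\.\w+)*$")

def is_valid_split_name(name: str) -> bool:
    return _split_re.match(name) is not None

print(is_valid_split_name("train"))          # accepted
print(is_valid_split_name("validation.en"))  # accepted
print(is_valid_split_name("my split!"))      # rejected
```

PR #4946 applies the same check on `push_to_hub`, so an invalid split fails fast instead of producing a split that can never be re-loaded.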
1,364,313,569
4,944
larger dataset, larger GPU memory in the training phase? Is that correct?
from datasets import set_caching_enabled set_caching_enabled(False) for ds_name in ["squad","newsqa","nqopen","narrativeqa"]: train_ds = load_from_disk("../../../dall/downstream/processedproqa/{}-train.hf".format(ds_name)) break train_ds = concatenate_datasets([train_ds,train_...
closed
https://github.com/huggingface/datasets/issues/4944
2022-09-07T08:46:30
2022-09-07T12:34:58
2022-09-07T12:34:58
{ "login": "debby1103", "id": 38886373, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,363,967,650
4,943
Add splits to MBPP dataset
This PR addresses https://github.com/huggingface/datasets/issues/4795
closed
https://github.com/huggingface/datasets/pull/4943
2022-09-07T01:18:31
2022-09-13T12:29:19
2022-09-13T12:27:21
{ "login": "cwarny", "id": 2788526, "type": "User" }
[]
true
[]
1,363,869,421
4,942
Trec Dataset has incorrect labels
## Describe the bug Both coarse and fine labels seem to be out of line. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = "trec" raw_datasets = load_dataset(dataset) df = pd.DataFrame(raw_datasets["test"]) df.head() ``` ## Expected results text (string) | coarse_labe...
closed
https://github.com/huggingface/datasets/issues/4942
2022-09-06T22:13:40
2022-09-08T11:12:03
2022-09-08T11:12:03
{ "login": "wmpauli", "id": 6539145, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,363,622,861
4,941
Add Papers with Code ID to scifact dataset
This PR: - adds Papers with Code ID - forces sync between GitHub and Hub, which previously failed due to Hub validation error of the license tag: https://github.com/huggingface/datasets/runs/8200223631?check_suite_focus=true
closed
https://github.com/huggingface/datasets/pull/4941
2022-09-06T17:46:37
2022-09-06T18:28:17
2022-09-06T18:26:01
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,363,513,058
4,940
Fix multilinguality tag and missing sections in xquad_r dataset card
This PR fixes issue reported on the Hub: - Label as multilingual: https://huggingface.co/datasets/xquad_r/discussions/1
closed
https://github.com/huggingface/datasets/pull/4940
2022-09-06T16:05:35
2022-09-12T10:11:07
2022-09-12T10:08:48
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,363,468,679
4,939
Fix NonMatchingChecksumError in adv_glue dataset
Fix issue reported on the Hub: https://huggingface.co/datasets/adv_glue/discussions/1
closed
https://github.com/huggingface/datasets/pull/4939
2022-09-06T15:31:16
2022-09-06T17:42:10
2022-09-06T17:39:16
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,363,429,228
4,938
Remove main branch rename notice
We added a notice in README.md to show that we renamed the master branch to main, but we can remove it now (it's been 2 months) I also unpinned the github issue about the branch renaming
closed
https://github.com/huggingface/datasets/pull/4938
2022-09-06T15:03:05
2022-09-06T16:46:11
2022-09-06T16:43:53
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,363,426,946
4,937
Remove deprecated identical_ok
`huggingface-hub` says that the `identical_ok` argument of `HfApi.upload_file` is now deprecated, and will be removed soon. It even has no effect at the moment when it's passed: ```python Args: ... identical_ok (`bool`, *optional*, defaults to `True`): Deprecated: will be removed in 0.11.0. ...
closed
https://github.com/huggingface/datasets/pull/4937
2022-09-06T15:01:24
2022-09-06T22:24:09
2022-09-06T22:21:57
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,363,274,907
4,936
vivos (Vietnamese speech corpus) dataset not accessible
## Describe the bug VIVOS data is not accessible anymore, neither of these links work (at least from France): * https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (data) * https://ailab.hcmus.edu.vn/vivos (dataset page) Therefore `load_dataset` doesn't work. ## Steps to reproduce the bug ```python ds = load_dat...
closed
https://github.com/huggingface/datasets/issues/4936
2022-09-06T13:17:55
2022-09-21T06:06:02
2022-09-12T07:14:20
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,363,226,736
4,935
Dataset Viewer issue for ubuntu_dialogs_corpus
### Link _No response_ ### Description _No response_ ### Owner _No response_
closed
https://github.com/huggingface/datasets/issues/4935
2022-09-06T12:41:50
2022-09-06T12:51:25
2022-09-06T12:51:25
{ "login": "CibinQuadance", "id": 87330568, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,363,034,253
4,934
Dataset Viewer issue for indonesian-nlp/librivox-indonesia
### Link https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia ### Description I created a new speech dataset https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia, but the dataset preview doesn't work, with the following error message: ``` Server error Status code: 400 Exception: TypeEr...
closed
https://github.com/huggingface/datasets/issues/4934
2022-09-06T10:03:23
2022-09-06T12:46:40
2022-09-06T12:46:40
{ "login": "cahya-wirawan", "id": 7669893, "type": "User" }
[]
false
[]
1,363,013,023
4,933
Dataset/DatasetDict.filter() cannot have `batched=True` due to `mask` (numpy array?) being non-iterable.
## Describe the bug `Dataset/DatasetDict.filter()` cannot have `batched=True` due to `mask` (numpy array?) being non-iterable. ## Steps to reproduce the bug (In a python 3.7.12 env, I've tried 2.4.0 and 2.3.2 with both `pyarraw==9.0.0` and `pyarrow==8.0.0`.) ```python from datasets import load_dataset ds_...
closed
https://github.com/huggingface/datasets/issues/4933
2022-09-06T09:47:48
2022-09-06T11:44:27
2022-09-06T11:44:27
{ "login": "tianjianjiang", "id": 4812544, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
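The contract that issue 4933 above trips over — a batched predicate must return one boolean per row, and `filter` then iterates over that mask — can be sketched in plain Python. This is a minimal stand-in for illustration, not the real `Dataset.filter` implementation:

```python
def batched_predicate(batch):
    # keep rows whose text is non-empty
    return [len(t) > 0 for t in batch["text"]]

def apply_batched_filter(rows, predicate, batch_size=2):
    # mimic filter(batched=True): slice columns into batches, call the
    # predicate, then iterate over the returned boolean mask
    kept = []
    n = len(next(iter(rows.values())))
    for i in range(0, n, batch_size):
        batch = {k: v[i:i + batch_size] for k, v in rows.items()}
        mask = predicate(batch)  # must be iterable: one bool per row
        for j, keep in enumerate(mask):
            if keep:
                kept.append({k: v[j] for k, v in batch.items()})
    return kept

rows = {"text": ["a", "", "c"]}
print(apply_batched_filter(rows, batched_predicate))
# → [{'text': 'a'}, {'text': 'c'}]
```

If the predicate returns a single scalar (or something non-iterable) instead of a per-row mask, the loop over `mask` is exactly where it breaks.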
1,362,522,423
4,932
Dataset Viewer issue for bigscience-biomedical/biosses
### Link https://huggingface.co/datasets/bigscience-biomedical/biosses ### Description I've just been working on adding the dataset loader script to this dataset and working with the relative imports. I'm not sure how to interpret the error below (shown where the dataset preview used to be). ``` Status code: 40...
closed
https://github.com/huggingface/datasets/issues/4932
2022-09-05T22:40:32
2022-09-06T14:24:56
2022-09-06T14:24:56
{ "login": "galtay", "id": 663051, "type": "User" }
[]
false
[]
1,362,298,764
4,931
Fix missing tags in dataset cards
Fix missing tags in dataset cards: - coqa - hyperpartisan_news_detection - opinosis - scientific_papers - scifact - search_qa - wiki_qa - wiki_split - wikisql This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833 - #4891 - #489...
closed
https://github.com/huggingface/datasets/pull/4931
2022-09-05T17:03:04
2022-09-22T12:40:15
2022-09-06T05:39:29
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,362,193,587
4,930
Add cc-by-nc-2.0 to list of licenses
This PR adds the `cc-by-nc-2.0` to the list of licenses because it is required by `scifact` dataset: https://github.com/allenai/scifact/blob/master/LICENSE.md
closed
https://github.com/huggingface/datasets/pull/4930
2022-09-05T15:37:32
2022-09-06T16:43:32
2022-09-05T17:01:04
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,361,508,366
4,929
Fixes a typo in loading documentation
As show in the [documentation page](https://huggingface.co/docs/datasets/loading) here the `"tr"in` should be `"train`. ![image](https://user-images.githubusercontent.com/7144772/188390445-e1f04d54-e3e3-4762-8686-63ecbe4087e5.png)
closed
https://github.com/huggingface/datasets/pull/4929
2022-09-05T07:18:54
2022-09-06T02:11:03
2022-09-05T13:06:38
{ "login": "sighingnow", "id": 7144772, "type": "User" }
[]
true
[]
1,360,941,172
4,928
Add ability to read-write to SQL databases.
Fixes #3094 Add ability to read/write to SQLite files and also read from any SQL database supported by SQLAlchemy. I didn't add SQLAlchemy as a dependency as it is fairly big and it remains optional. I also recorded a Loom to showcase the feature. https://www.loom.com/share/f0e602c2de8a46f58bca4b43333d541...
closed
https://github.com/huggingface/datasets/pull/4928
2022-09-03T19:09:08
2022-10-03T16:34:36
2022-10-03T16:32:28
{ "login": "Dref360", "id": 8976546, "type": "User" }
[]
true
[]
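The storage layer behind PR 4928 above is ordinary SQL. A plain-`sqlite3` sketch of the round trip the PR wires up — the `Dataset`-level method names (e.g. `from_sql`/`to_sql`) are an assumption here, not taken from the PR text:

```python
import sqlite3

# Write rows to an in-memory SQLite table, then read them back —
# the same round trip a Dataset-to-SQL bridge performs under the hood.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (text TEXT, label INTEGER)")
conn.executemany("INSERT INTO data VALUES (?, ?)", [("hello", 0), ("world", 1)])
rows = conn.execute("SELECT text, label FROM data ORDER BY label").fetchall()
print(rows)  # → [('hello', 0), ('world', 1)]
conn.close()
```

Keeping SQLAlchemy optional, as the PR does, means the SQLite path works with only the standard library.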
1,360,428,139
4,927
fix BLEU metric card
I've fixed some typos in BLEU metric card.
closed
https://github.com/huggingface/datasets/pull/4927
2022-09-02T17:00:56
2022-09-09T16:28:15
2022-09-09T16:28:15
{ "login": "antoniolanza1996", "id": 40452030, "type": "User" }
[]
true
[]
1,360,384,484
4,926
Dataset infos in yaml
To simplify the addition of new datasets, we'd like to have the dataset infos in the YAML and deprecate the dataset_infos.json file. YAML is readable and easy to edit, and the YAML metadata of the readme already contain dataset metadata so we would have everything in one place. To be more specific, I moved these fie...
closed
https://github.com/huggingface/datasets/pull/4926
2022-09-02T16:10:05
2024-05-04T14:52:50
2022-10-03T09:11:12
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,360,007,616
4,925
Add note about loading image / audio files to docs
This PR adds a small note about how to load image / audio datasets that have multiple splits in their dataset structure. Related forum thread: https://discuss.huggingface.co/t/loading-train-and-test-splits-with-audiofolder/22447 cc @NielsRogge
closed
https://github.com/huggingface/datasets/pull/4925
2022-09-02T10:31:58
2022-09-26T12:21:30
2022-09-23T13:59:07
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
1,358,611,513
4,924
Concatenate_datasets loads everything into RAM
## Describe the bug When loading the datasets separately and saving them on disk, I want to concatenate them. But `concatenate_datasets` is filling up my RAM and the process gets killed. Is there a way to prevent this from happening or is this intended behaviour? Thanks in advance ## Steps to reproduce the bug ```...
closed
https://github.com/huggingface/datasets/issues/4924
2022-09-01T10:25:17
2022-09-01T11:50:54
2022-09-01T11:50:54
{ "login": "louisdeneve", "id": 39416047, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,357,735,287
4,923
decode mp3 with librosa if torchaudio is > 0.12 as a temporary workaround
`torchaudio>0.12` fails to decode mp3 files if `ffmpeg<4`. currently we ask users to downgrade torchaudio, but sometimes it's not possible as the torchaudio version is bound to the torch version. as a temporary workaround we can decode mp3 with librosa (though it is 60 times slower, at least it works) another option would...
closed
https://github.com/huggingface/datasets/pull/4923
2022-08-31T18:57:59
2022-11-02T11:54:33
2022-09-20T13:12:52
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
1,357,684,018
4,922
I/O error on Google Colab in streaming mode
## Describe the bug When trying to load a streaming dataset in Google Colab the loading fails with an I/O error ## Steps to reproduce the bug ```python import datasets from datasets import load_dataset hf_ds = load_dataset(path='wmt19', name='cs-en', streaming=True, split=datasets.Split.VALIDATION) list(hf_ds....
closed
https://github.com/huggingface/datasets/issues/4922
2022-08-31T18:08:26
2022-08-31T18:15:48
2022-08-31T18:15:48
{ "login": "jotterbach", "id": 5595043, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,357,609,003
4,921
Fix missing tags in dataset cards
Fix missing tags in dataset cards: - eraser_multi_rc - hotpot_qa - metooma - movie_rationales - qanta - quora - quoref - race - ted_hrlr - ted_talks_iwslt This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833 - #4891 - #4896 ...
closed
https://github.com/huggingface/datasets/pull/4921
2022-08-31T16:52:27
2022-09-22T14:34:11
2022-09-01T05:04:53
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,357,564,589
4,920
Unable to load local tsv files through load_dataset method
## Describe the bug Unable to load local tsv files through load_dataset method. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug data_files = { 'train': 'train.tsv', 'test': 'test.tsv' } raw_datasets = load_dataset('tsv', data_files=data_files) ## Expected results I am p...
closed
https://github.com/huggingface/datasets/issues/4920
2022-08-31T16:13:39
2022-09-01T05:31:30
2022-09-01T05:31:30
{ "login": "DataNoob0723", "id": 44038517, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
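For issue 4920 above, the usual resolution is that there is no `"tsv"` builder: TSV files load through the `"csv"` builder with a tab delimiter, i.e. `load_dataset("csv", data_files=data_files, delimiter="\t")`. The stdlib equivalent of that parsing step:

```python
import csv
import io

# Parse tab-separated text with the csv machinery by overriding the
# delimiter — the same trick the "csv" builder uses for TSV files.
tsv = "text\tlabel\nhello\t0\nworld\t1\n"
reader = csv.DictReader(io.StringIO(tsv), delimiter="\t")
rows = list(reader)
print(rows)  # → [{'text': 'hello', 'label': '0'}, {'text': 'world', 'label': '1'}]
```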
1,357,441,599
4,919
feat: improve error message on Keys mismatch. closes #4917
Hi @lhoestq what do you think? Let me give you a code sample: ```py >>> import datasets >>> foo = datasets.Dataset.from_dict({'foo':[0,1], 'bar':[2,3]}) >>> foo.save_to_disk('foo') # edit foo/dataset_info.json e.g. rename the 'foo' feature to 'baz' >>> datasets.load_from_disk('foo') --------------------------...
closed
https://github.com/huggingface/datasets/pull/4919
2022-08-31T14:41:36
2022-09-05T08:46:01
2022-09-05T08:43:33
{ "login": "PaulLerner", "id": 25532159, "type": "User" }
[]
true
[]
1,357,242,757
4,918
Dataset Viewer issue for pysentimiento/spanish-targeted-sentiment-headlines
### Link https://huggingface.co/datasets/pysentimiento/spanish-targeted-sentiment-headlines ### Description After moving the dataset from my user (`finiteautomata`) to the `pysentimiento` organization, the dataset viewer says that it doesn't exist. ### Owner _No response_
closed
https://github.com/huggingface/datasets/issues/4918
2022-08-31T12:09:07
2022-09-05T21:36:34
2022-09-05T16:32:44
{ "login": "finiteautomata", "id": 167943, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,357,193,841
4,917
Keys mismatch: make error message more informative
**Is your feature request related to a problem? Please describe.** When loading a dataset from disk with a defect in its `dataset_info.json` describing its features (I don’t know when/why/how this happens but it deserves its own issue), you will get an error message like: `ValueError: Keys mismatch: between {'bar': V...
closed
https://github.com/huggingface/datasets/issues/4917
2022-08-31T11:24:34
2022-09-05T08:43:38
2022-09-05T08:43:38
{ "login": "PaulLerner", "id": 25532159, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "good first issue", "color": "7057ff" } ]
false
[]
1,357,076,940
4,916
Apache Beam unable to write the downloaded wikipedia dataset
## Describe the bug Hi, I am currently trying to download the wikipedia dataset using load_dataset("wikipedia", language="aa", date="20220401", split="train",beam_runner='DirectRunner'). However, I end up getting a FileNotFoundError. I get this error for any language I try to download. It downloads the file but while s...
closed
https://github.com/huggingface/datasets/issues/4916
2022-08-31T09:39:25
2022-08-31T10:53:19
2022-08-31T10:53:19
{ "login": "Shilpac20", "id": 71849081, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,356,009,042
4,915
FileNotFoundError while downloading wikipedia dataset for any language
## Describe the bug Hi, I am currently trying to download the wikipedia dataset using load_dataset("wikipedia", language="aa", date="20220401", split="train",beam_runner='DirectRunner'). However, I end up getting a FileNotFoundError. I get this error for any language I try to download. Environment: ## Step...
open
https://github.com/huggingface/datasets/issues/4915
2022-08-30T16:15:46
2022-12-04T22:20:33
null
{ "login": "Shilpac20", "id": 71849081, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,355,482,624
4,914
Support streaming swda dataset
Support streaming swda dataset.
closed
https://github.com/huggingface/datasets/pull/4914
2022-08-30T09:46:28
2022-08-30T11:16:33
2022-08-30T11:14:16
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,355,232,007
4,913
Add license and citation information to cosmos_qa dataset
This PR adds the license information to `cosmos_qa` dataset, once reported via email by Yejin Choi, the dataset is licensed under CC BY 4.0. This PR also updates the citation information.
closed
https://github.com/huggingface/datasets/pull/4913
2022-08-30T06:23:19
2022-08-30T09:49:31
2022-08-30T09:47:35
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,355,078,864
4,912
datasets map() handles all data at a stroke and takes long time
**1. Background** Huggingface datasets package advises using `map()` to process data in batches. In the example code on pretraining masked language model, they use `map()` to tokenize all data at a stroke before the train loop. The corresponding code: ``` with accelerator.main_process_first(): tokenized_...
closed
https://github.com/huggingface/datasets/issues/4912
2022-08-30T02:25:56
2023-04-06T09:43:58
2022-09-06T09:23:35
{ "login": "BruceStayHungry", "id": 40711748, "type": "User" }
[]
false
[]
1,354,426,978
4,911
[Tests] Ensure `datasets` supports renamed repositories
On https://hf.co/datasets you can rename a dataset (or sometimes move it to another user/org). The website handles redirections correctly and AFAIK `datasets` does as well. However it would be nice to have an integration test to make sure we don't break support for renamed datasets. To implement this we can use t...
open
https://github.com/huggingface/datasets/issues/4911
2022-08-29T14:46:14
2025-06-19T06:10:52
null
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "good second issue", "color": "BDE59C" } ]
false
[]
1,354,374,328
4,910
Identical keywords in build_kwargs and config_kwargs lead to TypeError in load_dataset_builder()
## Describe the bug In `load_dataset_builder()`, `build_kwargs` and `config_kwargs` can contain the same keywords leading to a TypeError("type object got multiple values for keyword argument "xyz"). I ran into this problem with the keyword: `base_path`. It might happen with other kwargs as well. I think a quickfix...
open
https://github.com/huggingface/datasets/issues/4910
2022-08-29T14:11:48
2022-09-13T11:58:46
null
{ "login": "bablf", "id": 57184353, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "good first issue", "color": "7057ff" } ]
false
[]
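Issue 4910 above comes down to Python itself: unpacking two dicts that share a key into one call raises `TypeError`. A minimal sketch — the `build` function and key names are illustrative, not the library's:

```python
def build(**kwargs):
    return kwargs

build_kwargs = {"base_path": "/data"}
config_kwargs = {"base_path": "/other", "name": "cfg"}

try:
    build(**build_kwargs, **config_kwargs)
    collided = False
except TypeError:
    # "build() got multiple values for keyword argument 'base_path'"
    collided = True

# One possible quickfix: merge first so config_kwargs takes precedence.
merged = {**build_kwargs, **config_kwargs}
print(collided, merged["base_path"])  # → True /other
```

Merging before the call is what the issue's proposed quickfix amounts to: one of the two sources must win deterministically.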
1,353,997,788
4,909
Update GLUE evaluation metadata
This PR updates the evaluation metadata for GLUE to: * Include defaults for all configs except `ax` (which only has a `test` split with no known labels) * Fix the default split from `test` to `validation` since `test` splits in GLUE have no labels (they're private) * Fix the `task_id` for some existing defaults ...
closed
https://github.com/huggingface/datasets/pull/4909
2022-08-29T09:43:44
2022-08-29T14:53:29
2022-08-29T14:51:18
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
1,353,995,574
4,908
Fix missing tags in dataset cards
Fix missing tags in dataset cards: - asnq - clue - common_gen - cosmos_qa - guardian_authorship - hindi_discourse - py_ast - x_stance This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833 - #4891 - #4896
closed
https://github.com/huggingface/datasets/pull/4908
2022-08-29T09:41:53
2022-09-22T14:35:56
2022-08-29T16:13:07
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,353,808,348
4,907
None Type error for swda datasets
## Describe the bug I got `'NoneType' object is not callable` error while calling the swda datasets. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("swda") ``` ## Expected results Run without error ## Environment info <!-- You can run the command `datase...
closed
https://github.com/huggingface/datasets/issues/4907
2022-08-29T07:05:20
2022-08-30T14:43:41
2022-08-30T14:43:41
{ "login": "hannan72", "id": 8229163, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,353,223,925
4,906
Can't import datasets AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import)
## Describe the bug A clear and concise description of what the bug is. Not able to import datasets ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import os os.environ["WANDB_API_KEY"] = "0" ## to silence warning import numpy as np import random import sklearn import matplotlib.p...
closed
https://github.com/huggingface/datasets/issues/4906
2022-08-28T02:23:24
2024-11-16T08:59:17
2022-10-03T12:22:50
{ "login": "OPterminator", "id": 63536981, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,353,002,837
4,904
[LibriSpeech] Fix dev split local_extracted_archive for 'all' config
We define the keys for the `_DL_URLS` of the dev split as `dev.clean` and `dev.other`: https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L60-L61 These keys get forwarded to the `dl_manager` and thus the `local_extracted_archive`. How...
closed
https://github.com/huggingface/datasets/pull/4904
2022-08-27T10:04:57
2022-08-30T10:06:21
2022-08-30T10:03:25
{ "login": "sanchit-gandhi", "id": 93869735, "type": "User" }
[]
true
[]