Dataset schema (column: type, observed range):

- id: int64, 599M to 3.26B
- number: int64, 1 to 7.7k
- title: string, length 1 to 290
- body: string, length 0 to 228k
- state: string, 2 classes
- html_url: string, length 46 to 51
- created_at: timestamp[s], 2020-04-14 10:18:02 to 2025-07-23 08:04:53
- updated_at: timestamp[s], 2020-04-27 16:04:17 to 2025-07-23 18:53:44
- closed_at: timestamp[s], 2020-04-14 12:01:40 to 2025-07-23 16:44:42
- user: dict
- labels: list, length 0 to 4
- is_pull_request: bool, 2 classes
- comments: list, length 0 to 0
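The columns above can be sketched as a typed record. This is a hedged sketch: the nested shapes of `user` and `labels` are inferred from the sample rows below, and timestamps are kept as ISO strings rather than a datetime type.

```python
from typing import Optional, TypedDict

class User(TypedDict):
    login: str
    id: int
    type: str

class Label(TypedDict):
    name: str
    color: str

class IssueRecord(TypedDict):
    # One row of the dump: a GitHub issue or pull request.
    id: int
    number: int
    title: str
    body: str
    state: str                # "open" or "closed" (the 2 classes)
    html_url: str
    created_at: str           # timestamp[s] in the source data
    updated_at: str
    closed_at: Optional[str]  # null while the issue is still open
    user: User
    labels: list[Label]
    is_pull_request: bool
    comments: list

# First sample row of the dump, typed against the schema:
row: IssueRecord = {
    "id": 1024856745,
    "number": 3070,
    "title": "Fix Windows CI with FileNotFoundError when setting up s3_base fixture",
    "body": "Fix #3069.",
    "state": "closed",
    "html_url": "https://github.com/huggingface/datasets/pull/3070",
    "created_at": "2021-10-13T06:49:01",
    "updated_at": "2021-10-13T08:55:13",
    "closed_at": "2021-10-13T06:49:48",
    "user": {"login": "albertvillanova", "id": 8515462, "type": "User"},
    "labels": [],
    "is_pull_request": True,
    "comments": [],
}
```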
1,024,856,745
3,070
Fix Windows CI with FileNotFoundError when setting up s3_base fixture
Fix #3069.
closed
https://github.com/huggingface/datasets/pull/3070
2021-10-13T06:49:01
2021-10-13T08:55:13
2021-10-13T06:49:48
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,024,818,680
3,069
CI fails on Windows with FileNotFoundError when setting up s3_base fixture
## Describe the bug After commit 9353fc863d0c99ab0427f83cc5a4f04fcf52f1df, the CI fails on Windows with FileNotFoundError when setting up the s3_base fixture. See: https://app.circleci.com/pipelines/github/huggingface/datasets/8151/workflows/5db8d154-badd-4d3d-b202-ca7a318997a2/jobs/50321 Error summary: ``` ERROR tes...
closed
https://github.com/huggingface/datasets/issues/3069
2021-10-13T05:52:26
2021-10-13T08:05:49
2021-10-13T06:49:48
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,024,681,264
3,068
feat: increase streaming retry config
Increase streaming config parameters: * retry interval set to 5 seconds * max retries set to 20 (so 1 min 40 s in total)
closed
https://github.com/huggingface/datasets/pull/3068
2021-10-13T02:00:50
2021-10-13T09:25:56
2021-10-13T09:25:54
{ "login": "borisdayma", "id": 715491, "type": "User" }
[]
true
[]
1,024,023,185
3,067
add story_cloze
null
closed
https://github.com/huggingface/datasets/pull/3067
2021-10-12T16:36:53
2021-10-13T13:48:13
2021-10-13T13:48:13
{ "login": "zaidalyafeai", "id": 15667714, "type": "User" }
[]
true
[]
1,024,005,311
3,066
Add iter_archive
Added the `iter_archive` method for the StreamingDownloadManager. It was already implemented in the regular DownloadManager. Now it can be used to stream from TAR archives as mentioned in https://github.com/huggingface/datasets/issues/2829 I also updated the `food101` dataset as an example. Any image/audio data...
closed
https://github.com/huggingface/datasets/pull/3066
2021-10-12T16:17:16
2022-09-21T14:10:10
2021-10-18T09:12:46
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,023,951,322
3,065
Fix test command after refac
Fix the `datasets-cli` test command after the `prepare_module` change in #2986
closed
https://github.com/huggingface/datasets/pull/3065
2021-10-12T15:23:30
2021-10-12T15:28:47
2021-10-12T15:28:46
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,023,900,075
3,064
Make `interleave_datasets` more robust
**Is your feature request related to a problem? Please describe.** Right now there are a few hiccups using `interleave_datasets`. An interleaved dataset iterates until the smallest dataset completes its iterator. In this way larger datasets may not complete a full epoch of iteration. It creates new problems in calculation...
open
https://github.com/huggingface/datasets/issues/3064
2021-10-12T14:34:53
2022-07-30T08:47:26
null
{ "login": "sbmaruf", "id": 32699797, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
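The behaviour described in issue 3064 can be illustrated with a small stand-in for `interleave_datasets`. This is a hedged sketch over plain Python iterators, not the actual `datasets` implementation: one variant stops when the smallest dataset is exhausted (the behaviour the issue complains about), the other restarts exhausted datasets so the largest one completes a full epoch.

```python
from itertools import cycle

def interleave_first_exhausted(*datasets):
    """Round-robin until the smallest dataset runs out."""
    iters = [iter(d) for d in datasets]
    out = []
    while True:
        for it in iters:
            try:
                out.append(next(it))
            except StopIteration:
                return out  # smallest dataset finished: stop everything

def interleave_all_exhausted(*datasets):
    """Round-robin, restarting exhausted datasets until the largest finishes one epoch."""
    longest = max(len(d) for d in datasets)
    iters = [cycle(d) for d in datasets]
    out = []
    for _ in range(longest):
        for it in iters:
            out.append(next(it))
    return out

small, large = [1, 2], ["a", "b", "c", "d"]
print(interleave_first_exhausted(small, large))  # stops once `small` is done
print(interleave_all_exhausted(small, large))    # `large` completes a full epoch
```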
1,023,588,297
3,063
Windows CI is unable to test streaming properly because of SSL issues
In https://github.com/huggingface/datasets/pull/3041 the windows tests were skipped because of SSL issues with moon-staging.huggingface.co:443 The issue appears only on windows with asyncio. On Linux it works. With requests it works as well. And with the production environment huggingface.co it also works. to rep...
closed
https://github.com/huggingface/datasets/issues/3063
2021-10-12T09:33:40
2022-08-24T14:59:29
2022-08-24T14:59:29
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "streaming", "color": "fef2c0" } ]
false
[]
1,023,209,592
3,062
Update summary on PyPi beyond NLP
More than just NLP now
closed
https://github.com/huggingface/datasets/pull/3062
2021-10-11T23:27:46
2021-10-13T08:55:54
2021-10-13T08:55:54
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
1,023,103,119
3,061
Feature request: add leave=True to dataset.map to enable tqdm nested bars (and whilst we're at it couldn't we get a way to access directly tqdm underneath?)
It would be so nice to be able to nest HuggingFace `Datasets.map()` progress bars in the grander scheme of things, and whilst we're at it, why not other functions. **Describe alternatives you've considered** By the way is there not a way to directl...
open
https://github.com/huggingface/datasets/issues/3061
2021-10-11T20:49:49
2021-10-22T09:34:10
null
{ "login": "BenoitDalFerro", "id": 69694610, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,022,936,396
3,060
load_dataset('openwebtext') yields "Compressed file ended before the end-of-stream marker was reached"
## Describe the bug When I try `load_dataset('openwebtext')`, I receive a "EOFError: Compressed file ended before the end-of-stream marker was reached" error. ## Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset('openwebtext') ``` ## Expected results I expect the `datas...
closed
https://github.com/huggingface/datasets/issues/3060
2021-10-11T17:05:27
2021-10-28T05:52:21
2021-10-28T05:52:21
{ "login": "RylanSchaeffer", "id": 8942987, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
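The error in issue 3060 is the classic symptom of a truncated download: Python's decompression machinery raises `EOFError` when a compressed stream stops before its end-of-stream marker. A minimal stdlib reproduction, independent of `datasets`:

```python
import gzip

payload = gzip.compress(b"openwebtext sample " * 100)
truncated = payload[: len(payload) // 2]  # simulate an interrupted download

try:
    gzip.decompress(truncated)
except EOFError as e:
    # "Compressed file ended before the end-of-stream marker was reached"
    print("EOFError:", e)
```

Deleting the partial file from the cache and re-downloading is the usual remedy.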
1,022,620,057
3,059
Fix task reloading from cache
When reloading a dataset from the cache when doing `map`, the task templates were kept instead of being updated according to the output of the `map` function. This is an issue because we drop the task templates that are no longer compatible after `map`, for example if a column of the template was removed. This PR f...
closed
https://github.com/huggingface/datasets/pull/3059
2021-10-11T12:03:04
2021-10-11T12:23:39
2021-10-11T12:23:39
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,022,612,664
3,058
Dataset wikipedia and Bookcorpusopen cannot be fetched from dataloader.
## Describe the bug I had been using previous versions of `transformers` and `datasets`, and the `wikipedia` dataset could be used successfully. Recently, I upgraded them to the newest versions and found that errors are raised. I also tried other datasets: `wikitext` works, while `bookcorpusopen` raises the same errors as `wikipe...
closed
https://github.com/huggingface/datasets/issues/3058
2021-10-11T11:54:59
2022-01-19T14:03:49
2022-01-19T14:03:49
{ "login": "hobbitlzy", "id": 35392624, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,022,508,315
3,057
Error in per class precision computation
## Describe the bug When trying to get the per class precision values by providing `average=None`, following error is thrown `ValueError: can only convert an array of size 1 to a Python scalar` ## Steps to reproduce the bug ```python from datasets import load_dataset, load_metric precision_metric = load_metric("...
closed
https://github.com/huggingface/datasets/issues/3057
2021-10-11T10:05:19
2021-10-11T10:17:44
2021-10-11T10:16:16
{ "login": "tidhamecha2", "id": 38906722, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,022,345,564
3,056
Fix meteor metric for version >= 3.6.4
After `nltk` update, the meteor metric expects pre-tokenized inputs (breaking change). This PR fixes this issue, while maintaining compatibility with older versions.
closed
https://github.com/huggingface/datasets/pull/3056
2021-10-11T07:11:44
2021-10-11T07:29:20
2021-10-11T07:29:19
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,022,319,238
3,055
CI test suite fails after meteor metric update
## Describe the bug CI test suite fails: https://app.circleci.com/pipelines/github/huggingface/datasets/8110/workflows/f059ba43-9154-4632-bebb-82318447ddc9/jobs/50010 Stack trace: ``` ___________________ LocalMetricTest.test_load_metric_meteor ____________________ [gw1] linux -- Python 3.6.15 /home/circleci/.pye...
closed
https://github.com/huggingface/datasets/issues/3055
2021-10-11T06:37:12
2021-10-11T07:30:31
2021-10-11T07:30:31
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,022,108,186
3,054
Update Biosses
Fix variable naming
closed
https://github.com/huggingface/datasets/pull/3054
2021-10-10T22:25:12
2021-10-13T09:04:27
2021-10-13T09:04:27
{ "login": "bwang482", "id": 6764450, "type": "User" }
[]
true
[]
1,022,076,905
3,053
load_dataset('the_pile_openwebtext2') produces ArrowInvalid, value too large to fit in C integer type
## Describe the bug When loading `the_pile_openwebtext2`, we get the error `pyarrow.lib.ArrowInvalid: Value 2111 too large to fit in C integer type` ## Steps to reproduce the bug ```python import datasets ds = datasets.load_dataset('the_pile_openwebtext2') ``` ## Expected results Should download the dataset...
closed
https://github.com/huggingface/datasets/issues/3053
2021-10-10T19:55:21
2023-02-24T14:02:20
2023-02-24T14:02:20
{ "login": "davidbau", "id": 3458792, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,021,944,435
3,052
load_dataset cannot download the data and hangs on forever if cache dir specified
## Describe the bug After updating datasets, code that ran just fine for ages began to fail. Specifying _datasets.load_dataset_'s optional _cache_dir_ argument on a Windows 10 machine causes the data download to hang forever. The same call without cache_dir works just fine. Surprisingly, the exact same code just runs perfec...
closed
https://github.com/huggingface/datasets/issues/3052
2021-10-10T10:31:36
2021-10-11T10:57:09
2021-10-11T10:56:36
{ "login": "BenoitDalFerro", "id": 69694610, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,021,852,234
3,051
Non-Matching Checksum Error with crd3 dataset
## Describe the bug When I try loading the crd3 dataset (https://huggingface.co/datasets/crd3), an error is thrown. ## Steps to reproduce the bug ```python dataset = load_dataset('crd3', split='train') ``` ## Expected results I expect no error to be thrown. ## Actual results A non-matching checksum err...
closed
https://github.com/huggingface/datasets/issues/3051
2021-10-10T01:32:43
2022-03-15T15:54:26
2022-03-15T15:54:26
{ "login": "RylanSchaeffer", "id": 8942987, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
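The check that fails in issue 3051 is a plain content-hash comparison: `datasets` records an expected checksum per download URL and compares it to the hash of the fetched file, so the error usually means the host changed the file after the checksum was recorded. A simplified sketch of that verification with the stdlib (`sha256` and the function name are assumptions for illustration, not the library's internals):

```python
import hashlib

def verify_checksum(data: bytes, expected_hex: str) -> None:
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_hex:
        raise ValueError(
            f"Non-matching checksum: expected {expected_hex}, got {actual} "
            "(the remote file likely changed after the checksum was recorded)"
        )

recorded = hashlib.sha256(b"original release").hexdigest()
verify_checksum(b"original release", recorded)      # passes silently
try:
    verify_checksum(b"re-uploaded file", recorded)  # host changed the file
except ValueError as e:
    print(e)
```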
1,021,772,622
3,050
Fix streaming: catch Timeout error
Catches Timeout error during streaming. fix #3049
closed
https://github.com/huggingface/datasets/pull/3050
2021-10-09T18:19:20
2021-10-12T15:28:18
2021-10-11T09:35:38
{ "login": "borisdayma", "id": 715491, "type": "User" }
[]
true
[]
1,021,770,008
3,049
TimeoutError during streaming
## Describe the bug I got a TimeoutError after streaming for about 10h. ## Steps to reproduce the bug The code is very long, but we could do a test of streaming data indefinitely, though the error may take a while to appear. ## Expected results This error was not expected in the code, which handles only `ClientError` but...
closed
https://github.com/huggingface/datasets/issues/3049
2021-10-09T18:06:51
2021-10-11T09:35:38
2021-10-11T09:35:38
{ "login": "borisdayma", "id": 715491, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,021,765,661
3,048
Identify which shard data belongs to
**Is your feature request related to a problem? Please describe.** I'm training on a large dataset made of multiple sub-datasets. During training I can observe some jumps in loss which may correspond to different shards. ![image](https://user-images.githubusercontent.com/715491/136668758-521263aa-a9b2-4ad2-8d22-...
open
https://github.com/huggingface/datasets/issues/3048
2021-10-09T17:46:35
2021-10-09T20:24:17
null
{ "login": "borisdayma", "id": 715491, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,021,360,616
3,047
Loading from cache a dataset for LM built from a text classification dataset sometimes errors
## Describe the bug Yes, I know, that description sucks. So the problem is arising in the course when we build a masked language modeling dataset using the IMDB dataset. To reproduce (or try, since it's a bit fickle), create a dataset for masked-language modeling from the IMDB dataset. ```python from datasets ...
closed
https://github.com/huggingface/datasets/issues/3047
2021-10-08T18:23:11
2021-11-03T17:13:08
2021-11-03T17:13:08
{ "login": "sgugger", "id": 35901082, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,021,021,368
3,046
Fix MedDialog metadata JSON
Fix #2969.
closed
https://github.com/huggingface/datasets/pull/3046
2021-10-08T12:04:40
2021-10-11T07:46:43
2021-10-11T07:46:42
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,020,968,704
3,045
Fix inconsistent caching behaviour in Dataset.map() with multiprocessing #3044
Fix #3044 1. A rough unit test that fails without the fix. It probably doesn't comply with your code standards, but that's just to draft the idea. 2. A one-liner fix
closed
https://github.com/huggingface/datasets/pull/3045
2021-10-08T10:59:21
2021-10-21T16:58:32
2021-10-21T14:22:44
{ "login": "vlievin", "id": 9859840, "type": "User" }
[]
true
[]
1,020,869,778
3,044
Inconsistent caching behaviour when using `Dataset.map()` with a `new_fingerprint` and `num_proc>1`
## Describe the bug Caching does not work when using `Dataset.map()` with: 1. a function that cannot be deterministically fingerprinted 2. `num_proc>1` 3. using a custom fingerprint set with the argument `new_fingerprint`. This means that the dataset will be mapped with the function for each and every call, w...
open
https://github.com/huggingface/datasets/issues/3044
2021-10-08T09:07:10
2025-03-04T07:16:00
null
{ "login": "vlievin", "id": 9859840, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
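The caching in issue 3044 hinges on computing a deterministic fingerprint for (previous dataset state, transform, arguments) and using it as the cache key; with `num_proc>1` each worker caches its shard under a per-rank suffix. A rough stand-in for that scheme using `hashlib` — the real implementation lives in `datasets.fingerprint` and hashes pickled objects, and the exact suffix format here is an assumption:

```python
import hashlib

def fingerprint(previous: str, transform: str, **kwargs) -> str:
    """Deterministic cache key from the prior state, transform name, and kwargs."""
    h = hashlib.sha256()
    h.update(previous.encode())
    h.update(transform.encode())
    for key in sorted(kwargs):  # sorted so kwarg order never changes the hash
        h.update(f"{key}={kwargs[key]}".encode())
    return h.hexdigest()[:16]

base = "abc123"
fp = fingerprint(base, "map", batched=True)
assert fp == fingerprint(base, "map", batched=True)  # same inputs -> same key -> cache hit

# With num_proc>1, each worker caches its shard under a suffixed key
# (hypothetical naming, for illustration only):
num_proc = 4
shard_fps = [f"{fp}_{rank:05d}_of_{num_proc:05d}" for rank in range(num_proc)]
print(shard_fps)
```

The bug report boils down to this key not being derived consistently across worker processes when a custom `new_fingerprint` is supplied.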
1,020,252,114
3,043
Add PASS dataset
## Adding a Dataset - **Name:** PASS - **Description:** An ImageNet replacement for self-supervised pretraining without humans - **Data:** https://www.robots.ox.ac.uk/~vgg/research/pass/ https://github.com/yukimasano/PASS Instructions to add a new dataset can be found [here](https://github.com/huggingface/dataset...
closed
https://github.com/huggingface/datasets/issues/3043
2021-10-07T16:43:43
2022-01-20T16:50:47
2022-01-20T16:50:47
{ "login": "osanseviero", "id": 7246357, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "vision", "color": "bfdadc" } ]
false
[]
1,020,047,289
3,042
Improving elasticsearch integration
- adding murmurhash signature to samples in the index - adding optional credentials for a remote elasticsearch server - enabling sample update in the index - upgrading to the elasticsearch 7.10.1 python client - adding ElasticsearchBuilder to instantiate a dataset from an index and a filtering query
open
https://github.com/huggingface/datasets/pull/3042
2021-10-07T13:28:35
2022-07-06T15:19:48
null
{ "login": "ggdupont", "id": 5583410, "type": "User" }
[]
true
[]
1,018,911,385
3,041
Load private data files + use glob on ZIP archives for json/csv/etc. module inference
As mentioned in https://github.com/huggingface/datasets/issues/3032, loading data files from a private repository isn't working correctly because of how the data files are resolved. #2986 did a refactor of the data files resolver. I added authentication to it. I also improved it to glob inside ZIP archives to look for json/...
closed
https://github.com/huggingface/datasets/pull/3041
2021-10-06T18:16:36
2021-10-12T15:25:48
2021-10-12T15:25:46
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,018,782,475
3,040
[save_to_disk] Using `select()` followed by `save_to_disk` saves complete dataset making it hard to create dummy dataset
## Describe the bug When keeping only a dummy-sized subset of a dataset (say the first 100 samples), and then saving it to disk in order to upload it to the hub for easy demo/use - not just the small subset is saved but the whole dataset along with an indices file. The problem with this is that the dataset is still very...
closed
https://github.com/huggingface/datasets/issues/3040
2021-10-06T17:08:47
2021-11-02T15:41:08
2021-11-02T15:41:08
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,018,219,800
3,039
Add sberquad dataset
null
closed
https://github.com/huggingface/datasets/pull/3039
2021-10-06T12:32:02
2021-10-13T10:19:11
2021-10-13T10:16:04
{ "login": "Alenush", "id": 13781234, "type": "User" }
[]
true
[]
1,018,113,499
3,038
add sberquad dataset
null
closed
https://github.com/huggingface/datasets/pull/3038
2021-10-06T11:33:39
2021-10-06T11:58:01
2021-10-06T11:58:01
{ "login": "Alenush", "id": 13781234, "type": "User" }
[]
true
[]
1,018,091,919
3,037
SberQuad
null
closed
https://github.com/huggingface/datasets/pull/3037
2021-10-06T11:21:08
2021-10-06T11:33:08
2021-10-06T11:33:08
{ "login": "Alenush", "id": 13781234, "type": "User" }
[]
true
[]
1,017,687,944
3,036
Protect master branch to force contributions via Pull Requests
In order to have a clearer Git history in the master branch, I propose to protect it so that all contributions must be made through a Pull Request and no direct commits to master are allowed. - The Pull Request allows contributors to give context, discuss any potential issues and improve the quality of the contribution - The Pull...
closed
https://github.com/huggingface/datasets/issues/3036
2021-10-06T07:34:17
2021-10-07T06:51:47
2021-10-07T06:49:52
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,016,770,071
3,035
`load_dataset` does not work with uploaded arrow file
## Describe the bug I've preprocessed and uploaded a dataset here: https://huggingface.co/datasets/ami-wav2vec2/ami_headset_single_preprocessed . The dataset is in `.arrow` format. The dataset can correctly be loaded when doing: ```bash git lfs install git clone https://huggingface.co/datasets/ami-wav2vec2/a...
open
https://github.com/huggingface/datasets/issues/3035
2021-10-05T20:15:10
2021-10-06T17:01:37
null
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,016,759,202
3,034
Errors loading dataset using fs = a gcsfs.GCSFileSystem
## Describe the bug Cannot load dataset using a `gcsfs.GCSFileSystem`. I'm not sure if this should be a bug in `gcsfs` or here... Basically what seems to be happening is that since datasets saves datasets as folders and folders aren't "real objects" in gcs, gcsfs raises a 404 error. There are workarounds if you...
open
https://github.com/huggingface/datasets/issues/3034
2021-10-05T20:07:08
2021-10-05T20:26:39
null
{ "login": "dconatha", "id": 74556552, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,016,619,572
3,033
Actual "proper" install of ruamel.yaml in the windows CI
It was impossible to update the package directly with `pip`. Indeed it was installed with `distutils`, which prevents `pip` or `conda` from uninstalling it. I had to `rm` a directory from the `site-packages` python directory, and then do `pip install ruamel.yaml`. It's not that "proper" but I couldn't find a better soluti...
closed
https://github.com/huggingface/datasets/pull/3033
2021-10-05T17:52:07
2021-10-05T17:54:57
2021-10-05T17:54:57
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,016,488,475
3,032
Error when loading private dataset with "data_files" arg
## Describe the bug Private datasets with no loading script can't be loaded using the `data_files` parameter. ## Steps to reproduce the bug ```python from datasets import load_dataset data_files = {"train": "**/train/*/*.jsonl", "valid": "**/valid/*/*.jsonl"} d...
closed
https://github.com/huggingface/datasets/issues/3032
2021-10-05T15:46:27
2021-10-12T15:26:22
2021-10-12T15:25:46
{ "login": "borisdayma", "id": 715491, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,016,458,496
3,031
Align tqdm control with cache control
Currently, once disabled with `disable_progress_bar`, progress bars cannot be re-enabled again. To overcome this limitation, this PR introduces the `set_progress_bar_enabled` function that accepts a boolean indicating whether to display progress bars. The goal is to provide a similar API to the existing cache control A...
closed
https://github.com/huggingface/datasets/pull/3031
2021-10-05T15:18:49
2021-10-18T15:00:21
2021-10-18T14:59:30
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,016,435,324
3,030
Add `remove_columns` to `IterableDataset`
Fixes #2944 WIP * Not tested yet. * We might want to allow batched remove for efficiency. @lhoestq Do you think it should have `batched=` and `batch_size=`?
closed
https://github.com/huggingface/datasets/pull/3030
2021-10-05T14:58:33
2021-10-08T15:33:15
2021-10-08T15:31:53
{ "login": "changjonathanc", "id": 31893406, "type": "User" }
[]
true
[]
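Conceptually, `remove_columns` on a streaming dataset (PR 3030 above) is a lazy per-example projection: each example is a dict, and the named keys are dropped as examples flow through, with no full-dataset pass. A hedged sketch over a plain iterator of dicts, not the `IterableDataset` implementation itself:

```python
def remove_columns(examples, column_names):
    """Lazily drop the given keys from each streamed example."""
    names = set(column_names)
    for example in examples:
        yield {k: v for k, v in example.items() if k not in names}

stream = iter([
    {"id": 1, "text": "hello", "meta": "x"},
    {"id": 2, "text": "world", "meta": "y"},
])
print(list(remove_columns(stream, ["meta"])))
# [{'id': 1, 'text': 'hello'}, {'id': 2, 'text': 'world'}]
```

Because the wrapper is a generator, nothing is materialized until iteration, which is the point of doing this on an `IterableDataset`.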
1,016,389,901
3,029
Use standard open-domain validation split in nq_open
The nq_open dataset originally drew the validation set from this file: https://github.com/google-research-datasets/natural-questions/blob/master/nq_open/NQ-open.efficientqa.dev.1.1.sample.jsonl However, that's the dev set used specifically and only for the efficientqa competition, and it's not the same dev set as is ...
closed
https://github.com/huggingface/datasets/pull/3029
2021-10-05T14:19:27
2021-10-05T14:56:46
2021-10-05T14:56:45
{ "login": "craffel", "id": 417568, "type": "User" }
[]
true
[]
1,016,230,272
3,028
Properly install ruamel-yaml for windows CI
null
closed
https://github.com/huggingface/datasets/pull/3028
2021-10-05T11:51:15
2021-10-05T14:02:12
2021-10-05T11:51:22
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,016,150,117
3,027
Resolve data_files by split name
This issue is about discussing the default behavior when someone loads a dataset that consists of data files. For example: ```python load_dataset("lhoestq/demo1") ``` should return two splits "train" and "test" since the dataset repository is like ``` data/ ├── train.csv └── test.csv ``` Currently it returns ...
closed
https://github.com/huggingface/datasets/issues/3027
2021-10-05T10:24:36
2021-11-05T17:49:58
2021-11-05T17:49:57
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
1,016,067,794
3,026
added arxiv paper in swiss_judgment_prediction dataset card
null
closed
https://github.com/huggingface/datasets/pull/3026
2021-10-05T09:02:01
2021-10-08T16:01:44
2021-10-08T16:01:24
{ "login": "JoelNiklaus", "id": 3775944, "type": "User" }
[]
true
[]
1,016,061,222
3,025
Fix Windows test suite
Try a hotfix to restore Windows test suite. Fix #3024.
closed
https://github.com/huggingface/datasets/pull/3025
2021-10-05T08:55:22
2021-10-05T09:58:28
2021-10-05T09:58:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,016,052,911
3,024
Windows test suite fails
## Describe the bug There is an error during installation of tests dependencies for Windows: https://app.circleci.com/pipelines/github/huggingface/datasets/7981/workflows/9b6a0114-2b8e-4069-94e5-e844dbbdba4e/jobs/49206 ``` ERROR: Cannot uninstall 'ruamel-yaml'. It is a distutils installed project and thus we can...
closed
https://github.com/huggingface/datasets/issues/3024
2021-10-05T08:46:46
2021-10-05T09:58:27
2021-10-05T09:58:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,015,923,031
3,023
Fix typo
null
closed
https://github.com/huggingface/datasets/pull/3023
2021-10-05T06:06:11
2021-10-05T11:56:55
2021-10-05T11:56:55
{ "login": "qqaatw", "id": 24835382, "type": "User" }
[]
true
[]
1,015,750,221
3,022
MeDAL dataset: Add further description and update download URL
Added more details in the following sections: * Dataset Structure * Data Instances * Data Splits * Source Data * Annotations * Discussions of Biases * Licensing Information
closed
https://github.com/huggingface/datasets/pull/3022
2021-10-05T00:13:28
2021-10-13T09:03:09
2021-10-13T09:03:09
{ "login": "xhluca", "id": 21180505, "type": "User" }
[]
true
[]
1,015,444,094
3,021
Support loading dataset from multiple zipped CSV data files
Partially fixes #3018. CC: @lewtun
closed
https://github.com/huggingface/datasets/pull/3021
2021-10-04T17:33:57
2021-10-06T08:36:46
2021-10-06T08:36:45
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,015,406,105
3,020
Add a metric for the MATH dataset (competition_math).
This metric computes accuracy for the MATH dataset (https://arxiv.org/abs/2103.03874) after canonicalizing the prediction and the reference (e.g., converting "1/2" to "\\\\frac{1}{2}").
closed
https://github.com/huggingface/datasets/pull/3020
2021-10-04T16:52:16
2021-10-22T10:29:31
2021-10-22T10:29:31
{ "login": "hacobe", "id": 91226467, "type": "User" }
[]
true
[]
1,015,339,983
3,019
Fix filter leaking
If `filter` is called after a first transform (`shuffle`, `select`, `shard`, `train_test_split`, or `filter`), it may not work as expected and can return examples from before the first transform. This is because the indices mapping was not taken into account when saving the indices to keep when doing the filtering...
closed
https://github.com/huggingface/datasets/pull/3019
2021-10-04T15:42:58
2022-06-03T08:28:14
2021-10-05T08:33:07
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
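The bug fixed in PR 3019 comes from composing index selections: after a `shuffle`/`select`, the dataset holds an indices mapping into the original table, and a later `filter` must translate the positions it keeps through that mapping rather than use them as raw indices. A minimal sketch of the wrong and right behaviour over a plain list:

```python
data = ["a", "b", "c", "d", "e"]          # the original table
indices = [4, 3, 2]                        # e.g. after shuffle + select
view = [data[i] for i in indices]          # the view is ["e", "d", "c"]

# filter: keep positions in the *view* whose value is not "d"
keep = [i for i, x in enumerate(view) if x != "d"]

leaky = [data[i] for i in keep]            # WRONG: view positions used as raw indices
fixed = [data[indices[i]] for i in keep]   # RIGHT: composed through the indices mapping

print(leaky)  # ['a', 'c'] -- leaks examples from before the first transform
print(fixed)  # ['e', 'c']
```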
1,015,311,877
3,018
Support multiple zipped CSV data files
As requested by @lewtun, support loading multiple zipped CSV data files. ```python from datasets import load_dataset url = "https://domain.org/filename.zip" data_files = {"train": "train_filename.csv", "test": "test_filename.csv"} dataset = load_dataset("csv", data_dir=url, data_files=data_files) ```
open
https://github.com/huggingface/datasets/issues/3018
2021-10-04T15:16:59
2021-10-05T14:32:57
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
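Independently of the `datasets` API proposed in issue 3018, reading named CSV members out of a ZIP archive is straightforward with the stdlib; this sketch shows what the requested feature has to do under the hood (the archive is built in memory as a stand-in for the remote file):

```python
import csv
import io
import zipfile

# Build a small archive with one CSV per split.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("train_filename.csv", "text,label\nhello,0\nworld,1\n")
    zf.writestr("test_filename.csv", "text,label\nbye,0\n")

# Resolve each split name to a member of the archive and parse it.
splits = {}
with zipfile.ZipFile(buf) as zf:
    members = {"train": "train_filename.csv", "test": "test_filename.csv"}
    for split, member in members.items():
        with zf.open(member) as f:
            splits[split] = list(csv.DictReader(io.TextIOWrapper(f, encoding="utf-8")))

print(splits["train"])  # [{'text': 'hello', 'label': '0'}, {'text': 'world', 'label': '1'}]
```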
1,015,215,528
3,017
Remove unused parameter in xdirname
Minor fix to remove unused args `*p` in `xdirname`.
closed
https://github.com/huggingface/datasets/pull/3017
2021-10-04T13:55:53
2021-10-05T11:37:01
2021-10-05T11:37:00
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,015,208,654
3,016
Fix Windows paths in LJ Speech dataset
Minor fix in LJ Speech dataset for Windows pathname component separator. Related to #1878.
closed
https://github.com/huggingface/datasets/pull/3016
2021-10-04T13:49:37
2021-10-04T15:23:05
2021-10-04T15:23:04
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,015,130,845
3,015
Extend support for streaming datasets that use glob.glob
This PR extends the support in streaming mode for datasets that use `glob`, by patching the function `glob.glob`. Related to #2880, #2876, #2874
closed
https://github.com/huggingface/datasets/pull/3015
2021-10-04T12:42:37
2021-10-05T13:46:39
2021-10-05T13:46:38
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,015,070,751
3,014
Fix Windows path in MATH dataset
Minor fix in MATH dataset for Windows pathname component separator. Related to #2982.
closed
https://github.com/huggingface/datasets/pull/3014
2021-10-04T11:41:07
2021-10-04T12:46:44
2021-10-04T12:46:44
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,014,960,419
3,013
Improve `get_dataset_infos`?
Using the dedicated function `get_dataset_infos` on a dataset that has no dataset-info.json file returns an empty info: ``` >>> from datasets import get_dataset_infos >>> get_dataset_infos('wit') {} ``` While it's totally possible to get it (regenerate it) with: ``` >>> from datasets import load_dataset_b...
closed
https://github.com/huggingface/datasets/issues/3013
2021-10-04T09:47:04
2022-02-21T15:57:10
2022-02-21T15:57:10
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "question", "color": "d876e3" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,014,958,931
3,012
Replace item with float in metrics
As pointed out by @mariosasko in #3001, calling `float()` instead of `.item()` is faster. Moreover, it might avoid potential issues if any of the third-party functions eventually returns a `float` instead of an `np.float64`. Related to #3001.
closed
https://github.com/huggingface/datasets/pull/3012
2021-10-04T09:45:28
2021-10-04T11:30:34
2021-10-04T11:30:33
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,014,935,713
3,011
load_dataset_builder should error if "name" does not exist?
``` import datasets as ds builder = ds.load_dataset_builder('sent_comp', name="doesnotexist") builder.info.config_name ``` returns ``` 'doesnotexist' ``` Shouldn't it raise an error instead? For this dataset, the only valid values for `name` should be: `"default"` or `None` (ie. argument not passed)
open
https://github.com/huggingface/datasets/issues/3011
2021-10-04T09:20:46
2022-09-20T13:05:07
null
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,014,918,470
3,010
Chain filtering is leaking
## Describe the bug As there's no support for lists within dataset fields, I convert my lists to json-string format. However, the bug described is occurring even when the data format is 'string'. These samples show that filtering behavior diverges from what's expected when chaining filterings. On sample 2 the second...
closed
https://github.com/huggingface/datasets/issues/3010
2021-10-04T09:04:55
2022-06-01T17:36:44
2022-06-01T17:36:44
{ "login": "DrMatters", "id": 22641583, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,014,868,235
3,009
Fix Windows paths in SUPERB benchmark datasets
Minor fix in SUPERB benchmark datasets for Windows pathname component separator. Related to #2884, #2783 and #2619.
closed
https://github.com/huggingface/datasets/pull/3009
2021-10-04T08:13:49
2021-10-04T13:43:25
2021-10-04T13:43:25
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,014,849,163
3,008
Fix precision/recall metrics with None average
Related to issue #2979 and PR #2992.
closed
https://github.com/huggingface/datasets/pull/3008
2021-10-04T07:54:15
2021-10-04T09:29:37
2021-10-04T09:29:36
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,014,775,450
3,007
Correct a typo
null
closed
https://github.com/huggingface/datasets/pull/3007
2021-10-04T06:15:47
2021-10-04T09:27:57
2021-10-04T09:27:57
{ "login": "Yann21", "id": 35955430, "type": "User" }
[]
true
[]
1,014,770,821
3,006
Fix Windows paths in CommonLanguage dataset
Minor fix in CommonLanguage dataset for Windows pathname component separator. Related to #2989.
closed
https://github.com/huggingface/datasets/pull/3006
2021-10-04T06:08:58
2021-10-04T09:07:58
2021-10-04T09:07:58
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,014,615,420
3,005
DatasetDict.filter and Dataset.filter crashes with any "fn_kwargs" argument
## Describe the bug The ".filter" method of DatasetDict or Dataset objects fails when passing any "fn_kwargs" argument ## Steps to reproduce the bug ```python import datasets example_dataset = datasets.Dataset.from_dict({"a": [1, 2, 3, 4]}) def filter_value(example, value): return example['a'] == value...
closed
https://github.com/huggingface/datasets/issues/3005
2021-10-04T00:49:29
2021-10-11T10:18:01
2021-10-04T08:46:13
{ "login": "DrMatters", "id": 22641583, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
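What `fn_kwargs` is supposed to do in issue 3005 is simply forward extra keyword arguments to the predicate. A stand-in `filter` over a list of dicts shows the contract (hypothetical helper, not the `datasets` implementation):

```python
def filter_dataset(examples, function, fn_kwargs=None):
    """Keep examples for which function(example, **fn_kwargs) is truthy."""
    fn_kwargs = fn_kwargs or {}
    return [ex for ex in examples if function(ex, **fn_kwargs)]

def filter_value(example, value):
    return example["a"] == value

examples = [{"a": 1}, {"a": 2}, {"a": 3}, {"a": 4}]
print(filter_dataset(examples, filter_value, fn_kwargs={"value": 3}))  # [{'a': 3}]
```

The bug was that `Dataset.filter` crashed before ever reaching this forwarding step.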
1,014,336,617
3,004
LexGLUE: A Benchmark Dataset for Legal Language Understanding in English.
Inspired by the recent widespread use of the GLUE multi-task benchmark NLP dataset (Wang et al., 2018), the subsequent more difficult SuperGLUE (Wang et al., 2019), other previous multi-task NLP benchmarks (Conneau and Kiela, 2018; McCann et al., 2018), and similar initiatives in other domains (Peng et al., 2019), we i...
closed
https://github.com/huggingface/datasets/pull/3004
2021-10-03T10:03:25
2021-10-13T13:37:02
2021-10-13T13:37:01
{ "login": "iliaschalkidis", "id": 1626984, "type": "User" }
[]
true
[]
1,014,137,933
3,003
common_language: Fix license in README.md
...it's correct elsewhere
closed
https://github.com/huggingface/datasets/pull/3003
2021-10-02T18:47:37
2021-10-04T09:27:01
2021-10-04T09:27:01
{ "login": "jimregan", "id": 227350, "type": "User" }
[]
true
[]
1,014,120,524
3,002
Remove a reference to the open Arrow file when deleting a TF dataset created with to_tf_dataset
This [comment](https://github.com/huggingface/datasets/issues/2934#issuecomment-922970919) explains the issue. This PR fixes that with a `weakref` callback, and additionally: * renames `TensorflowDatasetMixIn` to `TensorflowDatasetMixin` for consistency * correctly indents `TensorflowDatasetMixin`'s docstring * repl...
closed
https://github.com/huggingface/datasets/pull/3002
2021-10-02T17:44:09
2021-10-13T11:48:00
2021-10-13T09:03:23
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,014,024,982
3,001
Fix cast to Python scalar in Matthews Correlation metric
This PR is motivated by issue #2964. The Matthews Correlation metric relies on sklearn's `matthews_corrcoef` function to compute the result. This function returns either `float` or `np.float64` (see the [source](https://github.com/scikit-learn/scikit-learn/blob/844b4be24d20fc42cc13b957374c718956a0db39/sklearn/metric...
closed
https://github.com/huggingface/datasets/pull/3001
2021-10-02T11:44:59
2021-10-04T09:54:04
2021-10-04T09:26:12
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,013,613,219
3,000
Fix json loader when conversion not implemented
Sometimes the arrow json parser fails if the `block_size` is too small and returns an `ArrowNotImplementedError: JSON conversion to struct...` error. By increasing the block size it makes it work again. Hopefully it should help with https://github.com/huggingface/datasets/issues/2799 I tried with the file ment...
closed
https://github.com/huggingface/datasets/pull/3000
2021-10-01T17:47:22
2021-10-01T18:05:00
2021-10-01T17:54:23
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,013,536,933
2,999
Set trivia_qa writer batch size
Save some RAM when generating trivia_qa
closed
https://github.com/huggingface/datasets/pull/2999
2021-10-01T16:23:26
2021-10-01T16:34:55
2021-10-01T16:34:55
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,013,372,871
2,998
cannot shuffle dataset loaded from disk
## Describe the bug dataset loaded from disk cannot be shuffled. ## Steps to reproduce the bug ``` my_dataset = load_from_disk('s3://my_file/validate', fs=s3) sample = my_dataset.select(range(100)).shuffle(seed=1234) ``` ## Actual results ``` sample = my_dataset .select(range(100)).shuffle(seed=1234) ...
open
https://github.com/huggingface/datasets/issues/2998
2021-10-01T13:49:52
2021-10-01T13:49:52
null
{ "login": "pya25", "id": 54274249, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,013,270,069
2,997
Dataset has incorrect labels
The dataset https://huggingface.co/datasets/turkish_product_reviews has incorrect labels - all reviews are labelled with "1" (positive sentiment). None of the reviews is labelled with "0". See screenshot attached: ![Capture](https://user-images.githubusercontent.com/63367770/135617428-14ce0b27-5208-4e66-a3ee-71542e3...
closed
https://github.com/huggingface/datasets/issues/2997
2021-10-01T12:09:06
2021-10-01T15:32:00
2021-10-01T13:54:34
{ "login": "heiko-hotz", "id": 63367770, "type": "User" }
[]
false
[]
1,013,266,373
2,996
Remove all query parameters when extracting protocol
Fix `_get_extraction_protocol` to remove all query parameters, like `?raw=true`, `?dl=1`,...
closed
https://github.com/huggingface/datasets/pull/2996
2021-10-01T12:05:34
2021-10-04T08:48:13
2021-10-04T08:48:13
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,013,143,868
2,995
Fix trivia_qa unfiltered
Fix https://github.com/huggingface/datasets/issues/2993
closed
https://github.com/huggingface/datasets/pull/2995
2021-10-01T09:53:43
2021-10-01T10:04:11
2021-10-01T10:04:10
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,013,000,475
2,994
Fix loading compressed CSV without streaming
When implementing support to stream CSV files (https://github.com/huggingface/datasets/commit/ad489d4597381fc2d12c77841642cbeaecf7a2e0#diff-6f60f8d0552b75be8b3bfd09994480fd60dcd4e7eb08d02f721218c3acdd2782), a regression was introduced preventing loading compressed CSV files in non-streaming mode. This PR fixes it, a...
closed
https://github.com/huggingface/datasets/pull/2994
2021-10-01T07:28:59
2021-10-01T15:53:16
2021-10-01T15:53:16
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,012,702,665
2,993
Can't download `trivia_qa/unfiltered`
## Describe the bug For some reason, I can't download `trivia_qa/unfiltered`. A file seems to be missing... I am able to see it fine through the viewer though... ## Steps to reproduce the bug ```python >>> from datasets import load_dataset >>> load_dataset("trivia_qa", "unfiltered") Downloading and preparing data...
closed
https://github.com/huggingface/datasets/issues/2993
2021-09-30T23:00:18
2021-10-01T19:07:23
2021-10-01T19:07:22
{ "login": "VictorSanh", "id": 16107619, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,012,325,594
2,992
Fix f1 metric with None average
Fix #2979.
closed
https://github.com/huggingface/datasets/pull/2992
2021-09-30T15:31:57
2021-10-01T14:17:39
2021-10-01T14:17:38
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,012,174,823
2,991
add documentation for the `Unix style pattern` matching feature that can be leveraged for `data_files` into `load_dataset`
Unless I'm mistaken, it seems that in the new documentation it is no longer mentioned that you can use Unix style pattern matching in the `data_files` argument of the `load_dataset` method. This feature was mentioned [here](https://huggingface.co/docs/datasets/loading_datasets.html#from-a-community-dataset-on-the-h...
open
https://github.com/huggingface/datasets/issues/2991
2021-09-30T13:22:01
2021-09-30T13:22:01
null
{ "login": "SaulLu", "id": 55560583, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,012,097,418
2,990
Make Dataset.map accept list of np.array
Fix #2987.
closed
https://github.com/huggingface/datasets/pull/2990
2021-09-30T12:08:54
2021-10-01T13:57:46
2021-10-01T13:57:46
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,011,220,375
2,989
Add CommonLanguage
This PR adds the Common Language dataset (https://zenodo.org/record/5036977) The dataset is intended for language-identification speech classifiers and is already used by models on the Hub: * https://huggingface.co/speechbrain/lang-id-commonlanguage_ecapa * https://huggingface.co/anton-l/wav2vec2-base-langid cc @...
closed
https://github.com/huggingface/datasets/pull/2989
2021-09-29T17:21:30
2021-10-01T17:36:39
2021-10-01T17:00:03
{ "login": "anton-l", "id": 26864830, "type": "User" }
[]
true
[]
1,011,148,017
2,988
IndexError: Invalid key: 14 is out of bounds for size 0
## Describe the bug A clear and concise description of what the bug is. Hi. I am trying to implement the stochastic weight averaging optimizer with the transformers library as described here https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/ , for this I am using the run_clm.py code which is wor...
closed
https://github.com/huggingface/datasets/issues/2988
2021-09-29T16:04:24
2022-04-10T14:49:49
2022-04-10T14:49:49
{ "login": "dorost1234", "id": 79165106, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,011,026,141
2,987
ArrowInvalid: Can only convert 1-dimensional array values
## Describe the bug For the ViT and LayoutLMv2 demo notebooks in my [Transformers-Tutorials repo](https://github.com/NielsRogge/Transformers-Tutorials), people reported an ArrowInvalid issue after applying the following function to a Dataset: ``` def preprocess_data(examples): images = [Image.open(path).conve...
closed
https://github.com/huggingface/datasets/issues/2987
2021-09-29T14:18:52
2021-10-01T13:57:45
2021-10-01T13:57:45
{ "login": "NielsRogge", "id": 48327001, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,010,792,783
2,986
Refac module factory + avoid etag requests for hub datasets
## Refactor the module factory When trying to extend the `data_files` logic to avoid doing unnecessary ETag requests, I noticed that the module preparation mechanism needed a refactor: - the function was 600 lines long - it was not readable - it contained many different cases that made it complex to maintain - i...
closed
https://github.com/huggingface/datasets/pull/2986
2021-09-29T10:42:00
2021-10-11T11:05:53
2021-10-11T11:05:52
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,010,500,433
2,985
add new dataset kan_hope
## Adding a Dataset - **Name:** *KanHope* - **Description:** *A code-mixed English-Kannada dataset for Hope speech detection* - **Task:** *Binary Text Classification* - **Paper:** *https://arxiv.org/abs/2108.04616* - **Data:** *https://github.com/adeepH/kan_hope/tree/main/dataset* - **Motivation:** *The dataset ...
closed
https://github.com/huggingface/datasets/pull/2985
2021-09-29T05:20:28
2021-10-01T16:55:19
2021-10-01T16:55:19
{ "login": "adeepH", "id": 46108405, "type": "User" }
[]
true
[]
1,010,484,326
2,984
Exceeded maximum rows when reading large files
## Describe the bug A clear and concise description of what the bug is. When using `load_dataset` with json files, if the files are too large, there will be "Exceeded maximum rows" error. ## Steps to reproduce the bug ```python dataset = load_dataset('json', data_files=data_files) # data files have 3M rows in a ...
closed
https://github.com/huggingface/datasets/issues/2984
2021-09-29T04:49:22
2021-10-12T06:05:42
2021-10-12T06:05:42
{ "login": "zijwang", "id": 25057983, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,010,263,058
2,983
added SwissJudgmentPrediction dataset
null
closed
https://github.com/huggingface/datasets/pull/2983
2021-09-28T22:17:56
2021-10-01T16:03:05
2021-10-01T16:03:05
{ "login": "JoelNiklaus", "id": 3775944, "type": "User" }
[]
true
[]
1,010,118,418
2,982
Add the Math Aptitude Test of Heuristics dataset.
null
closed
https://github.com/huggingface/datasets/pull/2982
2021-09-28T19:18:37
2021-10-01T19:51:23
2021-10-01T12:21:00
{ "login": "hacobe", "id": 91226467, "type": "User" }
[]
true
[]
1,009,969,310
2,981
add wit dataset
Resolves #2902 based on conversation there - would also close #2810. Open to suggestions/help 😀 CC @hassiahk @lhoestq @yjernite
closed
https://github.com/huggingface/datasets/pull/2981
2021-09-28T16:34:49
2022-05-05T14:26:41
2022-05-05T14:26:41
{ "login": "nateraw", "id": 32437151, "type": "User" }
[]
true
[]
1,009,873,482
2,980
OpenSLR 25: ASR data for Amharic, Swahili and Wolof
## Adding a Dataset - **Name:** *SLR25* - **Description:** *Subset 25 from OpenSLR. Other subsets have been added to https://huggingface.co/datasets/openslr, 25 covers Amharic, Swahili and Wolof data* - **Paper:** *https://www.openslr.org/25/ has citations for each of the three subsets. * - **Data:** *Currently ...
open
https://github.com/huggingface/datasets/issues/2980
2021-09-28T15:04:36
2021-09-29T17:25:14
null
{ "login": "cdleong", "id": 4109253, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,009,634,147
2,979
ValueError when computing f1 metric with average None
## Describe the bug When I try to compute the f1 score for each class in a multiclass classification problem, I get a ValueError. The same happens with recall and precision. I traced the error to the `.item()` in these scripts, which is probably there for the other averages. E.g. from f1.py: ```python return { ...
closed
https://github.com/huggingface/datasets/issues/2979
2021-09-28T11:34:53
2021-10-01T14:17:38
2021-10-01T14:17:38
{ "login": "asofiaoliveira", "id": 74454835, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,009,521,419
2,978
Run CI tests against non-production server
Currently, the CI test suite performs requests to the HF production server. As discussed with @elishowk, we should refactor our tests to use the HF staging server instead, like `huggingface_hub` and `transformers`.
open
https://github.com/huggingface/datasets/issues/2978
2021-09-28T09:41:26
2021-09-28T15:23:50
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
false
[]
1,009,378,692
2,977
Impossible to load compressed csv
## Describe the bug It is not possible to load from a compressed csv anymore. ## Steps to reproduce the bug ```python load_dataset('csv', data_files=['/path/to/csv.bz2']) ``` ## Problem and possible solution This used to work, but the commit that broke it is [this one](https://github.com/huggingface/datasets...
closed
https://github.com/huggingface/datasets/issues/2977
2021-09-28T07:18:54
2021-10-01T15:53:16
2021-10-01T15:53:15
{ "login": "Valahaar", "id": 19476123, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,008,647,889
2,976
Can't load dataset
I'm trying to load a wikitext dataset ``` from datasets import load_dataset raw_datasets = load_dataset("wikitext") ``` ValueError: Config name is missing. Please pick one among the available configs: ['wikitext-103-raw-v1', 'wikitext-2-raw-v1', 'wikitext-103-v1', 'wikitext-2-v1'] Example of usage: `load_d...
closed
https://github.com/huggingface/datasets/issues/2976
2021-09-27T21:38:14
2024-04-08T03:27:29
2021-09-28T06:53:01
{ "login": "mskovalova", "id": 77006774, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,008,444,654
2,975
ignore dummy folder and dataset_infos.json
Fixes #2877 Added the `dataset_infos.json` to the ignored files list and also added check to ignore files which have parent directory as `dummy`. Let me know if it is correct. Thanks :)
closed
https://github.com/huggingface/datasets/pull/2975
2021-09-27T18:09:03
2021-09-29T09:45:38
2021-09-29T09:05:38
{ "login": "Ishan-Kumar2", "id": 46553104, "type": "User" }
[]
true
[]
1,008,247,787
2,974
Actually disable dummy labels by default
So I might have just changed the docstring instead of the actual default argument value and not realized. @lhoestq I'm sorry >.>
closed
https://github.com/huggingface/datasets/pull/2974
2021-09-27T14:50:20
2021-09-29T09:04:42
2021-09-29T09:04:41
{ "login": "Rocketknight1", "id": 12866554, "type": "User" }
[]
true
[]
1,007,894,592
2,973
Fix JSON metadata of masakhaner dataset
Fix #2971.
closed
https://github.com/huggingface/datasets/pull/2973
2021-09-27T09:09:08
2021-09-27T12:59:59
2021-09-27T12:59:59
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,007,808,714
2,972
OSError: Not enough disk space.
## Describe the bug I'm trying to download the `natural_questions` dataset from the Internet, and I've specified the cache_dir, which is located on a mounted disk that has enough disk space. However, even though the space is enough, the disk space checking function still reports the root `/` disk as not having enough spac...
closed
https://github.com/huggingface/datasets/issues/2972
2021-09-27T07:41:22
2024-12-04T02:56:19
2021-09-28T06:43:15
{ "login": "qqaatw", "id": 24835382, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,007,696,522
2,971
masakhaner dataset load problem
## Describe the bug Masakhaner dataset is not loading ## Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset("masakhaner",'amh') ``` ## Expected results Expected the return of a dataset ## Actual results ``` NonMatchingSplitsSizesError Traceback (mo...
closed
https://github.com/huggingface/datasets/issues/2971
2021-09-27T04:59:07
2021-09-27T12:59:59
2021-09-27T12:59:59
{ "login": "huu4ontocord", "id": 8900094, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]