| Column | Type | Min | Max |
|---|---|---|---|
| id | int64 | 599M | 3.26B |
| number | int64 | 1 | 7.7k |
| title | stringlengths | 1 | 290 |
| body | stringlengths | 0 | 228k |
| state | stringclasses | 2 values | |
| html_url | stringlengths | 46 | 51 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-07-23 08:04:53 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-07-23 18:53:44 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-07-23 16:44:42 |
| user | dict | | |
| labels | listlengths | 0 | 4 |
| is_pull_request | bool | 2 classes | |
| comments | listlengths | 0 | 0 |
1,102,598,241
3,577
Add The Mexican Emotional Speech Database (MESD)
## Adding a Dataset - **Name:** *The Mexican Emotional Speech Database (MESD)* - **Description:** *Contains 864 voice recordings with six different prosodies: anger, disgust, fear, happiness, neutral, and sadness. Furthermore, three voice categories are included: female adult, male adult, and child. * - **Paper:** *...
open
https://github.com/huggingface/datasets/issues/3577
2022-01-13T23:49:36
2022-01-27T14:14:38
null
{ "login": "omarespejel", "id": 4755430, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "speech", "color": "d93f0b" } ]
false
[]
1,102,059,651
3,576
Add PASS dataset
This PR adds the PASS dataset. Closes #3043
closed
https://github.com/huggingface/datasets/pull/3576
2022-01-13T17:16:07
2022-01-20T16:50:48
2022-01-20T16:50:47
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,101,947,955
3,575
Add Arrow type casting to struct for Image and Audio + Support nested casting
## Intro 1. Currently, it's not possible to have nested features containing Audio or Image. 2. Moreover, one can keep an Arrow array as a StringArray to store paths to images, but such arrays can't be directly concatenated to another image array if it's stored as another Arrow type (typically, a StructType). 3...
closed
https://github.com/huggingface/datasets/pull/3575
2022-01-13T15:36:59
2022-11-29T11:14:16
2022-01-21T13:22:27
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,101,781,401
3,574
Fix qa4mre tags
The YAML tags were invalid. I also fixed the dataset mirroring logging that failed because of this issue [here](https://github.com/huggingface/datasets/actions/runs/1690109581)
closed
https://github.com/huggingface/datasets/pull/3574
2022-01-13T13:56:59
2022-01-13T14:03:02
2022-01-13T14:03:01
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,101,157,676
3,573
Add Mauve metric
Add support for the [Mauve](https://github.com/krishnap25/mauve) metric introduced in this [paper](https://arxiv.org/pdf/2102.01454.pdf) (Neurips, 2021).
closed
https://github.com/huggingface/datasets/pull/3573
2022-01-13T03:52:48
2022-01-20T15:00:08
2022-01-20T15:00:08
{ "login": "jthickstun", "id": 2321244, "type": "User" }
[]
true
[]
1,100,634,244
3,572
ConnectionError in IndicGLUE dataset
While I am trying to load the IndicGLUE dataset (https://huggingface.co/datasets/indic_glue), it fails with the error: ``` ConnectionError: Couldn't reach https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/evaluations/wikiann-ner.tar.gz (error 403)
closed
https://github.com/huggingface/datasets/issues/3572
2022-01-12T17:59:36
2022-09-15T21:57:34
2022-09-15T21:57:34
{ "login": "sahoodib", "id": 79107194, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,100,519,604
3,571
Add missing tasks to MuchoCine dataset
Addresses the 2nd bullet point in #2520. I'm also removing the licensing information, because I couldn't verify that it is correct.
closed
https://github.com/huggingface/datasets/pull/3571
2022-01-12T16:07:32
2022-01-20T16:51:08
2022-01-20T16:51:07
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,100,480,791
3,570
Add the KMWP dataset (extension of #3564)
New pull request for #3564 (Add the KMWP dataset)
closed
https://github.com/huggingface/datasets/pull/3570
2022-01-12T15:33:08
2022-10-01T06:43:16
2022-10-01T06:43:16
{ "login": "sooftware", "id": 42150335, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,100,478,994
3,569
Add the DKTC dataset (Extension of #3564)
New pull request for #3564 (for DKTC).
closed
https://github.com/huggingface/datasets/pull/3569
2022-01-12T15:31:29
2022-10-01T06:43:05
2022-10-01T06:43:04
{ "login": "sooftware", "id": 42150335, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,100,380,631
3,568
Downloading Hugging Face Medical Dialog Dataset NonMatchingSplitsSizesError
I wanted to download the Medical Dialog Dataset from huggingface, using this github link: https://github.com/huggingface/datasets/tree/master/datasets/medical_dialog After downloading the raw datasets from google drive, I unpacked everything and put it in the same folder as the medical_dialog.py which is: ``` ...
closed
https://github.com/huggingface/datasets/issues/3568
2022-01-12T14:03:44
2022-02-14T09:32:34
2022-02-14T09:32:34
{ "login": "fabianslife", "id": 49265757, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,100,296,696
3,567
Fix push to hub to allow individual split push
# Description of the issue If one decides to push a single split to a datasets repo, the upload overrides the config, and the previous config's splits end up being lost even though their data is still present. The new flow is the following: - query the old config from the repo - update into a new co...
closed
https://github.com/huggingface/datasets/pull/3567
2022-01-12T12:42:58
2023-09-24T09:54:19
2022-07-27T12:11:11
{ "login": "thomasw21", "id": 24695242, "type": "User" }
[]
true
[]
1,100,155,902
3,566
Add initial electricity time series dataset
Here is an initial prototype time series dataset
closed
https://github.com/huggingface/datasets/pull/3566
2022-01-12T10:21:32
2022-02-15T13:31:48
2022-02-15T13:31:48
{ "login": "kashif", "id": 8100, "type": "User" }
[]
true
[]
1,099,296,693
3,565
Add parameter `preserve_index` to `from_pandas`
Added an optional parameter so that users can avoid preserving a useless index (usage sketch below). [Issue](https://github.com/huggingface/datasets/issues/3563)
closed
https://github.com/huggingface/datasets/pull/3565
2022-01-11T15:26:37
2022-01-12T16:11:27
2022-01-12T16:11:27
{ "login": "Sorrow321", "id": 20703486, "type": "User" }
[]
true
[]
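A minimal usage sketch for the parameter added here (the dataframe and column name are hypothetical placeholders):

```python
import pandas as pd
from datasets import Dataset

df = pd.DataFrame({"some_column": [3, 1, 2]})  # stand-in for the issue's CSV
df = df.sort_values("some_column")             # preprocessing that scrambles the index

# Without preserve_index=False, from_pandas keeps the now-meaningless pandas
# index around as an extra "__index_level_0__" column:
dataset = Dataset.from_pandas(df, preserve_index=False)
print(dataset.column_names)  # ['some_column'] — no index column
```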
1,099,214,403
3,564
Add the KMWP & DKTC dataset.
Add the DKTC dataset. - https://github.com/tunib-ai/DKTC
closed
https://github.com/huggingface/datasets/pull/3564
2022-01-11T14:14:08
2022-01-12T15:33:49
2022-01-12T15:33:28
{ "login": "sooftware", "id": 42150335, "type": "User" }
[]
true
[]
1,099,070,368
3,563
Dataset.from_pandas preserves useless index
## Describe the bug Let's say that you want to create a Dataset object from a pandas dataframe. Most likely you will write something like this: ``` import pandas as pd from datasets import Dataset df = pd.read_csv('some_dataset.csv') # Some DataFrame preprocessing code... dataset = Dataset.from_pandas(df) `...
closed
https://github.com/huggingface/datasets/issues/3563
2022-01-11T12:07:07
2022-01-12T16:11:27
2022-01-12T16:11:27
{ "login": "Sorrow321", "id": 20703486, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,098,341,351
3,562
Allow multiple task templates of the same type
Add support for multiple task templates of the same type. Fixes (partially) #2520. CC: @lewtun
closed
https://github.com/huggingface/datasets/pull/3562
2022-01-10T20:32:07
2022-01-11T14:16:47
2022-01-11T14:16:47
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,098,328,870
3,561
Cannot load ‘bookcorpusopen’
## Describe the bug Cannot load 'bookcorpusopen' ## Steps to reproduce the bug ```python dataset = load_dataset('bookcorpusopen') ``` or ```python dataset = load_dataset('bookcorpusopen',script_version='master') ``` ## Actual results ConnectionError: Couldn't reach https://the-eye.eu/public/AI/pile_pre...
closed
https://github.com/huggingface/datasets/issues/3561
2022-01-10T20:17:18
2022-02-14T09:19:27
2022-02-14T09:18:47
{ "login": "HUIYINXUE", "id": 54684403, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,098,280,652
3,560
Run pyupgrade for Python 3.6+
Run the command: ```bash pyupgrade $(find . -name "*.py" -type f) --py36-plus ``` This mainly avoids unnecessary list creations and also removes code that is unnecessary on Python 3.6+. It was originally part of #3489. Tip for reviewing faster: use the CLI (`git diff`) and scroll.
closed
https://github.com/huggingface/datasets/pull/3560
2022-01-10T19:20:53
2022-01-31T13:38:49
2022-01-31T09:37:34
{ "login": "bryant1410", "id": 3905501, "type": "User" }
[]
true
[]
1,098,178,222
3,559
Fix `DuplicatedKeysError` and improve card in `tweet_qa`
Fix #3555
closed
https://github.com/huggingface/datasets/pull/3559
2022-01-10T17:27:40
2022-01-12T15:13:58
2022-01-12T15:13:57
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,098,025,866
3,558
Integrate Milvus (pymilvus) library
Milvus is a popular open-source vector database. We should add a new vector index to support this project.
open
https://github.com/huggingface/datasets/issues/3558
2022-01-10T15:20:29
2022-03-05T12:28:36
null
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,097,946,034
3,557
Fix bug in `ImageClassification` task template
Fixes a bug in the `ImageClassification` task template which requires specifying class labels twice in dataset scripts. Additionally, this PR refactors the API around the classification task templates for cleaner `labels` handling. CC: @lewtun @nateraw
closed
https://github.com/huggingface/datasets/pull/3557
2022-01-10T14:09:59
2022-01-11T15:47:52
2022-01-11T15:47:52
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,097,907,724
3,556
Preserve encoding/decoding with features in `Iterable.map` call
As described in https://github.com/huggingface/datasets/issues/3505#issuecomment-1004755657, this PR uses a generator expression to encode/decode examples with `features` (which are set to None in `map`) before applying a map transform. Fix #3505
closed
https://github.com/huggingface/datasets/pull/3556
2022-01-10T13:32:20
2022-01-18T19:54:08
2022-01-18T19:54:07
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
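A sketch of the flow this PR fixes, assuming a streamed audio dataset (the `common_voice` config is borrowed from another issue in this log); before the fix, `map` saw only the raw audio path instead of the decoded feature:

```python
from datasets import load_dataset, Audio

ds = load_dataset("common_voice", "ab", split="train", streaming=True)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

# With the fix, the casted Audio feature is decoded before the map transform,
# so ex["audio"] is a dict with "array" and "sampling_rate" keys:
ds = ds.map(lambda ex: {"duration": len(ex["audio"]["array"]) / ex["audio"]["sampling_rate"]})
print(next(iter(ds))["duration"])
```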
1,097,736,982
3,555
DuplicatedKeysError when loading tweet_qa dataset
When loading the tweet_qa dataset with `load_dataset('tweet_qa')`, the following error occurs: `DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 2a167f9e016ba338e1813fed275a6a1e Keys should be unique and deterministic in nature ` Might be related to issues #2433 and #2333 - `datasets` ...
closed
https://github.com/huggingface/datasets/issues/3555
2022-01-10T10:53:11
2022-01-12T15:17:33
2022-01-12T15:13:56
{ "login": "LeonieWeissweiler", "id": 30300891, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,097,711,367
3,554
ImportError: cannot import name 'is_valid_waiter_error'
Based on [SO post](https://stackoverflow.com/q/70606147/17840900). I'm following along with this [Notebook][1], cell "**Loading the dataset**". Kernel: `conda_pytorch_p36`. I run: ``` ! pip install datasets transformers optimum[intel] ``` Output: ``` Requirement already satisfied: datasets in /home/ec2-u...
closed
https://github.com/huggingface/datasets/issues/3554
2022-01-10T10:32:04
2022-02-14T09:35:57
2022-02-14T09:35:57
{ "login": "danielbellhv", "id": 84714841, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,097,252,275
3,553
set_format("np") no longer works for Image data
## Describe the bug `dataset.set_format("np")` no longer works for image data, previously you could load the MNIST like this: ```python dataset = load_dataset("mnist") dataset.set_format("np") X_train = dataset["train"]["image"][..., None] # <== No longer a numpy array ``` but now it doesn't work, `set_format(...
closed
https://github.com/huggingface/datasets/issues/3553
2022-01-09T17:18:13
2022-10-14T12:03:55
2022-10-14T12:03:54
{ "login": "cgarciae", "id": 5862228, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
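A sketch of the regression and a possible workaround (manually stacking the decoded images; this is not the library's eventual fix):

```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset("mnist", split="train")
dataset.set_format("np")

# In affected versions dataset["image"] no longer came back as a single
# NumPy array; stacking the per-example images manually still works:
X_train = np.stack([np.asarray(img) for img in dataset["image"]])[..., None]
```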
1,096,985,204
3,552
Add the KMWP & DKTC dataset.
Add the KMWP & DKTC dataset. Additional notes: - Both datasets will be released on January 10 through the GitHub link below. - https://github.com/tunib-ai/DKTC - https://github.com/tunib-ai/KMWP - So it doesn't work as a link at the moment, but the code will work soon (after it is released on January 10).
closed
https://github.com/huggingface/datasets/pull/3552
2022-01-08T17:12:14
2022-01-11T14:13:30
2022-01-11T14:13:30
{ "login": "sooftware", "id": 42150335, "type": "User" }
[]
true
[]
1,096,561,111
3,551
Add more compression types for `to_json`
This PR adds `bz2`, `xz`, and `zip` (WIP) for `to_json`. I also plan to add `infer`, like `pandas` does.
closed
https://github.com/huggingface/datasets/pull/3551
2022-01-07T18:25:02
2022-07-10T14:36:55
2022-02-21T15:58:15
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
1,096,522,377
3,550
Bug in `openbookqa` dataset
## Describe the bug Dataset entries contain a typo. ## Steps to reproduce the bug ```python >>> from datasets import load_dataset >>> obqa = load_dataset('openbookqa', 'main') >>> obqa['train'][0] ``` ## Expected results ```python {'id': '7-980', 'question_stem': 'The sun is responsible for', 'choices'...
closed
https://github.com/huggingface/datasets/issues/3550
2022-01-07T17:32:57
2022-05-04T06:33:00
2022-05-04T06:32:19
{ "login": "lucadiliello", "id": 23355969, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,096,426,996
3,549
Fix sem_eval_2018_task_1 download location
This changes the download location of sem_eval_2018_task_1 files to include the test set labels, as discussed in https://github.com/huggingface/datasets/issues/2745#issuecomment-954588500 with @lhoestq.
closed
https://github.com/huggingface/datasets/pull/3549
2022-01-07T15:37:52
2022-01-27T15:52:03
2022-01-27T15:52:03
{ "login": "maxpel", "id": 31095360, "type": "User" }
[]
true
[]
1,096,409,512
3,548
Specify the feature types of a dataset on the Hub without needing a dataset script
**Is your feature request related to a problem? Please describe.** Currently if I upload a CSV with paths to audio files, the column type is string instead of Audio. **Describe the solution you'd like** I'd like to be able to specify the types of the column, so that when loading the dataset I directly get the feat...
closed
https://github.com/huggingface/datasets/issues/3548
2022-01-07T15:17:06
2022-01-20T14:48:38
2022-01-20T14:48:38
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,096,405,515
3,547
Datasets created with `push_to_hub` can't be accessed in offline mode
## Describe the bug In offline mode, one can still access previously-cached datasets. This fails with datasets created with `push_to_hub`. ## Steps to reproduce the bug in Python: ``` import datasets mpwiki = datasets.load_dataset("teven/matched_passages_wikidata") ``` in bash: ``` export HF_DATASETS_OFFLIN...
closed
https://github.com/huggingface/datasets/issues/3547
2022-01-07T15:12:25
2024-02-15T17:41:24
2023-12-21T15:13:12
{ "login": "TevenLeScao", "id": 26709476, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
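The reproduction steps in the body are truncated; a self-contained sketch of the same flow, with the environment variable set before the import so it takes effect:

```python
import os
os.environ["HF_DATASETS_OFFLINE"] = "1"  # equivalent to the `export` in the report

import datasets

# With a warm cache this should load without network access; in affected
# versions, datasets created with push_to_hub raised a ConnectionError here.
mpwiki = datasets.load_dataset("teven/matched_passages_wikidata")
```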
1,096,367,684
3,546
Remove print statements in datasets
This is the second time I'm removing print statements from our datasets, so I've added a test to avoid these issues in the future.
closed
https://github.com/huggingface/datasets/pull/3546
2022-01-07T14:30:24
2022-01-07T18:09:16
2022-01-07T18:09:15
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,096,189,889
3,545
fix: 🐛 pass token when retrieving the split names
null
closed
https://github.com/huggingface/datasets/pull/3545
2022-01-07T10:29:22
2022-01-10T10:51:47
2022-01-10T10:51:46
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
1,095,784,681
3,544
Ability to split a dataset in multiple files.
Hello, **Is your feature request related to a problem? Please describe.** My use case is that I have one writer that adds columns and multiple workers reading the same `Dataset`. Each worker should have access to columns added by the writer when they reload the dataset. I understand that we shouldn't overwrite...
open
https://github.com/huggingface/datasets/issues/3544
2022-01-06T23:02:25
2022-01-06T23:02:25
null
{ "login": "Dref360", "id": 8976546, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,095,226,438
3,543
Allow loading community metrics from the hub, just like datasets
**Is your feature request related to a problem? Please describe.** Currently, I can load a metric implemented by me by providing the local path to the file in `load_metric`. However, there is no option to do it with the metric uploaded to the hub. This means that if I want to allow other users to use it, they must d...
closed
https://github.com/huggingface/datasets/issues/3543
2022-01-06T11:26:26
2022-05-31T20:59:14
2022-05-31T20:53:37
{ "login": "eladsegal", "id": 13485709, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "generic discussion", "color": "c5def5" } ]
false
[]
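A sketch contrasting what already works with what this issue requests (both metric paths are hypothetical):

```python
from datasets import load_metric

# Works today: a metric script on the local filesystem
metric = load_metric("./my_metric/my_metric.py")  # hypothetical local script

# Requested: resolving a community metric from the Hub by namespace, e.g.
# metric = load_metric("some_user/my_metric")     # hypothetical Hub id
```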
1,095,088,485
3,542
Update the CC-100 dataset card
* summary from the dataset homepage * more details about the data structure * this dataset does not contain annotations
closed
https://github.com/huggingface/datasets/pull/3542
2022-01-06T08:35:18
2022-01-06T18:37:44
2022-01-06T18:37:44
{ "login": "aajanki", "id": 353043, "type": "User" }
[]
true
[]
1,095,033,828
3,541
Support 7-zip compressed data files
**Is your feature request related to a problem? Please describe.** We should support 7-zip compressed data files: - [x] in `extract`: - #4672 - [ ] in `iter_archive`: both in streaming and non-streaming modes.
open
https://github.com/huggingface/datasets/issues/3541
2022-01-06T07:11:03
2022-07-19T10:18:30
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,094,900,336
3,540
How to convert torch.utils.data.Dataset to datasets.arrow_dataset.Dataset?
Hi, I use torch.utils.data.Dataset to define my own data, but I need to use the 'map' function of datasets.arrow_dataset.Dataset later, so I hope to convert torch.utils.data.Dataset to datasets.arrow_dataset.Dataset. Here is an example. ``` from torch.utils.data import Dataset from datasets.arrow_dataset import ...
open
https://github.com/huggingface/datasets/issues/3540
2022-01-06T02:13:42
2022-01-06T02:17:39
null
{ "login": "CindyTing", "id": 35062414, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
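The example in the body is truncated; a minimal sketch of one possible conversion path, materializing the torch dataset into columns (this assumes the data fits in memory and is not an official API):

```python
from torch.utils.data import Dataset as TorchDataset
from datasets import Dataset

class MyTorchDataset(TorchDataset):
    def __init__(self):
        self.texts = ["hello", "world"]

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, i):
        return {"text": self.texts[i]}

torch_ds = MyTorchDataset()
# Materialize the examples into a dict of columns, then build an Arrow-backed
# Dataset so that .map() becomes available:
hf_ds = Dataset.from_dict({"text": [torch_ds[i]["text"] for i in range(len(torch_ds))]})
hf_ds = hf_ds.map(lambda ex: {"text": ex["text"].upper()})
```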
1,094,813,242
3,539
Research wording for nc licenses
null
closed
https://github.com/huggingface/datasets/pull/3539
2022-01-05T23:01:38
2022-01-06T18:58:20
2022-01-06T18:58:19
{ "login": "meg-huggingface", "id": 90473723, "type": "User" }
[]
true
[]
1,094,756,755
3,538
Readme usage update
The recent commit throws a lot of errors in the automatic checks. It looks to me like those errors were already there (metadata issues) and are unrelated to what I've just changed, but it's worth another look to make sure.
closed
https://github.com/huggingface/datasets/pull/3538
2022-01-05T21:26:28
2022-01-05T23:34:25
2022-01-05T23:24:15
{ "login": "meg-huggingface", "id": 90473723, "type": "User" }
[]
true
[]
1,094,738,734
3,537
added PII statements and license links to data cards
Updates for the following datacards: multilingual_librispeech, openslr, speech_commands, superb, timit_asr, vctk
closed
https://github.com/huggingface/datasets/pull/3537
2022-01-05T20:59:21
2022-01-05T22:02:37
2022-01-05T22:02:37
{ "login": "mcmillanmajora", "id": 26722925, "type": "User" }
[]
true
[]
1,094,645,771
3,536
update `pretty_name` for all datasets
This PR updates `pretty_name` for all datasets. The previous PR #3498 did this for only the first 200 datasets.
closed
https://github.com/huggingface/datasets/pull/3536
2022-01-05T18:45:05
2022-07-10T14:36:54
2022-01-12T22:59:45
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
1,094,633,214
3,535
Add SVHN dataset
Add the SVHN dataset. Additional notes: * compared to the TFDS implementation, additionally exposes the "full numbers" config * adds streaming support for `os.path.splitext` and `scipy.io.loadmat` * adds `h5py` to the requirements list for the dummy data test
closed
https://github.com/huggingface/datasets/pull/3535
2022-01-05T18:29:09
2022-01-12T14:14:35
2022-01-12T14:14:35
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,094,352,449
3,534
Update wiki_dpr README.md
Some info about wiki_dpr was missing, as noted in https://github.com/huggingface/datasets/issues/3510; I added it and updated the tags and the examples. Closes #3510.
closed
https://github.com/huggingface/datasets/pull/3534
2022-01-05T13:29:44
2022-02-17T13:45:56
2022-01-05T14:16:51
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,094,156,147
3,533
Task search function on hub not working correctly
When I want to look at all datasets of the category `speech-processing`, *i.e.* https://huggingface.co/datasets?task_categories=task_categories:speech-processing&sort=downloads , the following dataset doesn't show up for some reason: - https://huggingface.co/datasets/speech_commands even though it's task t...
open
https://github.com/huggingface/datasets/issues/3533
2022-01-05T09:36:30
2022-05-12T14:45:57
null
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,094,035,066
3,532
Give clearer instructions to add the YAML tags
Fix #3531. CC: @julien-c @VictorSanh
closed
https://github.com/huggingface/datasets/pull/3532
2022-01-05T06:47:52
2022-01-17T15:54:37
2022-01-17T15:54:36
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,094,033,280
3,531
Give clearer instructions to add the YAML tags
## Describe the bug As reported by @julien-c, many community datasets contain the line `YAML tags:` at the top of the YAML section in the header of the README file. See e.g.: https://huggingface.co/datasets/bigscience/P3/commit/a03bea08cf4d58f268b469593069af6aeb15de32 Maybe we should give clearer instruction/hints...
closed
https://github.com/huggingface/datasets/issues/3531
2022-01-05T06:44:20
2022-01-17T15:54:36
2022-01-17T15:54:36
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,093,894,732
3,530
Update README.md
Removing reference to "Common Voice" in Personal and Sensitive Information section. Adding link to license. Correct license type in metadata.
closed
https://github.com/huggingface/datasets/pull/3530
2022-01-05T01:32:07
2022-01-05T12:50:51
2022-01-05T12:50:50
{ "login": "meg-huggingface", "id": 90473723, "type": "User" }
[]
true
[]
1,093,846,356
3,529
Update README.md
Updating licensing information & personal and sensitive information.
closed
https://github.com/huggingface/datasets/pull/3529
2022-01-04T23:52:47
2022-01-05T12:50:15
2022-01-05T12:50:14
{ "login": "meg-huggingface", "id": 90473723, "type": "User" }
[]
true
[]
1,093,844,616
3,528
Update README.md
Updating license with appropriate capitalization & a link. Updating Personal and Sensitive Information to address PII concern.
closed
https://github.com/huggingface/datasets/pull/3528
2022-01-04T23:48:11
2022-01-05T12:49:41
2022-01-05T12:49:40
{ "login": "meg-huggingface", "id": 90473723, "type": "User" }
[]
true
[]
1,093,840,707
3,527
Update README.md
Adding licensing information.
closed
https://github.com/huggingface/datasets/pull/3527
2022-01-04T23:39:41
2022-01-05T00:23:50
2022-01-05T00:23:50
{ "login": "meg-huggingface", "id": 90473723, "type": "User" }
[]
true
[]
1,093,833,446
3,526
Update license to bookcorpus dataset card
Not entirely sure, following the links here, but it seems the relevant license is at https://github.com/soskek/bookcorpus/blob/master/LICENSE
closed
https://github.com/huggingface/datasets/pull/3526
2022-01-04T23:25:23
2022-09-30T10:23:38
2022-09-30T10:21:20
{ "login": "meg-huggingface", "id": 90473723, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,093,831,268
3,525
Adding license information for Openbookcorpus
Not entirely sure, following the links here, but it seems the relevant license is at https://github.com/soskek/bookcorpus/blob/master/LICENSE
closed
https://github.com/huggingface/datasets/pull/3525
2022-01-04T23:20:36
2022-04-20T09:54:30
2022-04-20T09:48:10
{ "login": "meg-huggingface", "id": 90473723, "type": "User" }
[]
true
[]
1,093,826,723
3,524
Adding link to license.
null
closed
https://github.com/huggingface/datasets/pull/3524
2022-01-04T23:11:48
2022-01-05T12:31:38
2022-01-05T12:31:37
{ "login": "meg-huggingface", "id": 90473723, "type": "User" }
[]
true
[]
1,093,819,227
3,523
Added links to licensing and PII message in vctk dataset
null
closed
https://github.com/huggingface/datasets/pull/3523
2022-01-04T22:56:58
2022-01-06T19:33:50
2022-01-06T19:33:50
{ "login": "mcmillanmajora", "id": 26722925, "type": "User" }
[]
true
[]
1,093,807,586
3,522
wmt19 is broken (zh-en)
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("wmt19", 'zh-en') ``` ## Expected results The dataset should download. ## Actual results `ConnectionError: Couldn't reach ftp://cwmt-wm...
closed
https://github.com/huggingface/datasets/issues/3522
2022-01-04T22:33:45
2022-05-06T16:27:37
2022-05-06T16:27:37
{ "login": "AjayP13", "id": 5404177, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,093,797,947
3,521
Vivos license update
Updated the license information with the link to the license text
closed
https://github.com/huggingface/datasets/pull/3521
2022-01-04T22:17:47
2022-01-04T22:18:16
2022-01-04T22:18:16
{ "login": "mcmillanmajora", "id": 26722925, "type": "User" }
[]
true
[]
1,093,747,753
3,520
Audio datacard update - first pass
Filling out data card "Personal and Sensitive Information" for speech datasets to note PII concerns
closed
https://github.com/huggingface/datasets/pull/3520
2022-01-04T20:58:25
2022-01-05T12:30:21
2022-01-05T12:30:20
{ "login": "meg-huggingface", "id": 90473723, "type": "User" }
[]
true
[]
1,093,655,205
3,519
CC100: Using HTTPS for the data source URL fixes load_dataset()
Without this change the following script (with any lang parameter) consistently fails. After changing to the HTTPS URL, the script works as expected. ```python from datasets import load_dataset dataset = load_dataset("cc100", lang="en") ``` This is the error produced by the previous script: ```sh Using cus...
closed
https://github.com/huggingface/datasets/pull/3519
2022-01-04T18:45:54
2022-01-05T17:28:34
2022-01-05T17:28:34
{ "login": "aajanki", "id": 353043, "type": "User" }
[]
true
[]
1,093,063,455
3,518
Add PubMed Central Open Access dataset
## Adding a Dataset - **Name:** PubMed Central Open Access - **Description:** The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under license terms that allow reuse. - **Paper:** *link to the dataset paper if available* - **Data:** https://www.ncbi.nlm....
closed
https://github.com/huggingface/datasets/issues/3518
2022-01-04T06:54:35
2022-01-17T15:25:57
2022-01-17T15:25:57
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,092,726,651
3,517
Add CPPE-5 dataset
Adds the recently released CPPE-5 dataset.
closed
https://github.com/huggingface/datasets/pull/3517
2022-01-03T18:31:20
2022-01-19T02:23:37
2022-01-05T18:53:02
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,092,657,738
3,516
dataset `asset` - change to raw.githubusercontent.com URLs
Changed the URLs to the ones they were automatically redirecting to. Before, the download was failing.
closed
https://github.com/huggingface/datasets/pull/3516
2022-01-03T16:43:57
2022-01-03T17:39:02
2022-01-03T17:39:01
{ "login": "VictorSanh", "id": 16107619, "type": "User" }
[]
true
[]
1,092,624,695
3,515
`ExpectedMoreDownloadedFiles` for `evidence_infer_treatment`
## Describe the bug I am trying to load a dataset called `evidence_infer_treatment`. The first subset (`1.1`) works fine but the second returns an error (`2.0`). It downloads a file but crashes during the checksums. ## Steps to reproduce the bug ```python >>> from datasets import load_dataset >>> load_dataset("e...
closed
https://github.com/huggingface/datasets/issues/3515
2022-01-03T15:58:38
2022-02-14T13:21:43
2022-02-14T13:21:43
{ "login": "VictorSanh", "id": 16107619, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,092,606,383
3,514
Fix to_tf_dataset references in docs
Fix the `to_tf_dataset` references in the docs. The currently failing example of usage will be fixed by #3338.
closed
https://github.com/huggingface/datasets/pull/3514
2022-01-03T15:31:39
2022-01-05T18:52:48
2022-01-05T18:52:48
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,092,569,802
3,513
Add desc parameter to filter
Fix #3317
closed
https://github.com/huggingface/datasets/pull/3513
2022-01-03T14:44:18
2022-01-05T18:31:25
2022-01-05T18:31:25
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
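A one-line usage sketch for the new parameter (the toy dataset and predicate are placeholders):

```python
from datasets import Dataset

ds = Dataset.from_dict({"label": [0, 1, 1, 0]})
# desc sets the text shown on the progress bar, mirroring the same parameter
# on Dataset.map:
positives = ds.filter(lambda ex: ex["label"] == 1, desc="Keeping positive examples")
```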
1,092,359,973
3,512
No Data format found
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
closed
https://github.com/huggingface/datasets/issues/3512
2022-01-03T09:41:11
2022-01-17T13:26:05
2022-01-17T13:26:05
{ "login": "shazzad47", "id": 57741378, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,092,170,411
3,511
Dataset
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
closed
https://github.com/huggingface/datasets/issues/3511
2022-01-03T02:03:23
2022-01-03T08:41:26
2022-01-03T08:23:07
{ "login": "MIKURI0114", "id": 92849978, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,091,997,004
3,510
`wiki_dpr` details for Open Domain Question Answering tasks
Hey guys! Thanks for creating the `wiki_dpr` dataset! I am currently trying to use the dataset for context retrieval using DPR on NQ questions and need details about what each of the files and data instances mean, which version of the Wikipedia dump it uses, etc. Please respond at your earliest convenience regard...
closed
https://github.com/huggingface/datasets/issues/3510
2022-01-02T11:04:01
2022-02-17T13:46:20
2022-02-17T13:46:20
{ "login": "pk1130", "id": 40918514, "type": "User" }
[]
false
[]
1,091,214,808
3,507
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data
I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point of having also the JSON metadata (dataset_infos.json file)? On the other hand, the d...
closed
https://github.com/huggingface/datasets/issues/3507
2021-12-30T17:04:25
2022-11-04T15:31:38
2022-11-04T15:31:37
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "generic discussion", "color": "c5def5" } ]
false
[]
1,091,166,595
3,506
Allows DatasetDict.filter to have batching option
- Related to: #3244 - Fixes: #3503 We extend `.filter(... batched: bool)` support to `DatasetDict` (usage sketch below).
closed
https://github.com/huggingface/datasets/pull/3506
2021-12-30T15:22:22
2022-01-04T10:24:28
2022-01-04T10:24:27
{ "login": "thomasw21", "id": 24695242, "type": "User" }
[]
true
[]
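A usage sketch of the extended API (the dataset choice is arbitrary):

```python
from datasets import load_dataset

dsets = load_dataset("imdb")  # a DatasetDict
# With this change, batched passes through DatasetDict.filter to each split;
# the predicate then receives a batch (dict of lists) instead of one example:
short = dsets.filter(lambda batch: [len(t) < 500 for t in batch["text"]], batched=True)
```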
1,091,150,820
3,505
cast_column function not working with map function in streaming mode for Audio features
## Describe the bug I am trying to use the Audio class for loading audio features with a custom dataset. I am able to cast the 'audio' feature into the 'Audio' format with the cast_column function. On using the map function, I do not get the casted 'Audio' feature but only the path of the audio file. I am getting features of 'audio' of s...
closed
https://github.com/huggingface/datasets/issues/3505
2021-12-30T14:52:01
2022-01-18T19:54:07
2022-01-18T19:54:07
{ "login": "ashu5644", "id": 8268102, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,090,682,230
3,504
Unable to download PUBMED_title_abstracts_2019_baseline.jsonl.zst
## Describe the bug I am unable to download the PubMed dataset from the link provided in the [Hugging Face Course (Chapter 5 Section 4)](https://huggingface.co/course/chapter5/4?fw=pt). https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst ## Steps to reproduce ...
closed
https://github.com/huggingface/datasets/issues/3504
2021-12-29T18:23:20
2024-05-20T09:44:59
2022-02-17T15:04:25
{ "login": "ToddMorrill", "id": 12600692, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,090,472,735
3,503
Batched in filter throws error
I hope this is really a bug; I could not find it among the open issues. ## Describe the bug Using `batched=False` in `Dataset.filter` throws the error ```python TypeError: filter() got an unexpected keyword argument 'batched' ``` but in the docs it is listed as an argument. ## Steps to reproduce the bug ```python ...
closed
https://github.com/huggingface/datasets/issues/3503
2021-12-29T12:01:04
2022-01-04T10:24:27
2022-01-04T10:24:27
{ "login": "gpucce", "id": 32967787, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,090,438,558
3,502
Add QuALITY
Fixes #3441.
closed
https://github.com/huggingface/datasets/pull/3502
2021-12-29T10:58:46
2022-10-03T09:36:14
2022-10-03T09:36:14
{ "login": "jaketae", "id": 25360440, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,090,413,758
3,501
Update pib dataset card
Related to #3496
closed
https://github.com/huggingface/datasets/pull/3501
2021-12-29T10:14:40
2021-12-29T11:13:21
2021-12-29T11:13:21
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,090,406,133
3,500
Docs: Add VCTK dataset description
This PR is a very minor followup to #1837, with only docs changes (single comment string).
closed
https://github.com/huggingface/datasets/pull/3500
2021-12-29T10:02:05
2022-01-04T10:46:02
2022-01-04T10:25:09
{ "login": "jaketae", "id": 25360440, "type": "User" }
[]
true
[]
1,090,132,618
3,499
Adjusting chunk size for streaming datasets
**Is your feature request related to a problem? Please describe.** I want to use mc4 which I cannot save locally, so I stream it. However, I want to process the entire dataset and filter some documents from it. With the current chunk size of around 1000 documents (right?) I hit a performance bottleneck because of the ...
closed
https://github.com/huggingface/datasets/issues/3499
2021-12-28T21:17:53
2022-05-06T16:29:05
2022-05-06T16:29:05
{ "login": "JoelNiklaus", "id": 3775944, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,090,096,332
3,498
update `pretty_name` for first 200 datasets
I made a script some time back to fetch `pretty_names` from the `papers_with_code` dataset, along with some other rules in case a dataset wasn't available on `papers_with_code`. I'm updating them in the `README` of `datasets`. Took only the first 200 datasets into consideration, and after some eyeballing, most of them were loo...
closed
https://github.com/huggingface/datasets/pull/3498
2021-12-28T19:50:07
2022-07-10T14:36:53
2022-01-05T16:38:21
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
1,090,050,148
3,497
Changing sampling rate in audio dataset and subsequently mapping with `num_proc > 1` leads to weird bug
Running: ```python from datasets import load_dataset, DatasetDict import datasets from transformers import AutoFeatureExtractor raw_datasets = DatasetDict() raw_datasets["train"] = load_dataset("common_voice", "ab", split="train") feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2ve...
closed
https://github.com/huggingface/datasets/issues/3497
2021-12-28T18:03:49
2022-01-21T13:22:27
2022-01-21T13:22:27
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,089,989,155
3,496
Update version of pib dataset and make it streamable
This PR: - Updates version of pib dataset: from 0.0.0 to 1.3.0 - Makes the dataset streamable Fix #3491. CC: @severo
closed
https://github.com/huggingface/datasets/pull/3496
2021-12-28T16:01:55
2022-01-03T14:42:28
2021-12-29T08:42:57
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,089,983,632
3,495
Add VoxLingua107
## Adding a Dataset - **Name:** VoxLingua107 - **Description:** VoxLingua107 is a speech dataset for training spoken language identification models. - **Paper:** https://arxiv.org/abs/2011.12998 - **Data:** http://bark.phon.ioc.ee/voxlingua107/ - **Motivation:** 107 languages, totaling 6628 hours for the train sp...
open
https://github.com/huggingface/datasets/issues/3495
2021-12-28T15:51:43
2021-12-28T15:51:43
null
{ "login": "jaketae", "id": 25360440, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,089,983,103
3,494
Clone full repo to detect new tags when mirroring datasets on the Hub
The new releases of `datasets` were not detected because the shallow clone in the CI wasn't getting the git tags. By cloning the full repository we can properly detect a new release, and tag all the dataset repositories accordingly cc @SBrandeis
closed
https://github.com/huggingface/datasets/pull/3494
2021-12-28T15:50:47
2021-12-28T16:07:21
2021-12-28T16:07:20
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,089,967,286
3,493
Fix VCTK encoding
utf-8 encoding was missing in the VCTK dataset builder added in #3351
closed
https://github.com/huggingface/datasets/pull/3493
2021-12-28T15:23:36
2021-12-28T15:48:18
2021-12-28T15:48:17
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,089,952,943
3,492
Add `gzip` for `to_json`
(Partially) closes #3480. I have added `gzip` compression for `to_json`. I realised we can run into this compression problem with `to_csv` as well. `IOHandler` can be used for `to_csv` too. Please let me know if any changes are required.
closed
https://github.com/huggingface/datasets/pull/3492
2021-12-28T15:01:11
2022-07-10T14:36:52
2022-01-05T13:03:36
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
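A usage sketch, assuming the `compression` keyword is now honored end-to-end (the output file name is a placeholder):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})
# Previously the kwarg was silently ignored (see #3480); with gzip support
# in to_json, the output file is actually compressed:
ds.to_json("data.jsonl.gz", compression="gzip")
```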
1,089,918,018
3,491
Update version of pib dataset
On the Hub we have v0, while there exists v1.3. Related to bigscience-workshop/data_tooling#130
closed
https://github.com/huggingface/datasets/issues/3491
2021-12-28T14:03:58
2021-12-29T08:42:57
2021-12-29T08:42:57
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,089,730,181
3,490
Does datasets support load text from HDFS?
The raw text data is stored on HDFS because the dataset is too large to store on my development machine, so I wonder: does datasets support reading data from HDFS?
open
https://github.com/huggingface/datasets/issues/3490
2021-12-28T08:56:02
2022-02-14T14:00:51
null
{ "login": "dancingpipi", "id": 20511825, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,089,401,926
3,489
Avoid unnecessary list creations
Like in `join([... for s in ...])`. Also changed other things that I saw: * Use a `with` statement for the many `open` calls that were missing one, so the files don't remain open. * Remove unused variables. * Convert many HTTP links into HTTPS (verified). * Remove the unnecessary "r" mode arg in `open` (double-checked it was actual...
open
https://github.com/huggingface/datasets/pull/3489
2021-12-27T18:20:56
2022-07-06T15:19:49
null
{ "login": "bryant1410", "id": 3905501, "type": "User" }
[]
true
[]
1,089,345,653
3,488
URL query parameters are set as path in the compression hop for fsspec
## Describe the bug There is an issue with `StreamingDownloadManager._extract`. I don't know how the test `test_streaming_gg_drive_gzipped` passes: For ```python TEST_GG_DRIVE_GZIPPED_URL = "https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz" urlpath = StreamingDownloadManager()....
open
https://github.com/huggingface/datasets/issues/3488
2021-12-27T16:29:00
2022-01-05T15:15:25
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,089,209,031
3,487
Update ADD_NEW_DATASET.md
Fixed the `make style` prompt for Windows Terminal
closed
https://github.com/huggingface/datasets/pull/3487
2021-12-27T12:24:51
2021-12-27T15:00:45
2021-12-27T15:00:45
{ "login": "apergo-ai", "id": 68908804, "type": "User" }
[]
true
[]
1,089,171,551
3,486
Fix weird spacing in ManualDownloadError message
`textwrap.dedent` works based on the whitespace at the beginning of each line. Before this change, there wasn't any leading space, so `dedent` had nothing to strip.
closed
https://github.com/huggingface/datasets/pull/3486
2021-12-27T11:20:36
2021-12-28T09:03:26
2021-12-28T09:00:28
{ "login": "bryant1410", "id": 3905501, "type": "User" }
[]
true
[]
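A small illustration of the `textwrap.dedent` behavior this PR relies on:

```python
import textwrap

# No common leading whitespace -> dedent is a no-op, so the second line
# keeps its stray indentation (the weird spacing this PR fixes):
before = textwrap.dedent("line one\n    line two\n")

# Every line shares a 4-space prefix -> dedent strips it from all lines:
after = textwrap.dedent("    line one\n    line two\n")
```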
1,089,027,581
3,485
skip columns which cannot set to specific format when set_format
**Is your feature request related to a problem? Please describe.** When using `dataset.set_format("torch")`, I must make sure every column in the dataset can be converted to `torch`; however, sometimes I want to keep some string columns. **Describe the solution you'd like** Skip columns which cannot be set to the specific forma...
closed
https://github.com/huggingface/datasets/issues/3485
2021-12-27T07:19:55
2021-12-27T09:07:07
2021-12-27T09:07:07
{ "login": "tshu-w", "id": 13161779, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
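A sketch of a possible workaround with the existing API (explicit `columns` plus `output_all_columns`), as opposed to the automatic skipping requested here:

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [[1, 2], [3, 4]], "label": ["a", "b"]})
# Format only the tensor-convertible columns and pass the rest through
# unformatted:
ds.set_format("torch", columns=["x"], output_all_columns=True)
print(ds[0])  # 'x' comes back as a torch tensor, 'label' stays a plain string
```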
1,088,910,402
3,484
make shape verification to use ArrayXD instead of nested lists for map
As describe in https://github.com/huggingface/datasets/issues/2005#issuecomment-793716753 and mentioned by @mariosasko in [image feature example](https://colab.research.google.com/drive/1mIrTnqTVkWLJWoBzT1ABSe-LFelIep1c#scrollTo=ow3XHDvf2I0B&line=1&uniqifier=1), IMO make shape verifcaiton to use ArrayXD instead of nest...
open
https://github.com/huggingface/datasets/issues/3484
2021-12-27T02:16:02
2022-01-05T13:54:03
null
{ "login": "tshu-w", "id": 13161779, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,088,784,157
3,483
Remove unused phony rule from Makefile
null
closed
https://github.com/huggingface/datasets/pull/3483
2021-12-26T14:37:13
2022-01-05T19:44:56
2022-01-05T16:34:12
{ "login": "bryant1410", "id": 3905501, "type": "User" }
[]
true
[]
1,088,317,921
3,482
Fix duplicate keys in NewsQA
* Fix duplicate keys in NewsQA when loading from CSV files. * Fix s/narqa/newsqa/ in the download-manually error message. * Make the download-manually error message display nicely when printed; otherwise, it is hard to read due to spacing issues. * Fix the format of the license text. * Reformat the code to make it simple...
closed
https://github.com/huggingface/datasets/pull/3482
2021-12-24T11:01:59
2022-09-23T12:57:10
2022-09-23T12:57:10
{ "login": "bryant1410", "id": 3905501, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,088,308,343
3,481
Fix overriding of filesystem info
Previously, `BaseCompressedFileFileSystem.info` was overridden and transformed from a function into a dict. This generated a bug for filesystem methods that use `self.info()`, e.g. `fs.isfile()`. This PR: - Adds tests for `fs.isfile` (which use `fs.info`). - Fixes the custom `BaseCompressedFileFileSystem.info` by rem...
closed
https://github.com/huggingface/datasets/pull/3481
2021-12-24T10:42:31
2021-12-24T11:08:59
2021-12-24T11:08:59
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,088,267,110
3,480
the compression format requested when saving a dataset in json format is not respected
## Describe the bug In the documentation of the `to_json` method, it is stated in the parameters that > **to_json_kwargs** – Parameters to pass to pandas's pandas.DataFrame.to_json. However, when we pass for example `compression="gzip"`, the saved file is not compressed. Would you also have expected compression t...
closed
https://github.com/huggingface/datasets/issues/3480
2021-12-24T09:23:51
2022-01-05T13:03:35
2022-01-05T13:03:35
{ "login": "SaulLu", "id": 55560583, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,088,232,880
3,479
Dataset preview is not available (I think for all Hugging Face datasets)
## Dataset viewer issue for '*french_book_reviews*' **Link:** https://huggingface.co/datasets/Abirate/french_book_reviews **short description of the issue** For my dataset, the dataset preview is no longer functional (it used to work: the dataset had been added the day before and it was fine...) And, after lo...
closed
https://github.com/huggingface/datasets/issues/3479
2021-12-24T08:18:48
2021-12-24T14:27:46
2021-12-24T14:27:46
{ "login": "Abirate", "id": 66887439, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,087,860,180
3,478
Extend support for streaming datasets that use os.walk
This PR extends the support in streaming mode for datasets that use `os.walk`, by patching that function. This PR adds support for streaming mode to datasets: 1. autshumato 1. code_x_glue_cd_code_to_text 1. code_x_glue_tc_nl_code_search_adv 1. nchlt CC: @severo
closed
https://github.com/huggingface/datasets/pull/3478
2021-12-23T16:42:55
2021-12-24T10:50:20
2021-12-24T10:50:19
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,087,850,253
3,477
Use `iter_files` instead of `str(Path(...)` in image dataset
Use `iter_files` in the `beans` and the `cats_vs_dogs` dataset scripts as suggested by @albertvillanova. Additional changes: * Fix `iter_files` in `MockDownloadManager` (see this [CI error](https://app.circleci.com/pipelines/github/huggingface/datasets/9247/workflows/2657ff8a-b531-4fd9-a9fc-6541a72e8d83/jobs/57028)...
closed
https://github.com/huggingface/datasets/pull/3477
2021-12-23T16:26:55
2021-12-28T15:15:02
2021-12-28T15:15:02
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,087,622,872
3,476
Extend support for streaming datasets that use ET.parse
This PR extends the support in streaming mode for datasets that use `ET.parse`, by patching the function. This PR adds support for streaming mode to datasets: 1. ami 1. assin 1. assin2 1. counter 1. enriched_web_nlg 1. europarl_bilingual 1. hyperpartisan_news_detection 1. polsum 1. qa4mre 1. quail 1. ted_...
closed
https://github.com/huggingface/datasets/pull/3476
2021-12-23T11:18:46
2021-12-23T15:34:30
2021-12-23T15:34:30
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]