Column schema of the dump (ranges and statistics are computed over the full dataset):

| Column | Type | Range / values |
| --- | --- | --- |
| id | int64 | 599M – 3.26B |
| number | int64 | 1 – 7.7k |
| title | string | length 1 – 290 |
| body | string | length 0 – 228k |
| state | string (2 classes) | open / closed |
| html_url | string | length 46 – 51 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 – 2025-07-23 08:04:53 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 – 2025-07-23 18:53:44 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 – 2025-07-23 16:44:42 |
| user | dict | login, id, type |
| labels | list | length 0 – 4 |
| is_pull_request | bool | 2 classes |
| comments | list | length 0 – 0 (empty in all rows shown) |
### #3475 (issue, open): The rotten_tomatoes dataset of movie reviews contains some reviews in Spanish
id 1,087,352,041 · by puzzler10 (user 17426779) · labels: bug · 0 comments
created 2021-12-23T03:56:43 · updated 2021-12-24T00:23:03 · closed — · https://github.com/huggingface/datasets/issues/3475

> ## Describe the bug See title. I don't think this is intentional and they probably should be removed. If they stay the dataset description should be at least updated to make it clear to the user. ## Steps to reproduce the bug Go to the [dataset viewer](https://huggingface.co/datasets/viewer/?dataset=rotten_tomato...
### #3474 (PR, closed): Decode images when iterating
id 1,086,945,384 · by lhoestq (user 42851186) · labels: none · 0 comments
created 2021-12-22T15:34:49 · updated 2023-09-24T09:54:04 · closed 2021-12-28T16:08:10 · https://github.com/huggingface/datasets/pull/3474

> If I iterate over a vision dataset, the images are not decoded, and the dictionary with the bytes is returned. This PR enables image decoding in `Dataset.__iter__` Close https://github.com/huggingface/datasets/issues/3473
### #3473 (issue, closed): Iterating over a vision dataset doesn't decode the images
id 1,086,937,610 · by lhoestq (user 42851186) · labels: bug, vision · 0 comments
created 2021-12-22T15:26:32 · updated 2021-12-27T14:13:21 · closed 2021-12-23T15:21:57 · https://github.com/huggingface/datasets/issues/3473

> ## Describe the bug If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned. ## Steps to reproduce the bug ```python from datasets import load_dataset import PIL mnist = load_dataset("mnist", split="train") first_image = mnist[0]["image"...
### #3472 (PR, closed): Fix `str(Path(...))` conversion in streaming on Linux
id 1,086,908,508 · by mariosasko (user 47462742) · labels: none · 0 comments
created 2021-12-22T15:06:03 · updated 2021-12-22T16:52:53 · closed 2021-12-22T16:52:52 · https://github.com/huggingface/datasets/pull/3472

> Fix `str(Path(...))` conversion in streaming on Linux. This should fix the streaming of the `beans` and `cats_vs_dogs` datasets.
### #3471 (PR, closed): Fix Tashkeela dataset to yield stripped text
id 1,086,588,074 · by albertvillanova (user 8515462) · labels: none · 0 comments
created 2021-12-22T08:41:30 · updated 2021-12-22T10:12:08 · closed 2021-12-22T10:12:07 · https://github.com/huggingface/datasets/pull/3471

> This PR: - Yields stripped text - Fix path for Windows - Adds license - Adds more info in dataset card Close bigscience-workshop/data_tooling#279
### #3470 (PR, closed): Fix rendering of docs
id 1,086,049,888 · by albertvillanova (user 8515462) · labels: none · 0 comments
created 2021-12-21T17:17:01 · updated 2021-12-22T09:23:47 · closed 2021-12-22T09:23:47 · https://github.com/huggingface/datasets/pull/3470

> Minor fix in docs. Currently, `ClassLabel` docstring rendering is not right.
### #3469 (PR, closed): Fix METEOR missing NLTK's omw-1.4
id 1,085,882,664 · by lhoestq (user 42851186) · labels: none · 0 comments
created 2021-12-21T14:19:11 · updated 2021-12-21T14:52:28 · closed 2021-12-21T14:49:28 · https://github.com/huggingface/datasets/pull/3469

> NLTK 3.6.6 now requires `omw-1.4` to be downloaded for METEOR to work. This should fix the CI on master
### #3468 (PR, closed): Add COCO dataset
id 1,085,871,301 · by mariosasko (user 47462742) · labels: dataset contribution · 0 comments
created 2021-12-21T14:07:50 · updated 2023-09-24T09:33:31 · closed 2022-10-03T09:36:08 · https://github.com/huggingface/datasets/pull/3468

> This PR adds the MS COCO dataset. Compared to the [TFDS](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/object_detection/coco.py) script, this implementation adds 8 additional configs to cover the tasks other than object detection. Some notes: * the data exposed by TFDS is contained in the `...
### #3467 (PR, closed): Push dataset infos.json to Hub
id 1,085,870,665 · by lhoestq (user 42851186) · labels: none · 0 comments
created 2021-12-21T14:07:13 · updated 2021-12-21T17:00:10 · closed 2021-12-21T17:00:09 · https://github.com/huggingface/datasets/pull/3467

> When doing `push_to_hub`, the feature types are lost (see issue https://github.com/huggingface/datasets/issues/3394). This PR fixes this by also pushing a `dataset_infos.json` file to the Hub, that stores the feature types. Other minor changes: - renamed the `___` separator to `--`, since `--` is now disallowed in...
### #3466 (PR, closed): Add CRASS dataset
id 1,085,722,837 · by apergo-ai (user 68908804) · labels: dataset contribution · 0 comments
created 2021-12-21T11:17:22 · updated 2022-10-03T09:37:06 · closed 2022-10-03T09:37:06 · https://github.com/huggingface/datasets/pull/3466

> Added crass dataset
### #3465 (issue, closed): Unable to load 'cnn_dailymail' dataset
id 1,085,400,432 · by talha1503 (user 42352729) · labels: bug, duplicate, dataset bug · 0 comments
created 2021-12-21T03:32:21 · updated 2024-06-12T14:41:17 · closed 2022-02-17T14:13:57 · https://github.com/huggingface/datasets/issues/3465

> ## Describe the bug I wanted to load cnn_dailymail dataset from huggingface datasets on Google Colab, but I am getting an error while loading it. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0', ignore_verifications = True) ``` ## Expe...
### #3464 (issue, open): struct.error: 'i' format requires -2147483648 <= number <= 2147483647
id 1,085,399,097 · by koukoulala (user 30341159) · labels: bug · 0 comments
created 2021-12-21T03:29:01 · updated 2022-11-21T19:55:11 · closed — · https://github.com/huggingface/datasets/issues/3464

> ## Describe the bug A clear and concise description of what the bug is. using latest datasets=datasets-1.16.1-py3-none-any.whl process my own multilingual dataset by following codes, and the number of rows in all dataset is 306000, the max_length of each sentence is 256: ![image](https://user-images.githubusercon...
### #3463 (PR, closed): Update swahili_news dataset
id 1,085,078,795 · by albertvillanova (user 8515462) · labels: none · 0 comments
created 2021-12-20T18:20:20 · updated 2021-12-21T06:24:03 · closed 2021-12-21T06:24:02 · https://github.com/huggingface/datasets/pull/3463

> Update dataset with latest verion data files. Fix #3462. Close bigscience-workshop/data_tooling#107
### #3462 (issue, closed): Update swahili_news dataset
id 1,085,049,661 · by albertvillanova (user 8515462) · labels: dataset request · 0 comments
created 2021-12-20T17:44:01 · updated 2021-12-21T06:24:02 · closed 2021-12-21T06:24:01 · https://github.com/huggingface/datasets/issues/3462

> Please note also: the HuggingFace version at https://huggingface.co/datasets/swahili_news is outdated. An updated version, with deduplicated text and official splits, can be found at https://zenodo.org/record/5514203. ## Adding a Dataset - **Name:** swahili_news Instructions to add a new dataset can be found [he...
### #3461 (PR, closed): Fix links in metrics description
id 1,085,007,346 · by albertvillanova (user 8515462) · labels: none · 0 comments
created 2021-12-20T16:56:19 · updated 2021-12-20T17:14:52 · closed 2021-12-20T17:14:51 · https://github.com/huggingface/datasets/pull/3461

> Remove Markdown syntax for links in metrics description, as it is not properly rendered. Related to #3437.
### #3460 (PR, closed): Don't encode lists as strings when using `Value("string")`
id 1,085,002,469 · by lhoestq (user 42851186) · labels: none · 0 comments
created 2021-12-20T16:50:49 · updated 2023-09-25T10:28:30 · closed 2023-09-25T09:20:28 · https://github.com/huggingface/datasets/pull/3460

> Following https://github.com/huggingface/datasets/pull/3456#event-5792250497 it looks like `datasets` can silently convert lists to strings using `str()`, instead of raising an error. This PR fixes this and should fix the issue with WER showing low values if the input format is not right.
### #3459 (issue, closed): dataset.filter overwriting previously set dataset._indices values, resulting in the wrong elements being selected.
id 1,084,969,672 · by mmajurski (user 9354454) · labels: bug · 0 comments
created 2021-12-20T16:16:49 · updated 2021-12-20T16:34:57 · closed 2021-12-20T16:34:57 · https://github.com/huggingface/datasets/issues/3459

> ## Describe the bug When using dataset.select to select a subset of a dataset, dataset._indices are set to indicate which elements are now considered in the dataset. The same thing happens when you shuffle the dataset; dataset._indices are set to indicate what the new order of the data is. However, if you then use a...
### #3458 (PR, closed): Fix duplicated tag in wikicorpus dataset card
id 1,084,926,025 · by lhoestq (user 42851186) · labels: none · 0 comments
created 2021-12-20T15:34:16 · updated 2021-12-20T16:03:25 · closed 2021-12-20T16:03:24 · https://github.com/huggingface/datasets/pull/3458

> (no description)
### #3457 (issue, open): Add CMU Graphics Lab Motion Capture dataset
id 1,084,862,121 · by osanseviero (user 7246357) · labels: dataset request, vision · 0 comments
created 2021-12-20T14:34:39 · updated 2022-03-16T16:53:09 · closed — · https://github.com/huggingface/datasets/issues/3457

> ## Adding a Dataset - **Name:** CMU Graphics Lab Motion Capture database - **Description:** The database contains free motions which you can download and use. - **Data:** http://mocap.cs.cmu.edu/ - **Motivation:** Nice motion capture dataset Instructions to add a new dataset can be found [here](https://github.c...
### #3456 (PR, closed): [WER] Better error message for wer
id 1,084,687,973 · by patrickvonplaten (user 23423619) · labels: none · 0 comments
created 2021-12-20T11:38:40 · updated 2021-12-20T16:53:37 · closed 2021-12-20T16:53:36 · https://github.com/huggingface/datasets/pull/3456

> Currently we have the following problem when using the WER. When the input format to the WER metric is wrong, instead of throwing an error message a word-error-rate is computed which is incorrect. E.g. when doing the following: ```python from datasets import load_metric wer = load_metric("wer") target_str ...
### #3455 (issue, closed): Easier information editing
id 1,084,599,650 · by borgr (user 6416600) · labels: enhancement, generic discussion · 0 comments
created 2021-12-20T10:10:43 · updated 2023-07-25T15:36:14 · closed 2023-07-25T15:36:14 · https://github.com/huggingface/datasets/issues/3455

> **Is your feature request related to a problem? Please describe.** It requires a lot of effort to improve a datasheet. **Describe the solution you'd like** UI or at least a link to the place where the code that needs to be edited is (and an easy way to edit this code directly from the site, without cloning, branc...
### #3454 (PR, closed): Fix iter_archive generator
id 1,084,519,107 · by albertvillanova (user 8515462) · labels: none · 0 comments
created 2021-12-20T08:50:15 · updated 2021-12-20T10:05:00 · closed 2021-12-20T10:04:59 · https://github.com/huggingface/datasets/pull/3454

> This PR: - Adds tests to DownloadManager and StreamingDownloadManager `iter_archive` for both path and file inputs - Fixes bugs in `iter_archive` introduced in: - #3443 Fix #3453.
### #3453 (issue, closed): ValueError while iter_archive
id 1,084,515,911 · by albertvillanova (user 8515462) · labels: bug · 0 comments
created 2021-12-20T08:46:18 · updated 2021-12-20T10:04:59 · closed 2021-12-20T10:04:59 · https://github.com/huggingface/datasets/issues/3453

> ## Describe the bug After the merge of: - #3443 the method `iter_archive` throws a ValueError: ``` ValueError: read of closed file ``` ## Steps to reproduce the bug ```python for path, file in dl_manager.iter_archive(archive_path): pass ```
### #3452 (issue, closed): why the stratify option is omitted from test_train_split function?
id 1,083,803,178 · by j-sieger (user 9985334) · labels: enhancement, good second issue · 0 comments
created 2021-12-18T10:37:47 · updated 2022-05-25T20:43:51 · closed 2022-05-25T20:43:51 · https://github.com/huggingface/datasets/issues/3452

> why the stratify option is omitted from test_train_split function? is there any other way implement the stratify option while splitting the dataset? as it is important point to be considered while splitting the dataset.
### #3451 (PR, closed): [Staging] Update dataset repos automatically on the Hub
id 1,083,459,137 · by lhoestq (user 42851186) · labels: none · 0 comments
created 2021-12-17T17:12:11 · updated 2021-12-21T10:25:46 · closed 2021-12-20T14:09:51 · https://github.com/huggingface/datasets/pull/3451

> Let's have a script that updates the dataset repositories on staging for now. This way we can make sure it works fine before going in prod. Related to https://github.com/huggingface/datasets/issues/3341 The script runs on each commit on `master`. It checks the datasets that were changed, and it pushes the changes...
### #3450 (issue, closed): Unexpected behavior doing Split + Filter
id 1,083,450,158 · by jbrachat (user 26432605) · labels: bug · 0 comments
created 2021-12-17T17:00:39 · updated 2023-07-25T15:38:47 · closed 2023-07-25T15:38:47 · https://github.com/huggingface/datasets/issues/3450

> ## Describe the bug I observed unexpected behavior when applying 'train_test_split' followed by 'filter' on dataset. Elements of the training dataset eventually end up in the test dataset (after applying the 'filter') ## Steps to reproduce the bug ``` from datasets import Dataset import pandas as pd dic = {'x'...
### #3449 (issue, closed): Add `__add__()`, `__iadd__()` and similar to `Dataset` class
id 1,083,373,018 · by sgraaf (user 8904453) · labels: enhancement, generic discussion · 0 comments
created 2021-12-17T15:29:11 · updated 2024-02-29T16:47:56 · closed 2023-07-25T15:33:56 · https://github.com/huggingface/datasets/issues/3449

> **Is your feature request related to a problem? Please describe.** No. **Describe the solution you'd like** I would like to be able to concatenate datasets as follows: ```python >>> dataset["train"] += dataset["validation"] ``` ... instead of using `concatenate_datasets()`: ```python >>> raw_datasets["trai...
### #3448 (issue, closed): JSONDecodeError with HuggingFace dataset viewer
id 1,083,231,080 · by kathrynchapman (user 57716109) · labels: dataset-viewer · 0 comments
created 2021-12-17T12:52:41 · updated 2022-02-24T09:10:26 · closed 2022-02-24T09:10:26 · https://github.com/huggingface/datasets/issues/3448

> ## Dataset viewer issue for 'pubmed_neg' **Link:** https://huggingface.co/datasets/IGESML/pubmed_neg I am getting the error: Status code: 400 Exception: JSONDecodeError Message: Expecting property name enclosed in double quotes: line 61 column 2 (char 1202) I have checked all files - I am not u...
### #3447 (issue, closed): HF_DATASETS_OFFLINE=1 didn't stop datasets.builder from downloading
id 1,082,539,790 · by dunalduck0 (user 51274745) · labels: bug · 0 comments
created 2021-12-16T18:51:13 · updated 2022-02-17T14:16:27 · closed 2022-02-17T14:16:27 · https://github.com/huggingface/datasets/issues/3447

> ## Describe the bug According to https://huggingface.co/docs/datasets/loading_datasets.html#loading-a-dataset-builder, setting HF_DATASETS_OFFLINE to 1 should make datasets to "run in full offline mode". It didn't work for me. At the very beginning, datasets still tried to download "custom data configuration" for JSON...
### #3446 (PR, closed): Remove redundant local path information in audio/image datasets
id 1,082,414,229 · by mariosasko (user 47462742) · labels: dataset contribution · 0 comments
created 2021-12-16T16:35:15 · updated 2023-09-24T10:09:30 · closed 2023-09-24T10:09:27 · https://github.com/huggingface/datasets/pull/3446

> Remove the redundant path information in the audio/image dataset as discussed in https://github.com/huggingface/datasets/pull/3430#issuecomment-994734828 TODOs: * [ ] merge https://github.com/huggingface/datasets/pull/3430 * [ ] merge https://github.com/huggingface/datasets/pull/3364 * [ ] re-generate the info fi...
### #3445 (issue, closed): question
id 1,082,370,968 · by BAKAYOKO0232 (user 38075175) · labels: dataset-viewer · 0 comments
created 2021-12-16T15:57:00 · updated 2022-01-03T10:09:00 · closed 2022-01-03T10:09:00 · https://github.com/huggingface/datasets/issues/3445

> ## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
### #3444 (issue, open): Align the Dataset and IterableDataset processing API
id 1,082,078,961 · by lhoestq (user 42851186) · labels: enhancement, generic discussion · 0 comments
created 2021-12-16T11:26:11 · updated 2025-01-31T11:07:07 · closed — · https://github.com/huggingface/datasets/issues/3444

> ## Intro items marked like <s>this</s> are done already :) Currently the two classes have two distinct API for processing: ### The `.map()` method Both have those parameters in common: function, batched, batch_size - IterableDataset is missing those parameters: <s>with_indices</s>, with_rank, <s>input_columns</s>,...
### #3443 (PR, closed): Extend iter_archive to support file object input
id 1,082,052,833 · by albertvillanova (user 8515462) · labels: none · 0 comments
created 2021-12-16T10:59:14 · updated 2021-12-17T17:53:03 · closed 2021-12-17T17:53:02 · https://github.com/huggingface/datasets/pull/3443

> This PR adds support to passing a file object to `[Streaming]DownloadManager.iter_archive`. With this feature, we can iterate over a tar file inside another tar file.
### #3442 (PR, closed): Extend text to support yielding lines, paragraphs or documents
id 1,081,862,747 · by albertvillanova (user 8515462) · labels: none · 0 comments
created 2021-12-16T07:33:17 · updated 2021-12-20T16:59:10 · closed 2021-12-20T16:39:18 · https://github.com/huggingface/datasets/pull/3442

> Add `config.row` option to `text` module to allow yielding lines (default, current case), paragraphs or documents. Feel free to comment on the name of the config parameter `row`: - Currently, the docs state datasets are made of rows and columns - Other names I considered: `example`, `item`
### #3441 (issue, open): Add QuALITY dataset
id 1,081,571,784 · by lewtun (user 26859204) · labels: dataset request · 0 comments
created 2021-12-15T22:26:19 · updated 2021-12-28T15:17:05 · closed — · https://github.com/huggingface/datasets/issues/3441

> ## Adding a Dataset - **Name:** QuALITY - **Description:** A challenging question answering with very long contexts (Twitter [thread](https://twitter.com/sleepinyourhat/status/1471225421794529281?s=20)) - **Paper:** No ArXiv link yet, but draft is [here](https://github.com/nyu-mll/quality/blob/main/quality_preprint....
### #3440 (issue, closed): datasets keeps reading from cached files, although I disabled it
id 1,081,528,426 · by dorost1234 (user 79165106) · labels: bug · 0 comments
created 2021-12-15T21:26:22 · updated 2022-02-24T09:12:22 · closed 2022-02-24T09:12:22 · https://github.com/huggingface/datasets/issues/3440

> ## Describe the bug Hi, I am trying to avoid dataset library using cached files, I get the following bug when this tried to read the cached files. I tried to do the followings: ``` from datasets import set_caching_enabled set_caching_enabled(False) ``` also force redownlaod: ``` download_mode='force_redownloa...
### #3439 (PR, closed): Add `cast_column` to `IterableDataset`
id 1,081,389,723 · by mariosasko (user 47462742) · labels: none · 0 comments
created 2021-12-15T19:00:45 · updated 2021-12-16T15:55:20 · closed 2021-12-16T15:55:19 · https://github.com/huggingface/datasets/pull/3439

> Closes #3369. cc: @patrickvonplaten
### #3438 (PR, closed): Update supported versions of Python in setup.py
id 1,081,302,203 · by mariosasko (user 47462742) · labels: none · 0 comments
created 2021-12-15T17:30:12 · updated 2021-12-20T14:22:13 · closed 2021-12-20T14:22:12 · https://github.com/huggingface/datasets/pull/3438

> Update the list of supported versions of Python in `setup.py` to keep the PyPI project description updated.
### #3437 (PR, closed): Update BLEURT hyperlink
id 1,081,247,889 · by lewtun (user 26859204) · labels: none · 0 comments
created 2021-12-15T16:34:47 · updated 2021-12-17T13:28:26 · closed 2021-12-17T13:28:25 · https://github.com/huggingface/datasets/pull/3437

> The description of BLEURT on the hf.co website has a strange use of URL hyperlinking. This PR attempts to fix this, although I am not 100% sure Markdown syntax is allowed on the frontend or not. ![Screen Shot 2021-12-15 at 17 31 27](https://user-images.githubusercontent.com/26859204/146226432-c83cbdaf-f57d-4999-b53c...
### #3436 (PR, closed): Add the OneStopQa dataset
id 1,081,068,139 · by OmerShubi (user 28459495) · labels: none · 0 comments
created 2021-12-15T13:53:31 · updated 2021-12-17T14:32:00 · closed 2021-12-17T13:25:29 · https://github.com/huggingface/datasets/pull/3436

> Adding OneStopQA, a multiple choice reading comprehension dataset annotated according to the STARC (Structured Annotations for Reading Comprehension) scheme.
### #3435 (PR, closed): Improve Wikipedia Loading Script
id 1,081,043,756 · by geohci (user 45494522) · labels: none · 0 comments
created 2021-12-15T13:30:06 · updated 2022-03-04T08:16:00 · closed 2022-03-04T08:16:00 · https://github.com/huggingface/datasets/pull/3435

> * More structured approach to detecting redirects * Remove redundant template filter code (covered by strip_code) * Add language-specific lists of additional media namespace aliases for filtering * Add language-specific lists of category namespace aliases for new link text cleaning step * Remove magic words (parser...
### #3434 (issue, closed): Add The People's Speech
id 1,080,917,446 · by mariosasko (user 47462742) · labels: dataset request, speech · 0 comments
created 2021-12-15T11:21:21 · updated 2023-02-28T16:22:29 · closed 2023-02-28T16:22:28 · https://github.com/huggingface/datasets/issues/3434

> ## Adding a Dataset - **Name:** The People's Speech - **Description:** a massive English-language dataset of audio transcriptions of full sentences. - **Paper:** https://openreview.net/pdf?id=R8CwidgJ0yT - **Data:** https://mlcommons.org/en/peoples-speech/ - **Motivation:** With over 30,000 hours of speech, this ...
### #3433 (issue, closed): Add Multilingual Spoken Words dataset
id 1,080,910,724 · by albertvillanova (user 8515462) · labels: dataset request, speech · 0 comments
created 2021-12-15T11:14:44 · updated 2022-02-22T10:03:53 · closed 2022-02-22T10:03:53 · https://github.com/huggingface/datasets/issues/3433

> ## Adding a Dataset - **Name:** Multilingual Spoken Words - **Description:** Multilingual Spoken Words Corpus is a large and growing audio dataset of spoken words in 50 languages for academic research and commercial applications in keyword spotting and spoken term search, licensed under CC-BY 4.0. The dataset contain...
### #3432 (PR, closed): Correctly indent builder config in dataset script docs
id 1,079,910,769 · by mariosasko (user 47462742) · labels: none · 0 comments
created 2021-12-14T15:39:47 · updated 2021-12-14T17:35:17 · closed 2021-12-14T17:35:17 · https://github.com/huggingface/datasets/pull/3432

> (no description)
### #3431 (issue, closed): Unable to resolve any data file after loading once
id 1,079,866,083 · by LzyFischer (user 84694183) · labels: none · 0 comments
created 2021-12-14T15:02:15 · updated 2022-12-11T10:53:04 · closed 2022-02-24T09:13:52 · https://github.com/huggingface/datasets/issues/3431

> when I rerun my program, it occurs this error " Unable to resolve any data file that matches '['**train*']' at /data2/whr/lzy/open_domain_data/retrieval/wiki_dpr with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'zip']", so how could i deal with this problem? thx. And below is my code . ...
### #3430 (PR, closed): Make decoding of Audio and Image feature optional
id 1,079,811,124 · by mariosasko (user 47462742) · labels: none · 0 comments
created 2021-12-14T14:15:08 · updated 2022-01-25T18:57:52 · closed 2022-01-25T18:57:52 · https://github.com/huggingface/datasets/pull/3430

> Add the `decode` argument (`True` by default) to the `Audio` and the `Image` feature to make it possible to toggle on/off decoding of these features. Even though we've discussed that on Slack, I'm not removing the `_storage_dtype` argument of the Audio feature in this PR to avoid breaking the Audio feature tests.
### #3429 (PR, closed): Make cast cacheable (again) on Windows
id 1,078,902,390 · by mariosasko (user 47462742) · labels: none · 0 comments
created 2021-12-13T19:32:02 · updated 2021-12-14T14:39:51 · closed 2021-12-14T14:39:50 · https://github.com/huggingface/datasets/pull/3429

> `cast` currently emits the following warning when called on Windows: ``` Parameter 'function'=<function Dataset.cast.<locals>.<lambda> at 0x000001C930571EA0> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameter...
### #3428 (PR, closed): Clean squad dummy data
id 1,078,863,468 · by lhoestq (user 42851186) · labels: none · 0 comments
created 2021-12-13T18:46:29 · updated 2021-12-13T18:57:50 · closed 2021-12-13T18:57:50 · https://github.com/huggingface/datasets/pull/3428

> Some unused files were remaining, this PR removes them. We just need to keep the dummy_data.zip file
### #3427 (PR, closed): Add The Pile Enron Emails subset
id 1,078,782,159 · by albertvillanova (user 8515462) · labels: none · 0 comments
created 2021-12-13T17:14:16 · updated 2021-12-14T17:30:59 · closed 2021-12-14T17:30:57 · https://github.com/huggingface/datasets/pull/3427

> Add: - Enron Emails subset of The Pile: "enron_emails" config Close bigscience-workshop/data_tooling#310. CC: @StellaAthena
### #3426 (PR, closed): Update disaster_response_messages download urls (+ add validation split)
id 1,078,670,031 · by mariosasko (user 47462742) · labels: none · 0 comments
created 2021-12-13T15:30:12 · updated 2021-12-14T14:38:30 · closed 2021-12-14T14:38:29 · https://github.com/huggingface/datasets/pull/3426

> Fixes #3240, fixes #3416
### #3425 (issue, open): Getting configs names takes too long
id 1,078,598,140 · by severo (user 1676121) · labels: bug · 0 comments
created 2021-12-13T14:27:57 · updated 2021-12-13T14:53:33 · closed — · https://github.com/huggingface/datasets/issues/3425

> ## Steps to reproduce the bug ```python from datasets import get_dataset_config_names get_dataset_config_names("allenai/c4") ``` ## Expected results I would expect to get the answer quickly, at least in less than 10s ## Actual results It takes about 45s on my environment ## Environment info - `d...
### #3424 (PR, closed): Add RedCaps dataset
id 1,078,543,625 · by mariosasko (user 47462742) · labels: none · 0 comments
created 2021-12-13T13:38:13 · updated 2022-01-12T14:13:16 · closed 2022-01-12T14:13:15 · https://github.com/huggingface/datasets/pull/3424

> Add the RedCaps dataset. I'm not adding the generated `dataset_infos.json` file for now due to its size (11 MB). TODOs: - [x] dummy data - [x] dataset card Close #3316
### #3423 (issue, closed): data duplicate when setting num_works > 1 with streaming data
id 1,078,049,638 · by cloudyuyuyu (user 16486492) · labels: bug, streaming · 0 comments
created 2021-12-13T03:43:17 · updated 2022-12-14T16:04:22 · closed 2022-12-14T16:04:22 · https://github.com/huggingface/datasets/issues/3423

> ## Describe the bug The data is repeated num_works times when we load_dataset with streaming and set num_works > 1 when construct dataloader ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import pandas as pd import numpy as np import os from datasets import load_dataset from tor...
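The duplication in #3423 happens because each DataLoader worker iterates the same stream from the start. The generic fix is to shard the stream so each worker keeps a disjoint slice; a minimal, library-agnostic sketch of the round-robin pattern:

```python
def shard_stream(stream, num_workers, worker_id):
    """Yield only every num_workers-th example, offset by worker_id,
    so the union over workers covers the stream exactly once."""
    for i, example in enumerate(stream):
        if i % num_workers == worker_id:
            yield example

# Two workers over a 10-example stream: disjoint halves, no duplicates.
per_worker = [list(shard_stream(range(10), 2, w)) for w in (0, 1)]
print(per_worker)
```

Recent `datasets` versions apply an equivalent split automatically when the number of shards permits; this sketch only shows the underlying idea.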
### #3422 (issue, closed): Error about load_metric
id 1,078,022,619 · by jiacheng-ye (user 30772464) · labels: bug · 0 comments
created 2021-12-13T02:49:51 · updated 2022-01-07T14:06:47 · closed 2022-01-07T14:06:47 · https://github.com/huggingface/datasets/issues/3422

> ## Describe the bug File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1371, in load_metric metric = metric_cls( TypeError: 'NoneType' object is not callable ## Steps to reproduce the bug ```python metric = load_metric("glue", "sst2") ``` ## Environment info - `datasets` version: ...
### #3421 (PR, closed): Adding mMARCO dataset
id 1,077,966,571 · by lhbonifacio (user 17603035) · labels: dataset contribution · 0 comments
created 2021-12-13T00:56:43 · updated 2022-10-03T09:37:15 · closed 2022-10-03T09:37:15 · https://github.com/huggingface/datasets/pull/3421

> Adding mMARCO (v1.1) to HF datasets.
### #3420 (PR, closed): Add eli5_category dataset
id 1,077,913,468 · by jingshenSN2 (user 40377373) · labels: none · 0 comments
created 2021-12-12T21:30:45 · updated 2021-12-14T17:53:03 · closed 2021-12-14T17:53:02 · https://github.com/huggingface/datasets/pull/3420

> This pull request adds a categorized Long-form question answering dataset `ELI5_Category`. It's a new variant of the [ELI5](https://huggingface.co/datasets/eli5) dataset that uses the Reddit tags to alleviate the training/validation overlapping in the origin ELI5 dataset. A [report](https://celeritasml.netlify.app/p...
### #3419 (issue, open): `.to_json` is extremely slow after `.select`
id 1,077,350,974 · by eladsegal (user 13485709) · labels: bug · 0 comments
created 2021-12-11T01:36:31 · updated 2021-12-21T15:49:07 · closed — · https://github.com/huggingface/datasets/issues/3419

> ## Describe the bug Saving a dataset to JSON with `to_json` is extremely slow after using `.select` on the original dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset original = load_dataset("squad", split="train") original.to_json("from_original.json") # Takes 0 seconds se...
### #3418 (PR, closed): Add Wikisource dataset
id 1,077,053,296 · by albertvillanova (user 8515462) · labels: dataset contribution · 0 comments
created 2021-12-10T17:04:44 · updated 2022-10-04T09:35:56 · closed 2022-10-03T09:37:20 · https://github.com/huggingface/datasets/pull/3418

> Add loading script for Wikisource dataset. Fix #3399. CC: @geohci, @yjernite
### #3417 (PR, closed): Fix type of bridge field in QED
id 1,076,943,343 · by mariosasko (user 47462742) · labels: none · 0 comments
created 2021-12-10T15:07:21 · updated 2021-12-14T14:39:06 · closed 2021-12-14T14:39:05 · https://github.com/huggingface/datasets/pull/3417

> Use `Value("string")` instead of `Value("bool")` for the feature type of the `"bridge"` field in the QED dataset. If the value is `False`, set to `None`. The following paragraph in the QED repo explains the purpose of this field: >Each annotation in referential_equalities is a pair of spans, the question_reference ...
### #3416 (issue, closed): disaster_response_messages unavailable
id 1,076,868,771 · by sacdallago (user 6240943) · labels: dataset-viewer · 0 comments
created 2021-12-10T13:49:17 · updated 2021-12-14T14:38:29 · closed 2021-12-14T14:38:29 · https://github.com/huggingface/datasets/issues/3416

> ## Dataset viewer issue for '* disaster_response_messages*' **Link:** https://huggingface.co/datasets/disaster_response_messages Dataset unavailable. Link dead: https://datasets.appen.com/appen_datasets/disaster_response_data/disaster_response_messages_training.csv Am I the one who added this dataset ?No
### #3415 (issue, closed): Non-deterministic tests: CI tests randomly fail
id 1,076,472,534 · by albertvillanova (user 8515462) · labels: bug · 0 comments
created 2021-12-10T06:08:59 · updated 2022-03-31T16:38:51 · closed 2022-03-31T16:38:51 · https://github.com/huggingface/datasets/issues/3415

> ## Describe the bug Some CI tests fail randomly. 1. In https://github.com/huggingface/datasets/pull/3375/commits/c10275fe36085601cb7bdb9daee9a8f1fc734f48, there were 3 failing tests, only on Linux: ``` =========================== short test summary info ============================ FAILED tests/test_str...
### #3414 (PR, closed): Skip None encoding (line deleted by accident in #3195)
id 1,076,028,998 · by mariosasko (user 47462742) · labels: none · 0 comments
created 2021-12-09T21:17:33 · updated 2021-12-10T11:00:03 · closed 2021-12-10T11:00:02 · https://github.com/huggingface/datasets/pull/3414

> Return the line deleted by accident in #3195 while [resolving merge conflicts](https://github.com/huggingface/datasets/pull/3195/commits/8b0ed15be08559056b817836a07d47acda0c4510). Fix #3181 (finally :))
1,075,854,325
3,413
Add WIDER FACE dataset
Adds the WIDER FACE face detection benchmark. TODOs: * [x] dataset card * [x] dummy data
closed
https://github.com/huggingface/datasets/pull/3413
2021-12-09T18:03:38
2022-01-12T14:13:47
2022-01-12T14:13:47
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,075,846,368
3,412
Fix flaky test again for s3 serialization
Following https://github.com/huggingface/datasets/pull/3388 that wasn't enough (see CI error [here](https://app.circleci.com/pipelines/github/huggingface/datasets/9080/workflows/b971fb27-ff20-4220-9416-c19acdfdf6f4/jobs/55985))
closed
https://github.com/huggingface/datasets/pull/3412
2021-12-09T17:54:41
2021-12-09T18:00:52
2021-12-09T18:00:52
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,075,846,272
3,411
[chinese wwm] load_datasets behavior not as expected when using run_mlm_wwm.py script
## Describe the bug Model I am using (Bert, XLNet ...): bert-base-chinese The problem arises when using: * [https://github.com/huggingface/transformers/blob/master/examples/research_projects/mlm_wwm/run_mlm_wwm.py] the official example script: `run_mlm_wwm.py` The task I am working on is: pretraining whole ...
open
https://github.com/huggingface/datasets/issues/3411
2021-12-09T17:54:35
2021-12-22T11:21:33
null
{ "login": "hyusterr", "id": 52968111, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,075,815,415
3,410
Fix dependencies conflicts in Windows CI after conda update to 4.11
For some reason the CI wasn't using python 3.6 but python 3.7 after the update to conda 4.11
closed
https://github.com/huggingface/datasets/pull/3410
2021-12-09T17:19:11
2021-12-09T17:36:20
2021-12-09T17:36:19
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,075,684,593
3,409
Pass new_fingerprint in multiprocessing
Following https://github.com/huggingface/datasets/pull/3045 Currently one can pass `new_fingerprint` to `.map()` to use a custom fingerprint instead of the one computed by hashing the map transform. However it's ignored if `num_proc>1`. In this PR I fixed that by passing `new_fingerprint` to `._map_single()` when...
closed
https://github.com/huggingface/datasets/pull/3409
2021-12-09T15:12:00
2022-08-19T10:41:04
2021-12-09T17:38:43
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,075,642,915
3,408
Typo in Dataset viewer error message
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* When creating an empty dataset repo, the Dataset Preview provides a helpful message that no files were found. There is a tiny typo in that message: "ressource" should be "resource" ...
closed
https://github.com/huggingface/datasets/issues/3408
2021-12-09T14:34:02
2021-12-22T11:02:53
2021-12-22T11:02:53
{ "login": "lewtun", "id": 26859204, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,074,502,225
3,407
Use max number of data files to infer module
When inferring the module for datasets without script, set a maximum number of iterations over data files. This PR fixes the issue of taking too long when hundreds of data files are present. Please feel free to agree on both numbers: ``` # Datasets without script DATA_FILES_MAX_NUMBER = 10 ARCHIVED_DATA_FILES_MAX...
closed
https://github.com/huggingface/datasets/pull/3407
2021-12-08T14:58:43
2021-12-14T17:08:42
2021-12-14T17:08:42
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,074,366,050
3,406
Fix module inference for archive with a directory
Fix module inference for an archive file that contains files within a directory. Fix #3405.
closed
https://github.com/huggingface/datasets/pull/3406
2021-12-08T12:39:12
2021-12-08T13:03:30
2021-12-08T13:03:29
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,074,360,362
3,405
ZIP format inference does not work when files located in a dir inside the archive
## Describe the bug When a zipped file contains archived files within a directory, the function `infer_module_for_data_files_in_archives` does not work. It only works for files located in the root directory of the ZIP file. ## Steps to reproduce the bug ```python infer_module_for_data_files_in_archives(["path/...
closed
https://github.com/huggingface/datasets/issues/3405
2021-12-08T12:32:15
2021-12-08T13:03:29
2021-12-08T13:03:29
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,073,657,561
3,404
Optimize ZIP format inference
**Is your feature request related to a problem? Please describe.** When hundreds of ZIP files are present in a dataset, format inference takes too long. See: https://github.com/bigscience-workshop/data_tooling/issues/232#issuecomment-986685497 **Describe the solution you'd like** Iterate over a maximum number o...
closed
https://github.com/huggingface/datasets/issues/3404
2021-12-07T18:44:49
2021-12-14T17:08:41
2021-12-14T17:08:41
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,073,622,120
3,403
Cannot import name 'maybe_sync'
## Describe the bug Cannot seem to import datasets when running the run_summarizer.py script on a VM set up on OVHcloud ## Steps to reproduce the bug ```python from datasets import load_dataset ``` ## Expected results No error ## Actual results Traceback (most recent call last): File "<stdin>", line 1, in...
closed
https://github.com/huggingface/datasets/issues/3403
2021-12-07T17:57:59
2021-12-17T07:00:35
2021-12-17T07:00:35
{ "login": "KMFODA", "id": 35491698, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,073,614,815
3,402
More robust first elem check in encode/cast example
Fix #3306
closed
https://github.com/huggingface/datasets/pull/3402
2021-12-07T17:48:16
2021-12-08T13:02:16
2021-12-08T13:02:15
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,073,603,508
3,401
Add Wikimedia pre-processed datasets
## Adding a Dataset - **Name:** Add pre-processed data to: - *wikimedia/wikipedia*: https://huggingface.co/datasets/wikimedia/wikipedia - *wikimedia/wikisource*: https://huggingface.co/datasets/wikimedia/wikisource - **Description:** Add pre-processed data to the Hub for all languages - **Paper:** *link to the...
closed
https://github.com/huggingface/datasets/issues/3401
2021-12-07T17:33:19
2024-10-09T16:10:47
2024-10-09T16:10:47
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,073,600,382
3,400
Improve Wikipedia loading script
As reported by @geohci, the "wikipedia" processing/loading script could be improved by some additional small suggested processing functions: - _extract_content(filepath): - Replace .startswith("#redirect") with more structured approach: if elem.find(f"./{namespace}redirect") is None: continue - _parse_and_clean_wi...
closed
https://github.com/huggingface/datasets/issues/3400
2021-12-07T17:29:25
2022-03-22T16:52:28
2022-03-22T16:52:28
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,073,593,861
3,399
Add Wikisource dataset
## Adding a Dataset - **Name:** *wikisource* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** Additional high quality textual d...
closed
https://github.com/huggingface/datasets/issues/3399
2021-12-07T17:21:31
2024-10-09T16:11:27
2024-10-09T16:11:26
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,073,590,384
3,398
Add URL field to Wikimedia dataset instances: wikipedia,...
As reported by @geohci, in order to host pre-processed data on the Hub, we should add the full URL to data instances (new field "url"), so that we conform to the license's proper attribution requirement. See, e.g.: https://fair-trec.github.io/docs/Fair_Ranking_2021_Participant_Instructions.pdf#subsection.3.2 This sho...
closed
https://github.com/huggingface/datasets/issues/3398
2021-12-07T17:17:27
2022-03-22T16:53:27
2022-03-22T16:53:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,073,502,444
3,397
add BNL newspapers
This pull request adds the BNL's [processed newspaper collections](https://data.bnl.lu/data/historical-newspapers/) as a dataset. This is partly done to support BigScience; see: https://github.com/bigscience-workshop/data_tooling/issues/192. The Datacard is sparser than I would like, but I plan to make a separate...
closed
https://github.com/huggingface/datasets/pull/3397
2021-12-07T15:43:21
2022-01-17T18:35:34
2022-01-17T18:35:34
{ "login": "davanstrien", "id": 8995957, "type": "User" }
[]
true
[]
1,073,467,183
3,396
Install Audio dependencies to support audio decoding
## Dataset viewer issue for '*openslr*', '*projecte-aina/parlament_parla*' **Link:** *https://huggingface.co/datasets/openslr* **Link:** *https://huggingface.co/datasets/projecte-aina/parlament_parla* Error: ``` Status code: 400 Exception: ImportError Message: To support decoding audio files, ple...
closed
https://github.com/huggingface/datasets/issues/3396
2021-12-07T15:11:36
2022-04-25T16:12:22
2022-04-25T16:12:01
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" }, { "name": "audio_column", "color": "F83ACF" } ]
false
[]
1,073,432,650
3,395
Fix formatting in IterableDataset.map docs
Fix formatting in the recently added `Map` section of the streaming docs.
closed
https://github.com/huggingface/datasets/pull/3395
2021-12-07T14:41:01
2021-12-08T10:11:33
2021-12-08T10:11:33
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,073,396,308
3,394
Preserve all feature types when saving a dataset on the Hub with `push_to_hub`
Currently, if one of the dataset features is of type `ClassLabel`, saving the dataset with `push_to_hub` and reloading the dataset with `load_dataset` will return the feature of type `Value`. To fix this, we should do something similar to `save_to_disk` (which correctly preserves the types) and not only push the parque...
closed
https://github.com/huggingface/datasets/issues/3394
2021-12-07T14:08:30
2021-12-21T17:00:09
2021-12-21T17:00:09
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,073,189,777
3,393
Common Voice Belarusian Dataset
## Adding a Dataset - **Name:** *Common Voice Belarusian Dataset* - **Description:** *[commonvoice.mozilla.org/be](https://commonvoice.mozilla.org/be)* - **Data:** *[commonvoice.mozilla.org/be/datasets](https://commonvoice.mozilla.org/be/datasets)* - **Motivation:** *It has more than 7GB of data, so it will be grea...
open
https://github.com/huggingface/datasets/issues/3393
2021-12-07T10:37:02
2021-12-09T15:56:03
null
{ "login": "wiedymi", "id": 42713027, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "speech", "color": "d93f0b" } ]
false
[]
1,073,073,408
3,392
Dataset viewer issue for `dansbecker/hackernews_hiring_posts`
## Dataset viewer issue for `dansbecker/hackernews_hiring_posts` **Link:** https://huggingface.co/datasets/dansbecker/hackernews_hiring_posts *short description of the issue* Dataset preview not showing for uploaded DatasetDict. See https://discuss.huggingface.co/t/dataset-preview-not-showing-for-uploaded-data...
closed
https://github.com/huggingface/datasets/issues/3392
2021-12-07T08:41:01
2021-12-07T14:04:28
2021-12-07T14:04:28
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,072,849,055
3,391
method to select columns
**Is your feature request related to a problem? Please describe.** * There is currently no way to select some columns of a dataset. In pandas, one can use `df[['col1', 'col2']]` to select columns, but in `datasets`, it results in an error. **Describe the solution you'd like** * A new method that can be used to cr...
closed
https://github.com/huggingface/datasets/issues/3391
2021-12-07T02:44:19
2021-12-07T02:45:27
2021-12-07T02:45:27
{ "login": "changjonathanc", "id": 31893406, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,072,462,456
3,390
Loading dataset throws "KeyError: 'Field "builder_name" does not exist in table schema'"
## Describe the bug I have prepared a dataset with `datasets` and now I am trying to load it back (Finnish-NLP/voxpopuli_fi). I get "KeyError: 'Field "builder_name" does not exist in table schema'". My dataset folder and files should be structured like @patrickvonplaten's here https://huggingface.co/datasets/flax-community/german-c...
closed
https://github.com/huggingface/datasets/issues/3390
2021-12-06T18:22:49
2021-12-06T20:22:05
2021-12-06T20:22:05
{ "login": "R4ZZ3", "id": 25264037, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,072,191,865
3,389
Add EDGAR
## Adding a Dataset - **Name:** EDGAR Database - **Description:** https://www.sec.gov/edgar/about EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system, is the primary system for companies and others submitting documents under the Securities Act of 1933, the Securities Exchange Act of 1934, the Trust I...
open
https://github.com/huggingface/datasets/issues/3389
2021-12-06T14:06:11
2022-10-05T10:40:22
null
{ "login": "philschmid", "id": 32632186, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,072,022,021
3,388
Fix flaky test of the temporary directory used by load_from_disk
The test is flaky, here is an example of random CI failure: https://github.com/huggingface/datasets/commit/73ed6615b4b3eb74d5311684f7b9e05cdb76c989 I fixed that by not checking the content of the random part of the temporary directory name
closed
https://github.com/huggingface/datasets/pull/3388
2021-12-06T11:09:31
2021-12-06T11:25:03
2021-12-06T11:24:49
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,071,836,456
3,387
Create Language Modeling task
Create Language Modeling task to be able to specify the input "text" column in a dataset. This can be useful for datasets which are not exclusively used for language modeling and have more than one column: - for text classification datasets (with columns "review" and "rating", for example), the Language Modeling ta...
closed
https://github.com/huggingface/datasets/pull/3387
2021-12-06T07:56:07
2021-12-17T17:18:28
2021-12-17T17:18:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,071,813,141
3,386
Fix typos in dataset cards
This PR: - Fix typos in dataset cards - Fix Papers With Code ID for: - Bilingual Corpus of Arabic-English Parallel Tweets - Tweets Hate Speech Detection - Add pretty name tags
closed
https://github.com/huggingface/datasets/pull/3386
2021-12-06T07:20:40
2021-12-06T09:30:55
2021-12-06T09:30:54
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,071,742,310
3,385
None batched `with_transform`, `set_transform`
**Is your feature request related to a problem? Please describe.** A `torch.utils.data.Dataset.__getitem__` operates on a single example. But 🤗 `Datasets.with_transform` doesn't seem to allow a non-batched transform. **Describe the solution you'd like** Have a `batched=True` argument in `Datasets.with_transfor...
open
https://github.com/huggingface/datasets/issues/3385
2021-12-06T05:20:54
2022-01-17T15:25:01
null
{ "login": "changjonathanc", "id": 31893406, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,071,594,165
3,384
Adding mMARCO dataset
We are adding the mMARCO dataset to the HuggingFace datasets repo. This way, all the languages covered in the translation are available in an easy way.
closed
https://github.com/huggingface/datasets/pull/3384
2021-12-05T23:59:11
2021-12-12T15:27:36
2021-12-12T15:27:36
{ "login": "lhbonifacio", "id": 17603035, "type": "User" }
[]
true
[]
1,071,551,884
3,383
add Georgian data in cc100.
Update the cc100 dataset to support loading Georgian (ka) data, which is originally available in the CC100 dataset source. All tests pass. Dummy data generated. Metadata generated.
closed
https://github.com/huggingface/datasets/pull/3383
2021-12-05T20:38:09
2021-12-14T14:37:23
2021-12-14T14:37:22
{ "login": "AnzorGozalishvili", "id": 55232459, "type": "User" }
[]
true
[]
1,071,293,299
3,382
#3337 Add typing overloads to Dataset.__getitem__ for mypy
Add typing overloads to Dataset.__getitem__ for mypy Fixes #3337 **Iterable** Iterable from `collections` cannot be parameterized with a type, so you can't do `Iterable[int]`, for example. `typing` has a Generic version that builds upon the one from `collections`. **Flake8** I had to add `# noqa: F811`; this is a bug from Fl...
closed
https://github.com/huggingface/datasets/pull/3382
2021-12-04T20:54:49
2021-12-14T10:28:55
2021-12-14T10:28:55
{ "login": "Dref360", "id": 8976546, "type": "User" }
[]
true
[]
1,071,283,879
3,381
Unable to load audio_features from common_voice dataset
## Describe the bug I am not able to load audio features from the common_voice dataset ## Steps to reproduce the bug ``` from datasets import load_dataset import torchaudio test_dataset = load_dataset("common_voice", "hi", split="test[:2%]") resampler = torchaudio.transforms.Resample(48_000, 16_000) def spe...
closed
https://github.com/huggingface/datasets/issues/3381
2021-12-04T19:59:11
2021-12-06T17:52:42
2021-12-06T17:52:42
{ "login": "ashu5644", "id": 8268102, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,071,166,270
3,380
[Quick poll] Give your opinion on the future of the Hugging Face Open Source ecosystem!
Thanks to all of you, `datasets` will pass 11.5k stars :star2: this week! If you have a couple of minutes and want to participate in shaping the future of the ecosystem, please share your thoughts: [**hf.co/oss-survey**](https://hf.co/oss-survey) (please reply in the above feedback form rather than to this th...
closed
https://github.com/huggingface/datasets/issues/3380
2021-12-04T09:18:33
2022-01-11T12:29:53
2022-01-11T12:29:53
{ "login": "LysandreJik", "id": 30755778, "type": "User" }
[]
false
[]
1,071,079,146
3,379
iter_archive on zipfiles with better compression type check
Hello @lhoestq, thank you for your detailed answer on the previous PR! I made this new PR because I misused git on the previous one #3347. Related issue #3272. # Comments: * For the extension check, I used the `_get_extraction_protocol` function in **download_manager.py** with a slight change and called it `_get_e...
closed
https://github.com/huggingface/datasets/pull/3379
2021-12-04T01:04:48
2023-01-24T13:00:19
2023-01-24T12:53:08
{ "login": "Mehdi2402", "id": 56029953, "type": "User" }
[]
true
[]
1,070,580,126
3,378
Add The Pile subsets
Add The Pile subsets: - pubmed - ubuntu_irc - europarl - hacker_news - nih_exporter Close bigscience-workshop/data_tooling#301. CC: @StellaAthena
closed
https://github.com/huggingface/datasets/pull/3378
2021-12-03T13:14:54
2021-12-09T18:11:25
2021-12-09T18:11:23
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,070,562,907
3,377
COCO 🥥 on the 🤗 Hub?
This is a draft PR since I ran into a few small problems. I referred to this TFDS code: https://github.com/tensorflow/datasets/blob/2538a08c184d53b37bfcf52cc21dd382572a88f4/tensorflow_datasets/object_detection/coco.py cc: @mariosasko
closed
https://github.com/huggingface/datasets/pull/3377
2021-12-03T12:55:27
2021-12-20T14:14:01
2021-12-20T14:14:00
{ "login": "merveenoyan", "id": 53175384, "type": "User" }
[]
true
[]
1,070,522,979
3,376
Update clue benchmark
Fix #3374
closed
https://github.com/huggingface/datasets/pull/3376
2021-12-03T12:06:01
2021-12-08T14:14:42
2021-12-08T14:14:41
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]