Dataset columns (name: type, observed range):
- id: int64 (599M to 3.26B)
- number: int64 (1 to 7.7k)
- title: string (length 1 to 290)
- body: string (length 0 to 228k)
- state: string (2 classes)
- html_url: string (length 46 to 51)
- created_at: timestamp[s] (2020-04-14 10:18:02 to 2025-07-23 08:04:53)
- updated_at: timestamp[s] (2020-04-27 16:04:17 to 2025-07-23 18:53:44)
- closed_at: timestamp[s] (2020-04-14 12:01:40 to 2025-07-23 16:44:42)
- user: dict
- labels: list (length 0 to 4)
- is_pull_request: bool (2 classes)
- comments: list (length 0 to 0)
946,470,815
2,662
Load Dataset from the Hub (NO DATASET SCRIPT)
## Load the data from any Dataset repository on the Hub This PR adds support for loading datasets from any dataset repository on the hub, without requiring any dataset script. As a user it's now possible to create a repo and upload some csv/json/text/parquet files, and then be able to load the data in one line. H...
closed
https://github.com/huggingface/datasets/pull/2662
2021-07-16T17:21:58
2021-08-25T14:53:01
2021-08-25T14:18:08
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
946,446,967
2,661
Add SD task for SUPERB
Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization). TODO: - [x] Generate the LibriMix corpus - [x] Prepare the corpus for diarization - [x] Upl...
closed
https://github.com/huggingface/datasets/pull/2661
2021-07-16T16:43:21
2021-08-04T17:03:53
2021-08-04T17:03:53
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
946,316,180
2,660
Move checks from _map_single to map
The goal of this PR is to remove duplicated checks in the `map` logic to execute them only once whenever possible (`fn_kwargs`, `input_columns`, ...). Additionally, this PR improves the consistency (to align it with `input_columns`) of the `remove_columns` check by adding support for a single string value, which is the...
closed
https://github.com/huggingface/datasets/pull/2660
2021-07-16T13:53:33
2021-09-06T14:12:23
2021-09-06T14:12:23
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
946,155,407
2,659
Allow dataset config kwargs to be None
Close https://github.com/huggingface/datasets/issues/2658 The dataset config kwargs that were set to None were simply ignored. This was an issue when None has some meaning for certain parameters of certain builders, like the `sep` parameter of the "csv" builder that allows inferring the separator. cc @SBrandeis
closed
https://github.com/huggingface/datasets/pull/2659
2021-07-16T10:25:38
2021-07-16T12:46:07
2021-07-16T12:46:07
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
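The fix above hinges on distinguishing "argument not provided" from "argument explicitly set to None". A minimal sketch of that pattern using a sentinel (the function and names here are illustrative, not the library's actual internals):

```python
_UNSET = object()  # sentinel: distinguishes "not passed" from "passed as None"

def build_config_kwargs(sep=_UNSET, **other):
    config = {}
    if sep is not _UNSET:
        # None is meaningful for the "csv" builder: it asks pandas
        # to infer the separator, so it must not be dropped here
        config["sep"] = sep
    config.update(other)
    return config

print(build_config_kwargs())          # {}
print(build_config_kwargs(sep=None))  # {'sep': None}
```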
946,139,532
2,658
Can't pass `sep=None` to load_dataset("csv", ...) to infer the separator via pandas.read_csv
When doing `load_dataset("csv", sep=None)`, the `sep` passed to `pd.read_csv` is still the default `sep=","` instead, which makes it impossible to make the csv loader infer the separator. Related to https://github.com/huggingface/datasets/pull/2656 cc @SBrandeis
closed
https://github.com/huggingface/datasets/issues/2658
2021-07-16T10:05:44
2021-07-16T12:46:06
2021-07-16T12:46:06
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
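The separator inference that `sep=None` is supposed to unlock can be illustrated with the standard library's `csv.Sniffer` — a sketch of the same idea, not necessarily the code path pandas takes internally:

```python
import csv

def sniff_separator(sample: str) -> str:
    # guess the delimiter from a sample of the file, which is the
    # behaviour the issue asks for when sep=None reaches pd.read_csv
    return csv.Sniffer().sniff(sample, delimiters=",;\t|").delimiter

print(sniff_separator("a;b;c\n1;2;3\n"))  # ;
```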
945,822,829
2,657
`to_json` reporting enhancements
While using `to_json` 2 things came to mind that would have made the experience easier on the user: 1. Could we have a `desc` arg for the tqdm use and a fallback to just `to_json` so that it'd be clear to the user what's happening? Surely, one can just print the description before calling json, but I thought perhaps...
open
https://github.com/huggingface/datasets/issues/2657
2021-07-15T23:32:18
2021-07-15T23:33:53
null
{ "login": "stas00", "id": 10676103, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
945,421,790
2,656
Change `from_csv` default arguments
Passing `sep=None` to pandas's `read_csv` lets pandas guess the CSV file's separator. This PR allows users to use this pandas feature by passing `sep=None` to `Dataset.from_csv`: ```python Dataset.from_csv( ..., sep=None ) ```
closed
https://github.com/huggingface/datasets/pull/2656
2021-07-15T14:09:06
2023-09-24T09:56:44
2021-07-16T10:23:26
{ "login": "SBrandeis", "id": 33657802, "type": "User" }
[]
true
[]
945,382,723
2,655
Allow the selection of multiple columns at once
**Is your feature request related to a problem? Please describe.** Similar to pandas, it would be great if we could select multiple columns at once. **Describe the solution you'd like** ```python my_dataset = ... # Has columns ['idx', 'sentence', 'label'] idx, label = my_dataset[['idx', 'label']] ``` **...
closed
https://github.com/huggingface/datasets/issues/2655
2021-07-15T13:30:45
2024-01-09T15:11:27
2024-01-09T07:46:28
{ "login": "Dref360", "id": 8976546, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
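The requested behaviour amounts to projecting a columnar mapping onto a subset of its columns. A minimal, library-agnostic sketch (not the `datasets` implementation):

```python
def select_columns(table: dict, columns: list) -> dict:
    # project a columnar dict onto the requested columns, pandas-style
    return {col: table[col] for col in columns}

my_table = {"idx": [0, 1], "sentence": ["a", "b"], "label": [1, 0]}
print(select_columns(my_table, ["idx", "label"]))  # {'idx': [0, 1], 'label': [1, 0]}
```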
945,167,231
2,654
Give a user feedback if the dataset he loads is streamable or not
**Is your feature request related to a problem? Please describe.** I would love to know whether a `dataset` is streamable or not with the current implementation. **Describe the solution you'd like** We could show a warning when a dataset is loaded with `load_dataset('...',streaming=True)` when it's not streamable, e.g....
open
https://github.com/huggingface/datasets/issues/2654
2021-07-15T09:07:27
2021-08-02T11:03:21
null
{ "login": "philschmid", "id": 32632186, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
945,102,321
2,653
Add SD task for SUPERB
Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization). Steps: - [x] Generate the LibriMix corpus - [x] Prepare the corpus for diarization - [x] Up...
closed
https://github.com/huggingface/datasets/issues/2653
2021-07-15T07:51:40
2021-08-04T17:03:52
2021-08-04T17:03:52
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
944,865,924
2,652
Fix logging docstring
Remove "no tqdm bars" from the docstring in the logging module to align it with the changes introduced in #2534.
closed
https://github.com/huggingface/datasets/pull/2652
2021-07-14T23:19:58
2021-07-18T11:41:06
2021-07-15T09:57:31
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
944,796,961
2,651
Setting log level higher than warning does not suppress progress bar
## Describe the bug I would like to disable progress bars for `.map` method (and other methods like `.filter` and `load_dataset` as well). According to #1627 one can suppress it by setting log level higher than `warning`, however doing so doesn't suppress it with version 1.9.0. I also tried to set `DATASETS_VERBOS...
closed
https://github.com/huggingface/datasets/issues/2651
2021-07-14T21:06:51
2022-07-08T14:51:57
2021-07-15T03:41:35
{ "login": "Isa-rentacs", "id": 1147443, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
944,672,565
2,650
[load_dataset] shard and parallelize the process
- Some huge datasets (e.g. oscar/en) take forever to build the first time, as the build runs on a single CPU core. - If the build crashes, everything done up to that point gets lost Request: Shard the build over multiple arrow files, which would enable: - much faster build by parallelizing the build process - if the p...
closed
https://github.com/huggingface/datasets/issues/2650
2021-07-14T18:04:58
2023-11-28T19:11:41
2023-11-28T19:11:40
{ "login": "stas00", "id": 10676103, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
944,651,229
2,649
adding progress bar / ETA for `load_dataset`
Please consider: ``` Downloading and preparing dataset oscar/unshuffled_deduplicated_en (download: 462.40 GiB, generated: 1.18 TiB, post-processed: Unknown size, total: 1.63 TiB) to cache/oscar/unshuffled_deduplicated_en/1.0.0/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2... HF google storage unre...
open
https://github.com/huggingface/datasets/issues/2649
2021-07-14T17:34:39
2023-03-27T10:32:49
null
{ "login": "stas00", "id": 10676103, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
944,484,522
2,648
Add web_split dataset for Paraphase and Rephrase benchmark
## Describe: For getting simple sentences from a complex sentence there are datasets and tasks like wiki_split that are available in Hugging Face datasets. This web_split is a very similar dataset. Some research papers state that if we train the model combining these two datasets, it will yield better resu...
open
https://github.com/huggingface/datasets/issues/2648
2021-07-14T14:24:36
2021-07-14T14:26:12
null
{ "login": "bhadreshpsavani", "id": 26653468, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
944,424,941
2,647
Fix anchor in README
I forgot to push this fix in #2611, so I'm sending it now.
closed
https://github.com/huggingface/datasets/pull/2647
2021-07-14T13:22:44
2021-07-18T11:41:18
2021-07-15T06:50:47
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
944,379,954
2,646
downloading of yahoo_answers_topics dataset failed
## Describe the bug I get an error datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files when I try to download the yahoo_answers_topics dataset ## Steps to reproduce the bug self.dataset = load_dataset( 'yahoo_answers_topics', cache_dir=self.config...
closed
https://github.com/huggingface/datasets/issues/2646
2021-07-14T12:31:05
2022-08-04T08:28:24
2022-08-04T08:28:24
{ "login": "vikrant7k", "id": 66781249, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
944,374,284
2,645
load_dataset processing failed with OS error after downloading a dataset
## Describe the bug After downloading a dataset like opus100, there is a bug that OSError: Cannot find data file. Original error: dlopen: cannot load any more object with static TLS ## Steps to reproduce the bug ```python from datasets import load_dataset this_dataset = load_dataset('opus100', 'af-en') ``` ...
closed
https://github.com/huggingface/datasets/issues/2645
2021-07-14T12:23:53
2021-07-15T09:34:02
2021-07-15T09:34:02
{ "login": "fake-warrior8", "id": 40395156, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
944,254,748
2,644
Batched `map` not allowed to return 0 items
## Describe the bug I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingface.co/docs/datasets/processing.html#augmenting...
closed
https://github.com/huggingface/datasets/issues/2644
2021-07-14T09:58:19
2021-07-26T14:55:15
2021-07-26T14:55:15
{ "login": "pcuenca", "id": 1177582, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
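In a batched `map`, a function may change the number of rows as long as every output column is shortened consistently — and a batch may legitimately end up with zero rows. A plain-Python sketch of the row-dropping pattern the report describes (the column names are made up):

```python
def drop_missing(batch: dict) -> dict:
    # keep only rows whose "path" passes the (possibly expensive) check;
    # every column must be filtered with the same mask
    mask = [p.endswith(".wav") for p in batch["path"]]
    return {col: [v for v, keep in zip(vals, mask) if keep]
            for col, vals in batch.items()}

batch = {"path": ["a.wav", "b.txt"], "label": [1, 0]}
print(drop_missing(batch))  # {'path': ['a.wav'], 'label': [1]}
```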
944,220,273
2,643
Enum used in map functions will raise a RecursionError with dill.
## Describe the bug Enums used in functions passed to `map` will fail at pickling with a maximum recursion exception as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284 In my particular case, I use an enum to define an argument with fixed options using the `TrainingArguments` ...
open
https://github.com/huggingface/datasets/issues/2643
2021-07-14T09:16:08
2021-11-02T09:51:11
null
{ "login": "jorgeecardona", "id": 100702, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
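A common workaround for such pickling failures is to capture the enum's primitive value in the closure instead of the Enum member itself, so serialization never touches the Enum machinery. A hedged sketch (the enum and field names are illustrative, not from the report):

```python
from enum import Enum

class Casing(Enum):
    LOWER = "lower"
    UPPER = "upper"

def make_map_fn(casing: Casing):
    # capture the plain string, not the Enum member, so the closure
    # pickles without recursing into the Enum class
    value = casing.value

    def fn(example):
        text = example["text"]
        return {"text": text.lower() if value == "lower" else text.upper()}

    return fn

fn = make_map_fn(Casing.UPPER)
print(fn({"text": "abc"}))  # {'text': 'ABC'}
```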
944,175,697
2,642
Support multi-worker with streaming dataset (IterableDataset).
**Is your feature request related to a problem? Please describe.** The current `.map` does not support multi-process, CPU can become bottleneck if the pre-processing is complex (e.g. t5 span masking). **Describe the solution you'd like** Ideally `.map` should support multi-worker like tfds, with `AUTOTUNE`. **D...
open
https://github.com/huggingface/datasets/issues/2642
2021-07-14T08:22:58
2024-05-03T10:11:04
null
{ "login": "changjonathanc", "id": 31893406, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
943,838,085
2,641
load_dataset("financial_phrasebank") NonMatchingChecksumError
## Describe the bug Attempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("financial_phrasebank", 'sentences_allagree') ``` ## Expected results I expect to see the financi...
closed
https://github.com/huggingface/datasets/issues/2641
2021-07-13T21:21:49
2022-08-04T08:30:08
2022-08-04T08:30:08
{ "login": "courtmckay", "id": 13956255, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
943,591,055
2,640
Fix docstrings
Fix rendering of some docstrings.
closed
https://github.com/huggingface/datasets/pull/2640
2021-07-13T16:09:14
2021-07-15T06:51:01
2021-07-15T06:06:12
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
943,527,463
2,639
Refactor patching to specific submodule
Minor reorganization of the code, so that additional patching functions (not related to streaming) might be created. In relation with the initial approach followed in #2631.
closed
https://github.com/huggingface/datasets/pull/2639
2021-07-13T15:08:45
2021-07-13T16:52:49
2021-07-13T16:52:49
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
943,484,913
2,638
Streaming for the Json loader
It was not using `open` in the builder. Therefore `pyarrow.json.read_json` was downloading the full file to start yielding rows. Moreover, it appeared that `pyarrow.json.read_json` was not really suited for streaming as it was downloading too much data and failing if `block_size` was not properly configured (related...
closed
https://github.com/huggingface/datasets/pull/2638
2021-07-13T14:37:06
2021-07-16T15:59:32
2021-07-16T15:59:31
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
943,044,514
2,636
Streaming for the Pandas loader
It was not using `open` in the builder. Therefore `pd.read_pickle` could fail when streaming from a private repo, for example. Indeed, when streaming, `open` is extended to support reading from remote files and handles authentication to the HF Hub.
closed
https://github.com/huggingface/datasets/pull/2636
2021-07-13T09:18:21
2021-07-13T14:37:24
2021-07-13T14:37:23
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
943,030,999
2,635
Streaming for the CSV loader
It was not using `open` in the builder. Therefore `pd.read_csv` was downloading the full file to start yielding rows. Indeed, when streaming, `open` is extended to support reading from remote files progressively.
closed
https://github.com/huggingface/datasets/pull/2635
2021-07-13T09:08:58
2021-07-13T15:19:38
2021-07-13T15:19:37
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
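The essence of these streaming fixes is consuming the file object progressively rather than materializing it first. A stdlib sketch of chunked row iteration under that assumption (not the actual loader code):

```python
import io

def iter_row_batches(fileobj, batch_size=2):
    # yield rows in small batches so a remote file can be consumed
    # progressively instead of being downloaded in full up front
    batch = []
    for line in fileobj:
        batch.append(line.rstrip("\n"))
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

f = io.StringIO("a,b\n1,2\n3,4\n")
print(list(iter_row_batches(f)))  # [['a,b', '1,2'], ['3,4']]
```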
942,805,621
2,634
Inject ASR template for lj_speech dataset
Related to: #2565, #2633. cc: @lewtun
closed
https://github.com/huggingface/datasets/pull/2634
2021-07-13T06:04:54
2021-07-13T09:05:09
2021-07-13T09:05:09
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
942,396,414
2,633
Update ASR tags
This PR updates the ASR tags of the 5 datasets added in #2565 following the change of task categories in #2620
closed
https://github.com/huggingface/datasets/pull/2633
2021-07-12T19:58:31
2021-07-13T05:45:26
2021-07-13T05:45:13
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
942,293,727
2,632
add image-classification task template
Snippet below is the tl;dr, but you can try it out directly here: [![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/005c025d41f0e48ae3d4ee61c0f20b70/image-classification-task-template-demo.ipynb) ```python from datasets import load_datase...
closed
https://github.com/huggingface/datasets/pull/2632
2021-07-12T17:41:03
2021-07-13T15:44:28
2021-07-13T15:28:16
{ "login": "nateraw", "id": 32437151, "type": "User" }
[]
true
[]
942,242,271
2,631
Delete extracted files when loading dataset
Close #2481, close #2604, close #2591. cc: @stas00, @thomwolf, @BirgerMoell
closed
https://github.com/huggingface/datasets/pull/2631
2021-07-12T16:39:33
2021-07-19T09:08:19
2021-07-19T09:08:19
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
942,102,956
2,630
Progress bars are not properly rendered in Jupyter notebook
## Describe the bug The progress bars are not Jupyter widgets; regular progress bars appear (like in a terminal). ## Steps to reproduce the bug ```python ds.map(tokenize, num_proc=10) ``` ## Expected results Jupyter widgets displaying the progress bars. ## Actual results Simple plain progress bars. cc...
closed
https://github.com/huggingface/datasets/issues/2630
2021-07-12T14:07:13
2022-02-03T15:55:33
2022-02-03T15:55:33
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
941,819,205
2,629
Load datasets from the Hub without requiring a dataset script
As a user I would like to be able to upload my csv/json/text/parquet/etc. files in a dataset repository on the Hugging Face Hub and be able to load this dataset with `load_dataset` without having to implement a dataset script. Moreover I would like to be able to specify which file goes into which split using the `da...
closed
https://github.com/huggingface/datasets/issues/2629
2021-07-12T08:45:17
2021-08-25T14:18:08
2021-08-25T14:18:08
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
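Part of this request is inferring which file belongs to which split from the file names alone. A toy sketch of that inference (hypothetical logic, not the rules the Hub actually applies):

```python
def infer_splits(filenames):
    # assign each file to the first split whose name appears in the filename
    splits = {}
    for name in filenames:
        for split in ("train", "validation", "test"):
            if split in name:
                splits.setdefault(split, []).append(name)
                break
    return splits

print(infer_splits(["train.csv", "test.csv"]))
# {'train': ['train.csv'], 'test': ['test.csv']}
```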
941,676,404
2,628
Use ETag of remote data files
Use ETag of remote data files to create config ID. Related to #2616.
closed
https://github.com/huggingface/datasets/pull/2628
2021-07-12T05:10:10
2021-07-12T14:08:34
2021-07-12T08:40:07
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
941,503,349
2,627
Minor fix tests with Windows paths
Minor fix tests with Windows paths.
closed
https://github.com/huggingface/datasets/pull/2627
2021-07-11T17:55:48
2021-07-12T14:08:47
2021-07-12T08:34:50
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
941,497,830
2,626
Use correct logger in metrics.py
Fixes #2624
closed
https://github.com/huggingface/datasets/pull/2626
2021-07-11T17:22:30
2021-07-12T14:08:54
2021-07-12T05:54:29
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
941,439,922
2,625
⚛️😇⚙️🔑
closed
https://github.com/huggingface/datasets/issues/2625
2021-07-11T12:14:34
2021-07-12T05:55:59
2021-07-12T05:55:59
{ "login": "hustlen0mics", "id": 50596661, "type": "User" }
[]
false
[]
941,318,247
2,624
can't set verbosity for `metric.py`
## Describe the bug ``` [2021-07-10 20:13:11,528][datasets.utils.filelock][INFO] - Lock 139705371374976 acquired on /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow.lock [2021-07-10 20:13:11,529][datasets.arrow_writer][INFO] - Done writing 32 examples in 6100 bytes /root/.cache/huggingfa...
closed
https://github.com/huggingface/datasets/issues/2624
2021-07-10T20:23:45
2021-07-12T05:54:29
2021-07-12T05:54:29
{ "login": "thomas-happify", "id": 66082334, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
941,265,342
2,623
[Metrics] added wiki_split metrics
Fixes: #2606 This pull request adds combined metrics for the wiki_split (English sentence split) task. Reviewer: @patrickvonplaten
closed
https://github.com/huggingface/datasets/pull/2623
2021-07-10T14:51:50
2021-07-14T14:28:13
2021-07-12T22:34:31
{ "login": "bhadreshpsavani", "id": 26653468, "type": "User" }
[]
true
[]
941,127,785
2,622
Integration with AugLy
**Is your feature request related to a problem? Please describe.** Facebook recently launched a library, [AugLy](https://github.com/facebookresearch/AugLy) , that has a unified API for augmentations for image, video and text. It would be pretty exciting to have it hooked up to HF libraries so that we can make NLP m...
closed
https://github.com/huggingface/datasets/issues/2622
2021-07-10T00:03:09
2023-07-20T13:18:48
2023-07-20T13:18:47
{ "login": "Darktex", "id": 890615, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
940,916,446
2,621
Use prefix to allow exceed Windows MAX_PATH
By using this prefix, you can exceed the Windows MAX_PATH limit. See: https://docs.microsoft.com/en-us/windows/win32/fileio/naming-a-file?redirectedfrom=MSDN#win32-file-namespaces Related to #2524, #2220.
closed
https://github.com/huggingface/datasets/pull/2621
2021-07-09T16:39:53
2021-07-16T15:28:12
2021-07-16T15:28:11
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
940,893,389
2,620
Add speech processing tasks
This PR replaces the `automatic-speech-recognition` task category with a broader `speech-processing` category. The tasks associated with this category are derived from the [SUPERB benchmark](https://arxiv.org/abs/2105.01051), and ASR is included in this set.
closed
https://github.com/huggingface/datasets/pull/2620
2021-07-09T16:07:29
2021-07-12T18:32:59
2021-07-12T17:32:02
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
940,858,236
2,619
Add ASR task for SUPERB
This PR starts building up the SUPERB benchmark by including the ASR task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/v0.2.0/downstream#asr-automatic-speech-recognition). Usage: ```python from datasets import load_dataset ...
closed
https://github.com/huggingface/datasets/pull/2619
2021-07-09T15:19:45
2021-07-15T08:55:58
2021-07-13T12:40:18
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
940,852,640
2,618
`filelock.py` Error
## Describe the bug It seems that the `filelock.py` went error. ``` >>> ds=load_dataset('xsum') ^CTraceback (most recent call last): File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB) ...
closed
https://github.com/huggingface/datasets/issues/2618
2021-07-09T15:12:49
2024-06-21T06:14:07
2023-11-23T19:06:19
{ "login": "liyucheng09", "id": 27999909, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
940,846,847
2,617
Fix missing EOL issue in to_json for old versions of pandas
Some versions of pandas don't add an EOL at the end of the output of `to_json`. Therefore users could end up having two samples in the same line Close https://github.com/huggingface/datasets/issues/2615
closed
https://github.com/huggingface/datasets/pull/2617
2021-07-09T15:05:45
2021-07-12T14:09:00
2021-07-09T15:28:33
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
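The fix boils down to guaranteeing a trailing newline after each serialized batch, since gluing batches without it concatenates the last row of one batch with the first row of the next. A minimal sketch with the standard library (not the pandas code path itself):

```python
import json

def batch_to_jsonlines(rows):
    # serialize one batch; some pandas versions omit the final EOL,
    # so add it explicitly before the next batch is appended
    out = "\n".join(json.dumps(row) for row in rows)
    if out and not out.endswith("\n"):
        out += "\n"
    return out

two_batches = batch_to_jsonlines([{"a": 1}]) + batch_to_jsonlines([{"a": 2}])
print(two_batches.splitlines())  # ['{"a": 1}', '{"a": 2}']
```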
940,799,038
2,616
Support remote data files
Add support for (streaming) remote data files: ```python data_files = f"https://huggingface.co/datasets/{repo_id}/resolve/main/{relative_file_path}" ds = load_dataset("json", split="train", data_files=data_files, streaming=True) ``` cc: @thomwolf
closed
https://github.com/huggingface/datasets/pull/2616
2021-07-09T14:07:38
2021-07-09T16:13:41
2021-07-09T16:13:41
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
true
[]
940,794,339
2,615
Jsonlines export error
## Describe the bug When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is by 10000 by default ## Steps to reproduce the bug This wha...
closed
https://github.com/huggingface/datasets/issues/2615
2021-07-09T14:02:05
2021-07-09T15:29:07
2021-07-09T15:28:33
{ "login": "TevenLeScao", "id": 26709476, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
940,762,427
2,614
Convert numpy scalar to python float in Pearsonr output
Following of https://github.com/huggingface/datasets/pull/2612
closed
https://github.com/huggingface/datasets/pull/2614
2021-07-09T13:22:55
2021-07-12T14:13:02
2021-07-09T14:04:38
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
940,759,852
2,613
Use ndarray.item instead of ndarray.tolist
This PR follows up on #2612 to use `numpy.ndarray.item` instead of `numpy.ndarray.tolist` as the latter is somewhat confusing to the developer (even though it works). Judging from the `numpy` docs, `ndarray.item` is closer to what we want: https://numpy.org/doc/stable/reference/generated/numpy.ndarray.item.html#nump...
closed
https://github.com/huggingface/datasets/pull/2613
2021-07-09T13:19:35
2021-07-12T14:12:57
2021-07-09T13:50:05
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
940,604,512
2,612
Return Python float instead of numpy.float64 in sklearn metrics
This PR converts the return type of all `sklearn` metrics to be Python `float` instead of `numpy.float64`. The reason behind this is that our Hub evaluation framework relies on converting benchmark-specific metrics to YAML ([example](https://huggingface.co/datasets/autonlp/autonlp-benchmark-raft-neelalex__raft-test-...
closed
https://github.com/huggingface/datasets/pull/2612
2021-07-09T09:48:09
2021-07-12T14:12:53
2021-07-09T13:03:54
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
940,307,053
2,611
More consistent naming
As per @stas00's suggestion in #2500, this PR inserts a space between the logo and the lib name (`🤗Datasets` -> `🤗 Datasets`) for consistency with the Transformers lib. Additionally, more consistent names are used for Datasets Hub, etc.
closed
https://github.com/huggingface/datasets/pull/2611
2021-07-09T00:09:17
2021-07-13T17:13:19
2021-07-13T16:08:30
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
939,899,829
2,610
Add missing WikiANN language tags
Add missing language tags for WikiANN datasets.
closed
https://github.com/huggingface/datasets/pull/2610
2021-07-08T14:08:01
2021-07-12T14:12:16
2021-07-08T15:44:04
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
939,616,682
2,609
Fix potential DuplicatedKeysError
Fix potential DuplicatedKeysError by ensuring keys are unique. We should promote, as a good practice, that keys be programmatically generated as unique, instead of read from data (which might not be unique).
closed
https://github.com/huggingface/datasets/pull/2609
2021-07-08T08:38:04
2021-07-12T14:13:16
2021-07-09T16:42:08
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
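The recommended practice above: derive keys from the running example index, which is unique by construction, rather than from a data field. A sketch of a `_generate_examples`-style generator (names are illustrative):

```python
def generate_examples(rows):
    # key on the running index instead of a field like row["id"],
    # which may contain duplicates in the raw data
    for idx, row in enumerate(rows):
        yield idx, row

rows = [{"id": "x", "text": "a"}, {"id": "x", "text": "b"}]
keys = [key for key, _ in generate_examples(rows)]
print(keys)  # [0, 1]
```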
938,897,626
2,608
Support streaming JSON files
Use `open` in the JSON dataset builder, so that it can be patched with `xopen` for streaming. Close #2607.
closed
https://github.com/huggingface/datasets/pull/2608
2021-07-07T13:30:22
2021-07-12T14:12:31
2021-07-08T16:08:41
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
938,796,902
2,607
Streaming local gzip compressed JSON line files is not working
## Describe the bug Using streaming to iterate on local gzip compressed JSON files raise a file not exist error ## Steps to reproduce the bug ```python from datasets import load_dataset streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True) next(iter(streamed_dataset))...
closed
https://github.com/huggingface/datasets/issues/2607
2021-07-07T11:36:33
2021-07-20T09:50:19
2021-07-08T16:08:41
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
938,763,684
2,606
[Metrics] addition of wiki_split metrics
**Is your feature request related to a problem? Please describe.** While training the model on the sentence split task in English, we need to evaluate the trained model on `Exact Match`, `SARI` and `BLEU` scores like this ![image](https://user-images.githubusercontent.com/26653468/124746876-ff5a3380-df3e-11eb-9a01...
closed
https://github.com/huggingface/datasets/issues/2606
2021-07-07T10:56:04
2021-07-12T22:34:31
2021-07-12T22:34:31
{ "login": "bhadreshpsavani", "id": 26653468, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "metric request", "color": "d4c5f9" } ]
false
[]
938,648,164
2,605
Make any ClientError trigger retry in streaming mode (e.g. ClientOSError)
During the FLAX sprint some users have this error when streaming datasets: ```python aiohttp.client_exceptions.ClientOSError: [Errno 104] Connection reset by peer ``` This error must trigger a retry instead of directly crashing. Therefore I extended the error type that triggers the retry to be the base aiohttp er...
closed
https://github.com/huggingface/datasets/pull/2605
2021-07-07T08:47:23
2021-07-12T14:10:27
2021-07-07T08:59:13
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
938,602,237
2,604
Add option to delete temporary files (e.g. extracted files) when loading dataset
I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables Having a simple way to delete the extracted files after usage (or even better, to strea...
closed
https://github.com/huggingface/datasets/issues/2604
2021-07-07T07:56:16
2021-07-19T09:08:18
2021-07-19T09:08:18
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
938,588,149
2,603
Fix DuplicatedKeysError in omp
Close #2598.
closed
https://github.com/huggingface/datasets/pull/2603
2021-07-07T07:38:32
2021-07-12T14:10:41
2021-07-07T12:56:35
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
938,555,712
2,602
Remove import of transformers
When pickling a tokenizer within multiprocessing, check that it is an instance of transformers PreTrainedTokenizerBase without importing transformers. Related to huggingface/transformers#12549 and #502.
closed
https://github.com/huggingface/datasets/pull/2602
2021-07-07T06:58:18
2021-07-12T14:10:22
2021-07-07T08:28:51
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
938,096,396
2,601
Fix `filter` with multiprocessing in case all samples are discarded
Fixes #2600 Also I moved the check for `num_proc` larger than dataset size added in #2566 up so that multiprocessing is not used with one process.
closed
https://github.com/huggingface/datasets/pull/2601
2021-07-06T17:06:28
2021-07-12T14:10:35
2021-07-07T12:50:31
{ "login": "mxschmdt", "id": 4904985, "type": "User" }
[]
true
[]
938,086,745
2,600
Crash when using multiprocessing (`num_proc` > 1) on `filter` and all samples are discarded
## Describe the bug If `filter` is applied to a dataset using multiprocessing (`num_proc` > 1) and all sharded datasets are empty afterwards (due to all samples being discarded), the program crashes. ## Steps to reproduce the bug ```python from datasets import Dataset data = Dataset.from_dict({'id': [0,1]}) dat...
closed
https://github.com/huggingface/datasets/issues/2600
2021-07-06T16:53:25
2021-07-07T12:50:31
2021-07-07T12:50:31
{ "login": "mxschmdt", "id": 4904985, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
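A defensive sketch of the sharded-filter logic behind this bug: clamp the worker count to the dataset size and tolerate shards that end up empty after filtering. Plain Python under those assumptions, not the library's actual sharding code:

```python
def sharded_filter(rows, predicate, num_proc):
    # never use more shards than rows, and concatenating empty
    # shards must still produce a valid (empty) result
    num_proc = max(1, min(num_proc, len(rows) or 1))
    shards = [rows[i::num_proc] for i in range(num_proc)]
    filtered = [[r for r in shard if predicate(r)] for shard in shards]
    return [r for shard in filtered for r in shard]

data = [{"id": 0}, {"id": 1}]
print(sharded_filter(data, lambda r: False, num_proc=4))  # []
```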
937,980,229
2,599
Update processing.rst with other export formats
Add other supported export formats than CSV in the docs.
closed
https://github.com/huggingface/datasets/pull/2599
2021-07-06T14:50:38
2021-07-12T14:10:16
2021-07-07T08:05:48
{ "login": "TevenLeScao", "id": 26709476, "type": "User" }
[]
true
[]
937,930,632
2,598
Unable to download omp dataset
## Describe the bug The omp dataset cannot be downloaded because of a DuplicatedKeysError ## Steps to reproduce the bug from datasets import load_dataset omp = load_dataset('omp', 'posts_labeled') print(omp) ## Expected results This code should download the omp dataset and print the dictionary ## Actual r...
closed
https://github.com/huggingface/datasets/issues/2598
2021-07-06T14:00:52
2021-07-07T12:56:35
2021-07-07T12:56:35
{ "login": "erikadistefano", "id": 25797960, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
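The `DuplicatedKeysError` above means the dataset script yields the same key for two different examples. A minimal sketch of the usual fix — using a running index as the key instead of a non-unique field (field names here are hypothetical, not the actual omp schema):

```python
# hypothetical _generate_examples for a dataset script: the enumeration
# index guarantees unique keys even if "post_id" repeats in the raw data
def _generate_examples(rows):
    for idx, row in enumerate(rows):
        yield idx, {"post_id": row["post_id"], "text": row["text"]}

rows = [
    {"post_id": 1, "text": "a"},
    {"post_id": 1, "text": "b"},  # duplicate post_id in the raw data
]
keys = [key for key, _ in _generate_examples(rows)]
print(keys)  # [0, 1]
```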
937,917,770
2,597
Remove redundant prepare_module
I have noticed that after implementing `load_dataset_builder` (#2500), there is a redundant call to `prepare_module`.
closed
https://github.com/huggingface/datasets/pull/2597
2021-07-06T13:47:45
2021-07-12T14:10:52
2021-07-07T13:01:46
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "refactoring", "color": "B67A40" } ]
true
[]
937,598,914
2,596
Transformer Class on dataset
Just wondering if you have any intention to create a TransformerClass: dataset --> dataset that makes deterministic transformations (i.e. not fit).
closed
https://github.com/huggingface/datasets/issues/2596
2021-07-06T07:27:15
2022-11-02T14:26:09
2022-11-02T14:26:09
{ "login": "arita37", "id": 18707623, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
937,483,120
2,595
ModuleNotFoundError: No module named 'datasets.tasks' while importing common voice datasets
Error traceback: --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-8-a7b592d3bca0> in <module>() 1 from datasets import load_dataset, load_metric 2 ----> 3 common_voice_train = load_da...
closed
https://github.com/huggingface/datasets/issues/2595
2021-07-06T03:20:55
2021-07-06T05:59:49
2021-07-06T05:59:49
{ "login": "profsatwinder", "id": 41314912, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
937,294,772
2,594
Fix BibTeX entry
Fix BibTeX entry.
closed
https://github.com/huggingface/datasets/pull/2594
2021-07-05T18:24:10
2021-07-06T04:59:38
2021-07-06T04:59:38
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
937,242,137
2,593
Support pandas 1.3.0 read_csv
Workaround for this issue in pandas 1.3.0 : https://github.com/pandas-dev/pandas/issues/42387 The csv reader raises an error: ```python /usr/local/lib/python3.7/dist-packages/pandas/io/parsers/readers.py in _refine_defaults_read(dialect, delimiter, delim_whitespace, engine, sep, error_bad_lines, warn_bad_lines, on...
closed
https://github.com/huggingface/datasets/pull/2593
2021-07-05T16:40:04
2021-07-05T17:14:14
2021-07-05T17:14:14
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
937,060,559
2,592
Add c4.noclean infos
Adding the data files checksums and the dataset size of the c4.noclean configuration of the C4 dataset
closed
https://github.com/huggingface/datasets/pull/2592
2021-07-05T12:51:40
2021-07-05T13:15:53
2021-07-05T13:15:52
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
936,957,975
2,591
Cached dataset overflowing disk space
I'm training a Swedish Wav2vec2 model on a Linux GPU and the Hugging Face cached dataset folder is completely filling up my disk space (I'm training on a dataset of around 500 GB). The cache folder is 500 GB (and now my disk space is full). Is there a way to toggle caching or set the caching to b...
closed
https://github.com/huggingface/datasets/issues/2591
2021-07-05T10:43:19
2021-07-19T09:08:19
2021-07-19T09:08:19
{ "login": "BirgerMoell", "id": 1704131, "type": "User" }
[]
false
[]
936,954,348
2,590
Add language tags
This PR adds some missing language tags needed for ASR datasets in #2565
closed
https://github.com/huggingface/datasets/pull/2590
2021-07-05T10:39:57
2021-07-05T10:58:48
2021-07-05T10:58:48
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
936,825,060
2,589
Support multilabel metrics
Currently, multilabel metrics are not supported because `predictions` and `references` are defined as `Value("int32")`. This PR creates a new feature type `OptionalSequence` which can act as either `Value("int32")` or `Sequence(Value("int32"))`, depending on the data passed. Close #2554.
closed
https://github.com/huggingface/datasets/pull/2589
2021-07-05T08:19:25
2022-07-29T10:56:25
2021-07-08T08:40:15
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
936,795,541
2,588
Fix test_is_small_dataset
Remove the environment variable fixture `env_max_in_memory_dataset_size`. This fixture does not work because the env variable is read in `datasets.config` when datasets is first loaded, and it is never reread during tests.
closed
https://github.com/huggingface/datasets/pull/2588
2021-07-05T07:46:26
2021-07-12T14:10:11
2021-07-06T17:09:30
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
936,771,339
2,587
Add aiohttp to tests extras require
Currently, none of the streaming tests are run within our CI test suite, because the streaming tests require aiohttp, which is missing from our tests extras require dependencies. Our CI test suite should be exhaustive and test all the library's functionalities.
closed
https://github.com/huggingface/datasets/pull/2587
2021-07-05T07:14:01
2021-07-05T09:04:38
2021-07-05T09:04:38
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
936,747,588
2,586
Fix misalignment in SQuAD
Fix misalignment between: - the answer text and - the answer_start within the context by keeping original leading blank spaces in the context. Fix #2585.
closed
https://github.com/huggingface/datasets/pull/2586
2021-07-05T06:42:20
2021-07-12T14:11:10
2021-07-07T13:18:51
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
936,484,419
2,585
sqaud_v2 dataset contains misalignment between the answer text and the context value at the answer index
## Describe the bug The built in huggingface squad_v2 dataset that you can access via datasets.load_dataset contains mis-alignment between the answers['text'] and the characters in the context at the location specified by answers['answer_start']. For example: id = '56d1f453e7d4791d009025bd' answers = {'text': ['P...
closed
https://github.com/huggingface/datasets/issues/2585
2021-07-04T15:39:49
2021-07-07T13:18:51
2021-07-07T13:18:51
{ "login": "mmajurski", "id": 9354454, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
936,049,736
2,584
wi_locness: reference latest leaderboard on codalab
The dataset's author asked me to put this codalab link into the dataset's README.
closed
https://github.com/huggingface/datasets/pull/2584
2021-07-02T20:26:22
2021-07-05T09:06:14
2021-07-05T09:06:14
{ "login": "aseifert", "id": 4944799, "type": "User" }
[]
true
[]
936,034,976
2,583
Error iteration over IterableDataset using Torch DataLoader
## Describe the bug I have an IterableDataset (created using streaming=True) and I am trying to create batches by passing this IterableDataset to the Torch DataLoader class. This throws an error, which is pasted below. I can do the same by using a Torch IterableDataset. One thing I noticed is that in the former case wh...
closed
https://github.com/huggingface/datasets/issues/2583
2021-07-02T19:55:58
2021-07-20T09:04:45
2021-07-05T23:48:23
{ "login": "LeenaShekhar", "id": 12227436, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
935,859,104
2,582
Add skip and take
As discussed in https://github.com/huggingface/datasets/pull/2375#discussion_r657084544 I added the `IterableDataset.skip` and `IterableDataset.take` methods that allows to do basic splitting of iterable datasets. You can create new dataset with the first `n` examples using `IterableDataset.take()`, or you can get a...
closed
https://github.com/huggingface/datasets/pull/2582
2021-07-02T15:10:19
2021-07-05T16:06:40
2021-07-05T16:06:39
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
935,783,588
2,581
Faster search_batch for ElasticsearchIndex due to threading
Hey, I think it makes sense to perform search_batch threaded, so ES can perform search in parallel. Cheers!
closed
https://github.com/huggingface/datasets/pull/2581
2021-07-02T13:42:07
2021-07-12T14:13:46
2021-07-12T09:52:51
{ "login": "mwrzalik", "id": 1376337, "type": "User" }
[]
true
[]
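The idea of the PR above — fan the queries of a batch out over a thread pool so Elasticsearch can serve them in parallel — can be sketched offline with a stand-in search function (both names below are hypothetical, not the library's actual API):

```python
from concurrent.futures import ThreadPoolExecutor

def search(query):
    # stand-in for a single Elasticsearch query (hypothetical)
    return {"query": query, "hits": len(query)}

def search_batch(queries, max_workers=8):
    # run the per-query searches concurrently; ES handles them in parallel
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(search, queries))

results = search_batch(["foo", "barbaz"])
print(results)
```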
935,767,421
2,580
Fix Counter import
Import from `collections` instead of `typing`.
closed
https://github.com/huggingface/datasets/pull/2580
2021-07-02T13:21:48
2021-07-02T14:37:47
2021-07-02T14:37:46
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
935,486,894
2,579
Fix BibTeX entry
Add missing contributor to BibTeX entry. cc: @abhishekkrthakur @thomwolf
closed
https://github.com/huggingface/datasets/pull/2579
2021-07-02T07:10:40
2021-07-02T07:33:44
2021-07-02T07:33:44
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
935,187,497
2,578
Support Zstandard compressed files
Close #2572. cc: @thomwolf
closed
https://github.com/huggingface/datasets/pull/2578
2021-07-01T20:22:34
2021-08-11T14:46:24
2021-07-05T10:50:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
934,986,761
2,576
Add mC4
AllenAI is now hosting the processed C4 and mC4 dataset in this repo: https://huggingface.co/datasets/allenai/c4 Thanks a lot to them ! In this PR I added the mC4 dataset builder. It supports 108 languages. You can load it with ```python from datasets import load_dataset en_mc4 = load_dataset("mc4", "en") f...
closed
https://github.com/huggingface/datasets/pull/2576
2021-07-01T15:51:25
2021-07-02T14:50:56
2021-07-02T14:50:55
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
934,876,496
2,575
Add C4
The old code for the C4 dataset was to generate the C4 with Apache Beam, as in Tensorflow Datasets. However AllenAI is now hosting the processed C4 dataset in this repo: https://huggingface.co/datasets/allenai/c4 Thanks a lot to them for their amazing work ! In this PR I changed the script to download and prepare ...
closed
https://github.com/huggingface/datasets/pull/2575
2021-07-01T13:58:08
2021-07-02T14:50:23
2021-07-02T14:50:23
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
934,632,378
2,574
Add streaming in load a dataset docs
Mention dataset streaming on the "loading a dataset" page of the documentation
closed
https://github.com/huggingface/datasets/pull/2574
2021-07-01T09:32:53
2021-07-01T14:12:22
2021-07-01T14:12:21
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
934,584,745
2,573
Finding right block-size with JSON loading difficult for user
As reported by @thomwolf, while loading a JSON Lines file with "json" loading script, he gets > json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 383)
open
https://github.com/huggingface/datasets/issues/2573
2021-07-01T08:48:35
2021-07-01T19:10:53
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
934,573,767
2,572
Support Zstandard compressed files
Add support for Zstandard compressed files: https://facebook.github.io/zstd/
closed
https://github.com/huggingface/datasets/issues/2572
2021-07-01T08:37:04
2023-01-03T15:34:01
2021-07-05T10:50:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
933,791,018
2,571
Filter expected warning log from transformers
Close #2569.
closed
https://github.com/huggingface/datasets/pull/2571
2021-06-30T14:48:19
2021-07-02T04:08:17
2021-07-02T04:08:17
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
933,402,521
2,570
Minor fix docs format for bertscore
Minor fix docs format for bertscore: - link to README - format of KWARGS_DESCRIPTION
closed
https://github.com/huggingface/datasets/pull/2570
2021-06-30T07:42:12
2021-06-30T15:31:01
2021-06-30T15:31:01
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
933,015,797
2,569
Weights of model checkpoint not initialized for RobertaModel for Bertscore
When applying bertscore out of the box, ```Some weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight']``` Following the typical ...
closed
https://github.com/huggingface/datasets/issues/2569
2021-06-29T18:55:23
2021-07-01T07:08:59
2021-06-30T07:35:49
{ "login": "suzyahyah", "id": 2980993, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
932,934,795
2,568
Add interleave_datasets for map-style datasets
### Add interleave_datasets for map-style datasets Add support for map-style datasets (i.e. `Dataset` objects) in `interleave_datasets`. It was only supporting iterable datasets (i.e. `IterableDataset` objects). ### Implementation details It works by concatenating the datasets and then re-ordering the indices to...
closed
https://github.com/huggingface/datasets/pull/2568
2021-06-29T17:19:24
2021-07-01T09:33:34
2021-07-01T09:33:33
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
932,933,536
2,567
Add ASR task and new languages to resources
This PR adds a new `automatic-speech-recognition` task to the list of supported tasks in `tasks.json` and also includes a few new languages missing from `common_voice`. Note: I used the [Papers with Code list](https://www.paperswithcode.com/area/speech/speech-recognition) as inspiration for the ASR subtasks
closed
https://github.com/huggingface/datasets/pull/2567
2021-06-29T17:18:01
2021-07-01T09:42:23
2021-07-01T09:42:09
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
932,804,725
2,566
fix Dataset.map when num_procs > num rows
closes #2470 ## Testing notes To run updated tests: ```sh pytest tests/test_arrow_dataset.py -k "BaseDatasetTest and test_map_multiprocessing" -s ``` With Python code (to view warning): ```python from datasets import Dataset dataset = Dataset.from_dict({"x": ["sample"]}) print(len(dataset)) dataset.map...
closed
https://github.com/huggingface/datasets/pull/2566
2021-06-29T15:07:07
2021-07-01T09:11:13
2021-07-01T09:11:13
{ "login": "connor-mccarthy", "id": 55268212, "type": "User" }
[]
true
[]
932,445,439
2,565
Inject templates for ASR datasets
This PR adds ASR templates for 5 of the most common speech datasets on the Hub, where "common" is defined by the number of models trained on them. I also fixed a bunch of the tags in the READMEs 😎
closed
https://github.com/huggingface/datasets/pull/2565
2021-06-29T10:02:01
2021-07-05T14:26:26
2021-07-05T14:26:26
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
932,389,639
2,564
concatenate_datasets for iterable datasets
Currently `concatenate_datasets` only works for map-style `Dataset`. It would be nice to have it work for `IterableDataset` objects as well. It would simply chain the iterables of the iterable datasets.
closed
https://github.com/huggingface/datasets/issues/2564
2021-06-29T08:59:41
2022-06-28T21:15:04
2022-06-28T21:15:04
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
932,387,639
2,563
interleave_datasets for map-style datasets
Currently the `interleave_datasets` functions only works for `IterableDataset`. Let's make it work for map-style `Dataset` objects as well. It would work the same way: either alternate between the datasets in order or randomly given probabilities specified by the user.
closed
https://github.com/huggingface/datasets/issues/2563
2021-06-29T08:57:24
2021-07-01T09:33:33
2021-07-01T09:33:33
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
932,333,436
2,562
Minor fix in loading metrics docs
Make some minor fixes in "Loading metrics" docs.
closed
https://github.com/huggingface/datasets/pull/2562
2021-06-29T07:55:11
2021-06-29T17:21:22
2021-06-29T17:21:22
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
932,321,725
2,561
Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True`
## Describe the bug If I have a local file defining a dataset builder class and I load it using the `load_dataset` functionality, the existing cache is ignored whenever the file is updated, even with `ignore_verifications=True`. This slows down debugging and cache generation for very large datasets. ## Steps to reproduce th...
closed
https://github.com/huggingface/datasets/issues/2561
2021-06-29T07:43:03
2022-08-04T11:58:36
2022-08-04T11:58:36
{ "login": "apsdehal", "id": 3616806, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]