Columns:
- id: int64 (599M to 3.26B)
- number: int64 (1 to 7.7k)
- title: string (length 1 to 290)
- body: string (length 0 to 228k)
- state: string (2 distinct values)
- html_url: string (length 46 to 51)
- created_at: timestamp[s] (2020-04-14 10:18:02 to 2025-07-23 08:04:53)
- updated_at: timestamp[s] (2020-04-27 16:04:17 to 2025-07-23 18:53:44)
- closed_at: timestamp[s] (2020-04-14 12:01:40 to 2025-07-23 16:44:42)
- user: dict
- labels: list (length 0 to 4)
- is_pull_request: bool (2 classes)
- comments: list (length 0 to 0)
1,038,427,245
3,174
Asserts replaced by exceptions (huggingface#3171)
I've replaced two asserts with their proper exceptions as described in issue #3171, following the contributing guidelines. PS: This is one of my first PRs, hoping I don't break anything!
closed
https://github.com/huggingface/datasets/pull/3174
2021-10-28T11:55:45
2021-11-06T06:35:32
2021-10-29T13:08:43
{ "login": "joseporiolayats", "id": 5772490, "type": "User" }
[]
true
[]
1,038,404,300
3,173
Fix issue with filelock filename being too long on encrypted filesystems
Infer max filename length in filelock on Unix-like systems. Should fix problems on encrypted filesystems such as eCryptfs. Fix #2924 cc: @lmmx
closed
https://github.com/huggingface/datasets/pull/3173
2021-10-28T11:28:57
2021-10-29T09:42:24
2021-10-29T09:42:24
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
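A minimal sketch of how a maximum filename length could be inferred on Unix-like systems, as the PR above describes (the function name and truncation logic here are illustrative, not the library's actual code):

```python
import os

def infer_max_filename_length(directory: str, fallback: int = 255) -> int:
    """Return the maximum filename length supported by the filesystem of `directory`.

    Falls back to 255 when the limit cannot be queried (e.g. on Windows).
    """
    try:
        # statvfs exposes the filesystem's name-length limit (143 on eCryptfs).
        return os.statvfs(directory).f_namemax
    except (AttributeError, OSError):
        return fallback

# Example: truncate a lock filename so it fits on encrypted filesystems.
lock_dir = "."
max_len = infer_max_filename_length(lock_dir)
lock_name = ("a" * 300)[: max_len - len(".lock")] + ".lock"
```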
1,038,351,587
3,172
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1`
## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below. The exception is raised only when the code runs within a specific context. Despite ~10h spent ...
closed
https://github.com/huggingface/datasets/issues/3172
2021-10-28T10:29:00
2024-04-02T18:13:21
2021-11-03T11:26:10
{ "login": "vlievin", "id": 9859840, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
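An illustrative sketch of the pattern the report describes (multiprocessed map over a small dataset); the dataset contents and function are placeholders, not the reporter's actual code:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"] * 100})

def preprocess(example):
    # Any lightweight transformation; the report says the crash happens after
    # map() returns, when worker datasets are garbage-collected (Dataset.__del__).
    return {"length": len(example["text"])}

# With num_proc=1 everything is reportedly fine; num_proc>1 triggers the
# SystemError 15 at the end of execution in the reporter's environment.
processed = ds.map(preprocess, num_proc=2)
print(processed)
```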
1,037,728,059
3,171
Raise exceptions instead of using assertions for control flow
Motivated by https://github.com/huggingface/transformers/issues/12789 in Transformers, one welcoming change would be replacing assertions with proper exceptions. The only type of assertions we should keep are those used as sanity checks. Currently, there is a total of 87 files with the `assert` statements (located u...
closed
https://github.com/huggingface/datasets/issues/3171
2021-10-27T18:26:52
2021-12-23T16:40:37
2021-12-23T16:40:37
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "good first issue", "color": "7057ff" } ]
false
[]
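A small before/after sketch of the kind of replacement the issue asks for (the snippet is generic, not taken from a specific file in the repo):

```python
# Before: an assert used for control flow / input validation.
def set_format(columns):
    assert isinstance(columns, list), f"Columns must be a list, got {type(columns)}"
    return columns

# After: a proper exception, which survives `python -O` and is easier to catch.
def set_format_fixed(columns):
    if not isinstance(columns, list):
        raise TypeError(f"Columns must be a list, got {type(columns)}")
    return columns
```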
1,037,601,926
3,170
Preserve ordering in `zip_dict`
Replace `set` with the `unique_values` generator in `zip_dict`. This PR fixes the problem with the different ordering of the example keys across different Python sessions caused by the `zip_dict` call in `Features.decode_example`.
closed
https://github.com/huggingface/datasets/pull/3170
2021-10-27T16:07:30
2021-10-29T13:09:37
2021-10-29T13:09:37
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
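A minimal sketch of an order-preserving unique-values generator of the kind the PR describes (the real helper lives in the library's utilities; this version is illustrative):

```python
def unique_values(values):
    """Yield each value once, preserving the order of first appearance."""
    seen = set()
    for value in values:
        if value not in seen:
            seen.add(value)
            yield value

# Unlike set(), the output order is deterministic across Python sessions.
keys = ["id", "text", "label", "text", "id"]
print(list(unique_values(keys)))  # ['id', 'text', 'label']
```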
1,036,773,357
3,169
Configurable max filename length in file locks
Resolve #2924 (https://github.com/huggingface/datasets/issues/2924#issuecomment-952330956), wherein assuming a maximum file lock filename length of 255 raises an OSError on encrypted drives (eCryptfs on Linux uses part of the lower filename, reducing the maximum filename size to 143). Allowing this limit to be...
closed
https://github.com/huggingface/datasets/pull/3169
2021-10-26T21:52:55
2021-10-28T16:14:14
2021-10-28T16:14:13
{ "login": "lmmx", "id": 2979452, "type": "User" }
[]
true
[]
1,036,673,263
3,168
OpenSLR/83 is empty
## Describe the bug As the summary says, openslr / SLR83 / train is empty. The dataset returned after loading indicates there are **zero** rows. The correct number should be **17877**. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('openslr', 'SLR83') ``` ## Expected resul...
closed
https://github.com/huggingface/datasets/issues/3168
2021-10-26T19:42:21
2021-10-29T10:04:09
2021-10-29T10:04:09
{ "login": "tyrius02", "id": 4561309, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,036,488,992
3,167
bookcorpusopen no longer works
## Describe the bug When using the latest version of datasets (1.14.0), I cannot use the `bookcorpusopen` dataset. The process always blocks around `9924 examples [00:06, 1439.61 examples/s]` when preparing the dataset. I also noticed that after half an hour the process is automatically killed because of the RAM usa...
closed
https://github.com/huggingface/datasets/issues/3167
2021-10-26T16:06:15
2021-11-17T15:53:46
2021-11-17T15:53:46
{ "login": "lucadiliello", "id": 23355969, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,036,450,283
3,166
Deprecate prepare_module
In version 1.13, `prepare_module` was deprecated. This PR adds a deprecation warning and removes its usage throughout the library, using `dataset_module_factory` or `metric_module_factory` instead. Fix #3165.
closed
https://github.com/huggingface/datasets/pull/3166
2021-10-26T15:28:24
2021-11-05T09:27:37
2021-11-05T09:27:36
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,036,448,998
3,165
Deprecate prepare_module
In version 1.13, `prepare_module` was deprecated. Add a deprecation warning and remove its usage throughout the library.
closed
https://github.com/huggingface/datasets/issues/3165
2021-10-26T15:27:15
2021-11-05T09:27:36
2021-11-05T09:27:36
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
false
[]
1,035,662,830
3,164
Add raw data files to the Hub with GitHub LFS for canonical dataset
I'm interested in sharing the CaseHOLD dataset (https://arxiv.org/abs/2104.08671) as a canonical dataset on the HuggingFace Hub and would like to add the raw data files to the Hub with GitHub LFS, since it seems like a more sustainable long term storage solution, compared to other storage solutions available to my team...
closed
https://github.com/huggingface/datasets/issues/3164
2021-10-25T23:28:21
2021-10-30T19:54:51
2021-10-30T19:54:51
{ "login": "zlucia", "id": 40370937, "type": "User" }
[]
false
[]
1,035,475,061
3,163
Add Image feature
Adds the Image feature. This feature is heavily inspired by the recently added Audio feature (#2324). Currently, this PR is pretty simple. Some considerations that need further discussion: * I've decided to use `Pillow`/`PIL` as the image decoding library. Another candidate I considered is `torchvision`, mostly bec...
closed
https://github.com/huggingface/datasets/pull/3163
2021-10-25T19:07:48
2021-12-30T06:37:21
2021-12-06T17:49:02
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
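A rough sketch of the Pillow-based decoding the PR mentions choosing, showing how raw image bytes or a path can be turned into a `PIL.Image` (illustrative only, not the feature's actual code):

```python
import io

import PIL.Image

def decode_image(value):
    """Decode an image stored either as a local file path or as raw bytes."""
    if isinstance(value, bytes):
        return PIL.Image.open(io.BytesIO(value))
    return PIL.Image.open(value)  # assume a local file path

# Usage with bytes read from a dataset file:
# img = decode_image(open("cat.png", "rb").read())
# print(img.size, img.mode)
```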
1,035,462,136
3,162
`datasets-cli test` should work with datasets without scripts
It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsTest/t...
open
https://github.com/huggingface/datasets/issues/3162
2021-10-25T18:52:30
2021-11-25T16:04:29
null
{ "login": "sashavor", "id": 14205986, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,035,444,292
3,161
Add riddle_sense dataset
Adding a new dataset for QA with riddles. I'm confused about the tagging process because it looks like the streamlit app loads data from the current repo, so is it something that should be done after merging or off my fork?
closed
https://github.com/huggingface/datasets/pull/3161
2021-10-25T18:30:56
2021-11-04T14:01:15
2021-11-04T14:01:15
{ "login": "ziyiwu9494", "id": 44691149, "type": "User" }
[]
true
[]
1,035,274,640
3,160
Better error msg if `len(predictions)` doesn't match `len(references)` in metrics
Improve the error message in `Metric.add_batch` if `len(predictions)` doesn't match `len(references)`. cc: @BramVanroy (feel free to test this code on your examples and review this PR)
closed
https://github.com/huggingface/datasets/pull/3160
2021-10-25T15:25:05
2021-11-05T11:44:59
2021-11-05T09:31:02
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
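A hedged sketch of the kind of check the PR improves, raising an explicit error message instead of failing further downstream (names are illustrative):

```python
def add_batch(predictions, references):
    if len(predictions) != len(references):
        raise ValueError(
            f"Mismatch in the number of predictions ({len(predictions)}) "
            f"and references ({len(references)})"
        )
    # ... proceed with writing the batch ...
    return list(zip(predictions, references))

# add_batch([0, 1, 1], [0, 1])  # would raise the explicit ValueError above
```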
1,035,174,560
3,159
Make inspect.get_dataset_config_names always return a non-empty list
Make all configs named, so that no special unnamed-config case needs to be handled differently. Fix #3135.
closed
https://github.com/huggingface/datasets/pull/3159
2021-10-25T13:59:43
2021-10-29T13:14:37
2021-10-28T05:44:49
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
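Usage sketch of the behaviour the PR targets; the exact returned names depend on the dataset, so the outputs below are only indicative:

```python
from datasets import get_dataset_config_names

# Dataset with explicit configs: the names are returned as before.
print(get_dataset_config_names("glue"))   # e.g. ['cola', 'sst2', ...]

# Dataset without an explicit config: a single default name is returned
# instead of an empty list, so callers never need a special case.
print(get_dataset_config_names("squad"))  # e.g. ['plain_text']
```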
1,035,158,070
3,158
Fix string encoding for Value type
Some metrics have `string` features but currently they fail if users pass integers instead. Indeed, the feature encoding that handles the conversion of the user's objects to the right Python type is missing a case for `string`, while it already works as expected for integers, floats and booleans. Here is an example code th...
closed
https://github.com/huggingface/datasets/pull/3158
2021-10-25T13:44:13
2021-10-25T14:12:06
2021-10-25T14:12:05
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,034,775,165
3,157
Fixed: duplicate parameter and missing parameter in docstring
Changes the duplicate parameter `data_files` in `DatasetBuilder.__init__` to the missing parameter `data_dir`.
closed
https://github.com/huggingface/datasets/pull/3157
2021-10-25T07:26:00
2021-10-25T14:02:19
2021-10-25T14:02:19
{ "login": "PanQiWei", "id": 46810637, "type": "User" }
[]
true
[]
1,034,468,757
3,155
Illegal instruction (core dumped) at datasets import
## Describe the bug I install datasets using conda and when I import datasets I get: "Illegal instruction (core dumped)" ## Steps to reproduce the bug ``` conda create --prefix path/to/env conda activate path/to/env conda install -c huggingface -c conda-forge datasets # exits with output "Illegal instruction...
closed
https://github.com/huggingface/datasets/issues/3155
2021-10-24T17:21:36
2021-11-18T19:07:04
2021-11-18T19:07:03
{ "login": "hacobe", "id": 91226467, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,034,361,806
3,154
Sacrebleu unexpected behaviour/requirement for data format
## Describe the bug When comparing with the original `sacrebleu` implementation, the `datasets` implementation does some strange things that I do not quite understand. This issue was triggered when I was trying to implement TER and found the datasets implementation of BLEU [here](https://github.com/huggingface/dataset...
closed
https://github.com/huggingface/datasets/issues/3154
2021-10-24T08:55:33
2021-10-31T09:08:32
2021-10-31T09:08:31
{ "login": "BramVanroy", "id": 2779410, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,034,179,198
3,153
Add TER (as implemented in sacrebleu)
Implements TER (Translation Edit Rate) as per its implementation in sacrebleu. Sacrebleu for BLEU scores is already implemented in `datasets` so I thought this would be a nice addition. I started from the sacrebleu implementation, as the two metrics have a lot in common. Verified with sacrebleu's [testing suite](...
closed
https://github.com/huggingface/datasets/pull/3153
2021-10-23T14:26:45
2021-11-02T11:04:11
2021-11-02T11:04:11
{ "login": "BramVanroy", "id": 2779410, "type": "User" }
[]
true
[]
1,034,039,379
3,152
Fix some typos in the documentation
null
closed
https://github.com/huggingface/datasets/pull/3152
2021-10-23T01:38:35
2021-10-25T14:27:36
2021-10-25T14:03:48
{ "login": "h4iku", "id": 3812788, "type": "User" }
[]
true
[]
1,033,890,501
3,151
Re-add faiss to windows testing suite
In recent versions, `faiss-cpu` seems to be available for Windows as well. See the [PyPi page](https://pypi.org/project/faiss-cpu/#files) to confirm. We can therefore include it for Windows in the setup file. At first, tests didn't pass due to permission problems caused by `NamedTemporaryFile` on Windows. T...
closed
https://github.com/huggingface/datasets/pull/3151
2021-10-22T19:34:29
2021-11-02T10:47:34
2021-11-02T10:06:03
{ "login": "BramVanroy", "id": 2779410, "type": "User" }
[]
true
[]
1,033,831,530
3,150
Faiss _is_ available on Windows
In the setup file, I find the following: https://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/setup.py#L171 However, FAISS does install perfectly fine on Windows on my system. You can also confirm this on the [PyPi page](https://pypi.org/project/faiss-cpu/#files), where Windows wh...
closed
https://github.com/huggingface/datasets/issues/3150
2021-10-22T18:07:16
2021-11-02T10:06:03
2021-11-02T10:06:03
{ "login": "BramVanroy", "id": 2779410, "type": "User" }
[]
false
[]
1,033,747,625
3,149
Add CMU Hinglish DoG Dataset for MT
Address part of #2841. Added the CMU Hinglish DoG Dataset as in GLUECoS. Added it as a separate dataset since, unlike other GLUECoS tasks, this one can't be evaluated with a BERT-like model. Consists of a parallel dataset between Hinglish (Hindi-English) and English; it can be used for Machine Translation between the two. ...
closed
https://github.com/huggingface/datasets/pull/3149
2021-10-22T16:17:25
2021-11-15T11:36:42
2021-11-15T10:27:45
{ "login": "Ishan-Kumar2", "id": 46553104, "type": "User" }
[]
true
[]
1,033,685,208
3,148
Streaming with num_workers != 0
## Describe the bug When using dataset streaming with the PyTorch DataLoader, setting num_workers to anything other than 0 causes the code to freeze forever before yielding the first batch. The code owner is likely @lhoestq ## Steps to reproduce the bug For your convenience, we've prepped a colab notebook th...
closed
https://github.com/huggingface/datasets/issues/3148
2021-10-22T15:07:17
2022-07-04T12:14:58
2022-07-04T12:14:58
{ "login": "justheuristic", "id": 3491902, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,033,607,659
3,147
Fix CLI test to ignore verifications when saving infos
Fix #3146.
closed
https://github.com/huggingface/datasets/pull/3147
2021-10-22T13:52:46
2021-10-27T08:01:50
2021-10-27T08:01:49
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,033,605,947
3,146
CLI test command throws NonMatchingSplitsSizesError when saving infos
When trying to generate a dataset's JSON metadata, a `NonMatchingSplitsSizesError` is thrown: ``` $ datasets-cli test datasets/arabic_billion_words --save_infos --all_configs Testing builder 'Alittihad' (1/10) Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: Unknown si...
closed
https://github.com/huggingface/datasets/issues/3146
2021-10-22T13:50:53
2021-10-27T08:01:49
2021-10-27T08:01:49
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,033,580,009
3,145
[when Image type will exist] provide a way to get the data as binary + filename
**Is your feature request related to a problem? Please describe.** When a dataset cell contains a value of type Image (be it from a remote URL, an Array2D/3D, or any other way to represent images), I want to be able to write the image to the disk, with the correct filename, and optionally to know its mimetype, in or...
closed
https://github.com/huggingface/datasets/issues/3145
2021-10-22T13:23:49
2021-12-22T11:05:37
2021-12-22T11:05:36
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,033,573,760
3,144
Infer the features if missing
**Is your feature request related to a problem? Please describe.** Some datasets, in particular community datasets, have no info file, thus no features. **Describe the solution you'd like** If a dataset has no features, the first loaded data (5-10 rows) could be used to infer the type. Related: `datasets` w...
closed
https://github.com/huggingface/datasets/issues/3144
2021-10-22T13:17:33
2022-09-08T08:23:10
2022-09-08T08:23:10
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,033,569,655
3,143
Provide a way to check if the features (in info) match with the data of a split
**Is your feature request related to a problem? Please describe.** I understand that currently the loaded data does not always have the type described in the info features. **Describe the solution you'd like** Provide a way to check if the rows have the type described by the info features. **Describe alternatives you'v...
open
https://github.com/huggingface/datasets/issues/3143
2021-10-22T13:13:36
2021-10-22T13:17:56
null
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,033,566,034
3,142
Provide a way to write a streamed dataset to the disk
**Is your feature request related to a problem? Please describe.** Streaming mode allows getting the first 100 rows of a dataset very quickly. But it does not cache the answer, so a later call to get the same 100 rows will send a request to the server again and again. **Describe the solution you'd like** ...
open
https://github.com/huggingface/datasets/issues/3142
2021-10-22T13:09:53
2024-01-12T07:26:43
null
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
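A workaround-style sketch of saving the first rows of a streamed dataset to a local JSON Lines file, until native caching/saving exists (the dataset name and config are just examples):

```python
import json
from itertools import islice

from datasets import load_dataset

streamed = load_dataset(
    "oscar", "unshuffled_deduplicated_en", split="train", streaming=True
)

# Materialize the first 100 rows once, so later reads don't hit the server again.
with open("first_100_rows.jsonl", "w", encoding="utf-8") as f:
    for row in islice(streamed, 100):
        f.write(json.dumps(row) + "\n")
```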
1,033,555,910
3,141
Fix caching bugs
This PR fixes some caching bugs (most likely introduced in the latest refactor): * remove ")" added by accident in the dataset dir name * correctly pass the namespace kwargs in `CachedDatasetModuleFactory` * improve the warning message if `HF_DATASETS_OFFLINE` is `True`
closed
https://github.com/huggingface/datasets/pull/3141
2021-10-22T12:59:25
2021-10-22T20:52:08
2021-10-22T13:47:05
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,033,524,079
3,139
Fix file/directory deletion on Windows
Currently, on Windows, some attempts to delete a dataset file/directory will fail with a `PermissionError`. Examples: - download a dataset, then force redownload it in the same session while keeping a reference to the downloaded dataset ```python from datasets import load_dataset dset = load_dataset("sst", s...
open
https://github.com/huggingface/datasets/issues/3139
2021-10-22T12:22:08
2021-10-22T12:22:08
null
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,033,379,997
3,138
More fine-grained taxonomy of error types
**Is your feature request related to a problem? Please describe.** Exceptions like `FileNotFoundError` can be raised by different parts of the code, and it's hard to detect which one did **Describe the solution you'd like** Give a specific exception type for every group of similar errors **Describe alternat...
open
https://github.com/huggingface/datasets/issues/3138
2021-10-22T09:35:29
2022-09-20T13:04:42
null
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,033,363,652
3,137
Fix numpy deprecation warning for ragged tensors
Numpy shows a deprecation warning when we call `np.array` on a list of ragged tensors without specifying the `dtype`. If their shapes match, the tensors can be collated together, otherwise the resulting array should have `dtype=np.object`. Fix #3084 cc @Rocketknight1
closed
https://github.com/huggingface/datasets/pull/3137
2021-10-22T09:17:46
2021-10-22T16:04:15
2021-10-22T16:04:14
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
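A small sketch of the distinction the fix relies on (plain NumPy, independent of the library code):

```python
import numpy as np

# Same shapes: the lists can be stacked into a regular 2D array.
regular = np.array([[1, 2, 3], [4, 5, 6]])

# Ragged shapes: passing dtype=object avoids NumPy's deprecation warning
# about implicitly creating an object array.
ragged = np.array([[1, 2], [3, 4, 5]], dtype=object)

print(regular.shape, ragged.shape)  # (2, 3) (2,)
```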
1,033,360,396
3,136
Fix script of Arabic Billion Words dataset to return all data
The script has a bug and only parses and generates a portion of the entire dataset. This PR fixes the loading script so that it properly parses the entire dataset. The current implementation generates the same number of examples as reported in the [original paper](https://arxiv.org/abs/1611.04033) for all configurat...
closed
https://github.com/huggingface/datasets/pull/3136
2021-10-22T09:14:24
2021-10-22T13:28:41
2021-10-22T13:28:40
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,033,294,299
3,135
Make inspect.get_dataset_config_names always return a non-empty list of configs
**Is your feature request related to a problem? Please describe.** Currently, some datasets have a configuration, while others don't. It would be simpler for the user to always have configuration names to refer to. **Describe the solution you'd like** In that sense inspect.get_dataset_config_names should always...
closed
https://github.com/huggingface/datasets/issues/3135
2021-10-22T08:02:50
2021-10-28T05:44:49
2021-10-28T05:44:49
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,033,251,755
3,134
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py
datasets version: 1.12.1 `metric = datasets.load_metric('rouge')` The error: > ConnectionError Traceback (most recent call last) > <ipython-input-3-dd10a0c5212f> in <module> > ----> 1 metric = datasets.load_metric('rouge') > > /usr/local/lib/python3.6/dist-packages/datasets/load....
closed
https://github.com/huggingface/datasets/issues/3134
2021-10-22T07:07:52
2023-09-14T01:19:45
2022-01-19T14:02:31
{ "login": "yanan1116", "id": 26405281, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,032,511,710
3,133
Support Audio feature in streaming mode
Fix #3132.
closed
https://github.com/huggingface/datasets/pull/3133
2021-10-21T13:37:57
2021-11-12T14:13:05
2021-11-12T14:13:04
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,032,505,430
3,132
Support Audio feature in streaming mode
Currently, Audio feature is only supported for non-streaming datasets. Due to the large size of many speech datasets, we should also support Audio feature in streaming mode.
closed
https://github.com/huggingface/datasets/issues/3132
2021-10-21T13:32:18
2021-11-12T14:13:04
2021-11-12T14:13:04
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,032,309,865
3,131
Add ADE20k
## Adding a Dataset - **Name:** ADE20k (it's actually called the MIT Scene Parsing Benchmark, a subset of ADE20k, but a lot of authors still call it ADE20k) - **Description:** A semantic segmentation dataset, consisting of 150 classes. - **Paper:** http://people.csail.mit.edu/bzhou/publication/scene-par...
closed
https://github.com/huggingface/datasets/issues/3131
2021-10-21T10:13:09
2023-01-27T14:40:20
2023-01-27T14:40:20
{ "login": "NielsRogge", "id": 48327001, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "vision", "color": "bfdadc" } ]
false
[]
1,032,299,417
3,130
Create SECURITY.md
To let the repository confirm feedback@huggingface.co as its security contact.
closed
https://github.com/huggingface/datasets/pull/3130
2021-10-21T10:03:03
2021-10-21T14:33:28
2021-10-21T14:31:50
{ "login": "zidingz", "id": 28839565, "type": "User" }
[]
true
[]
1,032,234,167
3,129
Support Audio feature for TAR archives in sequential access
Add Audio feature support for TAR archived files in sequential access. Fix #3128.
closed
https://github.com/huggingface/datasets/pull/3129
2021-10-21T08:56:51
2021-11-17T17:42:08
2021-11-17T17:42:07
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,032,201,870
3,128
Support Audio feature for TAR archives in sequential access
Currently, the Audio feature accesses each audio file by its file path. However, streamed TAR archive files do not allow random access to their archived files. Therefore, we should enhance the Audio feature to support TAR archived files in sequential access.
closed
https://github.com/huggingface/datasets/issues/3128
2021-10-21T08:23:01
2021-11-17T17:42:07
2021-11-17T17:42:07
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,032,100,613
3,127
datasets-cli: conversion of a tfds dataset to a huggingface one.
### Discussed in https://github.com/huggingface/datasets/discussions/3079 Originally posted by **vitalyshalumov**, October 14, 2021: I'm trying to convert a tfds dataset to a huggingface one. I've tried: 1. datasets-cli convert --tfds_path ~/tensorflow_datas...
open
https://github.com/huggingface/datasets/issues/3127
2021-10-21T06:14:27
2021-10-27T11:36:05
null
{ "login": "vitalyshalumov", "id": 33824221, "type": "User" }
[]
false
[]
1,032,093,055
3,126
"arabic_billion_words" dataset does not create the full dataset
## Describe the bug When running: raw_dataset = load_dataset('arabic_billion_words','Alittihad') the correct dataset file is pulled from the url. But, the generated dataset includes just a small portion of the data included in the file. This is true for all other portions of the "arabic_billion_words" dataset ('A...
closed
https://github.com/huggingface/datasets/issues/3126
2021-10-21T06:02:38
2021-10-22T13:28:40
2021-10-22T13:28:40
{ "login": "vitalyshalumov", "id": 33824221, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,032,046,666
3,125
Add SLR83 to OpenSLR
The PR resolves #3119, adding SLR83 (UK and Ireland dialects) to the previously created OpenSLR dataset.
closed
https://github.com/huggingface/datasets/pull/3125
2021-10-21T04:26:00
2021-10-22T20:10:05
2021-10-22T08:30:22
{ "login": "tyrius02", "id": 4561309, "type": "User" }
[]
true
[]
1,031,976,286
3,124
More efficient nested features encoding
Nested encoding of features wastes a lot of time on operations which are effectively doing nothing when lists are used. For example, if in the input we have a list of integers, `encoded_nested_example` will iterate over it and apply `encoded_nested_example` on every element even though it just returns the int as is. ...
closed
https://github.com/huggingface/datasets/pull/3124
2021-10-21T01:55:31
2021-11-02T15:07:13
2021-11-02T11:04:04
{ "login": "eladsegal", "id": 13485709, "type": "User" }
[]
true
[]
1,031,793,207
3,123
Segmentation fault when loading datasets from file
## Describe the bug Custom dataset loading sometimes segfaults and kills the process if chunks contain a variety of features. ## Steps to reproduce the bug Download an example file: ``` wget https://gist.githubusercontent.com/TevenLeScao/11e2184394b3fa47d693de2550942c6b/raw/4232704d08fbfcaf93e5b51def9e50515076...
closed
https://github.com/huggingface/datasets/issues/3123
2021-10-20T20:16:11
2021-11-02T14:57:07
2021-11-02T14:57:07
{ "login": "TevenLeScao", "id": 26709476, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,031,787,509
3,122
OSError with a custom dataset loading script
## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost identical and they have the same directory struc...
closed
https://github.com/huggingface/datasets/issues/3122
2021-10-20T20:08:39
2021-11-23T09:55:38
2021-11-23T09:55:38
{ "login": "suzanab", "id": 38602977, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,031,673,115
3,121
Use huggingface_hub.HfApi to list datasets/metrics
Delete `datasets.inspect.HfApi` and use `huggingface_hub.HfApi` instead. WIP until https://github.com/huggingface/huggingface_hub/pull/429 is merged, then wait for the new release of `huggingface_hub`, update the `huggingface_hub` version in `setup.py` and merge this PR. cc: @lhoestq
closed
https://github.com/huggingface/datasets/pull/3121
2021-10-20T17:48:29
2021-11-05T11:45:08
2021-11-05T09:48:36
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
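A quick sketch of the replacement call on the `huggingface_hub` side; the exact return type and attributes are version-dependent, so treat the output handling as an assumption:

```python
from huggingface_hub import HfApi

api = HfApi()

# Replaces datasets.inspect.HfApi for listing datasets hosted on the Hub.
datasets_on_hub = list(api.list_datasets())
print(len(datasets_on_hub))
print([d.id for d in datasets_on_hub[:5]])
```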
1,031,574,511
3,120
Correctly update metadata to preserve features when concatenating datasets with axis=1
This PR correctly updates metadata to preserve higher-level feature types (e.g. `ClassLabel`) in `datasets.concatenate_datasets` when `axis=1`. Previously, we would delete the feature metadata in `datasets.concatenate_datasets` if `axis=1` and restore the feature types from the arrow table schema in `Dataset.__init__`....
closed
https://github.com/huggingface/datasets/pull/3120
2021-10-20T15:54:58
2021-10-22T08:28:51
2021-10-21T14:50:21
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,031,328,044
3,119
Add OpenSLR 83 - Crowdsourced high-quality UK and Ireland English Dialect speech
## Adding a Dataset - **Name:** *openslr* - **Description:** *Data set which contains male and female recordings of English from various dialects of the UK and Ireland.* - **Paper:** *https://www.openslr.org/resources/83/about.html* - **Data:** *Eleven separate data files can be found via https://www.openslr.org/r...
closed
https://github.com/huggingface/datasets/issues/3119
2021-10-20T12:05:07
2021-10-22T19:00:52
2021-10-22T08:30:22
{ "login": "tyrius02", "id": 4561309, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,031,309,549
3,118
Fix CI error at each release commit
Fix test_load_dataset_canonical at release commit. Fix #3117.
closed
https://github.com/huggingface/datasets/pull/3118
2021-10-20T11:44:38
2021-10-20T13:02:36
2021-10-20T13:02:36
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,031,308,083
3,117
CI error at each release commit
After 1.12.0, there is a recurrent CI error at each release commit: https://app.circleci.com/pipelines/github/huggingface/datasets/8289/workflows/665d954d-e409-4602-8202-e678594d2946/jobs/51110 ``` ____________________ LoadTest.test_load_dataset_canonical _____________________ [gw0] win32 -- Python 3.6.8 C:\tools\...
closed
https://github.com/huggingface/datasets/issues/3117
2021-10-20T11:42:53
2021-10-20T13:02:35
2021-10-20T13:02:35
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,031,270,611
3,116
Update doc links to point to new docs
This PR: * updates the README links and the ADD_NEW_DATASET template to point to the new docs (the new docs don't have a section with the list of all the possible features, so I added that info to the `Features` docstring, which is then referenced in the ADD_NEW_DATASET template) * fixes some broken links in the `.rs...
closed
https://github.com/huggingface/datasets/pull/3116
2021-10-20T11:00:47
2021-10-22T08:29:28
2021-10-22T08:26:45
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
true
[]
1,030,737,524
3,115
Fill in dataset card for NCBI disease dataset
null
closed
https://github.com/huggingface/datasets/pull/3115
2021-10-19T20:57:05
2021-10-22T08:25:07
2021-10-22T08:25:07
{ "login": "edugp", "id": 17855740, "type": "User" }
[]
true
[]
1,030,693,130
3,114
load_from_disk in DatasetsDict/Dataset not working with PyArrowHDFS wrapper implementing fsspec.spec.AbstractFileSystem
## Describe the bug Passing a PyArrowHDFS implementation of fsspec.spec.AbstractFileSystem (in the `fs` param required by `load_from_disk` methods in `DatasetDict` (in datasets_dict.py) and `Dataset` (in arrow_dataset.py)) results in an error when calling the download method in the `fs` parameter. ## Steps to repr...
closed
https://github.com/huggingface/datasets/issues/3114
2021-10-19T20:01:45
2022-02-14T14:00:28
2022-02-14T14:00:28
{ "login": "francisco-perez-sorrosal", "id": 918006, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,030,667,547
3,113
Loading Data from HDF files
**Is your feature request related to a problem? Please describe.** More often than not I come across big HDF datasets, and currently there is no straightforward way to feed them to a dataset. **Describe the solution you'd like** I would love to see a `from_h5` method that gets an interface implemented by the user ...
open
https://github.com/huggingface/datasets/issues/3113
2021-10-19T19:26:46
2025-06-19T05:41:23
null
{ "login": "FeryET", "id": 30388648, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "good second issue", "color": "BDE59C" } ]
false
[]
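In the meantime, a hedged workaround sketch for getting HDF5 contents into a `Dataset` via `h5py` (the file and column names are placeholders):

```python
import h5py
from datasets import Dataset

# Read the arrays eagerly; for very large files a chunked/generator-based
# approach would be needed instead.
with h5py.File("data.h5", "r") as f:
    columns = {
        "inputs": f["inputs"][:].tolist(),
        "labels": f["labels"][:].tolist(),
    }

ds = Dataset.from_dict(columns)
print(ds)
```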
1,030,613,083
3,112
OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB
## Describe the bug Despite having batches way under 2Gb when running `datasets.map()`, after processing correctly the data of the first batch without fuss and irrespective of writer_batch_size (say 2,4,8,16,32,64 and 128 in my case), it returns the following error : > OverflowError: There was an overflow in the <c...
open
https://github.com/huggingface/datasets/issues/3112
2021-10-19T18:21:41
2021-10-19T18:52:29
null
{ "login": "BenoitDalFerro", "id": 69694610, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,030,598,983
3,111
concatenate_datasets removes ClassLabel typing.
## Describe the bug When concatenating two datasets, we lose typing of ClassLabel columns. I can work on this if this is a legitimate bug. ## Steps to reproduce the bug ```python import datasets from datasets import Dataset, ClassLabel, Value, concatenate_datasets DS_LEN = 100 my_dataset = Dataset.from_...
closed
https://github.com/huggingface/datasets/issues/3111
2021-10-19T18:05:31
2021-10-21T14:50:21
2021-10-21T14:50:21
{ "login": "Dref360", "id": 8976546, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
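The reproduction above is truncated; an illustrative stand-alone version of the same pattern (not the reporter's exact code) with the check on the concatenated features:

```python
from datasets import ClassLabel, Dataset, Features, Value, concatenate_datasets

features = Features({"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])})
ds1 = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]}, features=features)
ds2 = Dataset.from_dict({"text": ["c", "d"], "label": [1, 0]}, features=features)

combined = concatenate_datasets([ds1, ds2])
# The reported behaviour is that "label" comes back as a plain int64 Value
# instead of the original ClassLabel.
print(combined.features["label"])
```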
1,030,558,484
3,110
Stream TAR-based dataset using iter_archive
I converted all the datasets based on TAR archives to use iter_archive instead, so that they can be streamed. It means that around 80 datasets become streamable :)
closed
https://github.com/huggingface/datasets/pull/3110
2021-10-19T17:16:24
2021-11-05T17:48:49
2021-11-05T17:48:48
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
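A rough sketch of the `iter_archive` pattern inside a loading script's `_generate_examples`, which reads archived files sequentially instead of extracting them (the structure and field names are illustrative):

```python
def _generate_examples(files):
    """`files` is assumed to be the iterator returned by dl_manager.iter_archive(path):
    it yields (path_inside_archive, file_object) pairs in sequential order,
    which also works when the TAR archive is streamed."""
    key = 0
    for path, f in files:
        if path.endswith(".txt"):
            yield key, {"file": path, "text": f.read().decode("utf-8")}
            key += 1
```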
1,030,543,284
3,109
Update BibTeX entry
Update BibTeX entry.
closed
https://github.com/huggingface/datasets/pull/3109
2021-10-19T16:59:31
2021-10-19T17:13:28
2021-10-19T17:13:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,030,405,618
3,108
Add Google BLEU (aka GLEU) metric
This PR adds the NLTK implementation of Google BLEU metric. This is also a part of an effort to resolve an unfortunate naming collision between GLEU for machine translation and GLEU for grammatical error correction. I used [this page](https://huggingface.co/docs/datasets/add_metric.html) for reference. Please, point ...
closed
https://github.com/huggingface/datasets/pull/3108
2021-10-19T14:48:38
2021-10-25T14:07:04
2021-10-25T14:07:04
{ "login": "slowwavesleep", "id": 44175589, "type": "User" }
[]
true
[]
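For reference, the underlying NLTK calls look roughly like this (the token lists are toy examples):

```python
from nltk.translate.gleu_score import corpus_gleu, sentence_gleu

reference = ["the", "cat", "sat", "on", "the", "mat"]
hypothesis = ["the", "cat", "is", "on", "the", "mat"]

# Sentence-level Google BLEU: references are given as a list of token lists.
print(sentence_gleu([reference], hypothesis))

# Corpus-level: one list of references per hypothesis.
print(corpus_gleu([[reference]], [hypothesis]))
```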
1,030,357,527
3,107
Add paper BibTeX citation
Add paper BibTeX citation to README file.
closed
https://github.com/huggingface/datasets/pull/3107
2021-10-19T14:08:11
2021-10-19T14:26:22
2021-10-19T14:26:21
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,030,112,473
3,106
Fix URLs in blog_authorship_corpus dataset
After contacting the authors of the paper "Effects of Age and Gender on Blogging", they confirmed: - the old URLs are no longer valid - there are alternative host URLs Fix #3091.
closed
https://github.com/huggingface/datasets/pull/3106
2021-10-19T10:06:05
2021-10-19T12:50:40
2021-10-19T12:50:39
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,029,098,843
3,105
download_mode=`force_redownload` does not work on removed datasets
## Describe the bug If a cached dataset is removed from the library, I don't see how to delete it programmatically. I thought that using `force_redownload` would try to refresh the cache, then raise an exception, but it reuses the cache instead. ## Steps to reproduce the bug _requires to already have `wit` in ...
open
https://github.com/huggingface/datasets/issues/3105
2021-10-18T13:12:38
2021-10-22T09:36:10
null
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,029,080,412
3,104
Missing Zenodo 1.13.3 release
After `datasets` 1.13.3 release, this does not appear in Zenodo releases: https://zenodo.org/record/5570305 TODO: - [x] Contact Zenodo support - [x] Check it is fixed
closed
https://github.com/huggingface/datasets/issues/3104
2021-10-18T12:57:18
2021-10-22T13:22:25
2021-10-22T13:22:24
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,029,069,310
3,103
Fix project description in PyPI
Fix project description appearing in PyPI, so that it contains the content of the README.md file (like transformers). Currently, `datasets` project description appearing in PyPI shows the release instructions addressed to core maintainers: https://pypi.org/project/datasets/1.13.3/ Fix #3102.
closed
https://github.com/huggingface/datasets/pull/3103
2021-10-18T12:47:29
2021-10-18T12:59:57
2021-10-18T12:59:56
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
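The usual setuptools way to render the README as the PyPI description (a generic sketch, not necessarily the exact diff in the PR):

```python
# setup.py (relevant excerpt)
from setuptools import setup

with open("README.md", encoding="utf-8") as f:
    long_description = f.read()

setup(
    name="datasets",
    long_description=long_description,
    long_description_content_type="text/markdown",
    # ... other arguments unchanged ...
)
```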
1,029,067,062
3,102
Unsuitable project description in PyPI
Currently, `datasets` project description appearing in PyPI shows the release instructions addressed to core maintainers: https://pypi.org/project/datasets/1.13.3/
closed
https://github.com/huggingface/datasets/issues/3102
2021-10-18T12:45:00
2021-10-18T12:59:56
2021-10-18T12:59:56
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
false
[]
1,028,966,968
3,101
Update SUPERB to use Audio features
This is the same dataset refresh as the other Audio ones: https://github.com/huggingface/datasets/pull/3081 cc @patrickvonplaten
closed
https://github.com/huggingface/datasets/pull/3101
2021-10-18T11:05:18
2021-10-18T12:33:54
2021-10-18T12:06:46
{ "login": "anton-l", "id": 26864830, "type": "User" }
[]
true
[]
1,028,738,180
3,100
Replace FSTimeoutError with parent TimeoutError
PR #3050 introduced a dependency on `fsspec.FSTimeoutError`. Note that this error only exists from `fsspec` version `2021.06.0` (June 2021). To fix #3097, there are 2 alternatives: - Either pinning `fsspec` to versions newer than or equal to `2021.06.0` - Or replacing `fsspec.FSTimeoutError` with its parent `asyncio.Tim...
closed
https://github.com/huggingface/datasets/pull/3100
2021-10-18T07:37:09
2021-10-18T07:51:55
2021-10-18T07:51:54
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,028,338,078
3,099
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'
## Describe the bug When using `pip install datasets` or use `conda install -c huggingface -c conda-forge datasets` cannot install datasets ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("sst", "default") ``` ## Actual results ---------------------------...
closed
https://github.com/huggingface/datasets/issues/3099
2021-10-17T14:17:47
2021-11-09T16:42:29
2021-11-09T16:42:28
{ "login": "JTWang2000", "id": 49268567, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,028,210,790
3,098
Push to hub capabilities for `Dataset` and `DatasetDict`
This PR implements a `push_to_hub` method on `Dataset` and `DatasetDict`. This does not currently work in `IterableDatasetDict` nor `IterableDataset` as those are simple dicts and I would like your opinion on how you would like to implement this before going ahead and doing it. This implementation needs to be used w...
closed
https://github.com/huggingface/datasets/pull/3098
2021-10-17T04:12:44
2021-12-08T16:04:50
2021-11-24T11:25:36
{ "login": "LysandreJik", "id": 30755778, "type": "User" }
[]
true
[]
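A usage sketch of the new method; the repository name is a placeholder, and authentication via `huggingface-cli login` or a token is assumed:

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

# Uploads the dataset under your namespace on the Hugging Face Hub.
ds.push_to_hub("my-username/imdb-copy")

# DatasetDict works the same way, pushing every split.
dsd = load_dataset("imdb")
dsd.push_to_hub("my-username/imdb-copy")
```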
1,027,750,811
3,097
`ModuleNotFoundError: No module named 'fsspec.exceptions'`
## Describe the bug I keep running into an fsspec ModuleNotFound error ## Steps to reproduce the bug ```python >>> from datasets import get_dataset_infos 2021-10-15 15:25:37.863206: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudar...
closed
https://github.com/huggingface/datasets/issues/3097
2021-10-15T19:34:38
2021-10-18T07:51:54
2021-10-18T07:51:54
{ "login": "VictorSanh", "id": 16107619, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,027,535,685
3,096
Fix Audio feature mp3 resampling
Issue #3095 is related to mp3 resampling, not to `cast_column`. This PR fixes Audio feature mp3 resampling. Fix #3095.
closed
https://github.com/huggingface/datasets/pull/3096
2021-10-15T15:05:19
2021-10-15T15:38:30
2021-10-15T15:38:30
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,027,453,146
3,095
`cast_column` makes audio decoding fail
## Describe the bug After changing the sampling rate, automatic decoding fails. ## Steps to reproduce the bug ```python from datasets import load_dataset import datasets ds = load_dataset("common_voice", "ab", split="train") ds = ds.cast_column("audio", datasets.features.Audio(sampling_rate=16_000)) pr...
closed
https://github.com/huggingface/datasets/issues/3095
2021-10-15T13:36:58
2023-04-07T09:43:20
2021-10-15T15:38:30
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,027,328,633
3,094
Support loading a dataset from SQLite files
As requested by @julien-c, we could eventually support loading a dataset from SQLite files, like it is the case for JSON/CSV files.
closed
https://github.com/huggingface/datasets/issues/3094
2021-10-15T10:58:41
2022-10-03T16:32:29
2022-10-03T16:32:29
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "good second issue", "color": "BDE59C" } ]
false
[]
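Until native support lands, a hedged workaround sketch via pandas (the file, table, and query are placeholders):

```python
import sqlite3

import pandas as pd
from datasets import Dataset

# Read one table from the SQLite file into a DataFrame, then wrap it.
with sqlite3.connect("my_data.sqlite") as conn:
    df = pd.read_sql_query("SELECT * FROM my_table", conn)

ds = Dataset.from_pandas(df)
print(ds)
```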
1,027,262,124
3,093
Error loading json dataset with multiple splits if keys in nested dicts have a different order
## Describe the bug Loading a json dataset with multiple splits that have nested dicts with keys in different order results in the error below. If the keys in the nested dicts always have the same order or even if you just load a single split in which the nested dicts don't have the same order, everything works fin...
closed
https://github.com/huggingface/datasets/issues/3093
2021-10-15T09:33:25
2022-04-10T14:06:29
2022-04-10T14:06:29
{ "login": "dthulke", "id": 8331189, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,027,260,383
3,092
Fix JNLBA dataset
As mentioned in #3089, I've added more tags and also updated the link for the dataset, which was earlier using a Google Drive link. I'm having problems generating dummy data, as `datasets-cli dummy_data ./datasets/jnlpba --auto_generate --match_text_files "*.iob2"` is giving `datasets.keyhash.DuplicatedKeysError: FAIL...
closed
https://github.com/huggingface/datasets/pull/3092
2021-10-15T09:31:14
2022-07-10T14:36:49
2021-10-22T08:23:57
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
1,027,251,530
3,091
`blog_authorship_corpus` is broken
## Describe the bug The dataset `blog_authorship_corpus` is broken. By bypassing the checksum checks, the loading does not return any error but the resulting dataset is empty. I suspect it is because the data download url is broken (http://www.cs.biu.ac.il/~koppel/blogs/blogs.zip). ## Steps to reproduce the bug ...
closed
https://github.com/huggingface/datasets/issues/3091
2021-10-15T09:20:40
2021-10-19T13:06:10
2021-10-19T12:50:39
{ "login": "fdtomasi", "id": 12514317, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,027,100,371
3,090
Update BibTeX entry
Update BibTeX entry.
closed
https://github.com/huggingface/datasets/pull/3090
2021-10-15T05:39:27
2021-10-15T07:35:57
2021-10-15T07:35:57
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,026,973,360
3,089
JNLPBA Dataset
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug ``` ## Expected results The dataset loading script for this dataset is incorrect. This is a biomedical dataset used for named entity recognition. The entities in ...
closed
https://github.com/huggingface/datasets/issues/3089
2021-10-15T01:16:02
2021-10-22T08:23:57
2021-10-22T08:23:57
{ "login": "sciarrilli", "id": 10460111, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,026,920,369
3,088
Use template column_mapping to transmit_format instead of template features
Use `template.column_mapping` to check for modified columns since `template.features` represent a generic template/column mapping. Fix #3087 TODO: - [x] Add a test
closed
https://github.com/huggingface/datasets/pull/3088
2021-10-14T23:49:40
2021-10-15T14:40:05
2021-10-15T10:11:04
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,026,780,469
3,087
Removing label column in a text classification dataset yields to errors
## Describe the bug This looks like #3059 but it's not linked to the cache this time. Removing the `label` column from a text classification dataset and then performing any processing will result in an error. To reproduce: ```py from datasets import load_dataset from transformers import AutoTokenizer raw_da...
closed
https://github.com/huggingface/datasets/issues/3087
2021-10-14T20:12:50
2021-10-15T10:11:04
2021-10-15T10:11:04
{ "login": "sgugger", "id": 35901082, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,026,481,905
3,086
Remove _resampler from Audio fields
The `_resampler` Audio attribute was implemented to optimize audio resampling, but it should not be cached. This PR removes `_resampler` from Audio fields, so that it is not returned by `fields()` or `asdict()`. Fix #3083.
closed
https://github.com/huggingface/datasets/pull/3086
2021-10-14T14:38:50
2021-10-14T15:13:41
2021-10-14T15:13:40
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
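A generic dataclass sketch of why this matters: attributes assigned in `__post_init__` rather than declared as fields are not reported by `fields()` or `asdict()` (plain-Python behaviour, not the library's actual code):

```python
from dataclasses import asdict, dataclass, fields

@dataclass
class Audio:
    sampling_rate: int = 16_000

    def __post_init__(self):
        # Not declared as a field, so it stays out of fields() and asdict(),
        # and therefore out of any cached/serialized representation.
        self._resampler = None

audio = Audio()
print([f.name for f in fields(audio)])  # ['sampling_rate']
print(asdict(audio))                    # {'sampling_rate': 16000}
```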
1,026,467,384
3,085
Fixes to `to_tf_dataset`
null
closed
https://github.com/huggingface/datasets/pull/3085
2021-10-14T14:25:56
2021-10-21T15:05:29
2021-10-21T15:05:28
{ "login": "Rocketknight1", "id": 12866554, "type": "User" }
[]
true
[]
1,026,428,992
3,084
VisibleDeprecationWarning when using `set_format("numpy")`
Code to reproduce: ``` from datasets import load_dataset dataset = load_dataset("glue", "mnli") from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('distilbert-base-cased') def tokenize_function(dataset): return tokenizer(dataset['premise']) tokenized_datasets = dataset....
closed
https://github.com/huggingface/datasets/issues/3084
2021-10-14T13:53:01
2021-10-22T16:04:14
2021-10-22T16:04:14
{ "login": "Rocketknight1", "id": 12866554, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,026,397,062
3,083
Datasets with Audio feature raise error when loaded from cache due to _resampler parameter
## Describe the bug As reported by @patrickvonplaten, when loaded from the cache, datasets containing the Audio feature raise TypeError. ## Steps to reproduce the bug ```python from datasets import load_dataset # load first time works ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean") # ...
closed
https://github.com/huggingface/datasets/issues/3083
2021-10-14T13:23:53
2021-10-14T15:13:40
2021-10-14T15:13:40
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,026,388,994
3,082
Fix error related to huggingface_hub timeout parameter
The `huggingface_hub` package added the parameter `timeout` from version 0.0.19. This PR bumps this minimal version. Fix #3080.
closed
https://github.com/huggingface/datasets/pull/3082
2021-10-14T13:17:47
2021-10-14T14:39:52
2021-10-14T14:39:51
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,026,383,749
3,081
[Audio datasets] Adapting all audio datasets
This PR adds the new `Audio(...)` features - see: https://github.com/huggingface/datasets/pull/2324 to the most important audio datasets: - Librispeech - Timit - Common Voice - AMI - ... (others I'm forgetting now) The PR is currently blocked because the following leads to a problem: ```python from dataset...
closed
https://github.com/huggingface/datasets/pull/3081
2021-10-14T13:13:45
2021-10-15T12:52:03
2021-10-15T12:22:33
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
1,026,380,626
3,080
Error related to timeout keyword argument
## Describe the bug As reported by @patrickvonplaten, a TypeError is raised when trying to load a dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean") ``` ## Actual results ``` TypeError: dataset_info() got ...
closed
https://github.com/huggingface/datasets/issues/3080
2021-10-14T13:10:58
2021-10-14T14:39:51
2021-10-14T14:39:51
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,026,150,362
3,077
Fix loading a metric with internal import
After refactoring the module factory (#2986), a bug was introduced when loading metrics with internal imports. This PR adds a new test case and fixes this bug. Fix #3076. CC: @sgugger @merveenoyan
closed
https://github.com/huggingface/datasets/pull/3077
2021-10-14T09:06:58
2021-10-14T09:14:56
2021-10-14T09:14:55
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,026,113,484
3,076
Error when loading a metric
## Describe the bug As reported by @sgugger, after last release, exception is thrown when loading a metric. ## Steps to reproduce the bug ```python from datasets import load_metric metric = load_metric("squad_v2") ``` ## Actual results ``` FileNotFoundError Traceback (most recent ...
closed
https://github.com/huggingface/datasets/issues/3076
2021-10-14T08:29:27
2021-10-14T09:14:55
2021-10-14T09:14:55
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,026,103,388
3,075
Updates LexGLUE and MultiEURLEX README.md files
Updates LexGLUE and MultiEURLEX README.md files - Fix leaderboard in LexGLUE. - Fix an error in the CaseHOLD data example. - Turn MultiEURLEX dataset statistics table into HTML to nicely render in HF website.
closed
https://github.com/huggingface/datasets/pull/3075
2021-10-14T08:19:16
2021-10-18T10:13:40
2021-10-18T10:13:40
{ "login": "iliaschalkidis", "id": 1626984, "type": "User" }
[]
true
[]
1,025,940,085
3,074
add XCSR dataset
Hi, I wanted to add the [XCSR](https://inklab.usc.edu//XCSR/xcsr_datasets) dataset to huggingface! :) I followed the instructions for adding a new dataset to huggingface and have all the required files ready now! It would be super helpful if you could take a look and review them. Thanks in advance for your time and ...
closed
https://github.com/huggingface/datasets/pull/3074
2021-10-14T04:39:59
2021-11-08T13:52:36
2021-11-08T13:52:36
{ "login": "yangxqiao", "id": 42788901, "type": "User" }
[]
true
[]
1,025,718,469
3,073
Import error installing with ppc64le
## Describe the bug Installing the datasets library with a computer running with ppc64le seems to cause an issue when importing the datasets library. ``` python Python 3.6.13 | packaged by conda-forge | (default, Sep 23 2021, 07:37:44) [GCC 9.4.0] on linux Type "help", "copyright", "credits" or "license" for...
closed
https://github.com/huggingface/datasets/issues/3073
2021-10-13T21:37:23
2021-10-14T16:35:46
2021-10-14T16:33:28
{ "login": "gcervantes8", "id": 21228908, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,025,233,152
3,072
Fix pathlib patches for streaming
Fix issue https://github.com/huggingface/datasets/issues/2866 (for good this time) `counter` now works in both streaming and non-streaming mode. And the `AttributeError: 'str' object has no attribute 'as_posix'` related to the patch of Path.open is fixed as well Note : the patches should only affect the datasets...
closed
https://github.com/huggingface/datasets/pull/3072
2021-10-13T13:11:15
2021-10-13T13:31:05
2021-10-13T13:31:05
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,024,893,493
3,071
Custom plain text dataset, plain json dataset and plain csv dataset are removed from the datasets template folder
## Adding a Dataset - **Name:** text, json, csv - **Description:** I am developing a customized dataset loading script. The problem is mainly that my custom dataset is separated into many files, and I only find a dataset loading template in [https://github.com/huggingface/datasets/blob/1.2.1/datasets/json/json.py](ht...
closed
https://github.com/huggingface/datasets/issues/3071
2021-10-13T07:32:10
2021-10-13T08:27:04
2021-10-13T08:27:03
{ "login": "zixiliuUSC", "id": 49173327, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]