Schema (field: type, observed range):
- id: int64, 599M to 3.26B
- number: int64, 1 to 7.7k
- title: string, length 1 to 290
- body: string, length 0 to 228k
- state: string, 2 classes
- html_url: string, length 46 to 51
- created_at: timestamp[s], 2020-04-14 10:18:02 to 2025-07-23 08:04:53
- updated_at: timestamp[s], 2020-04-27 16:04:17 to 2025-07-23 18:53:44
- closed_at: timestamp[s], 2020-04-14 12:01:40 to 2025-07-23 16:44:42
- user: dict
- labels: list, length 0 to 4
- is_pull_request: bool, 2 classes
- comments: list, length 0 to 0
#5211 | PR | closed | id 1,438,544,617
title: Update Overview.ipynb google colab
body: - removed metrics stuff - added image example - added audio example (with ffmpeg instructions) - updated the "add a new dataset" section
url: https://github.com/huggingface/datasets/pull/5211
created: 2022-11-07T15:23:52 | updated: 2022-11-29T15:59:48 | closed: 2022-11-29T15:54:17
user: lhoestq (id 42851186) | labels: [] | comments: []

#5210 | PR | closed | id 1,438,492,507
title: Tweak readme
body: Tweaked some paragraphs mentioning the modalities we support + added a paragraph on security
url: https://github.com/huggingface/datasets/pull/5210
created: 2022-11-07T14:51:23 | updated: 2022-11-24T11:35:07 | closed: 2022-11-24T11:26:16
user: lhoestq (id 42851186) | labels: [] | comments: []

#5209 | issue | closed | id 1,438,367,678
title: Implement ability to define splits in metadata section of dataset card
body: ### Feature request If you go here: https://huggingface.co/datasets/inria-soda/tabular-benchmark/tree/main you will see bunch of folders that has various CSV files. I’d like dataset viewer to show these files instead of only one dataset like it currently does. (and also people to be able to load them as splits inste...
url: https://github.com/huggingface/datasets/issues/5209
created: 2022-11-07T13:27:16 | updated: 2023-07-21T14:36:02 | closed: 2023-07-21T14:36:01
user: merveenoyan (id 53175384) | labels: enhancement (a2eeef) | comments: []

#5208 | PR | closed | id 1,438,035,707
title: Refactor CI hub fixtures to use monkeypatch instead of patch
body: Minor refactoring of CI to use `pytest` `monkeypatch` instead of `unittest` `patch`.
url: https://github.com/huggingface/datasets/pull/5208
created: 2022-11-07T09:25:05 | updated: 2022-11-08T06:51:20 | closed: 2022-11-08T06:49:17
user: albertvillanova (id 8515462) | labels: [] | comments: []

#5207 | issue | open | id 1,437,858,506
title: Connection error of the HuggingFace's dataset Hub due to SSLError with proxy
body: ### Describe the bug It's weird. I could not normally connect the dataset Hub of HuggingFace due to a SSLError in my office. Even when I try to connect using my company's proxy address (e.g., http_proxy and https_proxy), I'm getting the SSLError issue. What should I do to download the datanet stored in Hugg...
url: https://github.com/huggingface/datasets/issues/5207
created: 2022-11-07T06:56:23 | updated: 2025-03-08T09:04:10 | closed: null
user: leemgs (id 82404) | labels: [] | comments: []

#5206 | issue | closed | id 1,437,223,894
title: Use logging instead of printing to console
body: ### Describe the bug Some logs ([here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L778), [here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L786), and [here](https://github.com/huggingfa...
url: https://github.com/huggingface/datasets/issues/5206
created: 2022-11-05T23:48:02 | updated: 2022-11-06T00:06:00 | closed: 2022-11-06T00:05:59
user: bilelomrani1 (id 16692099) | labels: [] | comments: []
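Issue #5206 above asks for library output to go through `logging` rather than `print`. A minimal sketch of the pattern (module-level logger, application-controlled verbosity); the logger name and function are illustrative, not the actual `datasets` internals:

```python
import logging

# Module-level logger; the dotted name is illustrative, not the one
# the datasets library actually uses.
logger = logging.getLogger("datasets.builder")

def download_and_prepare(path):
    # Instead of print(f"Downloading {path}"), emit a log record that the
    # application can silence, redirect, or reformat via handlers/levels.
    logger.info("Downloading and preparing dataset at %s", path)
    return path

# The application, not the library, decides whether messages are shown.
logging.basicConfig(level=logging.INFO)
download_and_prepare("/tmp/my_dataset")
```

Because records go through handlers, a downstream application can suppress them entirely with `logging.getLogger("datasets.builder").setLevel(logging.ERROR)`, which is exactly what `print` calls make impossible.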
#5205 | PR | closed | id 1,437,221,987
title: Add missing `DownloadConfig.use_auth_token` value
body: This PR solves https://github.com/huggingface/datasets/issues/5204 Now the `token` is propagated so that `DownloadConfig.use_auth_token` value is set before trying to download private files from existing datasets in the Hub.
url: https://github.com/huggingface/datasets/pull/5205
created: 2022-11-05T23:36:36 | updated: 2022-11-08T08:13:00 | closed: 2022-11-07T16:20:24
user: alvarobartt (id 36760800) | labels: [] | comments: []

#5204 | issue | closed | id 1,437,221,259
title: `push_to_hub` not propagating `token` through `DownloadConfig`
body: ### Describe the bug When trying to upload a new 🤗 Dataset to the Hub via Python, and providing the `token` as a parameter to the `Dataset.push_to_hub` function, it just works for the first time, assuming that the dataset didn't exist before. But when trying to run `Dataset.push_to_hub` again over the same dataset...
url: https://github.com/huggingface/datasets/issues/5204
created: 2022-11-05T23:32:20 | updated: 2022-11-08T10:12:09 | closed: 2022-11-08T10:12:08
user: alvarobartt (id 36760800) | labels: [] | comments: []
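The #5204/#5205 pair is about a caller-supplied `token` not being propagated into the download configuration. A schematic, stdlib-only sketch of the fix pattern; the class and field names mirror the report, but everything else is simplified and is not the real `datasets` implementation:

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass
class DownloadConfig:
    # Mirrors the field named in the issue; all other fields are omitted.
    use_auth_token: Optional[str] = None

def push_to_hub(token: Optional[str], download_config: Optional[DownloadConfig] = None):
    # The reported bug: a fresh config was built without the caller's token,
    # so re-pushing to an existing (private) repo failed to authenticate.
    # The fix pattern: propagate `token` into the config before downloading.
    config = download_config or DownloadConfig()
    if token is not None and config.use_auth_token is None:
        config = replace(config, use_auth_token=token)
    return config
```

An explicitly configured token is left untouched; the caller's `token` only fills the gap when the config has none.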
#5203 | PR | closed | id 1,436,710,518
title: Update canonical links to Hub links
body: This PR updates some of the canonical dataset links to their corresponding links on the Hub; closes #5200.
url: https://github.com/huggingface/datasets/pull/5203
created: 2022-11-04T22:50:50 | updated: 2022-11-07T18:43:05 | closed: 2022-11-07T18:40:19
user: stevhliu (id 59462357) | labels: [] | comments: []

#5202 | issue | closed | id 1,435,886,090
title: CI fails after bulk edit of canonical datasets
body: ``` ______ test_get_dataset_config_info[paws-labeled_final-expected_splits2] _______ [gw0] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python path = 'paws', config_name = 'labeled_final' expected_splits = ['train', 'test', 'validation'] @pytest.mark.parametrize( "path, config...
url: https://github.com/huggingface/datasets/issues/5202
created: 2022-11-04T10:51:20 | updated: 2023-02-16T09:11:10 | closed: 2023-02-16T09:11:10
user: albertvillanova (id 8515462) | labels: bug (d73a4a) | comments: []

#5201 | PR | closed | id 1,435,881,554
title: Do not sort splits in dataset info
body: I suggest not to sort splits by their names in dataset_info in README so that they are displayed in the order specified in the loading script. Otherwise `test` split is displayed first, see this repo: https://huggingface.co/datasets/paws What do you think? But I added sorting in tests to fix CI (for the same datase...
url: https://github.com/huggingface/datasets/pull/5201
created: 2022-11-04T10:47:21 | updated: 2022-11-04T14:47:37 | closed: 2022-11-04T14:45:09
user: polinaeterna (id 16348744) | labels: [] | comments: []

#5200 | issue | closed | id 1,435,831,559
title: Some links to canonical datasets in the docs are outdated
body: As we don't have canonical datasets in the github repo anymore, some old links to them doesn't work. I don't know how many of them are there, I found link to SuperGlue here: https://huggingface.co/docs/datasets/dataset_script#multiple-configurations, probably there are more of them. These links should be replaced by li...
url: https://github.com/huggingface/datasets/issues/5200
created: 2022-11-04T10:06:21 | updated: 2022-11-07T18:40:20 | closed: 2022-11-07T18:40:20
user: polinaeterna (id 16348744) | labels: documentation (0075ca) | comments: []

#5199 | PR | closed | id 1,434,818,836
title: Deprecate dummy data generation command
body: Deprecate the `dummy_data` CLI command.
url: https://github.com/huggingface/datasets/pull/5199
created: 2022-11-03T15:05:54 | updated: 2022-11-04T14:01:50 | closed: 2022-11-04T13:59:47
user: mariosasko (id 47462742) | labels: [] | comments: []

#5198 | PR | closed | id 1,434,699,165
title: Add note about the name of a dataset script
body: Add note that a dataset script should has the same name as a repo/dir, a bit related to this issue https://github.com/huggingface/datasets/issues/5193 also fixed two minor issues in audio docs (broken links)
url: https://github.com/huggingface/datasets/pull/5198
created: 2022-11-03T13:51:32 | updated: 2022-11-04T12:47:59 | closed: 2022-11-04T12:46:01
user: polinaeterna (id 16348744) | labels: [] | comments: []

#5197 | PR | open | id 1,434,676,150
title: [zstd] Use max window log size
body: ZstdDecompressor has a parameter `max_window_size` to limit max memory usage when decompressing zstd files. The default `max_window_size` is not enough when files are compressed by `zstd --ultra` flags. Change `max_window_size` to the zstd's max window size. NOTE, the `zstd.WINDOWLOG_MAX` is the log_2 value of the m...
url: https://github.com/huggingface/datasets/pull/5197
created: 2022-11-03T13:35:58 | updated: 2022-11-03T13:45:19 | closed: null
user: reyoung (id 728699) | labels: [] | comments: []
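PR #5197 raises the decompressor's `max_window_size` to zstd's maximum, noting that `zstd.WINDOWLOG_MAX` is the log2 of that size. The conversion it relies on is a simple bit shift; a stdlib-only illustration, where the constant value 31 is an assumption (typical for 64-bit builds, real bindings expose the actual value):

```python
# zstd expresses its decompression window limit as a log2 ("window log");
# the byte size is 2**window_log. The value 31 below is an assumption,
# not read from any zstd library.
WINDOWLOG_MAX = 31

def max_window_size(window_log):
    # 2**window_log bytes, computed with a shift.
    return 1 << window_log

# Files produced with `zstd --ultra` can require windows larger than the
# decompressor's default limit, which is why the PR raises the limit to
# the maximum the format allows.
print(max_window_size(WINDOWLOG_MAX))  # 2 GiB ceiling when window_log == 31
```

The trade-off is memory: a larger `max_window_size` only bounds what the decompressor may allocate, so raising it to the format maximum is safe for trusted inputs but worth noting for untrusted ones.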
#5196 | PR | closed | id 1,434,401,646
title: Use hfh hf_hub_url function
body: Small refactoring to use `hf_hub_url` function from `huggingface_hub`. This PR also creates the `hub` module that will contain all `huggingface_hub` functionalities relevant to `datasets`. This is a necessary stage before implementing the use of the `hfh` caching system (which uses its `hf_hub_url` under the hood...
url: https://github.com/huggingface/datasets/pull/5196
created: 2022-11-03T10:08:09 | updated: 2022-12-06T11:38:17 | closed: 2022-11-09T07:15:12
user: albertvillanova (id 8515462) | labels: [] | comments: []

#5195 | PR | closed | id 1,434,290,689
title: [wip testing docs]
body: null
url: https://github.com/huggingface/datasets/pull/5195
created: 2022-11-03T08:37:34 | updated: 2023-04-04T15:10:37 | closed: 2023-04-04T15:10:33
user: mishig25 (id 11827707) | labels: [] | comments: []

#5194 | PR | closed | id 1,434,206,951
title: Fix docs about dataset_info in YAML
body: This PR fixes some misalignment in the docs after we transferred the dataset_info from `dataset_infos.json` to YAML in the dataset card: - #4926 Related to: - #5193
url: https://github.com/huggingface/datasets/pull/5194
created: 2022-11-03T07:10:23 | updated: 2022-11-03T13:31:27 | closed: 2022-11-03T13:29:21
user: albertvillanova (id 8515462) | labels: [] | comments: []

#5193 | issue | closed | id 1,433,883,780
title: "One or several metadata. were found, but not in the same directory or in a parent directory"
body: ### Describe the bug When loading my own dataset, on loading it I get an error. Here is my dataset link: https://huggingface.co/datasets/corentinm7/MyoQuant-SDH-Data And the error after loading with: ```python from datasets import load_dataset load_dataset("corentinm7/MyoQuant-SDH-Data") ``` ```python Downlo...
url: https://github.com/huggingface/datasets/issues/5193
created: 2022-11-02T22:46:25 | updated: 2022-11-03T13:39:16 | closed: 2022-11-03T13:35:44
user: lambda-science (id 20109584) | labels: [] | comments: []

#5192 | PR | closed | id 1,433,199,790
title: Drop labels in Image and Audio folders if files are on different levels in directory or if there is only one label
body: Will close https://github.com/huggingface/datasets/issues/5153 Drop labels by default (`drop_labels=None`) when: * there are files on different levels of directory hierarchy by checking their path depth * all files are in the same directory (=only one label was inferred) First one fixes cases like this: ``` r...
url: https://github.com/huggingface/datasets/pull/5192
created: 2022-11-02T14:01:41 | updated: 2022-11-15T16:32:53 | closed: 2022-11-15T16:31:07
user: polinaeterna (id 16348744) | labels: bug (d73a4a) | comments: []
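PR #5192 decides whether folder names should become class labels by checking path depths and how many labels would be inferred. A simplified, hypothetical version of that heuristic (plain string paths, not the actual `datasets` code):

```python
import posixpath

def should_drop_labels(files):
    """Heuristic sketch: drop inferred labels when files sit at different
    directory depths, or when every file shares one parent directory
    (so only a single 'label' would be inferred)."""
    depths = {f.count("/") for f in files}
    if len(depths) > 1:           # files on different levels of the hierarchy
        return True
    parents = {posixpath.dirname(f) for f in files}
    return len(parents) == 1      # a single directory means a single label

print(should_drop_labels(["root/dog/1.jpg", "root/cat/2.jpg"]))  # False: real labels
print(should_drop_labels(["root/1.jpg", "root/sub/2.jpg"]))      # True: mixed depth
print(should_drop_labels(["root/1.jpg", "root/2.jpg"]))          # True: one directory
```

This captures why label inference is skipped for repos that use folders purely as file storage while still labeling a classic one-folder-per-class layout.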
#5191 | PR | closed | id 1,433,191,658
title: Make torch.Tensor and spacy models cacheable
body: Override `Pickler.save` to implement deterministic reduction (lazily registered; inspired by https://github.com/uqfoundation/dill/blob/master/dill/_dill.py#L343) functions for `torch.Tensor` and spaCy models. Fix https://github.com/huggingface/datasets/issues/5170, fix https://github.com/huggingface/datasets/issues/...
url: https://github.com/huggingface/datasets/pull/5191
created: 2022-11-02T13:56:18 | updated: 2022-11-02T17:20:48 | closed: 2022-11-02T17:18:42
user: mariosasko (id 47462742) | labels: [] | comments: []

#5190 | issue | closed | id 1,433,014,626
title: `path` is `None` when downloading a custom audio dataset from the Hub
body: ### Describe the bug I've created an [audio dataset](https://huggingface.co/datasets/lewtun/audio-test-push) using the `audiofolder` feature desribed in the [docs](https://huggingface.co/docs/datasets/audio_dataset#audiofolder) and then pushed it to the Hub. Locally, I can see the `audio.path` feature is of the ...
url: https://github.com/huggingface/datasets/issues/5190
created: 2022-11-02T11:51:25 | updated: 2022-11-02T12:55:02 | closed: 2022-11-02T12:55:02
user: lewtun (id 26859204) | labels: [] | comments: []

#5189 | issue | open | id 1,432,769,143
title: Reduce friction in tabular dataset workflow by eliminating having splits when dataset is loaded
body: ### Feature request Sorry for cryptic name but I'd like to explain using code itself. When I want to load a specific dataset from a repository (for instance, this: https://huggingface.co/datasets/inria-soda/tabular-benchmark) ```python from datasets import load_dataset dataset = load_dataset("inria-soda/tabular-b...
url: https://github.com/huggingface/datasets/issues/5189
created: 2022-11-02T09:15:02 | updated: 2022-12-06T12:13:17 | closed: null
user: merveenoyan (id 53175384) | labels: enhancement (a2eeef) | comments: []

#5188 | PR | closed | id 1,432,477,139
title: add: segmentation guide.
body: Closes #5181 I have opened a PR on Hub (https://huggingface.co/datasets/huggingface/documentation-images/discussions/5) to include the images in our central Hub repository. Once the PR is merged I will edit the image links. I have also prepared a [Colab Notebook](https://colab.research.google.com/drive/1BMDCfOT...
url: https://github.com/huggingface/datasets/pull/5188
created: 2022-11-02T04:34:36 | updated: 2022-11-04T18:25:57 | closed: 2022-11-04T18:23:34
user: sayakpaul (id 22957388) | labels: documentation (0075ca) | comments: []

#5187 | PR | closed | id 1,432,375,375
title: chore: add notebook links to img cls and obj det.
body: Closes https://github.com/huggingface/datasets/issues/5182
url: https://github.com/huggingface/datasets/pull/5187
created: 2022-11-02T02:30:09 | updated: 2022-11-03T01:52:24 | closed: 2022-11-03T01:49:56
user: sayakpaul (id 22957388) | labels: enhancement (a2eeef) | comments: []

#5186 | issue | closed | id 1,432,045,011
title: Incorrect error message when Dataset.from_sql fails and sqlalchemy not installed
body: ### Describe the bug When calling `Dataset.from_sql` (in my case, with sqlite3), it fails with a message ```ValueError: Please pass `features` or at least one example when writing data``` when I don't have `sqlalchemy` installed. ### Steps to reproduce the bug Make a new sqlite db with `sqlite3` and `pandas` from...
url: https://github.com/huggingface/datasets/issues/5186
created: 2022-11-01T20:25:51 | updated: 2022-11-15T18:24:39 | closed: 2022-11-15T18:24:39
user: nateraw (id 32437151) | labels: [] | comments: []

#5185 | issue | open | id 1,432,021,611
title: Allow passing a subset of output features to Dataset.map
body: ### Feature request Currently, map does one of two things to the features (if I'm not mistaken): * when you do not pass features, types are assumed to be equal to the input if they can be cast, and inferred otherwise * when you pass a full specification of features, output features are set to this However, so...
url: https://github.com/huggingface/datasets/issues/5185
created: 2022-11-01T20:07:20 | updated: 2022-11-01T20:07:34 | closed: null
user: sanderland (id 48946947) | labels: enhancement (a2eeef) | comments: []
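Issue #5185 asks for `map` to accept a partial feature specification that is merged with the inferred types. The merge semantics it proposes reduce to a per-column dict override, sketched here with a hypothetical helper (string "types" stand in for real feature objects; this is not a `datasets` API):

```python
def merge_features(inferred, declared=None):
    """Proposed semantics from the issue: declared output features override
    the inferred ones column by column, instead of forcing callers to
    re-specify every column."""
    if declared is None:
        return dict(inferred)
    unknown = set(declared) - set(inferred)
    if unknown:
        raise ValueError(f"declared features not in output columns: {unknown}")
    return {**inferred, **declared}

# Feature "types" are plain strings here purely for illustration.
inferred = {"text": "string", "label": "int64", "score": "float64"}
merged = merge_features(inferred, {"label": "ClassLabel(num_classes=2)"})
print(merged["label"])
```

The point of the feature request is the last line of the merge: untouched columns keep their inferred types, so only the columns that genuinely need a declared type have to be spelled out.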
#5183 | issue | closed | id 1,431,418,066
title: Loading an external dataset in a format similar to conll2003
body: I'm trying to load a custom dataset in a Dataset object, it's similar to conll2003 but with 2 columns only (word entity), I used the following script: features = datasets.Features( {"tokens": datasets.Sequence(datasets.Value("string")), "ner_tags": datasets.Sequence( datasets.featu...
url: https://github.com/huggingface/datasets/issues/5183
created: 2022-11-01T13:18:29 | updated: 2022-11-02T11:57:50 | closed: 2022-11-02T11:57:50
user: Taghreed7878 (id 112555442) | labels: [] | comments: []
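Issue #5183 concerns a CoNLL-2003-style file with only two columns, word and entity tag. Independent of `datasets`, parsing that layout is a small stdlib exercise; assumptions here are whitespace-separated columns and blank lines between sentences:

```python
def read_two_column_conll(lines):
    """Parse '<token> <tag>' lines into per-sentence dicts of tokens and
    ner_tags; blank lines separate sentences."""
    sentences, tokens, tags = [], [], []
    for line in lines:
        line = line.strip()
        if not line:
            if tokens:
                sentences.append({"tokens": tokens, "ner_tags": tags})
                tokens, tags = [], []
            continue
        token, tag = line.split()[:2]
        tokens.append(token)
        tags.append(tag)
    if tokens:  # flush the last sentence if the file has no trailing blank line
        sentences.append({"tokens": tokens, "ner_tags": tags})
    return sentences

sample = ["EU B-ORG", "rejects O", "", "Peter B-PER"]
print(read_two_column_conll(sample))
```

The resulting list of `{"tokens": ..., "ner_tags": ...}` dicts matches the column names used in the issue's `Features` definition and could be fed to something like `Dataset.from_list`.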
#5182 | issue | closed | id 1,431,029,547
title: Add notebook / other resource links to the task-specific data loading guides
body: Does it make sense to include links to notebooks / scripts that show how to use a dataset for training / fine-tuning a model? For example, here in [https://huggingface.co/docs/datasets/image_classification] we could include a mention of https://github.com/huggingface/notebooks/blob/main/examples/image_classificatio...
url: https://github.com/huggingface/datasets/issues/5182
created: 2022-11-01T07:57:26 | updated: 2022-11-03T01:49:57 | closed: 2022-11-03T01:49:57
user: sayakpaul (id 22957388) | labels: enhancement (a2eeef) | comments: []

#5181 | issue | closed | id 1,431,027,102
title: Add a guide for semantic segmentation
body: Currently, we have these guides for object detection and image classification: * https://huggingface.co/docs/datasets/object_detection * https://huggingface.co/docs/datasets/image_classification I am proposing adding a similar guide for semantic segmentation. I am happy to contribute a PR for it. Cc: @os...
url: https://github.com/huggingface/datasets/issues/5181
created: 2022-11-01T07:54:50 | updated: 2022-11-04T18:23:36 | closed: 2022-11-04T18:23:36
user: sayakpaul (id 22957388) | labels: documentation (0075ca) | comments: []

#5180 | issue | open | id 1,431,012,438
title: An example or recommendations for creating large image datasets?
body: I know that Apache Beam and `datasets` have [some connector utilities](https://huggingface.co/docs/datasets/beam). But it's a little unclear what we mean by "But if you want to run your own Beam pipeline with Dataflow, here is how:". What does that pipeline do? As a user, I was wondering if we have this support for...
url: https://github.com/huggingface/datasets/issues/5180
created: 2022-11-01T07:38:38 | updated: 2022-11-02T10:17:11 | closed: null
user: sayakpaul (id 22957388) | labels: [] | comments: []

#5179 | issue | closed | id 1,430,826,100
title: `map()` fails midway due to format incompatibility
body: ### Describe the bug I am using the `emotion` dataset from Hub for sequence classification. After training the model, I am using it to generate predictions for all the entries present in the `validation` split of the dataset. ```py def get_test_accuracy(model): def fn(batch): inputs = {k:v.to(device...
url: https://github.com/huggingface/datasets/issues/5179
created: 2022-11-01T03:57:59 | updated: 2022-11-08T11:35:26 | closed: 2022-11-08T11:35:26
user: sayakpaul (id 22957388) | labels: bug (d73a4a) | comments: []

#5178 | issue | closed | id 1,430,800,810
title: Unable to download the Chinese `wikipedia`, the dumpstatus.json not found!
body: ### Describe the bug I tried: `data = load_dataset('wikipedia', '20220301.zh', beam_runner='DirectRunner')` and `data = load_dataset("wikipedia", language="zh", date="20220301", beam_runner='DirectRunner')` but both got: `FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/zhwiki/20220301/dumpsta...
url: https://github.com/huggingface/datasets/issues/5178
created: 2022-11-01T03:17:55 | updated: 2022-11-02T08:27:15 | closed: 2022-11-02T08:24:29
user: beyondguo (id 37113676) | labels: [] | comments: []

#5177 | PR | closed | id 1,430,238,556
title: Update create image dataset docs
body: Based on @osanseviero and community feedback, it wasn't super clear how to upload a dataset to the Hub after creating something like an image captioning dataset. This PR adds a brief section on how to upload the dataset with `push_to_hub`.
url: https://github.com/huggingface/datasets/pull/5177
created: 2022-10-31T17:45:56 | updated: 2022-11-02T17:15:22 | closed: 2022-11-02T17:13:02
user: stevhliu (id 59462357) | labels: documentation (0075ca) | comments: []

#5176 | issue | closed | id 1,430,214,539
title: prepare dataset for cloud storage doesn't work
body: ### Describe the bug Following the [documentation](https://huggingface.co/docs/datasets/filesystems#load-and-save-your-datasets-using-your-cloud-storage-filesystem) and [this PR](https://github.com/huggingface/datasets/pull/4724), I was downloading and storing huggingface dataset to cloud storage. ``` from datasets ...
url: https://github.com/huggingface/datasets/issues/5176
created: 2022-10-31T17:28:57 | updated: 2023-03-28T09:11:46 | closed: 2023-03-28T09:11:45
user: araonblake (id 27285078) | labels: [] | comments: []

#5175 | issue | closed | id 1,428,696,231
title: Loading an external NER dataset
body: I need to use huggingface datasets to load a custom dataset similar to conll2003 but with more entities and each the files contain only two columns: word and ner tag. I tried this code snnipet that I found here as an answer to a similar issue: from datasets import Dataset INPUT_COLUMNS = "ID Text NER".split() ...
url: https://github.com/huggingface/datasets/issues/5175
created: 2022-10-30T09:31:55 | updated: 2022-11-01T13:15:49 | closed: 2022-11-01T13:15:49
user: Taghreed7878 (id 112555442) | labels: [] | comments: []

#5174 | PR | closed | id 1,427,216,416
title: Preserve None in list type cast in PyArrow 10
body: The `ListArray` type in PyArrow 10.0.0 supports the `mask` parameter, which allows us to preserve Nones in nested lists in `cast` instead of replacing them with empty lists. Fix https://github.com/huggingface/datasets/issues/3676
url: https://github.com/huggingface/datasets/pull/5174
created: 2022-10-28T12:48:30 | updated: 2022-10-28T13:15:33 | closed: 2022-10-28T13:13:18
user: mariosasko (id 47462742) | labels: [] | comments: []
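PR #5174 uses PyArrow 10's `mask` support on `ListArray` so that `None` entries in nested lists survive a `cast` instead of silently becoming empty lists. The before/after behavior can be sketched without PyArrow at all:

```python
def cast_nested(values, old_behavior=False):
    """Cast a column of optional lists of numbers to float.
    old_behavior=True mimics the pre-fix result, where None rows were
    replaced by empty lists; the fixed cast preserves them."""
    out = []
    for row in values:
        if row is None:
            out.append([] if old_behavior else None)
        else:
            out.append([float(x) for x in row])
    return out

col = [[1, 2], None, [3]]
print(cast_nested(col))                     # None preserved
print(cast_nested(col, old_behavior=True))  # None silently became []
```

Losing the `None`/empty-list distinction matters because the two mean different things in a dataset: a missing value versus a present-but-empty sequence.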
#5173 | PR | closed | id 1,425,880,441
title: Raise ffmpeg warnings only once
body: Our warnings looks nice now. `librosa` warning that was raised at each decoding: ``` /usr/local/lib/python3.7/dist-packages/librosa/core/audio.py:165: UserWarning: PySoundFile failed. Trying audioread instead. warnings.warn("PySoundFile failed. Trying audioread instead.") ``` is suppressed with `filterwarnin...
url: https://github.com/huggingface/datasets/pull/5173
created: 2022-10-27T15:58:33 | updated: 2022-10-28T16:03:05 | closed: 2022-10-28T16:00:51
user: polinaeterna (id 16348744) | labels: [] | comments: []

#5172 | issue | open | id 1,425,523,114
title: Inconsistency behavior between handling local file protocol and other FS protocols
body: ### Describe the bug These lines us used during load_from_disk: ``` if is_remote_filesystem(fs): dest_dataset_dict_path = extract_path_from_uri(dataset_dict_path) else: fs = fsspec.filesystem("file") dest_dataset_dict_path = dataset_dict_path ``` If a local FS is given, then it will the URL as th...
url: https://github.com/huggingface/datasets/issues/5172
created: 2022-10-27T12:03:20 | updated: 2024-05-08T19:31:13 | closed: null
user: leoleoasd (id 37735580) | labels: [] | comments: []

#5171 | PR | closed | id 1,425,355,111
title: Add PB and TB in convert_file_size_to_int
body: null
url: https://github.com/huggingface/datasets/pull/5171
created: 2022-10-27T09:50:31 | updated: 2022-10-27T12:14:27 | closed: 2022-10-27T12:12:30
user: lhoestq (id 42851186) | labels: [] | comments: []

#5170 | issue | closed | id 1,425,301,835
title: [Caching] Deterministic hashing of torch tensors
body: Currently this fails ```python import torch from datasets.fingerprint import Hasher t = torch.tensor([1.]) def func(x): return t + x hash1 = Hasher.hash(func) t = torch.tensor([1.]) hash2 = Hasher.hash(func) assert hash1 == hash2 ``` Also as noticed in https://discuss.huggingface.co/t/dataset-ca...
url: https://github.com/huggingface/datasets/issues/5170
created: 2022-10-27T09:15:15 | updated: 2022-11-02T17:18:43 | closed: 2022-11-02T17:18:43
user: lhoestq (id 42851186) | labels: enhancement (a2eeef) | comments: []
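Issue #5170 shows the cache fingerprint of a closure changing between two `torch.Tensor` objects with identical values. The essence of the fix (implemented in PR #5191 via deterministic reduction) is that the hash must be derived from content, not object identity. A stdlib-only illustration of content-based fingerprinting, using plain nested lists instead of tensors:

```python
import hashlib
import json

def fingerprint(obj):
    # Content-based fingerprint: two structurally equal values hash the
    # same regardless of object identity. json.dumps with sort_keys gives
    # a canonical byte encoding for the plain values used here; real
    # tensors need a deterministic reduction to bytes instead.
    payload = json.dumps(obj, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

a = [1.0]
b = [1.0]  # a distinct object with equal content
print(fingerprint(a) == fingerprint(b))  # True, unlike identity-sensitive pickling
```

With identity-sensitive serialization, rebuilding the same tensor between runs produced a new hash and therefore a cache miss; hashing canonical content bytes makes the fingerprint reproducible.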
#5169 | PR | closed | id 1,425,075,254
title: Add "ipykernel" to list of `co_filename`s to remove
body: Should resolve #5157
url: https://github.com/huggingface/datasets/pull/5169
created: 2022-10-27T05:56:17 | updated: 2022-11-02T15:46:00 | closed: 2022-11-02T15:43:20
user: gpucce (id 32967787) | labels: [] | comments: []

#5168 | PR | closed | id 1,424,368,572
title: Fix CI require beam
body: This PR: - Fixes the CI `require_beam`: before it was requiring PyTorch instead ```python def require_beam(test_case): if not config.TORCH_AVAILABLE: test_case = unittest.skip("test requires PyTorch")(test_case) return test_case ``` - Fixes a missing `require_beam` in `test_beam_base...
url: https://github.com/huggingface/datasets/pull/5168
created: 2022-10-26T16:49:33 | updated: 2022-10-27T09:25:19 | closed: 2022-10-27T09:23:26
user: albertvillanova (id 8515462) | labels: [] | comments: []

#5167 | PR | closed | id 1,424,124,477
title: Add ffmpeg4 installation instructions in warnings
body: Adds instructions on how to install `ffmpeg=4` on Linux (relevant for Colab users). Looks pretty ugly because I didn't find a way to check `ffmpeg` version from python (without `subprocess.call()`; `ctypes.util.find_library` doesn't work`), so the warning is raised on each decoding. Any suggestions on how to make it...
url: https://github.com/huggingface/datasets/pull/5167
created: 2022-10-26T14:21:14 | updated: 2022-10-27T09:01:12 | closed: 2022-10-27T08:58:58
user: polinaeterna (id 16348744) | labels: [] | comments: []

#5166 | PR | closed | id 1,423,629,582
title: Support dill 0.3.6
body: This PR: - ~~Unpins dill to allow installing dill>=0.3.6~~ - ~~Removes the fix on dill for >=0.3.6 because they implemented a deterministic mode (to be confirmed by @anivegesana)~~ - Pins dill<0.3.7 to allow latest dill 0.3.6 - Implements a fix for dill `save_function` for dill 0.3.6 - Additionally had to implemen...
url: https://github.com/huggingface/datasets/pull/5166
created: 2022-10-26T08:24:59 | updated: 2022-10-28T05:41:05 | closed: 2022-10-28T05:38:14
user: albertvillanova (id 8515462) | labels: [] | comments: []

#5165 | issue | open | id 1,423,616,677
title: Memory explosion when trying to access 4d tensors in datasets cast to torch or np
body: ### Describe the bug When trying to access an item by index, in a datasets.Dataset cast to torch/np using `set_format` or `with_format`, we get a memory explosion if the item contains 4d (or above) tensors. ### Steps to reproduce the bug MWE: ```python from datasets import load_dataset import numpy as np de...
url: https://github.com/huggingface/datasets/issues/5165
created: 2022-10-26T08:14:47 | updated: 2022-10-26T08:14:47 | closed: null
user: clefourrier (id 22726840) | labels: [] | comments: []

#5164 | PR | closed | id 1,422,813,247
title: WIP: drop labels in Image and Audio folders by default
body: will fix https://github.com/huggingface/datasets/issues/5153 and redundant labels displaying for most of the images datasets on the Hub (which are used just to store files) TODO: discuss adding `drop_labels` (and `drop_metadata`) params to yaml
url: https://github.com/huggingface/datasets/pull/5164
created: 2022-10-25T17:21:49 | updated: 2022-11-16T14:21:16 | closed: 2022-11-02T14:03:02
user: polinaeterna (id 16348744) | labels: [] | comments: []

#5163 | PR | closed | id 1,422,540,337
title: Reduce default max `writer_batch_size`
body: Reduce the default writer_batch_size from 10k to 1k examples. Additionally, align the default values of `batch_size` and `writer_batch_size` in `Dataset.cast` with the values from the corresponding docstring.
url: https://github.com/huggingface/datasets/pull/5163
created: 2022-10-25T14:14:52 | updated: 2022-10-27T12:19:27 | closed: 2022-10-27T12:16:47
user: mariosasko (id 47462742) | labels: [] | comments: []
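PR #5163 lowers the default `writer_batch_size` from 10k to 1k examples. The knob controls how many examples are buffered before each write, which in isolation reduces to simple chunking; this is an illustrative sketch, not the Arrow writer itself:

```python
from itertools import islice

def iter_batches(examples, writer_batch_size=1000):
    """Yield lists of at most writer_batch_size examples. A smaller batch
    size means lower peak memory per flush at the cost of more write calls."""
    it = iter(examples)
    while batch := list(islice(it, writer_batch_size)):
        yield batch

batches = list(iter_batches(range(2500), writer_batch_size=1000))
print([len(b) for b in batches])  # [1000, 1000, 500]
```

The 10k-to-1k change trades a little write throughput for a much smaller in-memory buffer, which matters when individual examples are large (images, audio).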
#5162 | issue | closed | id 1,422,461,112
title: Pip-compile: Could not find a version that matches dill<0.3.6,>=0.3.6
body: ### Describe the bug When using `pip-compile` (part of `pip-tools`) to generate a pinned requirements file that includes `datasets`, a version conflict of `dill` appears. It is caused by a transitive dependency conflict between `datasets` and `multiprocess`. ### Steps to reproduce the bug ```bash $ echo "dataset...
url: https://github.com/huggingface/datasets/issues/5162
created: 2022-10-25T13:23:50 | updated: 2022-11-14T08:25:37 | closed: 2022-10-28T05:38:15
user: Rijgersberg (id 8604946) | labels: [] | comments: []

#5161 | issue | closed | id 1,422,371,748
title: Dataset can’t cache model’s outputs
body: ### Describe the bug Hi, I try to cache some outputs of teacher model( Knowledge Distillation ) by using map function of Dataset library, while every time I run my code, I still recompute all the sequences. I tested Bert Model like this, I got different hash every single run, so any idea to deal with this? ### Ste...
url: https://github.com/huggingface/datasets/issues/5161
created: 2022-10-25T12:19:00 | updated: 2022-11-03T16:12:52 | closed: 2022-11-03T16:12:51
user: jongjyh (id 37979232) | labels: [] | comments: []

#5160 | issue | open | id 1,422,193,938
title: Automatically add filename for image/audio folder
body: ### Feature request When creating a custom audio of image dataset, it would be great to automatically have access to the filename. It should be both: a) Automatically displayed in the viewer b) Automatically added as a column to the dataset when doing `load_dataset` In `diffusers` our test rely quite heavily on i...
url: https://github.com/huggingface/datasets/issues/5160
created: 2022-10-25T09:56:49 | updated: 2022-10-26T16:51:46 | closed: null
user: patrickvonplaten (id 23423619) | labels: enhancement (a2eeef) | comments: []

#5159 | PR | closed | id 1,422,172,080
title: fsspec lock reset in multiprocessing
body: `fsspec` added a clean way of resetting its lock - instead of doing it manually
url: https://github.com/huggingface/datasets/pull/5159
created: 2022-10-25T09:41:59 | updated: 2022-11-03T20:51:15 | closed: 2022-11-03T20:48:53
user: lhoestq (id 42851186) | labels: [] | comments: []

#5158 | issue | closed | id 1,422,059,287
title: Fix language and license tag names in all Hub datasets
body: While working on this: - #5137 we realized there are still many datasets with deprecated "languages" and "licenses" tag names (instead of "language" and "license"). This is a blocking issue: no subsequent PR can be opened to modify their metadata: a ValueError will be thrown. We should fix the "language" and ...
url: https://github.com/huggingface/datasets/issues/5158
created: 2022-10-25T08:19:29 | updated: 2022-10-25T11:27:26 | closed: 2022-10-25T10:42:19
user: albertvillanova (id 8515462) | labels: dataset contribution (0e8a16) | comments: []

#5157 | issue | closed | id 1,421,703,577
title: Consistent caching between python and jupyter
body: ### Feature request I hope this is not my mistake, currently if I use `load_dataset` from a python session on a custom dataset to do the preprocessing, it will be saved in the cache and in other python sessions it will be loaded from the cache, however calling the same from a jupyter notebook does not work, meaning th...
url: https://github.com/huggingface/datasets/issues/5157
created: 2022-10-25T01:34:33 | updated: 2022-11-02T15:43:22 | closed: 2022-11-02T15:43:22
user: gpucce (id 32967787) | labels: enhancement (a2eeef) | comments: []

#5156 | issue | closed | id 1,421,667,125
title: Unable to download dataset using Azure Data Lake Gen 2
body: ### Describe the bug When using the DatasetBuilder method with the credentials for the cloud storage Azure Data Lake (adl) Gen2, the following error is showed: ``` Traceback (most recent call last): File "download_hf_dataset.py", line 143, in <module> main() File "download_hf_dataset.py", line 102, in mai...
url: https://github.com/huggingface/datasets/issues/5156
created: 2022-10-25T00:43:18 | updated: 2024-02-15T09:48:36 | closed: 2022-11-17T23:37:08
user: clarissesimoes (id 87379512) | labels: [] | comments: []

#5155 | PR | closed | id 1,421,278,748
title: TextConfig: added "errors"
body: This patch adds the ability to set the `errors` option of `open` for loading text datasets. I needed it because some data I had scraped had bad bytes in it, so I needed `errors='ignore'`.
url: https://github.com/huggingface/datasets/pull/5155
created: 2022-10-24T18:56:52 | updated: 2022-11-03T13:38:13 | closed: 2022-11-03T13:35:35
user: NightMachinery (id 36224762) | labels: [] | comments: []
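PR #5155 exposes the `errors` option of Python's built-in `open` so text datasets containing bad bytes can still be loaded. What that option does, shown in isolation with a file written to a temp directory:

```python
import os
import tempfile

raw = b"good line\xffbad byte"  # 0xFF can never appear in valid UTF-8

path = os.path.join(tempfile.gettempdir(), "demo_bad_bytes.txt")
with open(path, "wb") as f:
    f.write(raw)

# The default errors="strict" raises UnicodeDecodeError on this file;
# errors="ignore" silently drops undecodable bytes, errors="replace"
# substitutes U+FFFD markers instead.
with open(path, encoding="utf-8", errors="ignore") as f:
    text = f.read()
print(text)  # "good linebad byte"
```

`errors="replace"` is often the safer choice for debugging, since the replacement characters show where corruption occurred, while `errors="ignore"` (the PR author's use case) keeps the output clean.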
1,421,161,992
5,154
Test latest fsspec in CI
Following the discussion in https://discuss.huggingface.co/t/attributeerror-module-fsspec-has-no-attribute-asyn/19255 I think we need to test the latest fsspec in the CI
closed
https://github.com/huggingface/datasets/pull/5154
2022-10-24T17:18:13
2023-09-24T10:06:06
2022-10-25T09:30:45
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,420,833,457
5,153
default Image/AudioFolder infers labels when there is no metadata files even if there is only one dir
### Describe the bug By default FolderBasedBuilder infers labels if there is not metadata files, even if it's meaningless (for example, they are in a single directory or in the root folder, see this repo as an example: https://huggingface.co/datasets/patrickvonplaten/audios As this is a corner case for quick expl...
closed
https://github.com/huggingface/datasets/issues/5153
2022-10-24T13:28:18
2022-11-15T16:31:10
2022-11-15T16:31:09
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,420,808,919
5,152
refactor FolderBasedBuilder and Image/AudioFolder tests
Tests for FolderBasedBuilder, ImageFolder and AudioFolder are mostly duplicating each other. They need to be refactored and Audio/ImageFolder should have only tests specific to the loader.
open
https://github.com/huggingface/datasets/issues/5152
2022-10-24T13:11:52
2022-10-24T13:11:52
null
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[ { "name": "refactoring", "color": "B67A40" } ]
false
[]
1,420,791,163
5,151
Add support to create different configs with `push_to_hub` (+ inferring configs from directories with package managers?)
Now one can push only different splits within one default config of a dataset. Would be nice to allow something like: ``` ds.push_to_hub(repo_name, config=config_name) ``` I'm not sure, but this will probably require changes in `data_files.py` patterns. If so, it would also allow to create different configs fo...
open
https://github.com/huggingface/datasets/issues/5151
2022-10-24T12:59:18
2022-11-04T14:55:20
null
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,420,684,999
5,150
Problems after upgrading to 2.6.1
### Describe the bug Loading a dataset_dict from disk with `load_from_disk` now raises a `KeyError "length"` that was not occurring in v2.5.2. Context: - Each individual dataset in the dict is created with `Dataset.from_pandas` - The dataset_dict is created from a dict of `Dataset`s, e.g., `DatasetDict({"tr...
open
https://github.com/huggingface/datasets/issues/5150
2022-10-24T11:32:36
2024-05-12T07:40:03
null
{ "login": "pietrolesci", "id": 61748653, "type": "User" }
[]
false
[]
1,420,415,639
5,149
Make iter_files deterministic
Fix #5145.
closed
https://github.com/huggingface/datasets/pull/5149
2022-10-24T08:16:27
2022-10-27T09:53:23
2022-10-27T09:51:09
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,420,219,222
5,148
Cannot find the rvl_cdip dataset
Hi, I am trying to use load_dataset to load the official "rvl_cdip" dataset but I am getting an error. dataset = load_dataset("rvl_cdip") Couldn't find 'rvl_cdip' on the Hugging Face Hub either: FileNotFoundError: Couldn't find the file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/rvl_cdi...
closed
https://github.com/huggingface/datasets/issues/5148
2022-10-24T04:57:42
2022-10-24T12:23:47
2022-10-24T06:25:28
{ "login": "santule", "id": 20509836, "type": "User" }
[]
false
[]
1,419,522,275
5,147
Allow ignoring kwargs inside fn_kwargs during dataset.map's fingerprinting
### Feature request `dataset.map` accepts a `fn_kwargs` that is passed to `fn`. Currently, the whole `fn_kwargs` is used by `fingerprint_transform` to calculate the new fingerprint. I'd like to be able to inform `fingerprint_transform` which `fn_kwargs` should/shouldn't be taken into account during hashing. Of co...
open
https://github.com/huggingface/datasets/issues/5147
2022-10-22T21:46:38
2022-11-01T22:19:07
null
{ "login": "falcaopetri", "id": 8387736, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
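The request in #5147 can be sketched with a toy fingerprinting helper. The `fingerprint` function and its `ignore` parameter below are hypothetical names for illustration, not the actual `fingerprint_transform` API:

```python
import hashlib
import json

def fingerprint(fn_kwargs, ignore=()):
    # Hash only the kwargs that affect the transform's output; keys
    # listed in `ignore` are excluded, so changing them keeps the
    # cached result valid.
    relevant = {k: v for k, v in sorted(fn_kwargs.items()) if k not in ignore}
    return hashlib.sha256(json.dumps(relevant).encode()).hexdigest()
```

With this, two `fn_kwargs` dicts that differ only in an ignored key produce the same fingerprint, so the cache is reused.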
1,418,331,282
5,146
Delete duplicate issue template file
A conflict between two PRs: - #5116 - #5136 was not properly resolved, resulting in a duplicate issue template. This PR removes the duplicate template.
closed
https://github.com/huggingface/datasets/pull/5146
2022-10-21T13:18:46
2022-10-21T13:52:30
2022-10-21T13:50:04
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,418,005,452
5,145
Dataset order is not deterministic with ZIP archives and `iter_files`
### Describe the bug For the `beans` dataset (did not try on others), the order of samples is not the same on different machines. Tested on my local laptop, github actions machine, and ec2 instance. The three yield a different order. ### Steps to reproduce the bug In a clean docker container or conda environmen...
closed
https://github.com/huggingface/datasets/issues/5145
2022-10-21T09:00:03
2022-10-27T09:51:49
2022-10-27T09:51:10
{ "login": "fxmarty", "id": 9808326, "type": "User" }
[]
false
[]
1,417,974,731
5,144
Inconsistent documentation on map remove_columns
### Describe the bug The page [process](https://huggingface.co/docs/datasets/process) says this about the parameter `remove_columns` of the function `map`: When you remove a column, it is only removed after the example has been provided to the mapped function. So it seems that the `remove_columns` parameter remo...
closed
https://github.com/huggingface/datasets/issues/5144
2022-10-21T08:37:53
2022-11-15T14:15:10
2022-11-15T14:15:10
{ "login": "zhaowei-wang-nlp", "id": 22047467, "type": "User" }
[ { "name": "documentation", "color": "0075ca" }, { "name": "duplicate", "color": "cfd3d7" }, { "name": "good first issue", "color": "7057ff" }, { "name": "hacktoberfest", "color": "DF8D62" } ]
false
[]
1,416,837,186
5,143
DownloadManager Git LFS support
### Feature request Maybe I'm mistaken but the `DownloadManager` does not support extracting git lfs files out of the box right? Using `dl_manager.download()` or `dl_manager.download_and_extract()` still returns lfs files afaict. Is there a good way to write a dataset loading script for a repo with lfs files? ##...
closed
https://github.com/huggingface/datasets/issues/5143
2022-10-20T15:29:29
2022-10-20T17:17:10
2022-10-20T17:17:10
{ "login": "Muennighoff", "id": 62820084, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,416,317,678
5,142
Deprecate num_proc parameter in DownloadManager.extract
Fixes #5132: deprecated the `num_proc` parameter in `DownloadManager.extract` by passing the `num_proc` parameter to `map_nested`.
closed
https://github.com/huggingface/datasets/pull/5142
2022-10-20T09:52:52
2022-10-25T18:06:56
2022-10-25T15:56:45
{ "login": "ayushthe1", "id": 114604338, "type": "User" }
[]
true
[]
1,415,479,438
5,141
Raise ImportError instead of OSError
Fixes #5134: replaced OSError with ImportError if the required extraction library is not installed.
closed
https://github.com/huggingface/datasets/pull/5141
2022-10-19T19:30:05
2022-10-25T15:59:25
2022-10-25T15:56:58
{ "login": "ayushthe1", "id": 114604338, "type": "User" }
[]
true
[]
1,415,075,530
5,140
Make the KeyHasher FIPS compliant
MD5 is not FIPS compliant, thus I am proposing this minimal change to make the datasets package FIPS compliant.
closed
https://github.com/huggingface/datasets/pull/5140
2022-10-19T14:25:52
2022-11-07T16:20:43
2022-11-07T16:20:43
{ "login": "vvalouch", "id": 22592860, "type": "User" }
[]
true
[]
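The diff of #5140 isn't shown here, but one common way to keep a non-cryptographic MD5 fingerprint working on FIPS builds is the `usedforsecurity` flag added in Python 3.9 (switching to SHA-256 is the other option):

```python
import hashlib

# On FIPS-enabled OpenSSL builds a plain hashlib.md5() call can raise;
# usedforsecurity=False (Python 3.9+) declares the digest is only used
# as a cache key / fingerprint, not as a security primitive.
digest = hashlib.md5(b"some dataset key", usedforsecurity=False).hexdigest()
```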
1,414,642,723
5,137
Align task tags in dataset metadata
## Describe Once we have agreed on a common naming for task tags for all open source projects, we should align on them. ## Steps - [x] Align task tags in canonical datasets - [x] task_categories: 4 datasets - [x] task_ids (by @lhoestq) - [x] Open PRs in community datasets - [x] task_categories: 451 datas...
closed
https://github.com/huggingface/datasets/issues/5137
2022-10-19T09:41:42
2022-11-10T05:25:58
2022-10-25T06:17:00
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
false
[]
1,414,492,139
5,136
Update docs once dataset scripts transferred to the Hub
Todo: - [x] Update docs: - [x] Datasets on GitHub (legacy) - [x] Load: offline - [x] About dataset load: - [x] Maintaining integrity - [x] Security - [x] Update docstrings: - [x] Inspect: - [x] get_dataset_config_info - [x] get_dataset_split_names - [x] Load: - [x] dataset_modu...
closed
https://github.com/huggingface/datasets/pull/5136
2022-10-19T07:58:27
2022-10-20T08:12:21
2022-10-20T08:10:00
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,414,413,519
5,135
Update docs once dataset scripts transferred to the Hub
## Describe the bug As discussed in: - https://github.com/huggingface/hub-docs/pull/423#pullrequestreview-1146083701 we should update our docs once dataset scripts have been transferred to the Hub (and removed from GitHub): - #4974 Concretely: - [x] Datasets on GitHub (legacy): https://huggingface.co/docs/dat...
closed
https://github.com/huggingface/datasets/issues/5135
2022-10-19T06:58:19
2022-10-20T08:10:01
2022-10-20T08:10:01
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
false
[]
1,413,623,687
5,134
Raise ImportError instead of OSError if required extraction library is not installed
According to the official Python docs, `OSError` should be thrown in the following situations: > This exception is raised when a system function returns a system-related error, including I/O failures such as “file not found” or “disk full” (not for illegal argument types or other incidental errors). Hence, it makes...
closed
https://github.com/huggingface/datasets/issues/5134
2022-10-18T17:53:46
2022-10-25T15:56:59
2022-10-25T15:56:59
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "good first issue", "color": "7057ff" }, { "name": "hacktoberfest", "color": "DF8D62" } ]
false
[]
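A minimal sketch of the behavior proposed in #5134 — the helper name and error message below are illustrative, not the library's actual API:

```python
import importlib.util

def require_extraction_library(module_name, hint):
    # A missing optional library is a Python-level condition, so raise
    # ImportError rather than OSError (which is reserved for system-level
    # failures like "file not found" or "disk full").
    if importlib.util.find_spec(module_name) is None:
        raise ImportError(f"Extraction requires '{module_name}'. {hint}")
    return importlib.import_module(module_name)
```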
1,413,623,462
5,133
Tensor operation not functioning in dataset mapping
## Describe the bug I'm doing a torch.mean() operation in data preprocessing, and it's not working. ## Steps to reproduce the bug ``` from transformers import pipeline import torch import numpy as np from datasets import load_dataset device = 'cuda:0' raw_dataset = load_dataset("glue", "sst2") feature_extra...
closed
https://github.com/huggingface/datasets/issues/5133
2022-10-18T17:53:35
2022-10-19T04:15:45
2022-10-19T04:15:44
{ "login": "xinghaow99", "id": 50691954, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,413,607,306
5,132
Deprecate `num_proc` parameter in `DownloadManager.extract`
The `num_proc` parameter is only present in `DownloadManager.extract` but not in `StreamingDownloadManager.extract`, making it impossible to support streaming in the dataset scripts that use it (`openwebtext` and `the_pile_stack_exchange`). We can avoid this situation by deprecating this parameter and passing `Download...
closed
https://github.com/huggingface/datasets/issues/5132
2022-10-18T17:41:05
2022-10-25T15:56:46
2022-10-25T15:56:46
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "good first issue", "color": "7057ff" }, { "name": "hacktoberfest", "color": "DF8D62" } ]
false
[]
1,413,534,863
5,131
WikiText 103 tokenizer hangs
See issue here: https://github.com/huggingface/transformers/issues/19702
closed
https://github.com/huggingface/datasets/issues/5131
2022-10-18T16:44:00
2023-08-08T08:42:40
2023-07-21T14:41:51
{ "login": "TrentBrick", "id": 12433427, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,413,435,000
5,130
Avoid extra cast in `class_encode_column`
Pass the updated features to `map` to avoid the `cast` in `class_encode_column`.
closed
https://github.com/huggingface/datasets/pull/5130
2022-10-18T15:31:24
2022-10-19T11:53:02
2022-10-19T11:50:46
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,413,031,664
5,129
unexpected `cast` or `class_encode_column` result after `rename_column`
## Describe the bug When invoking `cast` or `class_encode_column` on a column renamed by `rename_column`, it will convert all the variables in this column into one variable. I also ran this script in version 2.5.2, and this bug does not appear, so I switched to the older version. ## Steps to reproduce the bug ```python...
closed
https://github.com/huggingface/datasets/issues/5129
2022-10-18T11:15:24
2022-10-19T03:02:26
2022-10-19T03:02:26
{ "login": "quaeast", "id": 35144675, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,412,783,855
5,128
Make filename matching more robust
Fix #5046
closed
https://github.com/huggingface/datasets/pull/5128
2022-10-18T08:22:48
2022-10-28T13:07:38
2022-10-28T13:05:06
{ "login": "riccardobucco", "id": 9295277, "type": "User" }
[]
true
[]
1,411,897,544
5,127
[WIP] WebDataset export
I added a first draft of the `IterableDataset.to_wds` method. You can use it to save a dataset loaded in streaming mode as a WebDataset locally. The API can be further improved to allow exporting to a cloud storage like the HF Hub. I also included sharding with a default max shard size of 500MB (uncompressed), an...
closed
https://github.com/huggingface/datasets/pull/5127
2022-10-17T16:50:22
2024-01-11T06:27:04
2024-01-08T14:25:43
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,411,757,124
5,126
Fix class name of symbolic link
Fix #5098
closed
https://github.com/huggingface/datasets/pull/5126
2022-10-17T15:11:02
2022-11-14T14:40:18
2022-11-14T14:40:18
{ "login": "riccardobucco", "id": 9295277, "type": "User" }
[]
true
[]
1,411,602,813
5,125
Add `pyproject.toml` for `black`
Add `pyproject.toml` as a config file for the `black` tool to support VS Code's auto-formatting on save (and to be more consistent with the other HF projects).
closed
https://github.com/huggingface/datasets/pull/5125
2022-10-17T13:38:47
2024-11-20T13:36:11
2022-10-17T14:21:09
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,411,159,725
5,124
Install tensorflow-macos dependency conditionally
Fix #5118.
closed
https://github.com/huggingface/datasets/pull/5124
2022-10-17T08:45:08
2022-10-19T09:12:17
2022-10-19T09:10:06
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,410,828,756
5,123
datasets freezes with streaming mode in multiple-gpu
## Describe the bug Hi. I am using this dataloader, which is for processing large datasets in streaming mode, as mentioned in one of the huggingface examples. I am using it to read c4: https://github.com/huggingface/transformers/blob/b48ac1a094e572d6076b46a9e4ed3e0ebe978afc/examples/research_projects/codeparrot/scripts/cod...
open
https://github.com/huggingface/datasets/issues/5123
2022-10-17T03:28:16
2023-05-14T06:55:20
null
{ "login": "jackfeinmann5", "id": 59409879, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,410,732,403
5,122
Add warning
Fixes: #5105. I think removing the directory with a warning is a better solution for this issue, because if we decide to keep existing files in the directory, then we should deal with the case of providing the same directory for several datasets, which we know is not possible since `dataset_info.json` exists in that directory.
closed
https://github.com/huggingface/datasets/pull/5122
2022-10-17T01:30:37
2022-11-05T12:23:53
2022-11-05T12:23:53
{ "login": "Salehbigdeli", "id": 34204311, "type": "User" }
[]
true
[]
1,410,681,067
5,121
Bugfix ignore function when creating new_fingerprint for caching
maybe fixes: #5109
closed
https://github.com/huggingface/datasets/pull/5121
2022-10-17T00:03:43
2022-10-17T12:39:36
2022-10-17T12:39:36
{ "login": "Salehbigdeli", "id": 34204311, "type": "User" }
[]
true
[]
1,410,641,221
5,120
Fix `tqdm` zip bug
This PR solves #5117 by wrapping the entire `zip` clause in tqdm. For more information, please check out this Stack Overflow thread: https://stackoverflow.com/questions/41171191/tqdm-progressbar-and-zip-built-in-do-not-work-together
closed
https://github.com/huggingface/datasets/pull/5120
2022-10-16T22:19:18
2022-10-23T10:27:53
2022-10-19T08:53:17
{ "login": "david1542", "id": 9879252, "type": "User" }
[]
true
[]
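The root cause described in the linked Stack Overflow thread is that `zip` returns a lazy iterator with no `__len__`, so a progress bar wrapped around it cannot infer a total. The stdlib-only snippet below illustrates this; the actual fix amounts to passing an explicit `total=` (e.g. `tqdm(zip(a, b), total=len(a))`) or wrapping the zipped iterable as a whole:

```python
a = [1, 2, 3]
b = [4, 5, 6]

z = zip(a, b)
# zip() is a lazy iterator without __len__, which is why a progress bar
# wrapped around it alone shows no total and appears stuck.
has_len = hasattr(z, "__len__")
pairs = list(z)
```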
1,410,561,363
5,119
[TYPO] Update new_dataset_script.py
null
closed
https://github.com/huggingface/datasets/pull/5119
2022-10-16T17:36:49
2022-10-19T09:48:19
2022-10-19T09:45:59
{ "login": "cakiki", "id": 3664563, "type": "User" }
[]
true
[]
1,410,547,373
5,118
Installing `datasets` on M1 computers
## Describe the bug I wanted to install `datasets` dependencies on my M1 (in order to start contributing to the project). However, I got an error regarding `tensorflow`. On M1, `tensorflow-macos` needs to be installed instead. Can we add a conditional requirement, so that `tensorflow-macos` would be installed on M1...
closed
https://github.com/huggingface/datasets/issues/5118
2022-10-16T16:50:08
2022-10-19T09:10:08
2022-10-19T09:10:08
{ "login": "david1542", "id": 9879252, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
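The conditional requirement requested in #5118 can be expressed with PEP 508 environment markers in `setup.py`. The pins below are illustrative, not the actual change merged in #5124:

```python
# Illustrative extras list using PEP 508 environment markers: pip selects
# tensorflow-macos only on Apple Silicon, and plain tensorflow elsewhere.
TENSORFLOW_REQUIRES = [
    "tensorflow>=2.3; sys_platform != 'darwin' or platform_machine != 'arm64'",
    "tensorflow-macos; sys_platform == 'darwin' and platform_machine == 'arm64'",
]
```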
1,409,571,346
5,117
Progress bars have color red and never completed to 100%
## Describe the bug Progress bars after transformative operations turn red and are never completed to 100%. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset('rotten_tomatoes', split='test').filter(lambda o: True) ``` ## Expected results Progress bar should be 100% an...
closed
https://github.com/huggingface/datasets/issues/5117
2022-10-14T16:12:30
2024-06-19T19:03:42
2022-10-23T12:58:41
{ "login": "echatzikyriakidis", "id": 63857529, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,409,549,471
5,116
Use yaml for issue templates + revamp
Use YAML instead of markdown (more expressive) for the issue templates. In addition, update their structure/fields to be more aligned with Transformers. PS: also removes the "add_dataset" PR template, as we no longer accept such PRs.
closed
https://github.com/huggingface/datasets/pull/5116
2022-10-14T15:53:13
2022-10-19T13:05:49
2022-10-19T13:03:22
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,409,250,020
5,115
Fix iter_batches
The `pa.Table.to_reader()` method available in `pyarrow>=8.0.0` may return chunks of size < `max_chunksize`, therefore `iter_batches` can return batches smaller than the `batch_size` specified by the user. As a result, batched `map` couldn't always use batches of the right size, e.g. this fails because it runs only on o...
closed
https://github.com/huggingface/datasets/pull/5115
2022-10-14T12:06:14
2022-10-14T15:02:15
2022-10-14T14:59:58
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
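The idea behind the fix in #5115 — never trusting upstream chunk sizes and re-batching to the requested size — can be sketched without pyarrow as:

```python
def iter_batches(chunks, batch_size):
    # Upstream readers (e.g. pyarrow's to_reader) may emit chunks smaller
    # than max_chunksize, so accumulate rows in a buffer and re-slice
    # into batches of exactly batch_size.
    buf = []
    for chunk in chunks:
        buf.extend(chunk)
        while len(buf) >= batch_size:
            yield buf[:batch_size]
            buf = buf[batch_size:]
    if buf:  # trailing partial batch
        yield buf
```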
1,409,236,738
5,114
load_from_disk with remote filesystem fails due to a wrong temporary local folder path
## Describe the bug The function load_from_disk fails when using a remote filesystem because of a wrong temporary path generation in the load_from_disk method of arrow_dataset.py: ```python if is_remote_filesystem(fs): src_dataset_path = extract_path_from_uri(dataset_path) dataset_path = Dataset._build...
open
https://github.com/huggingface/datasets/issues/5114
2022-10-14T11:54:53
2022-11-19T07:13:10
null
{ "login": "bruno-hays", "id": 48770768, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,409,207,607
5,113
Fix filter indices when batched
This PR fixes a bug introduced by: - #5030 Fix #5112.
closed
https://github.com/huggingface/datasets/pull/5113
2022-10-14T11:30:03
2022-10-24T06:21:09
2022-10-14T12:11:44
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,409,143,409
5,112
Bug with filtered indices
## Describe the bug As reported by @PartiallyTyped (and by @Muennighoff): - https://github.com/huggingface/datasets/issues/5111#issuecomment-1278652524 There is an issue with the indices of a filtered dataset. ## Steps to reproduce the bug ```python ds = Dataset.from_dict({"num": [0, 1, 2, 3]}) ds = ds.filte...
closed
https://github.com/huggingface/datasets/issues/5112
2022-10-14T10:35:47
2022-10-14T13:55:03
2022-10-14T12:11:45
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
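A toy, pure-Python illustration of the indices-mapping idea behind `filter` that #5112 reports as broken. The real implementation stores an Arrow indices table; the names here are illustrative:

```python
data = {"num": [0, 1, 2, 3]}

# filter() avoids copying rows by recording which original row indices
# survive the predicate; subsequent reads go through this mapping.
keep = [i for i, v in enumerate(data["num"]) if v % 2 == 0]
filtered = {"num": [data["num"][i] for i in keep]}
```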
1,408,143,170
5,111
map and filter not working properly in multiprocessing with the new release 2.6.0
## Describe the bug When mapping is used on a dataset with more than one process, there is a weird behavior when trying to use `filter`: it's like only the samples from one worker are retrieved, and one needs to specify the same `num_proc` in filter for it to work properly. This doesn't happen with `datasets` version 2.5...
closed
https://github.com/huggingface/datasets/issues/5111
2022-10-13T17:00:55
2022-10-17T08:26:59
2022-10-14T14:59:59
{ "login": "loubnabnl", "id": 44069155, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,407,434,706
5,109
Map caching not working for some class methods
## Describe the bug The cache loading is not working as expected for some class methods with a model stored in an attribute. The new fingerprint for `_map_single` is not the same at each run. The hasher generates a different hash for the class method. This comes from the `dumps` function in `datasets.utils.py_utils` whic...
closed
https://github.com/huggingface/datasets/issues/5109
2022-10-13T09:12:58
2022-10-17T10:38:45
2022-10-17T10:38:45
{ "login": "Mouhanedg56", "id": 23029765, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,407,044,107
5,108
Fix a typo in arrow_dataset.py
null
closed
https://github.com/huggingface/datasets/pull/5108
2022-10-13T02:33:55
2022-10-14T09:47:28
2022-10-14T09:47:27
{ "login": "yangky11", "id": 5431913, "type": "User" }
[]
true
[]