Column schema of the dump below (min/max ranges as shown by the dataset viewer):

| column | dtype | min | max |
|---|---|---|---|
| id | int64 | 599M | 3.26B |
| number | int64 | 1 | 7.7k |
| title | string (length) | 1 | 290 |
| body | string (length) | 0 | 228k |
| state | string (2 classes) | | |
| html_url | string (length) | 46 | 51 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-07-23 08:04:53 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-07-23 18:53:44 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-07-23 16:44:42 |
| user | dict | | |
| labels | list (length) | 0 | 4 |
| is_pull_request | bool (2 classes) | | |
| comments | list (length) | 0 | 0 |
2,129,147,085
6,657
Release not pushed to conda channel
### Describe the bug The GitHub Actions step to publish release 2.17.0 to the conda channel has failed due to an expired token. Can someone please update the anaconda token and rerun the failed action? @albertvillanova ? ![image](https://github.com/huggingface/datasets/assets/7138162/1b56ad3d-7643-4778-9cce-4bf531717700...
closed
https://github.com/huggingface/datasets/issues/6657
2024-02-11T20:05:17
2024-03-06T15:06:22
2024-03-06T15:06:22
{ "login": "atulsaurav", "id": 7138162, "type": "User" }
[]
false
[]
2,127,338,377
6,656
Error when loading a big local json file
### Describe the bug When trying to load big json files from a local directory, `load_dataset` throws the following error ``` Traceback (most recent call last): File "/miniconda3/envs/conda-env/lib/python3.10/site-packages/datasets/builder.py", line 1989, in _prepare_split_single writer.write_table(table) ...
open
https://github.com/huggingface/datasets/issues/6656
2024-02-09T15:14:21
2024-11-29T10:06:57
null
{ "login": "Riccorl", "id": 10062216, "type": "User" }
[]
false
[]
2,127,020,042
6,655
Cannot load the dataset go_emotions
### Describe the bug When I run the following code I get an exception; `go_emotions = load_dataset("go_emotions")` > AttributeError Traceback (most recent call last) Cell In[6], [line 1](vscode-notebook-cell:?execution_count=6&line=1) ----> [1](vscode-notebook-cell:?execution_count=6&l...
open
https://github.com/huggingface/datasets/issues/6655
2024-02-09T12:15:39
2024-02-12T09:35:55
null
{ "login": "arame", "id": 688324, "type": "User" }
[]
false
[]
2,126,939,358
6,654
Batched dataset map throws exception that cannot cast fixed length array to Sequence
### Describe the bug I encountered a TypeError when batch processing a dataset with Sequence features in datasets package version 2.16.1. The error arises from a mismatch in handling fixed-size list arrays during the map function execution. Debugging pinpoints the issue to an if-statement in datasets/table.py, line 20...
closed
https://github.com/huggingface/datasets/issues/6654
2024-02-09T11:23:19
2024-02-12T08:26:53
2024-02-12T08:26:53
{ "login": "keesjandevries", "id": 1029671, "type": "User" }
[]
false
[]
2,126,831,929
6,653
Set dev version
null
closed
https://github.com/huggingface/datasets/pull/6653
2024-02-09T10:12:02
2024-02-09T10:18:20
2024-02-09T10:12:12
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,126,760,798
6,652
Release: 2.17.0
null
closed
https://github.com/huggingface/datasets/pull/6652
2024-02-09T09:25:01
2024-02-09T10:11:48
2024-02-09T10:05:35
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,126,649,626
6,651
Slice splits support for datasets.load_from_disk
### Feature request Support for slice splits in `datasets.load_from_disk`, similar to how it's already supported for `datasets.load_dataset`. ### Motivation Slice splits are convenient in a number of cases - adding support to `datasets.load_from_disk` would make working with local datasets easier and homogeniz...
open
https://github.com/huggingface/datasets/issues/6651
2024-02-09T08:00:21
2024-06-14T14:42:46
null
{ "login": "mhorlacher", "id": 37439882, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
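The slice notation that #6651 asks `load_from_disk` to adopt already exists in `load_dataset` (e.g. `split="train[:100]"` or `split="train[10%:20%]"`). A minimal sketch of how that notation behaves, applied to an in-memory list rather than the real library internals (`apply_split_slice` is a hypothetical helper, not datasets code):

```python
import re

def apply_split_slice(rows, spec):
    """Apply a load_dataset-style slice spec such as 'train[:100]' or
    'train[10%:20%]' to an in-memory list of rows.

    Illustrative sketch of the requested behavior for load_from_disk;
    apply_split_slice is a hypothetical helper, not library code."""
    m = re.fullmatch(r"\w+\[(-?\d+%?)?:(-?\d+%?)?\]", spec)
    if m is None:
        raise ValueError(f"unsupported slice spec: {spec!r}")

    def to_index(token):
        if token is None:
            return None
        if token.endswith("%"):
            # percent slices resolve against the split length
            return int(len(rows) * int(token[:-1]) / 100)
        return int(token)

    return rows[to_index(m.group(1)):to_index(m.group(2))]

rows = list(range(20))
print(apply_split_slice(rows, "train[:5]"))    # → [0, 1, 2, 3, 4]
print(apply_split_slice(rows, "train[75%:]"))  # → [15, 16, 17, 18, 19]
```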
2,125,680,991
6,650
AttributeError: 'InMemoryTable' object has no attribute '_batches'
### Describe the bug ``` Traceback (most recent call last): File "finetune.py", line 103, in <module> main(args) File "finetune.py", line 45, in main data_tokenized = data.map(partial(funcs.tokenize_function, tokenizer, File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict....
open
https://github.com/huggingface/datasets/issues/6650
2024-02-08T17:11:26
2024-02-21T00:34:41
null
{ "login": "matsuobasho", "id": 13874772, "type": "User" }
[]
false
[]
2,124,940,213
6,649
Minor multi gpu doc improvement
just added torch.no_grad and eval()
closed
https://github.com/huggingface/datasets/pull/6649
2024-02-08T11:17:24
2024-02-08T11:23:35
2024-02-08T11:17:35
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,124,813,589
6,648
Document usage of hfh cli instead of git
(basically the same content as the hfh upload docs, but adapted for datasets)
closed
https://github.com/huggingface/datasets/pull/6648
2024-02-08T10:24:56
2024-02-08T13:57:41
2024-02-08T13:51:39
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,123,397,569
6,647
Update loading.mdx to include "jsonl" file loading.
* A small update to the documentation, noting the ability to load jsonl files.
open
https://github.com/huggingface/datasets/pull/6647
2024-02-07T16:18:08
2024-02-08T15:34:17
null
{ "login": "mosheber", "id": 22236370, "type": "User" }
[]
true
[]
2,123,134,128
6,646
Better multi-gpu example
Use Qwen1.5-0.5B-Chat as an easy example for multi-GPU. The previous example used a translation model, and the way it was set up was not really the right way to use the model.
closed
https://github.com/huggingface/datasets/pull/6646
2024-02-07T14:15:01
2024-02-09T17:43:32
2024-02-07T14:59:11
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,122,956,818
6,645
Support fsspec 2024.2
Support fsspec 2024.2. First, we should address: - #6644
closed
https://github.com/huggingface/datasets/issues/6645
2024-02-07T12:45:29
2024-02-29T15:12:19
2024-02-29T15:12:19
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,122,955,282
6,644
Support fsspec 2023.12
Support fsspec 2023.12 by handling previous and new glob behavior.
closed
https://github.com/huggingface/datasets/issues/6644
2024-02-07T12:44:39
2024-02-29T15:12:18
2024-02-29T15:12:18
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,121,239,039
6,643
Faiss GPU index cannot be serialised when passed to trainer
### Describe the bug I am working on a retrieval project and have encountered two issues in the Hugging Face faiss integration: 1. I am trying to pass in a dataset with a faiss index to the Huggingface trainer. The code works for a cpu faiss index, but doesn't for a gpu one, getting error: ``` ...
open
https://github.com/huggingface/datasets/issues/6643
2024-02-06T16:41:00
2024-02-15T10:29:32
null
{ "login": "rubenweitzman", "id": 56388976, "type": "User" }
[]
false
[]
2,119,085,766
6,642
Differently dataset object saved than it is loaded.
### Describe the bug A differently sized object is saved than is loaded back. ### Steps to reproduce the bug Hi, I save the dataset in the following way: ``` dataset = load_dataset("json", data_files={ "train": os.path.join(input_folder, f"{task_met...
closed
https://github.com/huggingface/datasets/issues/6642
2024-02-05T17:28:57
2024-02-06T09:50:19
2024-02-06T09:50:19
{ "login": "MFajcik", "id": 31218150, "type": "User" }
[]
false
[]
2,116,963,132
6,641
unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte
### Describe the bug unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte ### Steps to reproduce the bug ``` import sys sys.getdefaultencoding() 'utf-8' from datasets import load_dataset print(f"Train dataset size: {len(dataset['train'])}") print(f"Test datase...
closed
https://github.com/huggingface/datasets/issues/6641
2024-02-04T08:49:31
2024-02-06T09:26:07
2024-02-06T09:11:45
{ "login": "Hughhuh", "id": 109789057, "type": "User" }
[]
false
[]
2,115,864,531
6,640
Sign Language Support
### Feature request Currently, there are only a few Sign Language labels; I would like to propose adding all the Signed Languages described in this ISO standard as new labels: https://www.evertype.com/standards/iso639/sign-language.html ### Motivation Datasets currently only have labels for several signe...
open
https://github.com/huggingface/datasets/issues/6640
2024-02-02T21:54:51
2024-02-02T21:54:51
null
{ "login": "Merterm", "id": 6684795, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,114,620,200
6,639
Run download_and_prepare if missing splits
A first step towards https://github.com/huggingface/datasets/issues/6529
open
https://github.com/huggingface/datasets/pull/6639
2024-02-02T10:36:49
2024-02-06T16:54:22
null
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,113,329,257
6,638
Cannot download wmt16 dataset
### Describe the bug As of this morning (PST) 2/1/2024, seeing the wmt16 dataset is missing from opus , could you suggest an alternative? ``` Downloading data files: 0%| | 0/4 [00:00<?, ?it/s]Tra...
closed
https://github.com/huggingface/datasets/issues/6638
2024-02-01T19:41:42
2024-02-01T20:07:29
2024-02-01T20:07:29
{ "login": "vidyasiv", "id": 81709031, "type": "User" }
[]
false
[]
2,113,025,975
6,637
'with_format' is extremely slow when used together with 'interleave_datasets' or 'shuffle' on IterableDatasets
### Describe the bug If you: 1. Interleave two iterable datasets together with the interleave_datasets function, or shuffle an iterable dataset 2. Set the output format to torch tensors with .with_format('torch') Then iterating through the dataset becomes over 100x slower than it is if you don't apply the torch...
open
https://github.com/huggingface/datasets/issues/6637
2024-02-01T17:16:54
2024-02-05T10:43:47
null
{ "login": "tobycrisford", "id": 22883190, "type": "User" }
[]
false
[]
2,110,781,097
6,636
Faster column validation and reordering
I work with bioinformatics data and often these tables have thousands and even tens of thousands of features. These tables are also accompanied by metadata that I do not want to pass in the model. When I perform `set_format('pt', columns=large_column_list)` , it can take several minutes before it finishes. The culprit ...
closed
https://github.com/huggingface/datasets/pull/6636
2024-01-31T19:08:28
2024-02-07T19:39:00
2024-02-06T23:03:38
{ "login": "psmyth94", "id": 11325244, "type": "User" }
[]
true
[]
2,110,659,519
6,635
Fix missing info when loading some datasets from Parquet export
Fix getting the info for script-based datasets with Parquet export with a single config not named "default". E.g. ```python from datasets import load_dataset_builder b = load_dataset_builder("bookcorpus") print(b.info.features) # should print {'text': Value(dtype='string', id=None)} ``` I fixed this by ...
closed
https://github.com/huggingface/datasets/pull/6635
2024-01-31T17:55:21
2024-02-07T16:48:55
2024-02-07T16:41:04
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,110,242,376
6,634
Support data_dir parameter in push_to_hub
Support `data_dir` parameter in `push_to_hub`. This allows users to organize the data files according to their specific needs. For example, "wikimedia/wikipedia" files could be organized by year and/or date, e.g. "2024/20240101/20240101.en".
closed
https://github.com/huggingface/datasets/pull/6634
2024-01-31T14:37:36
2024-02-05T10:32:49
2024-02-05T10:26:40
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,110,124,475
6,633
dataset viewer requires no-script
null
closed
https://github.com/huggingface/datasets/pull/6633
2024-01-31T13:41:54
2024-01-31T14:05:04
2024-01-31T13:59:01
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
2,108,541,678
6,632
Fix reload cache with data dir
The cache used to only check for the latest cache directory with a given config_name, but it was wrong (e.g. `default-data_dir=data%2Ffortran-data_dir=data%2Ffortran` instead of `default-data_dir=data%2Ffortran`) I fixed this by not passing the `config_kwargs` to the parent Builder `__init__`, and passing the config...
closed
https://github.com/huggingface/datasets/pull/6632
2024-01-30T18:52:23
2024-02-06T17:27:35
2024-02-06T17:21:24
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,107,802,473
6,631
Fix filelock: use current umask for filelock >= 3.10
reported in https://github.com/huggingface/evaluate/issues/542 cc @stas00 @williamberrios close https://github.com/huggingface/datasets/issues/6589
closed
https://github.com/huggingface/datasets/pull/6631
2024-01-30T12:56:01
2024-01-30T15:34:49
2024-01-30T15:28:37
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,106,478,275
6,630
Bump max range of dill to 0.3.8
Release on Jan 27, 2024: https://pypi.org/project/dill/0.3.8/#history
closed
https://github.com/huggingface/datasets/pull/6630
2024-01-29T21:35:55
2024-01-30T16:19:45
2024-01-30T15:12:25
{ "login": "ringohoffman", "id": 27844407, "type": "User" }
[]
true
[]
2,105,774,482
6,629
Support push_to_hub without org/user to default to logged-in user
This behavior is aligned with: - the behavior of `datasets` before merging #6519 - the behavior described in the corresponding docstring - the behavior of `huggingface_hub.create_repo` Revert "Support push_to_hub canonical datasets (#6519)" - This reverts commit a887ee78835573f5d80f9e414e8443b4caff3541. Fix...
closed
https://github.com/huggingface/datasets/pull/6629
2024-01-29T15:36:52
2024-02-05T12:35:43
2024-02-05T12:29:36
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,105,760,502
6,628
Make CLI test support multi-processing
Support passing `--num_proc` to CLI test. This was really useful recently to run the command on `pubmed`: https://huggingface.co/datasets/pubmed/discussions/11
closed
https://github.com/huggingface/datasets/pull/6628
2024-01-29T15:30:09
2024-02-05T10:29:20
2024-02-05T10:23:13
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,105,735,816
6,627
Disable `tqdm` bars in non-interactive environments
Replace `disable=False` with `disable=None` in the `tqdm` bars to disable them in non-interactive environments (by default). For more info, see a [similar PR](https://github.com/huggingface/huggingface_hub/pull/2000) in `huggingface_hub`.
closed
https://github.com/huggingface/datasets/pull/6627
2024-01-29T15:18:21
2024-01-29T15:47:34
2024-01-29T15:41:32
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
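The `disable=None` convention adopted in #6627 tells `tqdm` to auto-disable its bars when output is not an interactive terminal. A rough sketch of that detection logic (the idea only, not tqdm's actual implementation):

```python
import io
import sys

def bars_enabled(stream=None):
    """Mimic tqdm's disable=None auto-detection: keep progress bars only
    when the output stream is an interactive terminal. Sketch of the
    idea, not tqdm's actual code."""
    stream = sys.stderr if stream is None else stream
    try:
        return bool(stream.isatty())
    except (AttributeError, ValueError):
        return False

# An in-memory buffer is not a TTY, so bars would be auto-disabled:
print(bars_enabled(io.StringIO()))  # → False
```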
2,105,482,522
6,626
Raise error on bad split name
e.g. dashes '-' are not allowed in split names This should add an error message on datasets with unsupported split names like https://huggingface.co/datasets/open-source-metrics/test cc @AndreaFrancis
closed
https://github.com/huggingface/datasets/pull/6626
2024-01-29T13:17:41
2024-01-29T15:18:25
2024-01-29T15:12:18
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,103,950,718
6,624
How to download the laion-coco dataset
The laion coco dataset is not available now. How to download it https://huggingface.co/datasets/laion/laion-coco
closed
https://github.com/huggingface/datasets/issues/6624
2024-01-28T03:56:05
2024-02-06T09:43:31
2024-02-06T09:43:31
{ "login": "vanpersie32", "id": 15981416, "type": "User" }
[]
false
[]
2,103,870,123
6,623
streaming datasets doesn't work properly with multi-node
### Feature request Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it. Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt...
open
https://github.com/huggingface/datasets/issues/6623
2024-01-27T23:46:13
2024-10-16T00:55:19
null
{ "login": "rohitgr7", "id": 30778939, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
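The repetition concern in #6623 comes down to how streamed examples are assigned to nodes. A strided assignment, roughly what `split_dataset_by_node` falls back to when shards cannot be divided evenly across nodes (illustrative sketch only, not the library implementation):

```python
def shard_for_node(examples, rank, world_size):
    """Strided example assignment: node `rank` keeps every
    `world_size`-th example, so no example is repeated across nodes.
    Illustrative sketch, not the split_dataset_by_node code."""
    for i, example in enumerate(examples):
        if i % world_size == rank:
            yield example

samples = [1, 2, 3, 4, 5]
print(list(shard_for_node(samples, 0, 2)))  # → [1, 3, 5]
print(list(shard_for_node(samples, 1, 2)))  # → [2, 4]
```

With 5 samples and 2 nodes the per-node counts are uneven (3 vs 2), which is exactly why DDP loops need padding, truncation, or a drop-last policy on top of any sharding scheme.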
2,103,780,697
6,622
multi-GPU map does not work
### Describe the bug Here is the code for single-GPU processing: https://pastebin.com/bfmEeK2y Here is the code for multi-GPU processing: https://pastebin.com/gQ7i5AQy Here is the video showing that the multi-GPU mapping does not work as expected (there are so many things wrong here, it's better to watch the 3-min...
closed
https://github.com/huggingface/datasets/issues/6622
2024-01-27T20:06:08
2024-02-08T11:18:21
2024-02-08T11:18:21
{ "login": "kopyl", "id": 17604849, "type": "User" }
[]
false
[]
2,103,675,294
6,621
deleted
...
closed
https://github.com/huggingface/datasets/issues/6621
2024-01-27T16:59:58
2024-01-27T17:14:43
2024-01-27T17:14:43
{ "login": "kopyl", "id": 17604849, "type": "User" }
[]
false
[]
2,103,110,536
6,620
wiki_dpr.py error (ID mismatch between lines {id} and vector {vec_id}
### Describe the bug I'm trying to run a rag example, and the dataset is wiki_dpr. wiki_dpr download and extracting have been completed successfully. However, at the generating train split stage, an error from wiki_dpr.py keeps popping up. Especially in "_generate_examples" : 1. The following error occurs in the...
closed
https://github.com/huggingface/datasets/issues/6620
2024-01-27T01:00:09
2024-02-06T09:40:19
2024-02-06T09:40:19
{ "login": "kiehls90", "id": 101498700, "type": "User" }
[]
false
[]
2,102,407,478
6,619
Migrate from `setup.cfg` to `pyproject.toml`
Based on https://github.com/huggingface/huggingface_hub/pull/1971 in `hfh`
closed
https://github.com/huggingface/datasets/pull/6619
2024-01-26T15:27:10
2024-01-26T15:53:40
2024-01-26T15:47:32
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
2,101,868,198
6,618
While importing load_dataset from datasets
### Describe the bug cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_' this is the error i received ### Steps to reproduce the bug from datasets import load_dataset ### Expected behavior No errors ### Environment info python 3.11.5
closed
https://github.com/huggingface/datasets/issues/6618
2024-01-26T09:21:57
2024-07-23T09:31:07
2024-02-06T09:25:54
{ "login": "suprith-hub", "id": 77973415, "type": "User" }
[]
false
[]
2,100,459,449
6,617
Fix CI: pyarrow 15, pandas 2.2 and sqlachemy
this should fix the CI failures on `main` close https://github.com/huggingface/datasets/issues/5477
closed
https://github.com/huggingface/datasets/pull/6617
2024-01-25T13:57:41
2024-01-26T14:56:46
2024-01-26T14:50:44
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,100,125,709
6,616
Use schema metadata only if it matches features
e.g. if we use `map` in arrow format and transform the table, the returned table might have new columns but the metadata might be wrong
closed
https://github.com/huggingface/datasets/pull/6616
2024-01-25T11:01:14
2024-01-26T16:25:24
2024-01-26T16:19:12
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,098,951,409
6,615
...
...
closed
https://github.com/huggingface/datasets/issues/6615
2024-01-24T19:37:03
2024-01-24T19:42:30
2024-01-24T19:40:11
{ "login": "ftkeys", "id": 22179777, "type": "User" }
[]
false
[]
2,098,884,520
6,614
`datasets/downloads` cleanup tool
### Feature request Splitting off https://github.com/huggingface/huggingface_hub/issues/1997 - currently `huggingface-cli delete-cache` doesn't take care of cleaning `datasets` temp files e.g. I discovered having millions of files under `datasets/downloads` cache, I had to do: ``` sudo find /data/huggingface/...
open
https://github.com/huggingface/datasets/issues/6614
2024-01-24T18:52:10
2024-01-24T18:55:09
null
{ "login": "stas00", "id": 10676103, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,098,078,210
6,612
cnn_dailymail repeats itself
### Describe the bug When I try to load `cnn_dailymail` dataset, it takes longer than usual and when I checked the dataset it's 3x bigger than it's supposed to be. Check https://huggingface.co/datasets/cnn_dailymail: it says 287k rows for train. But when I check length of train split it says 861339. Also I che...
closed
https://github.com/huggingface/datasets/issues/6612
2024-01-24T11:38:25
2024-02-01T08:14:50
2024-02-01T08:14:50
{ "login": "KeremZaman", "id": 8274752, "type": "User" }
[]
false
[]
2,096,004,858
6,611
`load_from_disk` with large dataset from S3 runs into `botocore.exceptions.ClientError`
### Describe the bug When loading a large dataset (>1000GB) from S3 I run into the following error: ``` Traceback (most recent call last): File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 113, in _error_wrapper return await func(*args, **kwargs) File "/home/alp/.local/lib/python3....
open
https://github.com/huggingface/datasets/issues/6611
2024-01-23T12:37:57
2024-01-23T12:37:57
null
{ "login": "zotroneneis", "id": 15320635, "type": "User" }
[]
false
[]
2,095,643,711
6,610
cast_column to Sequence(subfeatures_dict) has err
### Describe the bug I am working with the following demo code: ``` from datasets import load_dataset from datasets.features import Sequence, Value, ClassLabel, Features ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1978/") ais_dataset = ais_dataset["train"] def add_class(example): ...
closed
https://github.com/huggingface/datasets/issues/6610
2024-01-23T09:32:32
2024-01-25T02:15:23
2024-01-25T02:15:23
{ "login": "neiblegy", "id": 16574677, "type": "User" }
[]
false
[]
2,095,085,650
6,609
Wrong path for cache directory in offline mode
### Describe the bug Dear huggingfacers, I'm trying to use a subset of the-stack dataset. When I run the command the first time ``` dataset = load_dataset( path='bigcode/the-stack', data_dir='data/fortran', split='train' ) ``` It downloads the files and caches them normally. Nevertheless, ...
closed
https://github.com/huggingface/datasets/issues/6609
2024-01-23T01:47:19
2024-02-06T17:21:25
2024-02-06T17:21:25
{ "login": "je-santos", "id": 42117435, "type": "User" }
[]
false
[]
2,094,153,292
6,608
Add `with_rank` param to `Dataset.filter`
Fix #6564
closed
https://github.com/huggingface/datasets/pull/6608
2024-01-22T15:19:16
2024-01-29T16:43:11
2024-01-29T16:36:53
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
2,091,766,063
6,607
Update features.py to avoid bfloat16 unsupported error
Fixes https://github.com/huggingface/datasets/issues/6566 Let me know if there are any tests I need to clear.
closed
https://github.com/huggingface/datasets/pull/6607
2024-01-20T00:39:44
2024-05-17T09:46:29
2024-05-17T09:40:13
{ "login": "skaulintel", "id": 75697181, "type": "User" }
[]
true
[]
2,091,088,785
6,606
Dedicated RNG object for fingerprinting
Closes https://github.com/huggingface/datasets/issues/6604, closes https://github.com/huggingface/datasets/issues/2775
closed
https://github.com/huggingface/datasets/pull/6606
2024-01-19T18:34:47
2024-01-26T15:11:38
2024-01-26T15:05:34
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
2,090,188,376
6,605
ELI5 no longer available, but referenced in example code
Here, an example code is given: https://huggingface.co/docs/transformers/tasks/language_modeling This code + article references the ELI5 dataset. ELI5 is no longer available, as the ELI5 dataset page states: https://huggingface.co/datasets/eli5 "Defunct: Dataset "eli5" is defunct and no longer accessible due to u...
closed
https://github.com/huggingface/datasets/issues/6605
2024-01-19T10:21:52
2024-02-01T17:58:23
2024-02-01T17:58:22
{ "login": "drdsgvo", "id": 81480344, "type": "User" }
[]
false
[]
2,089,713,945
6,604
Transform fingerprint collisions due to setting fixed random seed
### Describe the bug The transform fingerprinting logic relies on the `random` library for random bits when the function is not hashable (e.g. bound methods as used in `trl`: https://github.com/huggingface/trl/blob/main/trl/trainer/dpo_trainer.py#L356). This causes collisions when the training code sets a fixed random...
closed
https://github.com/huggingface/datasets/issues/6604
2024-01-19T06:32:25
2024-01-26T15:05:35
2024-01-26T15:05:35
{ "login": "normster", "id": 6687910, "type": "User" }
[]
false
[]
2,089,230,766
6,603
datasets map `cache_file_name` does not work
### Describe the bug In the documentation `datasets.Dataset.map` arg `cache_file_name` is said to be a string, but it doesn't work. ### Steps to reproduce the bug 1. pick a dataset 2. write a map function 3. do `ds.map(..., cache_file_name='some_filename')` 4. it crashes ### Expected behavior It will tell you t...
open
https://github.com/huggingface/datasets/issues/6603
2024-01-18T23:08:30
2024-01-28T04:01:15
null
{ "login": "ChenchaoZhao", "id": 35147961, "type": "User" }
[]
false
[]
2,089,217,483
6,602
Index error when data is large
### Describe the bug At `save_to_disk` step, the `max_shard_size` by default is `500MB`. However, one row of the dataset might be larger than `500MB` then the saving will throw an index error. Without looking at the source code, the bug is due to wrong calculation of number of shards which i think is `total_size / m...
open
https://github.com/huggingface/datasets/issues/6602
2024-01-18T23:00:47
2025-04-16T04:13:01
null
{ "login": "ChenchaoZhao", "id": 35147961, "type": "User" }
[]
false
[]
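The issue body above guesses that the shard count is computed as `total_size / max_shard_size`, which breaks when a single row exceeds the cap. A guarded version that rounds up and clamps to the row count (a sketch of a possible fix, not the library's code):

```python
import math

def num_shards(total_size, num_rows, max_shard_size):
    """Guarded shard-count calculation (illustrative, not the datasets
    implementation): round up instead of truncating, never exceed the
    number of rows, and never return zero."""
    shards = math.ceil(total_size / max_shard_size)
    return max(1, min(shards, num_rows))

# One 800MB row with a 500MB shard cap: naive `total // max` and a
# ceil both mis-handle this unless clamped to the row count.
print(num_shards(800_000_000, 1, 500_000_000))      # → 1
print(num_shards(2_000_000_000, 1000, 500_000_000)) # → 4
```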
2,088,624,054
6,601
add safety checks when using only part of dataset
Added some checks to prevent errors that arise when using evaluate.py on only a portion of the squad 2.0 dataset.
open
https://github.com/huggingface/datasets/pull/6601
2024-01-18T16:16:59
2024-02-08T14:33:10
null
{ "login": "benseddikismail", "id": 63422923, "type": "User" }
[]
true
[]
2,088,446,385
6,600
Loading CSV exported dataset has unexpected format
### Describe the bug I wanted to be able to save a HF dataset for translations and load it again in another script, but I'm a bit confused with the documentation and the result I've got so I'm opening this issue to ask if this behavior is as expected. ### Steps to reproduce the bug The documentation I've mainly cons...
open
https://github.com/huggingface/datasets/issues/6600
2024-01-18T14:48:27
2024-01-23T14:42:32
null
{ "login": "OrianeN", "id": 59572247, "type": "User" }
[]
false
[]
2,086,684,664
6,599
Easy way to segment into 30s snippets given an m4a file and a vtt file
### Feature request Uploading datasets is straightforward thanks to the ability to push Audio to hub. However, it would be nice if the data (text and audio) could be segmented when being pushed (if not possible already). ### Motivation It's easy to create a vtt file from an audio file. If there could be auto-segment...
closed
https://github.com/huggingface/datasets/issues/6599
2024-01-17T17:51:40
2024-01-23T10:42:17
2024-01-22T15:35:49
{ "login": "RonanKMcGovern", "id": 78278410, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,084,236,605
6,598
Unexpected keyword argument 'hf' when downloading CSV dataset from S3
### Describe the bug I receive this error message when using `load_dataset` with "csv" path and `dataset_files=s3://...`: ``` TypeError: Session.__init__() got an unexpected keyword argument 'hf' ``` I found a similar issue here: https://stackoverflow.com/questions/77596258/aws-issue-load-dataset-from-s3-fails-w...
closed
https://github.com/huggingface/datasets/issues/6598
2024-01-16T15:16:01
2025-01-31T15:35:33
2024-07-23T14:30:10
{ "login": "dguenms", "id": 5592111, "type": "User" }
[]
false
[]
2,083,708,521
6,597
Dataset.push_to_hub of a canonical dataset creates an additional dataset under the user namespace
While using `Dataset.push_to_hub` of a canonical dataset, an additional dataset was created under my user namespace. ## Steps to reproduce the bug The command: ```python commit_info = ds.push_to_hub( "caner", config_name="default", commit_message="Convert dataset to Parquet", commit_descriptio...
closed
https://github.com/huggingface/datasets/issues/6597
2024-01-16T11:27:07
2024-02-05T12:29:37
2024-02-05T12:29:37
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
2,083,108,156
6,596
Drop redundant None guard.
`xxx if xxx is not None else None` is a no-op.
closed
https://github.com/huggingface/datasets/pull/6596
2024-01-16T06:31:54
2024-01-16T17:16:16
2024-01-16T17:05:52
{ "login": "xkszltl", "id": 5203025, "type": "User" }
[]
true
[]
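A quick demonstration of why the pattern this PR removes is redundant:

```python
def with_guard(x):
    # the pattern removed by this PR: both branches return the same value
    return x if x is not None else None

def without_guard(x):
    return x

# Equivalent for every input, None included:
for value in (None, 0, "", [1, 2], 42):
    assert with_guard(value) == without_guard(value)
print("guard is redundant for every input")  # → guard is redundant for every input
```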
2,082,896,148
6,595
Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2
### Describe the bug I'm aware of the issue #5695 . I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16 So i 1. Map dataset 2. Save to disk 3. Try to upload: ``` import data...
closed
https://github.com/huggingface/datasets/issues/6595
2024-01-16T02:03:09
2024-01-27T18:26:33
2024-01-26T02:28:32
{ "login": "kopyl", "id": 17604849, "type": "User" }
[]
false
[]
2,082,748,275
6,594
IterableDataset sharding logic needs improvement
### Describe the bug The sharding of IterableDatasets with respect to distributed and dataloader worker processes appears problematic, with significant performance traps and inconsistencies with respect to distributed train processes vs worker processes. Splitting across num_workers (per train process loader processes) and...
open
https://github.com/huggingface/datasets/issues/6594
2024-01-15T22:22:36
2024-10-15T06:27:13
null
{ "login": "rwightman", "id": 5702664, "type": "User" }
[]
false
[]
2,082,410,257
6,592
Logs are delayed when doing .map when `docker logs`
### Describe the bug When I run my SD training in a Docker image and then listen to logs like `docker logs train -f`, the progress bar is delayed. It's updating every few percent. When you have a large dataset that has to be mapped (like 1+ million samples), it's crucial to see the updates in real-time, not every co...
closed
https://github.com/huggingface/datasets/issues/6592
2024-01-15T17:05:21
2024-02-12T17:35:21
2024-02-12T17:35:21
{ "login": "kopyl", "id": 17604849, "type": "User" }
[]
false
[]
2,082,378,957
6,591
The datasets models housed in Dropbox can't support a lot of users downloading them
### Describe the bug I'm using the datasets ``` from datasets import load_dataset, Audio dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") ``` And it seems that sometimes when I imagine a lot of users are accessing the same resources, the Dropbox host fails: `raise ConnectionError(...
closed
https://github.com/huggingface/datasets/issues/6591
2024-01-15T16:43:38
2024-01-22T23:18:09
2024-01-22T23:18:09
{ "login": "RDaneelOlivav", "id": 4933774, "type": "User" }
[]
false
[]
2,082,000,084
6,590
Feature request: Multi-GPU dataset mapping for SDXL training
### Feature request We need to speed up SDXL dataset pre-process. Please make it possible to use multiple GPUs for the [official SDXL trainer](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py) :) ### Motivation Pre-computing 3 million images takes around ...
open
https://github.com/huggingface/datasets/issues/6590
2024-01-15T13:06:06
2024-01-15T13:07:07
null
{ "login": "kopyl", "id": 17604849, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,081,358,619
6,589
After `2.16.0` version, there are `PermissionError` when users use shared cache_dir
### Describe the bug - We use shared `cache_dir` using `HF_HOME="{shared_directory}"` - After dataset version 2.16.0, datasets uses `filelock` package for file locking #6445 - But, `filelock` package make `.lock` file with `644` permission - Dataset is not available to other users except the user who created the ...
closed
https://github.com/huggingface/datasets/issues/6589
2024-01-15T06:46:27
2024-02-02T07:55:38
2024-01-30T15:28:38
{ "login": "minhopark-neubla", "id": 106717516, "type": "User" }
[]
false
[]
2,081,284,253
6,588
fix os.listdir return name is empty string
### Describe the bug xlistdir returns an empty string as the name (overloaded os.listdir). ### Steps to reproduce the bug ```python from datasets.download.streaming_download_manager import xjoin from datasets.download.streaming_download_manager import xlistdir config = DownloadConfig(storage_options=options) manger = Str...
closed
https://github.com/huggingface/datasets/issues/6588
2024-01-15T05:34:36
2024-01-24T10:08:29
2024-01-24T10:08:29
{ "login": "d710055071", "id": 12895488, "type": "User" }
[]
false
[]
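A minimal stdlib sketch of the symptom and a defensive workaround (the simulated names are hypothetical; this is not the datasets code path):

```python
# Hypothetical sketch: a listing helper that yields empty strings (as
# reported for xlistdir) breaks downstream path joins, so a defensive
# caller can filter out falsy names before building paths.
def safe_listing(names):
    return [n for n in names if n]

raw = ["n01440764.JPEG", "", "n01443537.JPEG", ""]  # simulated xlistdir output
print(safe_listing(raw))  # ['n01440764.JPEG', 'n01443537.JPEG']
```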
2,080,348,016
6,587
Allow concatenation of datasets with mixed structs
Fixes #6466 The idea is to do a recursive check for structs. PyArrow handles it well enough. For a demo you can do: ```python from datasets import Dataset, concatenate_datasets ds = Dataset.from_dict({'speaker': [{'name': 'Ben', 'email': None}]}) ds2 = Dataset.from_dict({'speaker': [{'name': 'Fred', 'e...
closed
https://github.com/huggingface/datasets/pull/6587
2024-01-13T15:33:20
2024-02-15T15:20:06
2024-02-08T14:38:32
{ "login": "Dref360", "id": 8976546, "type": "User" }
[]
true
[]
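The recursive struct handling the PR describes can be sketched in plain Python (the helpers and field names here are illustrative, not the actual PyArrow-based implementation):

```python
# Hypothetical sketch of the core idea: union the field names of two
# struct schemas, so rows from either side keep all fields and any
# missing field becomes None.
def union_struct_fields(a, b):
    return list(dict.fromkeys(list(a) + list(b)))  # order-preserving union

def widen_row(row, fields):
    return {f: row.get(f) for f in fields}

fields = union_struct_fields({"name": "string"}, {"name": "string", "email": "string"})
print(widen_row({"name": "Ben"}, fields))  # {'name': 'Ben', 'email': None}
```

The real change works on PyArrow struct types recursively, so nested structs get the same key-union treatment.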
2,079,192,651
6,586
keep more info in DatasetInfo.from_merge #6585
* try not to merge DatasetInfos if they're equal * fixes losing DatasetInfo during parallel Dataset.map
closed
https://github.com/huggingface/datasets/pull/6586
2024-01-12T16:08:16
2024-01-26T15:59:35
2024-01-26T15:53:28
{ "login": "JochenSiegWork", "id": 135010976, "type": "User" }
[]
true
[]
2,078,874,005
6,585
losing DatasetInfo in Dataset.map when num_proc > 1
### Describe the bug Hello and thanks for developing this package! When I process a Dataset with the map function using multiple processes, some set attributes of the DatasetInfo get lost and are None in the resulting Dataset. ### Steps to reproduce the bug ```python from datasets import Dataset, DatasetInfo...
open
https://github.com/huggingface/datasets/issues/6585
2024-01-12T13:39:19
2024-01-12T14:08:24
null
{ "login": "JochenSiegWork", "id": 135010976, "type": "User" }
[]
false
[]
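The merge strategy proposed in the linked fix (#6586) can be sketched with a toy dataclass (the `Info` class and its field list are stand-ins, not the real `DatasetInfo`):

```python
from dataclasses import dataclass

# Hypothetical sketch: when combining the infos coming back from num_proc
# workers, keep a field whenever all copies agree, instead of discarding it.
@dataclass
class Info:
    description: str = ""
    license: str = ""

def merge_infos(infos):
    merged = Info()
    for field in ("description", "license"):
        values = {getattr(i, field) for i in infos}
        if len(values) == 1:          # all workers agree: keep the value
            setattr(merged, field, values.pop())
    return merged

parts = [Info("my dataset", "mit"), Info("my dataset", "mit")]
print(merge_infos(parts))  # Info(description='my dataset', license='mit')
```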
2,078,454,878
6,584
np.fromfile not supported
How can np.fromfile be supported for streaming, the way np.load is? ```python def xnumpy_fromfile(filepath_or_buffer, *args, download_config: Optional[DownloadConfig] = None, **kwargs): import numpy as np if hasattr(filepath_or_buffer, "read"): return np.fromfile(filepath_or_buffer, *args, **kwargs) else: ...
open
https://github.com/huggingface/datasets/issues/6584
2024-01-12T09:46:17
2024-01-15T05:20:50
null
{ "login": "d710055071", "id": 12895488, "type": "User" }
[]
false
[]
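The dispatch in the truncated snippet above can be sketched with the stdlib alone (`fromfile_like` is a hypothetical helper; `struct` stands in for NumPy's binary parsing):

```python
import io
import struct

# Hypothetical sketch: if the input already exposes .read() (a streamed
# buffer), consume it directly; otherwise treat it as a local path.
# This mirrors the branch in the issue's xnumpy_fromfile snippet.
def fromfile_like(filepath_or_buffer):
    if hasattr(filepath_or_buffer, "read"):
        data = filepath_or_buffer.read()
    else:
        with open(filepath_or_buffer, "rb") as f:
            data = f.read()
    count = len(data) // 4
    return list(struct.unpack("<%di" % count, data[: count * 4]))

buf = io.BytesIO(struct.pack("<3i", 1, 2, 3))
print(fromfile_like(buf))  # [1, 2, 3]
```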
2,077,049,491
6,583
remove eli5 test
since the dataset is defunct
closed
https://github.com/huggingface/datasets/pull/6583
2024-01-11T16:05:20
2024-01-11T16:15:34
2024-01-11T16:09:24
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,076,072,101
6,582
Fix for Incorrect ex_iterable used with multi num_worker
Corrects an issue where `self._ex_iterable` was erroneously used instead of `ex_iterable` when Distributed Data Parallel (DDP) and multiple DataLoader workers are used concurrently. This improper usage led to the generation of incorrect `shards_indices`, subsequently causing issues with the control flow responsible for work...
closed
https://github.com/huggingface/datasets/pull/6582
2024-01-11T08:49:43
2024-03-01T19:09:14
2024-03-01T19:02:33
{ "login": "kq-chen", "id": 136600500, "type": "User" }
[]
true
[]
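Why a stale iterable produces wrong shard indices can be illustrated with a toy round-robin assignment (a simplified model, not the datasets sharding code):

```python
# Hypothetical sketch: with DDP plus num_workers > 0, each of
# world_size * num_workers readers must compute its shard indices from the
# same, current iterable; a stale reference yields overlapping or missing
# shards. Here every shard goes to exactly one global worker.
def shard_indices(global_worker_id, total_workers, num_shards):
    return [i for i in range(num_shards) if i % total_workers == global_worker_id]

all_shards = [shard_indices(w, 4, 10) for w in range(4)]
print(all_shards)  # [[0, 4, 8], [1, 5, 9], [2, 6], [3, 7]]
assert sorted(sum(all_shards, [])) == list(range(10))  # no overlap, full coverage
```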
2,075,919,265
6,581
fix os.listdir returning empty string names
fix #6588: xlistdir returns empty strings as names. For example: ` from datasets.download.streaming_download_manager import xjoin from datasets.download.streaming_download_manager import xlistdir config = DownloadConfig(storage_options=options) manger = StreamingDownloadManager("ILSVRC2012",download_config=config...
closed
https://github.com/huggingface/datasets/pull/6581
2024-01-11T07:10:55
2024-01-24T10:14:43
2024-01-24T10:08:28
{ "login": "d710055071", "id": 12895488, "type": "User" }
[]
true
[]
2,075,645,042
6,580
dataset cache only stores one config of the dataset in the parquet dir and uses it for all other configs, resulting in the same data being shown for every config.
### Describe the bug ds = load_dataset("ai2_arc", "ARC-Easy"). I have tried forcing a redownload, deleting the cache, and changing the cache dir. ### Steps to reproduce the bug dataset = [] dataset_name = "ai2_arc" possible_configs = [ 'ARC-Challenge', 'ARC-Easy' ] for config in possible_configs: data...
closed
https://github.com/huggingface/datasets/issues/6580
2024-01-11T03:14:18
2024-01-20T12:46:16
2024-01-20T12:46:16
{ "login": "kartikgupta321", "id": 78641018, "type": "User" }
[]
false
[]
2,075,407,473
6,579
Unable to load `eli5` dataset with streaming
### Describe the bug Unable to load `eli5` dataset with streaming. ### Steps to reproduce the bug This fails with FileNotFoundError: https://files.pushshift.io/reddit/submissions ``` from datasets import load_dataset load_dataset("eli5", streaming=True) ``` This works correctly. ``` from datasets import lo...
closed
https://github.com/huggingface/datasets/issues/6579
2024-01-10T23:44:20
2024-01-11T09:19:18
2024-01-11T09:19:17
{ "login": "haok1402", "id": 89672451, "type": "User" }
[]
false
[]
2,074,923,321
6,578
Faster webdataset streaming
requests.get(..., stream=True) is faster than using HTTP range requests when streaming large TAR files. It can be enabled using block_size=0 in fsspec. cc @rwightman
closed
https://github.com/huggingface/datasets/pull/6578
2024-01-10T18:18:09
2024-01-30T18:46:02
2024-01-30T18:39:51
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,074,790,848
6,577
502 Server Errors when streaming large dataset
### Describe the bug When streaming a [large ASR dataset](https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set) from the Hub (~3TB), I often encounter 502 Server Errors seemingly randomly during streaming: ``` huggingface_hub.utils._errors.HfHubHTTPError: 502 Server Error: Bad Gateway for url: http...
closed
https://github.com/huggingface/datasets/issues/6577
2024-01-10T16:59:36
2024-02-12T11:46:03
2024-01-15T16:05:44
{ "login": "sanchit-gandhi", "id": 93869735, "type": "User" }
[ { "name": "streaming", "color": "fef2c0" } ]
false
[]
2,073,710,124
6,576
document page 404 not found after redirection
### Describe the bug The redirected page returns a 404 Not Found error. ### Steps to reproduce the bug 1. In this tutorial: https://huggingface.co/learn/nlp-course/chapter5/4?fw=pt original md: https://github.com/huggingface/course/blob/2c733c2246b8b7e0e6f19a9e5d15bb12df43b2a3/chapters/en/chapter5/4.mdx#L49 `...
closed
https://github.com/huggingface/datasets/issues/6576
2024-01-10T06:48:14
2024-01-17T14:01:31
2024-01-17T14:01:31
{ "login": "annahung31", "id": 39179888, "type": "User" }
[]
false
[]
2,072,617,406
6,575
[IterableDataset] Fix `drop_last_batch`in map after shuffling or sharding
It was not taken into account e.g. when passing to a DataLoader with num_workers>0 Fix https://github.com/huggingface/datasets/issues/6565
closed
https://github.com/huggingface/datasets/pull/6575
2024-01-09T15:35:31
2024-01-11T16:16:54
2024-01-11T16:10:30
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,072,579,549
6,574
Fix tests based on datasets that used to have scripts
...now that `squad` and `paws` don't have a script anymore
closed
https://github.com/huggingface/datasets/pull/6574
2024-01-09T15:16:16
2024-01-09T16:11:33
2024-01-09T16:05:13
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,072,553,951
6,573
[WebDataset] Audio support and bug fixes
- Add audio support - Fix an issue where user-provided features with additional fields are not taken into account Close https://github.com/huggingface/datasets/issues/6569
closed
https://github.com/huggingface/datasets/pull/6573
2024-01-09T15:03:04
2024-01-11T16:17:28
2024-01-11T16:11:04
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,072,384,281
6,572
Adding option for multipart achive download
Right now we can only download multiple separate archives or a single file archive, but not multipart archives, such as those produced by `tar --multi-volume`. This PR allows for downloading and extraction of archives split into multiple parts. With the new `multi_part` field of the `DownloadConfig` set, the downloa...
closed
https://github.com/huggingface/datasets/pull/6572
2024-01-09T13:35:44
2024-02-25T08:13:01
2024-02-25T08:13:01
{ "login": "jpodivin", "id": 66251151, "type": "User" }
[]
true
[]
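The core idea can be sketched with the stdlib `tarfile` module (simplified: real `tar --multi-volume` parts carry extra volume headers, so plain concatenation is only an approximation of what the PR implements):

```python
import io
import tarfile

# Hypothetical sketch: a multipart archive is one byte stream split into
# chunks, so reassembling the chunks restores a readable archive.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    payload = b"hello multipart"
    info = tarfile.TarInfo("file.txt")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
raw = buf.getvalue()

volumes = [raw[:600], raw[600:]]           # simulate two downloaded parts
joined = io.BytesIO(b"".join(volumes))     # reassemble before extraction
with tarfile.open(fileobj=joined, mode="r") as tar:
    print(tar.extractfile("file.txt").read())  # b'hello multipart'
```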
2,072,111,000
6,571
Make DatasetDict.column_names return a list instead of dict
Currently, `DatasetDict.column_names` returns a dict, with each split name as keys and the corresponding list of column names as values. However, by construction, all splits have the same column names. I think it makes more sense to return a single list with the column names, which is the same for all the split k...
open
https://github.com/huggingface/datasets/issues/6571
2024-01-09T10:45:17
2024-01-09T10:45:17
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
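The proposed collapse can be sketched in plain Python (the split and column names are made up for illustration; this is not the `DatasetDict` implementation):

```python
# Hypothetical sketch: since every split has the same columns by
# construction, the per-split dict can collapse into one list.
per_split = {"train": ["id", "text"], "validation": ["id", "text"]}  # current shape
lists = list(per_split.values())
assert all(cols == lists[0] for cols in lists)  # splits agree by construction
column_names = lists[0]
print(column_names)  # ['id', 'text']
```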
2,071,805,265
6,570
No online docs for 2.16 release
We do not have the online docs for the latest minor release 2.16 (neither 2.16.0 nor 2.16.1). In the online docs, the latest version appearing is 2.15.0: https://huggingface.co/docs/datasets/index ![Screenshot from 2024-01-09 08-43-08](https://github.com/huggingface/datasets/assets/8515462/83613222-867f-41f4-8833-7a4a765...
closed
https://github.com/huggingface/datasets/issues/6570
2024-01-09T07:43:30
2024-01-09T16:45:50
2024-01-09T16:45:50
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "documentation", "color": "0075ca" } ]
false
[]
2,070,251,122
6,569
WebDataset ignores features defined in YAML or passed to load_dataset
we should not override if the features exist already https://github.com/huggingface/datasets/blob/d26abadce0b884db32382b92422d8a6aa997d40a/src/datasets/packaged_modules/webdataset/webdataset.py#L78-L85
closed
https://github.com/huggingface/datasets/issues/6569
2024-01-08T11:24:21
2024-01-11T16:11:06
2024-01-11T16:11:05
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
2,069,922,151
6,568
keep_in_memory=True does not seem to work
UPD: [Fixed](https://github.com/huggingface/datasets/issues/6568#issuecomment-1880817794). But a new issue came up :(
open
https://github.com/huggingface/datasets/issues/6568
2024-01-08T08:03:58
2024-01-13T04:53:04
null
{ "login": "kopyl", "id": 17604849, "type": "User" }
[]
false
[]
2,069,808,842
6,567
AttributeError: 'str' object has no attribute 'to'
### Describe the bug ``` -------------------------------------------------------------------------- AttributeError Traceback (most recent call last) [<ipython-input-6-80c6086794e8>](https://localhost:8080/#) in <cell line: 10>() 8 report_to="wandb") 9 ---> 10 trainer =...
closed
https://github.com/huggingface/datasets/issues/6567
2024-01-08T06:40:21
2024-01-08T11:56:19
2024-01-08T10:03:17
{ "login": "andysingal", "id": 20493493, "type": "User" }
[]
false
[]
2,069,495,429
6,566
Training controlnet_sdxl in bf16 datatype raises an unsupported-type error in datasets
### Describe the bug ``` Traceback (most recent call last): File "train_controlnet_sdxl.py", line 1252, in <module> main(args) File "train_controlnet_sdxl.py", line 1013, in main train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint) File "/home/mini...
closed
https://github.com/huggingface/datasets/issues/6566
2024-01-08T02:37:03
2024-06-02T14:24:39
2024-05-17T09:40:14
{ "login": "HelloWorldBeginner", "id": 25008090, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
2,068,939,670
6,565
`drop_last_batch=True` for IterableDataset map function is ignored with multiprocessing DataLoader
### Describe the bug Scenario: - Interleaving two iterable datasets of unequal lengths (`all_exhausted`), followed by a batch mapping with batch size 2 to effectively merge the two datasets and get a sample from each dataset in a single batch, with `drop_last_batch=True` to skip the last batch in case it doesn't ha...
closed
https://github.com/huggingface/datasets/issues/6565
2024-01-07T02:46:50
2025-03-08T09:46:05
2024-01-11T16:10:31
{ "login": "naba89", "id": 12119806, "type": "User" }
[]
false
[]
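The intended `drop_last_batch` semantics can be sketched with a toy batching generator (a simplified model, not the `IterableDataset` implementation):

```python
# Hypothetical sketch: with drop_last_batch=True, a final batch smaller
# than batch_size is skipped entirely, which the multiprocessing
# DataLoader path was ignoring before the fix.
def batched(items, batch_size, drop_last_batch=False):
    for i in range(0, len(items), batch_size):
        batch = items[i:i + batch_size]
        if drop_last_batch and len(batch) < batch_size:
            continue
        yield batch

print(list(batched([1, 2, 3, 4, 5], 2, drop_last_batch=True)))  # [[1, 2], [3, 4]]
```

With two interleaved datasets merged via batch size 2, dropping the ragged final batch is what guarantees each emitted batch holds one sample from each source.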
2,068,893,194
6,564
`Dataset.filter` missing `with_rank` parameter
### Describe the bug The following issue should be reopened: https://github.com/huggingface/datasets/issues/6435 When I try to pass `with_rank` to `Dataset.filter()`, I get this: `Dataset.filter() got an unexpected keyword argument 'with_rank'` ### Steps to reproduce the bug Run notebook: https://colab.research.google.com...
closed
https://github.com/huggingface/datasets/issues/6564
2024-01-06T23:48:13
2024-01-29T16:36:55
2024-01-29T16:36:54
{ "login": "kopyl", "id": 17604849, "type": "User" }
[]
false
[]
2,068,302,402
6,563
`ImportError`: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (.../huggingface_hub/utils/__init__.py)
### Describe the bug Yep, it's not [there](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/__init__.py) anymore. ```text + python /home/trainer/sft_train.py --model_name cognitivecomputations/dolphin-2.2.1-mistral-7b --dataset_name wasertech/OneOS --load_in_4bit --use_peft --batch_...
closed
https://github.com/huggingface/datasets/issues/6563
2024-01-06T02:28:54
2024-03-14T02:59:42
2024-01-06T16:13:27
{ "login": "wasertech", "id": 79070834, "type": "User" }
[]
false
[]
2,067,904,504
6,562
datasets.DownloadMode.FORCE_REDOWNLOAD use cache to download dataset features with load_dataset function
### Describe the bug I have updated my dataset by adding a new feature and pushed it to the Hub. When I try to download it on my machine, which contains the old version, using `datasets.load_dataset("your_dataset_name", download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)` I get an error (pasted below). It seems that...
open
https://github.com/huggingface/datasets/issues/6562
2024-01-05T19:10:25
2024-01-05T19:10:25
null
{ "login": "LsTam91", "id": 73234162, "type": "User" }
[]
false
[]
2,067,404,951
6,561
Document YAML configuration with "data_dir"
See https://huggingface.co/datasets/uonlp/CulturaX/discussions/15#6597e83f185db94370d6bf50 for reference
open
https://github.com/huggingface/datasets/issues/6561
2024-01-05T14:03:33
2024-01-05T14:06:18
null
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
false
[]
2,065,637,625
6,560
Support Video
### Feature request HF datasets are awesome in supporting text and images. Will be great to see such a support in videos :) ### Motivation Video generation :) ### Your contribution Will probably be limited to raising this feature request ;)
closed
https://github.com/huggingface/datasets/issues/6560
2024-01-04T13:10:58
2024-08-23T09:51:27
2024-08-23T09:51:27
{ "login": "yuvalkirstain", "id": 57996478, "type": "User" }
[ { "name": "duplicate", "color": "cfd3d7" }, { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,065,118,332
6,559
Latest version 2.16.1, when load dataset error occurs. ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default']
### Describe the bug The Python script is: ``` from datasets import load_dataset cache_dir = 'path/to/your/cache/directory' dataset = load_dataset('allenai/c4','allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', use_auth_token=False, cache_dir=cache_dir) ``` The script su...
closed
https://github.com/huggingface/datasets/issues/6559
2024-01-04T07:04:48
2024-04-03T10:40:53
2024-01-05T01:26:25
{ "login": "zhulinJulia24", "id": 145004780, "type": "User" }
[]
false
[]
2,064,885,984
6,558
OSError: image file is truncated (1 bytes not processed) #28323
### Describe the bug ``` --------------------------------------------------------------------------- OSError Traceback (most recent call last) Cell In[24], line 28 23 return example 25 # Filter the dataset 26 # filtered_dataset = dataset.filter(contains_number...
closed
https://github.com/huggingface/datasets/issues/6558
2024-01-04T02:15:13
2024-02-21T00:38:12
2024-02-21T00:38:12
{ "login": "andysingal", "id": 20493493, "type": "User" }
[]
false
[]
2,064,341,965
6,557
Support standalone yaml
see (internal) https://huggingface.slack.com/archives/C02V51Q3800/p1703885853581679
closed
https://github.com/huggingface/datasets/pull/6557
2024-01-03T16:47:35
2024-01-11T17:59:51
2024-01-11T17:53:42
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,064,018,208
6,556
Fix imagefolder with one image
A dataset repository with one image and one metadata file was considered a JSON dataset instead of an ImageFolder dataset. This is because we pick the dataset type with the most compatible data file extensions present in the repository and it results in a tie in this case. e.g. for https://huggingface.co/datasets/mu...
closed
https://github.com/huggingface/datasets/pull/6556
2024-01-03T13:13:02
2024-02-12T21:57:34
2024-01-09T13:06:30
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,063,841,286
6,555
Do not use Parquet exports if revision is passed
Fix #6554.
closed
https://github.com/huggingface/datasets/pull/6555
2024-01-03T11:33:10
2024-02-02T10:41:33
2024-02-02T10:35:28
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]