Dataset schema (per-column type and observed range):

  id               int64           599M to 3.26B
  number           int64           1 to 7.7k
  title            string          lengths 1 to 290
  body             string          lengths 0 to 228k
  state            string          2 values
  html_url         string          lengths 46 to 51
  created_at       timestamp[s]    2020-04-14 10:18:02 to 2025-07-23 08:04:53
  updated_at       timestamp[s]    2020-04-27 16:04:17 to 2025-07-23 18:53:44
  closed_at        timestamp[s]    2020-04-14 12:01:40 to 2025-07-23 16:44:42
  user             dict
  labels           list            lengths 0 to 4
  is_pull_request  bool            2 classes
  comments         list            lengths 0 to 0
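Each record below maps naturally onto one Python dict following the schema above. A minimal sketch in plain Python (no external libraries; the `open_issues` helper is illustrative, not part of any library), using values from the first record in the listing:

```python
# One row of the dataset as a plain dict; field names follow the schema column list.
row = {
    "id": 2011907787,
    "number": 6453,
    "title": "Update hub-docs reference",
    "body": "Follow up to huggingface/huggingface.js#296",
    "state": "closed",                    # one of 2 values: "open" or "closed"
    "html_url": "https://github.com/huggingface/datasets/pull/6453",
    "created_at": "2023-11-27T09:57:20",  # timestamp[s] in the original schema
    "updated_at": "2023-11-27T10:23:44",
    "closed_at": "2023-11-27T10:17:34",   # null for still-open records
    "user": {"login": "mishig25", "id": 11827707, "type": "User"},
    "labels": [],                         # list of {"name": ..., "color": ...}
    "is_pull_request": True,
    "comments": [],
}

def open_issues(rows):
    """Return the rows that are open issues (not pull requests)."""
    return [r for r in rows if r["state"] == "open" and not r["is_pull_request"]]

# The sample row is a closed pull request, so it is filtered out.
print(len(open_issues([row])))
```

The same filter expressed over the full dataset would separate the open feature requests and bug reports from the merged maintenance PRs that dominate the listing.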
#6453 · PR · closed · Update hub-docs reference
  id 2011907787 · mishig25 (11827707) · labels: [] · comments: []
  created 2023-11-27T09:57:20 · updated 2023-11-27T10:23:44 · closed 2023-11-27T10:17:34
  https://github.com/huggingface/datasets/pull/6453
  body: Follow up to huggingface/huggingface.js#296

#6452 · PR · closed · Praveen_repo_pull_req
  id 2011632708 · Praveenhh (151713216) · labels: [] · comments: []
  created 2023-11-27T07:07:50 · updated 2023-11-27T09:28:00 · closed 2023-11-27T09:28:00
  https://github.com/huggingface/datasets/pull/6452
  body: null

#6451 · issue · closed · Unable to read "marsyas/gtzan" data
  id 2010693912 · gerald-wrona (32300890) · labels: [] · comments: []
  created 2023-11-25T15:13:17 · updated 2023-12-01T12:53:46 · closed 2023-11-27T09:36:25
  https://github.com/huggingface/datasets/issues/6451
  body: Hi, this is my code and the error: ``` from datasets import load_dataset gtzan = load_dataset("marsyas/gtzan", "all") ``` [error_trace.txt](https://github.com/huggingface/datasets/files/13464397/error_trace.txt) [audio_yml.txt](https://github.com/huggingface/datasets/files/13464410/audio_yml.txt) Python 3.11.5 ...

#6450 · issue · closed · Support multiple image/audio columns in ImageFolder/AudioFolder
  id 2009491386 · severo (1676121) · labels: duplicate (cfd3d7), enhancement (a2eeef) · comments: []
  created 2023-11-24T10:34:09 · updated 2023-11-28T11:07:17 · closed 2023-11-24T17:24:38
  https://github.com/huggingface/datasets/issues/6450
  body: ### Feature request Have a metadata.csv file with multiple columns that point to relative image or audio files. ### Motivation Currently, ImageFolder allows one column, called `file_name`, pointing to relative image files. On the same model, AudioFolder allows one column, called `file_name`, pointing to relative aud...

#6449 · PR · closed · Fix metadata file resolution when inferred pattern is `**`
  id 2008617992 · mariosasko (47462742) · labels: [] · comments: []
  created 2023-11-23T17:35:02 · updated 2023-11-27T10:02:56 · closed 2023-11-24T17:13:02
  https://github.com/huggingface/datasets/pull/6449
  body: Refetch metadata files in case they were dropped by `filter_extensions` in the previous step. Fix #6442

#6448 · PR · closed · Use parquet export if possible
  id 2008614985 · lhoestq (42851186) · labels: [] · comments: []
  created 2023-11-23T17:31:57 · updated 2023-12-01T17:57:17 · closed 2023-12-01T17:50:59
  https://github.com/huggingface/datasets/pull/6448
  body: The idea is to make this code work for datasets with scripts if they have a Parquet export ```python ds = load_dataset("squad", trust_remote_code=False) ``` And more generally, it means we use the Parquet export whenever it's possible (it's safer and faster than dataset scripts). I also added a `config.USE_P...
#6447 · issue · open · Support one dataset loader per config when using YAML
  id 2008195298 · severo (1676121) · labels: enhancement (a2eeef) · comments: []
  created 2023-11-23T13:03:07 · updated 2023-11-23T13:03:07 · closed: null
  https://github.com/huggingface/datasets/issues/6447
  body: ### Feature request See https://huggingface.co/datasets/datasets-examples/doc-unsupported-1 I would like to use CSV loader for the "csv" config, JSONL loader for the "jsonl" config, etc. ### Motivation It would be more flexible for the users ### Your contribution No specific contribution

#6446 · issue · closed · Speech Commands v2 dataset doesn't match AST-v2 config
  id 2007092708 · vymao (18024303) · labels: [] · comments: []
  created 2023-11-22T20:46:36 · updated 2023-11-28T14:46:08 · closed 2023-11-28T14:46:08
  https://github.com/huggingface/datasets/issues/6446
  body: ### Describe the bug [According](https://huggingface.co/MIT/ast-finetuned-speech-commands-v2) to `MIT/ast-finetuned-speech-commands-v2`, the model was trained on the Speech Commands v2 dataset. However, while the model config says the model should have 35 class labels, the dataset itself has 36 class labels. Moreover,...

#6445 · PR · closed · Use `filelock` package for file locking
  id 2006958595 · mariosasko (47462742) · labels: [] · comments: []
  created 2023-11-22T19:04:45 · updated 2023-11-23T18:47:30 · closed 2023-11-23T18:41:23
  https://github.com/huggingface/datasets/pull/6445
  body: Use the `filelock` package instead of `datasets.utils.filelock` for file locking to be consistent with `huggingface_hub` and not to be responsible for improving the `filelock` capabilities 🙂. (Reverts https://github.com/huggingface/datasets/pull/859, but these `INFO` logs are not printed by default (anymore?), so ...

#6444 · PR · closed · Remove `Table.__getstate__` and `Table.__setstate__`
  id 2006842179 · LZHgrla (36994684) · labels: [] · comments: []
  created 2023-11-22T17:55:10 · updated 2023-11-23T15:19:43 · closed 2023-11-23T15:13:28
  https://github.com/huggingface/datasets/pull/6444
  body: When using distributed training, the code of `os.remove(filename)` may be executed separately by each rank, leading to `FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmprxxxxxxx.arrow'` ```python from torch import distributed as dist if dist.get_rank() == 0: dataset = process_dataset(*args, ...

#6443 · issue · open · Trouble loading files defined in YAML explicitly
  id 2006568368 · severo (1676121) · labels: bug (d73a4a) · comments: []
  created 2023-11-22T15:18:10 · updated 2025-06-23T13:46:46 · closed: null
  https://github.com/huggingface/datasets/issues/6443
  body: Look at https://huggingface.co/datasets/severo/doc-yaml-2 It's a reproduction of the example given in the docs at https://huggingface.co/docs/hub/datasets-manual-configuration ``` You can select multiple files per split using a list of paths: my_dataset_repository/ ├── README.md ├── data/ │ ├── abc.csv ...

#6442 · issue · closed · Trouble loading image folder with additional features - metadata file ignored
  id 2006086907 · linoytsaban (57615435) · labels: [] · comments: []
  created 2023-11-22T11:01:35 · updated 2023-11-24T17:13:03 · closed 2023-11-24T17:13:03
  https://github.com/huggingface/datasets/issues/6442
  body: ### Describe the bug Loading image folder with a caption column using `load_dataset(<image_folder_path>)` doesn't load the captions. When loading a local image folder with captions using `datasets==2.13.0` ``` from datasets import load_dataset data = load_dataset(<image_folder_path>) data.column_names ``` ...

#6441 · issue · closed · Trouble Loading a Gated Dataset For User with Granted Permission
  id 2004985857 · e-trop (124715309) · labels: [] · comments: []
  created 2023-11-21T19:24:36 · updated 2023-12-13T08:27:16 · closed 2023-12-13T08:27:16
  https://github.com/huggingface/datasets/issues/6441
  body: ### Describe the bug I have granted permissions to several users to access a gated huggingface dataset. The users accepted the invite and when trying to load the dataset using their access token they get `FileNotFoundError: Couldn't find a dataset script at .....` . Also when they try to click the url link for the d...
#6440 · issue · closed · `.map` not hashing under python 3.9
  id 2004509301 · changyeli (9058204) · labels: [] · comments: []
  created 2023-11-21T15:14:54 · updated 2023-11-28T16:29:33 · closed 2023-11-28T16:29:33
  https://github.com/huggingface/datasets/issues/6440
  body: ### Describe the bug The `.map` function cannot hash under python 3.9. Tried to use [the solution here](https://github.com/huggingface/datasets/issues/4521#issuecomment-1205166653), but still get the same message: `Parameter 'function'=<function map_to_pred at 0x7fa0b49ead30> of the transform datasets.arrow_data...

#6439 · issue · open · Download + preparation speed of datasets.load_dataset is 20x slower than huggingface hub snapshot and manual loding
  id 2002916514 · AntreasAntoniou (10792502) · labels: [] · comments: []
  created 2023-11-20T20:07:23 · updated 2023-11-20T20:07:37 · closed: null
  https://github.com/huggingface/datasets/issues/6439
  body: ### Describe the bug I am working with a dataset I am trying to publish. The path is Antreas/TALI. It's a fairly large dataset, and contains images, video, audio and text. I have been having multiple problems when the dataset is being downloaded using the load_dataset function -- even with 64 workers takin...

#6438 · issue · open · Support GeoParquet
  id 2002032804 · severo (1676121) · labels: enhancement (a2eeef) · comments: []
  created 2023-11-20T11:54:58 · updated 2024-02-07T08:36:51 · closed: null
  https://github.com/huggingface/datasets/issues/6438
  body: ### Feature request Support the GeoParquet format ### Motivation GeoParquet (https://geoparquet.org/) is a common format for sharing vectorial geospatial data on the cloud, along with "traditional" data columns. It would be nice to be able to load this format with datasets, and more generally, in the Datasets Hub...

#6437 · issue · open · Problem in training iterable dataset
  id 2001272606 · 21Timothy (38107672) · labels: [] · comments: []
  created 2023-11-20T03:04:02 · updated 2024-05-22T03:14:13 · closed: null
  https://github.com/huggingface/datasets/issues/6437
  body: ### Describe the bug I am using PyTorch DDP (Distributed Data Parallel) to train my model. Since the data is too large to load into memory at once, I am using load_dataset to read the data as an iterable dataset. I have used datasets.distributed.split_dataset_by_node to distribute the dataset. However, I have notice...

#6436 · issue · closed · TypeError: <lambda>() takes 0 positional arguments but 1 was given
  id 2000844474 · ahmadmustafaanis (47111429) · labels: [] · comments: []
  created 2023-11-19T13:10:20 · updated 2025-05-05T18:21:21 · closed 2023-11-29T16:28:34
  https://github.com/huggingface/datasets/issues/6436
  body: ### Describe the bug ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) [<ipython-input-35-7b6becee3685>](https://localhost:8080/#) in <cell line: 1>() ----> 1 from datasets import Dataset 9 frames [/usr/lo...

#6435 · issue · closed · Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
  id 2000690513 · kopyl (17604849) · labels: [] · comments: []
  created 2023-11-19T04:21:16 · updated 2024-01-27T17:14:20 · closed 2023-12-04T16:57:43
  https://github.com/huggingface/datasets/issues/6435
  body: ### Describe the bug 1. I ran dataset mapping with `num_proc=6` in it and got this error: `RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method` I can't actually find a way to run multi-GPU dataset mapping. Can you help? ### Steps to...

#6434 · PR · closed · Use `ruff` for formatting
  id 1999554915 · mariosasko (47462742) · labels: [] · comments: []
  created 2023-11-17T16:53:22 · updated 2023-11-21T14:19:21 · closed 2023-11-21T14:13:13
  https://github.com/huggingface/datasets/pull/6434
  body: Use `ruff` instead of `black` for formatting to be consistent with `transformers` ([PR](https://github.com/huggingface/transformers/pull/27144)) and `huggingface_hub` ([PR 1](https://github.com/huggingface/huggingface_hub/pull/1783) and [PR 2](https://github.com/huggingface/huggingface_hub/pull/1789)).
#6433 · PR · closed · Better `tqdm` wrapper
  id 1999419105 · mariosasko (47462742) · labels: [] · comments: []
  created 2023-11-17T15:45:15 · updated 2023-11-22T16:48:18 · closed 2023-11-22T16:42:08
  https://github.com/huggingface/datasets/pull/6433
  body: This PR aligns the `tqdm` logic with `huggingface_hub` (without introducing breaking changes), as the current one is error-prone. Additionally, it improves the doc page about the `datasets`' utilities, and the handling of local `fsspec` paths in `cached_path`. Fix #6409

#6432 · issue · open · load_dataset does not load all of the data in my input file
  id 1999258140 · demongolem-biz2 (121301001) · labels: [] · comments: []
  created 2023-11-17T14:28:50 · updated 2023-11-22T17:34:58 · closed: null
  https://github.com/huggingface/datasets/issues/6432
  body: ### Describe the bug I have 127 elements in my input dataset. When I do a len on the dataset after loaded, it is only 124 elements. ### Steps to reproduce the bug train_dataset = nlp.load_dataset(data_args.dataset_path, name=data_args.qg_format, split=nlp.Split.TRAIN) valid_dataset = nlp.load_dataset(data_...

#6431 · PR · closed · Create DatasetNotFoundError and DataFilesNotFoundError
  id 1997202770 · albertvillanova (8515462) · labels: [] · comments: []
  created 2023-11-16T16:02:55 · updated 2023-11-22T15:18:51 · closed 2023-11-22T15:12:33
  https://github.com/huggingface/datasets/pull/6431
  body: Create `DatasetNotFoundError` and `DataFilesNotFoundError`. Fix #6397. CC: @severo

#6429 · PR · closed · Add trust_remote_code argument
  id 1996723698 · lhoestq (42851186) · labels: [] · comments: []
  created 2023-11-16T12:12:54 · updated 2023-11-28T16:10:39 · closed 2023-11-28T16:03:43
  https://github.com/huggingface/datasets/pull/6429
  body: Draft about adding `trust_remote_code` to `load_dataset`. ```python ds = load_dataset(..., trust_remote_code=True) # run remote code (current default) ``` It would default to `True` (current behavior) and in the next major release it will prompt the user to check the code before running it (we'll communicate o...

#6428 · PR · closed · Set dev version
  id 1996306394 · albertvillanova (8515462) · labels: [] · comments: []
  created 2023-11-16T08:12:55 · updated 2023-11-16T08:19:39 · closed 2023-11-16T08:13:28
  https://github.com/huggingface/datasets/pull/6428
  body: null

#6427 · PR · closed · Release: 2.15.0
  id 1996248605 · albertvillanova (8515462) · labels: [] · comments: []
  created 2023-11-16T07:37:20 · updated 2023-11-16T08:12:12 · closed 2023-11-16T07:43:05
  https://github.com/huggingface/datasets/pull/6427
  body: null

#6426 · PR · closed · More robust temporary directory deletion
  id 1995363264 · mariosasko (47462742) · labels: [] · comments: []
  created 2023-11-15T19:06:42 · updated 2023-12-01T15:37:32 · closed 2023-12-01T15:31:19
  https://github.com/huggingface/datasets/pull/6426
  body: While fixing the Windows errors in #6362, I noticed that `PermissionError` can still easily be thrown on the session exit by the temporary cache directory's finalizer (we would also have to keep track of intermediate datasets, copies, etc.). ~~Due to the low usage of `datasets` on Windows, this PR takes a simpler appro...
#6425 · PR · closed · Fix deprecation warning when building conda package
  id 1995269382 · albertvillanova (8515462) · labels: [] · comments: []
  created 2023-11-15T18:00:11 · updated 2023-12-13T14:22:30 · closed 2023-12-13T14:16:00
  https://github.com/huggingface/datasets/pull/6425
  body: When building/releasing conda package, we get this deprecation warning: ``` /usr/share/miniconda/envs/build-datasets/bin/conda-build:11: DeprecationWarning: conda_build.cli.main_build.main is deprecated and will be removed in 4.0.0. Use `conda build` instead. ``` This PR fixes the deprecation warning by using `co...

#6424 · PR · closed · [docs] troubleshooting guide
  id 1995224516 · MKhalusova (1065417) · labels: [] · comments: []
  created 2023-11-15T17:28:14 · updated 2023-11-30T17:29:55 · closed 2023-11-30T17:23:46
  https://github.com/huggingface/datasets/pull/6424
  body: Hi all! This is a PR adding a troubleshooting guide for Datasets docs. I went through the library's GitHub Issues and Forum questions and identified a few issues that are common enough that I think it would be valuable to include them in the troubleshooting guide. These are: - creating a dataset from a folder and n...

#6423 · PR · closed · Fix conda release by adding pyarrow-hotfix dependency
  id 1994946847 · albertvillanova (8515462) · labels: [] · comments: []
  created 2023-11-15T14:57:12 · updated 2023-11-15T17:15:33 · closed 2023-11-15T17:09:24
  https://github.com/huggingface/datasets/pull/6423
  body: Fix conda release by adding pyarrow-hotfix dependency. Note that conda release failed in latest 2.14.7 release: https://github.com/huggingface/datasets/actions/runs/6874667214/job/18696761723 ``` Traceback (most recent call last): File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/t...

#6422 · issue · open · Allow to choose the `writer_batch_size` when using `save_to_disk`
  id 1994579267 · NathanGodey (38216711) · labels: enhancement (a2eeef) · comments: []
  created 2023-11-15T11:18:34 · updated 2023-11-16T10:00:21 · closed: null
  https://github.com/huggingface/datasets/issues/6422
  body: ### Feature request Add an argument in `save_to_disk` regarding batch size, which would be passed to `shard` and other methods. ### Motivation The `Dataset.save_to_disk` method currently calls `shard` without passing a `writer_batch_size` argument, thus implicitly using the default value (1000). This can result in R...

#6421 · PR · closed · Add pyarrow-hotfix to release docs
  id 1994451553 · albertvillanova (8515462) · labels: maintenance (d4c5f9) · comments: []
  created 2023-11-15T10:06:44 · updated 2023-11-15T13:49:55 · closed 2023-11-15T13:38:22
  https://github.com/huggingface/datasets/pull/6421
  body: Add `pyarrow-hotfix` to release docs.

#6420 · PR · closed · Set dev version
  id 1994278903 · albertvillanova (8515462) · labels: [] · comments: []
  created 2023-11-15T08:22:19 · updated 2023-11-15T08:33:36 · closed 2023-11-15T08:22:33
  https://github.com/huggingface/datasets/pull/6420
  body: null

#6419 · PR · closed · Release: 2.14.7
  id 1994257873 · albertvillanova (8515462) · labels: [] · comments: []
  created 2023-11-15T08:07:37 · updated 2023-11-15T17:35:30 · closed 2023-11-15T08:12:59
  https://github.com/huggingface/datasets/pull/6419
  body: Release 2.14.7.
#6418 · PR · closed · Remove token value from warnings
  id 1993224629 · mariosasko (47462742) · labels: [] · comments: []
  created 2023-11-14T17:34:06 · updated 2023-11-14T22:26:04 · closed 2023-11-14T22:19:45
  https://github.com/huggingface/datasets/pull/6418
  body: Fix #6412

#6417 · issue · closed · Bug: LayoutLMv3 finetuning on FUNSD Notebook; Arrow Error
  id 1993149416 · Davo00 (57496007) · labels: [] · comments: []
  created 2023-11-14T16:53:20 · updated 2023-11-16T20:23:41 · closed 2023-11-16T20:23:41
  https://github.com/huggingface/datasets/issues/6417
  body: ### Describe the bug Arrow issues when running the example Notebook laptop locally on Mac with M1. Works on Google Collab. **Notebook**: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv3/Fine_tune_LayoutLMv3_on_FUNSD_(HuggingFace_Trainer).ipynb **Error**: `ValueError: Arrow type extensi...

#6416 · PR · closed · Rename audio_classificiation.py to audio_classification.py
  id 1992954723 · carlthome (1595907) · labels: [] · comments: []
  created 2023-11-14T15:15:29 · updated 2023-11-15T11:59:32 · closed 2023-11-15T11:53:20
  https://github.com/huggingface/datasets/pull/6416
  body: null

#6415 · PR · closed · Fix multi gpu map example
  id 1992917248 · lhoestq (42851186) · labels: [] · comments: []
  created 2023-11-14T14:57:18 · updated 2024-01-31T00:49:15 · closed 2023-11-22T15:42:19
  https://github.com/huggingface/datasets/pull/6415
  body: - use `torch.cuda.set_device` instead of `CUDA_VISIBLE_DEVICES` - add `if __name__ == "__main__"` fix https://github.com/huggingface/datasets/issues/6186

#6414 · PR · closed · Set `usedforsecurity=False` in hashlib methods (FIPS compliance)
  id 1992482491 · Wauplin (11801849) · labels: [] · comments: []
  created 2023-11-14T10:47:09 · updated 2023-11-17T14:23:20 · closed 2023-11-17T14:17:00
  https://github.com/huggingface/datasets/pull/6414
  body: Related to https://github.com/huggingface/transformers/issues/27034 and https://github.com/huggingface/huggingface_hub/pull/1782. **TL;DR:** `hashlib` is not a secure library for cryptography-related stuff. We are only using `hashlib` for non-security-related purposes in `datasets` so it's fine. From Python 3.9 we s...

#6412 · issue · closed · User token is printed out!
  id 1992401594 · mohsen-goodarzi (25702692) · labels: [] · comments: []
  created 2023-11-14T10:01:34 · updated 2023-11-14T22:19:46 · closed 2023-11-14T22:19:46
  https://github.com/huggingface/datasets/issues/6412
  body: This line prints user token on command line! Is it safe? https://github.com/huggingface/datasets/blob/12ebe695b4748c5a26e08b44ed51955f74f5801d/src/datasets/load.py#L2091

#6411 · PR · closed · Fix dependency conflict within CI build documentation
  id 1992386630 · albertvillanova (8515462) · labels: [] · comments: []
  created 2023-11-14T09:52:51 · updated 2023-11-14T10:05:59 · closed 2023-11-14T10:05:35
  https://github.com/huggingface/datasets/pull/6411
  body: Manually fix dependency conflict on `typing-extensions` version originated by `apache-beam` + `pydantic` (now a dependency of `huggingface-hub`). This is a temporary hot fix of our CI build documentation until we stop using `apache-beam`. Fix #6406.
#6410 · issue · open · Datasets does not load HuggingFace Repository properly
  id 1992100209 · MikeDoes (40600201) · labels: [] · comments: []
  created 2023-11-14T06:50:49 · updated 2023-11-16T06:54:36 · closed: null
  https://github.com/huggingface/datasets/issues/6410
  body: ### Describe the bug Dear Datasets team, We just have published a dataset on Huggingface: https://huggingface.co/ai4privacy However, when trying to read it using the Dataset library we get an error. As I understand jsonl files are compatible, could you please clarify how we can solve the issue? Please let me ...

#6409 · issue · closed · using DownloadManager to download from local filesystem and disable_progress_bar, there will be an exception
  id 1991960865 · neiblegy (16574677) · labels: [] · comments: []
  created 2023-11-14T04:21:01 · updated 2023-11-22T16:42:09 · closed 2023-11-22T16:42:09
  https://github.com/huggingface/datasets/issues/6409
  body: ### Describe the bug i'm using datasets.download.download_manager.DownloadManager to download files like "file:///a/b/c.txt", and i disable_progress_bar() to disable bar. there will be an exception as follows: `AttributeError: 'function' object has no attribute 'close' Exception ignored in: <function TqdmCallback....

#6408 · issue · open · `IterableDataset` lost but not keep columns when map function adding columns with names in `remove_columns`
  id 1991902972 · shmily326 (24571857) · labels: [] · comments: []
  created 2023-11-14T03:12:08 · updated 2023-11-16T06:24:10 · closed: null
  https://github.com/huggingface/datasets/issues/6408
  body: ### Describe the bug IterableDataset lost but not keep columns when map function adding columns with names in remove_columns, Dataset not. May be related to the code below: https://github.com/huggingface/datasets/blob/06c3ffb8d068b6307b247164b10f7c7311cefed4/src/datasets/iterable_dataset.py#L750-L756 ### Steps t...

#6407 · issue · open · Loading the dataset from private S3 bucket gives "TypeError: cannot pickle '_contextvars.Context' object"
  id 1991514079 · eawer (1741779) · labels: [] · comments: []
  created 2023-11-13T21:27:43 · updated 2024-07-30T12:35:09 · closed: null
  https://github.com/huggingface/datasets/issues/6407
  body: ### Describe the bug I'm trying to read the parquet file from the private s3 bucket using the `load_dataset` function, but I receive `TypeError: cannot pickle '_contextvars.Context' object` error I'm working on a machine with `~/.aws/credentials` file. I can't give credentials and the path to a file in a private bu...

#6406 · issue · closed · CI Build PR Documentation is broken: ImportError: cannot import name 'TypeAliasType' from 'typing_extensions'
  id 1990469045 · albertvillanova (8515462) · labels: [] · comments: []
  created 2023-11-13T11:36:10 · updated 2023-11-14T10:05:36 · closed 2023-11-14T10:05:36
  https://github.com/huggingface/datasets/issues/6406
  body: Our CI Build PR Documentation is broken. See: https://github.com/huggingface/datasets/actions/runs/6799554060/job/18486828777?pr=6390 ``` ImportError: cannot import name 'TypeAliasType' from 'typing_extensions' ```

#6405 · issue · closed · ConfigNamesError on a simple CSV file
  id 1990358743 · severo (1676121) · labels: bug (d73a4a) · comments: []
  created 2023-11-13T10:28:29 · updated 2023-11-13T20:01:24 · closed 2023-11-13T20:01:24
  https://github.com/huggingface/datasets/issues/6405
  body: See https://huggingface.co/datasets/Nguyendo1999/mmath/discussions/1 ``` Error code: ConfigNamesError Exception: TypeError Message: __init__() missing 1 required positional argument: 'dtype' Traceback: Traceback (most recent call last): File "/src/services/worker/src/worker/job_runn...

#6404 · PR · closed · Support pyarrow 14.0.1 and fix vulnerability CVE-2023-47248
  id 1990211901 · albertvillanova (8515462) · labels: [] · comments: []
  created 2023-11-13T09:15:39 · updated 2023-11-14T10:29:48 · closed 2023-11-14T10:23:29
  https://github.com/huggingface/datasets/pull/6404
  body: Support `pyarrow` 14.0.1 and fix vulnerability [CVE-2023-47248](https://github.com/advisories/GHSA-5wvp-7f3h-6wmm). Fix #6396.
#6403 · issue · closed · Cannot import datasets on google colab (python 3.10.12)
  id 1990098817 · nabilaannisa (15389235) · labels: [] · comments: []
  created 2023-11-13T08:14:43 · updated 2023-11-16T05:04:22 · closed 2023-11-16T05:04:21
  https://github.com/huggingface/datasets/issues/6403
  body: ### Describe the bug I'm trying A full colab demo notebook of zero-shot-distillation from https://github.com/huggingface/transformers/tree/main/examples/research_projects/zero-shot-distillation but i got this type of error when importing datasets on my google colab (python version is 3.10.12) ![image](https://gith...

#6402 · PR · closed · Update torch_formatter.py
  id 1989094542 · varunneal (32204417) · labels: [] · comments: []
  created 2023-11-11T19:40:41 · updated 2024-03-15T11:31:53 · closed 2024-03-15T11:25:37
  https://github.com/huggingface/datasets/pull/6402
  body: Ensure PyTorch images are converted to (C, H, W) instead of (H, W, C). See #6394 for motivation.

#6401 · issue · closed · dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text") not working
  id 1988710061 · userbox020 (47074021) · labels: [] · comments: []
  created 2023-11-11T04:09:07 · updated 2023-11-20T17:45:20 · closed 2023-11-20T17:45:20
  https://github.com/huggingface/datasets/issues/6401
  body: ### Describe the bug ``` (datasets) mruserbox@guru-X99:/media/10TB_HHD/_LLM_DATASETS$ python dataset.py Downloading readme: 100%|███████████████████████████████████| 360/360 [00:00<00:00, 2.16MB/s] Downloading data: 100%|█████████████████████████████████| 65.1M/65.1M [00:19<00:00, 3.38MB/s] Downloading data: 100...

#6400 · issue · closed · Safely load datasets by disabling execution of dataset loading script
  id 1988571317 · irenedea (14367635) · labels: enhancement (a2eeef) · comments: []
  created 2023-11-10T23:48:29 · updated 2024-06-13T15:56:13 · closed 2024-06-13T15:56:13
  https://github.com/huggingface/datasets/issues/6400
  body: ### Feature request Is there a way to disable execution of dataset loading script using `load_dataset`? This is a security vulnerability that could lead to arbitrary code execution. Any suggested workarounds are welcome as well. ### Motivation This is a security vulnerability that could lead to arbitrary code e...

#6399 · issue · open · TypeError: Cannot convert pyarrow.lib.ChunkedArray to pyarrow.lib.Array
  id 1988368503 · y-hwang (76236359) · labels: [] · comments: []
  created 2023-11-10T20:48:46 · updated 2024-06-22T00:13:48 · closed: null
  https://github.com/huggingface/datasets/issues/6399
  body: ### Describe the bug Hi, I am preprocessing a large custom dataset with numpy arrays. I am running into this TypeError during writing in a dataset.map() function. I've tried decreasing writer batch size, but this error persists. This error does not occur for smaller datasets. Thank you! ### Steps to repro...

#6398 · PR · closed · Remove redundant condition in builders
  id 1987786446 · albertvillanova (8515462) · labels: [] · comments: []
  created 2023-11-10T14:56:43 · updated 2023-11-14T10:49:15 · closed 2023-11-14T10:43:00
  https://github.com/huggingface/datasets/pull/6398
  body: Minor refactoring to remove redundant condition.

#6397 · issue · closed · Raise a different exception for inexisting dataset vs files without known extension
  id 1987622152 · severo (1676121) · labels: [] · comments: []
  created 2023-11-10T13:22:14 · updated 2023-11-22T15:12:34 · closed 2023-11-22T15:12:34
  https://github.com/huggingface/datasets/issues/6397
  body: See https://github.com/huggingface/datasets-server/issues/2082#issuecomment-1805716557 We have the same error for: - https://huggingface.co/datasets/severo/a_dataset_that_does_not_exist: a dataset that does not exist - https://huggingface.co/datasets/severo/test_files_without_extension: a dataset with files withou...
#6396 · issue · closed · Issue with pyarrow 14.0.1
  id 1987308077 · severo (1676121) · labels: [] · comments: []
  created 2023-11-10T10:02:12 · updated 2023-11-14T10:23:30 · closed 2023-11-14T10:23:30
  https://github.com/huggingface/datasets/issues/6396
  body: See https://github.com/huggingface/datasets-server/pull/2089 for reference ``` from datasets import (Array2D, Dataset, Features) feature_type = Array2D(shape=(2, 2), dtype="float32") content = [[0.0, 0.0], [0.0, 0.0]] features = Features({"col": feature_type}) dataset = Dataset.from_dict({"col": [content]}, fea...

#6395 · issue · closed · Add ability to set lock type
  id 1986484124 · leoleoasd (37735580) · labels: enhancement (a2eeef) · comments: []
  created 2023-11-09T22:12:30 · updated 2023-11-23T18:50:00 · closed 2023-11-23T18:50:00
  https://github.com/huggingface/datasets/issues/6395
  body: ### Feature request Allow setting file lock type, maybe from an environment variable Currently, it only depends on whether fnctl is available: https://github.com/huggingface/datasets/blob/12ebe695b4748c5a26e08b44ed51955f74f5801d/src/datasets/utils/filelock.py#L463-L470C16 ### Motivation In my environment...

#6394 · issue · closed · TorchFormatter images (H, W, C) instead of (C, H, W) format
  id 1985947116 · Modexus (37351874) · labels: [] · comments: []
  created 2023-11-09T16:02:15 · updated 2024-04-11T12:40:16 · closed 2024-04-11T12:40:16
  https://github.com/huggingface/datasets/issues/6394
  body: ### Describe the bug Using .set_format("torch") leads to images having shape (H, W, C), the same as in numpy. However, pytorch normally uses (C, H, W) format. Maybe I'm missing something but this makes the format a lot less useful as I then have to permute it anyways. If not using the format it is possible to ...

#6393 · issue · closed · Filter occasionally hangs
  id 1984913259 · dakinggg (43149077) · labels: [] · comments: []
  created 2023-11-09T06:18:30 · updated 2025-02-22T00:49:19 · closed 2025-02-22T00:49:19
  https://github.com/huggingface/datasets/issues/6393
  body: ### Describe the bug A call to `.filter` occasionally hangs (after the filter is complete, according to tqdm) There is a trace produced ``` Exception ignored in: <function Dataset.__del__ at 0x7efb48130c10> Traceback (most recent call last): File "/usr/lib/python3/dist-packages/datasets/arrow_dataset.py", l...

#6392 · issue · closed · `push_to_hub` is not robust to hub closing connection
  id 1984369545 · msis (577139) · labels: [] · comments: []
  created 2023-11-08T20:44:53 · updated 2023-12-20T07:28:24 · closed 2023-12-01T17:51:34
  https://github.com/huggingface/datasets/issues/6392
  body: ### Describe the bug Like to #6172, `push_to_hub` will crash if Hub resets the connection and raise the following error: ``` Pushing dataset shards to the dataset hub: 32%|███▏ | 54/171 [06:38<14:23, 7.38s/it] Traceback (most recent call last): File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/...

#6391 · PR · closed · Webdataset dataset builder
  id 1984091776 · lhoestq (42851186) · labels: [] · comments: []
  created 2023-11-08T17:31:59 · updated 2024-05-22T16:51:08 · closed 2023-11-28T16:33:10
  https://github.com/huggingface/datasets/pull/6391
  body: Allow `load_dataset` to support the Webdataset format. It allows users to download/stream data from local files or from the Hugging Face Hub. Moreover it will enable the Dataset Viewer for Webdataset datasets on HF. ## Implementation details - I added a new Webdataset builder - dataset with TAR files are n...

#6390 · PR · closed · handle future deprecation argument
  id 1983725707 · winglian (381258) · labels: [] · comments: []
  created 2023-11-08T14:21:25 · updated 2023-11-21T02:10:24 · closed 2023-11-14T15:15:59
  https://github.com/huggingface/datasets/pull/6390
  body: getting this error: ``` /root/miniconda3/envs/py3.10/lib/python3.10/site-packages/datasets/table.py:1387: FutureWarning: promote has been superseded by mode='default'. return cls._concat_blocks(pa_tables_to_concat_vertically, axis=0) ``` Since datasets supports arrow greater than 8.0.0, we need to handle both ...
1,983,545,744
6,389
Index 339 out of range for dataset of size 339 <-- save_to_file()
### Describe the bug When saving out some Audio() data. The data is audio recordings with associated 'sentences'. (They use the audio 'bytes' approach because they're clips within audio files). Code is below the traceback (I can't upload the voice audio/text (it's not even me)). ``` Traceback (most recent call ...
open
https://github.com/huggingface/datasets/issues/6389
2023-11-08T12:52:09
2023-11-24T09:14:13
null
{ "login": "jaggzh", "id": 20318973, "type": "User" }
[]
false
[]
1,981,136,093
6,388
How to create 3d medical image dataset?
### Feature request I am new to Hugging Face; after looking through the `datasets` docs, I can't find how to create a dataset containing 3D medical images (files ending with '.mhd', '.dcm', '.nii'). ### Motivation Help us to upload 3D medical datasets to Hugging Face! ### Your contribution I'll submit a PR if I find a way to...
open
https://github.com/huggingface/datasets/issues/6388
2023-11-07T11:27:36
2023-11-07T11:28:53
null
{ "login": "QingYunA", "id": 41177312, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,980,224,020
6,387
How to load existing downloaded dataset ?
Hi @mariosasko @lhoestq @katielink Thanks for your contribution and hard work. ### Feature request First, I download a dataset as normal by: ``` from datasets import load_dataset dataset = load_dataset('username/data_name', cache_dir='data') ``` The dataset format in `data` directory will be: ``` ...
closed
https://github.com/huggingface/datasets/issues/6387
2023-11-06T22:51:44
2023-11-16T18:07:01
2023-11-16T18:07:01
{ "login": "liming-ai", "id": 73068772, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,979,878,014
6,386
Formatting overhead
### Describe the bug Hi! I very recently noticed that my training time is dominated by batch formatting. Using Lightning's profilers, I located the bottleneck within `datasets.formatting.formatting` and then narrowed it down with `line-profiler`. It turns out that almost all of the overhead is due to creating new inst...
closed
https://github.com/huggingface/datasets/issues/6386
2023-11-06T19:06:38
2023-11-06T23:56:12
2023-11-06T23:56:12
{ "login": "d-miketa", "id": 320321, "type": "User" }
[]
false
[]
1,979,308,338
6,385
Get an error when i try to concatenate the squad dataset with my own dataset
### Describe the bug Hello, I'm new here and I need to concatenate the squad dataset with my own dataset I created. I get the following error when I try to do it: Traceback (most recent call last): Cell In[9], line 1 concatenated_dataset = concatenate_datasets([train_dataset, dataset1]) File ~\ana...
closed
https://github.com/huggingface/datasets/issues/6385
2023-11-06T14:29:22
2023-11-06T16:50:45
2023-11-06T16:50:45
{ "login": "CCDXDX", "id": 149378500, "type": "User" }
[]
false
[]
1,979,117,069
6,384
Load the local dataset folder from other place
This is from https://github.com/huggingface/diffusers/issues/5573
closed
https://github.com/huggingface/datasets/issues/6384
2023-11-06T13:07:04
2023-11-19T05:42:06
2023-11-19T05:42:05
{ "login": "OrangeSodahub", "id": 54439582, "type": "User" }
[]
false
[]
1,978,189,389
6,383
imagenet-1k downloads over and over
### Describe the bug What could be causing this? ``` $ python3 Python 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> from datasets import load_dataset >>> load_dataset("imagenet-1k") Downloading builder ...
closed
https://github.com/huggingface/datasets/issues/6383
2023-11-06T02:58:58
2024-06-12T13:15:00
2023-11-06T06:02:39
{ "login": "seann999", "id": 6847529, "type": "User" }
[]
false
[]
1,977,400,799
6,382
Add CheXpert dataset for vision
### Feature request ### Name **CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison** ### Paper https://arxiv.org/abs/1901.07031 ### Data https://stanfordaimi.azurewebsites.net/datasets/8cbd9ed4-2eb9-4565-affc-111cf4f7ebe2 ### Motivation CheXpert is one of the fund...
open
https://github.com/huggingface/datasets/issues/6382
2023-11-04T15:36:11
2024-01-10T11:53:52
null
{ "login": "SauravMaheshkar", "id": 61241031, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "dataset request", "color": "e99695" } ]
false
[]
1,975,028,470
6,381
Add my dataset
## medical data **Description:** This dataset, named "medical data," is a collection of text data from various sources, carefully curated and cleaned for use in natural language processing (NLP) tasks. It consists of a diverse range of text, including articles, books, and online content, covering topics from scienc...
closed
https://github.com/huggingface/datasets/pull/6381
2023-11-02T20:59:52
2023-11-08T14:37:46
2023-11-06T15:50:14
{ "login": "keyur536", "id": 103646675, "type": "User" }
[]
true
[]
1,974,741,221
6,380
Fix for continuation behaviour on broken dataset archives due to starving download connections via HTTP-GET
This PR proposes a (slightly hacky) fix for an Issue that can occur when downloading large dataset parts over unstable connections. The underlying issue is also being discussed in https://github.com/huggingface/datasets/issues/5594. Issue Symptoms & Behaviour: - Download of a large archive file during dataset down...
open
https://github.com/huggingface/datasets/pull/6380
2023-11-02T17:28:23
2023-11-02T17:31:19
null
{ "login": "RuntimeRacer", "id": 49956579, "type": "User" }
[]
true
[]
1,974,638,850
6,379
Avoid redundant warning when encoding NumPy array as `Image`
Avoid a redundant warning in `encode_np_array` by removing the identity check as NumPy `dtype`s can be equal without having identical `id`s. Additionally, fix "unreachable" checks in `encode_np_array`.
closed
https://github.com/huggingface/datasets/pull/6379
2023-11-02T16:37:58
2023-11-06T17:53:27
2023-11-02T17:08:07
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,973,942,770
6,378
Support pyarrow 14.0.0
Support `pyarrow` 14.0.0. Fix #6377 and fix #6374 (root cause). This fix is analog to a previous one: - #6175
closed
https://github.com/huggingface/datasets/pull/6378
2023-11-02T10:25:10
2023-11-02T15:24:28
2023-11-02T15:15:44
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,973,937,612
6,377
Support pyarrow 14.0.0
Support pyarrow 14.0.0 by fixing the root cause of: - #6374 and revert: - #6375
closed
https://github.com/huggingface/datasets/issues/6377
2023-11-02T10:22:08
2023-11-02T15:15:45
2023-11-02T15:15:45
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
false
[]
1,973,927,468
6,376
Caching problem when deleting a dataset
### Describe the bug Pushing a dataset with n + m features to a repo which was deleted, but contained n features, will fail. ### Steps to reproduce the bug 1. Create a dataset with n features per row 2. `dataset.push_to_hub(YOUR_PATH, SPLIT, token=TOKEN)` 3. Go on the hub, delete the repo at `YOUR_PATH` 4. Update...
closed
https://github.com/huggingface/datasets/issues/6376
2023-11-02T10:15:58
2023-12-04T16:53:34
2023-12-04T16:53:33
{ "login": "clefourrier", "id": 22726840, "type": "User" }
[]
false
[]
1,973,877,879
6,375
Temporarily pin pyarrow < 14.0.0
Temporarily pin `pyarrow` < 14.0.0 until permanent solution is found. Hot fix #6374.
closed
https://github.com/huggingface/datasets/pull/6375
2023-11-02T09:48:58
2023-11-02T10:22:33
2023-11-02T10:11:19
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,973,857,428
6,374
CI is broken: TypeError: Couldn't cast array
See: https://github.com/huggingface/datasets/actions/runs/6730567226/job/18293518039 ``` FAILED tests/test_table.py::test_cast_sliced_fixed_size_array_to_features - TypeError: Couldn't cast array of type fixed_size_list<item: int32>[3] to Sequence(feature=Value(dtype='int64', id=None), length=3, id=None) ```
closed
https://github.com/huggingface/datasets/issues/6374
2023-11-02T09:37:06
2023-11-02T10:11:20
2023-11-02T10:11:20
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
false
[]
1,973,349,695
6,373
Fix typo in `Dataset.map` docstring
null
closed
https://github.com/huggingface/datasets/pull/6373
2023-11-02T01:36:49
2023-11-02T15:18:22
2023-11-02T10:11:38
{ "login": "bryant1410", "id": 3905501, "type": "User" }
[]
true
[]
1,972,837,794
6,372
do not try to download from HF GCS for generator
attempt to fix https://github.com/huggingface/datasets/issues/6371
closed
https://github.com/huggingface/datasets/pull/6372
2023-11-01T17:57:11
2023-11-02T16:02:52
2023-11-02T15:52:09
{ "login": "yundai424", "id": 43726198, "type": "User" }
[]
true
[]
1,972,807,579
6,371
`Dataset.from_generator` should not try to download from HF GCS
### Describe the bug When using [`Dataset.from_generator`](https://github.com/huggingface/datasets/blob/c9c1166e1cf81d38534020f9c167b326585339e5/src/datasets/arrow_dataset.py#L1072) with `streaming=False`, the internal logic will call [`download_and_prepare`](https://github.com/huggingface/datasets/blob/main/src/datas...
closed
https://github.com/huggingface/datasets/issues/6371
2023-11-01T17:36:17
2023-11-02T15:52:10
2023-11-02T15:52:10
{ "login": "yundai424", "id": 43726198, "type": "User" }
[]
false
[]
1,972,073,909
6,370
TensorDataset format does not work with Trainer from transformers
### Describe the bug The model was built to do fine-tuning on a BERT model for relation extraction. trainer.train() returns an error message ```TypeError: vars() argument must have __dict__ attribute``` when it has a `train_dataset` generated from `torch.utils.data.TensorDataset`. However, in the document, the req...
closed
https://github.com/huggingface/datasets/issues/6370
2023-11-01T10:09:54
2023-11-29T16:31:08
2023-11-29T16:31:08
{ "login": "jinzzasol", "id": 49014051, "type": "User" }
[]
false
[]
1,971,794,108
6,369
Multi process map did not load cache file correctly
### Describe the bug When I was training model on Multiple GPUs by DDP, the dataset is tokenized multiple times after main process. ![1698820541284](https://github.com/huggingface/datasets/assets/14285786/0b2fe054-54d8-4e00-96e6-6ca5b69e662b) ![1698820501568](https://github.com/huggingface/datasets/assets/142857...
closed
https://github.com/huggingface/datasets/issues/6369
2023-11-01T06:36:54
2023-11-30T16:04:46
2023-11-30T16:04:45
{ "login": "enze5088", "id": 14285786, "type": "User" }
[]
false
[]
1,971,193,692
6,368
Fix python formatting for complex types in `format_table`
Fix #6366
closed
https://github.com/huggingface/datasets/pull/6368
2023-10-31T19:48:08
2023-11-02T14:42:28
2023-11-02T14:21:16
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,971,015,861
6,367
Fix time measuring snippet in docs
Fix https://discuss.huggingface.co/t/attributeerror-enter/60509
closed
https://github.com/huggingface/datasets/pull/6367
2023-10-31T17:57:17
2023-10-31T18:35:53
2023-10-31T18:24:02
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,970,213,490
6,366
with_format() function returns bytes instead of PIL images even when image column is not part of "columns"
### Describe the bug When using the with_format() function on a dataset containing images, even if the image column is not part of the columns provided in the function, its type will be changed to bytes. Here is a minimal reproduction of the bug: https://colab.research.google.com/drive/1hyaOspgyhB41oiR1-tXE3k_gJCdJU...
closed
https://github.com/huggingface/datasets/issues/6366
2023-10-31T11:10:48
2023-11-02T14:21:17
2023-11-02T14:21:17
{ "login": "leot13", "id": 17809020, "type": "User" }
[]
false
[]
1,970,140,392
6,365
Parquet size grows exponentially for categorical data
### Describe the bug It seems that when saving a data frame with a categorical column inside the size can grow exponentially. This seems to happen because when we save the categorical data to parquet, we are saving the data + all the categories existing in the original data. This happens even when the categories ar...
closed
https://github.com/huggingface/datasets/issues/6365
2023-10-31T10:29:02
2023-10-31T10:49:17
2023-10-31T10:49:17
{ "login": "aseganti", "id": 82567957, "type": "User" }
[]
false
[]
1,969,136,106
6,364
ArrowNotImplementedError: Unsupported cast from string to list using function cast_list
Hi, I am trying to load a local CSV dataset (similar to explodinggradients_fiqa) using load_dataset. When I try to pass features, I am facing the mentioned issue. CSV data sample (golden_dataset.csv): Question | Context | answer | groundtruth "what is abc?"...
closed
https://github.com/huggingface/datasets/issues/6364
2023-10-30T20:14:01
2023-10-31T19:21:23
2023-10-31T19:21:23
{ "login": "divyakrishna-devisetty", "id": 32887094, "type": "User" }
[]
false
[]
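A common workaround for the cast error in issue 6364 above is to parse the stringified list column yourself before building the dataset, since a CSV cell can only hold text and a list ends up serialized as its string representation. A dependency-free sketch with Python's stdlib (the column names and sample values are illustrative):

```python
import ast
import csv
import io

# A list saved into a CSV cell comes back as the string "['ctx one', 'ctx two']";
# ast.literal_eval safely turns it back into a real Python list.
raw = io.StringIO(
    'question,contexts\n'
    '"what is abc?","[\'ctx one\', \'ctx two\']"\n'
)
rows = list(csv.DictReader(raw))
for row in rows:
    row["contexts"] = ast.literal_eval(row["contexts"])

# rows[0]["contexts"] is now a genuine list; the rows could then be fed to
# Dataset.from_list with a Sequence(Value("string")) feature (hypothetical
# follow-up, omitted here to keep the sketch dependency-free).
```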
1,968,891,277
6,363
dataset.transform() hangs indefinitely while finetuning the stable diffusion XL
### Describe the bug Multi-GPU fine-tuning of Stable Diffusion XL by following https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/README_sdxl.md hangs indefinitely. ### Steps to reproduce the bug accelerate launch train_text_to_image_sdxl.py --pretrained_model_name_or_path=$MODEL_NAME --...
closed
https://github.com/huggingface/datasets/issues/6363
2023-10-30T17:34:05
2023-11-22T00:29:21
2023-11-22T00:29:21
{ "login": "bhosalems", "id": 10846405, "type": "User" }
[]
false
[]
1,965,794,569
6,362
Simplify filesystem logic
Simplifies the existing filesystem logic (e.g., to avoid unnecessary if-else as mentioned in https://github.com/huggingface/datasets/pull/6098#issue-1827655071)
closed
https://github.com/huggingface/datasets/pull/6362
2023-10-27T15:54:18
2023-11-15T14:08:29
2023-11-15T14:02:02
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,965,672,950
6,360
Add support for `Sequence(Audio/Image)` feature in `push_to_hub`
### Feature request Allow for `Sequence` of `Image` (or `Audio`) to be embedded inside the shards. ### Motivation Currently, thanks to #3685, when `embed_external_files` is set to True (which is the default) in `push_to_hub`, features of type `Image` and `Audio` are embedded inside the arrow/parquet shards, instead ...
closed
https://github.com/huggingface/datasets/issues/6360
2023-10-27T14:39:57
2024-02-06T19:24:20
2024-02-06T19:24:20
{ "login": "Laurent2916", "id": 21087104, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,965,378,583
6,359
Stuck in "Resolving data files..."
### Describe the bug I have an image dataset with 300k images; each image is 768 * 768. When I run `dataset = load_dataset("imagefolder", data_dir="/path/to/img_dir", split='train')` a second time, it takes 50 minutes to finish the "Resolving data files" part. What's going on in this part? From my understa...
open
https://github.com/huggingface/datasets/issues/6359
2023-10-27T12:01:51
2025-03-09T02:18:19
null
{ "login": "Luciennnnnnn", "id": 20135317, "type": "User" }
[]
false
[]
1,965,014,595
6,358
Mounting datasets cache fails due to absolute paths.
### Describe the bug Creating a datasets cache and mounting this into, for example, a docker container, renders the data unreadable due to absolute paths written into the cache. ### Steps to reproduce the bug 1. Create a datasets cache by downloading some data 2. Mount the dataset folder into a docker contain...
closed
https://github.com/huggingface/datasets/issues/6358
2023-10-27T08:20:27
2024-04-10T08:50:06
2023-11-28T14:47:12
{ "login": "charliebudd", "id": 72921588, "type": "User" }
[]
false
[]
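One workaround discussed for portability problems like issue 6358 above is to post-process the cache so path fields become relative to the cache root before mounting it elsewhere. A minimal stdlib sketch — the `state.json` layout and the `data_files` key here are illustrative assumptions, not the real `datasets` cache format:

```python
import json
import tempfile
from pathlib import Path

def relativize_paths(state_file: Path, root: Path) -> None:
    """Rewrite absolute paths under `root` in a JSON state file so they are
    relative to `root`, letting the directory be mounted at another prefix.
    The "data_files" key is an illustrative stand-in for the real layout."""
    state = json.loads(state_file.read_text())
    state["data_files"] = [
        str(Path(p).relative_to(root)) if Path(p).is_absolute() else p
        for p in state["data_files"]
    ]
    state_file.write_text(json.dumps(state))

# Demo on a throwaway cache directory:
root = Path(tempfile.mkdtemp())
sf = root / "state.json"
sf.write_text(json.dumps({"data_files": [str(root / "train" / "part-0.arrow")]}))
relativize_paths(sf, root)
fixed = json.loads(sf.read_text())
```

A reader of the relativized file would then resolve each entry against whatever root the cache is mounted at inside the container.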
1,964,653,995
6,357
Allow passing a multiprocessing context to functions that support `num_proc`
### Feature request Allow specifying [a multiprocessing context](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods) to functions that support `num_proc` or use multiprocessing pools. For example, the following could be done: ```python dataset = dataset.map(_func, num_proc=2, mp_cont...
open
https://github.com/huggingface/datasets/issues/6357
2023-10-27T02:31:16
2023-10-27T02:31:16
null
{ "login": "bryant1410", "id": 3905501, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,964,015,802
6,356
Add `fsspec` version to the `datasets-cli env` command output
... to make debugging issues easier, as `fsspec`'s releases often introduce breaking changes.
closed
https://github.com/huggingface/datasets/pull/6356
2023-10-26T17:19:25
2023-10-26T18:42:56
2023-10-26T18:32:21
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,963,979,896
6,355
More hub centric docs
Let's have more hub-centric documentation in the datasets docs Tutorials - Add “Configure the dataset viewer” page - Change order: - Overview - and more focused on the Hub rather than the library - Then all the hub related things - and mention how to read/write with other tools like pandas - The...
closed
https://github.com/huggingface/datasets/pull/6355
2023-10-26T16:54:46
2024-01-11T06:34:16
2023-10-30T17:32:57
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,963,483,324
6,354
`IterableDataset.from_spark` does not support multiple workers in pytorch `Dataloader`
### Describe the bug Looks like `IterableDataset.from_spark` does not support multiple workers in the pytorch `Dataloader`, if I'm not missing anything. Also, it returns inconsistent error messages, which probably depend on the nondeterministic order of worker executions. Some examples I've encountered: ``` File "/l...
open
https://github.com/huggingface/datasets/issues/6354
2023-10-26T12:43:36
2024-12-10T14:06:06
null
{ "login": "NazyS", "id": 50199774, "type": "User" }
[]
false
[]
1,962,646,450
6,353
load_dataset save_to_disk load_from_disk error
### Describe the bug datasets version: 2.10.1 I `load_dataset` and `save_to_disk` successfully on Windows 10 (**and I `load_from_disk(/LLM/data/wiki)` successfully on Windows 10**), and I copy the dataset `/LLM/data/wiki` onto an Ubuntu system, but when I `load_from_disk(/LLM/data/wiki)` on Ubuntu, something weird ha...
closed
https://github.com/huggingface/datasets/issues/6353
2023-10-26T03:47:06
2024-04-03T05:31:01
2023-10-26T10:18:04
{ "login": "brisker", "id": 13804492, "type": "User" }
[]
false
[]
1,962,296,057
6,352
Error loading wikitext data raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.")
I was trying to load the wiki dataset, but I got this error: traindata = load_dataset('wikitext', 'wikitext-2-raw-v1', split='train') File "/home/aelkordy/.conda/envs/prune_llm/lib/python3.9/site-packages/datasets/load.py", line 1804, in load_dataset ds = builder_instance.as_dataset(split=split, verific...
closed
https://github.com/huggingface/datasets/issues/6352
2023-10-25T21:55:31
2024-03-19T16:46:22
2023-11-07T07:26:54
{ "login": "Ahmed-Roushdy", "id": 68569076, "type": "User" }
[]
false
[]
1,961,982,988
6,351
Fix use_dataset.mdx
The current example isn't working because it can't find `labels` inside the Dataset object. So I've added an extra step to the process. Tested and working in Colab.
closed
https://github.com/huggingface/datasets/pull/6351
2023-10-25T18:21:08
2023-10-26T17:19:49
2023-10-26T17:10:27
{ "login": "angel-luis", "id": 17672548, "type": "User" }
[]
true
[]