id: int64 (599M – 3.26B)
number: int64 (1 – 7.7k)
title: string (lengths 1 – 290)
body: string (lengths 0 – 228k)
state: string (2 classes)
html_url: string (lengths 46 – 51)
created_at: timestamp[s] (2020-04-14 10:18:02 – 2025-07-23 08:04:53)
updated_at: timestamp[s] (2020-04-27 16:04:17 – 2025-07-23 18:53:44)
closed_at: timestamp[s] (2020-04-14 12:01:40 – 2025-07-23 16:44:42)
user: dict
labels: list (lengths 0 – 4)
is_pull_request: bool (2 classes)
comments: list (lengths 0 – 0)
2,845,184,764
7,391
AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType'
Tried several versions of pyarrow; none of them work
open
https://github.com/huggingface/datasets/issues/7391
2025-02-11T12:02:26
2025-02-11T12:02:26
null
{ "login": "LinXin04", "id": 25193686, "type": "User" }
[]
false
[]
2,843,813,365
7,390
Re-add py.typed
### Feature request The motivation for removing py.typed no longer seems to apply. Would a solution like [this one](https://github.com/huggingface/huggingface_hub/pull/2752) work here? ### Motivation MyPy support is broken. As more type checkers come out, such as RedKnot, these may also be broken. It would be goo...
open
https://github.com/huggingface/datasets/issues/7390
2025-02-10T22:12:52
2025-02-10T22:12:52
null
{ "login": "NeilGirdhar", "id": 730137, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,843,592,606
7,389
Getting statistics about filtered examples
@lhoestq wondering if the team has thought about this and if there are any recommendations? Currently when processing datasets some examples are bound to get filtered out, whether it's due to bad format, or length is too long, or any other custom filters that might be getting applied. Let's just focus on the filter by...
closed
https://github.com/huggingface/datasets/issues/7389
2025-02-10T20:48:29
2025-02-11T20:44:15
2025-02-11T20:44:13
{ "login": "jonathanasdf", "id": 511073, "type": "User" }
[]
false
[]
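The pattern asked about in #7389 (tracking how many examples each filter removes) can be sketched in plain Python. `filter_with_stats` and the predicate names below are hypothetical helpers for illustration, not `datasets` API:

```python
from collections import Counter

def filter_with_stats(examples, predicates):
    """Keep examples passing all named predicates; count drops per predicate."""
    stats = Counter()
    kept = []
    for ex in examples:
        for name, pred in predicates.items():
            if not pred(ex):
                stats[name] += 1  # record which filter rejected this example
                break
        else:
            kept.append(ex)
    return kept, stats

examples = [{"text": "short"}, {"text": "x" * 50}, {"text": ""}]
predicates = {
    "empty": lambda ex: len(ex["text"]) > 0,
    "too_long": lambda ex: len(ex["text"]) <= 10,
}
kept, stats = filter_with_stats(examples, predicates)
# kept == [{"text": "short"}]; one example dropped per predicate
```

In a real pipeline the same bookkeeping could be emitted as logging alongside `Dataset.filter` calls.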
2,843,188,499
7,388
OSError: [Errno 22] Invalid argument forbidden character
### Describe the bug I'm on Windows and I'm trying to load a dataset, but I'm getting the error in the title because files in the repository are named with characters like < > which can't appear in a file name. Could it be possible to load this dataset while removing those characters? ### Steps to reproduce the bug load_dataset("CAT...
closed
https://github.com/huggingface/datasets/issues/7388
2025-02-10T17:46:31
2025-02-11T13:42:32
2025-02-11T13:42:30
{ "login": "langflogit", "id": 124634542, "type": "User" }
[]
false
[]
2,841,228,048
7,387
Dynamic adjusting dataloader sampling weight
Hi, thanks for your wonderful work! I'm wondering whether there is a way to dynamically adjust the sampling weight of each sample in the dataset during training? Looking forward to your reply, thanks again.
open
https://github.com/huggingface/datasets/issues/7387
2025-02-10T03:18:47
2025-03-07T14:06:54
null
{ "login": "whc688", "id": 72799643, "type": "User" }
[]
false
[]
2,840,032,524
7,386
Add bookfolder Dataset Builder for Digital Book Formats
### Feature request This feature proposes adding a new dataset builder called bookfolder to the datasets library. This builder would allow users to easily load datasets consisting of various digital book formats, including: AZW, AZW3, CB7, CBR, CBT, CBZ, EPUB, MOBI, and PDF. ### Motivation Currently, loading dataset...
closed
https://github.com/huggingface/datasets/issues/7386
2025-02-08T14:27:55
2025-02-08T14:30:10
2025-02-08T14:30:09
{ "login": "shikanime", "id": 22115108, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,830,664,522
7,385
Make IterableDataset (optionally) resumable
### What does this PR do? This PR introduces a new `stateful` option to the `dataset.shuffle` method, which defaults to `False`. When enabled, this option allows for resumable shuffling of `IterableDataset` instances, albeit with some additional memory overhead. Key points: * All tests have passed * Docstrings ...
open
https://github.com/huggingface/datasets/pull/7385
2025-02-04T15:55:33
2025-03-03T17:31:40
null
{ "login": "yzhangcs", "id": 18402347, "type": "User" }
[]
true
[]
2,828,208,828
7,384
Support async functions in map()
e.g. to download images or call an inference API like HF or vLLM ```python import asyncio import random from datasets import Dataset async def f(x): await asyncio.sleep(random.random()) ds = Dataset.from_dict({"data": range(100)}) ds.map(f) # Map: 100%|█████████████████████████████| 100/100 [00:0...
closed
https://github.com/huggingface/datasets/pull/7384
2025-02-03T18:18:40
2025-02-13T14:01:13
2025-02-13T14:00:06
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
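The core idea behind #7384 (applying an async function over examples, as `map()` was extended to support) can be illustrated with stdlib `asyncio` alone. `amap` below is a hypothetical stand-in for what an async-aware map does, not the `datasets` implementation:

```python
import asyncio

async def f(example):
    # Stand-in for real async work, e.g. downloading an image or
    # calling an inference API; a real task would await network I/O here.
    await asyncio.sleep(0)
    return {"data": example["data"] * 2}

async def amap(fn, examples):
    # Launch one coroutine per example and gather results concurrently,
    # instead of awaiting each example sequentially.
    return await asyncio.gather(*(fn(ex) for ex in examples))

examples = [{"data": i} for i in range(5)]
results = asyncio.run(amap(f, examples))
# results == [{"data": 0}, {"data": 2}, {"data": 4}, {"data": 6}, {"data": 8}]
```

Concurrency is what makes this worthwhile: with many slow I/O-bound calls, the total wall time approaches the slowest single call rather than the sum.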
2,823,480,924
7,382
Add Pandas, PyArrow and Polars docs
(also added the missing numpy docs and fixed a small bug in pyarrow formatting)
closed
https://github.com/huggingface/datasets/pull/7382
2025-01-31T13:22:59
2025-01-31T16:30:59
2025-01-31T16:30:57
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,815,649,092
7,381
Iterating over values of a column in the IterableDataset
### Feature request I would like to be able to iterate (and re-iterate if needed) over a column of an `IterableDataset` instance. The following example shows the supposed API: ```python def gen(): yield {"text": "Good", "label": 0} yield {"text": "Bad", "label": 1} ds = IterableDataset.from_generator(gen) tex...
closed
https://github.com/huggingface/datasets/issues/7381
2025-01-28T13:17:36
2025-05-22T18:00:04
2025-05-22T18:00:04
{ "login": "TopCoder2K", "id": 47208659, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
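The behavior requested in #7381 (a re-iterable view over one column of a generator-backed dataset) can be sketched without `datasets` at all. `ColumnIterable` is a hypothetical illustration of the semantics, not the library's implementation:

```python
def gen():
    yield {"text": "Good", "label": 0}
    yield {"text": "Bad", "label": 1}

class ColumnIterable:
    """Re-iterable view over a single column of a generator-backed dataset."""
    def __init__(self, make_examples, column):
        self.make_examples = make_examples  # factory, so iteration can restart
        self.column = column

    def __iter__(self):
        return (ex[self.column] for ex in self.make_examples())

texts = ColumnIterable(gen, "text")
# list(texts) == ["Good", "Bad"], and iterating again yields the same values
```

Storing the generator *factory* rather than a generator instance is what makes re-iteration possible.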
2,811,566,116
7,380
fix: dill default for version bigger 0.3.8
Fixes the `log` def for dill versions >= 0.3.9 (https://pypi.org/project/dill/). This project uses dill; with the release of version 0.3.9, the datasets lib breaks.
closed
https://github.com/huggingface/datasets/pull/7380
2025-01-26T13:37:16
2025-03-13T20:40:19
2025-03-13T20:40:19
{ "login": "sam-hey", "id": 40773225, "type": "User" }
[]
true
[]
2,802,957,388
7,378
Allow pushing config version to hub
### Feature request Currently, when datasets are created, they can be versioned by passing the `version` argument to `load_dataset(...)`. For example creating `outcomes.csv` on the command line ``` echo "id,value\n1,0\n2,0\n3,1\n4,1\n" > outcomes.csv ``` and creating it ``` import datasets dataset = datasets.load_dat...
open
https://github.com/huggingface/datasets/issues/7378
2025-01-21T22:35:07
2025-01-30T13:56:56
null
{ "login": "momeara", "id": 129072, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,802,723,285
7,377
Support for sparse arrays with the Arrow Sparse Tensor format?
### Feature request AI in biology is becoming a big thing. One thing that would be a huge benefit to the field that Huggingface Datasets doesn't currently have is native support for **sparse arrays**. Arrow has support for sparse tensors. https://arrow.apache.org/docs/format/Other.html#sparse-tensor It would be ...
open
https://github.com/huggingface/datasets/issues/7377
2025-01-21T20:14:35
2025-01-30T14:06:45
null
{ "login": "JulesGM", "id": 3231217, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,802,621,104
7,376
[docs] uv install
Proposes adding uv to installation docs (see Slack thread [here](https://huggingface.slack.com/archives/C01N44FJDHT/p1737377177709279) for more context) if you're interested!
closed
https://github.com/huggingface/datasets/pull/7376
2025-01-21T19:15:48
2025-03-14T20:16:35
2025-03-14T20:16:35
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[]
true
[]
2,800,609,218
7,375
vllm batch inference raises an error
### Describe the bug ![Image](https://github.com/user-attachments/assets/3d958e43-28dc-4467-9333-5990c7af3b3f) ### Steps to reproduce the bug ![Image](https://github.com/user-attachments/assets/3067eeca-a54d-4956-b0fd-3fc5ea93dabb) ### Expected behavior ![Image](https://github.com/user-attachments/assets/77d32936-...
open
https://github.com/huggingface/datasets/issues/7375
2025-01-21T03:22:23
2025-01-30T14:02:40
null
{ "login": "YuShengzuishuai", "id": 51228154, "type": "User" }
[]
false
[]
2,793,442,320
7,374
Remove .h5 from imagefolder extensions
the .h5 format is not relevant for imagefolder, and it makes the viewer fail to process datasets on HF (so many of them that the viewer takes more time to process new datasets)
closed
https://github.com/huggingface/datasets/pull/7374
2025-01-16T18:17:24
2025-01-16T18:26:40
2025-01-16T18:26:38
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,793,237,139
7,373
Excessive RAM Usage After Dataset Concatenation concatenate_datasets
### Describe the bug When loading a dataset from disk, concatenating it, and starting the training process, the RAM usage progressively increases until the kernel terminates the process due to excessive memory consumption. https://github.com/huggingface/datasets/issues/2276 ### Steps to reproduce the bug ```python ...
open
https://github.com/huggingface/datasets/issues/7373
2025-01-16T16:33:10
2025-03-27T17:40:59
null
{ "login": "sam-hey", "id": 40773225, "type": "User" }
[]
false
[]
2,791,760,968
7,372
Inconsistent Behavior Between `load_dataset` and `load_from_disk` When Loading Sharded Datasets
### Description I encountered an inconsistency in behavior between `load_dataset` and `load_from_disk` when loading sharded datasets. Here is a minimal example to reproduce the issue: #### Code 1: Using `load_dataset` ```python from datasets import Dataset, load_dataset # First save with max_shard_size=10 Dataset.fr...
open
https://github.com/huggingface/datasets/issues/7372
2025-01-16T05:47:20
2025-01-16T05:47:20
null
{ "login": "gaohongkui", "id": 38203359, "type": "User" }
[]
false
[]
2,790,549,889
7,371
500 Server error with pushing a dataset
### Describe the bug Suddenly, I started getting this error message saying it was an internal error. `Error creating/pushing dataset: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/datasets/ll4ma-lab/grasp-dataset/commit/main (Request ID: Root=1-6787f0b7-66d5bd45413e481c4c2fb22d;670d04ff-...
open
https://github.com/huggingface/datasets/issues/7371
2025-01-15T18:23:02
2025-01-15T20:06:05
null
{ "login": "martinmatak", "id": 7677814, "type": "User" }
[]
false
[]
2,787,972,786
7,370
Support faster processing using pandas or polars functions in `IterableDataset.map()`
Following the polars integration :) Allow super fast processing using pandas or polars functions in `IterableDataset.map()` by adding support to pandas and polars formatting in `IterableDataset` ```python import polars as pl from datasets import Dataset ds = Dataset.from_dict({"i": range(10)}).to_iterable_da...
closed
https://github.com/huggingface/datasets/pull/7370
2025-01-14T18:14:13
2025-01-31T11:08:15
2025-01-30T13:30:57
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,787,193,238
7,369
Importing dataset gives unhelpful error message when filenames in metadata.csv are not found in the directory
### Describe the bug While importing an audiofolder dataset, where the names of the audio files don't correspond to the filenames in the metadata.csv, we get an unclear error message that is not helpful for debugging, i.e. ``` ValueError: Instruction "train" corresponds to no data! ``` ### Steps to reproduce the ...
open
https://github.com/huggingface/datasets/issues/7369
2025-01-14T13:53:21
2025-01-14T15:05:51
null
{ "login": "svencornetsdegroot", "id": 38278139, "type": "User" }
[]
false
[]
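A friendlier diagnostic for the #7369 situation could simply report which metadata.csv file names are absent from the folder. `missing_files` is a hypothetical helper sketched for illustration, not part of the `datasets` API:

```python
def missing_files(metadata_names, folder_files):
    # Names listed in metadata.csv but not present on disk: these are
    # exactly the entries that leave the loader with no matching data.
    return sorted(set(metadata_names) - set(folder_files))

metadata = ["a.wav", "b.wav", "c.wav"]
on_disk = ["a.wav", "c.wav", "extra.wav"]
# missing_files(metadata, on_disk) == ["b.wav"]
```

Surfacing this list in the error message would point users directly at the mismatched rows instead of the opaque "corresponds to no data!".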
2,784,272,477
7,368
Add with_split to DatasetDict.map
#7356
closed
https://github.com/huggingface/datasets/pull/7368
2025-01-13T15:09:56
2025-03-08T05:45:02
2025-03-07T14:09:52
{ "login": "jp1924", "id": 93233241, "type": "User" }
[]
true
[]
2,781,522,894
7,366
Dataset.from_dict() can't handle large dict
### Describe the bug I have 26,000,000 3-tuples. When I use Dataset.from_dict() to load them, neither .py scripts nor Jupyter notebooks can run successfully. This is my code: ``` # len(example_data) is 26,000,000, 'diff' is a text diff1_list = [example_data[i].texts[0] for i in range(len(example_data))] diff2_list =...
open
https://github.com/huggingface/datasets/issues/7366
2025-01-11T02:05:21
2025-01-11T02:05:21
null
{ "login": "CSU-OSS", "id": 164967134, "type": "User" }
[]
false
[]
2,780,216,199
7,365
A parameter is specified but not used in datasets.arrow_dataset.Dataset.from_pandas()
### Describe the bug I am interested in creating train, test and eval splits from a pandas DataFrame, so I was looking at the possibilities available. I noticed the split parameter and was hopeful to use it to generate the 3 at once; however, while trying to understand the code, I noticed that it ha...
open
https://github.com/huggingface/datasets/issues/7365
2025-01-10T13:39:33
2025-01-10T13:39:33
null
{ "login": "NourOM02", "id": 69003192, "type": "User" }
[]
false
[]
2,776,929,268
7,364
API endpoints for gated dataset access requests
### Feature request I would like a programmatic way of requesting access to gated datasets. The current solution to gain access forces me to visit a website and physically click an "agreement" button (as per the [documentation](https://huggingface.co/docs/hub/en/datasets-gated#access-gated-datasets-as-a-user)). An i...
closed
https://github.com/huggingface/datasets/issues/7364
2025-01-09T06:21:20
2025-01-09T11:17:40
2025-01-09T11:17:20
{ "login": "jerome-white", "id": 6140840, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,774,090,012
7,363
ImportError: To support decoding images, please install 'Pillow'.
### Describe the bug Following this tutorial locally using a MacBook and VSCode: https://huggingface.co/docs/diffusers/en/tutorials/basic_training This line of code: for i, image in enumerate(dataset[:4]["image"]): throws: ImportError: To support decoding images, please install 'Pillow'. Pillow is installed. ###...
open
https://github.com/huggingface/datasets/issues/7363
2025-01-08T02:22:57
2025-05-28T14:56:53
null
{ "login": "jamessdixon", "id": 1394644, "type": "User" }
[]
false
[]
2,773,731,829
7,362
HuggingFace CLI dataset download raises error
### Describe the bug Trying to download Hugging Face datasets using Hugging Face CLI raises error. This error only started after December 27th, 2024. For example: ``` huggingface-cli download --repo-type dataset gboleda/wikicorpus Traceback (most recent call last): File "/home/ubuntu/test_venv/bin/huggingface...
closed
https://github.com/huggingface/datasets/issues/7362
2025-01-07T21:03:30
2025-01-08T15:00:37
2025-01-08T14:35:52
{ "login": "ajayvohra2005", "id": 3870355, "type": "User" }
[]
false
[]
2,771,859,244
7,361
Fix lock permission
All files except the lock file have proper permissions obeying the `ACL` property if it is set. If the cache directory has an `ACL` property, it should be respected instead of just using `umask` for permissions. To fix it, create the lock file first and pass the created `mode`. By creating a lock file with `touch()` before `Fil...
open
https://github.com/huggingface/datasets/pull/7361
2025-01-07T04:15:53
2025-01-07T04:49:46
null
{ "login": "cih9088", "id": 11530592, "type": "User" }
[]
true
[]
2,771,751,406
7,360
error when loading dataset in Hugging Face: NoneType error is not callable
### Describe the bug I met an error when running a notebook provided by Hugging Face. ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[2], line 5 3 # Load the enhancers dat...
open
https://github.com/huggingface/datasets/issues/7360
2025-01-07T02:11:36
2025-02-24T13:32:52
null
{ "login": "nanu23333", "id": 189343338, "type": "User" }
[]
false
[]
2,771,137,842
7,359
There are multiple 'mteb/arguana' configurations in the cache: default, corpus, queries with HF_HUB_OFFLINE=1
### Describe the bug Hey folks, I am trying to run this code - ```python from datasets import load_dataset, get_dataset_config_names ds = load_dataset("mteb/arguana") ``` with HF_HUB_OFFLINE=1 But I get the following error - ```python Using the latest cached version of the dataset since mteb/arguana...
open
https://github.com/huggingface/datasets/issues/7359
2025-01-06T17:42:49
2025-01-06T17:43:31
null
{ "login": "Bhavya6187", "id": 723146, "type": "User" }
[]
false
[]
2,770,927,769
7,358
Fix remove_columns in the formatted case
`remove_columns` had no effect when running a function in `.map()` on a dataset that is formatted. This aligns the logic of `map()` with the non-formatted case and also with https://github.com/huggingface/datasets/pull/7353
open
https://github.com/huggingface/datasets/pull/7358
2025-01-06T15:44:23
2025-01-06T15:46:46
null
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,770,456,127
7,357
Python process aborded with GIL issue when using image dataset
### Describe the bug The issue is visible only with the latest `datasets==3.2.0`. When using image dataset the Python process gets aborted right before the exit with the following error: ``` Fatal Python error: PyGILState_Release: thread state 0x7fa1f409ade0 must be current when releasing Python runtime state: f...
open
https://github.com/huggingface/datasets/issues/7357
2025-01-06T11:29:30
2025-03-08T15:59:36
null
{ "login": "AlexKoff88", "id": 25342812, "type": "User" }
[]
false
[]
2,770,095,103
7,356
How about adding a feature to pass the key when performing map on DatasetDict?
### Feature request Add a feature to pass the key of the DatasetDict when performing map ### Motivation I often preprocess using map on DatasetDict. Sometimes, I need to preprocess train and valid data differently depending on the task. So, I thought it would be nice to pass the key (like train, valid) when perf...
closed
https://github.com/huggingface/datasets/issues/7356
2025-01-06T08:13:52
2025-03-24T10:57:47
2025-03-24T10:57:47
{ "login": "jp1924", "id": 93233241, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,768,958,211
7,355
Not available datasets[audio] on python 3.13
### Describe the bug This is the error I got; it seems the numba package does not support Python 3.13 PS C:\Users\sergi\Documents> pip install datasets[audio] Defaulting to user installation because normal site-packages is not writeable Collecting datasets[audio] Using cached datasets-3.2.0-py3-none-any.whl.metada...
open
https://github.com/huggingface/datasets/issues/7355
2025-01-04T18:37:08
2025-06-28T00:26:19
null
{ "login": "sergiosinlimites", "id": 70306948, "type": "User" }
[]
false
[]
2,768,955,917
7,354
A module that was compiled using NumPy 1.x cannot be run in NumPy 2.0.2 as it may crash. To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0. Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
### Describe the bug Following this tutorial: https://huggingface.co/docs/diffusers/en/tutorials/basic_training and running it locally using VSCode on my MacBook. The first line in the tutorial fails: from datasets import load_dataset dataset = load_dataset('huggan/smithsonian_butterflies_subset', split="train"). w...
closed
https://github.com/huggingface/datasets/issues/7354
2025-01-04T18:30:17
2025-01-08T02:20:58
2025-01-08T02:20:58
{ "login": "jamessdixon", "id": 1394644, "type": "User" }
[]
false
[]
2,768,484,726
7,353
changes to MappedExamplesIterable to resolve #7345
modified `MappedExamplesIterable` and `test_iterable_dataset.py::test_mapped_examples_iterable_with_indices` fix #7345 @lhoestq
closed
https://github.com/huggingface/datasets/pull/7353
2025-01-04T06:01:15
2025-01-07T11:56:41
2025-01-07T11:56:41
{ "login": "vttrifonov", "id": 12157034, "type": "User" }
[]
true
[]
2,767,763,850
7,352
fsspec 2024.12.0
null
closed
https://github.com/huggingface/datasets/pull/7352
2025-01-03T15:32:25
2025-01-03T15:34:54
2025-01-03T15:34:11
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,767,731,707
7,350
Bump hfh to 0.24 to fix ci
null
closed
https://github.com/huggingface/datasets/pull/7350
2025-01-03T15:09:40
2025-01-03T15:12:17
2025-01-03T15:10:27
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,767,670,454
7,349
Webdataset special columns in last position
Place the columns "__key__" and "__url__" in last position in the Dataset Viewer, since they are not the main content. Before: <img width="1012" alt="image" src="https://github.com/user-attachments/assets/b556c1fe-2674-4ba0-9643-c074aa9716fd" />
closed
https://github.com/huggingface/datasets/pull/7349
2025-01-03T14:32:15
2025-01-03T14:34:39
2025-01-03T14:32:30
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,766,128,230
7,348
Catch OSError for arrow
fixes https://github.com/huggingface/datasets/issues/7346 (also updated `ruff` and applied style changes)
closed
https://github.com/huggingface/datasets/pull/7348
2025-01-02T14:30:00
2025-01-09T14:25:06
2025-01-09T14:25:04
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,760,282,339
7,347
Converting Arrow to WebDataset TAR Format for Offline Use
### Feature request Hi, I've downloaded an Arrow-formatted dataset offline using huggingface's datasets library by: ``` import json from datasets import load_dataset dataset = load_dataset("pixparse/cc3m-wds") dataset.save_to_disk("./cc3m_1") ``` now I need to convert it to WebDataset's TAR form...
closed
https://github.com/huggingface/datasets/issues/7347
2024-12-27T01:40:44
2024-12-31T17:38:00
2024-12-28T15:38:03
{ "login": "katie312", "id": 91370128, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,758,752,118
7,346
OSError: Invalid flatbuffers message.
### Describe the bug When loading large 2D data (1000 × 1152) with a large number of samples (2,000 in this case) in `load_dataset`, the error message `OSError: Invalid flatbuffers message` is reported. When only 300 pieces of data of this size (1000 × 1152) are stored, they can be loaded correctly. When 2,00...
closed
https://github.com/huggingface/datasets/issues/7346
2024-12-25T11:38:52
2025-01-09T14:25:29
2025-01-09T14:25:05
{ "login": "antecede", "id": 46232487, "type": "User" }
[]
false
[]
2,758,585,709
7,345
Different behaviour of IterableDataset.map vs Dataset.map with remove_columns
### Describe the bug The following code ```python import datasets as hf ds1 = hf.Dataset.from_list([{'i': i} for i in [0,1]]) #ds1 = ds1.to_iterable_dataset() ds2 = ds1.map( lambda i: {'i': i+1}, input_columns = ['i'], remove_columns = ['i'] ) list(ds2) ``` produces ```python [{'i': ...
closed
https://github.com/huggingface/datasets/issues/7345
2024-12-25T07:36:48
2025-01-07T11:56:42
2025-01-07T11:56:42
{ "login": "vttrifonov", "id": 12157034, "type": "User" }
[]
false
[]
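The semantics at stake in #7345 (what `map()` with `input_columns` plus `remove_columns` should produce per example) can be pinned down in a few lines of plain Python. `map_example` is a hypothetical model of the expected behavior, not the `datasets` code:

```python
def map_example(example, fn, input_columns, remove_columns):
    # fn receives only the requested input columns as positional args;
    # remove_columns are dropped from the original example before the
    # function's output is merged in, so the output may reuse the name.
    out = fn(*(example[c] for c in input_columns))
    kept = {k: v for k, v in example.items() if k not in remove_columns}
    return {**kept, **out}

result = map_example({"i": 0}, lambda i: {"i": i + 1}, ["i"], ["i"])
# result == {"i": 1}
```

The reported bug was that `Dataset.map` and `IterableDataset.map` disagreed on this; the model above matches the behavior the fix converges on.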
2,754,735,951
7,344
HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access SlimPajama-627B or c4 on TPUs
### Describe the bug I am trying to run some trainings on Google's TPUs using Huggingface's DataLoader on [SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B) and [c4](https://huggingface.co/datasets/allenai/c4), but I end up running into `429 Client Error: Too Many Requests for URL` error when ...
closed
https://github.com/huggingface/datasets/issues/7344
2024-12-22T16:30:07
2025-01-15T05:32:00
2025-01-15T05:31:58
{ "login": "clankur", "id": 9397233, "type": "User" }
[]
false
[]
2,750,525,823
7,343
[Bug] Inconsistent behavior of data_files and data_dir in load_dataset method.
### Describe the bug Inconsistent operation of data_files and data_dir in the load_dataset method. ### Steps to reproduce the bug # First I have three files, named 'train.json', 'val.json', 'test.json'. Each one has a simple dict `{text:'aaa'}`. Their paths are `/data/train.json`, `/data/val.json`, `/data/test.jso...
closed
https://github.com/huggingface/datasets/issues/7343
2024-12-19T14:31:27
2025-01-03T15:54:09
2025-01-03T15:54:09
{ "login": "JasonCZH4", "id": 74161960, "type": "User" }
[]
false
[]
2,749,572,310
7,342
Update LICENSE
null
closed
https://github.com/huggingface/datasets/pull/7342
2024-12-19T08:17:50
2024-12-19T08:44:08
2024-12-19T08:44:08
{ "login": "eliebak", "id": 97572401, "type": "User" }
[]
true
[]
2,745,658,561
7,341
minor video docs on how to install
null
closed
https://github.com/huggingface/datasets/pull/7341
2024-12-17T18:06:17
2024-12-17T18:11:17
2024-12-17T18:11:15
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,745,473,274
7,340
don't import soundfile in tests
null
closed
https://github.com/huggingface/datasets/pull/7340
2024-12-17T16:49:55
2024-12-17T16:54:04
2024-12-17T16:50:24
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,745,460,060
7,339
Update CONTRIBUTING.md
null
closed
https://github.com/huggingface/datasets/pull/7339
2024-12-17T16:45:25
2024-12-17T16:51:36
2024-12-17T16:46:30
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,744,877,569
7,337
One or several metadata.jsonl were found, but not in the same directory or in a parent directory of
### Describe the bug ImageFolder with metadata.jsonl error. I downloaded liuhaotian/LLaVA-CC3M-Pretrain-595K locally from Hugging Face. According to the tutorial in https://huggingface.co/docs/datasets/image_dataset#image-captioning, I put only images.zip and the metadata.jsonl containing the information in the same folder. How...
open
https://github.com/huggingface/datasets/issues/7337
2024-12-17T12:58:43
2025-01-03T15:28:13
null
{ "login": "mst272", "id": 67250532, "type": "User" }
[]
false
[]
2,744,746,456
7,336
Clarify documentation or Create DatasetCard
### Feature request I noticed that you can use a Model Card instead of a Dataset Card when pushing a dataset to the Hub, but this isn’t clearly mentioned in [the docs.](https://huggingface.co/docs/datasets/dataset_card) - Update the docs to clarify that a Model Card can work for datasets too. - It might be worth c...
open
https://github.com/huggingface/datasets/issues/7336
2024-12-17T12:01:00
2024-12-17T12:01:00
null
{ "login": "August-murr", "id": 145011209, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,743,437,260
7,335
Too many open files: '/root/.cache/huggingface/token'
### Describe the bug I ran this code: ``` from datasets import load_dataset dataset = load_dataset("common-canvas/commoncatalog-cc-by", cache_dir="/datadrive/datasets/cc", num_proc=1000) ``` And got this error. Before it was some other file though (like something...incomplete). Running ``` ulimit -n 8192 ...
open
https://github.com/huggingface/datasets/issues/7335
2024-12-16T21:30:24
2024-12-16T21:30:24
null
{ "login": "kopyl", "id": 17604849, "type": "User" }
[]
false
[]
2,740,266,503
7,334
TypeError: Value.__init__() missing 1 required positional argument: 'dtype'
### Describe the bug ds = load_dataset( "./xxx.py", name="default", split="train", ) The datasets library does not support debugging locally anymore... ### Steps to reproduce the bug ``` from datasets import load_dataset ds = load_dataset( "./repo.py", name="default", split="train", ) ...
open
https://github.com/huggingface/datasets/issues/7334
2024-12-15T04:08:46
2025-07-10T03:32:36
null
{ "login": "ghost", "id": 10137, "type": "User" }
[]
false
[]
2,738,626,593
7,328
Fix typo in arrow_dataset
null
closed
https://github.com/huggingface/datasets/pull/7328
2024-12-13T15:17:09
2024-12-19T17:10:27
2024-12-19T17:10:25
{ "login": "AndreaFrancis", "id": 5564745, "type": "User" }
[]
true
[]
2,738,514,909
7,327
.map() is not caching and ram goes OOM
### Describe the bug I'm trying to run a fairly simple map that converts a dataset into numpy arrays. However, it just piles up in memory and doesn't write to disk. I've tried multiple cache techniques such as specifying the cache dir, setting max mem, etc., but none seem to work. What am I missing here? ### Steps to...
open
https://github.com/huggingface/datasets/issues/7327
2024-12-13T14:22:56
2025-02-10T10:42:38
null
{ "login": "simeneide", "id": 7136076, "type": "User" }
[]
false
[]
2,738,188,902
7,326
Remove upper bound for fsspec
### Describe the bug As also raised by @cyyever in https://github.com/huggingface/datasets/pull/7296 and @NeilGirdhar in https://github.com/huggingface/datasets/commit/d5468836fe94e8be1ae093397dd43d4a2503b926#commitcomment-140952162 , `datasets` has a problematic version constraint on `fsspec`. In our case this c...
open
https://github.com/huggingface/datasets/issues/7326
2024-12-13T11:35:12
2025-01-03T15:34:37
null
{ "login": "fellhorn", "id": 26092524, "type": "User" }
[]
false
[]
2,736,618,054
7,325
Introduce pdf support (#7318)
First implementation of the Pdf feature to support pdfs (#7318) . Using [pdfplumber](https://github.com/jsvine/pdfplumber?tab=readme-ov-file#python-library) as the default library to work with pdfs. @lhoestq and @AndreaFrancis
closed
https://github.com/huggingface/datasets/pull/7325
2024-12-12T18:31:18
2025-03-18T14:00:36
2025-03-18T14:00:36
{ "login": "yabramuvdi", "id": 4812761, "type": "User" }
[]
true
[]
2,736,008,698
7,323
Unexpected cache behaviour using load_dataset
### Describe the bug Following the [Cache management](https://huggingface.co/docs/datasets/en/cache) docs and previous behaviour from datasets version 2.18.0, one is able to change the cache directory. Previously, all downloaded/extracted/etc. files were found in this folder. As I have recently updated to the latest v...
closed
https://github.com/huggingface/datasets/issues/7323
2024-12-12T14:03:00
2025-01-31T11:34:24
2025-01-31T11:34:24
{ "login": "Moritz-Wirth", "id": 74349080, "type": "User" }
[]
false
[]
2,732,254,868
7,322
ArrowInvalid: JSON parse error: Column() changed from object to array in row 0
### Describe the bug Encountering an error while loading the ```liuhaotian/LLaVA-Instruct-150K dataset```. ### Steps to reproduce the bug ``` from datasets import load_dataset fw =load_dataset("liuhaotian/LLaVA-Instruct-150K") ``` Error: ``` ArrowInvalid Traceback (most recen...
open
https://github.com/huggingface/datasets/issues/7322
2024-12-11T08:41:39
2025-07-15T13:06:55
null
{ "login": "Polarisamoon", "id": 41767521, "type": "User" }
[]
false
[]
2,731,626,760
7,321
ImportError: cannot import name 'set_caching_enabled' from 'datasets'
### Describe the bug Traceback (most recent call last): File "/usr/local/lib/python3.10/runpy.py", line 187, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error) File "/usr/local/lib/python3.10/runpy.py", line 110, in _get_module_details __import__(pkg_name) File "...
open
https://github.com/huggingface/datasets/issues/7321
2024-12-11T01:58:46
2024-12-11T13:32:15
null
{ "login": "sankexin", "id": 33318353, "type": "User" }
[]
false
[]
2,731,112,100
7,320
ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['label']
### Describe the bug I am trying to create a PEFT model from DISTILBERT model, and run a training loop. However, the trainer.train() is giving me this error: ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['label'] Here is my code: ### St...
closed
https://github.com/huggingface/datasets/issues/7320
2024-12-10T20:23:11
2024-12-10T23:22:23
2024-12-10T23:22:23
{ "login": "atrompeterog", "id": 38381084, "type": "User" }
[]
false
[]
2,730,679,980
7,319
set dev version
null
closed
https://github.com/huggingface/datasets/pull/7319
2024-12-10T17:01:34
2024-12-10T17:04:04
2024-12-10T17:01:45
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,730,676,278
7,318
Introduce support for PDFs
### Feature request The idea (discussed in the Discord server with @lhoestq ) is to have a Pdf type like Image/Audio/Video. For example [Video](https://github.com/huggingface/datasets/blob/main/src/datasets/features/video.py) was recently added and contains how to decode a video file encoded in a dictionary like {"pat...
open
https://github.com/huggingface/datasets/issues/7318
2024-12-10T16:59:48
2024-12-12T18:38:13
null
{ "login": "yabramuvdi", "id": 4812761, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,730,661,237
7,317
Release: 3.2.0
null
closed
https://github.com/huggingface/datasets/pull/7317
2024-12-10T16:53:20
2024-12-10T16:56:58
2024-12-10T16:56:56
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,730,196,085
7,316
More docs to from_dict to mention that the result lives in RAM
following discussions at https://discuss.huggingface.co/t/how-to-load-this-simple-audio-data-set-and-use-dataset-map-without-memory-issues/17722/14
closed
https://github.com/huggingface/datasets/pull/7316
2024-12-10T13:56:01
2024-12-10T13:58:32
2024-12-10T13:57:02
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,727,502,630
7,314
Resolved for empty datafiles
Resolved for Issue#6152
open
https://github.com/huggingface/datasets/pull/7314
2024-12-09T15:47:22
2024-12-27T18:20:21
null
{ "login": "sahillihas", "id": 20582290, "type": "User" }
[]
true
[]
2,726,240,634
7,313
Cannot create a dataset with relative audio path
### Describe the bug Hello! I want to create a dataset of parquet files, with audios stored as separate .mp3 files. However, it says "No such file or directory" (see the reproducing code). ### Steps to reproduce the bug Creating a dataset ``` from pathlib import Path from datasets import Dataset, load_datas...
open
https://github.com/huggingface/datasets/issues/7313
2024-12-09T07:34:20
2025-04-19T07:13:08
null
{ "login": "sedol1339", "id": 5188731, "type": "User" }
[]
false
[]
2,725,103,094
7,312
[Audio Features - DO NOT MERGE] PoC for adding an offset+sliced reading to audio file.
This is a proof of concept for #7310 . The idea is to enable the access to others column of the dataset row when loading an audio file into a table. This is to allow sliced reading. As stated in the issue, many people have very long audio files and use start and stop slicing in this audio file. Right now, this code ...
open
https://github.com/huggingface/datasets/pull/7312
2024-12-08T10:27:31
2024-12-08T10:27:31
null
{ "login": "TParcollet", "id": 11910731, "type": "User" }
[]
true
[]
2,725,002,630
7,311
How to get the original dataset name with username?
### Feature request The issue is related to ray data https://github.com/ray-project/ray/issues/49008 which it requires to check if the dataset is the original one just after `load_dataset` and parquet files are already available on hf hub. The solution used now is to get the dataset name, config and split, then `...
open
https://github.com/huggingface/datasets/issues/7311
2024-12-08T07:18:14
2025-01-09T10:48:02
null
{ "login": "npuichigo", "id": 11533479, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,724,830,603
7,310
Enable the Audio Feature to decode / read with an offset + duration
### Feature request For most large speech dataset, we do not wish to generate hundreds of millions of small audio samples. Instead, it is quite common to provide larger audio files with frame offset (soundfile start and stop arguments). We should be able to pass these arguments to Audio() (column ID corresponding in t...
open
https://github.com/huggingface/datasets/issues/7310
2024-12-07T22:01:44
2024-12-09T21:09:46
null
{ "login": "TParcollet", "id": 11910731, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,729,738,963
7,315
Allow manual configuration of Dataset Viewer for datasets not created with the `datasets` library
#### **Problem Description** Currently, the Hugging Face Dataset Viewer automatically interprets dataset fields for datasets created with the `datasets` library. However, for datasets pushed directly via `git`, the Viewer: - Defaults to generic columns like `label` with `null` values if no explicit mapping is provide...
open
https://github.com/huggingface/datasets/issues/7315
2024-12-07T16:37:12
2024-12-11T11:05:22
null
{ "login": "diarray-hub", "id": 114512099, "type": "User" }
[]
false
[]
2,723,636,931
7,309
Faster parquet streaming + filters with predicate pushdown
ParquetFragment.to_batches uses a buffered stream to read parquet data, which makes streaming faster (x2 on my laptop). I also added the `filters` config parameter to support filtering with predicate pushdown, e.g. ```python from datasets import load_dataset filters = [('problem_source', '==', 'math')] ds = ...
closed
https://github.com/huggingface/datasets/pull/7309
2024-12-06T18:01:54
2024-12-07T23:32:30
2024-12-07T23:32:28
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,720,244,889
7,307
refactor: remove unnecessary else
null
open
https://github.com/huggingface/datasets/pull/7307
2024-12-05T12:11:09
2024-12-06T15:11:33
null
{ "login": "HarikrishnanBalagopal", "id": 20921177, "type": "User" }
[]
true
[]
2,719,807,464
7,306
Creating new dataset from list loses information. (Audio Information Lost - either Datatype or Values).
### Describe the bug When creating a dataset from a list of datapoints, information is lost of the individual items. Specifically, when creating a dataset from a list of datapoints (from another dataset). Either the datatype is lost or the values are lost. See examples below. -> What is the best way to create...
open
https://github.com/huggingface/datasets/issues/7306
2024-12-05T09:07:53
2024-12-05T09:09:38
null
{ "login": "ai-nikolai", "id": 9797804, "type": "User" }
[]
false
[]
2,715,907,267
7,305
Build Documentation Test Fails Due to "Bad Credentials" Error
### Describe the bug The `Build documentation / build / build_main_documentation (push)` job is consistently failing during the "Syncing repository" step. The error occurs when attempting to determine the default branch name, resulting in "Bad credentials" errors. ### Steps to reproduce the bug 1. Trigger the `build...
open
https://github.com/huggingface/datasets/issues/7305
2024-12-03T20:22:54
2025-01-08T22:38:14
null
{ "login": "ruidazeng", "id": 31152346, "type": "User" }
[]
false
[]
2,715,179,811
7,304
Update iterable_dataset.py
close https://github.com/huggingface/datasets/issues/7297
closed
https://github.com/huggingface/datasets/pull/7304
2024-12-03T14:25:42
2024-12-03T14:28:10
2024-12-03T14:27:02
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,705,729,696
7,303
DataFilesNotFoundError for datasets LM1B
### Describe the bug Cannot load the dataset https://huggingface.co/datasets/billion-word-benchmark/lm1b ### Steps to reproduce the bug `dataset = datasets.load_dataset('lm1b', split=split)` ### Expected behavior `Traceback (most recent call last): File "/home/hml/projects/DeepLearning/Generative_model/Diffusio...
closed
https://github.com/huggingface/datasets/issues/7303
2024-11-29T17:27:45
2024-12-11T13:22:47
2024-12-11T13:22:47
{ "login": "hml1996-fight", "id": 72264324, "type": "User" }
[]
false
[]
2,702,626,386
7,302
Let server decide default repo visibility
Until now, all repos were public by default when created without passing the `private` argument. This meant that passing `private=False` or `private=None` was strictly the same. This is not the case anymore. Enterprise Hub offers organizations to set a default visibility setting for new repos. This is useful for organi...
closed
https://github.com/huggingface/datasets/pull/7302
2024-11-28T16:01:13
2024-11-29T17:00:40
2024-11-29T17:00:38
{ "login": "Wauplin", "id": 11801849, "type": "User" }
[]
true
[]
2,701,813,922
7,301
update load_dataset doctring
- remove canonical dataset name - remove dataset script logic - add streaming info - clearer download and prepare steps
closed
https://github.com/huggingface/datasets/pull/7301
2024-11-28T11:19:20
2024-11-29T10:31:43
2024-11-29T10:31:40
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,701,424,320
7,300
fix: update elasticsearch version
This should fix the `test_py311 (windows latest, deps-latest` errors. ``` =========================== short test summary info =========================== ERROR tests/test_search.py - AttributeError: `np.float_` was removed in the NumPy 2.0 release. Use `np.float64` instead. ERROR tests/test_search.py - AttributeE...
closed
https://github.com/huggingface/datasets/pull/7300
2024-11-28T09:14:21
2024-12-03T14:36:56
2024-12-03T14:24:42
{ "login": "ruidazeng", "id": 31152346, "type": "User" }
[]
true
[]
2,695,378,251
7,299
Efficient Image Augmentation in Hugging Face Datasets
### Describe the bug I'm using the Hugging Face datasets library to load images in batch and would like to apply a torchvision transform to solve the inconsistent image sizes in the dataset and apply some on the fly image augmentation. I can just think about using the collate_fn, but seems quite inefficient. ...
open
https://github.com/huggingface/datasets/issues/7299
2024-11-26T16:50:32
2024-11-26T16:53:53
null
{ "login": "fabiozappo", "id": 46443190, "type": "User" }
[]
false
[]
2,694,196,968
7,298
loading dataset issue with load_dataset() when training controlnet
### Describe the bug i'm unable to load my dataset for [controlnet training](https://github.com/huggingface/diffusers/blob/074e12358bc17e7dbe111ea4f62f05dbae8a49d5/examples/controlnet/train_controlnet.py#L606) using load_dataset(). however, load_from_disk() seems to work? would appreciate if someone can explain why ...
open
https://github.com/huggingface/datasets/issues/7298
2024-11-26T10:50:18
2024-11-26T10:50:18
null
{ "login": "sarahahtee", "id": 81594044, "type": "User" }
[]
false
[]
2,683,977,430
7,297
wrong return type for `IterableDataset.shard()`
### Describe the bug `IterableDataset.shard()` has the wrong typing for its return as `"Dataset"`. It should be `"IterableDataset"`. Makes my IDE unhappy. ### Steps to reproduce the bug look at [the source code](https://github.com/huggingface/datasets/blob/main/src/datasets/iterable_dataset.py#L2668)? ### Expected ...
closed
https://github.com/huggingface/datasets/issues/7297
2024-11-22T17:25:46
2024-12-03T14:27:27
2024-12-03T14:27:03
{ "login": "ysngshn", "id": 47225236, "type": "User" }
[]
false
[]
2,675,573,974
7,296
Remove upper version limit of fsspec[http]
null
closed
https://github.com/huggingface/datasets/pull/7296
2024-11-20T11:29:16
2025-03-06T04:47:04
2025-03-06T04:47:01
{ "login": "cyyever", "id": 17618148, "type": "User" }
[]
true
[]
2,672,003,384
7,295
[BUG]: Streaming from S3 triggers `unexpected keyword argument 'requote_redirect_url'`
### Describe the bug Note that this bug is only triggered when `streaming=True`. #5459 introduced always calling fsspec with `client_kwargs={"requote_redirect_url": False}`, which seems to have incompatibility issues even in the newest versions. Analysis of what's happening: 1. `datasets` passes the `client_kw...
open
https://github.com/huggingface/datasets/issues/7295
2024-11-19T12:23:36
2024-11-19T13:01:53
null
{ "login": "casper-hansen", "id": 27340033, "type": "User" }
[]
false
[]
2,668,663,130
7,294
Remove `aiohttp` from direct dependencies
The dependency is only used for catching an exception from other code. That can be done with an import guard.
closed
https://github.com/huggingface/datasets/pull/7294
2024-11-18T14:00:59
2025-05-07T14:27:18
2025-05-07T14:27:17
{ "login": "akx", "id": 58669, "type": "User" }
[]
true
[]
2,664,592,054
7,293
Updated inconsistent output in documentation examples for `ClassLabel`
fix #7129 @stevhliu
closed
https://github.com/huggingface/datasets/pull/7293
2024-11-16T16:20:57
2024-12-06T11:33:33
2024-12-06T11:32:01
{ "login": "sergiopaniego", "id": 17179696, "type": "User" }
[]
true
[]
2,664,250,855
7,292
DataFilesNotFoundError for datasets `OpenMol/PubChemSFT`
### Describe the bug Cannot load the dataset https://huggingface.co/datasets/OpenMol/PubChemSFT ### Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset('OpenMol/PubChemSFT') ``` ### Expected behavior ``` -----------------------------------------------------------------------...
closed
https://github.com/huggingface/datasets/issues/7292
2024-11-16T11:54:31
2024-11-19T00:53:00
2024-11-19T00:52:59
{ "login": "xnuohz", "id": 17878022, "type": "User" }
[]
false
[]
2,662,244,643
7,291
Why return_tensors='pt' doesn't work?
### Describe the bug I tried to add input_ids to dataset with map(), and I used the return_tensors='pt', but why I got the callback with the type of List? ![image](https://github.com/user-attachments/assets/ab046e20-2174-4e91-9cd6-4a296a43e83c) ### Steps to reproduce the bug ![image](https://github.com/user-attac...
open
https://github.com/huggingface/datasets/issues/7291
2024-11-15T15:01:23
2024-11-18T13:47:08
null
{ "login": "bw-wang19", "id": 86752851, "type": "User" }
[]
false
[]
2,657,620,816
7,290
`Dataset.save_to_disk` hangs when using num_proc > 1
### Describe the bug Hi, I'm encountered a small issue when saving datasets that led to the saving taking up to multiple hours. Specifically, [`Dataset.save_to_disk`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.save_to_disk) is a lot slower when using `num_proc>1` than...
open
https://github.com/huggingface/datasets/issues/7290
2024-11-14T05:25:13
2025-06-27T00:56:47
null
{ "login": "JohannesAck", "id": 22243463, "type": "User" }
[]
false
[]
2,648,019,507
7,289
Dataset viewer displays wrong statistics
### Describe the bug In [my dataset](https://huggingface.co/datasets/speedcell4/opus-unigram2), there is a column called `lang2`, and there are 94 different classes in total, but the viewer says there are 83 values only. This issue only arises in the `train` split. The total number of values is also 94 in the `test`...
closed
https://github.com/huggingface/datasets/issues/7289
2024-11-11T03:29:27
2024-11-13T13:02:25
2024-11-13T13:02:25
{ "login": "speedcell4", "id": 3585459, "type": "User" }
[]
false
[]
2,647,052,280
7,288
Release v3.1.1
null
closed
https://github.com/huggingface/datasets/pull/7288
2024-11-10T09:38:15
2024-11-10T09:38:48
2024-11-10T09:38:48
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[]
true
[]
2,646,958,393
7,287
Support for identifier-based automated split construction
### Feature request As far as I understand, automated construction of splits for hub datasets is currently based on either file names or directory structure ([as described here](https://huggingface.co/docs/datasets/en/repository_structure)) It would seem to be pretty useful to also allow splits to be based on ide...
open
https://github.com/huggingface/datasets/issues/7287
2024-11-10T07:45:19
2024-11-19T14:37:02
null
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,645,350,151
7,286
Concurrent loading in `load_from_disk` - `num_proc` as a param
### Feature request https://github.com/huggingface/datasets/pull/6464 mentions a `num_proc` param while loading dataset from disk, but can't find that in the documentation and code anywhere ### Motivation Make loading large datasets from disk faster ### Your contribution Happy to contribute if given pointers
closed
https://github.com/huggingface/datasets/issues/7286
2024-11-08T23:21:40
2024-11-09T16:14:37
2024-11-09T16:14:37
{ "login": "unography", "id": 5240449, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,644,488,598
7,285
Release v3.1.0
null
closed
https://github.com/huggingface/datasets/pull/7285
2024-11-08T16:17:58
2024-11-08T16:18:05
2024-11-08T16:18:05
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[]
true
[]
2,644,302,386
7,284
support for custom feature encoding/decoding
Fix for https://github.com/huggingface/datasets/issues/7220 as suggested in discussion, in preference to #7221 (only concern would be on effect on type checking with custom feature types that aren't covered by FeatureType?)
closed
https://github.com/huggingface/datasets/pull/7284
2024-11-08T15:04:08
2024-11-21T16:09:47
2024-11-21T16:09:47
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[]
true
[]
2,642,537,708
7,283
Allow for variation in metadata file names as per issue #7123
Allow metadata files to have an identifying preface. Specifically, it will recognize files with `-metadata.csv` or `_metadata.csv` as metadata files for the purposes of the dataset viewer functionality. Resolves #7123.
open
https://github.com/huggingface/datasets/pull/7283
2024-11-08T00:44:47
2024-11-08T00:44:47
null
{ "login": "egrace479", "id": 38985481, "type": "User" }
[]
true
[]
2,642,075,491
7,282
Faulty datasets.exceptions.ExpectedMoreSplitsError
### Describe the bug Trying to download only the 'validation' split of my dataset; instead hit the error `datasets.exceptions.ExpectedMoreSplitsError`. Appears to be the same undesired behavior as reported in [#6939](https://github.com/huggingface/datasets/issues/6939), but with `data_files`, not `data_dir`. Her...
open
https://github.com/huggingface/datasets/issues/7282
2024-11-07T20:15:01
2024-11-07T20:15:42
null
{ "login": "meg-huggingface", "id": 90473723, "type": "User" }
[]
false
[]
2,640,346,339
7,281
File not found error
### Describe the bug I get a FileNotFoundError: <img width="944" alt="image" src="https://github.com/user-attachments/assets/1336bc08-06f6-4682-a3c0-071ff65efa87"> ### Steps to reproduce the bug See screenshot. ### Expected behavior I want to load one audiofile from the dataset. ### Environmen...
open
https://github.com/huggingface/datasets/issues/7281
2024-11-07T09:04:49
2024-11-07T09:22:43
null
{ "login": "MichielBontenbal", "id": 37507786, "type": "User" }
[]
false
[]
2,639,977,077
7,280
Add filename in error message when ReadError or similar occur
Please update error messages to include relevant information for debugging when loading datasets with `load_dataset()` that may have a few corrupted files. Whenever downloading a full dataset, some files might be corrupted (either at the source or from downloading corruption). However the errors often only let me k...
open
https://github.com/huggingface/datasets/issues/7280
2024-11-07T06:00:53
2024-11-20T13:23:12
null
{ "login": "elisa-aleman", "id": 37046039, "type": "User" }
[]
false
[]