**Schema** (per-column type and min/max, as reported by the dataset viewer):

| Column | Type | Min | Max |
|---|---|---|---|
| id | int64 | 599M | 3.26B |
| number | int64 | 1 | 7.7k |
| title | string (length) | 1 | 290 |
| body | string (length) | 0 | 228k |
| state | string (2 values) | | |
| html_url | string (length) | 46 | 51 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-07-23 08:04:53 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-07-23 18:53:44 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-07-23 16:44:42 |
| user | dict | | |
| labels | list (length) | 0 | 4 |
| is_pull_request | bool (2 classes) | | |
| comments | list (length) | 0 | 0 |
### #6865 · Example on Semantic segmentation contains bug
Issue (open) · https://github.com/huggingface/datasets/issues/6865 · id 2,277,304,832
Created 2024-05-03T09:40:12 · updated 2024-05-03T09:40:12 · by ducha-aiki (4803565)

> ### Describe the bug https://huggingface.co/docs/datasets/en/semantic_segmentation shows wrong example with torchvision transforms. Specifically, as one can see in screenshot below, the object boundaries have weird colors. <img width="689" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/59...

### #6864 · Dataset 'rewardsignal/reddit_writing_prompts' doesn't exist on the Hub
Issue (closed) · https://github.com/huggingface/datasets/issues/6864 · id 2,276,986,981
Created 2024-05-03T06:03:30 · updated 2024-05-06T06:36:42 · closed 2024-05-06T06:36:41 · by vinodrajendran001 (5783246)

> ### Describe the bug The dataset `rewardsignal/reddit_writing_prompts` is missing in Huggingface Hub. ### Steps to reproduce the bug ``` from datasets import load_dataset prompt_response_dataset = load_dataset("rewardsignal/reddit_writing_prompts", data_files="prompt_responses_full.csv", split='train[:80%]...

### #6863 · Revert temporary pin huggingface-hub < 0.23.0
Issue (closed) · https://github.com/huggingface/datasets/issues/6863 · id 2,276,977,534
Created 2024-05-03T05:53:55 · updated 2024-05-27T10:14:41 · closed 2024-05-27T10:14:41 · by albertvillanova (8515462)

> Revert temporary pin huggingface-hub < 0.23.0 introduced by - #6861 once the following issue is fixed and released: - huggingface/transformers#30618

### #6862 · Fix load_dataset for data_files with protocols other than HF
Pull request (closed) · https://github.com/huggingface/datasets/pull/6862 · id 2,276,763,745
Created 2024-05-03T01:43:47 · updated 2024-07-23T14:37:08 · closed 2024-07-23T14:30:09 · by matstrand (544843)

> Fixes huggingface/datasets/issues/6598 I've added a new test case and a solution. Before applying the solution the test case was failing with the same error described in the linked issue. MRE: ``` pip install "datasets[s3]" python -c "from datasets import load_dataset; load_dataset('csv', data_files={'train': ...

### #6861 · Fix CI by temporarily pinning huggingface-hub < 0.23.0
Pull request (closed) · https://github.com/huggingface/datasets/pull/6861 · id 2,275,988,990
Created 2024-05-02T16:40:04 · updated 2024-05-02T16:59:42 · closed 2024-05-02T16:53:42 · by albertvillanova (8515462)

> As a hotfix for CI, temporarily pin `huggingface-hub` upper version Fix #6860. Revert once root cause is fixed, see: - https://github.com/huggingface/transformers/issues/30618

### #6860 · CI fails after huggingface_hub-0.23.0 release: FutureWarning: "resume_download"
Issue (closed, label: bug) · https://github.com/huggingface/datasets/issues/6860 · id 2,275,537,137
Created 2024-05-02T13:24:17 · updated 2024-05-02T16:53:45 · closed 2024-05-02T16:53:45 · by albertvillanova (8515462)

> CI fails after latest huggingface_hub-0.23.0 release: https://github.com/huggingface/huggingface_hub/releases/tag/v0.23.0 ``` FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_bertscore - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume...

### #6859 · Support folder-based datasets with large metadata.jsonl
Pull request (open) · https://github.com/huggingface/datasets/pull/6859 · id 2,274,996,774
Created 2024-05-02T09:07:26 · updated 2024-05-02T09:07:26 · by gbenson (580564)

> I tried creating an `imagefolder` dataset with a 714MB `metadata.jsonl` but got the error below. This pull request fixes the problem by increasing the block size like the message suggests. ``` >>> from datasets import load_dataset >>> dataset = load_dataset("imagefolder", data_dir="data-for-upload") Traceback (mos...

### #6858 · Segmentation fault
Issue (closed) · https://github.com/huggingface/datasets/issues/6858 · id 2,274,917,185
Created 2024-05-02T08:28:49 · updated 2024-05-03T08:43:21 · closed 2024-05-03T08:42:36 · by scampion (554155)

> ### Describe the bug Using various version for datasets, I'm no more longer able to load that dataset without a segmentation fault. Several others files are also concerned. ### Steps to reproduce the bug # Create a new venv python3 -m venv venv_test source venv_test/bin/activate # Install the latest versio...

### #6857 · Fix line-endings in tests on Windows
Pull request (closed) · https://github.com/huggingface/datasets/pull/6857 · id 2,274,849,730
Created 2024-05-02T07:49:15 · updated 2024-05-02T11:49:35 · closed 2024-05-02T11:43:00 · by albertvillanova (8515462)

> EDIT: ~~Fix test_delete_from_hub on Windows by passing explicit encoding.~~ Fix test_delete_from_hub and test_xgetsize_private by uploading the README file content directly (encoding the string), instead of writing a local file and uploading it. Note that local files created on Windows will have "\r\n" line ending...

### #6856 · CI fails on Windows for test_delete_from_hub and test_xgetsize_private due to new-line character
Issue (closed, label: bug) · https://github.com/huggingface/datasets/issues/6856 · id 2,274,828,933
Created 2024-05-02T07:37:03 · updated 2024-05-02T11:43:01 · closed 2024-05-02T11:43:01 · by albertvillanova (8515462)

> CI fails on Windows for test_delete_from_hub after the merge of: - #6820 This is weird because the CI was green in the PR branch before merging to main. ``` FAILED tests/test_hub.py::test_delete_from_hub - AssertionError: assert [CommitOperat...\r\n---\r\n')] == [CommitOperat...in/*\n---\n')] At index 1 ...
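The #6856/#6857 pair above comes down to text-mode file writes translating `"\n"` to `"\r\n"` on Windows. A minimal sketch (assumptions: illustrative content string and file name only, not the actual test code) of why passing `newline=""` keeps content byte-identical across platforms, which is the same idea as uploading the encoded string directly:

```python
import os
import tempfile

# Illustrative content with Unix line endings.
content = "configs:\n- config_name: default\n---\n"

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "README.md")
    # newline="" disables newline translation; plain "w" mode on Windows
    # would write "\r\n" instead of "\n".
    with open(path, "w", newline="") as f:
        f.write(content)
    with open(path, "rb") as f:
        raw = f.read()

# Byte-identical on every OS, so hash/diff comparisons against the
# original string cannot fail due to line endings.
assert raw == content.encode("utf-8")
```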
### #6855 · Fix dataset name for community Hub script-datasets
Pull request (closed) · https://github.com/huggingface/datasets/pull/6855 · id 2,274,777,812
Created 2024-05-02T07:05:44 · updated 2024-05-03T15:58:00 · closed 2024-05-03T15:51:57 · by albertvillanova (8515462)

> Fix dataset name for community Hub script-datasets by passing explicit dataset_name to HubDatasetModuleFactoryWithScript. Fix #6854. CC: @Wauplin

### #6854 · Wrong example of usage when config name is missing for community script-datasets
Issue (closed, label: bug) · https://github.com/huggingface/datasets/issues/6854 · id 2,274,767,686
Created 2024-05-02T06:59:39 · updated 2024-05-03T15:51:59 · closed 2024-05-03T15:51:58 · by albertvillanova (8515462)

> As reported by @Wauplin, when loading a community dataset with script, there is a bug in the example of usage of the error message if the dataset has multiple configs (and no default config) and the user does not pass any config. For example: ```python >>> ds = load_dataset("google/fleurs") ValueError: Config name i...

### #6853 · Support soft links for load_datasets imagefolder
Issue (open, label: enhancement) · https://github.com/huggingface/datasets/issues/6853 · id 2,272,570,000
Created 2024-04-30T22:14:29 · updated 2024-04-30T22:14:29 · by billytcl (10386511)

> ### Feature request Load_dataset from a folder of images doesn't seem to support soft links. It would be nice if it did, especially during methods development where image folders are being curated. ### Motivation Images are coming from a complex variety of sources and we'd like to be able to soft link directly from ...

### #6852 · Write token isn't working while pushing to datasets
Issue (closed) · https://github.com/huggingface/datasets/issues/6852 · id 2,272,465,011
Created 2024-04-30T21:18:20 · updated 2024-05-02T00:55:46 · closed 2024-05-02T00:55:46 · by realzai (130903099)

> ### Describe the bug <img width="1001" alt="Screenshot 2024-05-01 at 3 37 06 AM" src="https://github.com/huggingface/datasets/assets/130903099/00fcf12c-fcc1-4749-8592-d263d4efcbcc"> As you can see I logged in to my account and the write token is valid. But I can't upload on my main account and I am getting that ...

### #6851 · load_dataset('emotion') UnicodeDecodeError
Issue (open) · https://github.com/huggingface/datasets/issues/6851 · id 2,270,965,503
Created 2024-04-30T09:25:01 · updated 2024-09-05T03:11:04 · by L-Block-C (32314558)

> ### Describe the bug **emotions = load_dataset('emotion')** _UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte_ ### Steps to reproduce the bug load_dataset('emotion') ### Expected behavior succese ### Environment info py3.10 transformers 4.41.0.dev0 datasets 2....

### #6850 · Problem loading voxpopuli dataset
Issue (closed) · https://github.com/huggingface/datasets/issues/6850 · id 2,269,500,624
Created 2024-04-29T16:46:51 · updated 2024-05-06T09:25:54 · closed 2024-05-06T09:25:54 · by Namangarg110 (40496687)

> ### Describe the bug ``` Exception has occurred: FileNotFoundError Couldn't find file at https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/{'en': 'data/en/asr_train.tsv'} ``` Error in logic for link url creation. The link should be https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/da...
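The malformed URL in #6850, ending in `{'en': 'data/en/asr_train.tsv'}`, is exactly what you get when a whole dict is interpolated into a URL string instead of the per-language value. A hypothetical reconstruction of that bug pattern (the variable names here are illustrative, not the actual loader code):

```python
# Hypothetical reconstruction: interpolating the dict itself embeds its
# repr in the URL, producing the broken link from the report.
base = "https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/"
audio_files = {"en": "data/en/asr_train.tsv"}

broken = f"{base}{audio_files}"        # dict repr ends up in the URL
fixed = f"{base}{audio_files['en']}"   # interpolate the value instead

assert broken.endswith("{'en': 'data/en/asr_train.tsv'}")
assert fixed.endswith("data/en/asr_train.tsv")
```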
### #6849 · fix webdataset filename split
Pull request (closed) · https://github.com/huggingface/datasets/pull/6849 · id 2,268,718,355
Created 2024-04-29T10:57:18 · updated 2024-06-04T12:54:04 · closed 2024-06-04T12:54:04 · by Bowser1704 (43539191)

> use `os.path.splitext` to parse field_name. fix filename which has dot. like: ``` a.b.jpeg a.b.txt ```
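The fix in #6849 relies on `os.path.splitext` splitting only at the last dot, so WebDataset sample keys with dots in them (like `a.b.jpeg`) keep the full basename as the key. A small sketch of the difference against a naive split:

```python
import os.path

# os.path.splitext splits at the last dot only, keeping "a.b" as the key.
for name, expected in [
    ("a.b.jpeg", ("a.b", ".jpeg")),
    ("a.b.txt", ("a.b", ".txt")),
    ("sample.json", ("sample", ".json")),
]:
    assert os.path.splitext(name) == expected

# A naive split at the first dot mis-pairs the files:
assert "a.b.jpeg".split(".", 1) == ["a", "b.jpeg"]  # wrong key "a"
```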
### #6848 · Cant Downlaod Common Voice 17.0 hy-AM
Issue (open) · https://github.com/huggingface/datasets/issues/6848 · id 2,268,622,609
Created 2024-04-29T10:06:02 · updated 2025-04-01T20:48:09 · by mheryerznkanyan (31586104)

> ### Describe the bug I want to download Common Voice 17.0 hy-AM but it returns an error. ``` The version_base parameter is not specified. Please specify a compatability version level, or None. Will assume defaults for version 1.1 @hydra.main(config_name='hfds_config', config_path=None) /usr/local/lib/pyth...

### #6847 · [Streaming] Only load requested splits without resolving files for the other splits
Issue (open) · https://github.com/huggingface/datasets/issues/6847 · id 2,268,589,177
Created 2024-04-29T09:49:32 · updated 2024-05-07T04:43:59 · by lhoestq (42851186)

> e.g. [thangvip](https://huggingface.co/thangvip)/[cosmopedia_vi_math](https://huggingface.co/datasets/thangvip/cosmopedia_vi_math) has 300 splits and it takes a very long time to load only one split. This is due to `load_dataset()` resolving the files of all the splits even if only one is needed. In `dataset-view...

### #6846 · Unimaginable super slow iteration
Issue (closed) · https://github.com/huggingface/datasets/issues/6846 · id 2,267,352,120
Created 2024-04-28T05:24:14 · updated 2024-05-06T08:30:03 · closed 2024-05-06T08:30:03 · by rangehow (88258534)

> ### Describe the bug Assuming there is a dataset with 52000 sentences, each with a length of 500, it takes 20 seconds to extract a sentence from the dataset……?Is there something wrong with my iteration? ### Steps to reproduce the bug ```python import datasets import time import random num_rows = 52000 n...

### #6845 · load_dataset doesn't support list column
Issue (open) · https://github.com/huggingface/datasets/issues/6845 · id 2,265,876,551
Created 2024-04-26T14:11:44 · updated 2024-05-15T12:06:59 · by arthasking123 (16257131)

> ### Describe the bug dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese") got exception: Generating train split: 1834 examples [00:00, 5227.98 examples/s] Traceback (most recent call last): File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 2011, in _prepare_split_single ...

### #6844 · Retry on HF Hub error when streaming
Pull request (closed) · https://github.com/huggingface/datasets/pull/6844 · id 2,265,870,546
Created 2024-04-26T14:09:04 · updated 2024-04-26T15:37:42 · closed 2024-04-26T15:37:42 · by mariosasko (47462742)

> Retry on the `huggingface_hub`'s `HfHubHTTPError` in the streaming mode. Fix #6843

### #6843 · IterableDataset raises exception instead of retrying
Issue (open) · https://github.com/huggingface/datasets/issues/6843 · id 2,265,432,897
Created 2024-04-26T10:00:43 · updated 2024-10-28T14:57:07 · by bauwenst (145220868)

> ### Describe the bug In light of the recent server outages, I decided to look into whether I could somehow wrap my IterableDataset streams to retry rather than error out immediately. To my surprise, `datasets` [already supports retries](https://github.com/huggingface/datasets/issues/6172#issuecomment-1794876229). Si...

### #6842 · Datasets with files with colon : in filenames cannot be used on Windows
Issue (open) · https://github.com/huggingface/datasets/issues/6842 · id 2,264,692,159
Created 2024-04-26T00:14:16 · updated 2024-04-26T00:14:16 · by jacobjennings (1038927)

> ### Describe the bug Datasets (such as https://huggingface.co/datasets/MLCommons/peoples_speech) cannot be used on Windows due to the fact that windows does not allow colons ":" in filenames. These should be converted into alternative strings. ### Steps to reproduce the bug 1. Attempt to run load_dataset on MLCo...
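#6842 suggests converting the characters Windows forbids in filenames "into alternative strings". A minimal sketch of such a sanitizer (a hypothetical helper, not part of `datasets`; the replacement character and the example filename are illustrative assumptions):

```python
import re

# Hypothetical helper (not part of `datasets`): characters Windows
# forbids in filenames, each replaced with an underscore.
_WINDOWS_FORBIDDEN = re.compile(r'[:<>"/\\|?*]')

def sanitize_filename(name: str) -> str:
    """Replace Windows-forbidden filename characters with '_'."""
    return _WINDOWS_FORBIDDEN.sub("_", name)

# Timestamps with colons are a common offender in audio dataset shards.
assert sanitize_filename("audio_2021-07-01T10:30:00.flac") == "audio_2021-07-01T10_30_00.flac"
```

A real fix would also need a stable mapping so the sanitized name can still be matched back to the original archive entry.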
### #6841 · Unable to load wiki_auto_asset_turk from GEM
Issue (closed) · https://github.com/huggingface/datasets/issues/6841 · id 2,264,687,683
Created 2024-04-26T00:08:47 · updated 2024-05-29T13:54:03 · closed 2024-04-26T16:12:29 · by abhinavsethy (23074600)

> ### Describe the bug I am unable to load the wiki_auto_asset_turk dataset. I get a fatal error while trying to access wiki_auto_asset_turk and load it with datasets.load_dataset. The error (TypeError: expected str, bytes or os.PathLike object, not NoneType) is from filenames_for_dataset_split in a os.path.join call ...

### #6840 · Delete uploaded files from the UI
Issue (open, label: enhancement) · https://github.com/huggingface/datasets/issues/6840 · id 2,264,604,766
Created 2024-04-25T22:33:57 · updated 2025-01-21T09:44:22 · by saicharan2804 (62512681)

> ### Feature request Once a file is uploaded and the commit is made, I am unable to delete individual files without completely deleting the whole dataset via the website UI. ### Motivation Would be a useful addition ### Your contribution Would love to help out with some guidance

### #6839 · Remove token arg from CLI examples
Pull request (closed) · https://github.com/huggingface/datasets/pull/6839 · id 2,263,761,062
Created 2024-04-25T14:36:58 · updated 2024-04-26T17:03:51 · closed 2024-04-26T16:57:40 · by albertvillanova (8515462)

> Remove token arg from CLI examples. Fix #6838. CC: @Wauplin

### #6838 · Remove token arg from CLI examples
Issue (closed) · https://github.com/huggingface/datasets/issues/6838 · id 2,263,674,843
Created 2024-04-25T14:00:38 · updated 2024-04-26T16:57:41 · closed 2024-04-26T16:57:41 · by albertvillanova (8515462)

> As suggested by @Wauplin, see: https://github.com/huggingface/datasets/pull/6831#discussion_r1579492603 > I would not advertise the --token arg in the example as this shouldn't be the recommended way (best to login with env variable or huggingface-cli login)

### #6837 · Cannot use cached dataset without Internet connection (or when servers are down)
Issue (open) · https://github.com/huggingface/datasets/issues/6837 · id 2,263,273,983
Created 2024-04-25T10:48:20 · updated 2025-01-25T16:36:41 · by DionisMuzenitov (112088378)

> ### Describe the bug I want to be able to use cached dataset from HuggingFace even when I have no Internet connection (or when HuggingFace servers are down, or my company has network issues). The problem why I can't use it: `data_files` argument from `datasets.load_dataset()` function get it updates from the serve...

### #6836 · ExpectedMoreSplits error on load_dataset when upgrading to 2.19.0
Issue (open) · https://github.com/huggingface/datasets/issues/6836 · id 2,262,249,919
Created 2024-04-24T21:52:35 · updated 2024-05-14T04:08:19 · by ebsmothers (24319399)

> ### Describe the bug Hi there, thanks for the great library! We have been using it a lot in torchtune and it's been a huge help for us. Regarding the bug: the same call to `load_dataset` errors with `ExpectedMoreSplits` in 2.19.0 after working fine in 2.18.0. Full details given in the repro below. ### Steps to re...

### #6835 · Support pyarrow LargeListType
Pull request (closed) · https://github.com/huggingface/datasets/pull/6835 · id 2,261,079,263
Created 2024-04-24T11:34:24 · updated 2024-08-12T14:43:47 · closed 2024-08-12T14:43:47 · by Modexus (37351874)

> Fixes #6834

### #6834 · largelisttype not supported (.from_polars())
Issue (closed) · https://github.com/huggingface/datasets/issues/6834 · id 2,261,078,104
Created 2024-04-24T11:33:43 · updated 2024-08-12T14:43:46 · closed 2024-08-12T14:43:46 · by Modexus (37351874)

> ### Describe the bug The following code fails because LargeListType is not supported. This is especially a problem for .from_polars since polars uses LargeListType. ### Steps to reproduce the bug ```python import datasets import polars as pl df = pl.DataFrame({"list": [[]]}) datasets.Dataset.from_pola...

### #6833 · Super slow iteration with trivial custom transform
Issue (open) · https://github.com/huggingface/datasets/issues/6833 · id 2,259,731,274
Created 2024-04-23T20:40:59 · updated 2024-10-08T15:41:18 · by xslittlegrass (2780075)

> ### Describe the bug Dataset is 10X slower when applying trivial transforms: ``` import time import numpy as np from datasets import Dataset, Features, Array2D a = np.zeros((800, 800)) a = np.stack([a] * 1000) features = Features({"a": Array2D(shape=(800, 800), dtype="uint8")}) ds1 = Dataset.from_dict({"...
### #6832 · Support downloading specific splits in `load_dataset`
Pull request (open) · https://github.com/huggingface/datasets/pull/6832 · id 2,258,761,447
Created 2024-04-23T12:32:27 · updated 2025-07-21T07:49:31 · by mariosasko (47462742)

> This PR builds on https://github.com/huggingface/datasets/pull/6639 to support downloading only the specified splits in `load_dataset`. For this to work, a builder's `_split_generators` need to be able to accept the requested splits (as a list) via a `splits` argument to avoid processing the non-requested ones. Also, t...

### #6831 · Add docs about the CLI
Pull request (closed) · https://github.com/huggingface/datasets/pull/6831 · id 2,258,537,405
Created 2024-04-23T10:41:03 · updated 2024-04-26T16:51:09 · closed 2024-04-25T10:44:10 · by albertvillanova (8515462)

> Add docs about the CLI. Close #6830. CC: @severo

### #6830 · Add a doc page for the convert_to_parquet CLI
Issue (closed, label: documentation) · https://github.com/huggingface/datasets/issues/6830 · id 2,258,433,178
Created 2024-04-23T09:49:04 · updated 2024-04-25T10:44:11 · closed 2024-04-25T10:44:11 · by severo (1676121)

> Follow-up to https://github.com/huggingface/datasets/pull/6795. Useful for https://github.com/huggingface/dataset-viewer/issues/2742. cc @albertvillanova

### #6829 · Load and save from/to disk no longer accept pathlib.Path
Issue (open, label: bug) · https://github.com/huggingface/datasets/issues/6829 · id 2,258,424,577
Created 2024-04-23T09:44:45 · updated 2024-04-23T09:44:46 · by albertvillanova (8515462)

> Reported by @vttrifonov at https://github.com/huggingface/datasets/pull/6704#issuecomment-2071168296: > This change is breaking in > https://github.com/huggingface/datasets/blob/f96e74d5c633cd5435dd526adb4a74631eb05c43/src/datasets/arrow_dataset.py#L1515 > when the input is `pathlib.Path`. The issue is that `url_to...

### #6828 · Support PathLike input in save_to_disk / load_from_disk
Pull request (open) · https://github.com/huggingface/datasets/pull/6828 · id 2,258,420,421
Created 2024-04-23T09:42:38 · updated 2024-04-23T11:05:52 · by lhoestq (42851186)

(no description)

### #6827 · Loading a remote dataset fails in the last release (v2.19.0)
Issue (open) · https://github.com/huggingface/datasets/issues/6827 · id 2,254,011,833
Created 2024-04-19T21:11:58 · updated 2024-04-19T21:13:42 · by zrthxn (35369637)

> While loading a dataset with multiple splits I get an error saying `Couldn't find file at <URL>` I am loading the dataset like so, nothing out of the ordinary. This dataset needs a token to access it. ``` token="hf_myhftoken-sdhbdsjgkhbd" load_dataset("speechcolab/gigaspeech", "test", cache_dir=f"gigaspeech/test...

### #6826 · Set dev version
Pull request (closed) · https://github.com/huggingface/datasets/pull/6826 · id 2,252,445,242
Created 2024-04-19T08:51:42 · updated 2024-04-19T09:05:25 · closed 2024-04-19T08:52:14 · by albertvillanova (8515462)

(no description)

### #6825 · Release: 2.19.0
Pull request (closed) · https://github.com/huggingface/datasets/pull/6825 · id 2,252,404,599
Created 2024-04-19T08:29:02 · updated 2024-05-04T12:23:26 · closed 2024-04-19T08:44:57 · by albertvillanova (8515462)

(no description)

### #6824 · Winogrande does not seem to be compatible with datasets version of 1.18.0
Issue (closed) · https://github.com/huggingface/datasets/issues/6824 · id 2,251,076,197
Created 2024-04-18T16:11:04 · updated 2024-04-19T09:53:15 · closed 2024-04-19T09:52:33 · by spliew (7878204)

> ### Describe the bug I get the following error when simply running `load_dataset('winogrande','winogrande_xl')`. I do not have such an issue in the 1.17.0 version. ```Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line...
### #6823 · Loading problems of Datasets with a single shard
Issue (open) · https://github.com/huggingface/datasets/issues/6823 · id 2,250,775,569
Created 2024-04-18T13:59:00 · updated 2024-11-25T05:40:09 · by andjoer (60151338)

> ### Describe the bug When saving a dataset on disk and it has a single shard it is not loaded as when it is saved in multiple shards. I installed the latest version of datasets via pip. ### Steps to reproduce the bug The code below reproduces the behavior. All works well when the range of the loop is 10000 bu...

### #6822 · Fix parquet export infos
Pull request (closed) · https://github.com/huggingface/datasets/pull/6822 · id 2,250,316,258
Created 2024-04-18T10:21:41 · updated 2024-04-18T11:15:41 · closed 2024-04-18T11:09:13 · by lhoestq (42851186)

> Don't use the parquet export infos when USE_PARQUET_EXPORT is False. Otherwise the `datasets-server` might reuse erroneous data when re-running a job this follows https://github.com/huggingface/datasets/pull/6714

### #6820 · Allow deleting a subset/config from a no-script dataset
Pull request (closed) · https://github.com/huggingface/datasets/pull/6820 · id 2,248,471,673
Created 2024-04-17T14:41:12 · updated 2024-05-02T07:31:03 · closed 2024-04-30T09:44:24 · by albertvillanova (8515462)

> TODO: - [x] Add docs - [x] Delete token arg from CLI example - See: #6839 Close #6810.

### #6819 · Give more details in `DataFilesNotFoundError` when getting the config names
Issue (open, label: enhancement) · https://github.com/huggingface/datasets/issues/6819 · id 2,248,043,797
Created 2024-04-17T11:19:47 · updated 2024-04-17T11:19:47 · by severo (1676121)

> ### Feature request After https://huggingface.co/datasets/cis-lmu/Glot500/commit/39060e01272ff228cc0ce1d31ae53789cacae8c3, the dataset viewer gives the following error: ``` { "error": "Cannot get the config names for the dataset.", "cause_exception": "DataFilesNotFoundError", "cause_message": "No (support...

### #6817 · Support indexable objects in `Dataset.__getitem__`
Pull request (closed) · https://github.com/huggingface/datasets/pull/6817 · id 2,246,578,480
Created 2024-04-16T17:41:27 · updated 2024-04-16T18:27:44 · closed 2024-04-16T18:17:29 · by mariosasko (47462742)

> As discussed in https://github.com/huggingface/datasets/pull/6816, this is needed to support objects that implement `__index__` such as `np.int64` in `Dataset.__getitem__`.

### #6816 · Improve typing of Dataset.search, matching definition
Pull request (closed) · https://github.com/huggingface/datasets/pull/6816 · id 2,246,264,911
Created 2024-04-16T14:53:39 · updated 2024-04-16T15:54:10 · closed 2024-04-16T15:54:10 · by Dref360 (8976546)

> Previously, the output of `score, indices = Dataset.search(...)` would be numpy arrays. The definition in `SearchResult` is a `List[int]` so this PR now matched the expected type. The previous behavior is a bit annoying as `Dataset.__getitem__` doesn't support `numpy.int64` which forced me to convert `indices` to...

### #6815 · Remove `os.path.relpath` in `resolve_patterns`
Pull request (closed) · https://github.com/huggingface/datasets/pull/6815 · id 2,246,197,070
Created 2024-04-16T14:23:13 · updated 2024-04-16T16:06:48 · closed 2024-04-16T15:58:22 · by mariosasko (47462742)

> ... to save a few seconds when resolving repos with many data files.

### #6814 · `map` with `num_proc` > 1 leads to OOM
Issue (open) · https://github.com/huggingface/datasets/issues/6814 · id 2,245,857,902
Created 2024-04-16T11:56:03 · updated 2024-04-19T11:53:41 · by bhavitvyamalik (19718818)

> ### Describe the bug When running `map` on parquet dataset loaded from local machine, the RAM usage increases linearly eventually leading to OOM. I was wondering if I should I save the `cache_file` after every n steps in order to prevent this? ### Steps to reproduce the bug ``` ds = load_dataset("parquet", data...

### #6813 · Add Dataset.take and Dataset.skip
Pull request (closed) · https://github.com/huggingface/datasets/pull/6813 · id 2,245,626,870
Created 2024-04-16T09:53:42 · updated 2024-04-16T14:12:14 · closed 2024-04-16T14:06:07 · by lhoestq (42851186)

> ...to be aligned with IterableDataset.take and IterableDataset.skip
### #6812 · Run CI
Pull request (closed) · https://github.com/huggingface/datasets/pull/6812 · id 2,244,898,824
Created 2024-04-16T01:12:36 · updated 2024-04-16T01:14:16 · closed 2024-04-16T01:12:41 · by charliermarsh (1309177)

(no description)

### #6811 · add allow_primitive_to_str and allow_decimal_to_str instead of allow_number_to_str
Pull request (closed) · https://github.com/huggingface/datasets/pull/6811 · id 2,243,656,096
Created 2024-04-15T13:14:38 · updated 2024-07-03T14:59:42 · closed 2024-04-16T17:03:17 · by Modexus (37351874)

> Fix #6805

### #6810 · Allow deleting a subset/config from a no-script dataset
Issue (closed, label: enhancement) · https://github.com/huggingface/datasets/issues/6810 · id 2,242,968,745
Created 2024-04-15T07:53:26 · updated 2025-01-11T18:40:40 · closed 2024-04-30T09:44:25 · by albertvillanova (8515462)

> As proposed by @BramVanroy, it would be neat to have this functionality through the API.

### #6809 · Make convert_to_parquet CLI command create script branch
Pull request (closed) · https://github.com/huggingface/datasets/pull/6809 · id 2,242,956,297
Created 2024-04-15T07:47:26 · updated 2024-04-17T08:44:26 · closed 2024-04-17T08:38:18 · by albertvillanova (8515462)

> Make convert_to_parquet CLI command create a "script" branch and keep the script file on it. This PR proposes the simplest UX approach: whenever `--revision` is not explicitly passed (i.e., when the script is in the main branch), try to create a "script" branch from the "main" branch; if the "script" branch exists a...

### #6808 · Make convert_to_parquet CLI command create script branch
Issue (closed, label: enhancement) · https://github.com/huggingface/datasets/issues/6808 · id 2,242,843,611
Created 2024-04-15T06:46:07 · updated 2024-04-17T08:38:19 · closed 2024-04-17T08:38:19 · by albertvillanova (8515462)

> As proposed by @severo, maybe we should add this functionality as well to the CLI command to convert a script-dataset to Parquet. See: https://github.com/huggingface/datasets/pull/6795#discussion_r1562819168 > When providing support, we sometimes suggest that users store their script in a script branch. What do you th...

### #6806 · Fix hf-internal-testing/dataset_with_script commit SHA in CI test
Pull request (closed) · https://github.com/huggingface/datasets/pull/6806 · id 2,239,435,074
Created 2024-04-12T08:47:50 · updated 2024-04-12T09:08:23 · closed 2024-04-12T09:02:12 · by albertvillanova (8515462)

> Fix test using latest commit SHA in hf-internal-testing/dataset_with_script dataset: https://huggingface.co/datasets/hf-internal-testing/dataset_with_script/commits/refs%2Fconvert%2Fparquet Fix #6796.

### #6805 · Batched mapping of existing string column casts boolean to string
Issue (closed) · https://github.com/huggingface/datasets/issues/6805 · id 2,239,034,951
Created 2024-04-12T04:21:41 · updated 2024-07-03T15:00:07 · closed 2024-07-03T15:00:07 · by starmpcc (46891489)

> ### Describe the bug Let the dataset contain a column named 'a', which is of the string type. If 'a' is converted to a boolean using batched mapping, the mapper automatically casts the boolean to a string (e.g., True -> 'true'). It only happens when the original column and the mapped column name are identical. Th...

### #6804 · Fix --repo-type order in cli upload docs
Pull request (closed) · https://github.com/huggingface/datasets/pull/6804 · id 2,238,035,124
Created 2024-04-11T15:39:09 · updated 2024-04-11T16:24:57 · closed 2024-04-11T16:18:47 · by lhoestq (42851186)

(no description)

### #6803 · #6791 Improve type checking around FAISS
Pull request (closed) · https://github.com/huggingface/datasets/pull/6803 · id 2,237,933,090
Created 2024-04-11T14:54:30 · updated 2024-04-11T15:44:09 · closed 2024-04-11T15:38:04 · by Dref360 (8976546)

> Fixes #6791 Small PR to raise a better error when a dataset is not embedded properly.
### #6802 · Fix typo in docs (upload CLI)
Pull request (closed) · https://github.com/huggingface/datasets/pull/6802 · id 2,237,365,489
Created 2024-04-11T10:05:05 · updated 2024-04-11T16:19:00 · closed 2024-04-11T13:19:43 · by Wauplin (11801849)

> Related to https://huggingface.slack.com/archives/C04RG8YRVB8/p1712643948574129 (interal) Positional args must be placed before optional args. Feel free to merge whenever it's ready.

### #6801 · got fileNotFound
Issue (closed) · https://github.com/huggingface/datasets/issues/6801 · id 2,236,911,556
Created 2024-04-11T04:57:41 · updated 2024-04-12T16:47:43 · closed 2024-04-12T16:47:43 · by laoniandisko (93729155)

> ### Describe the bug When I use load_dataset to load the nyanko7/danbooru2023 data set, the cache is read in the form of a symlink. There may be a problem with the arrow_dataset initialization process and I get FileNotFoundError: [Errno 2] No such file or directory: '2945000.jpg' ### Steps to reproduce the bug #code...

### #6800 · High overhead when loading lots of subsets from the same dataset
Issue (open) · https://github.com/huggingface/datasets/issues/6800 · id 2,236,431,288
Created 2024-04-10T21:08:57 · updated 2024-04-24T13:48:05 · by loicmagne (53355258)

> ### Describe the bug I have a multilingual dataset that contains a lot of subsets. Each subset corresponds to a pair of languages, you can see here an example with 250 subsets: [https://hf.co/datasets/loicmagne/open-subtitles-250-bitext-mining](). As part of the MTEB benchmark, we may need to load all the subsets of t...

### #6799 · fix `DatasetBuilder._split_generators` incomplete type annotation
Pull request (closed) · https://github.com/huggingface/datasets/pull/6799 · id 2,236,124,531
Created 2024-04-10T17:46:08 · updated 2024-04-11T15:41:06 · closed 2024-04-11T15:34:58 · by JonasLoos (33965649)

> solve #6798: add missing `StreamingDownloadManager` type annotation to the `dl_manager` argument of the `DatasetBuilder._split_generators` function

### #6798 · `DatasetBuilder._split_generators` incomplete type annotation
Issue (closed) · https://github.com/huggingface/datasets/issues/6798 · id 2,235,768,891
Created 2024-04-10T14:38:50 · updated 2024-04-11T15:34:59 · closed 2024-04-11T15:34:59 · by JonasLoos (33965649)

> ### Describe the bug The [`DatasetBuilder._split_generators`](https://github.com/huggingface/datasets/blob/0f27d7b77c73412cfc50b24354bfd7a3e838202f/src/datasets/builder.py#L1449) function has currently the following signature: ```python class DatasetBuilder: def _split_generators(self, dl_manager: DownloadMan...

### #6797 · Fix CI test_load_dataset_distributed_with_script
Pull request (closed) · https://github.com/huggingface/datasets/pull/6797 · id 2,234,890,097
Created 2024-04-10T06:57:48 · updated 2024-04-10T08:25:00 · closed 2024-04-10T08:18:01 · by albertvillanova (8515462)

> Fix #6796.

### #6796 · CI is broken due to hf-internal-testing/dataset_with_script
Issue (closed, label: bug) · https://github.com/huggingface/datasets/issues/6796 · id 2,234,887,618
Created 2024-04-10T06:56:02 · updated 2024-04-12T09:02:13 · closed 2024-04-12T09:02:13 · by albertvillanova (8515462)

> CI is broken for test_load_dataset_distributed_with_script. See: https://github.com/huggingface/datasets/actions/runs/8614926216/job/23609378127 ``` FAILED tests/test_load.py::test_load_dataset_distributed_with_script[None] - assert False + where False = all(<generator object test_load_dataset_distributed_with_scr...

### #6795 · Add CLI function to convert script-dataset to Parquet
Pull request (closed) · https://github.com/huggingface/datasets/pull/6795 · id 2,233,618,719
Created 2024-04-09T14:45:12 · updated 2024-04-17T08:41:23 · closed 2024-04-12T15:27:04 · by albertvillanova (8515462)

> Close #6690.

### #6794 · Multithreaded downloads
Pull request (closed) · https://github.com/huggingface/datasets/pull/6794 · id 2,233,202,088
Created 2024-04-09T11:13:19 · updated 2024-04-15T21:24:13 · closed 2024-04-15T21:18:08 · by lhoestq (42851186)

> ...for faster dataset download when there are many many small files (e.g. imagefolder, audiofolder) ### Behcnmark for example on [lhoestq/tmp-images-writer_batch_size](https://hf.co/datasets/lhoestq/tmp-images-writer_batch_size) (128 images) | | duration of the download step in `load_dataset()` | |--| ----...
2,231,400,200
6,793
Loading just one particular split is not possible for imagenet-1k
### Describe the bug I'd expect the following code to download just the validation split but instead I get all data on my disk (train, test and validation splits) ` from datasets import load_dataset dataset = load_dataset("imagenet-1k", split="validation", trust_remote_code=True) ` Is it expected to work li...
open
https://github.com/huggingface/datasets/issues/6793
2024-04-08T14:39:14
2025-06-23T09:55:08
null
{ "login": "PaulPSta", "id": 165930106, "type": "User" }
[]
false
[]
2,231,318,682
6,792
Fix cache conflict in `_check_legacy_cache2`
It was reloading from the wrong cache dir because of a bug in `_check_legacy_cache2`. This function should not trigger if there are config_kwars like `sample_by=` fix https://github.com/huggingface/datasets/issues/6758
closed
https://github.com/huggingface/datasets/pull/6792
2024-04-08T14:05:42
2024-04-09T11:34:08
2024-04-09T11:27:58
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,230,102,332
6,791
`add_faiss_index` raises ValueError: not enough values to unpack (expected 2, got 1)
### Describe the bug Calling `add_faiss_index` on a `Dataset` with a column argument raises a ValueError. The following is the trace ```python 214 def replacement_add(self, x): 215 """Adds vectors to the index. 216 The index must be trained before vectors can be added to it. 217 Th...
closed
https://github.com/huggingface/datasets/issues/6791
2024-04-08T01:57:03
2024-04-11T15:38:05
2024-04-11T15:38:05
{ "login": "NeuralFlux", "id": 40491005, "type": "User" }
[]
false
[]
2,229,915,236
6,790
PyArrow 'Memory mapping file failed: Cannot allocate memory' bug
### Describe the bug Hello, I've been struggling with a problem using Huggingface datasets caused by PyArrow memory allocation. I finally managed to solve it, and thought to document it since similar issues have been raised here before (https://github.com/huggingface/datasets/issues/5710, https://github.com/huggi...
open
https://github.com/huggingface/datasets/issues/6790
2024-04-07T19:25:39
2025-06-12T07:31:44
null
{ "login": "lasuomela", "id": 25725697, "type": "User" }
[]
false
[]
2,229,527,001
6,789
Issue with map
### Describe the bug Map has been taking extremely long to preprocess my data. It seems to process 1000 examples (which it does really fast in about 10 seconds), then it hangs for a good 1-2 minutes, before it moves on to the next batch of 1000 examples. It also keeps eating up my hard drive space for some reaso...
open
https://github.com/huggingface/datasets/issues/6789
2024-04-07T02:52:06
2024-07-23T12:41:38
null
{ "login": "Nsohko", "id": 102672238, "type": "User" }
[]
false
[]
2,229,207,521
6,788
A Question About the Map Function
### Describe the bug Hello, I have a question regarding the map function in the Hugging Face datasets. The situation is as follows: when I load a jsonl file using load_dataset(..., streaming=False), and then utilize the map function to process it, I specify that the returned example should be of type Torch.ten...
closed
https://github.com/huggingface/datasets/issues/6788
2024-04-06T11:45:23
2024-04-11T05:29:35
2024-04-11T05:29:35
{ "login": "codeprompter", "id": 87431052, "type": "User" }
[]
false
[]
2,229,103,264
6,787
TimeoutError in map
### Describe the bug ```python from datasets import Dataset def worker(example): while True: continue example['a'] = 100 return example data = Dataset.from_list([{"a": 1}, {"a": 2}]) data = data.map(worker) print(data[0]) ``` I'm implementing a worker function whose runtime will de...
open
https://github.com/huggingface/datasets/issues/6787
2024-04-06T06:25:39
2024-08-14T02:09:57
null
{ "login": "Jiaxin-Wen", "id": 48146603, "type": "User" }
[]
false
[]
2,228,463,776
6,786
Make Image cast storage faster
PR for issue #6782. Makes `cast_storage` of the `Image` class faster by removing the slow call to `.pylist`. Instead directly convert each `ListArray` item to either `Array2DExtensionType` or `Array3DExtensionType`. This also preserves the `dtype` removing the warning if the array is already `uint8`.
open
https://github.com/huggingface/datasets/pull/6786
2024-04-05T17:00:46
2024-10-01T09:09:14
null
{ "login": "Modexus", "id": 37351874, "type": "User" }
[]
true
[]
2,228,429,852
6,785
rename datasets-server to dataset-viewer
See https://github.com/huggingface/dataset-viewer/issues/2650 Tell me if it's OK, or if it's a breaking change that must be handled differently. Also note that the docs page is still https://huggingface.co/docs/datasets-server/, so I didn't change it. And the API URL is still https://datasets-server.huggingfac...
closed
https://github.com/huggingface/datasets/pull/6785
2024-04-05T16:37:05
2024-04-08T12:41:13
2024-04-08T12:35:02
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
2,228,390,504
6,784
Extract data on the fly in packaged builders
Instead of waiting for data files to be extracted in the packaged builders, we can prepend the compression prefix and extract them as they are being read (using `fsspec`). This saves disk space (deleting extracted archives is not set by default) and slightly speeds up dataset generation (less disk reads)
closed
https://github.com/huggingface/datasets/pull/6784
2024-04-05T16:12:25
2024-04-16T16:37:47
2024-04-16T16:31:29
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
2,228,179,466
6,783
AttributeError: module 'numpy' has no attribute 'object'. in Kaggle Notebook
### Describe the bug # problem I can't resample audio dataset in Kaggle Notebook. It looks like some code in `datasets` library use aliases that were deprecated in NumPy 1.20. ## code for resampling ``` from datasets import load_dataset, Audio from transformers import AutoFeatureExtractor from transformers imp...
closed
https://github.com/huggingface/datasets/issues/6783
2024-04-05T14:31:48
2024-04-11T17:18:53
2024-04-11T17:18:53
{ "login": "petrov826", "id": 26062262, "type": "User" }
[]
false
[]
2,228,081,955
6,782
Image cast_storage very slow for arrays (e.g. numpy, tensors)
Update: see comments below ### Describe the bug Operations that save an image from a path are very slow. I believe the reason for this is that the image data (`numpy`) is converted into `pyarrow` format but then back to python using `.pylist()` before being converted to a numpy array again. `pylist` is alread...
open
https://github.com/huggingface/datasets/issues/6782
2024-04-05T13:46:54
2024-04-10T14:36:13
null
{ "login": "Modexus", "id": 37351874, "type": "User" }
[]
false
[]
2,228,026,497
6,781
Remove get_inferred_type from ArrowWriter write_batch
Inferring the type seems to be unnecessary given that the pyarrow array has already been created. Because pyarrow array creation is sometimes extremely slow this doubles the time write_batch takes.
closed
https://github.com/huggingface/datasets/pull/6781
2024-04-05T13:21:05
2024-04-09T07:49:11
2024-04-09T07:49:11
{ "login": "Modexus", "id": 37351874, "type": "User" }
[]
true
[]
2,226,160,096
6,780
Fix CI
Updates the `wmt_t2t` test to pin the `revision` to the version with a loading script (cc @albertvillanova). Additionally, it replaces the occurrences of the `lhoestq/test` repo id with `hf-internal-testing/dataset_with_script` and re-enables logging checks in the `Dataset.from_sql` tests.
closed
https://github.com/huggingface/datasets/pull/6780
2024-04-04T17:45:04
2024-04-04T18:46:04
2024-04-04T18:23:34
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
2,226,075,551
6,779
Install dependencies with `uv` in CI
`diffusers` (https://github.com/huggingface/diffusers/pull/7116) and `huggingface_hub` (https://github.com/huggingface/huggingface_hub/pull/2072) also use `uv` to install their dependencies, so we can do the same here. It seems to make the "Install dependencies" step in the `ubuntu` jobs 5-8x faster and 1.5-2x in th...
closed
https://github.com/huggingface/datasets/pull/6779
2024-04-04T17:02:51
2024-04-08T13:34:01
2024-04-08T13:27:44
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
2,226,040,636
6,778
Dataset.to_csv() missing commas in columns with lists
### Describe the bug The `to_csv()` method does not output commas in lists. So when the Dataset is loaded back in the data structure of the column with a list is not correct. Here's an example: Obviously, it's not as trivial as inserting commas in the list, since its a comma-separated file. But hopefully there...
open
https://github.com/huggingface/datasets/issues/6778
2024-04-04T16:46:13
2024-04-08T15:24:41
null
{ "login": "mpickard-dataprof", "id": 100041276, "type": "User" }
[]
false
[]
2,224,611,247
6,777
.Jsonl metadata not detected
### Describe the bug Hi I have the following directory structure: |--dataset | |-- images | |-- metadata1000.csv | |-- metadata1000.jsonl | |-- padded_images Example of metadata1000.jsonl file {"caption": "a drawing depicts a full shot of a black t-shirt with a triangular pattern on the front there is a white...
open
https://github.com/huggingface/datasets/issues/6777
2024-04-04T06:31:53
2024-04-05T21:14:48
null
{ "login": "nighting0le01", "id": 81643693, "type": "User" }
[]
false
[]
2,223,457,792
6,775
IndexError: Invalid key: 0 is out of bounds for size 0
### Describe the bug I am trying to fine-tune llama2-7b model in GCP. The notebook I am using for this can be found [here](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_llama2_peft_finetuning.ipynb). When I use the dataset given in the exa...
open
https://github.com/huggingface/datasets/issues/6775
2024-04-03T17:06:30
2024-04-08T01:24:35
null
{ "login": "kk2491", "id": 38481564, "type": "User" }
[]
false
[]
2,222,164,316
6,774
Generating split is very slow when Image format is PNG
### Describe the bug When I create a dataset, it gets stuck while generating cached data. The image format is PNG, and it will not get stuck when the image format is jpeg. ![image](https://github.com/huggingface/datasets/assets/22740819/3b888fd8-e6d6-488f-b828-95a8f206a152) After debugging, I know that it is b...
open
https://github.com/huggingface/datasets/issues/6774
2024-04-03T07:47:31
2024-04-10T17:28:17
null
{ "login": "Tramac", "id": 22740819, "type": "User" }
[]
false
[]
2,221,049,121
6,773
Dataset on Hub re-downloads every time?
### Describe the bug Hi, I have a dataset on the hub [here](https://huggingface.co/datasets/manestay/borderlines). It has 1k+ downloads, which I sure is mostly just me and my colleagues working with it. It should have far fewer, since I'm using the same machine with a properly set up HF_HOME variable. However, whene...
closed
https://github.com/huggingface/datasets/issues/6773
2024-04-02T17:23:22
2024-04-08T18:43:45
2024-04-08T18:43:45
{ "login": "manestay", "id": 9099139, "type": "User" }
[]
false
[]
2,220,851,533
6,772
`remove_columns`/`rename_columns` doc fixes
Use more consistent wording in `remove_columns` to explain why it's faster than `map` and update `remove_columns`/`rename_columns` docstrings to fix in-place calls. Reported in https://github.com/huggingface/datasets/issues/6700
closed
https://github.com/huggingface/datasets/pull/6772
2024-04-02T15:41:28
2024-04-02T16:28:45
2024-04-02T16:17:46
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
2,220,131,457
6,771
Datasets FileNotFoundError when trying to generate examples.
### Discussed in https://github.com/huggingface/datasets/discussions/6768 <div type='discussions-op-text'> <sup>Originally posted by **RitchieP** April 1, 2024</sup> Currently, I have a dataset hosted on Huggingface with a custom script [here](https://huggingface.co/datasets/RitchieP/VerbaLex_voice). I'm loa...
closed
https://github.com/huggingface/datasets/issues/6771
2024-04-02T10:24:57
2024-04-04T14:22:03
2024-04-04T14:22:03
{ "login": "RitchieP", "id": 26197115, "type": "User" }
[]
false
[]
2,218,991,883
6,770
[Bug Report] `datasets==2.18.0` is not compatible with `fsspec==2023.12.2`
### Describe the bug `Datasets==2.18.0` is not compatible with `fsspec==2023.12.2`. I have to downgrade fsspec to `fsspec==2023.10.0` to make `Datasets==2.18.0` work properly. ### Steps to reproduce the bug To reproduce the bug: 1. Make sure that `Datasets==2.18.0` and `fsspec==2023.12.2`. 2. Run the following ...
closed
https://github.com/huggingface/datasets/issues/6770
2024-04-01T20:17:48
2024-04-11T17:31:44
2024-04-11T17:31:44
{ "login": "fshp971", "id": 19348888, "type": "User" }
[]
false
[]
2,218,242,015
6,769
(Willing to PR) Datasets with custom python objects
### Feature request Hi thanks for the library! I would like to have a huggingface Dataset, and one of its column is custom (non-serializable) Python objects. For example, a minimal code: ``` class MyClass: pass dataset = datasets.Dataset.from_list([ dict(a=MyClass(), b='hello'), ]) ``` It gives...
open
https://github.com/huggingface/datasets/issues/6769
2024-04-01T13:18:47
2024-04-01T13:36:58
null
{ "login": "fzyzcjy", "id": 5236035, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,217,065,412
6,767
fixing the issue 6755(small typo)
Fixed the issue #6755 on the typo mistake
closed
https://github.com/huggingface/datasets/pull/6767
2024-03-31T16:13:37
2024-04-02T14:14:02
2024-04-02T14:01:18
{ "login": "JINO-ROHIT", "id": 63234112, "type": "User" }
[]
true
[]
2,215,933,515
6,765
Compatibility issue between s3fs, fsspec, and datasets
### Describe the bug Here is the full error stack when installing: ``` ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. datasets 2.18.0 requires fsspec[http]<=2024.2.0,>=2023.1.0, but you ...
closed
https://github.com/huggingface/datasets/issues/6765
2024-03-29T19:57:24
2024-11-12T14:50:48
2024-04-03T14:33:12
{ "login": "njbrake", "id": 33383515, "type": "User" }
[]
false
[]
2,215,767,119
6,764
load_dataset can't work with symbolic links
### Feature request Enable the `load_dataset` function to load local datasets with symbolic links. E.g, this dataset can be loaded: ├── example_dataset/ │ ├── data/ │ │ ├── train/ │ │ │ ├── file0 │ │ │ ├── file1 │ │ ├── dev/ │ │ │ ├── file2 │ │ │ ├── file3 │ ├── metad...
open
https://github.com/huggingface/datasets/issues/6764
2024-03-29T17:49:28
2025-04-29T15:06:28
null
{ "login": "VladimirVincan", "id": 13640533, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,213,440,804
6,763
Fix issue with case sensitivity when loading dataset from local cache
When a dataset with upper-cases in its name is first loaded using `load_dataset()`, the local cache directory is created with all lowercase letters. However, upon subsequent loads, the current version attempts to locate the cache directory using the dataset's original name, which includes uppercase letters. This di...
open
https://github.com/huggingface/datasets/pull/6763
2024-03-28T14:52:35
2024-04-20T12:16:45
null
{ "login": "Sumsky21", "id": 58537872, "type": "User" }
[]
true
[]
2,213,275,468
6,762
Allow polars as valid output type
I was trying out polars as an output for a map function and found that it wasn't a valid return type in `validate_function_output`. Thought that we should accommodate this by creating and adding it to the `allowed_processed_input_types` variable.
closed
https://github.com/huggingface/datasets/pull/6762
2024-03-28T13:40:28
2024-08-16T15:54:37
2024-08-16T13:10:37
{ "login": "psmyth94", "id": 11325244, "type": "User" }
[]
true
[]
2,212,805,108
6,761
Remove deprecated code
What does this PR do? 1. remove `list_files_info` in favor of `list_repo_tree`. As of `0.23`, `list_files_info` will be removed for good. `datasets` had a utility to support both pre-0.20 and post-0.20 versions. Since `hfh` version is already pinned to `>=0.21.2`, I removed the legacy part. 2. `preupload_lfs_files` h...
closed
https://github.com/huggingface/datasets/pull/6761
2024-03-28T09:57:57
2024-03-29T13:27:26
2024-03-29T13:18:13
{ "login": "Wauplin", "id": 11801849, "type": "User" }
[]
true
[]
2,212,288,122
6,760
Load codeparrot/apps raising UnicodeDecodeError in datasets-2.18.0
### Describe the bug This happens with datasets-2.18.0; I downgraded the version to 2.14.6 fixing this temporarily. ``` Traceback (most recent call last): File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 2556, in load_dataset builder_instance = load_dataset_builder...
open
https://github.com/huggingface/datasets/issues/6760
2024-03-28T03:44:26
2024-06-19T07:06:40
null
{ "login": "yucc-leon", "id": 17897916, "type": "User" }
[]
false
[]