Dataset columns (dtype and value ranges):

| column | dtype | min | max |
|---|---|---|---|
| id | int64 | 599M | 3.26B |
| number | int64 | 1 | 7.7k |
| title | string (length) | 1 | 290 |
| body | string (length) | 0 | 228k |
| state | string (2 classes) | | |
| html_url | string (length) | 46 | 51 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-07-23 08:04:53 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-07-23 18:53:44 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-07-23 16:44:42 |
| user | dict | | |
| labels | list (length) | 0 | 4 |
| is_pull_request | bool (2 classes) | | |
| comments | list (length) | 0 | 0 |
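The schema above can be sketched as a plain Python record. Field names and dtypes follow the column listing; the sample values below are illustrative, not taken from any specific row.

```python
# One issue record shaped like the schema above (values are illustrative).
issue = {
    "id": 2_063_839_916,            # int64
    "number": 6554,                 # int64
    "title": "Example issue title", # string, length 1-290
    "body": "Example issue body",   # string, length 0-228k
    "state": "closed",              # one of 2 classes: "open" / "closed"
    "html_url": "https://github.com/huggingface/datasets/issues/6554",
    "created_at": "2024-01-03T11:32:26",  # timestamp[s]
    "updated_at": "2024-02-02T10:35:29",
    "closed_at": "2024-02-02T10:35:29",   # null for open issues
    "user": {"login": "example-user", "id": 1, "type": "User"},
    "labels": [{"name": "bug", "color": "d73a4a"}],  # list, length 0-4
    "is_pull_request": False,       # bool
    "comments": [],                 # always empty in this export (length 0-0)
}

# The html_url length range (46-51) follows from the issue-number width.
assert 46 <= len(issue["html_url"]) <= 51
assert issue["state"] in {"open", "closed"}
```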
2,063,839,916
6,554
Parquet exports are used even if revision is passed
We should not use Parquet exports if `revision` is passed. I think this is a regression.
closed
https://github.com/huggingface/datasets/issues/6554
2024-01-03T11:32:26
2024-02-02T10:35:29
2024-02-02T10:35:29
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
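The guard this issue calls for can be sketched in isolation. The function name and signature below are hypothetical illustrations, not `datasets` internals:

```python
def should_use_parquet_export(revision=None, config_kwargs=None):
    """A Parquet export only mirrors the default revision with default
    config kwargs, so any override must fall back to the original data."""
    return revision is None and not config_kwargs

# Passing a revision (or custom config kwargs) must disable the export path.
assert should_use_parquet_export() is True
assert should_use_parquet_export(revision="main") is False
assert should_use_parquet_export(config_kwargs={"language": "en"}) is False
```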
2,063,474,183
6,553
Cannot import name 'load_dataset' from .... module ‘datasets’
### Describe the bug Installed with `python -m pip install datasets`. ### Steps to reproduce the bug `from datasets import load_dataset` ### Expected behavior The import should succeed, but it doesn't work. ### Environment info datasets version == 2.15.0, python == 3.10.12, linux (version unknown)
closed
https://github.com/huggingface/datasets/issues/6553
2024-01-03T08:18:21
2024-02-21T00:38:24
2024-02-21T00:38:24
{ "login": "ciaoyizhen", "id": 83450192, "type": "User" }
[]
false
[]
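A common cause of this kind of import error (an assumption here, not stated in the report) is a local file or folder named `datasets` shadowing the installed package. A stdlib-only check for that sort of shadowing, demonstrated on a standard-library module:

```python
import importlib.util
import os

def is_shadowed_by_cwd(module_name: str) -> bool:
    """Heuristic: does `module_name` resolve to a file in the current
    working directory instead of site-packages or the stdlib?"""
    spec = importlib.util.find_spec(module_name)
    if spec is None or spec.origin is None:
        return False
    return os.path.dirname(os.path.abspath(spec.origin)) == os.getcwd()

# A stdlib module resolves from the standard library, not the cwd.
assert is_shadowed_by_cwd("json") is False
```

If `is_shadowed_by_cwd("datasets")` were True in the reporter's environment, renaming the local `datasets.py`/`datasets/` would likely fix the import.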
2,063,157,187
6,552
Loading a dataset from Google Colab hangs at "Resolving data files".
### Describe the bug Hello, I'm trying to load a dataset from Google Colab but the process hangs at `Resolving data files`: ![image](https://github.com/huggingface/datasets/assets/99779/7175ad85-e571-46ed-9f87-92653985777d) It is happening when the `_get_origin_metadata` definition is invoked: ```python d...
closed
https://github.com/huggingface/datasets/issues/6552
2024-01-03T02:18:17
2024-01-08T10:09:04
2024-01-08T10:09:04
{ "login": "KelSolaar", "id": 99779, "type": "User" }
[]
false
[]
2,062,768,400
6,551
Fix parallel downloads for datasets without scripts
Enable parallel downloads using multiprocessing when `num_proc` is passed to `load_dataset`. It was enabled for datasets with scripts already (if they passed lists to `dl_manager.download`) but not for no-script datasets (we pass dicts {split: [list of files]} to `dl_manager.download` for those ones). I fixed thi...
closed
https://github.com/huggingface/datasets/pull/6551
2024-01-02T18:06:18
2024-01-06T20:14:57
2024-01-03T13:19:48
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,062,556,493
6,550
Multi gpu docs
after discussions in https://github.com/huggingface/datasets/pull/6415
closed
https://github.com/huggingface/datasets/pull/6550
2024-01-02T15:11:58
2024-01-31T13:45:15
2024-01-31T13:38:59
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,062,420,259
6,549
Loading from hf hub with clearer error message
### Feature request Shouldn't this kinda work ? ``` Dataset.from_json("hf://datasets/HuggingFaceTB/eval_data/resolve/main/eval_data_context_and_answers.json") ``` I got an error ``` File ~/miniconda3/envs/datatrove/lib/python3.10/site-packages/datasets/data_files.py:380, in resolve_pattern(pattern, base_path, al...
open
https://github.com/huggingface/datasets/issues/6549
2024-01-02T13:26:34
2024-01-02T14:06:49
null
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,061,047,984
6,548
Skip if a dataset has issues
### Describe the bug Hello everyone, I'm using **load_datasets** from **huggingface** to download the datasets and I'm facing an issue, the download starts but it reaches some state and then fails with the following error: Couldn't reach https://huggingface.co/datasets/wikimedia/wikipedia/resolve/4cb9b0d719291f1a10...
open
https://github.com/huggingface/datasets/issues/6548
2023-12-31T12:41:26
2024-01-02T10:33:17
null
{ "login": "hadianasliwa", "id": 143214684, "type": "User" }
[]
false
[]
2,060,796,927
6,547
set dev version
null
closed
https://github.com/huggingface/datasets/pull/6547
2023-12-30T16:47:17
2023-12-30T16:53:38
2023-12-30T16:47:27
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,060,796,369
6,546
Release: 2.16.1
null
closed
https://github.com/huggingface/datasets/pull/6546
2023-12-30T16:44:51
2023-12-30T16:52:07
2023-12-30T16:45:52
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,060,789,507
6,545
`image` column not automatically inferred if image dataset only contains 1 image
### Describe the bug By default, the standard Image Dataset maps out `file_name` to `image` when loading an Image Dataset. However, if the dataset contains only 1 image, this does not take place ### Steps to reproduce the bug Input (dataset with one image `multimodalart/repro_1_image`) ```py from data...
closed
https://github.com/huggingface/datasets/issues/6545
2023-12-30T16:17:29
2024-01-09T13:06:31
2024-01-09T13:06:31
{ "login": "apolinario", "id": 788417, "type": "User" }
[]
false
[]
2,060,782,594
6,544
Fix custom configs from script
We should not use the parquet export when the user is passing config_kwargs I also fixed a regression that would disallow creating a custom config when a dataset has multiple predefined configs fix https://github.com/huggingface/datasets/issues/6533
closed
https://github.com/huggingface/datasets/pull/6544
2023-12-30T15:51:25
2024-01-02T11:02:39
2023-12-30T16:09:49
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,060,776,174
6,543
Fix dl_manager.extract returning FileNotFoundError
The dl_manager base path is remote (e.g. a hf:// path), so local cached paths should be passed as absolute paths. This could happen if users provide a relative path as `cache_dir` fix https://github.com/huggingface/datasets/issues/6536
closed
https://github.com/huggingface/datasets/pull/6543
2023-12-30T15:24:50
2023-12-30T16:00:06
2023-12-30T15:53:59
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
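The idea behind this fix, sketched independently of the library (the helper name is hypothetical): resolve any user-supplied relative `cache_dir` to an absolute path before it is ever joined against a remote (e.g. hf://) base path.

```python
import os

def resolve_cache_dir(cache_dir):
    """Normalize a user-supplied cache_dir to an absolute local path so it
    stays valid when the download manager's base path is remote."""
    if cache_dir is None:
        return None
    return os.path.abspath(os.path.expanduser(cache_dir))

assert os.path.isabs(resolve_cache_dir("relative/cache"))
assert resolve_cache_dir(None) is None
```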
2,059,198,575
6,542
Datasets : wikipedia 20220301.en error
### Describe the bug When I used load_dataset to download this dataset, the following error occurred. The main problem was that the target data did not exist. ### Steps to reproduce the bug 1. I tried downloading directly. ```python wiki_dataset = load_dataset("wikipedia", "20220301.en") ``` An exception occurre...
closed
https://github.com/huggingface/datasets/issues/6542
2023-12-29T08:34:51
2024-01-02T13:21:06
2024-01-02T13:20:30
{ "login": "ppx666", "id": 53203620, "type": "User" }
[]
false
[]
2,058,983,826
6,541
Dataset not loading successfully.
### Describe the bug When I run the code below, it shows this error: AttributeError: module 'numpy' has no attribute '_no_nep50_warning' I also added this issue in the transformers library, please check out: [link](https://github.com/huggingface/transformers/issues/28099) ### Steps to reproduce the bug ## Reproduction ...
closed
https://github.com/huggingface/datasets/issues/6541
2023-12-29T01:35:47
2024-01-17T00:40:46
2024-01-17T00:40:45
{ "login": "hisushanta", "id": 93595990, "type": "User" }
[]
false
[]
2,058,965,157
6,540
Extreme inefficiency for `save_to_disk` when merging datasets
### Describe the bug Hi, I tried to merge in total 22M sequences of data, where each sequence is of maximum length 2000. I found that merging these datasets and then `save_to_disk` is extremely slow because of flattening the indices. Wondering if you have any suggestions or guidance on this. Thank you very much! ###...
open
https://github.com/huggingface/datasets/issues/6540
2023-12-29T00:44:35
2023-12-30T15:05:48
null
{ "login": "KatarinaYuan", "id": 43512683, "type": "User" }
[]
false
[]
2,058,493,960
6,539
'Repo card metadata block was not found' when loading a pragmeval dataset
### Describe the bug I can't load dataset subsets of 'pragmeval'. The funny thing is I ran the dataset author's [colab notebook](https://colab.research.google.com/drive/1sg--LF4z7XR1wxAOfp0-3d4J6kQ9nj_A?usp=sharing) and it works just fine. I tried to install exactly the same packages that are installed on colab usi...
open
https://github.com/huggingface/datasets/issues/6539
2023-12-28T14:18:25
2023-12-28T14:18:37
null
{ "login": "lambdaofgod", "id": 3647577, "type": "User" }
[]
false
[]
2,057,377,630
6,538
ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py)
### Describe the bug While importing from packages getting the error Code: ``` import os import torch from datasets import load_dataset, Dataset from transformers import ( AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, HfArgumentParser, TrainingArguments, pipeline, ...
closed
https://github.com/huggingface/datasets/issues/6538
2023-12-27T13:31:16
2024-01-03T10:06:47
2024-01-03T10:04:58
{ "login": "Sonali-Behera-TRT", "id": 131662185, "type": "User" }
[]
false
[]
2,057,132,173
6,537
Adding support for netCDF (*.nc) files
### Feature request netCDF (*.nc) is a file format for storing multidimensional scientific data, which is used by packages like `xarray` (labelled multi-dimensional arrays in Python). It would be nice to have native support for netCDF in `datasets`. ### Motivation When uploading *.nc files onto Huggingface Hub throu...
open
https://github.com/huggingface/datasets/issues/6537
2023-12-27T09:27:29
2023-12-27T20:46:53
null
{ "login": "shermansiu", "id": 12627125, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,056,863,239
6,536
datasets.load_dataset raises FileNotFoundError for datasets==2.16.0
### Describe the bug Seems `datasets.load_dataset` raises FileNotFoundError for some hub datasets with the latest `datasets==2.16.0` ### Steps to reproduce the bug For example `pip install datasets==2.16.0` then ```python import datasets datasets.load_dataset("wentingzhao/anthropic-hh-first-prompt", cache_di...
closed
https://github.com/huggingface/datasets/issues/6536
2023-12-27T03:15:48
2023-12-30T18:58:04
2023-12-30T15:54:00
{ "login": "ArvinZhuang", "id": 46237844, "type": "User" }
[]
false
[]
2,056,264,339
6,535
IndexError: Invalid key: 47682 is out of bounds for size 0 while using PEFT
### Describe the bug I am trying to fine-tune the t5 model on the paraphrasing task. While running the same code without- model = get_peft_model(model, config) the model trains without any issues. However, using the model returned from get_peft_model raises the following error due to datasets- IndexError: Inv...
open
https://github.com/huggingface/datasets/issues/6535
2023-12-26T10:14:33
2024-02-05T08:42:31
null
{ "login": "MahavirDabas18", "id": 57484266, "type": "User" }
[]
false
[]
2,056,002,548
6,534
How to configure multiple folders in the same zip package
How should I write "config" in the README when all the data, such as train and test, is in a single zip file, with a train folder and a test folder inside data.zip?
open
https://github.com/huggingface/datasets/issues/6534
2023-12-26T03:56:20
2023-12-26T06:31:16
null
{ "login": "d710055071", "id": 12895488, "type": "User" }
[]
false
[]
2,055,929,101
6,533
ted_talks_iwslt | Error: Config name is missing
### Describe the bug Running load_dataset using the newest `datasets` library like below on the ted_talks_iwslt using year pair data will throw an error "Config name is missing" see also: https://huggingface.co/datasets/ted_talks_iwslt/discussions/3 likely caused by #6493, where the `and not config_kwargs` part...
closed
https://github.com/huggingface/datasets/issues/6533
2023-12-26T00:38:18
2023-12-30T18:58:21
2023-12-30T16:09:50
{ "login": "rayliuca", "id": 35850903, "type": "User" }
[]
false
[]
2,055,631,201
6,532
[Feature request] Indexing datasets by a customly-defined id field to enable random access dataset items via the id
### Feature request Some datasets may contain an id-like field, for example the `id` field in [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) and the `_id` field in [BeIR/dbpedia-entity](https://huggingface.co/datasets/BeIR/dbpedia-entity). HF datasets support efficient random access via r...
open
https://github.com/huggingface/datasets/issues/6532
2023-12-25T11:37:10
2025-05-05T13:25:24
null
{ "login": "Yu-Shi", "id": 3377221, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
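The requested feature can be approximated today with a plain Python mapping from id to row position. The helper below is an illustrative sketch, not part of the `datasets` API:

```python
def build_id_index(rows, id_field="_id"):
    """Map each row's id to its positional index for O(1) random access."""
    index = {}
    for i, row in enumerate(rows):
        key = row[id_field]
        if key in index:
            raise ValueError(f"duplicate id: {key!r}")
        index[key] = i
    return index

rows = [{"_id": "doc-a", "text": "first"}, {"_id": "doc-b", "text": "second"}]
index = build_id_index(rows)
assert rows[index["doc-b"]]["text"] == "second"
```

For a real `Dataset`, the same dict could be built once over the id column and then used with integer indexing, at the cost of holding all ids in memory.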
2,055,201,605
6,531
Add polars compatibility
Hey there, I've just finished adding support to convert and format to `polars.DataFrame`. This was in response to the open issue about integrating Polars [#3334](https://github.com/huggingface/datasets/issues/3334). Datasets can be switched to Polars format via `Dataset.set_format("polars")`. I've also included `to_...
closed
https://github.com/huggingface/datasets/pull/6531
2023-12-24T20:03:23
2024-03-08T19:29:25
2024-03-08T15:22:58
{ "login": "psmyth94", "id": 11325244, "type": "User" }
[]
true
[]
2,054,817,609
6,530
Impossible to save a mapped dataset to disk
### Describe the bug I want to play around with different hyperparameters when training but don't want to re-map my dataset with 3 million samples each time for tens of hours when I [fully fine-tune SDXL](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py). After...
open
https://github.com/huggingface/datasets/issues/6530
2023-12-23T15:18:27
2023-12-24T09:40:30
null
{ "login": "kopyl", "id": 17604849, "type": "User" }
[]
false
[]
2,054,209,449
6,529
Impossible to only download a test split
I've spent a significant amount of time trying to locate the split object inside my _split_generators() custom function. Then after diving [in the code](https://github.com/huggingface/datasets/blob/5ff3670c18ed34fa8ddfa70a9aa403ae6cc9ad54/src/datasets/load.py#L2558) I realized that `download_and_prepare` is executed b...
open
https://github.com/huggingface/datasets/issues/6529
2023-12-22T16:56:32
2024-02-02T00:05:04
null
{ "login": "ysig", "id": 28439529, "type": "User" }
[]
false
[]
2,053,996,494
6,528
set dev version
null
closed
https://github.com/huggingface/datasets/pull/6528
2023-12-22T14:23:18
2023-12-22T14:31:42
2023-12-22T14:25:34
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,053,966,748
6,527
Release: 2.16.0
null
closed
https://github.com/huggingface/datasets/pull/6527
2023-12-22T13:59:56
2023-12-22T14:24:12
2023-12-22T14:17:55
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,053,726,451
6,526
Preserve order of configs and splits when using Parquet exports
Preserve order of configs and splits, as defined in dataset infos. Fix #6521.
closed
https://github.com/huggingface/datasets/pull/6526
2023-12-22T10:35:56
2023-12-22T11:42:22
2023-12-22T11:36:14
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,053,119,357
6,525
BBox type
see [internal discussion](https://huggingface.slack.com/archives/C02EK7C3SHW/p1703097195609209) Draft to get some feedback on a possible `BBox` feature type that can be used to get object detection bounding boxes data in one format or another. ```python >>> from datasets import load_dataset, BBox >>> ds = load_...
closed
https://github.com/huggingface/datasets/pull/6525
2023-12-21T22:13:27
2024-01-11T06:34:51
2023-12-21T22:39:27
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
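Two common bounding-box conventions such a `BBox` type might convert between are corner form (xyxy) and corner-plus-size form (xywh). The helpers below are an illustrative sketch, not the draft PR's actual implementation:

```python
def xyxy_to_xywh(box):
    """Convert [x_min, y_min, x_max, y_max] to [x, y, width, height]."""
    x1, y1, x2, y2 = box
    return [x1, y1, x2 - x1, y2 - y1]

def xywh_to_xyxy(box):
    """Convert [x, y, width, height] back to corner form."""
    x, y, w, h = box
    return [x, y, x + w, y + h]

assert xyxy_to_xywh([10, 20, 50, 80]) == [10, 20, 40, 60]
# The two conversions are inverses of each other.
assert xywh_to_xyxy(xyxy_to_xywh([10, 20, 50, 80])) == [10, 20, 50, 80]
```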
2,053,076,311
6,524
Streaming the Pile: Missing Files
### Describe the bug The Pile does not stream; a "File Not Found" error is returned. It looks like the Pile's files have been moved. ### Steps to reproduce the bug To reproduce, run the following code: ``` from datasets import load_dataset dataset = load_dataset('EleutherAI/pile', 'en', split='train', streamin...
closed
https://github.com/huggingface/datasets/issues/6524
2023-12-21T21:25:09
2023-12-22T09:17:05
2023-12-22T09:17:05
{ "login": "FelixLabelle", "id": 23347756, "type": "User" }
[]
false
[]
2,052,643,484
6,523
fix tests
null
closed
https://github.com/huggingface/datasets/pull/6523
2023-12-21T15:36:21
2023-12-21T15:56:54
2023-12-21T15:50:38
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,052,332,528
6,522
Loading HF Hub Dataset (private org repo) fails to load all features
### Describe the bug When pushing a `Dataset` with multiple `Features` (`input`, `output`, `tags`) to Huggingface Hub (private org repo), and later downloading the `Dataset`, only `input` and `output` load - I believe the expected behavior is for all `Features` to be loaded by default? ### Steps to reproduce the ...
open
https://github.com/huggingface/datasets/issues/6522
2023-12-21T12:26:35
2023-12-21T13:24:31
null
{ "login": "versipellis", "id": 6579034, "type": "User" }
[]
false
[]
2,052,229,538
6,521
The order of the splits is not preserved
We had a regression and the order of the splits is not preserved. They are alphabetically sorted, instead of preserving the original "train", "validation", "test" order. Check: In branch "main" ```python In [9]: dataset = load_dataset("adversarial_qa", "adversarialQA") In [10]: dataset Out[10]: DatasetDict({ ...
closed
https://github.com/huggingface/datasets/issues/6521
2023-12-21T11:17:27
2023-12-22T11:36:15
2023-12-22T11:36:15
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
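The desired behavior can be sketched in isolation: keep splits in the order the dataset info defines rather than sorting them alphabetically. The helper name is hypothetical:

```python
def order_splits(split_names, reference_order):
    """Keep splits in the reference (dataset-info) order; splits not in the
    reference keep their incoming position at the end."""
    ranked = [s for s in reference_order if s in split_names]
    extras = [s for s in split_names if s not in reference_order]
    return ranked + extras

# Alphabetical sorting would yield ["test", "train", "validation"];
# the reference order restores the conventional ordering.
splits = sorted(["train", "validation", "test"])
assert order_splits(splits, ["train", "validation", "test"]) == [
    "train", "validation", "test"
]
```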
2,052,059,078
6,520
Support commit_description parameter in push_to_hub
Support `commit_description` parameter in `push_to_hub`. CC: @Wauplin
closed
https://github.com/huggingface/datasets/pull/6520
2023-12-21T09:36:11
2023-12-21T14:49:47
2023-12-21T14:43:35
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,050,759,824
6,519
Support push_to_hub canonical datasets
Support `push_to_hub` canonical datasets. This is necessary in the Space to convert script-datasets to Parquet: https://huggingface.co/spaces/albertvillanova/convert-dataset-to-parquet Note that before this PR, the `repo_id` "dataset_name" was transformed to "user/dataset_name". This behavior was introduced by: ...
closed
https://github.com/huggingface/datasets/pull/6519
2023-12-20T15:16:45
2023-12-21T14:48:20
2023-12-21T14:40:57
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,050,137,038
6,518
fix get_metadata_patterns function args error
Bug get_metadata_patterns arg error https://github.com/huggingface/datasets/issues/6517
closed
https://github.com/huggingface/datasets/pull/6518
2023-12-20T09:06:22
2023-12-21T15:14:17
2023-12-21T15:07:57
{ "login": "d710055071", "id": 12895488, "type": "User" }
[]
true
[]
2,050,121,588
6,517
Bug get_metadata_patterns arg error
https://github.com/huggingface/datasets/blob/3f149204a2a5948287adcade5e90707aa5207a92/src/datasets/load.py#L1240C1-L1240C69 metadata_patterns = get_metadata_patterns(base_path, download_config=self.download_config)
closed
https://github.com/huggingface/datasets/issues/6517
2023-12-20T08:56:44
2023-12-22T00:24:23
2023-12-22T00:24:23
{ "login": "d710055071", "id": 12895488, "type": "User" }
[]
false
[]
2,050,033,322
6,516
Support huggingface-hub pre-releases
Support `huggingface-hub` pre-releases. This way we will have our CI green when testing `huggingface-hub` release candidates. See: https://github.com/huggingface/datasets/tree/ci-test-huggingface-hub-v0.20.0.rc1 Close #6513.
closed
https://github.com/huggingface/datasets/pull/6516
2023-12-20T07:52:29
2023-12-20T08:51:34
2023-12-20T08:44:44
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,049,724,251
6,515
Why call http_head() when fsspec_head() succeeds
https://github.com/huggingface/datasets/blob/a91582de288d98e94bcb5ab634ca1cfeeff544c5/src/datasets/utils/file_utils.py#L510C1-L523C14
closed
https://github.com/huggingface/datasets/issues/6515
2023-12-20T02:25:51
2023-12-26T05:35:46
2023-12-26T05:35:46
{ "login": "d710055071", "id": 12895488, "type": "User" }
[]
false
[]
2,049,600,663
6,514
Cache backward compatibility with 2.15.0
...for datasets without scripts It takes into account the changes in cache from - https://github.com/huggingface/datasets/pull/6493: switch to `config/version/commit_sha` schema - https://github.com/huggingface/datasets/pull/6454: fix `DataFilesDict` keys ordering when hashing requires https://github.com/huggin...
closed
https://github.com/huggingface/datasets/pull/6514
2023-12-19T23:52:25
2023-12-21T21:14:11
2023-12-21T21:07:55
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,048,869,151
6,513
Support huggingface-hub 0.20.0
CI to test the support of `huggingface-hub` 0.20.0: https://github.com/huggingface/datasets/compare/main...ci-test-huggingface-hub-v0.20.0.rc1 We need to merge: - #6510 - #6512 - #6516
closed
https://github.com/huggingface/datasets/issues/6513
2023-12-19T15:15:46
2023-12-20T08:44:45
2023-12-20T08:44:45
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
false
[]
2,048,795,819
6,512
Remove deprecated HfFolder
...and use `huggingface_hub.get_token()` instead
closed
https://github.com/huggingface/datasets/pull/6512
2023-12-19T14:40:49
2023-12-19T20:21:13
2023-12-19T20:14:30
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,048,465,958
6,511
Implement get dataset default config name
Implement `get_dataset_default_config_name`. Now that we support setting a configuration as default in `push_to_hub` (see #6500), we need a programmatically way to know in advance which is the default configuration. This will be used in the Space to convert script-datasets to Parquet: https://huggingface.co/spaces/a...
closed
https://github.com/huggingface/datasets/pull/6511
2023-12-19T11:26:19
2023-12-21T14:48:57
2023-12-21T14:42:41
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,046,928,742
6,510
Replace `list_files_info` with `list_repo_tree` in `push_to_hub`
Starting from `huggingface_hub` 0.20.0, `list_files_info` will be deprecated in favor of `list_repo_tree` (see https://github.com/huggingface/huggingface_hub/pull/1910)
closed
https://github.com/huggingface/datasets/pull/6510
2023-12-18T15:34:19
2023-12-19T18:05:47
2023-12-19T17:58:34
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
2,046,720,869
6,509
Better cast error when generating dataset
I want to improve the error message for datasets like https://huggingface.co/datasets/m-a-p/COIG-CQIA Cc @albertvillanova @severo is this new error ok ? Or should I use a dedicated error class ? New: ```python Traceback (most recent call last): File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py...
closed
https://github.com/huggingface/datasets/pull/6509
2023-12-18T13:57:24
2023-12-19T09:37:12
2023-12-19T09:31:03
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,045,733,273
6,508
Read GeoParquet files using parquet reader
Let GeoParquet files with the file extension `*.geoparquet` or `*.gpq` be readable by the default parquet reader. Those two file extensions are the ones most commonly used for GeoParquet files, and is included in the `gpq` validator tool at https://github.com/planetlabs/gpq/blob/e5576b4ee7306b4d2259d56c879465a9364da...
closed
https://github.com/huggingface/datasets/pull/6508
2023-12-18T04:50:37
2024-01-26T18:22:35
2024-01-26T16:18:41
{ "login": "weiji14", "id": 23487320, "type": "User" }
[]
true
[]
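The extension matching this PR describes can be sketched as follows; the constant and helper names are illustrative, not the library's internals:

```python
import os

# ".geoparquet" and ".gpq" are the extensions the PR adds alongside ".parquet".
PARQUET_EXTENSIONS = {".parquet", ".geoparquet", ".gpq"}

def is_parquet_like(path: str) -> bool:
    """Route a file to the parquet reader based on its extension."""
    return os.path.splitext(path)[1].lower() in PARQUET_EXTENSIONS

assert is_parquet_like("data/train.geoparquet")
assert is_parquet_like("data/train.GPQ")
assert not is_parquet_like("data/train.csv")
```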
2,045,152,928
6,507
where is glue_metric.py
> @Frankie123421 what was the resolution to this? use glue_metric.py instead of glue.py in load_metric _Originally posted by @Frankie123421 in https://github.com/huggingface/datasets/issues/2117#issuecomment-905093763_
closed
https://github.com/huggingface/datasets/issues/6507
2023-12-17T09:58:25
2023-12-18T11:42:49
2023-12-18T11:42:49
{ "login": "Mcccccc1024", "id": 119146162, "type": "User" }
[]
false
[]
2,044,975,038
6,506
Incorrect test set labels for RTE and CoLA datasets via load_dataset
### Describe the bug The test set labels for the RTE and CoLA datasets when loading via datasets load_dataset are all -1. Edit: It appears this is also the case for every other dataset except for MRPC (stsb, sst2, qqp, mnli (both matched and mismatched), qnli, wnli, ax). Is this intended behavior to safeguard the t...
closed
https://github.com/huggingface/datasets/issues/6506
2023-12-16T22:06:08
2023-12-21T09:57:57
2023-12-21T09:57:57
{ "login": "emreonal11", "id": 73316684, "type": "User" }
[]
false
[]
2,044,721,288
6,505
Got stuck when I trying to load a dataset
### Describe the bug Hello, everyone. I ran into a problem when trying to load a data file using the load_dataset method on a Debian 10 system. The data file is not very large, only 1.63 MB with 600 records. Here is my code: from datasets import load_dataset dataset = load_dataset('json', data_files='mypath/oaast_r...
open
https://github.com/huggingface/datasets/issues/6505
2023-12-16T11:51:07
2024-12-24T16:45:52
null
{ "login": "yirenpingsheng", "id": 18232551, "type": "User" }
[]
false
[]
2,044,541,154
6,504
Error Pushing to Hub
### Describe the bug Error when trying to push a dataset in a special format to hub ### Steps to reproduce the bug ``` import datasets from datasets import Dataset dataset_dict = { "filename": ["apple", "banana"], "token": [[[1,2],[3,4]],[[1,2],[3,4]]], "label": [0, 1], } dataset = Dataset.from_d...
closed
https://github.com/huggingface/datasets/issues/6504
2023-12-16T01:05:22
2023-12-16T06:20:53
2023-12-16T06:20:53
{ "login": "Jiayi-Pan", "id": 55055083, "type": "User" }
[]
false
[]
2,043,847,591
6,503
Fix streaming xnli
This code was failing ```python In [1]: from datasets import load_dataset In [2]: ...: ds = load_dataset("xnli", "all_languages", split="test", streaming=True) ...: ...: sample_data = next(iter(ds))["premise"] # pick up one data ...: input_text = list(sample_data.valu...
closed
https://github.com/huggingface/datasets/pull/6503
2023-12-15T14:40:57
2023-12-15T14:51:06
2023-12-15T14:44:47
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,043,771,731
6,502
Pickle support for `torch.Generator` objects
Fix for https://discuss.huggingface.co/t/caching-a-dataset-processed-with-randomness/65616
closed
https://github.com/huggingface/datasets/pull/6502
2023-12-15T13:55:12
2023-12-15T15:04:33
2023-12-15T14:58:22
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
2,043,377,240
6,501
OverflowError: value too large to convert to int32_t
### Describe the bug ![image](https://github.com/huggingface/datasets/assets/47747764/f58044fb-ddda-48b6-ba68-7bbfef781630) ### Steps to reproduce the bug just loading datasets ### Expected behavior how can I fix it ### Environment info pip install /mnt/cluster/zhangfan/study_info/LLaMA-Factory/peft-0.6.0-py3...
open
https://github.com/huggingface/datasets/issues/6501
2023-12-15T10:10:21
2025-06-27T04:27:14
null
{ "login": "zhangfan-algo", "id": 47747764, "type": "User" }
[]
false
[]
2,043,258,633
6,500
Enable setting config as default when push_to_hub
Fix #6497.
closed
https://github.com/huggingface/datasets/pull/6500
2023-12-15T09:17:41
2023-12-18T11:56:11
2023-12-18T11:50:03
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,043,166,976
6,499
docs: add reference Git over SSH
see https://discuss.huggingface.co/t/update-datasets-getting-started-to-new-git-security/65893
closed
https://github.com/huggingface/datasets/pull/6499
2023-12-15T08:38:31
2023-12-15T11:48:47
2023-12-15T11:42:38
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
2,042,075,969
6,498
Fallback on dataset script if user wants to load default config
Right now this code is failing on `main`: ```python load_dataset("openbookqa") ``` This is because it tries to load the dataset from the Parquet export but the dataset has multiple configurations and the Parquet export doesn't know which one is the default one. I fixed this by simply falling back on using th...
closed
https://github.com/huggingface/datasets/pull/6498
2023-12-14T16:46:01
2023-12-15T13:16:56
2023-12-15T13:10:48
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,041,994,274
6,497
Support setting a default config name in push_to_hub
In order to convert script-datasets to no-script datasets, we need to support setting a default config name for those scripts that set one.
closed
https://github.com/huggingface/datasets/issues/6497
2023-12-14T15:59:03
2023-12-18T11:50:04
2023-12-18T11:50:04
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,041,589,386
6,496
Error when writing a dataset to HF Hub: A commit has happened since. Please refresh and try again.
**Describe the bug** Getting a `412 Client Error: Precondition Failed` when trying to write a dataset to the HF hub. ``` huggingface_hub.utils._errors.HfHubHTTPError: 412 Client Error: Precondition Failed for url: https://huggingface.co/api/datasets/GLorr/test-dask/commit/main (Request ID: Root=1-657ae26f-3bd92b...
open
https://github.com/huggingface/datasets/issues/6496
2023-12-14T11:24:54
2023-12-14T12:22:21
null
{ "login": "GeorgesLorre", "id": 35808396, "type": "User" }
[]
false
[]
2,039,684,839
6,494
Image Data loaded Twice
### Describe the bug ![1702472610561](https://github.com/huggingface/datasets/assets/28867010/4b7ef5e7-32c3-4b73-84cb-5de059caa0b6) When I learn from https://huggingface.co/docs/datasets/image_load and try to load image data from a folder. I noticed that the image was read twice in the returned data. As you can see i...
open
https://github.com/huggingface/datasets/issues/6494
2023-12-13T13:11:42
2023-12-13T13:11:42
null
{ "login": "ArcaneLex", "id": 28867010, "type": "User" }
[]
false
[]
2,039,708,529
6,495
Newline characters don't behave as expected when calling dataset.info
### System Info - `transformers` version: 4.32.1 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.11.5 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.2 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.1+cpu (False) - Tensorflow version (GPU...
open
https://github.com/huggingface/datasets/issues/6495
2023-12-12T23:07:51
2023-12-13T13:24:22
null
{ "login": "gerald-wrona", "id": 32300890, "type": "User" }
[]
false
[]
2,038,221,490
6,493
Lazy data files resolution and offline cache reload
Includes both https://github.com/huggingface/datasets/pull/6458 and https://github.com/huggingface/datasets/pull/6459 This PR should be merged instead of the two individually, since they are conflicting. ## Offline cache reload It can reload datasets that were pushed to the Hub if they exist in the cache. examp...
closed
https://github.com/huggingface/datasets/pull/6493
2023-12-12T17:15:17
2023-12-21T15:19:20
2023-12-21T15:13:11
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,037,987,267
6,492
Make push_to_hub return CommitInfo
Make `push_to_hub` return `CommitInfo`. This is useful, for example, if we pass `create_pr=True` and we want to know the created PR ID. CC: @severo for the use case in https://huggingface.co/datasets/jmhessel/newyorker_caption_contest/discussions/4
closed
https://github.com/huggingface/datasets/pull/6492
2023-12-12T15:18:16
2023-12-13T14:29:01
2023-12-13T14:22:41
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,037,690,643
6,491
Fix metrics dead link
null
closed
https://github.com/huggingface/datasets/pull/6491
2023-12-12T12:51:49
2023-12-21T15:15:08
2023-12-21T15:08:53
{ "login": "qgallouedec", "id": 45557362, "type": "User" }
[]
true
[]
2,037,204,892
6,490
`load_dataset(...,save_infos=True)` not working without loading script
### Describe the bug It seems that saving a dataset infos back into the card file is not working for datasets without a loading script. After tracking the problem a bit it looks like saving the infos uses `Builder.get_imported_module_dir()` as its destination directory. Internally this is a call to `inspect.getfil...
open
https://github.com/huggingface/datasets/issues/6490
2023-12-12T08:09:18
2023-12-12T08:36:22
null
{ "login": "morganveyret", "id": 114978051, "type": "User" }
[]
false
[]
2,036,743,777
6,489
load_dataset imageflder for aws s3 path
### Feature request I would like to load a dataset from S3 using the imagefolder option something like `dataset = datasets.load_dataset('imagefolder', data_dir='s3://.../lsun/train/bedroom', fs=S3FileSystem(), streaming=True) ` ### Motivation no need of data_files ### Your contribution no experience...
open
https://github.com/huggingface/datasets/issues/6489
2023-12-12T00:08:43
2023-12-12T00:09:27
null
{ "login": "segalinc", "id": 9353106, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,035,899,898
6,488
429 Client Error
Hello, I was downloading the following dataset and after 20% of data was downloaded, I started getting error 429. It is not resolved since a few days. How should I resolve it? Thanks Dataset: https://huggingface.co/datasets/cerebras/SlimPajama-627B Error: `requests.exceptions.HTTPError: 429 Client Error: Too M...
open
https://github.com/huggingface/datasets/issues/6488
2023-12-11T15:06:01
2024-06-20T05:55:45
null
{ "login": "sasaadi", "id": 7882383, "type": "User" }
[]
false
[]
2,035,424,254
6,487
Update builder hash with info
Currently if you change the `dataset_info` of a dataset (e.g. in the YAML part of the README.md), the cache ignores this change. This is problematic because you want to regenerate a dataset if you change the features or the split sizes for example (e.g. after push_to_hub) Ideally we should take the resolved files...
closed
https://github.com/huggingface/datasets/pull/6487
2023-12-11T11:09:16
2024-01-11T06:35:07
2023-12-11T11:41:34
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,035,206,206
6,486
Fix docs phrasing about supported formats when sharing a dataset
Fix docs phrasing.
closed
https://github.com/huggingface/datasets/pull/6486
2023-12-11T09:21:22
2023-12-13T14:21:29
2023-12-13T14:15:21
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,035,141,884
6,485
FileNotFoundError: [Errno 2] No such file or directory: 'nul'
### Describe the bug it seems that sth wrong with my terrible "bug body" life, When i run this code, "import datasets" i meet this error FileNotFoundError: [Errno 2] No such file or directory: 'nul' ![image](https://github.com/huggingface/datasets/assets/73683903/3973c120-ebb1-42b7-bede-b9de053e861d) ![image](htt...
closed
https://github.com/huggingface/datasets/issues/6485
2023-12-11T08:52:13
2023-12-14T08:09:08
2023-12-14T08:09:08
{ "login": "amanyara", "id": 73683903, "type": "User" }
[]
false
[]
2,032,946,981
6,483
Iterable Dataset: rename column clashes with remove column
### Describe the bug Suppose I have a two iterable datasets, one with the features: * `{"audio", "text", "column_a"}` And the other with the features: * `{"audio", "sentence", "column_b"}` I want to combine both datasets using `interleave_datasets`, which requires me to unify the column names. I would typic...
closed
https://github.com/huggingface/datasets/issues/6483
2023-12-08T16:11:30
2023-12-08T16:27:16
2023-12-08T16:27:04
{ "login": "sanchit-gandhi", "id": 93869735, "type": "User" }
[ { "name": "streaming", "color": "fef2c0" } ]
false
[]
2,033,333,294
6,484
[Feature Request] Dataset versioning
**Is your feature request related to a problem? Please describe.** I am working on a project, where I would like to test different preprocessing methods for my ML-data. Thus, I would like to work a lot with revisions and compare them. Currently, I was not able to make it work with the revision keyword because it was n...
open
https://github.com/huggingface/datasets/issues/6484
2023-12-08T16:01:35
2023-12-11T19:13:46
null
{ "login": "kenfus", "id": 47979198, "type": "User" }
[]
false
[]
2,032,675,918
6,482
Fix max lock length on unix
reported in https://github.com/huggingface/datasets/pull/6482
closed
https://github.com/huggingface/datasets/pull/6482
2023-12-08T13:39:30
2023-12-12T11:53:32
2023-12-12T11:47:27
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,032,650,003
6,481
using torchrun, save_to_disk suddenly shows SIGTERM
### Describe the bug When I run my code using the "torchrun" command, when the code reaches the "save_to_disk" part, suddenly I get the following warning and error messages: Because the dataset is too large, the "save_to_disk" function splits it into 70 parts for saving. However, an error occurs suddenly when it reac...
open
https://github.com/huggingface/datasets/issues/6481
2023-12-08T13:22:03
2023-12-08T13:22:03
null
{ "login": "Ariya12138", "id": 85916625, "type": "User" }
[]
false
[]
2,031,116,653
6,480
Add IterableDataset `__repr__`
Example for glue sst2: Dataset ``` DatasetDict({ test: Dataset({ features: ['sentence', 'label', 'idx'], num_rows: 1821 }) train: Dataset({ features: ['sentence', 'label', 'idx'], num_rows: 67349 }) validation: Dataset({ features: ['sentence',...
closed
https://github.com/huggingface/datasets/pull/6480
2023-12-07T16:31:50
2023-12-08T13:33:06
2023-12-08T13:26:54
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,029,040,121
6,479
More robust preupload retry mechanism
null
closed
https://github.com/huggingface/datasets/pull/6479
2023-12-06T17:19:38
2023-12-06T19:47:29
2023-12-06T19:41:06
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
2,028,071,596
6,478
How to load data from lakefs
My dataset is stored on the company's lakefs server. How can I write code to load the dataset? It would be great if I could provide code examples or provide some references
closed
https://github.com/huggingface/datasets/issues/6478
2023-12-06T09:04:11
2024-07-03T19:13:57
2024-07-03T19:13:56
{ "login": "d710055071", "id": 12895488, "type": "User" }
[]
false
[]
2,028,022,374
6,477
Fix PermissionError on Windows CI
Fix #6476.
closed
https://github.com/huggingface/datasets/pull/6477
2023-12-06T08:34:53
2023-12-06T09:24:11
2023-12-06T09:17:52
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,028,018,596
6,476
CI on windows is broken: PermissionError
See: https://github.com/huggingface/datasets/actions/runs/7104781624/job/19340572394 ``` FAILED tests/test_load.py::test_loading_from_the_datasets_hub - NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\RUNNER~1\\AppData\\Local\\Temp\\tmpfcnps56i\\hf-internal-testing___dataset_with_script\...
closed
https://github.com/huggingface/datasets/issues/6476
2023-12-06T08:32:53
2023-12-06T09:17:53
2023-12-06T09:17:53
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
2,027,373,734
6,475
laion2B-en failed to load on Windows with PrefetchVirtualMemory failed
### Describe the bug I have downloaded laion2B-en, and I'm receiving the following error trying to load it: ``` Resolving data files: 100%|██████████| 128/128 [00:00<00:00, 1173.79it/s] Traceback (most recent call last): File "D:\Art-Workspace\src\artworkspace\tokeneval\compute_frequencies.py", line 31, in <mo...
open
https://github.com/huggingface/datasets/issues/6475
2023-12-06T00:07:34
2023-12-06T23:26:23
null
{ "login": "doctorpangloss", "id": 2229300, "type": "User" }
[]
false
[]
2,027,006,715
6,474
Deprecate Beam API and download from HF GCS bucket
Deprecate the Beam API and download from the HF GCS bucked. TODO: - [x] Convert the Beam-based [`wikipedia`](https://huggingface.co/datasets/wikipedia) to an Arrow-based dataset ([Hub PR](https://huggingface.co/datasets/wikipedia/discussions/19)) - [x] Make [`natural_questions`](https://huggingface.co/datasets/na...
closed
https://github.com/huggingface/datasets/pull/6474
2023-12-05T19:51:33
2024-03-12T14:56:25
2024-03-12T14:50:12
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
2,026,495,084
6,473
Fix CI quality
Fix #6472.
closed
https://github.com/huggingface/datasets/pull/6473
2023-12-05T15:36:23
2023-12-05T18:14:50
2023-12-05T18:08:41
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,026,493,439
6,472
CI quality is broken
See: https://github.com/huggingface/datasets/actions/runs/7100835633/job/19327734359 ``` Would reformat: src/datasets/features/image.py 1 file would be reformatted, 253 files left unchanged ```
closed
https://github.com/huggingface/datasets/issues/6472
2023-12-05T15:35:34
2023-12-06T08:17:34
2023-12-05T18:08:43
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "maintenance", "color": "d4c5f9" } ]
false
[]
2,026,100,761
6,471
Remove delete doc CI
null
closed
https://github.com/huggingface/datasets/pull/6471
2023-12-05T12:37:50
2023-12-05T12:44:59
2023-12-05T12:38:50
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,024,724,319
6,470
If an image in a dataset is corrupted, we get unescapable error
### Describe the bug Example discussed in detail here: https://huggingface.co/datasets/sasha/birdsnap/discussions/1 ### Steps to reproduce the bug ``` from datasets import load_dataset, VerificationMode dataset = load_dataset( 'sasha/birdsnap', split="train", verification_mode=VerificationMode.ALL_C...
open
https://github.com/huggingface/datasets/issues/6470
2023-12-04T20:58:49
2023-12-04T20:58:49
null
{ "login": "chigozienri", "id": 14337872, "type": "User" }
[]
false
[]
2,023,695,839
6,469
Don't expand_info in HF glob
Finally fix https://github.com/huggingface/datasets/issues/5537
closed
https://github.com/huggingface/datasets/pull/6469
2023-12-04T12:00:37
2023-12-15T13:18:37
2023-12-15T13:12:30
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,023,617,877
6,468
Use auth to get parquet export
added `token` to the `_datasets_server` functions
closed
https://github.com/huggingface/datasets/pull/6468
2023-12-04T11:18:27
2023-12-04T17:21:22
2023-12-04T17:15:11
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,023,174,233
6,467
New version release request
### Feature request Hi! I am using `datasets` in library `xtuner` and am highly interested in the features introduced since v2.15.0. To avoid installation from source in our pypi wheels, we are eagerly waiting for the new release. So, Does your team have a new release plan for v2.15.1 and could you please share ...
closed
https://github.com/huggingface/datasets/issues/6467
2023-12-04T07:08:26
2023-12-04T15:42:22
2023-12-04T15:42:22
{ "login": "LZHgrla", "id": 36994684, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,022,601,176
6,466
Can't align optional features of struct
### Describe the bug Hello! I'm currently experiencing an issue where I can't concatenate datasets if an inner field of a Feature is Optional. I have a column named `speaker`, and this holds some information about a speaker. ```python @dataclass class Speaker: name: str email: Optional[str] ``` ...
closed
https://github.com/huggingface/datasets/issues/6466
2023-12-03T15:57:07
2024-02-15T15:19:33
2024-02-08T14:38:34
{ "login": "Dref360", "id": 8976546, "type": "User" }
[]
false
[]
2,022,212,468
6,465
`load_dataset` uses out-of-date cache instead of re-downloading a changed dataset
### Describe the bug When a dataset is updated on the hub, using `load_dataset` will load the locally cached dataset instead of re-downloading the updated dataset ### Steps to reproduce the bug Here is a minimal example script to 1. create an initial dataset and upload 2. download it so it is stored in cache 3. c...
open
https://github.com/huggingface/datasets/issues/6465
2023-12-02T21:35:17
2024-08-20T08:32:11
null
{ "login": "mnoukhov", "id": 3391297, "type": "User" }
[]
false
[]
2,020,860,462
6,464
Add concurrent loading of shards to datasets.load_from_disk
In some file systems (like luster), memory mapping arrow files takes time. This can be accelerated by performing the mmap in parallel on processes or threads. - Threads seem to be faster than processes when gathering the list of tables from the workers (see https://github.com/huggingface/datasets/issues/2252). - I'...
closed
https://github.com/huggingface/datasets/pull/6464
2023-12-01T13:13:53
2024-01-26T15:17:43
2024-01-26T15:10:26
{ "login": "kkoutini", "id": 51880718, "type": "User" }
[]
true
[]
2,020,702,967
6,463
Disable benchmarks in PRs
In order to keep PR pages less spammy / more readable. Having the benchmarks on commits on `main` is enough imo
closed
https://github.com/huggingface/datasets/pull/6463
2023-12-01T11:35:30
2023-12-01T12:09:09
2023-12-01T12:03:04
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,019,238,388
6,462
Missing DatasetNotFoundError
continuation of https://github.com/huggingface/datasets/pull/6431 this should fix the CI in https://github.com/huggingface/datasets/pull/6458 too
closed
https://github.com/huggingface/datasets/pull/6462
2023-11-30T18:09:43
2023-11-30T18:36:40
2023-11-30T18:30:30
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,018,850,731
6,461
Fix shard retry mechanism in `push_to_hub`
When it fails, `preupload_lfs_files` throws a [`RuntimeError`](https://github.com/huggingface/huggingface_hub/blob/5eefebee2c150a2df950ab710db350e96c711433/src/huggingface_hub/_commit_api.py#L402) error and chains the original HTTP error. This PR modifies the retry mechanism's error handling to account for that. Fix...
closed
https://github.com/huggingface/datasets/pull/6461
2023-11-30T14:57:14
2023-12-01T17:57:39
2023-12-01T17:51:33
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
2,017,433,899
6,460
jsonlines files don't load with `load_dataset`
### Describe the bug While [the docs](https://huggingface.co/docs/datasets/upload_dataset#upload-dataset) seem to state that `.jsonl` is a supported extension for `datasets`, loading the dataset results in a `JSONDecodeError`. ### Steps to reproduce the bug Code: ``` from datasets import load_dataset dset = load_...
closed
https://github.com/huggingface/datasets/issues/6460
2023-11-29T21:20:11
2023-12-29T02:58:29
2023-12-05T13:30:53
{ "login": "serenalotreck", "id": 41377532, "type": "User" }
[]
false
[]
2,017,029,380
6,459
Retrieve cached datasets that were pushed to hub when offline
I drafted the logic to retrieve a no-script dataset in the cache. For example it can reload datasets that were pushed to hub if they exist in the cache. example: ```python >>> Dataset.from_dict({"a": [1, 2]}).push_to_hub("lhoestq/tmp") >>> load_dataset("lhoestq/tmp") DatasetDict({ train: Dataset({ ...
closed
https://github.com/huggingface/datasets/pull/6459
2023-11-29T16:56:15
2024-03-25T13:55:42
2024-03-25T13:55:42
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,016,577,761
6,458
Lazy data files resolution
Related to discussion at https://github.com/huggingface/datasets/pull/6255 this makes this code run in 2sec instead of >10sec ```python from datasets import load_dataset ds = load_dataset("glue", "sst2", streaming=True, trust_remote_code=False) ``` For some datasets with many configs and files it can be u...
closed
https://github.com/huggingface/datasets/pull/6458
2023-11-29T13:18:44
2024-02-08T14:41:35
2024-02-08T14:41:35
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,015,650,563
6,457
`TypeError`: huggingface_hub.hf_file_system.HfFileSystem.find() got multiple values for keyword argument 'maxdepth'
### Describe the bug Please see https://github.com/huggingface/huggingface_hub/issues/1872 ### Steps to reproduce the bug Please see https://github.com/huggingface/huggingface_hub/issues/1872 ### Expected behavior Please see https://github.com/huggingface/huggingface_hub/issues/1872 ### Environment info Please s...
closed
https://github.com/huggingface/datasets/issues/6457
2023-11-29T01:57:36
2023-11-29T15:39:03
2023-11-29T02:02:38
{ "login": "wasertech", "id": 79070834, "type": "User" }
[]
false
[]
2,015,186,090
6,456
Don't require trust_remote_code in inspect_dataset
don't require `trust_remote_code` in (deprecated) `inspect_dataset` (it defeats its purpose) (not super important but we might as well keep it until the next major release) this is needed to fix the tests in https://github.com/huggingface/datasets/pull/6448
closed
https://github.com/huggingface/datasets/pull/6456
2023-11-28T19:47:07
2023-11-30T10:40:23
2023-11-30T10:34:12
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,013,001,584
6,454
Refactor `dill` logic
Refactor the `dill` logic to make it easier to maintain (and fix some issues along the way) It makes the following improvements to the serialization API: * consistent order of a `dict`'s keys * support for hashing `torch.compile`-ed modules and functions * deprecates `datasets.fingerprint.hashregister` as the `ha...
closed
https://github.com/huggingface/datasets/pull/6454
2023-11-27T20:01:25
2023-11-28T16:29:58
2023-11-28T16:29:31
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]