Column schema (name: dtype, observed range):
- id: int64 (min 599M, max 3.26B)
- number: int64 (min 1, max 7.7k)
- title: string (length 1 to 290)
- body: string (length 0 to 228k)
- state: string (2 classes)
- html_url: string (length 46 to 51)
- created_at: timestamp[s] (2020-04-14 10:18:02 to 2025-07-23 08:04:53)
- updated_at: timestamp[s] (2020-04-27 16:04:17 to 2025-07-23 18:53:44)
- closed_at: timestamp[s] (2020-04-14 12:01:40 to 2025-07-23 16:44:42)
- user: dict
- labels: list (length 0 to 4)
- is_pull_request: bool (2 classes)
- comments: list (length 0 to 0)
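The schema above can be exercised with a minimal plain-Python sketch of record filtering (the two sample records are copied from entries further down; the field selection and variable names are my own, and real rows carry all thirteen columns):

```python
from datetime import datetime

# Two records shaped like the rows below, reduced to the fields needed
# for filtering (values copied from the dump).
records = [
    {
        "number": 6759,
        "title": "Persistent multi-process Pool",
        "state": "open",
        "is_pull_request": False,
        "created_at": datetime(2024, 3, 26, 17, 35, 25),
    },
    {
        "number": 6757,
        "title": "Test disabling transformers containers in docs CI",
        "state": "open",
        "is_pull_request": True,
        "created_at": datetime(2024, 3, 25, 17, 16, 11),
    },
]

# `is_pull_request` is what separates issues from pull requests here.
open_issue_numbers = [r["number"] for r in records
                      if r["state"] == "open" and not r["is_pull_request"]]
open_pr_numbers = [r["number"] for r in records
                   if r["state"] == "open" and r["is_pull_request"]]
print(open_issue_numbers, open_pr_numbers)  # [6759] [6757]
```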
2,208,892,891
6,759
Persistent multi-process Pool
### Feature request Running .map and filter functions with `num_procs` consecutively instantiates several multiprocessing pools iteratively. As instantiating a Pool is very resource intensive it can be a bottleneck to performing iteratively filtering. My ideas: 1. There should be an option to declare `persist...
open
https://github.com/huggingface/datasets/issues/6759
2024-03-26T17:35:25
2024-03-26T17:35:25
null
{ "login": "fostiropoulos", "id": 4337024, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
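The request in #6759 above (reuse one pool across consecutive map/filter passes instead of re-instantiating it per call) can be sketched as follows. A `ThreadPool` stands in so the sketch is self-contained; the issue concerns the process-based `Pool`, whose instantiation is the expensive part, but the reuse pattern is the same:

```python
from multiprocessing.pool import ThreadPool

# One pool, reused for both passes, instead of one pool per call.
data = list(range(8))
with ThreadPool(processes=2) as pool:
    squares = pool.map(lambda x: x * x, data)       # first pass: map
    keep = pool.map(lambda x: x % 2 == 0, squares)  # second pass: filter predicate
kept = [v for v, k in zip(squares, keep) if k]
print(kept)  # [0, 4, 16, 36]
```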
2,208,494,302
6,758
Passing `sample_by` to `load_dataset` when loading text data does not work
### Describe the bug I have a dataset that consists of a bunch of text files, each representing an example. There is an undocumented `sample_by` argument for the `TextConfig` class that is used by `Text` to decide whether to split files into lines, paragraphs or take them whole. Passing `sample_by="document"` to `load...
closed
https://github.com/huggingface/datasets/issues/6758
2024-03-26T14:55:33
2024-04-09T11:27:59
2024-04-09T11:27:59
{ "login": "ntoxeg", "id": 823693, "type": "User" }
[]
false
[]
2,206,280,340
6,757
Test disabling transformers containers in docs CI
Related to https://github.com/huggingface/doc-builder/pull/487 and [internal slack thread](https://huggingface.slack.com/archives/C04F8N7FQNL/p1711384899462349?thread_ts=1711041424.720769&cid=C04F8N7FQNL). There is now a `custom_container` option when building docs in CI. When set to `""` (instead of `"huggingface/tran...
open
https://github.com/huggingface/datasets/pull/6757
2024-03-25T17:16:11
2024-03-27T16:26:35
null
{ "login": "Wauplin", "id": 11801849, "type": "User" }
[]
true
[]
2,205,557,725
6,756
Support SQLite files?
### Feature request Support loading a dataset from a SQLite file https://huggingface.co/datasets/severo/test_iris_sqlite/tree/main ### Motivation SQLite is a popular file format. ### Your contribution See discussion on slack: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1702481859117909 (internal) In ...
closed
https://github.com/huggingface/datasets/issues/6756
2024-03-25T11:48:05
2024-03-26T16:09:32
2024-03-26T16:09:32
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,204,573,289
6,755
Small typo on the documentation
### Describe the bug There is a small typo on https://github.com/huggingface/datasets/blob/d5468836fe94e8be1ae093397dd43d4a2503b926/src/datasets/dataset_dict.py#L938 It should be `caching is enabled`. ### Steps to reproduce the bug Please visit https://github.com/huggingface/datasets/blob/d5468836fe94e...
closed
https://github.com/huggingface/datasets/issues/6755
2024-03-24T21:47:52
2024-04-02T14:01:19
2024-04-02T14:01:19
{ "login": "fostiropoulos", "id": 4337024, "type": "User" }
[ { "name": "good first issue", "color": "7057ff" } ]
false
[]
2,204,214,595
6,754
Fix cache path to snakecase for `CachedDatasetModuleFactory` and `Cache`
Fix https://github.com/huggingface/datasets/issues/6750#issuecomment-2016678729 I didn't find a guideline on how to run the tests, so i just run the following steps to make sure that this bug is fixed. 1. `python test.py`, 2. then `HF_DATASETS_OFFLINE=1 python test.py` The `test.py` is ``` import datasets ...
closed
https://github.com/huggingface/datasets/pull/6754
2024-03-24T06:59:15
2024-04-15T15:45:44
2024-04-15T15:38:51
{ "login": "izhx", "id": 26690193, "type": "User" }
[]
true
[]
2,204,155,091
6,753
Type error when importing datasets on Kaggle
### Describe the bug When trying to run ``` import datasets print(datasets.__version__) ``` It generates the following error ``` TypeError: expected string or bytes-like object ``` It looks like It cannot find the valid versions of `fsspec` though fsspec version is fine when I checked Via command ...
closed
https://github.com/huggingface/datasets/issues/6753
2024-03-24T03:01:30
2024-10-02T11:49:35
2024-03-30T00:23:49
{ "login": "jtv199", "id": 18300717, "type": "User" }
[]
false
[]
2,204,043,839
6,752
Precision being changed from float16 to float32 unexpectedly
### Describe the bug I'm loading a HuggingFace Dataset for images. I'm running a preprocessing (map operation) step that runs a few operations, one of them being conversion to float16. The Dataset features also say that the 'img' is of type float16. Whenever I take an image from that HuggingFace Dataset instance...
open
https://github.com/huggingface/datasets/issues/6752
2024-03-23T20:53:56
2024-04-10T15:21:33
null
{ "login": "gcervantes8", "id": 21228908, "type": "User" }
[]
false
[]
2,203,951,501
6,751
Use 'with' operator for some download functions
Some functions in `streaming_download_manager.py` are not closing the file they open which lead to `Unclosed file` warnings in our code. This fixes a few of them.
closed
https://github.com/huggingface/datasets/pull/6751
2024-03-23T16:32:08
2024-03-26T00:40:57
2024-03-26T00:40:57
{ "login": "Moisan", "id": 31669, "type": "User" }
[]
true
[]
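The fix in #6751 above boils down to opening files with a context manager so they are closed deterministically rather than emitting "Unclosed file" warnings. A generic stdlib sketch (not the `streaming_download_manager.py` code itself):

```python
import os
import tempfile

# Write and read a file via `with`; each handle is closed when its
# block exits, with no reliance on garbage collection.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:
    f.write("hello")
with open(path) as f:       # closed automatically on block exit
    content = f.read()
print(content, f.closed)  # hello True
```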
2,203,590,658
6,750
`load_dataset` requires a network connection for local download?
### Describe the bug Hi all - I see that in the past a network dependency has been mistakenly introduced into `load_dataset` even for local loads. Is it possible this has happened again? ### Steps to reproduce the bug ``` >>> import datasets >>> datasets.load_dataset("hh-rlhf") Repo card metadata block was not ...
closed
https://github.com/huggingface/datasets/issues/6750
2024-03-23T01:06:32
2024-04-15T15:38:52
2024-04-15T15:38:52
{ "login": "MiroFurtado", "id": 6306695, "type": "User" }
[]
false
[]
2,202,310,116
6,749
Fix fsspec tqdm callback
Following changes at https://github.com/fsspec/filesystem_spec/pull/1497 for `fsspec>=2024.2.0`
closed
https://github.com/huggingface/datasets/pull/6749
2024-03-22T11:44:11
2024-03-22T14:51:45
2024-03-22T14:45:39
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,201,517,348
6,748
Strange slicing behavior
### Describe the bug I have loaded a dataset, and then slice first 300 samples using `:` ops, however, the resulting dataset is not expected, as the output below: ```bash len(dataset)=1050324 len(dataset[:300])=2 len(dataset[0:300])=2 len(dataset.select(range(300)))=300 ``` ### Steps to reproduce the bug loa...
open
https://github.com/huggingface/datasets/issues/6748
2024-03-22T01:49:13
2024-03-22T16:43:57
null
{ "login": "Luciennnnnnn", "id": 20135317, "type": "User" }
[]
false
[]
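A likely explanation for the slicing surprise in #6748 above: indexing a `Dataset` with a slice returns a plain dict of columns, so `len()` counts the columns (two in the report), not the rows, while `.select()` returns a `Dataset`. A plain-dict sketch of the pitfall (illustration only, not `datasets` internals):

```python
# A column-oriented batch is a dict keyed by column name, so len()
# counts columns, not rows.
batch = {"text": ["a"] * 300, "label": [0] * 300}  # what ds[:300] resembles
print(len(batch))          # 2   -> number of columns
print(len(batch["text"]))  # 300 -> number of rows
```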
2,201,219,384
6,747
chore(deps): bump fsspec
There were a few fixes released recently, some DVC ecosystem packages require newer version of `fsspec`.
closed
https://github.com/huggingface/datasets/pull/6747
2024-03-21T21:25:49
2024-03-22T16:40:15
2024-03-22T16:28:40
{ "login": "shcheklein", "id": 3659196, "type": "User" }
[]
true
[]
2,198,993,949
6,746
ExpectedMoreSplits error when loading C4 dataset
### Describe the bug I encounter bug when running the example command line ```python python main.py \ --model decapoda-research/llama-7b-hf \ --prune_method wanda \ --sparsity_ratio 0.5 \ --sparsity_type unstructured \ --save out/llama_7b/unstructured/wanda/ ``` The bug occurred ...
closed
https://github.com/huggingface/datasets/issues/6746
2024-03-21T02:53:04
2024-09-18T19:57:14
2024-07-29T07:21:08
{ "login": "billwang485", "id": 65165345, "type": "User" }
[]
false
[]
2,198,541,732
6,745
Scraping the whole of github including private repos is bad; kindly stop
### Feature request https://github.com/bigcode-project/opt-out-v2 - opt out is not consent. kindly quit this ridiculous nonsense. ### Motivation [EDITED: insults not tolerated] ### Your contribution [EDITED: insults not tolerated]
closed
https://github.com/huggingface/datasets/issues/6745
2024-03-20T20:54:06
2024-03-21T12:28:04
2024-03-21T10:24:56
{ "login": "ghost", "id": 10137, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,197,910,168
6,744
Option to disable file locking
### Feature request Commands such as `load_dataset` creates file locks with `filelock.FileLock`. It would be good if there was a way to disable this. ### Motivation File locking doesn't work on all file-systems (in my case NFS mounted Weka). If the `cache_dir` only had small files then it would be possible to point ...
open
https://github.com/huggingface/datasets/issues/6744
2024-03-20T15:59:45
2024-03-20T15:59:45
null
{ "login": "VRehnberg", "id": 35767167, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,195,481,697
6,743
Allow null values in dict columns
Fix #6738
closed
https://github.com/huggingface/datasets/pull/6743
2024-03-19T16:54:22
2024-04-08T13:08:42
2024-03-19T20:05:19
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
2,195,134,854
6,742
Fix missing download_config in get_data_patterns
Reported in https://github.com/huggingface/datasets-server/issues/2607
closed
https://github.com/huggingface/datasets/pull/6742
2024-03-19T14:29:25
2024-03-19T18:24:39
2024-03-19T18:15:13
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,194,626,108
6,741
Fix offline mode with single config
Reported in https://github.com/huggingface/datasets/issues/4760 The cache was not able to reload a dataset with a single config form the cache if the config name is not specificed For example ```python from datasets import load_dataset, config config.HF_DATASETS_OFFLINE = True load_dataset("openai_human...
closed
https://github.com/huggingface/datasets/pull/6741
2024-03-19T10:48:32
2024-03-25T16:35:21
2024-03-25T16:23:59
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,193,172,074
6,740
Support for loading geotiff files as a part of the ImageFolder
### Feature request Request for adding rasterio support to load geotiff as a part of ImageFolder, instead of using PIL ### Motivation As of now, there are many datasets in HuggingFace Hub which are predominantly focussed towards RemoteSensing or are from RemoteSensing. The current ImageFolder (if I have understood c...
closed
https://github.com/huggingface/datasets/issues/6740
2024-03-18T20:00:39
2024-03-27T18:19:48
2024-03-27T18:19:20
{ "login": "sunny1401", "id": 31362090, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,192,730,134
6,739
Transpose images with EXIF Orientation tag
Closes https://github.com/huggingface/datasets/issues/6252
closed
https://github.com/huggingface/datasets/pull/6739
2024-03-18T16:43:06
2025-07-03T11:33:18
2024-03-19T15:29:42
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
2,192,386,536
6,738
Dict feature is non-nullable while nested dict feature is
When i try to create a `Dataset` object with None values inside a dict column, like this: ```python from datasets import Dataset, Features, Value Dataset.from_dict( { "dict": [{"a": 0, "b": 0}, None], }, features=Features( {"dict": {"a": Value("int16"), "b": Value("int16")}} ) ) ...
closed
https://github.com/huggingface/datasets/issues/6738
2024-03-18T14:31:47
2024-03-20T10:24:15
2024-03-19T20:05:20
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
2,190,198,425
6,737
Invalid pattern: '**' can only be an entire path component
### Describe the bug ValueError: Invalid pattern: '**' can only be an entire path component when loading any dataset ### Steps to reproduce the bug import datasets ds = datasets.load_dataset("TokenBender/code_instructions_122k_alpaca_style") ### Expected behavior loading the dataset successfully ### Environm...
closed
https://github.com/huggingface/datasets/issues/6737
2024-03-16T19:28:46
2024-07-23T14:23:28
2024-05-13T11:32:57
{ "login": "JPonsa", "id": 28976175, "type": "User" }
[]
false
[]
2,190,181,422
6,736
Mosaic Streaming (MDS) Support
### Feature request I'm a huge fan of the current HF Datasets `webdataset` integration (especially the built-in streaming support). However, I'd love to upload some robotics and multimodal datasets I've processed for use with [Mosaic Streaming](https://docs.mosaicml.com/projects/streaming/en/stable/), specifically the...
open
https://github.com/huggingface/datasets/issues/6736
2024-03-16T18:42:04
2024-03-18T15:13:34
null
{ "login": "siddk", "id": 2498509, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,189,132,932
6,735
Add `mode` parameter to `Image` feature
Fix https://github.com/huggingface/datasets/issues/6675
closed
https://github.com/huggingface/datasets/pull/6735
2024-03-15T17:21:12
2024-03-18T15:47:48
2024-03-18T15:41:33
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
2,187,646,694
6,734
Tokenization slows towards end of dataset
### Describe the bug Mapped tokenization slows down substantially towards end of dataset. train set started off very slow, caught up to 20k then tapered off til the end. what's particularly strange is that the tokenization crashed a few times before due to errors with invalid tokens somewhere or corrupted down...
open
https://github.com/huggingface/datasets/issues/6734
2024-03-15T03:27:36
2025-02-20T17:40:54
null
{ "login": "ethansmith2000", "id": 98723285, "type": "User" }
[]
false
[]
2,186,811,724
6,733
EmptyDatasetError when loading dataset downloaded with HuggingFace cli
### Describe the bug I am using a cluster that does not have access to the internet when given a job. I tried downloading the dataset using the huggingface-cli command and then loading it with load_dataset but I get an error: ```raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files...
open
https://github.com/huggingface/datasets/issues/6733
2024-03-14T16:41:27
2024-03-15T18:09:02
null
{ "login": "StwayneXG", "id": 77196999, "type": "User" }
[]
false
[]
2,182,844,673
6,731
Unexpected behavior when using load_dataset with streaming=True in a for loop
### Describe the bug ### My Code ``` from datasets import load_dataset res=[] for i in [0,1]: di=load_dataset( "json", data_files='path_to.json', split='train', streaming=True, ).map(lambda x: {"source": i}) res.append(di) for e in res[...
closed
https://github.com/huggingface/datasets/issues/6731
2024-03-12T23:26:43
2024-04-16T00:00:00
2024-04-16T00:00:00
{ "login": "uApiv", "id": 42908296, "type": "User" }
[]
false
[]
2,181,881,499
6,730
Deprecate Pandas builder
The Pandas packaged builder is undocumented and relies on `pickle` to read the data, making it **unsafe**. Moreover, I haven't seen a single instance of this builder being used (not even using the GH/Hub search), so we should deprecate it.
closed
https://github.com/huggingface/datasets/pull/6730
2024-03-12T15:12:13
2024-03-12T17:42:33
2024-03-12T17:36:24
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
2,180,237,159
6,729
Support zipfiles that span multiple disks?
See https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream The dataset viewer gives the following error: ``` Error code: ConfigNamesError Exception: BadZipFile Message: zipfiles that span multiple disks are not supported Traceback: Traceback (most recent call last): F...
closed
https://github.com/huggingface/datasets/issues/6729
2024-03-11T21:07:41
2024-06-26T05:08:59
2024-06-26T05:05:28
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "question", "color": "d876e3" } ]
false
[]
2,178,607,012
6,728
Issue Downloading Certain Datasets After Setting Custom `HF_ENDPOINT`
### Describe the bug This bug is triggered under the following conditions: - datasets repo ids without organization names trigger errors, such as `bookcorpus`, `gsm8k`, `wikipedia`, rather than in the form of `A/B`. - If `HF_ENDPOINT` is set and the hostname is not in the form of `(hub-ci.)?huggingface.co`. - T...
closed
https://github.com/huggingface/datasets/issues/6728
2024-03-11T09:06:38
2024-03-15T14:52:07
2024-03-15T14:52:07
{ "login": "padeoe", "id": 10057041, "type": "User" }
[]
false
[]
2,177,826,110
6,727
Using a registry instead of calling globals for fetching feature types
Hello, When working with bio-data, each feature often has metadata associated with it (e.g. species, lineage, snp position, etc). To store this, I like to use the feature classes with the added `metadata` attribute. However, when saving or loading with custom features, you get an error since that class doesn't exist...
closed
https://github.com/huggingface/datasets/pull/6727
2024-03-10T17:47:51
2024-03-13T12:08:49
2024-03-13T10:46:02
{ "login": "psmyth94", "id": 11325244, "type": "User" }
[]
true
[]
2,177,097,232
6,726
Profiling for HF Filesystem shows there are easy performance gains to be made
### Describe the bug # Let's make it faster First, an evidence... ![image](https://github.com/huggingface/datasets/assets/159512661/a703a82c-43a0-426c-9d99-24c563d70965) Figure 1: CProfile for loading 3 files from cerebras/SlimPajama-627B train split, and 3 files from test split using streaming=True. X axis is 1106...
open
https://github.com/huggingface/datasets/issues/6726
2024-03-09T07:08:45
2024-03-09T07:11:08
null
{ "login": "awgr", "id": 159512661, "type": "User" }
[]
false
[]
2,175,527,530
6,725
Request for a comparison of huggingface datasets compared with other data format especially webdataset
### Feature request Request for a comparison of huggingface datasets compared with other data format especially webdataset ### Motivation I see huggingface datasets uses Apache Arrow as its backend, it seems to be great, but I'm curious about how it is good compared with other dataset format, like webdataset, what's...
open
https://github.com/huggingface/datasets/issues/6725
2024-03-08T08:23:01
2024-03-08T08:23:01
null
{ "login": "Luciennnnnnn", "id": 20135317, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,174,398,227
6,724
Dataset with loading script does not work in renamed repos
### Describe the bug My data repository was first called `BramVanroy/hplt-mono-v1-2` but I then renamed to use underscores instead of dashes. However, it seems that `datasets` retrieves the old repo name when it checks whether the repo contains data loading scripts in this line. https://github.com/huggingface/dat...
open
https://github.com/huggingface/datasets/issues/6724
2024-03-07T17:38:38
2024-03-07T20:06:25
null
{ "login": "BramVanroy", "id": 2779410, "type": "User" }
[]
false
[]
2,174,344,456
6,723
get_dataset_default_config_name docstring
fix https://github.com/huggingface/datasets/pull/6722
closed
https://github.com/huggingface/datasets/pull/6723
2024-03-07T17:09:29
2024-03-07T17:27:29
2024-03-07T17:21:20
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,174,332,127
6,722
Add details in docstring
see https://github.com/huggingface/datasets-server/pull/2554#discussion_r1516516867
closed
https://github.com/huggingface/datasets/pull/6722
2024-03-07T17:02:07
2024-03-07T17:21:10
2024-03-07T17:21:08
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
2,173,931,714
6,721
Hi,do you know how to load the dataset from local file now?
Hi, if I want to load the dataset from local file, then how to specify the configuration name? _Originally posted by @WHU-gentle in https://github.com/huggingface/datasets/issues/2976#issuecomment-1333455222_
open
https://github.com/huggingface/datasets/issues/6721
2024-03-07T13:58:40
2024-03-31T08:09:25
null
{ "login": "Gera001", "id": 50232044, "type": "User" }
[]
false
[]
2,173,603,459
6,720
TypeError: 'str' object is not callable
### Describe the bug I am trying to get the HPLT datasets on the hub. Downloading/re-uploading would be too time- and resource consuming so I wrote [a dataset loader script](https://huggingface.co/datasets/BramVanroy/hplt_mono_v1_2/blob/main/hplt_mono_v1_2.py). I think I am very close but for some reason I always get ...
closed
https://github.com/huggingface/datasets/issues/6720
2024-03-07T11:07:09
2024-03-08T07:34:53
2024-03-07T15:13:58
{ "login": "BramVanroy", "id": 2779410, "type": "User" }
[]
false
[]
2,169,585,727
6,719
Is there any way to solve hanging of IterableDataset using split by node + filtering during inference
### Describe the bug I am using an iterable dataset in a multi-node setup, trying to do training/inference while filtering the data on the fly. I usually do not use `split_dataset_by_node` but it is very slow using the IterableDatasetShard in `accelerate` and `transformers`. When I filter after applying `split_dataset...
open
https://github.com/huggingface/datasets/issues/6719
2024-03-05T15:55:13
2024-03-05T15:55:13
null
{ "login": "ssharpe42", "id": 8136905, "type": "User" }
[]
false
[]
2,169,468,488
6,718
Fix concurrent script loading with force_redownload
I added `lock_importable_file` in `get_dataset_builder_class` and `extend_dataset_builder_for_streaming` to fix the issue, and I also added a test cc @clefourrier
closed
https://github.com/huggingface/datasets/pull/6718
2024-03-05T15:04:20
2024-03-07T14:05:53
2024-03-07T13:58:04
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,168,726,432
6,717
`remove_columns` method used with a streaming enable dataset mode produces a LibsndfileError on multichannel audio
### Describe the bug When loading a HF dataset in streaming mode and removing some columns, it is impossible to load a sample if the audio contains more than one channel. I have the impression that the time axis and channels are swapped or concatenated. ### Steps to reproduce the bug Minimal error code: ```python ...
open
https://github.com/huggingface/datasets/issues/6717
2024-03-05T09:33:26
2024-08-14T17:54:20
null
{ "login": "jhauret", "id": 53187038, "type": "User" }
[]
false
[]
2,168,706,558
6,716
Non-deterministic `Dataset.builder_name` value
### Describe the bug I'm not sure if this is a bug, but `print(ds.builder_name)` in the following code sometimes prints out `rotten_tomatoes` instead of `parquet`: ```python import datasets for _ in range(100): ds = datasets.load_dataset("rotten_tomatoes", split="train") print(ds.builder_name) # pr...
closed
https://github.com/huggingface/datasets/issues/6716
2024-03-05T09:23:21
2024-03-19T07:58:14
2024-03-19T07:58:14
{ "login": "harupy", "id": 17039389, "type": "User" }
[]
false
[]
2,167,747,095
6,715
Fix sliced ConcatenationTable pickling with mixed schemas vertically
A sliced + pickled ConcatenationTable could end up with a different schema than the original schema, if the slice only contains blocks with only a subset of the columns. This can lead to issues when saving datasets from a concatenation of datasets with mixed schemas Reported in https://discuss.huggingface.co/t/da...
closed
https://github.com/huggingface/datasets/pull/6715
2024-03-04T21:02:07
2024-03-05T11:23:05
2024-03-05T11:17:04
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,167,569,080
6,714
Expand no-code dataset info with datasets-server info
E.g., to have info about a dataset's number of examples for more informative TQDM bars.
closed
https://github.com/huggingface/datasets/pull/6714
2024-03-04T19:18:10
2024-03-04T20:28:30
2024-03-04T20:22:15
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
2,166,797,560
6,713
Bump huggingface-hub lower version to 0.21.2
This should fix the version compatibility issue when using `huggingface_hub` < 0.21.2 and latest fsspec (>=2023.12.0). See my comment: https://github.com/huggingface/datasets/pull/6687#issuecomment-1976493336 >> EDIT: the fix has been released in `huggingface_hub` 0.21.2 - I removed my commits that were using `hugg...
closed
https://github.com/huggingface/datasets/pull/6713
2024-03-04T13:00:52
2024-03-04T18:14:03
2024-03-04T18:06:05
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,166,588,373
6,712
fix CastError pickling
reported in https://discuss.huggingface.co/t/datasetdict-save-to-disk-with-num-proc-1-seems-to-hang-with-error/75595
closed
https://github.com/huggingface/datasets/pull/6712
2024-03-04T11:14:18
2024-03-04T20:23:47
2024-03-04T20:17:17
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,165,507,817
6,711
3x Faster Text Preprocessing
I was preparing some datasets for AI training and noticed that `datasets` by HuggingFace uses the conventional `open` mechanism to read the file and split it into chunks. I thought it can be significantly accelerated, and [started with a benchmark](https://gist.github.com/ashvardanian/55c2052e9f78b05b8d614aa90cb12347):...
open
https://github.com/huggingface/datasets/pull/6711
2024-03-03T19:03:04
2024-06-26T06:28:14
null
{ "login": "ashvardanian", "id": 1983160, "type": "User" }
[]
true
[]
2,164,781,564
6,710
Persist IterableDataset epoch in workers
Use shared memory for the IterableDataset epoch. This way calling `ds.set_epoch()` in the main process will update the epoch in the DataLoader workers as well. This is useful especially because the epoch is used to compute the `effective_seed` used for shuffling. I used torch's shared memory in case users want t...
closed
https://github.com/huggingface/datasets/pull/6710
2024-03-02T12:08:50
2024-07-01T17:51:25
2024-07-01T17:45:30
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,164,169,913
6,709
set dev version
null
closed
https://github.com/huggingface/datasets/pull/6709
2024-03-01T21:01:14
2024-03-01T21:07:35
2024-03-01T21:01:23
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,164,158,579
6,708
Release: 2.18.0
null
closed
https://github.com/huggingface/datasets/pull/6708
2024-03-01T20:52:17
2024-03-01T21:03:01
2024-03-01T20:56:50
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,163,799,868
6,707
Silence ruff deprecation messages
null
closed
https://github.com/huggingface/datasets/pull/6707
2024-03-01T16:52:29
2024-03-01T17:32:14
2024-03-01T17:25:46
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
2,163,783,123
6,706
Update ruff
null
closed
https://github.com/huggingface/datasets/pull/6706
2024-03-01T16:44:58
2024-03-01T17:02:13
2024-03-01T16:52:17
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,163,768,640
6,705
Fix data_files when passing data_dir
This code should not return empty data files ```python from datasets import load_dataset_builder revision = "3d406e70bc21c3ca92a9a229b4c6fc3ed88279fd" b = load_dataset_builder("bigcode/the-stack-v2-dedup", data_dir="data/Dockerfile", revision=revision) print(b.config.data_files) ``` Previously it would ret...
closed
https://github.com/huggingface/datasets/pull/6705
2024-03-01T16:38:53
2024-03-01T18:59:06
2024-03-01T18:52:49
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,163,752,391
6,704
Improve default patterns resolution
Separate the default patterns that match directories from the ones matching files and ensure directories are checked first (reverts the change from https://github.com/huggingface/datasets/pull/6244, which merged these patterns). Also, ensure that the glob patterns do not overlap to avoid duplicates in the result. A...
closed
https://github.com/huggingface/datasets/pull/6704
2024-03-01T16:31:25
2024-04-23T09:43:09
2024-03-15T15:22:03
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
2,163,250,590
6,703
Unable to load dataset that was saved with `save_to_disk`
### Describe the bug I get the following error message: You are trying to load a dataset that was saved using `save_to_disk`. Please use `load_from_disk` instead. ### Steps to reproduce the bug 1. Save a dataset with `save_to_disk` 2. Try to load it with `load_datasets` ### Expected behavior I am ab...
closed
https://github.com/huggingface/datasets/issues/6703
2024-03-01T11:59:56
2024-03-04T13:46:20
2024-03-04T13:46:20
{ "login": "casper-hansen", "id": 27340033, "type": "User" }
[]
false
[]
2,161,938,484
6,702
Push samples to dataset on hub without having the dataset locally
### Feature request Say I have the following code: ``` from datasets import Dataset import pandas as pd new_data = { "column_1": ["value1", "value2"], "column_2": ["value3", "value4"], } df_new = pd.DataFrame(new_data) dataset_new = Dataset.from_pandas(df_new) # add these samples to a remote datase...
closed
https://github.com/huggingface/datasets/issues/6702
2024-02-29T19:17:12
2024-03-08T21:08:38
2024-03-08T21:08:38
{ "login": "jbdel", "id": 17854096, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,161,448,017
6,701
Base parquet batch_size on parquet row group size
This allows to stream datasets like [Major-TOM/Core-S2L2A](https://huggingface.co/datasets/Major-TOM/Core-S2L2A) which have row groups with few rows (one row is ~10MB). Previously the cold start would take a lot of time and OOM because it would download many row groups before yielding the first example. I tried on O...
closed
https://github.com/huggingface/datasets/pull/6701
2024-02-29T14:53:01
2024-02-29T15:15:18
2024-02-29T15:08:55
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,158,871,038
6,700
remove_columns is not in-place but the doc shows it is in-place
### Describe the bug The doc of `datasets` v2.17.0/v2.17.1 shows that `remove_columns` is in-place. [link](https://huggingface.co/docs/datasets/v2.17.1/en/package_reference/main_classes#datasets.DatasetDict.remove_columns) In the text classification example of transformers v4.38.1, the columns are not removed. h...
closed
https://github.com/huggingface/datasets/issues/6700
2024-02-28T12:36:22
2024-04-02T17:15:28
2024-04-02T17:15:28
{ "login": "shelfofclub", "id": 32047804, "type": "User" }
[]
false
[]
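The behavior clarified in #6700 above is that `remove_columns` returns a new object rather than mutating the original. A plain-dict sketch of the same contract (toy function, not the `datasets` implementation):

```python
# Non-in-place column removal: build and return a new table, leave the
# input untouched.
def remove_columns(table, names):
    return {k: v for k, v in table.items() if k not in names}

table = {"text": ["a"], "label": [0]}
smaller = remove_columns(table, ["label"])
print(sorted(table), sorted(smaller))  # ['label', 'text'] ['text']
```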
2,158,152,341
6,699
`Dataset` unexpected changed dict data and may cause error
### Describe the bug Will unexpected get keys with `None` value in the parsed json dict. ### Steps to reproduce the bug ```jsonl test.jsonl {"id": 0, "indexs": {"-1": [0, 10]}} {"id": 1, "indexs": {"-1": [0, 10]}} ``` ```python dataset = Dataset.from_json('.test.jsonl') print(dataset[0]) ``` Result: ```...
open
https://github.com/huggingface/datasets/issues/6699
2024-02-28T05:30:10
2024-02-28T19:14:36
null
{ "login": "scruel", "id": 16933298, "type": "User" }
[]
false
[]
2,157,752,392
6,698
Faster `xlistdir`
Pass `detail=False` to the `fsspec` `listdir` to avoid unnecessarily fetching expensive metadata about the paths.
closed
https://github.com/huggingface/datasets/pull/6698
2024-02-27T22:55:08
2024-02-27T23:44:49
2024-02-27T23:38:14
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
2,157,322,224
6,697
Unable to Load Dataset in Kaggle
### Describe the bug Having installed the latest versions of transformers==4.38.1 and datasets==2.17.1 Unable to load the dataset in a kaggle notebook. Get this Error: ``` --------------------------------------------------------------------------- ValueError Traceback (most recen...
closed
https://github.com/huggingface/datasets/issues/6697
2024-02-27T18:19:34
2024-02-29T17:32:42
2024-02-29T17:32:41
{ "login": "vrunm", "id": 97465624, "type": "User" }
[]
false
[]
2,154,161,357
6,696
Make JSON builder support an array of strings
Support JSON file with an array of strings. Fix #6695.
closed
https://github.com/huggingface/datasets/pull/6696
2024-02-26T13:18:31
2024-02-28T06:45:23
2024-02-28T06:39:12
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,154,075,509
6,695
Support JSON file with an array of strings
Support loading a dataset from a JSON file with an array of strings. See: https://huggingface.co/datasets/CausalLM/Refined-Anime-Text/discussions/1
closed
https://github.com/huggingface/datasets/issues/6695
2024-02-26T12:35:11
2024-03-08T14:16:25
2024-02-28T06:39:13
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,153,086,984
6,694
__add__ for Dataset, IterableDataset
It's too cumbersome to write this command every time we perform a dataset merging operation. ```python from datasets import concatenate_datasets``` We have added a simple `__add__` magic method to each class using `concatenate_datasets`. ```python from datasets import load_dataset bookcorpus = load_dataset("bookc...
open
https://github.com/huggingface/datasets/pull/6694
2024-02-26T01:46:55
2024-02-29T16:52:58
null
{ "login": "oh-gnues-iohc", "id": 79557937, "type": "User" }
[]
true
[]
2,152,887,712
6,693
Update the print message for chunked_dataset in process.mdx
Update documentation to align with `Dataset.__repr__` change after #423
closed
https://github.com/huggingface/datasets/pull/6693
2024-02-25T18:37:07
2024-02-25T19:57:12
2024-02-25T19:51:02
{ "login": "gzbfgjf2", "id": 142939562, "type": "User" }
[]
true
[]
2,152,270,987
6,692
Enhancement: Enable loading TSV files in load_dataset()
Fix #6691
closed
https://github.com/huggingface/datasets/pull/6692
2024-02-24T11:38:59
2024-02-26T15:33:50
2024-02-26T07:14:03
{ "login": "harsh1504660", "id": 77767961, "type": "User" }
[]
true
[]
2,152,134,041
6,691
load_dataset() does not support tsv
### Feature request the load_dataset() for local functions support file types like csv, json etc but not of type tsv (tab separated values). ### Motivation can't easily load files of type tsv, have to convert them to another type like csv then load ### Your contribution Can try by raising a PR with a little help, c...
closed
https://github.com/huggingface/datasets/issues/6691
2024-02-24T05:56:04
2024-02-26T07:15:07
2024-02-26T07:09:35
{ "login": "dipsivenkatesh", "id": 26873178, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,150,800,065
6,690
Add function to convert a script-dataset to Parquet
Add function to convert a script-dataset to Parquet and push it to the Hub, analogously to the Space: "Convert a Hugging Face dataset to Parquet"
closed
https://github.com/huggingface/datasets/issues/6690
2024-02-23T10:28:20
2024-04-12T15:27:05
2024-04-12T15:27:05
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,149,581,147
6,689
.load_dataset() method defaults to zstandard
### Describe the bug Regardless of what method I use, datasets defaults to zstandard for unpacking my datasets. This is poor behavior, because not only is zstandard not a dependency in the huggingface package (and therefore, your dataset loading will be interrupted while it asks you to install the package), but it ...
closed
https://github.com/huggingface/datasets/issues/6689
2024-02-22T17:39:27
2024-03-07T14:54:16
2024-03-07T14:54:15
{ "login": "ElleLeonne", "id": 87243032, "type": "User" }
[]
false
[]
2,148,609,859
6,688
Tensor type (e.g. from `return_tensors`) ignored in map
### Describe the bug I don't know if it is a bug or an expected behavior, but the tensor type seems to be ignored after applying map. For example, mapping over to tokenize text with a transformers' tokenizer always returns lists and it ignore the `return_tensors` argument. If this is an expected behaviour (e.g., fo...
open
https://github.com/huggingface/datasets/issues/6688
2024-02-22T09:27:57
2024-02-22T15:56:21
null
{ "login": "srossi93", "id": 11166137, "type": "User" }
[]
false
[]
2,148,554,178
6,687
fsspec: support fsspec>=2023.12.0 glob changes
- adds support for the `fs.glob` changes introduced in `fsspec==2023.12.0` and unpins the current upper bound Should close #6644 Should close #6645 The `test_data_files` glob/pattern tests pass for me in: - `fsspec==2023.10.0` (the pinned max version in datasets `main`) - `fsspec==2023.12.0` (#6644) - `fsspec...
closed
https://github.com/huggingface/datasets/pull/6687
2024-02-22T08:59:32
2024-03-04T12:59:42
2024-02-29T15:12:17
{ "login": "pmrowla", "id": 651988, "type": "User" }
[]
true
[]
2,147,795,103
6,686
Question: Is there any way for uploading a large image dataset?
I am uploading an image dataset like this: ``` dataset = load_dataset( "json", data_files={"train": "data/custom_dataset/train.json", "validation": "data/custom_dataset/val.json"}, ) dataset = dataset.cast_column("images", Sequence(Image())) dataset.push_to_hub("StanfordAIMI/custom_dataset", max_shard_si...
open
https://github.com/huggingface/datasets/issues/6686
2024-02-21T22:07:21
2024-05-02T03:44:59
null
{ "login": "zhjohnchan", "id": 37367987, "type": "User" }
[]
false
[]
2,145,570,006
6,685
Updated Quickstart Notebook link
Fixed Quickstart Notebook Link in the [Overview notebook](https://github.com/huggingface/datasets/blob/main/notebooks/Overview.ipynb)
closed
https://github.com/huggingface/datasets/pull/6685
2024-02-21T01:04:18
2024-03-12T21:31:04
2024-02-25T18:48:08
{ "login": "Codeblockz", "id": 55932554, "type": "User" }
[]
true
[]
2,144,092,388
6,684
Improve error message for gated datasets on load
Internal Slack discussion: https://huggingface.slack.com/archives/C02V51Q3800/p1708424971135029
closed
https://github.com/huggingface/datasets/pull/6684
2024-02-20T10:51:27
2024-02-20T15:40:52
2024-02-20T15:33:56
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
2,142,751,955
6,683
Fix imagefolder dataset url
null
closed
https://github.com/huggingface/datasets/pull/6683
2024-02-19T16:26:51
2024-02-19T17:24:25
2024-02-19T17:18:10
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
2,142,000,800
6,682
Update GitHub Actions to Node 20
Update GitHub Actions to Node 20. Fix #6679.
closed
https://github.com/huggingface/datasets/pull/6682
2024-02-19T10:10:50
2024-02-28T07:02:40
2024-02-28T06:56:34
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,141,985,239
6,681
Update release instructions
Update release instructions.
closed
https://github.com/huggingface/datasets/pull/6681
2024-02-19T10:03:08
2024-02-28T07:23:49
2024-02-28T07:17:22
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "maintenance", "color": "d4c5f9" } ]
true
[]
2,141,979,527
6,680
Set dev version
null
closed
https://github.com/huggingface/datasets/pull/6680
2024-02-19T10:00:31
2024-02-19T10:06:43
2024-02-19T10:00:40
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,141,953,981
6,679
Node.js 16 GitHub Actions are deprecated
`Node.js` 16 GitHub Actions are deprecated. See: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/ We should update them to Node 20. See warnings in our CI, e.g.: https://github.com/huggingface/datasets/actions/runs/7957295009?pr=6678 > Node.js 16 actions are deprecat...
closed
https://github.com/huggingface/datasets/issues/6679
2024-02-19T09:47:37
2024-02-28T06:56:35
2024-02-28T06:56:35
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "maintenance", "color": "d4c5f9" } ]
false
[]
2,141,902,154
6,678
Release: 2.17.1
null
closed
https://github.com/huggingface/datasets/pull/6678
2024-02-19T09:24:29
2024-02-19T10:03:00
2024-02-19T09:56:52
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,141,244,167
6,677
Pass through information about location of cache directory.
If cache directory is set, information is not passed through. Pass download config in as an arg too.
closed
https://github.com/huggingface/datasets/pull/6677
2024-02-18T23:48:57
2024-02-28T18:57:39
2024-02-28T18:51:15
{ "login": "stridge-cruxml", "id": 94808782, "type": "User" }
[]
true
[]
2,140,648,619
6,676
Can't Read List of JSON Files Properly
### Describe the bug Trying to read a bunch of JSON files into Dataset class but default approach doesn't work. I don't get why it works when I read it one by one but not when I pass as a list :man_shrugging: The code fails with ``` ArrowInvalid: JSON parse error: Invalid value. in row 0 UnicodeDecodeError...
open
https://github.com/huggingface/datasets/issues/6676
2024-02-17T22:58:15
2024-03-02T20:47:22
null
{ "login": "lordsoffallen", "id": 20232088, "type": "User" }
[]
false
[]
2,139,640,381
6,675
Allow image model (color conversion) to be specified as part of datasets Image() decode
### Feature request Typical torchvision / torch Datasets in image applications apply color conversion in the Dataset portion of the code as part of image decode, separately from the image transform stack. This is true for PIL.Image where convert is usually called in dataset, for native torchvision https://pytorch.or...
closed
https://github.com/huggingface/datasets/issues/6675
2024-02-16T23:43:20
2024-03-18T15:41:34
2024-03-18T15:41:34
{ "login": "rwightman", "id": 5702664, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,139,595,576
6,674
Depprcated Overview.ipynb Link to new Quickstart Notebook invalid
### Describe the bug For the deprecated notebook found [here](https://github.com/huggingface/datasets/blob/main/notebooks/Overview.ipynb). The link to the new notebook is broken. ### Steps to reproduce the bug Click the [Quickstart notebook](https://github.com/huggingface/notebooks/blob/main/datasets_doc/quicksta...
closed
https://github.com/huggingface/datasets/issues/6674
2024-02-16T22:51:35
2024-02-25T18:48:09
2024-02-25T18:48:09
{ "login": "Codeblockz", "id": 55932554, "type": "User" }
[]
false
[]
2,139,522,827
6,673
IterableDataset `set_epoch` is ignored when DataLoader `persistent_workers=True`
### Describe the bug When persistent workers are enabled, the epoch that's set via the IterableDataset instance held by the training process is ignored by the workers as they are disconnected across processes. PyTorch samplers for non-iterable datasets have a mechanism to sync this, datasets.IterableDataset does ...
closed
https://github.com/huggingface/datasets/issues/6673
2024-02-16T21:38:12
2024-07-01T17:45:31
2024-07-01T17:45:31
{ "login": "rwightman", "id": 5702664, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "streaming", "color": "fef2c0" } ]
false
[]
2,138,732,288
6,672
Remove deprecated verbose parameter from CSV builder
Remove deprecated `verbose` parameter from CSV builder. Note that the `verbose` parameter is deprecated since pandas 2.2.0. See: - https://github.com/pandas-dev/pandas/pull/56556 - https://github.com/pandas-dev/pandas/pull/57450 Fix #6671.
closed
https://github.com/huggingface/datasets/pull/6672
2024-02-16T14:26:21
2024-02-19T09:26:34
2024-02-19T09:20:22
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,138,727,870
6,671
CSV builder raises deprecation warning on verbose parameter
CSV builder raises a deprecation warning on `verbose` parameter: ``` FutureWarning: The 'verbose' keyword in pd.read_csv is deprecated and will be removed in a future version. ``` See: - https://github.com/pandas-dev/pandas/pull/56556 - https://github.com/pandas-dev/pandas/pull/57450
closed
https://github.com/huggingface/datasets/issues/6671
2024-02-16T14:23:46
2024-02-19T09:20:23
2024-02-19T09:20:23
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
false
[]
2,138,372,958
6,670
ValueError
### Describe the bug ValueError Traceback (most recent call last) [<ipython-input-11-9b99bc80ec23>](https://localhost:8080/#) in <cell line: 11>() 9 import numpy as np 10 import matplotlib.pyplot as plt ---> 11 from datasets import DatasetDict, Dataset 12 from transf...
closed
https://github.com/huggingface/datasets/issues/6670
2024-02-16T11:05:17
2024-02-17T04:26:34
2024-02-16T14:43:53
{ "login": "prashanth19bolukonda", "id": 112316000, "type": "User" }
[]
false
[]
2,138,322,662
6,669
attribute error when writing trainer.train()
### Describe the bug AttributeError Traceback (most recent call last) Cell In[39], line 2 1 # Start the training process ----> 2 trainer.train() File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1539, in Trainer.train(self, resume_from_checkpoint, trial, ignore...
closed
https://github.com/huggingface/datasets/issues/6669
2024-02-16T10:40:49
2024-03-01T10:58:00
2024-02-29T17:25:17
{ "login": "prashanth19bolukonda", "id": 112316000, "type": "User" }
[]
false
[]
2,137,859,935
6,668
Chapter 6 - Issue Loading `cnn_dailymail` dataset
### Describe the bug So I am getting this bug when I try to run cell 4 of the Chapter 6 notebook code: `dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0")` Error Message: ``` --------------------------------------------------------------------------- ValueError Tracebac...
open
https://github.com/huggingface/datasets/issues/6668
2024-02-16T04:40:56
2024-02-16T04:40:56
null
{ "login": "hariravichandran", "id": 34660389, "type": "User" }
[]
false
[]
2,137,769,552
6,667
Default config for squad is incorrect
### Describe the bug If you download Squad, it will download the plain_text version, but the config still specifies "default", so if you set the offline mode the cache will try to look it up according to the config_id which is "default" and this will say; ValueError: Couldn't find cache for squad for config 'default'...
open
https://github.com/huggingface/datasets/issues/6667
2024-02-16T02:36:55
2024-02-23T09:10:00
null
{ "login": "kiddyboots216", "id": 22651617, "type": "User" }
[]
false
[]
2,136,136,425
6,665
Allow SplitDict setitem to replace existing SplitInfo
Fix this code provided by @clefourrier ```python import datasets import os token = os.getenv("TOKEN") results = datasets.load_dataset("gaia-benchmark/results_public", "2023", token=token, download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD) results["test"] = datasets.Dataset.from_list([row for row in resu...
closed
https://github.com/huggingface/datasets/pull/6665
2024-02-15T10:17:08
2024-03-01T16:02:46
2024-03-01T15:56:38
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,135,483,978
6,664
Revert the changes in `arrow_writer.py` from #6636
#6636 broke `write_examples_on_file` and `write_batch` from the class `ArrowWriter`. I'm undoing these changes. See #6663. Note the current implementation doesn't keep the order of the columns and the schema, thus setting a wrong schema for each column.
closed
https://github.com/huggingface/datasets/pull/6664
2024-02-15T01:47:33
2024-02-16T14:02:39
2024-02-16T02:31:11
{ "login": "bryant1410", "id": 3905501, "type": "User" }
[]
true
[]
2,135,480,811
6,663
`write_examples_on_file` and `write_batch` are broken in `ArrowWriter`
### Describe the bug `write_examples_on_file` and `write_batch` are broken in `ArrowWriter` since #6636. The order between the columns and the schema is not preserved anymore. So these functions don't work anymore unless the order happens to align well. ### Steps to reproduce the bug Try to do `write_batch` with any...
closed
https://github.com/huggingface/datasets/issues/6663
2024-02-15T01:43:27
2024-02-16T09:25:00
2024-02-16T09:25:00
{ "login": "bryant1410", "id": 3905501, "type": "User" }
[]
false
[]
2,132,425,812
6,662
fix: show correct package name to install biopython
When you try to download a dataset that uses [biopython](https://github.com/biopython/biopython), like `load_dataset("InstaDeepAI/multi_species_genomes")`, you get the error: ``` >>> from datasets import load_dataset >>> dataset = load_dataset("InstaDeepAI/multi_species_genomes") /home/j.vangoey/.pyenv/versions/m...
closed
https://github.com/huggingface/datasets/pull/6662
2024-02-13T14:15:04
2024-03-01T17:49:48
2024-03-01T17:43:39
{ "login": "BioGeek", "id": 59344, "type": "User" }
[]
true
[]
2,132,296,267
6,661
Import error on Google Colab
### Describe the bug Cannot be imported on Google Colab, the import throws the following error: ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject ### Steps to reproduce the bug 1. `! pip install -U datasets` 2. `import dataset...
closed
https://github.com/huggingface/datasets/issues/6661
2024-02-13T13:12:40
2024-02-25T16:37:54
2024-02-14T08:04:47
{ "login": "kithogue", "id": 16103566, "type": "User" }
[]
false
[]
2,131,977,011
6,660
Automatic Conversion for uint16/uint32 to Compatible PyTorch Dtypes
This PR addresses an issue encountered when utilizing uint16 or uint32 datatypes with datasets, followed by attempting to convert these datasets into PyTorch-compatible formats. Currently, doing so results in a TypeError due to incompatible datatype conversion, as illustrated by the following example: ```python from ...
closed
https://github.com/huggingface/datasets/pull/6660
2024-02-13T10:24:33
2024-03-01T19:01:57
2024-03-01T18:52:37
{ "login": "mohalisad", "id": 23399590, "type": "User" }
[]
true
[]
2,129,229,810
6,659
Change default compression argument for JsonDatasetWriter
Change default compression type from `None` to "infer", to align with pandas' defaults. Documentation asks the user to supply `to_json_kwargs` with arguments suitable for pandas' `to_json` method. At the same time, while pandas' by default uses ["infer"](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame....
closed
https://github.com/huggingface/datasets/pull/6659
2024-02-11T23:49:07
2024-03-01T17:51:50
2024-03-01T17:44:55
{ "login": "Rexhaif", "id": 5154447, "type": "User" }
[]
true
[]
2,129,158,371
6,658
[Resumable IterableDataset] Add IterableDataset state_dict
A simple implementation of a mechanism to resume an IterableDataset. It works by restarting at the latest shard and skip samples. It provides fast resuming (though not instantaneous). Example: ```python from datasets import Dataset, concatenate_datasets ds = Dataset.from_dict({"a": range(5)}).to_iterable_d...
closed
https://github.com/huggingface/datasets/pull/6658
2024-02-11T20:35:52
2024-10-01T10:19:38
2024-06-03T19:15:39
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]