Dataset schema (observed value ranges per column):

Column           Type                 Min                    Max
id               int64                599M                   3.26B
number           int64                1                      7.7k
title            string (length)      1                      290
body             string (length)      0                      228k
state            string (2 classes)   -                      -
html_url         string (length)      46                     51
created_at       timestamp[s]         2020-04-14 10:18:02    2025-07-23 08:04:53
updated_at       timestamp[s]         2020-04-27 16:04:17    2025-07-23 18:53:44
closed_at        timestamp[s]         2020-04-14 12:01:40    2025-07-23 16:44:42
user             dict                 -                      -
labels           list (length)        0                      4
is_pull_request  bool (2 classes)     -                      -
comments         list (length)        0                      0
#7173 · PR, closed · Release: 3.0.1
url: https://github.com/huggingface/datasets/pull/7173 · id: 2,549,882,529 · user: albertvillanova (8515462) · labels: [] · comments: []
created: 2024-09-26T08:25:54 · updated: 2024-09-26T08:28:29 · closed: 2024-09-26T08:26:03
body: null

#7172 · PR, closed · Add torchdata as a regular test dependency
url: https://github.com/huggingface/datasets/pull/7172 · id: 2,549,781,691 · user: albertvillanova (8515462) · labels: [] · comments: []
created: 2024-09-26T07:45:55 · updated: 2024-09-26T08:12:12 · closed: 2024-09-26T08:05:40
body: Add `torchdata` as a regular test dependency. Note that previously, `torchdata` was installed from their repo and current main branch (0.10.0.dev) requires Python>=3.9. Also note they made a recent release: 0.8.0 on Jul 31, 2024. Fix #7171.

#7171 · issue, closed · CI is broken: No solution found when resolving dependencies
url: https://github.com/huggingface/datasets/issues/7171 · id: 2,549,738,919 · user: albertvillanova (8515462) · labels: [bug] · comments: []
created: 2024-09-26T07:24:58 · updated: 2024-09-26T08:05:41 · closed: 2024-09-26T08:05:41
body: See: https://github.com/huggingface/datasets/actions/runs/11046967444/job/30687294297 ``` Run uv pip install --system -r additional-tests-requirements.txt --no-deps × No solution found when resolving dependencies: ╰─▶ Because the current Python version (3.8.18) does not satisfy Python>=3.9 and torchdata=...

#7170 · PR, closed · Support JSON lines with missing columns
url: https://github.com/huggingface/datasets/pull/7170 · id: 2,546,944,016 · user: albertvillanova (8515462) · labels: [] · comments: []
created: 2024-09-25T05:08:15 · updated: 2024-09-26T06:42:09 · closed: 2024-09-26T06:42:07
body: Support JSON lines with missing columns. Fix #7169. The implemented test raised: ``` datasets.table.CastError: Couldn't cast age: int64 to {'age': Value(dtype='int32', id=None), 'name': Value(dtype='string', id=None)} because column names don't match ``` Related to: - #7160 - #7162
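The CastError quoted in this PR's body comes from JSON lines that omit a schema column. A minimal stand-alone sketch of the intended behavior in plain Python (the `normalize_rows` helper is hypothetical, not the actual `datasets` fix): rows missing a column get that column filled with nulls instead of failing the cast.

```python
import json

# Hypothetical helper illustrating the desired behavior: rows that lack a
# schema column get it filled with None instead of raising a cast error.
def normalize_rows(jsonl_text, columns):
    rows = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        # dict.get returns None for absent keys, so every row matches the schema.
        rows.append({col: record.get(col) for col in columns})
    return rows

data = '{"age": 25, "name": "alice"}\n{"age": 30}\n'
print(normalize_rows(data, ["age", "name"]))
# [{'age': 25, 'name': 'alice'}, {'age': 30, 'name': None}]
```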
#7169 · issue, closed · JSON lines with missing columns raise CastError
url: https://github.com/huggingface/datasets/issues/7169 · id: 2,546,894,076 · user: albertvillanova (8515462) · labels: [bug] · comments: []
created: 2024-09-25T04:43:28 · updated: 2024-09-26T06:42:08 · closed: 2024-09-26T06:42:08
body: JSON lines with missing columns raise CastError: > CastError: Couldn't cast ... to ... because column names don't match Related to: - #7159 - #7161

#7168 · issue, closed · sd1.5 diffusers controlnet training script gives new error
url: https://github.com/huggingface/datasets/issues/7168 · id: 2,546,710,631 · user: Night1099 (90132896) · labels: [] · comments: []
created: 2024-09-25T01:42:49 · updated: 2024-09-30T05:24:03 · closed: 2024-09-30T05:24:02
body: ### Describe the bug This will randomly pop up during training now ``` Traceback (most recent call last): File "/workspace/diffusers/examples/controlnet/train_controlnet.py", line 1192, in <module> main(args) File "/workspace/diffusers/examples/controlnet/train_controlnet.py", line 1041, in main ...

#7167 · issue, closed · Error Mapping on sd3, sdxl and upcoming flux controlnet training scripts in diffusers
url: https://github.com/huggingface/datasets/issues/7167 · id: 2,546,708,014 · user: Night1099 (90132896) · labels: [] · comments: []
created: 2024-09-25T01:39:51 · updated: 2024-09-30T05:28:15 · closed: 2024-09-30T05:28:04
body: ### Describe the bug ``` Map: 6%|██████ | 8000/138120 [19:27<5:16:36, 6.85 examples/s] Traceback (most recent call last): File "/workspace/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1416, in <mod...

#7166 · PR, closed · fix docstring code example for distributed shuffle
url: https://github.com/huggingface/datasets/pull/7166 · id: 2,545,608,736 · user: lhoestq (42851186) · labels: [] · comments: []
created: 2024-09-24T14:39:54 · updated: 2024-09-24T14:42:41 · closed: 2024-09-24T14:40:14
body: close https://github.com/huggingface/datasets/issues/7163
#7165 · PR, closed · fix increase_load_count
url: https://github.com/huggingface/datasets/pull/7165 · id: 2,544,972,541 · user: lhoestq (42851186) · labels: [] · comments: []
created: 2024-09-24T10:14:40 · updated: 2024-09-24T17:31:07 · closed: 2024-09-24T13:48:00
body: it was failing since 3.0 and therefore not updating download counts on HF or in our dashboard

#7164 · issue, open · fsspec.exceptions.FSTimeoutError when downloading dataset
url: https://github.com/huggingface/datasets/issues/7164 · id: 2,544,757,297 · user: timonmerk (38216460) · labels: [] · comments: []
created: 2024-09-24T08:45:05 · updated: 2025-04-09T22:25:56 · closed: null
body: ### Describe the bug I am trying to download the `librispeech_asr` `clean` dataset, which results in a `FSTimeoutError` exception after downloading around 61% of the data. ### Steps to reproduce the bug ``` import datasets datasets.load_dataset("librispeech_asr", "clean") ``` The output is as follows: > Dow...

#7163 · issue, closed · Set explicit seed in iterable dataset ddp shuffling example
url: https://github.com/huggingface/datasets/issues/7163 · id: 2,542,361,234 · user: alex-hh (5719745) · labels: [] · comments: []
created: 2024-09-23T11:34:06 · updated: 2024-09-24T14:40:15 · closed: 2024-09-24T14:40:15
body: ### Describe the bug In the examples section of the iterable dataset docs https://huggingface.co/docs/datasets/en/package_reference/main_classes#datasets.IterableDataset the ddp example shuffles without seeding ```python from datasets.distributed import split_dataset_by_node ids = ds.to_iterable_dataset(num_sh...
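The issue above is that an unseeded shuffle can give every DDP rank a different order, breaking shard assignment. A pure-Python sketch of buffer shuffling with an explicit seed (a hypothetical `buffer_shuffle` helper, not the `datasets` implementation) showing why a fixed seed makes the order reproducible across ranks:

```python
import random

# Sketch of buffer shuffling with an explicit seed (hypothetical helper,
# not the datasets implementation). With the same seed, every DDP rank
# draws the same order, so per-rank shard assignment stays consistent.
def buffer_shuffle(iterable, buffer_size, seed):
    rng = random.Random(seed)
    buffer = []
    for item in iterable:
        buffer.append(item)
        if len(buffer) == buffer_size:
            # Yield a random element from the full buffer, keep filling.
            yield buffer.pop(rng.randrange(buffer_size))
    rng.shuffle(buffer)  # drain the remainder in random order
    yield from buffer

order_a = list(buffer_shuffle(range(10), buffer_size=4, seed=42))
order_b = list(buffer_shuffle(range(10), buffer_size=4, seed=42))
assert order_a == order_b  # same seed -> identical order on every rank
```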
#7162 · PR, closed · Support JSON lines with empty struct
url: https://github.com/huggingface/datasets/pull/7162 · id: 2,542,323,382 · user: albertvillanova (8515462) · labels: [] · comments: []
created: 2024-09-23T11:16:12 · updated: 2024-09-23T11:30:08 · closed: 2024-09-23T11:30:06
body: Support JSON lines with empty struct. Fix #7161. Related to: - #7160

#7161 · issue, closed · JSON lines with empty struct raise ArrowTypeError
url: https://github.com/huggingface/datasets/issues/7161 · id: 2,541,971,931 · user: albertvillanova (8515462) · labels: [bug] · comments: []
created: 2024-09-23T08:48:56 · updated: 2024-09-25T04:43:44 · closed: 2024-09-23T11:30:07
body: JSON lines with empty struct raise ArrowTypeError: struct fields don't match or are in the wrong order See example: https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5 > ArrowTypeError: struct fields don't match or are in the wrong order: Input fields: struct<> output fields: struct<pov_c...

#7160 · PR, closed · Support JSON lines with missing struct fields
url: https://github.com/huggingface/datasets/pull/7160 · id: 2,541,877,813 · user: albertvillanova (8515462) · labels: [] · comments: []
created: 2024-09-23T08:04:09 · updated: 2024-09-23T11:09:19 · closed: 2024-09-23T11:09:17
body: Support JSON lines with missing struct fields. Fix #7159. The implemented test raised: ``` TypeError: Couldn't cast array of type struct<age: int64> to {'age': Value(dtype='int32', id=None), 'name': Value(dtype='string', id=None)} ```
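The TypeError quoted in this PR's body is the struct-level variant of the missing-column problem: a struct value that lacks some fields cannot be cast to the full struct type. A plain-Python sketch of the expected behavior (the recursive `fill_missing_fields` helper and the dict-based schema are hypothetical illustrations, not the actual fix):

```python
# Hypothetical sketch of the desired cast: a struct value missing some
# fields gets those fields added with None, recursively for nested structs.
def fill_missing_fields(value, schema):
    out = {}
    for field, sub_schema in schema.items():
        if isinstance(sub_schema, dict):
            # Nested struct: recurse, treating an absent value as empty.
            out[field] = fill_missing_fields(value.get(field) or {}, sub_schema)
        else:
            out[field] = value.get(field)  # None when the field is absent
    return out

schema = {"age": "int32", "name": "string"}
print(fill_missing_fields({"age": 25}, schema))
# {'age': 25, 'name': None}
```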
#7159 · issue, closed · JSON lines with missing struct fields raise TypeError: Couldn't cast array
url: https://github.com/huggingface/datasets/issues/7159 · id: 2,541,865,613 · user: albertvillanova (8515462) · labels: [bug] · comments: []
created: 2024-09-23T07:57:58 · updated: 2024-10-21T08:07:07 · closed: 2024-09-23T11:09:18
body: JSON lines with missing struct fields raise TypeError: Couldn't cast array of type. See example: https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5 One would expect that the struct missing fields are added with null values.

#7158 · PR, closed · google colab ex
url: https://github.com/huggingface/datasets/pull/7158 · id: 2,541,494,765 · user: docfhsp (157789664) · labels: [] · comments: []
created: 2024-09-23T03:29:50 · updated: 2024-12-20T16:41:07 · closed: 2024-12-20T16:41:07
body: null

#7157 · PR, closed · Fix zero proba interleave datasets
url: https://github.com/huggingface/datasets/pull/7157 · id: 2,540,354,890 · user: lhoestq (42851186) · labels: [] · comments: []
created: 2024-09-21T15:19:14 · updated: 2024-09-24T14:33:54 · closed: 2024-09-24T14:33:54
body: fix https://github.com/huggingface/datasets/issues/7147
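The PR above fixes interleaving when a source has probability zero. A stand-alone sketch of the invariant such a fix must preserve (the `pick_sources` helper is a hypothetical illustration, not the actual `interleave_datasets` code): a zero-probability source is never selected and cannot stall the sampling loop.

```python
import random

# Sketch of probability-weighted source picking (hypothetical, not the
# actual fix): sources with probability 0 are dropped up front, so they
# are never drawn and cannot stall the interleaving loop.
def pick_sources(probabilities, n, seed=0):
    rng = random.Random(seed)
    active = [i for i, p in enumerate(probabilities) if p > 0]
    weights = [probabilities[i] for i in active]
    return [rng.choices(active, weights=weights)[0] for _ in range(n)]

picks = pick_sources([0.0, 0.5, 0.5], n=100)
assert 0 not in picks  # the zero-probability source is never drawn
```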
#7156 · issue, open · interleave_datasets resets shuffle state
url: https://github.com/huggingface/datasets/issues/7156 · id: 2,539,360,617 · user: jonathanasdf (511073) · labels: [] · comments: []
created: 2024-09-20T17:57:54 · updated: 2025-03-18T10:56:25 · closed: null
body: ### Describe the bug ``` import datasets import torch.utils.data def gen(shards): yield {"shards": shards} def main(): dataset = datasets.IterableDataset.from_generator( gen, gen_kwargs={'shards': list(range(25))} ) dataset = dataset.shuffle(buffer_size=1) dataset...

#7155 · issue, closed · Dataset viewer not working! Failure due to more than 32 splits.
url: https://github.com/huggingface/datasets/issues/7155 · id: 2,533,641,870 · user: sleepingcat4 (81933585) · labels: [] · comments: []
created: 2024-09-18T12:43:21 · updated: 2024-09-18T13:20:03 · closed: 2024-09-18T13:20:03
body: Hello guys, I have a dataset and I didn't know I couldn't upload more than 32 splits. Now, my dataset viewer is not working. I don't have the dataset locally on my node anymore and recreating would take a week. And I have to publish the dataset coming Monday. I read about the practice, how I can resolve it and avoi...

#7154 · PR, closed · Support ndjson data files
url: https://github.com/huggingface/datasets/pull/7154 · id: 2,532,812,323 · user: albertvillanova (8515462) · labels: [] · comments: []
created: 2024-09-18T06:10:10 · updated: 2024-09-19T11:25:17 · closed: 2024-09-19T11:25:14
body: Support `ndjson` (Newline Delimited JSON) data files. Fix #7153.

#7153 · issue, closed · Support data files with .ndjson extension
url: https://github.com/huggingface/datasets/issues/7153 · id: 2,532,788,555 · user: albertvillanova (8515462) · labels: [enhancement] · comments: []
created: 2024-09-18T05:54:45 · updated: 2024-09-19T11:25:15 · closed: 2024-09-19T11:25:15
body: ### Feature request Support data files with `.ndjson` extension. ### Motivation We already support data files with `.jsonl` extension. ### Your contribution I am opening a PR.
#7151 · PR, closed · Align filename prefix splitting with WebDataset library
url: https://github.com/huggingface/datasets/pull/7151 · id: 2,527,577,048 · user: albertvillanova (8515462) · labels: [] · comments: []
created: 2024-09-16T06:07:39 · updated: 2024-09-16T15:26:36 · closed: 2024-09-16T15:26:34
body: Align filename prefix splitting with WebDataset library. This PR uses the same `base_plus_ext` function as the one used by the `webdataset` library. Fix #7150. Related to #7144.

#7150 · issue, closed · WebDataset loader splits keys differently than WebDataset library
url: https://github.com/huggingface/datasets/issues/7150 · id: 2,527,571,175 · user: albertvillanova (8515462) · labels: [bug] · comments: []
created: 2024-09-16T06:02:47 · updated: 2024-09-16T15:26:35 · closed: 2024-09-16T15:26:35
body: As reported by @ragavsachdeva (see discussion here: https://github.com/huggingface/datasets/pull/7144#issuecomment-2348307792), our webdataset loader is not aligned with the `webdataset` library when splitting keys from filenames. For example, we get a different key splitting for filename `/some/path/22.0/1.1.png`: ...
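The `base_plus_ext` helper that #7151 adopts splits a filename at the first dot after the last slash, which is what makes WebDataset keys differ from a naive split. A sketch of that behavior (the regex is adapted from the `webdataset` source and should be treated as an approximation, not a verbatim copy):

```python
import re

# Key splitting in the style of webdataset's base_plus_ext: split the
# filename at the first "." that follows the last "/". (Regex adapted
# from the webdataset source; treat it as an approximation.)
def base_plus_ext(path):
    match = re.match(r"^((?:.*/|)[^.]+)[.]([^/]*)$", path)
    if match is None:
        return None, None
    return match.group(1), match.group(2)

print(base_plus_ext("/some/path/22.0/1.1.png"))
# ('/some/path/22.0/1', '1.png')
```

Note the contrast with a right-split on the last dot, which would give `('/some/path/22.0/1.1', 'png')`; that difference is exactly the misalignment reported in #7150.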
#7149 · issue, closed · Datasets Unknown Keyword Argument Error - task_templates
url: https://github.com/huggingface/datasets/issues/7149 · id: 2,524,497,448 · user: varungupta31 (51288316) · labels: [] · comments: []
created: 2024-09-13T10:30:57 · updated: 2025-03-06T07:11:55 · closed: 2024-09-13T14:10:48
body: ### Describe the bug Issue ```python from datasets import load_dataset examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>) ``` Gives error ``` TypeError: DatasetInfo.__init__() got an unexpected keyword argument 'task_templates' ``` A simple downgrade to lower `data...

#7148 · issue, closed · Bug: Error when downloading mteb/mtop_domain
url: https://github.com/huggingface/datasets/issues/7148 · id: 2,523,833,413 · user: ZiyiXia (77958037) · labels: [] · comments: []
created: 2024-09-13T04:09:39 · updated: 2024-09-14T15:11:35 · closed: 2024-09-14T15:11:35
body: ### Describe the bug When downloading the dataset "mteb/mtop_domain", ran into the following error: ``` Traceback (most recent call last): File "/share/project/xzy/test/test_download.py", line 3, in <module> data = load_dataset("mteb/mtop_domain", "en", trust_remote_code=True) File "/opt/conda/lib/pytho...

#7147 · issue, closed · IterableDataset strange deadlock
url: https://github.com/huggingface/datasets/issues/7147 · id: 2,523,129,465 · user: jonathanasdf (511073) · labels: [] · comments: []
created: 2024-09-12T18:59:33 · updated: 2024-09-23T09:32:27 · closed: 2024-09-21T17:37:34
body: ### Describe the bug ``` import datasets import torch.utils.data num_shards = 1024 def gen(shards): for shard in shards: if shard < 25: yield {"shard": shard} def main(): dataset = datasets.IterableDataset.from_generator( gen, gen_kwargs={"shards": lis...

#7146 · PR, closed · Set dev version
url: https://github.com/huggingface/datasets/pull/7146 · id: 2,519,820,162 · user: albertvillanova (8515462) · labels: [] · comments: []
created: 2024-09-11T13:53:27 · updated: 2024-09-12T04:34:08 · closed: 2024-09-12T04:34:06
body: null

#7145 · PR, closed · Release: 3.0.0
url: https://github.com/huggingface/datasets/pull/7145 · id: 2,519,789,724 · user: albertvillanova (8515462) · labels: [] · comments: []
created: 2024-09-11T13:41:47 · updated: 2024-09-11T13:48:42 · closed: 2024-09-11T13:48:41
body: null

#7144 · PR, closed · Fix key error in webdataset
url: https://github.com/huggingface/datasets/pull/7144 · id: 2,519,393,560 · user: ragavsachdeva (26804893) · labels: [] · comments: []
created: 2024-09-11T10:50:17 · updated: 2025-01-15T10:32:43 · closed: 2024-09-13T04:31:37
body: I was running into ``` example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]} KeyError: 'png' ``` The issue is that a filename may have multiple "." e.g. `22.05.png`. Changing `split` to `rsplit` fixes it. Related https://github.com/huggingface/datasets/issues/68...
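A minimal reproduction of the split-vs-rsplit difference this PR describes: splitting `22.05.png` on the first dot yields extension `05.png` (hence the `KeyError: 'png'`), while splitting from the right yields `png`.

```python
# Reproducing the key/extension split described in the PR: a filename
# with multiple dots must be split at the LAST dot to get the extension.
filename = "22.05.png"

key_split, ext_split = filename.split(".", 1)     # buggy: splits at the first dot
key_rsplit, ext_rsplit = filename.rsplit(".", 1)  # fixed: splits at the last dot

assert (key_split, ext_split) == ("22", "05.png")
assert (key_rsplit, ext_rsplit) == ("22.05", "png")
```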
#7143 · PR, closed · Modify add_column() to optionally accept a FeatureType as param
url: https://github.com/huggingface/datasets/pull/7143 · id: 2,512,327,211 · user: varadhbhatnagar (20443618) · labels: [] · comments: []
created: 2024-09-08T10:56:57 · updated: 2024-09-17T06:01:23 · closed: 2024-09-16T15:11:01
body: Fix #7142. **Before (Add + Cast)**: ``` from datasets import load_dataset, Value ds = load_dataset("rotten_tomatoes", split="test") lst = [i for i in range(len(ds))] ds = ds.add_column("new_col", lst) # Assigns int64 to new_col by default print(ds.features) ds = ds.cast_column("new_col", Value(dtype="u...

#7142 · issue, closed · Specifying datatype when adding a column to a dataset.
url: https://github.com/huggingface/datasets/issues/7142 · id: 2,512,244,938 · user: varadhbhatnagar (20443618) · labels: [enhancement] · comments: []
created: 2024-09-08T07:34:24 · updated: 2024-09-17T03:46:32 · closed: 2024-09-17T03:46:32
body: ### Feature request There should be a way to specify the datatype of a column in `datasets.add_column()`. ### Motivation To specify a custom datatype, we have to use `datasets.add_column()` followed by `datasets.cast_column()` which is slow for large datasets. Another workaround is to pass a `numpy.array()` of desi...

#7141 · issue, closed · Older datasets throwing safety errors with 2.21.0
url: https://github.com/huggingface/datasets/issues/7141 · id: 2,510,797,653 · user: alvations (1050316) · labels: [] · comments: []
created: 2024-09-06T16:26:30 · updated: 2024-09-06T21:14:14 · closed: 2024-09-06T19:09:29
body: ### Describe the bug The dataset loading was throwing some safety errors for this popular dataset `wmt14`. [in]: ``` import datasets # train_data = datasets.load_dataset("wmt14", "de-en", split="train") train_data = datasets.load_dataset("wmt14", "de-en", split="train") val_data = datasets.load_dataset(...

#7139 · issue, open · Use load_dataset to load imagenet-1K But find a empty dataset
url: https://github.com/huggingface/datasets/issues/7139 · id: 2,508,078,858 · user: fscdc (105094708) · labels: [] · comments: []
created: 2024-09-05T15:12:22 · updated: 2024-10-09T04:02:41 · closed: null
body: ### Describe the bug ```python def get_dataset(data_path, train_folder="train", val_folder="val"): traindir = os.path.join(data_path, train_folder) valdir = os.path.join(data_path, val_folder) def transform_val_examples(examples): transform = Compose([ Resize(256), ...

#7138 · issue, open · Cache only changed columns?
url: https://github.com/huggingface/datasets/issues/7138 · id: 2,507,738,308 · user: Modexus (37351874) · labels: [enhancement] · comments: []
created: 2024-09-05T12:56:47 · updated: 2024-09-20T13:27:20 · closed: null
body: ### Feature request Cache only the actual changes to the dataset i.e. changed columns. ### Motivation I realized that caching actually saves the complete dataset again. This is especially problematic for image datasets if one wants to only change another column e.g. some metadata and then has to save 5 TB again. #...

#7137 · issue, closed · [BUG] dataset_info sequence unexpected behavior in README.md YAML
url: https://github.com/huggingface/datasets/issues/7137 · id: 2,506,851,048 · user: ain-soph (13214530) · labels: [] · comments: []
created: 2024-09-05T06:06:06 · updated: 2025-07-07T09:20:29 · closed: 2025-07-04T19:50:59
body: ### Describe the bug When working on `dataset_info` yaml, I find my data column with format `list[dict[str, str]]` cannot be coded correctly. My data looks like ``` {"answers":[{"text": "ADDRESS", "label": "abc"}]} ``` My `dataset_info` in README.md is: ``` dataset_info: - config_name: default feature...

#7136 · PR, open · Do not consume unnecessary memory during sharding
url: https://github.com/huggingface/datasets/pull/7136 · id: 2,506,115,857 · user: janEbert (12694897) · labels: [] · comments: []
created: 2024-09-04T19:26:06 · updated: 2024-09-04T19:28:23 · closed: null
body: When sharding `IterableDataset`s, a temporary list is created that is then indexed. There is no need to create a temporary list of a potentially very large step/world size, with standard `islice` functionality, so we avoid it. ```shell pytest tests/test_distributed.py -k iterable ``` Runs successfully.
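The PR above replaces a temporary list with `itertools.islice`. A stand-alone sketch of the same lazy sharding idea (the `shard` helper is illustrative, not the PR's code): skip to the rank's first element and then take every `world_size`-th one, never materializing a list.

```python
from itertools import islice

# Shard an iterable for (rank, world_size) without building a temporary
# list: islice lazily skips to `rank`, then yields every world_size-th item.
def shard(iterable, rank, world_size):
    return islice(iterable, rank, None, world_size)

print(list(shard(range(10), rank=1, world_size=4)))
# [1, 5, 9]
```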
#7135 · issue, open · Bug: Type Mismatch in Dataset Mapping
url: https://github.com/huggingface/datasets/issues/7135 · id: 2,503,318,328 · user: marko1616 (45327989) · labels: [] · comments: []
created: 2024-09-03T16:37:01 · updated: 2024-09-05T14:09:05 · closed: null
body: # Issue: Type Mismatch in Dataset Mapping ## Description There is an issue with the `map` function in the `datasets` library where the mapped output does not reflect the expected type change. After applying a mapping function to convert an integer label to a string, the resulting type remains an integer instead of ...

#7134 · issue, open · Attempting to return a rank 3 grayscale image from dataset.map results in extreme slowdown
url: https://github.com/huggingface/datasets/issues/7134 · id: 2,499,484,041 · user: navidmafi (46371349) · labels: [] · comments: []
created: 2024-09-01T13:55:41 · updated: 2024-09-02T10:34:53 · closed: null
body: ### Describe the bug Background: Digital images are often represented as a (Height, Width, Channel) tensor. This is the same for huggingface datasets that contain images. These images are loaded in Pillow containers which offer, for example, the `.convert` method. I can convert an image from a (H,W,3) shape to a...
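The shapes the issue discusses can be shown with plain NumPy (a sketch of the shape manipulation only, unrelated to the reported slowdown in `dataset.map`; the mean-based grayscale conversion here stands in for Pillow's `.convert("L")`):

```python
import numpy as np

# Rank-2 vs rank-3 grayscale: converting (H, W, 3) RGB to grayscale drops
# the channel axis; expand_dims restores the (H, W, 1) shape the issue
# wants to return from dataset.map.
rgb = np.zeros((224, 224, 3), dtype=np.uint8)       # (H, W, C) image
gray = rgb.mean(axis=-1).astype(np.uint8)           # rank 2: (224, 224)
gray_hwc = np.expand_dims(gray, axis=-1)            # rank 3: (224, 224, 1)

assert gray.shape == (224, 224)
assert gray_hwc.shape == (224, 224, 1)
```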
#7133 · PR, closed · remove filecheck to enable symlinks
url: https://github.com/huggingface/datasets/pull/7133 · id: 2,496,474,495 · user: fschlatt (23191892) · labels: [] · comments: []
created: 2024-08-30T07:36:56 · updated: 2024-12-24T14:25:22 · closed: 2024-12-24T14:25:22
body: Enables streaming from local symlinks #7083 @lhoestq

#7132 · PR, open · Fix data file module inference
url: https://github.com/huggingface/datasets/pull/7132 · id: 2,494,510,464 · user: HennerM (1714412) · labels: [] · comments: []
created: 2024-08-29T13:48:16 · updated: 2024-09-02T19:52:13 · closed: null
body: I saved a dataset with two splits to disk with `DatasetDict.save_to_disk`. The train is bigger and ended up in 10 shards, whereas the test split only resulted in 1 split. Now when trying to load the dataset, an error is raised that not all splits have the same data format: > ValueError: Couldn't infer the same da...

#7129 · issue, closed · Inconsistent output in documentation example: `num_classes` not displayed in `ClassLabel` output
url: https://github.com/huggingface/datasets/issues/7129 · id: 2,491,942,650 · user: sergiopaniego (17179696) · labels: [] · comments: []
created: 2024-08-28T12:27:48 · updated: 2024-12-06T11:32:02 · closed: 2024-12-06T11:32:02
body: In the documentation for [ClassLabel](https://huggingface.co/docs/datasets/v2.21.0/en/package_reference/main_classes#datasets.ClassLabel), there is an example of usage with the following code: ```` from datasets import Features features = Features({'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'])}) ...

#7128 · issue, open · Filter Large Dataset Entry by Entry
url: https://github.com/huggingface/datasets/issues/7128 · id: 2,490,274,775 · user: QiyaoWei (36057290) · labels: [enhancement] · comments: []
created: 2024-08-27T20:31:09 · updated: 2024-10-07T23:37:44 · closed: null
body: ### Feature request I am not sure if this is a new feature, but I wanted to post this problem here, and hear if others have ways of optimizing and speeding up this process. Let's say I have a really large dataset that I cannot load into memory. At this point, I am only aware of `streaming=True` to load the dataset....

#7127 · issue, open · Caching shuffles by np.random.Generator results in unintiutive behavior
url: https://github.com/huggingface/datasets/issues/7127 · id: 2,486,524,966 · user: el-hult (11832922) · labels: [] · comments: []
created: 2024-08-26T10:29:48 · updated: 2025-03-10T17:12:57 · closed: null
body: ### Describe the bug Create a dataset. Save it to disk. Load from disk. Shuffle, usning a `np.random.Generator`. Iterate. Shuffle again. Iterate. The iterates are different since the supplied np.random.Generator has progressed between the shuffles. Load dataset from disk again. Shuffle and Iterate. See same result ...
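The generator-state behavior the issue describes can be demonstrated with NumPy alone (a sketch of `np.random.Generator` statefulness, not of the `datasets` caching logic):

```python
import numpy as np

# A np.random.Generator is stateful: each call advances it, so two
# consecutive shuffles differ, while a freshly seeded generator replays
# the first shuffle exactly -- the behavior described in the issue.
rng = np.random.default_rng(42)
first = rng.permutation(16)
second = rng.permutation(16)    # state has advanced -> different order

fresh = np.random.default_rng(42)
replay = fresh.permutation(16)  # same seed, first call -> matches `first`

assert not np.array_equal(first, second)
assert np.array_equal(first, replay)
```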
#7126 · PR, closed · Disable implicit token in CI
url: https://github.com/huggingface/datasets/pull/7126 · id: 2,485,939,495 · user: albertvillanova (8515462) · labels: [] · comments: []
created: 2024-08-26T05:29:46 · updated: 2024-08-26T06:05:01 · closed: 2024-08-26T05:59:15
body: Disable implicit token in CI. This PR allows running CI tests locally without implicitly using the local user HF token. For example, run locally the tests in: - #7124

#7125 · PR, closed · Fix wrong SHA in CI tests of HubDatasetModuleFactoryWithParquetExport
url: https://github.com/huggingface/datasets/pull/7125 · id: 2,485,912,246 · user: albertvillanova (8515462) · labels: [] · comments: []
created: 2024-08-26T05:09:35 · updated: 2024-08-26T05:33:15 · closed: 2024-08-26T05:27:09
body: Fix wrong SHA in CI tests of HubDatasetModuleFactoryWithParquetExport.

#7124 · PR, closed · Test get_dataset_config_info with non-existing/gated/private dataset
url: https://github.com/huggingface/datasets/pull/7124 · id: 2,485,890,442 · user: albertvillanova (8515462) · labels: [] · comments: []
created: 2024-08-26T04:53:59 · updated: 2024-08-26T06:15:33 · closed: 2024-08-26T06:09:42
body: Test get_dataset_config_info with non-existing/gated/private dataset. Related to: - #7109 See also: - https://github.com/huggingface/dataset-viewer/pull/3037: https://github.com/huggingface/dataset-viewer/pull/3037/commits/bb1a7e00c53c242088597cab6572e4fd57797ecb

#7123 · issue, open · Make dataset viewer more flexible in displaying metadata alongside images
url: https://github.com/huggingface/datasets/issues/7123 · id: 2,484,003,937 · user: egrace479 (38985481) · labels: [enhancement] · comments: []
created: 2024-08-23T22:56:01 · updated: 2024-10-17T09:13:47 · closed: null
body: ### Feature request To display images with their associated metadata in the dataset viewer, a `metadata.csv` file is required. In the case of a dataset with multiple subsets, this would require the CSVs to be contained in the same folder as the images since they all need to be named `metadata.csv`. The request is th...

#7122 · issue, open · [interleave_dataset] sample batches from a single source at a time
url: https://github.com/huggingface/datasets/issues/7122 · id: 2,482,491,258 · user: memray (4197249) · labels: [enhancement] · comments: []
created: 2024-08-23T07:21:15 · updated: 2024-08-23T07:21:15 · closed: null
body: ### Feature request interleave_dataset and [RandomlyCyclingMultiSourcesExamplesIterable](https://github.com/huggingface/datasets/blob/3813ce846e52824b38e53895810682f0a496a2e3/src/datasets/iterable_dataset.py#L816) enable us to sample data examples from different sources. But can we also sample batches in a similar man...

#7121 · PR, closed · Fix typed examples iterable state dict
url: https://github.com/huggingface/datasets/pull/7121 · id: 2,480,978,483 · user: lhoestq (42851186) · labels: [] · comments: []
created: 2024-08-22T14:45:03 · updated: 2024-08-22T14:54:56 · closed: 2024-08-22T14:49:06
body: fix https://github.com/huggingface/datasets/issues/7085 as noted by @VeryLazyBoy and reported by @AjayP13
#7120 · PR, closed · don't mention the script if trust_remote_code=False
url: https://github.com/huggingface/datasets/pull/7120 · id: 2,480,674,237 · user: severo (1676121) · labels: [] · comments: []
created: 2024-08-22T12:32:32 · updated: 2024-08-22T14:39:52 · closed: 2024-08-22T14:33:52
body: See https://huggingface.co/datasets/Omega02gdfdd/bioclip-demo-zero-shot-mistakes for example. The error is: ``` FileNotFoundError: Couldn't find a dataset script at /src/services/worker/Omega02gdfdd/bioclip-demo-zero-shot-mistakes/bioclip-demo-zero-shot-mistakes.py or any data file in the same directory. Couldn't f...

#7119 · PR, closed · Install transformers with numpy-2 CI
url: https://github.com/huggingface/datasets/pull/7119 · id: 2,477,766,493 · user: albertvillanova (8515462) · labels: [] · comments: []
created: 2024-08-21T11:14:59 · updated: 2024-08-21T11:42:35 · closed: 2024-08-21T11:36:50
body: Install transformers with numpy-2 CI. Note that transformers no longer pins numpy < 2 since transformers-4.43.0: - https://github.com/huggingface/transformers/pull/32018 - https://github.com/huggingface/transformers/releases/tag/v4.43.0

#7118 · PR, closed · Allow numpy-2.1 and test it without audio extra
url: https://github.com/huggingface/datasets/pull/7118 · id: 2,477,676,893 · user: albertvillanova (8515462) · labels: [] · comments: []
created: 2024-08-21T10:29:35 · updated: 2024-08-21T11:05:03 · closed: 2024-08-21T10:58:15
body: Allow numpy-2.1 and test it without audio extra. This PR reverts: - #7114 Note that audio extra tests can be included again with numpy-2.1 once next numba-0.61.0 version is released.

#7117 · issue, open · Audio dataset load everything in RAM and is very slow
url: https://github.com/huggingface/datasets/issues/7117 · id: 2,476,555,659 · user: Jourdelune (64205064) · labels: [] · comments: []
created: 2024-08-20T21:18:12 · updated: 2024-08-26T13:11:55 · closed: null
body: Hello, I'm working with an audio dataset. I want to transcribe the audio that the dataset contain, and for that I use whisper. My issue is that the dataset load everything in the RAM when I map the dataset, obviously, when RAM usage is too high, the program crashes. To fix this issue, I'm using writer_batch_size tha...

#7116 · issue, closed · datasets cannot handle nested json if features is given.
url: https://github.com/huggingface/datasets/issues/7116 · id: 2,475,522,721 · user: ljw20180420 (38550511) · labels: [] · comments: []
created: 2024-08-20T12:27:49 · updated: 2024-09-03T10:18:23 · closed: 2024-09-03T10:18:07
body: ### Describe the bug I have a json named temp.json. ```json {"ref1": "ABC", "ref2": "DEF", "cuts":[{"cut1": 3, "cut2": 5}]} ``` I want to load it. ```python ds = datasets.load_dataset('json', data_files="./temp.json", features=datasets.Features({ 'ref1': datasets.Value('string'), 'ref2': datasets.Value...

#7115 · issue, closed · module 'pyarrow.lib' has no attribute 'ListViewType'
url: https://github.com/huggingface/datasets/issues/7115 · id: 2,475,363,142 · user: neurafusionai (175128880) · labels: [] · comments: []
created: 2024-08-20T11:05:44 · updated: 2024-09-10T06:51:08 · closed: 2024-09-10T06:51:08
body: ### Describe the bug Code: `!pipuninstall -y pyarrow !pip install --no-cache-dir pyarrow !pip uninstall -y pyarrow !pip install pyarrow --no-cache-dir !pip install --upgrade datasets transformers pyarrow !pip install pyarrow.parquet ! pip install pyarrow-core libparquet !pip install pyarrow --no-cache-di...

#7114 · PR, closed · Temporarily pin numpy<2.1 to fix CI
url: https://github.com/huggingface/datasets/pull/7114 · id: 2,475,062,252 · user: albertvillanova (8515462) · labels: [] · comments: []
created: 2024-08-20T08:42:57 · updated: 2024-08-20T09:09:27 · closed: 2024-08-20T09:02:35
body: Temporarily pin numpy<2.1 to fix CI. Fix #7111.

#7113 · issue, closed · Stream dataset does not iterate if the batch size is larger than the dataset size (related to drop_last_batch)
url: https://github.com/huggingface/datasets/issues/7113 · id: 2,475,029,640 · user: memray (4197249) · labels: [] · comments: []
created: 2024-08-20T08:26:40 · updated: 2024-08-26T04:24:11 · closed: 2024-08-26T04:24:10
body: ### Describe the bug Hi there, I use streaming and interleaving to combine multiple datasets saved in jsonl files. The size of dataset can vary (from 100ish to 100k-ish). I use dataset.map() and a big batch size to reduce the IO cost. It was working fine with datasets-2.16.1 but this problem shows up after I upgr...
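The failure mode in this issue can be shown with a plain-Python batching sketch (illustrative only, not the `datasets` implementation): when `batch_size` exceeds the dataset size, the only batch is the incomplete last one, so dropping it leaves nothing to iterate.

```python
# Sketch of batched iteration with drop_last_batch: if batch_size is
# larger than the dataset, the one incomplete batch is also the last
# batch, and dropping it yields an empty iteration.
def batches(items, batch_size, drop_last_batch=False):
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch and not drop_last_batch:
        yield batch  # emit the final, possibly incomplete batch

small_dataset = list(range(100))
assert list(batches(small_dataset, batch_size=1000, drop_last_batch=True)) == []
assert list(batches(small_dataset, batch_size=1000))[0] == small_dataset
```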
#7112 · issue, open · cudf-cu12 24.4.1, ibis-framework 8.0.0 requires pyarrow<15.0.0a0,>=14.0.1,pyarrow<16,>=2 and datasets 2.21.0 requires pyarrow>=15.0.0
url: https://github.com/huggingface/datasets/issues/7112 · id: 2,475,004,644 · user: SoumyaMB10 (174590283) · labels: [] · comments: []
created: 2024-08-20T08:13:55 · updated: 2024-09-20T15:30:03 · closed: null
body: ### Describe the bug !pip install accelerate>=0.16.0 torchvision transformers>=4.25.1 datasets>=2.19.1 ftfy tensorboard Jinja2 peft==0.7.0 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. c...

#7111 · issue, closed · CI is broken for numpy-2: Failed to fetch wheel: llvmlite==0.34.0
url: https://github.com/huggingface/datasets/issues/7111 · id: 2,474,915,845 · user: albertvillanova (8515462) · labels: [] · comments: []
created: 2024-08-20T07:27:28 · updated: 2024-08-21T05:05:36 · closed: 2024-08-20T09:02:36
body: Ci is broken with error `Failed to fetch wheel: llvmlite==0.34.0`: https://github.com/huggingface/datasets/actions/runs/10466825281/job/28984414269 ``` Run uv pip install --system "datasets[tests_numpy2] @ ." Resolved 150 packages in 4.42s error: Failed to prepare distributions Caused by: Failed to fetch wheel: ...

#7110 · PR, closed · Fix ConnectionError for gated datasets and unauthenticated users
url: https://github.com/huggingface/datasets/pull/7110 · id: 2,474,747,695 · user: albertvillanova (8515462) · labels: [] · comments: []
created: 2024-08-20T05:26:54 · updated: 2024-08-20T15:11:35 · closed: 2024-08-20T09:14:35
body: Fix `ConnectionError` for gated datasets and unauthenticated users. See: - https://github.com/huggingface/dataset-viewer/issues/3025 Note that a recent change in the Hub returns dataset info for gated datasets and unauthenticated users, instead of raising a `GatedRepoError` as before. See: - https://github.com/hug...

#7109 · issue, closed · ConnectionError for gated datasets and unauthenticated users
url: https://github.com/huggingface/datasets/issues/7109 · id: 2,473,367,848 · user: albertvillanova (8515462) · labels: [] · comments: []
created: 2024-08-19T13:27:45 · updated: 2024-08-20T09:14:36 · closed: 2024-08-20T09:14:35
body: Since the Hub returns dataset info for gated datasets and unauthenticated users, there is dead code: https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/load.py#L1846-L1852 We should remove the dead code and properly handle this case: currently we are raising a `Connect...

#7108 · issue, closed · website broken: Create a new dataset repository, doesn't create a new repo in Firefox
url: https://github.com/huggingface/datasets/issues/7108 · id: 2,470,665,327 · user: neoneye (147971) · labels: [] · comments: []
created: 2024-08-16T17:23:00 · updated: 2024-08-19T13:21:12 · closed: 2024-08-19T06:52:48
body: ### Describe the bug This issue is also reported here: https://discuss.huggingface.co/t/create-a-new-dataset-repository-broken-page/102644 This page is broken. https://huggingface.co/new-dataset I fill in the form with my text, and click `Create Dataset`. ![Screenshot 2024-08-16 at 15 55 37](https://github....

#7107 · issue, closed · load_dataset broken in 2.21.0
url: https://github.com/huggingface/datasets/issues/7107 · id: 2,470,444,732 · user: anjor (1911631) · labels: [] · comments: []
created: 2024-08-16T14:59:51 · updated: 2024-08-18T09:28:43 · closed: 2024-08-18T09:27:12
body: ### Describe the bug `eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)` used to work till 2.20.0 but doesn't work in 2.21.0 In 2.20.0: ![Screenshot 2024-08-16 at 3 57 10 PM](https://github.com/user-attachments/assets/0516489b-8187-486d-bee8-88af3381de...

#7106 · PR, closed · Rename LargeList.dtype to LargeList.feature
url: https://github.com/huggingface/datasets/pull/7106 · id: 2,469,854,262 · user: albertvillanova (8515462) · labels: [] · comments: []
created: 2024-08-16T09:12:04 · updated: 2024-08-26T04:31:59 · closed: 2024-08-26T04:26:02
body: Rename `LargeList.dtype` to `LargeList.feature`. Note that `dtype` is usually used for NumPy data types ("int64", "float32",...): see `Value.dtype`. However, `LargeList` attribute (like `Sequence.feature`) expects a `FeatureType` instead. With this renaming: - we avoid confusion about the expected type and -...

#7105 · PR, closed · Use `huggingface_hub` cache
url: https://github.com/huggingface/datasets/pull/7105 · id: 2,468,207,039 · user: lhoestq (42851186) · labels: [] · comments: []
created: 2024-08-15T14:45:22 · updated: 2024-09-12T04:36:08 · closed: 2024-08-21T15:47:16
body: - use `hf_hub_download()` from `huggingface_hub` for HF files - `datasets` cache_dir is still used for: - caching datasets as Arrow files (that back `Dataset` objects) - extracted archives, uncompressed files - files downloaded via http (datasets with scripts) - I removed code that were made for http files (...
2,467,788,212
7,104
remove more script docs
null
closed
https://github.com/huggingface/datasets/pull/7104
2024-08-15T10:13:26
2024-08-15T10:24:13
2024-08-15T10:18:25
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,467,664,581
7,103
Fix args of feature docstrings
Fix Args section of feature docstrings. Currently, some args do not appear in the docs because they are not properly parsed due to the lack of their type (between parentheses).
closed
https://github.com/huggingface/datasets/pull/7103
2024-08-15T08:46:08
2024-08-16T09:18:29
2024-08-15T10:33:30
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,466,893,106
7,102
Slow iteration speeds when using IterableDataset.shuffle with load_dataset(data_files=..., streaming=True)
### Describe the bug When I load a dataset from a number of arrow files, as in: ``` random_dataset = load_dataset( "arrow", data_files={split: shard_filepaths}, streaming=True, split=split, ) ``` I'm able to get fast iteration speeds when iterating over the dataset without shuffling. ...
open
https://github.com/huggingface/datasets/issues/7102
2024-08-14T21:44:44
2024-08-15T16:17:31
null
{ "login": "lajd", "id": 13192126, "type": "User" }
[]
false
[]
2,466,510,783
7,101
`load_dataset` from Hub with `name` to specify `config` using incorrect builder type when multiple data formats are present
Following [documentation](https://huggingface.co/docs/datasets/repository_structure#define-your-splits-and-subsets-in-yaml) I had defined different configs for [`Dataception`](https://huggingface.co/datasets/bigdata-pw/Dataception), a dataset of datasets: ```yaml configs: - config_name: dataception data_files: ...
open
https://github.com/huggingface/datasets/issues/7101
2024-08-14T18:12:25
2024-08-18T10:33:38
null
{ "login": "hlky", "id": 106811348, "type": "User" }
[]
false
[]
2,465,529,414
7,100
IterableDataset: cannot resolve features from list of numpy arrays
### Describe the bug when resolve features of `IterableDataset`, got `pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values` error. ``` Traceback (most recent call last): File "test.py", line 6 iter_ds = iter_ds._resolve_features() File "lib/python3.10/site-packages/datasets/iterable_dat...
open
https://github.com/huggingface/datasets/issues/7100
2024-08-14T11:01:51
2024-10-03T05:47:23
null
{ "login": "VeryLazyBoy", "id": 18899212, "type": "User" }
[]
false
[]
2,465,221,827
7,099
Set dev version
null
closed
https://github.com/huggingface/datasets/pull/7099
2024-08-14T08:31:17
2024-08-14T08:45:17
2024-08-14T08:39:25
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,465,016,562
7,098
Release: 2.21.0
null
closed
https://github.com/huggingface/datasets/pull/7098
2024-08-14T06:35:13
2024-08-14T06:41:07
2024-08-14T06:41:06
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,458,455,489
7,097
Some of DownloadConfig's properties are always being overridden in load.py
### Describe the bug The `extract_compressed_file` and `force_extract` properties of DownloadConfig are always being set to True in the function `dataset_module_factory` in the `load.py` file. This behavior is very annoying because data extracted will just be ignored the next time the dataset is loaded. See this im...
open
https://github.com/huggingface/datasets/issues/7097
2024-08-09T18:26:37
2024-08-09T18:26:37
null
{ "login": "ductai199x", "id": 29772899, "type": "User" }
[]
false
[]
2,456,929,173
7,096
Automatically create `cache_dir` from `cache_file_name`
You get a pretty unhelpful error message when specifying a `cache_file_name` in a directory that doesn't exist, e.g. `cache_file_name="./cache/data.map"` ```python import datasets cache_file_name="./cache/train.map" dataset = datasets.load_dataset("ylecun/mnist") dataset["train"].map(lambda x: x, cache_file_na...
closed
https://github.com/huggingface/datasets/pull/7096
2024-08-09T01:34:06
2024-08-15T17:25:26
2024-08-15T10:13:22
{ "login": "ringohoffman", "id": 27844407, "type": "User" }
[]
true
[]
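The fix in PR #7096 above can be sketched with the stdlib: create the parent directory of `cache_file_name` before writing, instead of letting the write fail with an unhelpful error. This is a minimal stand-in, not the library's code; the helper name `ensure_parent_dir` is hypothetical.

```python
import os
import tempfile

def ensure_parent_dir(cache_file_name: str) -> str:
    """Create the parent directory of a cache file if it is missing,
    mirroring what the PR adds around `cache_file_name` (sketch only)."""
    parent = os.path.dirname(cache_file_name)
    if parent:
        os.makedirs(parent, exist_ok=True)
    return cache_file_name

root = tempfile.mkdtemp()
path = ensure_parent_dir(os.path.join(root, "cache", "train.map"))
# Without ensure_parent_dir this open() would raise FileNotFoundError,
# which is the confusing failure mode the PR description reports.
with open(path, "w") as f:
    f.write("ok")
print(os.path.exists(path))  # True
```

`exist_ok=True` makes the call idempotent, so repeated runs against the same cache directory are safe.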
2,454,418,130
7,094
Add Arabic Docs to Datasets
Translate Docs into Arabic issue-number : #7093 [Arabic Docs](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx) [English Docs](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/en/index.mdx) @stevhliu
open
https://github.com/huggingface/datasets/pull/7094
2024-08-07T21:53:06
2024-08-07T21:53:06
null
{ "login": "AhmedAlmaghz", "id": 53489256, "type": "User" }
[]
true
[]
2,454,413,074
7,093
Add Arabic Docs to datasets
### Feature request Add Arabic Docs to datasets [Datasets Arabic](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx) ### Motivation @AhmedAlmaghz https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx ### Your contribution @AhmedAlmaghz https://github.com/AhmedAlma...
open
https://github.com/huggingface/datasets/issues/7093
2024-08-07T21:48:05
2024-08-07T21:48:05
null
{ "login": "AhmedAlmaghz", "id": 53489256, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,451,393,658
7,092
load_dataset with multiple jsonlines files interprets datastructure too early
### Describe the bug likely related to #6460 using `datasets.load_dataset("json", data_dir= ... )` with multiple `.jsonl` files will error if one of the files (maybe the first file?) contains a full column of empty data. ### Steps to reproduce the bug real world example: data is available in this [PR-bra...
open
https://github.com/huggingface/datasets/issues/7092
2024-08-06T17:42:55
2024-08-08T16:35:01
null
{ "login": "Vipitis", "id": 23384483, "type": "User" }
[]
false
[]
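Issue #7092 above reports that schema inference goes wrong when the first `.jsonl` file contains an all-empty column. A stdlib sketch of the difference between pinning the schema from the first file and merging observed types across all files (illustrative only; this is not how `datasets`/Arrow inference is implemented):

```python
import json

def infer_type(values):
    """Return the single concrete type name seen, None if only nulls, 'mixed' otherwise."""
    types = {type(v).__name__ for v in values if v is not None}
    if len(types) == 1:
        return types.pop()
    return None if not types else "mixed"

# Column "a" is entirely null in the first file -- the failure case reported.
file_a = ['{"a": null}', '{"a": null}']
file_b = ['{"a": 1}', '{"a": 2}']

def merged_schema(files):
    cols = {}
    for lines in files:
        for line in lines:
            for key, value in json.loads(line).items():
                cols.setdefault(key, []).append(value)
    return {key: infer_type(values) for key, values in cols.items()}

print(merged_schema([file_a, file_b]))  # {'a': 'int'}
```

Inferring from `file_a` alone would yield a null-typed column; pooling values from both files recovers `int`, which is the behavior the reporter expects.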
2,449,699,490
7,090
The test test_move_script_doesnt_change_hash fails because it runs the 'python' command while the python executable has a different name
### Describe the bug Tests should use the same python path as they are launched with, which in the case of FreeBSD is /usr/local/bin/python3.11 Failure: ``` if err_filename is not None: > raise child_exception_type(errno_num, err_msg, err_filename) E FileNotFo...
open
https://github.com/huggingface/datasets/issues/7090
2024-08-06T00:35:05
2024-08-06T00:35:05
null
{ "login": "yurivict", "id": 271906, "type": "User" }
[]
false
[]
2,449,479,500
7,089
Missing pyspark dependency causes the testsuite to error out, instead of a few tests to be skipped
### Describe the bug see the subject ### Steps to reproduce the bug regular tests ### Expected behavior n/a ### Environment info version 2.20.0
open
https://github.com/huggingface/datasets/issues/7089
2024-08-05T21:05:11
2024-08-05T21:05:11
null
{ "login": "yurivict", "id": 271906, "type": "User" }
[]
false
[]
2,447,383,940
7,088
Disable warning when using with_format format on tensors
### Feature request If we write this code: ```python """Get data and define datasets.""" from enum import StrEnum from datasets import load_dataset from torch.utils.data import DataLoader from torchvision import transforms class Split(StrEnum): """Describes what type of split to use in the dataloa...
open
https://github.com/huggingface/datasets/issues/7088
2024-08-05T00:45:50
2024-08-05T00:45:50
null
{ "login": "Haislich", "id": 42048782, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,447,158,643
7,087
Unable to create dataset card for Lushootseed language
### Feature request While I was creating the dataset which contained all documents from the Lushootseed Wikipedia, the dataset card asked me to enter which language the dataset was in. Since Lushootseed is a critically endangered language, it was not available as one of the options. Is it possible to allow entering la...
closed
https://github.com/huggingface/datasets/issues/7087
2024-08-04T14:27:04
2024-08-06T06:59:23
2024-08-06T06:59:22
{ "login": "vaishnavsudarshan", "id": 134876525, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,445,516,829
7,086
load_dataset ignores cached datasets and tries to hit HF Hub, resulting in API rate limit errors
### Describe the bug I have been running lm-eval-harness a lot which has results in an API rate limit. This seems strange, since all of the data should be cached locally. I have in fact verified this. ### Steps to reproduce the bug 1. Be Me 2. Run `load_dataset("TAUR-Lab/MuSR")` 3. Hit rate limit error 4. Dataset...
open
https://github.com/huggingface/datasets/issues/7086
2024-08-02T18:12:23
2025-06-16T18:43:29
null
{ "login": "tginart", "id": 11379648, "type": "User" }
[]
false
[]
2,440,008,618
7,085
[Regression] IterableDataset is broken on 2.20.0
### Describe the bug In the latest version of datasets there is a major regression, after creating an `IterableDataset` from a generator and applying a few operations (`map`, `select`), you can no longer iterate through the dataset multiple times. The issue seems to stem from the recent addition of "resumable Itera...
closed
https://github.com/huggingface/datasets/issues/7085
2024-07-31T13:01:59
2024-08-22T14:49:37
2024-08-22T14:49:07
{ "login": "AjayP13", "id": 5404177, "type": "User" }
[]
false
[]
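The property that regressed in issue #7085 above — iterating an `IterableDataset` more than once — can be shown with a tiny wrapper around a generator *factory*. Each `__iter__` call builds a fresh generator, so a second pass is not exhausted. This is a conceptual sketch, not the `datasets` implementation.

```python
class ReiterableDataset:
    """Minimal sketch: wrap a generator factory so every iteration starts
    fresh, which is the behavior the regression report says broke."""

    def __init__(self, gen_fn):
        self.gen_fn = gen_fn  # a zero-argument callable returning a new generator

    def __iter__(self):
        return self.gen_fn()

ds = ReiterableDataset(lambda: (i * i for i in range(3)))
print(list(ds))  # [0, 1, 4]
print(list(ds))  # [0, 1, 4] -- second pass works because __iter__ re-calls the factory
```

Storing a single already-created generator instead of the factory reproduces the bug: the second `list(ds)` would come back empty.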
2,439,519,534
7,084
More easily support streaming local files
### Feature request Simplify downloading and streaming datasets locally. Specifically, perhaps add an option to `load_dataset(..., streaming="download_first")` or add better support for streaming symlinked or arrow files. ### Motivation I have downloaded FineWeb-edu locally and currently trying to stream the d...
open
https://github.com/huggingface/datasets/issues/7084
2024-07-31T09:03:15
2024-07-31T09:05:58
null
{ "login": "fschlatt", "id": 23191892, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,439,518,466
7,083
fix streaming from arrow files
null
closed
https://github.com/huggingface/datasets/pull/7083
2024-07-31T09:02:42
2024-08-30T15:17:03
2024-08-30T15:17:03
{ "login": "fschlatt", "id": 23191892, "type": "User" }
[]
true
[]
2,437,354,975
7,082
Support HTTP authentication in non-streaming mode
Support HTTP authentication in non-streaming mode, by support passing HTTP storage_options in non-streaming mode. - Note that currently, HTTP authentication is supported only in streaming mode. For example, this is necessary if a remote HTTP host requires authentication to download the data.
closed
https://github.com/huggingface/datasets/pull/7082
2024-07-30T09:25:49
2024-08-08T08:29:55
2024-08-08T08:24:06
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
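PR #7082 above passes HTTP `storage_options` through in non-streaming mode so a remote host can require authentication. A hypothetical helper showing the *shape* of fsspec-style options carrying an `Authorization` header — the helper name and exact nesting are assumptions for illustration, not the library's API:

```python
def build_http_storage_options(token=None):
    """Sketch of per-protocol storage_options for an authenticated HTTP host.
    The {"https": {"client_kwargs": ...}} nesting follows fsspec conventions;
    treat the details as an assumption, not datasets' documented surface."""
    client_kwargs = {}
    if token:
        client_kwargs["headers"] = {"Authorization": f"Bearer {token}"}
    return {"https": {"client_kwargs": client_kwargs}}

opts = build_http_storage_options("my-secret-token")
print(opts["https"]["client_kwargs"]["headers"]["Authorization"])
```

Without a token the dict is still well-formed, just with no headers, so the same call site handles both public and gated hosts.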
2,437,059,657
7,081
Set load_from_disk path type as PathLike
Set `load_from_disk` path type as `PathLike`. This way it is aligned with `save_to_disk`.
closed
https://github.com/huggingface/datasets/pull/7081
2024-07-30T07:00:38
2024-07-30T08:30:37
2024-07-30T08:21:50
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,434,275,664
7,080
Generating train split takes a long time
### Describe the bug Loading a simple webdataset takes ~45 minutes. ### Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M") ``` ### Expected behavior The dataset should load immediately as it does when loaded through a normal indexed WebD...
open
https://github.com/huggingface/datasets/issues/7080
2024-07-29T01:42:43
2024-10-02T15:31:22
null
{ "login": "alexanderswerdlow", "id": 35648800, "type": "User" }
[]
false
[]
2,433,363,298
7,079
HfHubHTTPError: 500 Server Error: Internal Server Error for url:
### Describe the bug newly uploaded datasets, since yesterday, yields an error. old datasets, works fine. Seems like the datasets api server returns a 500 I'm getting the same error, when I invoke `load_dataset` with my dataset. Long discussion about it here, but I'm not sure anyone from huggingface have s...
closed
https://github.com/huggingface/datasets/issues/7079
2024-07-27T08:21:03
2024-09-20T13:26:25
2024-07-27T19:52:30
{ "login": "neoneye", "id": 147971, "type": "User" }
[]
false
[]
2,433,270,271
7,078
Fix CI test_convert_to_parquet
Fix `test_convert_to_parquet` by patching `HfApi.preupload_lfs_files` and revert temporary fix: - #7074
closed
https://github.com/huggingface/datasets/pull/7078
2024-07-27T05:32:40
2024-07-27T05:50:57
2024-07-27T05:44:32
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,432,345,489
7,077
column_names ignored by load_dataset() when loading CSV file
### Describe the bug load_dataset() ignores the column_names kwarg when loading a CSV file. Instead, it uses whatever values are on the first line of the file. ### Steps to reproduce the bug Call `load_dataset` to load data from a CSV file and specify `column_names` kwarg. ### Expected behavior The resulting da...
open
https://github.com/huggingface/datasets/issues/7077
2024-07-26T14:18:04
2024-07-30T07:52:26
null
{ "login": "luismsgomes", "id": 9130265, "type": "User" }
[]
false
[]
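Issue #7077 above expects `column_names=` to override the header row of a CSV file. The stdlib `csv.DictReader` shows the two halves of that behavior: supplying explicit `fieldnames` and dropping the file's own header line. This is a sketch of the expected semantics, not a patch to `load_dataset`.

```python
import csv
import io

data = "col1,col2\n1,a\n2,b\n"

# With explicit fieldnames, DictReader treats the first line as data,
# so we drop it ourselves -- the behavior the reporter expects from
# passing column_names= to load_dataset.
reader = csv.DictReader(io.StringIO(data), fieldnames=["x", "y"])
rows = list(reader)[1:]  # skip the original "col1,col2" header row
print(rows)  # [{'x': '1', 'y': 'a'}, {'x': '2', 'y': 'b'}]
```

Note that when `fieldnames` is omitted, `DictReader` does the opposite: it consumes the first line as the header, which matches the behavior the reporter observed.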
2,432,275,393
7,076
🧪 Do not mock create_commit
null
closed
https://github.com/huggingface/datasets/pull/7076
2024-07-26T13:44:42
2024-07-27T05:48:17
2024-07-27T05:48:17
{ "login": "coyotte508", "id": 342922, "type": "User" }
[]
true
[]
2,432,027,412
7,075
Update required soxr version from pre-release to release
Update required `soxr` version from pre-release to release 0.4.0: https://github.com/dofuuz/python-soxr/releases/tag/v0.4.0
closed
https://github.com/huggingface/datasets/pull/7075
2024-07-26T11:24:35
2024-07-26T11:46:52
2024-07-26T11:40:49
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,431,772,703
7,074
Fix CI by temporarily marking test_convert_to_parquet as expected to fail
As a hotfix for CI, temporarily mark test_convert_to_parquet as expected to fail. Fix #7073. Revert once root cause is fixed.
closed
https://github.com/huggingface/datasets/pull/7074
2024-07-26T09:03:33
2024-07-26T09:23:33
2024-07-26T09:16:12
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,431,706,568
7,073
CI is broken for convert_to_parquet: Invalid rev id: refs/pr/1 404 error causes RevisionNotFoundError
See: https://github.com/huggingface/datasets/actions/runs/10095313567/job/27915185756 ``` FAILED tests/test_hub.py::test_convert_to_parquet - huggingface_hub.utils._errors.RevisionNotFoundError: 404 Client Error. (Request ID: Root=1-66a25839-31ce7b475e70e7db1e4d44c2;b0c8870f-d5ef-4bf2-a6ff-0191f3df0f64) Revision N...
closed
https://github.com/huggingface/datasets/issues/7073
2024-07-26T08:27:41
2024-07-27T05:48:02
2024-07-26T09:16:13
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
false
[]
2,430,577,916
7,072
nm
null
closed
https://github.com/huggingface/datasets/issues/7072
2024-07-25T17:03:24
2024-07-25T20:36:11
2024-07-25T20:36:11
{ "login": "brettdavies", "id": 26392883, "type": "User" }
[]
false
[]
2,430,313,011
7,071
Filter hangs
### Describe the bug When trying to filter my custom dataset, the process hangs, regardless of the lambda function used. It appears to be an issue with the way the Images are being handled. The dataset in question is a preprocessed version of https://huggingface.co/datasets/danaaubakirova/patfig where notably, I hav...
open
https://github.com/huggingface/datasets/issues/7071
2024-07-25T15:29:05
2024-07-25T15:36:59
null
{ "login": "lucienwalewski", "id": 61711045, "type": "User" }
[]
false
[]
2,430,285,235
7,070
how set_transform affects batch size?
### Describe the bug I am trying to fine-tune w2v-bert for an ASR task. Since my dataset is so big, I preferred to use the on-the-fly method with set_transform. So I changed the preprocessing function to this: ``` def prepare_dataset(batch): input_features = processor(batch["audio"], sampling_rate=16000).input_feat...
open
https://github.com/huggingface/datasets/issues/7070
2024-07-25T15:19:34
2024-07-25T15:19:34
null
{ "login": "VafaKnm", "id": 103993288, "type": "User" }
[]
false
[]
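Issue #7070 above asks how `set_transform` interacts with batch size. A minimal sketch of the semantics: the transform runs on-the-fly over whatever slice the caller requests, so batch size is decided by the consumer (e.g. a DataLoader), not by the transform. All class and field names here are illustrative stand-ins, not the `datasets` implementation.

```python
def prepare_dataset(batch):
    # Toy stand-in for the audio featurizer: record each clip's length.
    batch["input_features"] = [len(a) for a in batch["audio"]]
    return batch

class WithTransform:
    """Sketch of set_transform semantics: the function is applied lazily
    to each accessed batch, sized by the slice the caller asks for."""

    def __init__(self, rows, transform):
        self.rows, self.transform = rows, transform

    def __getitem__(self, sl):
        batch = {"audio": [r["audio"] for r in self.rows[sl]]}
        return self.transform(batch)

ds = WithTransform([{"audio": [0.0] * n} for n in (3, 5, 2, 4)], prepare_dataset)
print(ds[0:2]["input_features"])  # [3, 5]
```

Asking for `ds[0:2]` versus `ds[0:4]` changes how many rows `prepare_dataset` sees per call, which is why the transform itself never needs a batch-size parameter.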
2,429,281,339
7,069
Fix push_to_hub by not calling create_branch if PR branch
Fix push_to_hub by not calling create_branch if PR branch (e.g. `refs/pr/1`). Note that currently create_branch raises a 400 Bad Request error if the user passes a PR branch (e.g. `refs/pr/1`). EDIT: ~~Fix push_to_hub by not calling create_branch if branch exists.~~ Note that currently create_branch raises a ...
closed
https://github.com/huggingface/datasets/pull/7069
2024-07-25T07:50:04
2024-07-31T07:10:07
2024-07-30T10:51:01
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,426,657,434
7,068
Fix prepare_single_hop_path_and_storage_options
Fix `_prepare_single_hop_path_and_storage_options`: - Do not pass HF authentication headers and HF user-agent to non-HF HTTP URLs - Do not overwrite passed `storage_options` nested values: - Before, when passed ```DownloadConfig(storage_options={"https": {"client_kwargs": {"raise_for_status": True}}})```, ...
closed
https://github.com/huggingface/datasets/pull/7068
2024-07-24T05:52:34
2024-07-29T07:02:07
2024-07-29T06:56:15
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]