Dataset schema (column, dtype, observed range):

  id               int64          599M to 3.26B
  number           int64          1 to 7.7k
  title            string         lengths 1 to 290
  body             string         lengths 0 to 228k
  state            string         2 values
  html_url         string         lengths 46 to 51
  created_at       timestamp[s]   2020-04-14 10:18:02 to 2025-07-23 08:04:53
  updated_at       timestamp[s]   2020-04-27 16:04:17 to 2025-07-23 18:53:44
  closed_at        timestamp[s]   2020-04-14 12:01:40 to 2025-07-23 16:44:42
  user             dict
  labels           list           lengths 0 to 4
  is_pull_request  bool           2 classes
  comments         list           lengths 0 to 0
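The rows below follow the schema above. As a minimal plain-Python sketch of how one might work with rows of this shape, the snippet splits records into issues and pull requests using the `is_pull_request` column. The two sample records are copied from this dump; representing rows as plain dicts is an illustrative assumption, not the library's actual storage format.

```python
# Two sample rows from this dump, keyed by the schema's column names
# (only a subset of columns is shown).
rows = [
    {
        "number": 6246,
        "title": "Add new column to dataset",
        "state": "closed",
        "is_pull_request": False,
        "html_url": "https://github.com/huggingface/datasets/issues/6246",
    },
    {
        "number": 6244,
        "title": "Add support for `fsspec>=2023.9.0`",
        "state": "closed",
        "is_pull_request": True,
        "html_url": "https://github.com/huggingface/datasets/pull/6244",
    },
]

# The bool `is_pull_request` column distinguishes plain issues from PRs;
# the `html_url` also reflects this (".../issues/N" vs ".../pull/N").
issues = [r for r in rows if not r["is_pull_request"]]
pulls = [r for r in rows if r["is_pull_request"]]

print([r["number"] for r in issues])  # [6246]
print([r["number"] for r in pulls])   # [6244]
```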
#6246 (issue, closed) Add new column to dataset
  url: https://github.com/huggingface/datasets/issues/6246 | id: 1,899,848,414
  created: 2023-09-17T16:59:48 | updated: 2023-09-18T16:20:09 | closed: 2023-09-18T16:20:09
  user: andysingal (id 20493493) | labels: [] | comments: []
  body: ### Describe the bug ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) [<ipython-input-9-bd197b36b6a0>](https://localhost:8080/#) in <cell line: 1>() ----> 1 dataset['train']['/workspace/data'] 3 frames [/...

#6244 (pull request, closed) Add support for `fsspec>=2023.9.0`
  url: https://github.com/huggingface/datasets/pull/6244 | id: 1,898,861,422
  created: 2023-09-15T17:58:25 | updated: 2023-09-26T15:41:38 | closed: 2023-09-26T15:32:51
  user: mariosasko (id 47462742) | labels: [] | comments: []
  body: Fix #6214

#6243 (pull request, closed) Fix cast from fixed size list to variable size list
  url: https://github.com/huggingface/datasets/pull/6243 | id: 1,898,532,784
  created: 2023-09-15T14:23:33 | updated: 2023-09-19T18:02:21 | closed: 2023-09-19T17:53:17
  user: mariosasko (id 47462742) | labels: [] | comments: []
  body: Fix #6242

#6242 (issue, closed) Data alteration when loading dataset with unspecified inner sequence length
  url: https://github.com/huggingface/datasets/issues/6242 | id: 1,896,899,123
  created: 2023-09-14T16:12:45 | updated: 2023-09-19T17:53:18 | closed: 2023-09-19T17:53:18
  user: qgallouedec (id 45557362) | labels: [] | comments: []
  body: ### Describe the bug When a dataset saved with a specified inner sequence length is loaded without specifying that length, the original data is altered and becomes inconsistent. ### Steps to reproduce the bug ```python from datasets import Dataset, Features, Value, Sequence, load_dataset # Repository ID repo_id...

#6241 (pull request, closed) Remove unused global variables in `audio.py`
  url: https://github.com/huggingface/datasets/pull/6241 | id: 1,896,429,694
  created: 2023-09-14T12:06:32 | updated: 2023-09-15T15:57:10 | closed: 2023-09-15T15:46:07
  user: mariosasko (id 47462742) | labels: [] | comments: []
  body: null

#6240 (issue, closed) Dataloader stuck on multiple GPUs
  url: https://github.com/huggingface/datasets/issues/6240 | id: 1,895,723,888
  created: 2023-09-14T05:30:30 | updated: 2023-09-14T23:54:42 | closed: 2023-09-14T23:54:42
  user: kuri54 (id 40049003) | labels: [] | comments: []
  body: ### Describe the bug I am trying to get CLIP to fine-tuning with my code. When I tried to run it on multiple GPUs using accelerate, I encountered the following phenomenon. - Validation dataloader stuck in 2nd epoch only on multi-GPU Specifically, when the "for inputs in valid_loader:" process is finished, it does...

#6239 (issue, closed) Load local audio data doesn't work
  url: https://github.com/huggingface/datasets/issues/6239 | id: 1,895,349,382
  created: 2023-09-13T22:30:01 | updated: 2023-09-15T14:32:10 | closed: 2023-09-15T14:32:10
  user: abodacs (id 554032) | labels: [] | comments: []
  body: ### Describe the bug I get a RuntimeError from the following code: ```python audio_dataset = Dataset.from_dict({"audio": ["/kaggle/input/bengaliai-speech/train_mp3s/000005f3362c.mp3"]}).cast_column("audio", Audio()) audio_dataset[0] ``` ### Traceback <details> ```python RuntimeError ...
#6238 (issue, closed) `dataset.filter` ALWAYS removes the first item from the dataset when using batched=True
  url: https://github.com/huggingface/datasets/issues/6238 | id: 1,895,207,828
  created: 2023-09-13T20:20:37 | updated: 2023-09-17T07:05:07 | closed: 2023-09-17T07:05:07
  user: Taytay (id 1330693) | labels: [] | comments: []
  body: ### Describe the bug If you call batched=True when calling `filter`, the first item is _always_ filtered out, regardless of the filter condition. ### Steps to reproduce the bug Here's a minimal example: ```python def filter_batch_always_true(batch, indices): print("First index being passed into this filte...

#6237 (issue, closed) Tokenization with multiple workers is too slow
  url: https://github.com/huggingface/datasets/issues/6237 | id: 1,893,822,321
  created: 2023-09-13T06:18:34 | updated: 2023-09-19T21:54:58 | closed: 2023-09-19T21:54:58
  user: macabdul9 (id 25720695) | labels: [] | comments: []
  body: I am trying to tokenize a few million documents with multiple workers but the tokenization process is taking forever. Code snippet: ``` raw_datasets.map( encode_function, batched=False, num_proc=args.preprocessing_num_workers, load_from_cache_file=not args.ove...

#6236 (issue, open) Support buffer shuffle for to_tf_dataset
  url: https://github.com/huggingface/datasets/issues/6236 | id: 1,893,648,480
  created: 2023-09-13T03:19:44 | updated: 2023-09-18T01:11:21 | closed: null
  user: EthanRock (id 7635551) | labels: enhancement (#a2eeef) | comments: []
  body: ### Feature request I'm using to_tf_dataset to convert a large dataset to tf.data.Dataset and use Keras fit to train model. Currently, to_tf_dataset only supports full size shuffle, which can be very slow on large dataset. tf.data.Dataset support buffer shuffle by default. shuffle( buffer_size, seed=None, r...

#6235 (issue, open) Support multiprocessing for download/extract nestedly
  url: https://github.com/huggingface/datasets/issues/6235 | id: 1,893,337,083
  created: 2023-09-12T21:51:08 | updated: 2023-09-12T21:51:08 | closed: null
  user: hgt312 (id 22725729) | labels: enhancement (#a2eeef) | comments: []
  body: ### Feature request Current multiprocessing for download/extract is not done nestedly. For example, when processing SlimPajama, there is only 3 processes (for train/test/val), while there are many files inside these 3 folders ``` Downloading data files #0: 0%| | 0/1 [00:00<?, ?obj/s] Downloading data f...

#6233 (pull request, closed) Update README.md
  url: https://github.com/huggingface/datasets/pull/6233 | id: 1,891,804,286
  created: 2023-09-12T06:53:06 | updated: 2023-09-13T18:20:50 | closed: 2023-09-13T18:10:04
  user: NinoRisteski (id 95188570) | labels: [] | comments: []
  body: fixed a typo

#6232 (pull request, closed) Improve error message for missing function parameters
  url: https://github.com/huggingface/datasets/pull/6232 | id: 1,891,109,762
  created: 2023-09-11T19:11:58 | updated: 2023-09-15T18:07:56 | closed: 2023-09-15T17:59:02
  user: suavemint (id 4016832) | labels: [] | comments: []
  body: The error message in the fingerprint module was missing the f-string 'f' symbol, so the error message returned by fingerprint.py, line 469 was literally "function {func} is missing parameters {fingerprint_names} in signature." This has been fixed.

#6231 (pull request, open) Overwrite legacy default config name in `dataset_infos.json` in packaged datasets
  url: https://github.com/huggingface/datasets/pull/6231 | id: 1,890,863,249
  created: 2023-09-11T16:27:09 | updated: 2023-09-26T11:19:36 | closed: null
  user: polinaeterna (id 16348744) | labels: [] | comments: []
  body: Currently if we push data as default config with `.push_to_hub` to a repo that has a legacy `dataset_infos.json` file containing a legacy default config name like `{username}--{dataset_name}`, new key `"default"` is added to `dataset_infos.json` along with the legacy one. I think the legacy one should be dropped in thi...
#6230 (pull request, closed) Don't skip hidden files in `dl_manager.iter_files` when they are given as input
  url: https://github.com/huggingface/datasets/pull/6230 | id: 1,890,521,006
  created: 2023-09-11T13:29:19 | updated: 2023-09-13T18:21:28 | closed: 2023-09-13T18:12:09
  user: mariosasko (id 47462742) | labels: [] | comments: []
  body: Required for `load_dataset(<format>, data_files=["path/to/.hidden_file"])` to work as expected

#6229 (issue, closed) Apply inference on all images in the dataset
  url: https://github.com/huggingface/datasets/issues/6229 | id: 1,889,050,954
  created: 2023-09-10T08:36:12 | updated: 2023-09-20T16:11:53 | closed: 2023-09-20T16:11:52
  user: andysingal (id 20493493) | labels: [] | comments: []
  body: ### Describe the bug ``` --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) Cell In[14], line 11 9 for idx, example in enumerate(dataset['train']): 10 image_path = example['image'] ---> 11 mask...

#6228 (pull request, closed) Remove RGB -> BGR image conversion in Object Detection tutorial
  url: https://github.com/huggingface/datasets/pull/6228 | id: 1,887,959,311
  created: 2023-09-08T16:09:13 | updated: 2023-09-08T18:02:49 | closed: 2023-09-08T17:52:16
  user: mariosasko (id 47462742) | labels: [] | comments: []
  body: Fix #6225

#6226 (pull request, closed) Add push_to_hub with multiple configs docs
  url: https://github.com/huggingface/datasets/pull/6226 | id: 1,887,462,591
  created: 2023-09-08T11:08:55 | updated: 2023-09-08T12:29:21 | closed: 2023-09-08T12:20:51
  user: lhoestq (id 42851186) | labels: [] | comments: []
  body: null

#6225 (issue, closed) Conversion from RGB to BGR in Object Detection tutorial
  url: https://github.com/huggingface/datasets/issues/6225 | id: 1,887,054,320
  created: 2023-09-08T06:49:19 | updated: 2023-09-08T17:52:18 | closed: 2023-09-08T17:52:17
  user: samokhinv (id 33297401) | labels: [] | comments: []
  body: The [tutorial](https://huggingface.co/docs/datasets/main/en/object_detection) mentions the necessity of conversion the input image from BGR to RGB > albumentations expects the image to be in BGR format, not RGB, so you’ll have to convert the image before applying the transform. [Link to tutorial](https://github.c...

#6224 (pull request, closed) Ignore `dataset_info.json` in data files resolution
  url: https://github.com/huggingface/datasets/pull/6224 | id: 1,886,043,692
  created: 2023-09-07T14:43:51 | updated: 2023-09-07T15:46:10 | closed: 2023-09-07T15:37:20
  user: mariosasko (id 47462742) | labels: [] | comments: []
  body: `save_to_disk` creates this file, but also [`HugginFaceDatasetSever`](https://github.com/gradio-app/gradio/blob/26fef8c7f85a006c7e25cdbed1792df19c512d02/gradio/flagging.py#L214), so this is needed to avoid issues such as [this one](https://discord.com/channels/879548962464493619/1149295819938349107/1149295819938349107)...

#6223 (pull request, closed) Update README.md
  url: https://github.com/huggingface/datasets/pull/6223 | id: 1,885,710,696
  created: 2023-09-07T11:33:20 | updated: 2023-09-13T22:32:31 | closed: 2023-09-13T22:23:42
  user: NinoRisteski (id 95188570) | labels: [] | comments: []
  body: fixed a few typos
#6222 (pull request, closed) fix typo in Audio dataset documentation
  url: https://github.com/huggingface/datasets/pull/6222 | id: 1,884,875,510
  created: 2023-09-06T23:17:24 | updated: 2023-10-03T14:18:41 | closed: 2023-09-07T15:39:09
  user: prassanna-ravishankar (id 3224332) | labels: [] | comments: []
  body: There is a typo in the section of the documentation dedicated to creating an audio dataset. The Dataset is incorrectly suffixed with a `Config` https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia/blob/main/librivox-indonesia.py#L59

#6221 (issue, open) Support saving datasets with custom formatting
  url: https://github.com/huggingface/datasets/issues/6221 | id: 1,884,324,631
  created: 2023-09-06T16:03:32 | updated: 2023-09-06T18:32:07 | closed: null
  user: mariosasko (id 47462742) | labels: [] | comments: []
  body: Requested in https://discuss.huggingface.co/t/using-set-transform-on-a-dataset-leads-to-an-exception/53036. I am not sure if supporting this is the best idea for the following reasons: >For this to work, we would have to pickle a custom transform, which means the transform and the objects it references need to be...

#6220 (pull request, closed) Set dev version
  url: https://github.com/huggingface/datasets/pull/6220 | id: 1,884,285,980
  created: 2023-09-06T15:40:33 | updated: 2023-09-06T15:52:33 | closed: 2023-09-06T15:41:13
  user: albertvillanova (id 8515462) | labels: [] | comments: []
  body: null

#6219 (pull request, closed) Release: 2.14.5
  url: https://github.com/huggingface/datasets/pull/6219 | id: 1,884,244,334
  created: 2023-09-06T15:17:10 | updated: 2023-09-06T15:46:20 | closed: 2023-09-06T15:18:51
  user: albertvillanova (id 8515462) | labels: [] | comments: []
  body: null

#6218 (pull request, closed) Rename old push_to_hub configs to "default" in dataset_infos
  url: https://github.com/huggingface/datasets/pull/6218 | id: 1,883,734,000
  created: 2023-09-06T10:40:05 | updated: 2023-09-07T08:31:29 | closed: 2023-09-06T11:23:56
  user: lhoestq (id 42851186) | labels: [] | comments: []
  body: Fix ```python from datasets import load_dataset_builder b = load_dataset_builder("lambdalabs/pokemon-blip-captions", "default") print(b.info) ``` which should return ``` DatasetInfo( features={'image': Image(decode=True, id=None), 'text': Value(dtype='string', id=None)}, dataset_name='pokemon-bli...

#6217 (issue, open) `Dataset.to_dict()` ignore `decode=True` with Image feature
  url: https://github.com/huggingface/datasets/issues/6217 | id: 1,883,614,607
  created: 2023-09-06T09:26:16 | updated: 2023-09-08T17:08:52 | closed: null
  user: qgallouedec (id 45557362) | labels: [] | comments: []
  body: ### Describe the bug `Dataset.to_dict` seems to ignore the decoding instruction passed in features. ### Steps to reproduce the bug ```python import datasets import numpy as np from PIL import Image img = np.random.randint(0, 256, (5, 5, 3), dtype=np.uint8) img = Image.fromarray(img) features = datasets.Fea...

#6216 (pull request, closed) Release: 2.13.2
  url: https://github.com/huggingface/datasets/pull/6216 | id: 1,883,492,703
  created: 2023-09-06T08:15:32 | updated: 2023-09-06T08:52:18 | closed: 2023-09-06T08:22:43
  user: albertvillanova (id 8515462) | labels: [] | comments: []
  body: null
#6215 (pull request, closed) Fix checking patterns to infer packaged builder
  url: https://github.com/huggingface/datasets/pull/6215 | id: 1,882,176,970
  created: 2023-09-05T15:10:47 | updated: 2023-09-06T10:34:00 | closed: 2023-09-06T10:25:00
  user: polinaeterna (id 16348744) | labels: [] | comments: []
  body: Don't ignore results of pattern resolving if `self.data_files` is not None. Otherwise lines 854 and 1037 make no sense.

#6214 (issue, closed) Unpin fsspec < 2023.9.0
  url: https://github.com/huggingface/datasets/issues/6214 | id: 1,881,736,469
  created: 2023-09-05T11:02:58 | updated: 2023-09-26T15:32:52 | closed: 2023-09-26T15:32:52
  user: albertvillanova (id 8515462) | labels: enhancement (#a2eeef) | comments: []
  body: Once root issue is fixed, remove temporary pin of fsspec < 2023.9.0 introduced by: - #6210 Related to issue: - #6209 After investigation, I think the root issue is related to the new glob behavior with double asterisk `**` they have introduced in: - https://github.com/fsspec/filesystem_spec/pull/1329

#6213 (pull request, closed) Better list array values handling in cast/embed storage
  url: https://github.com/huggingface/datasets/pull/6213 | id: 1,880,592,987
  created: 2023-09-04T16:21:23 | updated: 2024-01-11T06:32:20 | closed: 2023-10-05T15:24:34
  user: mariosasko (id 47462742) | labels: [] | comments: []
  body: Use [`array.flatten`](https://arrow.apache.org/docs/python/generated/pyarrow.ListArray.html#pyarrow.ListArray.flatten) that takes `.offset` into account instead of `array.values` in array cast/embed.

#6212 (issue, open) Tilde (~) is not supported for data_files
  url: https://github.com/huggingface/datasets/issues/6212 | id: 1,880,399,516
  created: 2023-09-04T14:23:49 | updated: 2023-09-05T08:28:39 | closed: null
  user: exs-avianello (id 128361578) | labels: [] | comments: []
  body: ### Describe the bug Attempting to `load_dataset` from a path starting with `~` (as a shorthand for the user's home directory) seems not to be fully working - at least as far as the `parquet` dataset builder is concerned. (the same file can be loaded correctly if providing its absolute path instead) I think that...

#6211 (pull request, closed) Fix empty splitinfo json
  url: https://github.com/huggingface/datasets/pull/6211 | id: 1,880,265,906
  created: 2023-09-04T13:13:53 | updated: 2023-09-04T14:58:34 | closed: 2023-09-04T14:47:17
  user: lhoestq (id 42851186) | labels: [] | comments: []
  body: If a split is empty, then the JSON split info should mention num_bytes = 0 and num_examples = 0. Until now they were omited because the JSON dumps ignore the fields that are equal to the default values. This is needed in datasets-server since we parse this information to the viewer

#6210 (pull request, closed) Temporarily pin fsspec < 2023.9.0
  url: https://github.com/huggingface/datasets/pull/6210 | id: 1,879,649,731
  created: 2023-09-04T07:07:07 | updated: 2023-09-04T07:40:23 | closed: 2023-09-04T07:30:00
  user: albertvillanova (id 8515462) | labels: [] | comments: []
  body: Temporarily pin fsspec < 2023.9.0 until permanent solution is found. Hot fix #6209.

#6209 (issue, closed) CI is broken with AssertionError: 3 failed, 12 errors
  url: https://github.com/huggingface/datasets/issues/6209 | id: 1,879,622,000
  created: 2023-09-04T06:47:05 | updated: 2023-09-04T07:30:01 | closed: 2023-09-04T07:30:01
  user: albertvillanova (id 8515462) | labels: bug (#d73a4a) | comments: []
  body: Our CI is broken: 3 failed, 12 errors See: https://github.com/huggingface/datasets/actions/runs/6069947111/job/16465138041 ``` =========================== short test summary info ============================ FAILED tests/test_load.py::ModuleFactoryTest::test_LocalDatasetModuleFactoryWithoutScript_with_data_dir - ...
#6208 (pull request, closed) Do not filter out .zip extensions from no-script datasets
  url: https://github.com/huggingface/datasets/pull/6208 | id: 1,879,572,646
  created: 2023-09-04T06:07:12 | updated: 2023-09-04T09:22:19 | closed: 2023-09-04T09:13:32
  user: albertvillanova (id 8515462) | labels: [] | comments: []
  body: This PR is a hotfix of: - #6207 That PR introduced the filtering out of `.zip` extensions. This PR reverts that. Hot fix #6207. Maybe we should do patch releases: the bug was introduced in 2.13.1. CC: @lhoestq

#6207 (issue, closed) No-script datasets with ZIP files do not load
  url: https://github.com/huggingface/datasets/issues/6207 | id: 1,879,555,234
  created: 2023-09-04T05:50:27 | updated: 2023-09-04T09:13:33 | closed: 2023-09-04T09:13:33
  user: albertvillanova (id 8515462) | labels: bug (#d73a4a) | comments: []
  body: While investigating an issue on a Hub dataset, I have discovered the no-script datasets containing ZIP files do not load. For example, that no-script dataset containing ZIP files, raises NonMatchingSplitsSizesError: ```python In [2]: ds = load_dataset("sidovic/LearningQ-qg") NonMatchingSplitsSizesError: [ { ...

#6206 (issue, closed) When calling load_dataset, raise error: pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
  url: https://github.com/huggingface/datasets/issues/6206 | id: 1,879,473,745
  created: 2023-09-04T04:14:00 | updated: 2024-04-17T15:53:29 | closed: 2023-09-04T06:05:49
  user: aihao2000 (id 51043929) | labels: [] | comments: []
  body: ### Describe the bug When calling load_dataset, raise error ``` Traceback (most recent call last): File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/builder.py", line 1694, in _pre pare_split_single ...

#6203 (issue, closed) Support loading from a DVC remote repository
  url: https://github.com/huggingface/datasets/issues/6203 | id: 1,877,491,602
  created: 2023-09-01T14:04:52 | updated: 2023-09-15T15:11:27 | closed: 2023-09-15T15:11:27
  user: bilelomrani1 (id 16692099) | labels: enhancement (#a2eeef) | comments: []
  body: ### Feature request Adding support for loading a file from a DVC repository, tracked remotely on a SCM. ### Motivation DVC is a popular version control system to version and manage datasets. The files are stored on a remote object storage platform, but they are tracked using Git. Integration with DVC is possible thr...

#6202 (issue, closed) avoid downgrading jax version
  url: https://github.com/huggingface/datasets/issues/6202 | id: 1,876,630,351
  created: 2023-09-01T02:57:57 | updated: 2023-10-12T16:28:59 | closed: 2023-10-12T16:28:59
  user: chrisflesher (id 1332458) | labels: enhancement (#a2eeef) | comments: []
  body: ### Feature request Whenever I `pip install datasets[jax]` it downgrades jax to version 0.3.25. I seem to be able to install this library first then upgrade jax back to version 0.4.13. ### Motivation It would be nice to not overwrite currently installed version of jax if possible. ### Your contribution I...

#6201 (pull request, closed) Fix to_json ValueError and remove pandas pin
  url: https://github.com/huggingface/datasets/pull/6201 | id: 1,875,256,775
  created: 2023-08-31T10:38:08 | updated: 2023-09-05T11:07:07 | closed: 2023-09-05T10:58:21
  user: albertvillanova (id 8515462) | labels: [] | comments: []
  body: This PR fixes the root cause of the issue: - #6197 This PR also removes the temporary pin of `pandas` introduced by: - #6200 Note that for orient in ['records', 'values'], index value is ignored but - in `pandas` < 2.1.0, a ValueError is raised if not index and orient not in ['split', 'table'] - for orien...

#6200 (pull request, closed) Temporarily pin pandas < 2.1.0
  url: https://github.com/huggingface/datasets/pull/6200 | id: 1,875,169,551
  created: 2023-08-31T09:45:17 | updated: 2023-08-31T10:33:24 | closed: 2023-08-31T10:24:38
  user: albertvillanova (id 8515462) | labels: [] | comments: []
  body: Temporarily pin `pandas` < 2.1.0 until permanent solution is found. Hot fix #6197.
#6199 (issue, open) Use load_dataset for local json files, but it not works
  url: https://github.com/huggingface/datasets/issues/6199 | id: 1,875,165,185
  created: 2023-08-31T09:42:34 | updated: 2023-08-31T19:05:07 | closed: null
  user: Garen-in-bush (id 50519434) | labels: [] | comments: []
  body: ### Describe the bug when I use load_dataset to load my local datasets,it always goes to Hugging Face to download the data instead of loading the local dataset. ### Steps to reproduce the bug `raw_datasets = load_dataset( ‘json’, data_files=data_files)` ### Expected behavior ![image](https://gi...

#6198 (pull request, closed) Preserve split order in DataFilesDict
  url: https://github.com/huggingface/datasets/pull/6198 | id: 1,875,092,027
  created: 2023-08-31T09:00:26 | updated: 2023-08-31T13:57:31 | closed: 2023-08-31T13:48:42
  user: albertvillanova (id 8515462) | labels: [] | comments: []
  body: After investigation, I have found that this copy forces the splits to be sorted alphabetically: https://github.com/huggingface/datasets/blob/029227a116c14720afca71b9b22e78eb2a1c09a6/src/datasets/builder.py#L556 This PR removes the alphabetically sort of `DataFilesDict` keys. - Note that for a `dict`, the order of k...

#6197 (issue, closed) ValueError: 'index=True' is only valid when 'orient' is 'split', 'table', 'index', or 'columns'
  url: https://github.com/huggingface/datasets/issues/6197 | id: 1,875,078,155
  created: 2023-08-31T08:51:50 | updated: 2023-09-01T10:35:10 | closed: 2023-08-31T10:24:40
  user: exs-avianello (id 128361578) | labels: [] | comments: []
  body: ### Describe the bug Saving a dataset `.to_json()` fails with a `ValueError` since the latest `pandas` [release](https://pandas.pydata.org/docs/dev/whatsnew/v2.1.0.html) (`2.1.0`) In their latest release we have: > Improved error handling when using [DataFrame.to_json()](https://pandas.pydata.org/docs/dev/refere...

#6196 (issue, closed) Split order is not preserved
  url: https://github.com/huggingface/datasets/issues/6196 | id: 1,875,070,972
  created: 2023-08-31T08:47:16 | updated: 2023-08-31T13:48:43 | closed: 2023-08-31T13:48:43
  user: albertvillanova (id 8515462) | labels: bug (#d73a4a) | comments: []
  body: I have noticed that in some cases the split order is not preserved. For example, consider a no-script dataset with configs: ```yaml configs: - config_name: default data_files: - split: train path: train.csv - split: test path: test.csv ``` - Note the defined split order is [train, test] On...

#6195 (issue, closed) Force to reuse cache at given path
  url: https://github.com/huggingface/datasets/issues/6195 | id: 1,874,195,585
  created: 2023-08-30T18:44:54 | updated: 2023-11-03T10:14:21 | closed: 2023-08-30T19:00:45
  user: Luosuu (id 43507393) | labels: [] | comments: []
  body: ### Describe the bug I have run the official example of MLM like: ```bash python run_mlm.py \ --model_name_or_path roberta-base \ --dataset_name togethercomputer/RedPajama-Data-1T \ --dataset_config_name arxiv \ --per_device_train_batch_size 10 \ --preprocessing_num_workers 20 ...

#6194 (issue, open) Support custom fingerprinting with `Dataset.from_generator`
  url: https://github.com/huggingface/datasets/issues/6194 | id: 1,872,598,223
  created: 2023-08-29T22:43:13 | updated: 2024-12-22T01:14:39 | closed: null
  user: bilelomrani1 (id 16692099) | labels: enhancement (#a2eeef) | comments: []
  body: ### Feature request When using `Dataset.from_generator`, the generator is hashed when building the fingerprint. Similar to `.map`, it would be interesting to let the user bypass this hashing by accepting a `fingerprint` argument to `.from_generator`. ### Motivation Using the `.from_generator` constructor with ...

#6193 (issue, open) Dataset loading script method does not work with .pyc file
  url: https://github.com/huggingface/datasets/issues/6193 | id: 1,872,285,153
  created: 2023-08-29T19:35:06 | updated: 2023-08-31T19:47:29 | closed: null
  user: riteshkumarumassedu (id 43389071) | labels: [] | comments: []
  body: ### Describe the bug The huggingface dataset library specifically looks for ‘.py’ file while loading the dataset using loading script approach and it does not work with ‘.pyc’ file. While deploying in production, it becomes an issue when we are restricted to use only .pyc files. Is there any work around for this ? #...
#6192 (pull request, closed) Set minimal fsspec version requirement to 2023.1.0
  url: https://github.com/huggingface/datasets/pull/6192 | id: 1,871,911,640
  created: 2023-08-29T15:23:41 | updated: 2023-08-30T14:01:56 | closed: 2023-08-30T13:51:32
  user: mariosasko (id 47462742) | labels: [] | comments: []
  body: Fix https://github.com/huggingface/datasets/issues/6141 Colab installs 2023.6.0, so we should be good 🙂

#6191 (pull request, closed) Add missing `revision` argument
  url: https://github.com/huggingface/datasets/pull/6191 | id: 1,871,634,840
  created: 2023-08-29T13:05:04 | updated: 2023-09-04T06:38:17 | closed: 2023-08-31T13:50:00
  user: qgallouedec (id 45557362) | labels: [] | comments: []
  body: I've noticed that when you're not working on the main branch, there are sometimes errors in the files returned. After some investigation, I realized that the revision was not properly passed everywhere. This PR proposes a fix.

#6190 (issue, closed) `Invalid user token` even when correct user token is passed!
  url: https://github.com/huggingface/datasets/issues/6190 | id: 1,871,582,175
  created: 2023-08-29T12:37:03 | updated: 2023-08-29T13:01:10 | closed: 2023-08-29T13:01:09
  user: Vaibhavs10 (id 18682411) | labels: [] | comments: []
  body: ### Describe the bug I'm working on a dataset which comprises other datasets on the hub. URL: https://huggingface.co/datasets/open-asr-leaderboard/datasets-test-only Note: Some of the sub-datasets in this metadataset require explicit access. All the other datasets work fine, except, `common_voice`. ### Steps t...

#6189 (pull request, closed) Don't alter input in Features.from_dict
  url: https://github.com/huggingface/datasets/pull/6189 | id: 1,871,569,855
  created: 2023-08-29T12:29:47 | updated: 2023-08-29T13:04:59 | closed: 2023-08-29T12:52:48
  user: lhoestq (id 42851186) | labels: [] | comments: []
  body: null

#6188 (issue, closed) [Feature Request] Check the length of batch before writing so that empty batch is allowed
  url: https://github.com/huggingface/datasets/issues/6188 | id: 1,870,987,640
  created: 2023-08-29T06:37:34 | updated: 2023-09-19T21:55:38 | closed: 2023-09-19T21:55:37
  user: namespace-Pt (id 61188463) | labels: [] | comments: []
  body: ### Use Case I use `dataset.map(process_fn, batched=True)` to process the dataset, with data **augmentations or filtering**. However, when all examples within a batch is filtered out, i.e. **an empty batch is returned**, the following error will be thrown: ``` ValueError: Schema and number of arrays unequal `...

#6187 (issue, open) Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory
  url: https://github.com/huggingface/datasets/issues/6187 | id: 1,870,936,143
  created: 2023-08-29T05:49:56 | updated: 2023-08-29T16:21:45 | closed: null
  user: andysingal (id 20493493) | labels: [] | comments: []
  body: ### Describe the bug ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) [<ipython-input-48-6a7b3e847019>](https://localhost:8080/#) in <cell line: 7>() 5 } 6 ----> 7 csv_datasets_reloaded = load_...

#6186 (issue, closed) Feature request: add code example of multi-GPU processing
  url: https://github.com/huggingface/datasets/issues/6186 | id: 1,869,431,457
  created: 2023-08-28T10:00:59 | updated: 2024-10-07T09:39:51 | closed: 2023-11-22T15:42:20
  user: NielsRogge (id 48327001) | labels: documentation (#0075ca), enhancement (#a2eeef) | comments: []
  body: ### Feature request Would be great to add a code example of how to do multi-GPU processing with 🤗 Datasets in the documentation. cc @stevhliu Currently the docs has a small [section](https://huggingface.co/docs/datasets/v2.3.2/en/process#map) on this saying "your big GPU call goes here", however it didn't work f...
#6185 (issue, open) Error in saving the PIL image into *.arrow files using datasets.arrow_writer
  url: https://github.com/huggingface/datasets/issues/6185 | id: 1,868,077,748
  created: 2023-08-26T12:15:57 | updated: 2023-08-29T14:49:58 | closed: null
  user: HaozheZhao (id 14247682) | labels: [] | comments: []
  body: ### Describe the bug I am using the ArrowWriter from datasets.arrow_writer to save a json-style file as arrow files. Within the dictionary, it contains a feature called "image" which is a list of PIL.Image objects. I am saving the json using the following script: ``` def save_to_arrow(path,temp): with ArrowWri...

#6184 (issue, closed) Map cache does not detect function changes in another module
  url: https://github.com/huggingface/datasets/issues/6184 | id: 1,867,766,143
  created: 2023-08-25T22:59:14 | updated: 2023-08-29T20:57:07 | closed: 2023-08-29T20:56:49
  user: jonathanasdf (id 511073) | labels: duplicate (#cfd3d7) | comments: []
  body: ```python # dataset.py import os import datasets if not os.path.exists('/tmp/test.json'): with open('/tmp/test.json', 'w') as file: file.write('[{"text": "hello"}]') def transform(example): text = example['text'] # text += ' world' return {'text': text} data = datasets.load_dataset('json', ...

#6183 (issue, closed) Load dataset with non-existent file
  url: https://github.com/huggingface/datasets/issues/6183 | id: 1,867,743,276
  created: 2023-08-25T22:21:22 | updated: 2023-08-29T13:26:22 | closed: 2023-08-29T13:26:22
  user: freQuensy23-coder (id 64750224) | labels: [] | comments: []
  body: ### Describe the bug When load a dataset from datasets and pass a wrong path to json with the data, error message does not contain something abount "wrong path" or "file do not exist" - ```SchemaInferenceError: Please pass `features` or at least one example when writing data``` ### Steps to reproduce the bug ...

#6182 (issue, closed) Loading Meteor metric in HF evaluate module crashes due to datasets import issue
  url: https://github.com/huggingface/datasets/issues/6182 | id: 1,867,203,131
  created: 2023-08-25T14:54:06 | updated: 2023-09-04T16:41:11 | closed: 2023-08-31T14:38:23
  user: dsashulya (id 42322648) | labels: [] | comments: []
  body: ### Describe the bug When using python3.9 and ```evaluate``` module loading Meteor metric crashes at a non-existent import from ```datasets.config``` in ```datasets v2.14``` ### Steps to reproduce the bug ``` from evaluate import load meteor = load("meteor") ``` produces the following error: ``` from d...

#6181 (pull request, closed) Fix import in `image_load` doc
  url: https://github.com/huggingface/datasets/pull/6181 | id: 1,867,035,522
  created: 2023-08-25T13:12:19 | updated: 2023-08-25T16:12:46 | closed: 2023-08-25T16:02:24
  user: mariosasko (id 47462742) | labels: [] | comments: []
  body: Reported on [Discord](https://discord.com/channels/879548962464493619/1144295822209581168/1144295822209581168)

#6180 (pull request, closed) Use `hf-internal-testing` repos for hosting test dataset repos
  url: https://github.com/huggingface/datasets/pull/6180 | id: 1,867,032,578
  created: 2023-08-25T13:10:26 | updated: 2023-08-25T16:58:02 | closed: 2023-08-25T16:46:22
  user: mariosasko (id 47462742) | labels: [] | comments: []
  body: Use `hf-internal-testing` for hosting instead of the maintainers' dataset repos.

#6179 (issue, open) Map cache with tokenizer
  url: https://github.com/huggingface/datasets/issues/6179 | id: 1,867,009,016
  created: 2023-08-25T12:55:18 | updated: 2023-08-31T15:17:24 | closed: null
  user: jonathanasdf (id 511073) | labels: [] | comments: []
  body: Similar issue to https://github.com/huggingface/datasets/issues/5985, but across different sessions rather than two calls in the same session. Unlike that issue, explicitly calling tokenizer(my_args) before the map() doesn't help, because the tokenizer was created with a different hash to begin with... setup ```...
1,866,610,102
6,178
'import datasets' throws "invalid syntax error"
### Describe the bug Hi, I have been trying to import the datasets library but I keep gtting this error. `Traceback (most recent call last): File /opt/local/jupyterhub/lib64/python3.9/site-packages/IPython/core/interactiveshell.py:3508 in run_code exec(code_obj, self.user_global_ns, self.user_ns) ...
closed
https://github.com/huggingface/datasets/issues/6178
2023-08-25T08:35:14
2023-09-27T17:33:39
2023-09-27T17:33:39
{ "login": "elia-ashraf", "id": 128580829, "type": "User" }
[]
false
[]
1,865,490,962
6,177
Use object detection images from `huggingface/documentation-images`
null
closed
https://github.com/huggingface/datasets/pull/6177
2023-08-24T16:16:09
2023-08-25T16:30:00
2023-08-25T16:21:17
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,864,436,408
6,176
how to limit the size of memory mapped file?
### Describe the bug Huggingface datasets use memory-mapped files to map large datasets in memory for fast access. However, it seems like huggingface will occupy all the memory for memory-mapped files, which makes a troublesome situation since our cluster will distribute a small portion of memory to me (once it's over ...
open
https://github.com/huggingface/datasets/issues/6176
2023-08-24T05:33:45
2023-10-11T06:00:10
null
{ "login": "williamium3000", "id": 47763855, "type": "User" }
[]
false
[]
1,863,592,678
6,175
PyArrow 13 CI fixes
Fixes: * bumps the PyArrow version check in the `cast_array_to_feature` to avoid the offset bug (still not fixed) * aligns the Pandas formatting tests with the Numpy ones (the current test fails due to https://github.com/apache/arrow/pull/35656, which requires `.to_pandas(coerce_temporal_nanoseconds=True)` to always ...
closed
https://github.com/huggingface/datasets/pull/6175
2023-08-23T15:45:53
2023-08-25T13:15:59
2023-08-25T13:06:52
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,863,422,065
6,173
Fix CI for pyarrow 13.0.0
pyarrow 13.0.0 just came out ``` FAILED tests/test_formatting.py::ArrowExtractorTest::test_pandas_extractor - AssertionError: Attributes of Series are different Attribute "dtype" are different [left]: datetime64[us, UTC] [right]: datetime64[ns, UTC] ``` ``` FAILED tests/test_table.py::test_cast_sliced_fi...
closed
https://github.com/huggingface/datasets/issues/6173
2023-08-23T14:11:20
2023-08-25T13:06:53
2023-08-25T13:06:53
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
1,863,318,027
6,172
Make Dataset streaming queries retryable
### Feature request Streaming datasets, as intended, do not load the entire dataset in memory or disk. However, while querying the next data chunk from the remote, sometimes it is possible that the service is down or there might be other issues that may cause the query to fail. In such a scenario, it would be nice to ...
open
https://github.com/huggingface/datasets/issues/6172
2023-08-23T13:15:38
2023-11-06T13:54:16
null
{ "login": "rojagtap", "id": 42299342, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,862,922,767
6,171
Fix typo in about_mapstyle_vs_iterable.mdx
null
closed
https://github.com/huggingface/datasets/pull/6171
2023-08-23T09:21:11
2023-08-23T09:32:59
2023-08-23T09:21:19
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,862,705,731
6,170
feat: Return the name of the currently loaded file
Added an optional parameter return_file_name in the load_dataset function. When it is set to True, the function will include the name of the file corresponding to the current line as a feature in the returned output. I added this here https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/js...
open
https://github.com/huggingface/datasets/pull/6170
2023-08-23T07:08:17
2023-08-29T12:41:05
null
{ "login": "Amitesh-Patel", "id": 124021133, "type": "User" }
[]
true
[]
1,862,360,199
6,169
Configurations in yaml not working
### Dataset configurations cannot be created in YAML/README Hello! I'm trying to follow the docs here in order to create structure in my dataset as added from here (#5331): https://github.com/huggingface/datasets/blob/8b8e6ee067eb74e7965ca2a6768f15f9398cb7c8/docs/source/repository_structure.mdx#L110-L118 I have t...
open
https://github.com/huggingface/datasets/issues/6169
2023-08-23T00:13:22
2023-08-23T15:35:31
null
{ "login": "tsor13", "id": 45085098, "type": "User" }
[]
false
[]
1,861,867,274
6,168
Fix ArrayXD YAML conversion
Replace the `shape` tuple with a list in the `ArrayXD` YAML conversion. Fix #6112
closed
https://github.com/huggingface/datasets/pull/6168
2023-08-22T17:02:54
2023-12-12T15:06:59
2023-12-12T15:00:43
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,861,474,327
6,167
Allow hyphen in split name
To fix https://discuss.huggingface.co/t/error-when-setting-up-the-dataset-viewer-streamingrowserror/51276.
closed
https://github.com/huggingface/datasets/pull/6167
2023-08-22T13:30:59
2024-01-11T06:31:31
2023-08-22T15:38:53
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,861,259,055
6,166
Document BUILDER_CONFIG_CLASS
Related to https://github.com/huggingface/datasets/issues/6130
closed
https://github.com/huggingface/datasets/pull/6166
2023-08-22T11:27:41
2023-08-23T14:01:25
2023-08-23T13:52:36
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,861,124,284
6,165
Fix multiprocessing with spawn in iterable datasets
The "Spawn" method is preferred when multiprocessing on macOS or Windows systems, instead of the "Fork" method on linux systems. This causes some methods of Iterable Datasets to break when using a dataloader with more than 0 workers. I fixed the issue by replacing lambda and local methods which are not pickle-abl...
closed
https://github.com/huggingface/datasets/pull/6165
2023-08-22T10:07:23
2023-08-29T13:27:14
2023-08-29T13:18:11
{ "login": "bruno-hays", "id": 48770768, "type": "User" }
[]
true
[]
1,859,560,007
6,164
Fix: Missing a MetadataConfigs init when the repo has a `datasets_info.json` but no README
When I try to push to an arrow repo (can provide the link on Slack), it uploads the files but fails to update the metadata, with ``` File "app.py", line 123, in add_new_eval eval_results[level].push_to_hub(my_repo, token=TOKEN, split=SPLIT) File "blabla_my_env_path/lib/python3.10/site-packages/datasets/arro...
closed
https://github.com/huggingface/datasets/pull/6164
2023-08-21T14:57:54
2023-08-21T16:27:05
2023-08-21T16:18:26
{ "login": "clefourrier", "id": 22726840, "type": "User" }
[]
true
[]
1,857,682,241
6,163
Error type: ArrowInvalid Details: Failed to parse string: '[254,254]' as a scalar of type int32
### Describe the bug I am getting the following error while I am trying to upload the CSV sheet to train a model. My CSV sheet content is exactly same as shown in the example CSV file in the Auto Train page. Attaching screenshot of error for reference. I have also tried converting the index of the answer that are inte...
open
https://github.com/huggingface/datasets/issues/6163
2023-08-19T11:34:40
2025-07-22T12:04:46
null
{ "login": "shishirCTC", "id": 90616801, "type": "User" }
[]
false
[]
1,856,198,342
6,162
load_dataset('json',...) from togethercomputer/RedPajama-Data-1T errors when jsonl rows contains different data fields
### Describe the bug When loading some jsonl from redpajama-data-1T github source [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) fails due to one row of the file containing an extra field called **symlink_target: string>**. When deleting that line the loading...
open
https://github.com/huggingface/datasets/issues/6162
2023-08-18T07:19:39
2023-08-18T17:00:35
null
{ "login": "rbrugaro", "id": 82971690, "type": "User" }
[]
false
[]
1,855,794,354
6,161
Fix protocol prefix for Beam
Fix #6147
closed
https://github.com/huggingface/datasets/pull/6161
2023-08-17T22:40:37
2024-03-18T17:01:21
2024-03-18T17:01:21
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,855,760,543
6,160
Fix Parquet loading with `columns`
Fix #6149
closed
https://github.com/huggingface/datasets/pull/6160
2023-08-17T21:58:24
2023-08-17T22:44:59
2023-08-17T22:36:04
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,855,691,512
6,159
Add `BoundingBox` feature
... to make working with object detection datasets easier. Currently, `Sequence(int_or_float, length=4)` can be used to represent this feature optimally (in the storage backend), so I only see this feature being useful if we make it work with the viewer. Also, bounding boxes usually come in 4 different formats (explain...
open
https://github.com/huggingface/datasets/issues/6159
2023-08-17T20:49:51
2024-11-18T17:58:43
null
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,855,374,220
6,158
[docs] Complete `to_iterable_dataset`
Finishes the `to_iterable_dataset` documentation by adding it to the relevant sections in the tutorial and guide.
closed
https://github.com/huggingface/datasets/pull/6158
2023-08-17T17:02:11
2023-08-17T19:24:20
2023-08-17T19:13:15
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[]
true
[]
1,855,265,663
6,157
DatasetInfo.__init__() got an unexpected keyword argument '_column_requires_decoding'
### Describe the bug When I was in load_dataset, it said "DatasetInfo.__init__() got an unexpected keyword argument '_column_requires_decoding'". The second time I ran it, there was no error and the dataset object worked ```python --------------------------------------------------------------------------- TypeErr...
closed
https://github.com/huggingface/datasets/issues/6157
2023-08-17T15:48:11
2023-09-27T17:36:14
2023-09-27T17:36:14
{ "login": "aihao2000", "id": 51043929, "type": "User" }
[]
false
[]
1,854,768,618
6,156
Why not use self._epoch as seed to shuffle in distributed training with IterableDataset
### Describe the bug Currently, distributed training with `IterableDataset` needs to pass fixed seed to shuffle to keep each node use the same seed to avoid overlapping. https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1174-L1177 My question ...
closed
https://github.com/huggingface/datasets/issues/6156
2023-08-17T10:58:20
2023-08-17T14:33:15
2023-08-17T14:33:14
{ "login": "npuichigo", "id": 11533479, "type": "User" }
[]
false
[]
1,854,661,682
6,155
Raise FileNotFoundError when passing data_files that don't exist
e.g. when running `load_dataset("parquet", data_files="doesnt_exist.parquet")`
closed
https://github.com/huggingface/datasets/pull/6155
2023-08-17T09:49:48
2023-08-18T13:45:58
2023-08-18T13:35:13
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,854,595,943
6,154
Use yaml instead of get data patterns when possible
This would make the data files resolution faster: no need to list all the data files to infer the dataset builder to use. fix https://github.com/huggingface/datasets/issues/6140
closed
https://github.com/huggingface/datasets/pull/6154
2023-08-17T09:17:05
2023-08-17T20:46:25
2023-08-17T20:37:19
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,852,494,646
6,152
FolderBase Dataset automatically resolves under current directory when data_dir is not specified
### Describe the bug FolderBase Dataset automatically resolves under current directory when data_dir is not specified. For example: ``` load_dataset("audiofolder") ``` takes long time to resolve and collect data_files from current directory. But I think it should reach out to this line for error handling https:...
closed
https://github.com/huggingface/datasets/issues/6152
2023-08-16T04:38:09
2025-06-18T14:18:42
2025-06-18T14:18:42
{ "login": "npuichigo", "id": 11533479, "type": "User" }
[ { "name": "good first issue", "color": "7057ff" } ]
false
[]
1,851,497,818
6,151
Faster sorting for single key items
### Feature request A faster way to sort a dataset which contains a large number of rows. ### Motivation The current sorting implementations took significantly longer than expected when I was running on a dataset trying to sort by timestamps. **Code snippet:** ```python ds = datasets.load_dataset( "json"...
closed
https://github.com/huggingface/datasets/issues/6151
2023-08-15T14:02:31
2023-08-21T14:38:26
2023-08-21T14:38:25
{ "login": "jackapbutler", "id": 47942453, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,850,740,456
6,150
Allow dataset implement .take
### Feature request I want to do: ``` dataset.take(512) ``` but it only works with streaming = True ### Motivation uniform interface to data sets. Really surprising the above only works with streaming = True. ### Your contribution Should be trivial to copy paste the IterableDataset .take to use the local pa...
open
https://github.com/huggingface/datasets/issues/6150
2023-08-15T00:17:51
2023-08-17T13:49:37
null
{ "login": "brando90", "id": 1855278, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,850,700,624
6,149
Dataset.from_parquet cannot load subset of columns
### Describe the bug When using `Dataset.from_parquet(path_or_paths, columns=[...])` and a subset of columns, loading fails with a variant of the following ``` ValueError: Couldn't cast a: int64 -- schema metadata -- pandas: '{"index_columns": [], "column_indexes": [], "columns": [{"name":' + 273 to {'a': V...
closed
https://github.com/huggingface/datasets/issues/6149
2023-08-14T23:28:22
2023-08-17T22:36:05
2023-08-17T22:36:05
{ "login": "dwyatte", "id": 2512762, "type": "User" }
[]
false
[]
1,849,524,683
6,148
Ignore parallel warning in map_nested
This warning message was shown every time you pass num_proc to `load_dataset` because of `map_nested` ``` parallel_map is experimental and might be subject to breaking changes in the future ``` This PR removes it for `map_nested`. If someone uses another parallel backend they're already warned when `parallel_ba...
closed
https://github.com/huggingface/datasets/pull/6148
2023-08-14T10:43:41
2023-08-17T08:54:06
2023-08-17T08:43:58
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,848,914,830
6,147
ValueError when running BeamBasedBuilder with GCS path in cache_dir
### Describe the bug When running the BeamBasedBuilder with a GCS path specified in the cache_dir, the following ValueError occurs: ``` ValueError: Unable to get filesystem from specified path, please use the correct path or ensure the required dependency is installed, e.g., pip install apache-beam[gcp]. Path spec...
closed
https://github.com/huggingface/datasets/issues/6147
2023-08-14T03:11:34
2024-03-18T16:59:15
2024-03-18T16:59:14
{ "login": "ktrk115", "id": 13844767, "type": "User" }
[]
false
[]
1,848,417,366
6,146
DatasetGenerationError when load glue benchmark datasets from `load_dataset`
### Describe the bug Package version: datasets-2.14.4 When I run the codes: ``` from datasets import load_dataset dataset = load_dataset("glue", "ax") ``` I got the following errors: --------------------------------------------------------------------------- SchemaInferenceError ...
closed
https://github.com/huggingface/datasets/issues/6146
2023-08-13T05:17:56
2023-08-26T22:09:09
2023-08-26T22:09:09
{ "login": "yusx-swapp", "id": 78742415, "type": "User" }
[]
false
[]
1,852,630,074
6,153
custom load dataset to hub
### System Info kaggle notebook i transformed dataset: ``` dataset = load_dataset("Dahoas/first-instruct-human-assistant-prompt") ``` to formatted_dataset: ``` Dataset({ features: ['message_tree_id', 'message_tree_text'], num_rows: 33143 }) ``` but would like to know how to upload to hub ### ...
closed
https://github.com/huggingface/datasets/issues/6153
2023-08-13T04:42:22
2023-11-21T11:50:28
2023-10-08T17:04:16
{ "login": "andysingal", "id": 20493493, "type": "User" }
[]
false
[]
1,847,811,310
6,145
Export to_iterable_dataset to document
Fix the export of a missing method of `Dataset`
closed
https://github.com/huggingface/datasets/pull/6145
2023-08-12T07:00:14
2023-08-15T17:04:01
2023-08-15T16:55:24
{ "login": "npuichigo", "id": 11533479, "type": "User" }
[]
true
[]
1,847,296,711
6,144
NIH exporter file not found
### Describe the bug can't use or download the nih exporter pile data. ``` 15 experiment_compute_diveristy_coeff_single_dataset_then_combined_datasets_with_domain_weights() 16 File "/lfs/ampere1/0/brando9/beyond-scale-language-data-diversity/src/diversity/div_coeff.py", line 474, in experiment_compute_diveri...
open
https://github.com/huggingface/datasets/issues/6144
2023-08-11T19:05:25
2023-08-14T23:28:38
null
{ "login": "brando90", "id": 1855278, "type": "User" }
[]
false
[]
1,846,205,216
6,142
the-stack-dedup fails to generate
### Describe the bug I'm getting an error generating the-stack-dedup with datasets 2.13.1, and with 2.14.4 nothing happens. ### Steps to reproduce the bug My code: ``` import os import datasets as ds MY_CACHE_DIR = "/home/ubuntu/the-stack-dedup-local" MY_TOKEN="my-token" the_stack_ds = ds.load_dataset("...
closed
https://github.com/huggingface/datasets/issues/6142
2023-08-11T05:10:49
2023-08-17T09:26:13
2023-08-17T09:26:13
{ "login": "michaelroyzen", "id": 45830328, "type": "User" }
[]
false
[]
1,846,117,729
6,141
TypeError: ClientSession._request() got an unexpected keyword argument 'https'
### Describe the bug Hello, when I ran the [code snippet](https://huggingface.co/docs/datasets/v2.14.4/en/loading#json) on the document, I encountered the following problem: ``` Python 3.10.9 (main, Mar 1 2023, 18:23:06) [GCC 11.2.0] on linux Type "help", "copyright", "credits" or "license" for more informatio...
closed
https://github.com/huggingface/datasets/issues/6141
2023-08-11T02:40:32
2023-08-30T13:51:33
2023-08-30T13:51:33
{ "login": "q935970314", "id": 35994018, "type": "User" }
[]
false
[]
1,845,384,712
6,140
Misalignment between file format specified in configs metadata YAML and the inferred builder
There is a misalignment between the format of the `data_files` specified in the configs metadata YAML (CSV): ```yaml configs: - config_name: default data_files: - split: train path: data.csv ``` and the inferred builder (JSON). Note there are multiple JSON files in the repo, but they do not...
closed
https://github.com/huggingface/datasets/issues/6140
2023-08-10T15:07:34
2023-08-17T20:37:20
2023-08-17T20:37:20
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]