id: int64 (599M to 3.26B)
number: int64 (1 to 7.7k)
title: string (length 1 to 290)
body: string (length 0 to 228k)
state: string (2 values)
html_url: string (length 46 to 51)
created_at: timestamp[s] (2020-04-14 10:18:02 to 2025-07-23 08:04:53)
updated_at: timestamp[s] (2020-04-27 16:04:17 to 2025-07-23 18:53:44)
closed_at: timestamp[s] (2020-04-14 12:01:40 to 2025-07-23 16:44:42)
user: dict
labels: list (length 0 to 4)
is_pull_request: bool (2 classes)
comments: list (length 0 to 0)
1,805,887,184
6,037
Documentation links to examples are broken
### Describe the bug The links at the bottom of [add_dataset](https://huggingface.co/docs/datasets/v1.2.1/add_dataset.html) to examples of specific datasets are all broken, for example - text classification: [ag_news](https://github.com/huggingface/datasets/blob/master/datasets/ag_news/ag_news.py) (original data ...
closed
https://github.com/huggingface/datasets/issues/6037
2023-07-15T04:54:50
2023-07-17T22:35:14
2023-07-17T15:10:32
{ "login": "david-waterworth", "id": 5028974, "type": "User" }
[]
false
[]
1,805,138,898
6,036
Deprecate search API
The Search API only supports Faiss and ElasticSearch as vector stores, is somewhat difficult to maintain (e.g., it still doesn't support ElasticSearch 8.0, difficult testing, ...), does not have the best design (adds a bunch of methods to the `Dataset` class that are only useful after creating an index), the usage doesn...
open
https://github.com/huggingface/datasets/pull/6036
2023-07-14T16:22:09
2023-09-07T16:44:32
null
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,805,087,687
6,035
Dataset representation
`__repr__` and `_repr_html_` are now both similar to those of Polars
open
https://github.com/huggingface/datasets/pull/6035
2023-07-14T15:42:37
2023-07-19T19:41:35
null
{ "login": "Ganryuu", "id": 63643948, "type": "User" }
[]
true
[]
1,804,501,361
6,034
load_dataset hangs on WSL
### Describe the bug load_dataset simply hangs. It happens once every ~5 times, and interestingly hangs for a multiple of 5 minutes (hangs for 5/10/15 minutes). Using the profiler in PyCharm shows that it spends the time at <method 'connect' of '_socket.socket' objects>. However, a local cache is available so I am not...
closed
https://github.com/huggingface/datasets/issues/6034
2023-07-14T09:03:10
2023-07-14T14:48:29
2023-07-14T14:48:29
{ "login": "Andy-Zhou2", "id": 20140522, "type": "User" }
[]
false
[]
1,804,482,051
6,033
`map` function doesn't fully utilize `input_columns`.
### Describe the bug I wanted to select only some columns of data. And I thought that's why the argument `input_columns` exists. What I expected is like this: If there are ["a", "b", "c", "d"] columns, and if I set `input_columns=["a", "d"]`, the data will have only ["a", "d"] columns. But it doesn't select co...
closed
https://github.com/huggingface/datasets/issues/6033
2023-07-14T08:49:28
2023-07-14T09:16:04
2023-07-14T09:16:04
{ "login": "kwonmha", "id": 8953934, "type": "User" }
[]
false
[]
1,804,358,679
6,032
DownloadConfig.proxies not work when load_dataset_builder calling HfApi.dataset_info
### Describe the bug ```python download_config = DownloadConfig(proxies={'https': '<my proxy>'}) builder = load_dataset_builder(..., download_config=download_config) ``` But when getting the dataset_info from HfApi, the HTTP requests do not use the proxies. ### Steps to reproduce the bug 1. Setup proxies i...
open
https://github.com/huggingface/datasets/issues/6032
2023-07-14T07:22:55
2023-09-11T13:50:41
null
{ "login": "codingl2k1", "id": 138426806, "type": "User" }
[]
false
[]
1,804,183,858
6,031
Argument type for map function changes when using `input_columns` for `IterableDataset`
### Describe the bug I wrote a `tokenize(examples)` function as an argument for the `map` function of `IterableDataset`. It processes a dictionary-type `examples` parameter. It is used in `train_dataset = train_dataset.map(tokenize, batched=True)` No error is raised. And then, I found some unnecessary keys and val...
closed
https://github.com/huggingface/datasets/issues/6031
2023-07-14T05:11:14
2023-07-14T14:44:15
2023-07-14T14:44:15
{ "login": "kwonmha", "id": 8953934, "type": "User" }
[]
false
[]
1,803,864,744
6,030
fixed typo in comment
This mistake was a bit confusing, so I thought it was worth sending a PR over.
closed
https://github.com/huggingface/datasets/pull/6030
2023-07-13T22:49:57
2023-07-14T14:21:58
2023-07-14T14:13:38
{ "login": "NightMachinery", "id": 36224762, "type": "User" }
[]
true
[]
1,803,460,046
6,029
[docs] Fix link
Fixes link to the builder classes :)
closed
https://github.com/huggingface/datasets/pull/6029
2023-07-13T17:24:12
2023-07-13T17:47:41
2023-07-13T17:38:59
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[]
true
[]
1,803,294,981
6,028
Use new hffs
Thanks to @janineguo's work in https://github.com/huggingface/datasets/pull/5919, which was needed to support HfFileSystem. Switching to `HfFileSystem` will help implement optimizations in data files resolution ## Implementation details I replaced all the from_hf_repo and from_local_or_remote in data_files.p...
closed
https://github.com/huggingface/datasets/pull/6028
2023-07-13T15:41:44
2023-07-17T17:09:39
2023-07-17T17:01:00
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,803,008,486
6,027
Delete `task_templates` in `IterableDataset` when they are no longer valid
Fix #6025
closed
https://github.com/huggingface/datasets/pull/6027
2023-07-13T13:16:17
2023-07-13T14:06:20
2023-07-13T13:57:35
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,802,929,222
6,026
Fix style with ruff 0.0.278
null
closed
https://github.com/huggingface/datasets/pull/6026
2023-07-13T12:34:24
2023-07-13T12:46:26
2023-07-13T12:37:01
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,801,852,601
6,025
Using a dataset for a use other than it was intended for.
### Describe the bug Hi, I want to use the rotten tomatoes dataset for a task other than classification, but when I interleave the dataset, it throws ```'ValueError: Column label is not present in features.'```. It seems that the label_col must be there in the dataset for some reason? Here is the full stacktra...
closed
https://github.com/huggingface/datasets/issues/6025
2023-07-12T22:33:17
2023-07-13T13:57:36
2023-07-13T13:57:36
{ "login": "surya-narayanan", "id": 17240858, "type": "User" }
[]
false
[]
1,801,708,808
6,024
Don't reference self in Spark._validate_cache_dir
Fix for https://github.com/huggingface/datasets/issues/5963
closed
https://github.com/huggingface/datasets/pull/6024
2023-07-12T20:31:16
2023-07-13T16:58:32
2023-07-13T12:37:09
{ "login": "maddiedawson", "id": 106995444, "type": "User" }
[]
true
[]
1,801,272,420
6,023
Fix `ClassLabel` min max check for `None` values
Fix #6022
closed
https://github.com/huggingface/datasets/pull/6023
2023-07-12T15:46:12
2023-07-12T16:29:26
2023-07-12T16:18:04
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,800,092,589
6,022
Batch map raises TypeError: '>=' not supported between instances of 'NoneType' and 'int'
### Describe the bug When mapping some datasets with `batched=True`, datasets may raise an exception: ```python Traceback (most recent call last): File "/Users/codingl2k1/Work/datasets/venv/lib/python3.11/site-packages/multiprocess/pool.py", line 125, in worker result = (True, func(*args, **kwds)) ...
closed
https://github.com/huggingface/datasets/issues/6022
2023-07-12T03:20:17
2023-07-12T16:18:06
2023-07-12T16:18:05
{ "login": "codingl2k1", "id": 138426806, "type": "User" }
[]
false
[]
1,799,785,904
6,021
[docs] Update return statement of index search
Clarifies in the return statement of the docstring that the retrieval score is `IndexFlatL2` by default (see [PR](https://github.com/huggingface/transformers/issues/24739) and internal Slack [convo](https://huggingface.slack.com/archives/C01229B19EX/p1689105179711689)), and fixes the formatting because multiple return ...
closed
https://github.com/huggingface/datasets/pull/6021
2023-07-11T21:33:32
2023-07-12T17:13:02
2023-07-12T17:03:00
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[]
true
[]
1,799,720,536
6,020
Inconsistent "The features can't be aligned" error when combining map, multiprocessing, and variable length outputs
### Describe the bug I'm using a dataset with map and multiprocessing to run a function that returned a variable length list of outputs. This output list may be empty. Normally this is handled fine, but there is an edge case that crops up when using multiprocessing. In some cases, an empty list result ends up in a dat...
open
https://github.com/huggingface/datasets/issues/6020
2023-07-11T20:40:38
2024-10-27T06:30:13
null
{ "login": "kheyer", "id": 38166299, "type": "User" }
[]
false
[]
1,799,532,822
6,019
Improve logging
Adds the StreamHandler (as `hfh` and `transformers` do) to the library's logger to log INFO messages and logs the messages about "loading a cached result" (and some other warnings) as INFO (Also removes the `leave=False` arg in the progress bars to be consistent with `hfh` and `transformers` - progress bars serve as...
closed
https://github.com/huggingface/datasets/pull/6019
2023-07-11T18:30:23
2023-07-12T19:34:14
2023-07-12T17:19:28
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,799,411,999
6,018
test1
null
closed
https://github.com/huggingface/datasets/pull/6018
2023-07-11T17:25:49
2023-07-20T10:11:41
2023-07-20T10:11:41
{ "login": "ognjenovicj", "id": 139256323, "type": "User" }
[]
true
[]
1,799,309,132
6,017
Switch to huggingface_hub's HfFileSystem
instead of the current datasets.filesystems.hffilesystem.HfFileSystem which can be slow in some cases related to https://github.com/huggingface/datasets/issues/5846 and https://github.com/huggingface/datasets/pull/5919
closed
https://github.com/huggingface/datasets/issues/6017
2023-07-11T16:24:40
2023-07-17T17:01:01
2023-07-17T17:01:01
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,798,968,033
6,016
Dataset string representation enhancement
My attempt at #6010. Not sure if this is the right way to go about it; I will wait for your feedback
open
https://github.com/huggingface/datasets/pull/6016
2023-07-11T13:38:25
2023-07-16T10:26:18
null
{ "login": "Ganryuu", "id": 63643948, "type": "User" }
[]
true
[]
1,798,807,893
6,015
Add metadata ui screenshot in docs
null
closed
https://github.com/huggingface/datasets/pull/6015
2023-07-11T12:16:29
2023-07-11T16:07:28
2023-07-11T15:56:46
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,798,213,816
6,014
Request to Share/Update Dataset Viewer Code
Overview: The repository (huggingface/datasets-viewer) was recently archived and when I tried to run the code, there was the error message "AttributeError: module 'datasets.load' has no attribute 'prepare_module'". I could not resolve the issue myself due to lack of documentation of that attribute. Request: I k...
closed
https://github.com/huggingface/datasets/issues/6014
2023-07-11T06:36:09
2024-07-20T07:29:08
2023-09-25T12:01:17
{ "login": "lilyorlilypad", "id": 105081034, "type": "User" }
[ { "name": "duplicate", "color": "cfd3d7" } ]
false
[]
1,796,083,437
6,013
[FR] `map` should reuse unchanged columns from the previous dataset to avoid disk usage
### Feature request Currently adding a new column with `map` will cause all the data in the dataset to be duplicated and stored/cached on the disk again. It should reuse unchanged columns. ### Motivation This allows having datasets with different columns but sharing some basic columns. Currently, these datasets wou...
open
https://github.com/huggingface/datasets/issues/6013
2023-07-10T06:42:20
2025-06-19T06:30:38
null
{ "login": "NightMachinery", "id": 36224762, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "good second issue", "color": "BDE59C" } ]
false
[]
1,795,575,432
6,012
[FR] Transform Chaining, Lazy Mapping
### Feature request Currently using a `map` call processes and duplicates the whole dataset, which takes both time and disk space. The solution is to allow lazy mapping, which is essentially a saved chain of transforms that are applied on the fly whenever a slice of the dataset is requested. The API should look ...
open
https://github.com/huggingface/datasets/issues/6012
2023-07-09T21:40:21
2025-01-20T14:06:28
null
{ "login": "NightMachinery", "id": 36224762, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,795,296,568
6,011
Documentation: wiki_dpr Dataset has no metric_type for Faiss Index
### Describe the bug After loading `wiki_dpr` using: ```py ds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train') print(ds.get_index("embeddings").metric_type) # prints nothing because the value is None ``` the index does not have a defined `metric_type`. This is an issue because ...
closed
https://github.com/huggingface/datasets/issues/6011
2023-07-09T08:30:19
2023-07-11T03:02:36
2023-07-11T03:02:36
{ "login": "YichiRockyZhang", "id": 29335344, "type": "User" }
[]
false
[]
1,793,838,152
6,010
Improve `Dataset`'s string representation
Currently, `Dataset.__repr__` outputs a dataset's column names and the number of rows. We could improve it by printing its features and the first few rows. We should also implement `_repr_html_` to have a rich HTML representation in notebooks/Streamlit.
open
https://github.com/huggingface/datasets/issues/6010
2023-07-07T16:38:03
2023-09-01T03:45:07
null
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,792,059,808
6,009
Fix cast for dictionaries with no keys
Fix #5677
closed
https://github.com/huggingface/datasets/pull/6009
2023-07-06T18:48:14
2023-07-07T14:13:00
2023-07-07T14:01:13
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,789,869,344
6,008
Dataset.from_generator consistently freezes at ~1000 rows
### Describe the bug Whenever I try to create a dataset which contains images using `Dataset.from_generator`, it freezes around 996 rows. I suppose it has something to do with memory consumption, but there's more memory available. Somehow it worked a few times, but mostly this makes the datasets library much more ...
closed
https://github.com/huggingface/datasets/issues/6008
2023-07-05T16:06:48
2023-07-10T13:46:39
2023-07-10T13:46:39
{ "login": "andreemic", "id": 27695722, "type": "User" }
[]
false
[]
1,789,782,693
6,007
Get an error "OverflowError: Python int too large to convert to C long" when loading a large dataset
### Describe the bug When loading a large dataset with the following code ```python from datasets import load_dataset dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train') ``` we encountered the error: "OverflowError: Python int too large to convert to C long" The error looks something like...
open
https://github.com/huggingface/datasets/issues/6007
2023-07-05T15:16:50
2024-02-07T22:22:35
null
{ "login": "silverriver", "id": 2529049, "type": "User" }
[ { "name": "arrow", "color": "c2e0c6" } ]
false
[]
1,788,855,582
6,006
NotADirectoryError when loading gigawords
### Describe the bug Got `NotADirectoryError` when loading the gigaword dataset ### Steps to reproduce the bug When running ``` import datasets datasets.load_dataset('gigaword') ``` Got the following exception: ```bash Traceback (most recent call last): ...
closed
https://github.com/huggingface/datasets/issues/6006
2023-07-05T06:23:41
2023-07-05T06:31:02
2023-07-05T06:31:01
{ "login": "xipq", "id": 115634163, "type": "User" }
[]
false
[]
1,788,103,576
6,005
Drop Python 3.7 support
`hfh` and `transformers` have dropped Python 3.7 support, so we should do the same :). (Based on the stats, it seems less than 10% of the users use `datasets` with Python 3.7)
closed
https://github.com/huggingface/datasets/pull/6005
2023-07-04T15:02:37
2023-07-06T15:32:41
2023-07-06T15:22:43
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,786,636,368
6,004
Misc improvements
Contains the following improvements: * fixes a "share dataset" link in README and modifies the "hosting" part in the disclaimer section * updates `Makefile` to also run the style checks on `utils` and `setup.py` * deletes a test for GH-hosted datasets (no longer supported) * deletes `convert_dataset.sh` (outdated...
closed
https://github.com/huggingface/datasets/pull/6004
2023-07-03T18:29:14
2023-07-06T17:04:11
2023-07-06T16:55:25
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,786,554,110
6,003
interleave_datasets & DataCollatorForLanguageModeling having a conflict ?
### Describe the bug Hi everyone :) I have two local & custom datasets (1 "sentence" per line) which I split 95/5 for pre-training a BERT model. I use a modified version of `run_mlm.py` in order to be able to make use of `interleave_dataset`: - `tokenize()` runs fine - `group_text()` runs fine ...
open
https://github.com/huggingface/datasets/issues/6003
2023-07-03T17:15:31
2023-07-03T17:15:31
null
{ "login": "PonteIneptique", "id": 1929830, "type": "User" }
[]
false
[]
1,786,053,060
6,002
Add KLUE-MRC metrics
## Metrics for KLUE-MRC (Korean Language Understanding Evaluation — Machine Reading Comprehension) Adding metrics for [KLUE-MRC](https://huggingface.co/datasets/klue). KLUE-MRC is very similar to SQuAD 2.0 but has a slightly different format which is why I added metrics for KLUE-MRC. Specifically, in the case of...
closed
https://github.com/huggingface/datasets/pull/6002
2023-07-03T12:11:10
2023-07-09T11:57:20
2023-07-09T11:57:20
{ "login": "ingyuseong", "id": 37537248, "type": "User" }
[]
true
[]
1,782,516,627
6,001
Align `column_names` type check with type hint in `sort`
Fix #5998
closed
https://github.com/huggingface/datasets/pull/6001
2023-06-30T13:15:50
2023-06-30T14:18:32
2023-06-30T14:11:24
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,782,456,878
6,000
Pin `joblib` to avoid `joblibspark` test failures
`joblibspark` doesn't support the latest `joblib` release. See https://github.com/huggingface/datasets/actions/runs/5401870932/jobs/9812337078 for the errors
closed
https://github.com/huggingface/datasets/pull/6000
2023-06-30T12:36:54
2023-06-30T13:17:05
2023-06-30T13:08:27
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,781,851,513
5,999
Getting a 409 error while loading xglue dataset
### Describe the bug Unable to load xglue dataset ### Steps to reproduce the bug ```python import datasets dataset = datasets.load_dataset("xglue", "ntg") ``` > ConnectionError: Couldn't reach https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz (error 409) ### Expected behavior Expected the...
closed
https://github.com/huggingface/datasets/issues/5999
2023-06-30T04:13:54
2023-06-30T05:57:23
2023-06-30T05:57:22
{ "login": "Praful932", "id": 45713796, "type": "User" }
[]
false
[]
1,781,805,018
5,998
The current implementation has a potential bug in the sort method
### Describe the bug In the sort method, here's a piece of code ```python # column_names: Union[str, Sequence_[str]] # Check proper format of and for duplicates in column_names if not isinstance(column_names, list): column_names = [column_names] ``` I get an error when I pass in a tuple based on the ...
closed
https://github.com/huggingface/datasets/issues/5998
2023-06-30T03:16:57
2023-06-30T14:21:03
2023-06-30T14:11:25
{ "login": "wangyuxinwhy", "id": 22192665, "type": "User" }
[]
false
[]
1,781,582,818
5,997
extend the map function so it can wrap around long text that does not fit in the context window
### Feature request I understand `dataset` provides a [`map`](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L2849) function. This function in turn takes in a callable that is used to tokenize the text on which a model is trained. Frequently this text will not fit within a model's con...
open
https://github.com/huggingface/datasets/issues/5997
2023-06-29T22:15:21
2023-07-03T17:58:52
null
{ "login": "siddhsql", "id": 127623723, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,779,294,374
5,996
Deprecate `use_auth_token` in favor of `token`
... to be consistent with `transformers` and `huggingface_hub`.
closed
https://github.com/huggingface/datasets/pull/5996
2023-06-28T16:26:38
2023-07-05T15:22:20
2023-07-03T16:03:33
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,777,088,925
5,995
Support returning dataframe in map transform
Allow returning Pandas DataFrames in `map` transforms. (Plus, raise an error in the non-batched mode if a returned PyArrow table/Pandas DataFrame has more than one row)
closed
https://github.com/huggingface/datasets/pull/5995
2023-06-27T14:15:08
2023-06-28T13:56:02
2023-06-28T13:46:33
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,776,829,004
5,994
Fix select_columns columns order
Fix the order of the columns in dataset.features when the order changes with `dataset.select_columns()`. I also fixed the same issue for `dataset.flatten()` Close https://github.com/huggingface/datasets/issues/5993
closed
https://github.com/huggingface/datasets/pull/5994
2023-06-27T12:32:46
2023-06-27T15:40:47
2023-06-27T15:32:43
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,776,643,555
5,993
ValueError: Table schema does not match schema used to create file
### Describe the bug Saving a dataset as parquet fails with a `ValueError: Table schema does not match schema used to create file` if the dataset was obtained out of a `.select_columns()` call with columns selected out of order. ### Steps to reproduce the bug ```python import datasets dataset = datasets.Dataset...
closed
https://github.com/huggingface/datasets/issues/5993
2023-06-27T10:54:07
2023-06-27T15:36:42
2023-06-27T15:32:44
{ "login": "exs-avianello", "id": 128361578, "type": "User" }
[]
false
[]
1,776,460,964
5,992
speedup
null
closed
https://github.com/huggingface/datasets/pull/5992
2023-06-27T09:17:58
2023-06-27T09:23:07
2023-06-27T09:18:04
{ "login": "qgallouedec", "id": 45557362, "type": "User" }
[]
true
[]
1,774,456,518
5,991
`map` with any joblib backend
We recently enabled the (experimental) parallel backend switch for data download and extraction but not for `map` yet. Right now we're using our `iflatmap_unordered` implementation for multiprocessing that uses a shared Queue to gather progress updates from the subprocesses and show a progress bar in the main proces...
open
https://github.com/huggingface/datasets/issues/5991
2023-06-26T10:33:42
2025-06-26T18:32:56
null
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,774,134,091
5,989
Set a rule on the config and split names
> should we actually allow characters like spaces? maybe it's better to add validation for whitespace symbols and directly in datasets and raise https://github.com/huggingface/datasets-server/issues/853
open
https://github.com/huggingface/datasets/issues/5989
2023-06-26T07:34:14
2023-07-19T14:22:54
null
{ "login": "severo", "id": 1676121, "type": "User" }
[]
false
[]
1,773,257,828
5,988
ConnectionError: Couldn't reach dataset_infos.json
### Describe the bug I'm trying to load codeparrot/codeparrot-clean-train, but get the following error: ConnectionError: Couldn't reach https://huggingface.co/datasets/codeparrot/codeparrot-clean-train/resolve/main/dataset_infos.json (ConnectionError(ProtocolError('Connection aborted.', ConnectionResetError(104, 'C...
closed
https://github.com/huggingface/datasets/issues/5988
2023-06-25T12:39:31
2023-07-07T13:20:57
2023-07-07T13:20:57
{ "login": "yulingao", "id": 20674868, "type": "User" }
[]
false
[]
1,773,047,909
5,987
Why max_shard_size is not supported in load_dataset and passed to download_and_prepare
### Describe the bug https://github.com/huggingface/datasets/blob/a8a797cc92e860c8d0df71e0aa826f4d2690713e/src/datasets/load.py#L1809 What I can do is skip `load_dataset` and use `load_dataset_builder` + `download_and_prepare` instead. ### Steps to reproduce the bug https://github.com/huggingface/datasets/blo...
closed
https://github.com/huggingface/datasets/issues/5987
2023-06-25T04:19:13
2023-06-29T16:06:08
2023-06-29T16:06:08
{ "login": "npuichigo", "id": 11533479, "type": "User" }
[]
false
[]
1,772,233,111
5,986
Make IterableDataset.from_spark more efficient
Moved the code from using collect() to using toLocalIterator, which allows prefetching of the partitions that will be selected next, thus giving better performance when iterating.
closed
https://github.com/huggingface/datasets/pull/5986
2023-06-23T22:18:20
2023-07-07T10:05:58
2023-07-07T09:56:09
{ "login": "mathewjacob1002", "id": 134338709, "type": "User" }
[]
true
[]
1,771,588,158
5,985
Cannot reuse tokenizer object for dataset map
### Describe the bug Related to https://github.com/huggingface/transformers/issues/24441. Not sure if this is a tokenizer issue or caching issue, so filing in both. Passing the tokenizer to the dataset map function causes the tokenizer to be fingerprinted weirdly. After calling the tokenizer with arguments like pad...
closed
https://github.com/huggingface/datasets/issues/5985
2023-06-23T14:45:31
2023-07-21T14:09:14
2023-07-21T14:09:14
{ "login": "vikigenius", "id": 12724810, "type": "User" }
[ { "name": "duplicate", "color": "cfd3d7" } ]
false
[]
1,771,571,458
5,984
AutoSharding IterableDataset's when num_workers > 1
### Feature request Minimal Example ``` import torch from datasets import IterableDataset d = IterableDataset.from_file(<file_name>) dl = torch.utils.data.dataloader.DataLoader(d,num_workers=3) for sample in dl: print(sample) ``` Warning: Too many dataloader workers: 2 (max is dataset.n_shard...
open
https://github.com/huggingface/datasets/issues/5984
2023-06-23T14:34:20
2024-03-22T15:01:14
null
{ "login": "mathephysicist", "id": 25594384, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,770,578,804
5,983
replaced PathLike as a variable for save_to_disk for dataset_path wit…
…h str like that of load_from_disk
closed
https://github.com/huggingface/datasets/pull/5983
2023-06-23T00:57:05
2023-09-11T04:17:17
2023-09-11T04:17:17
{ "login": "benjaminbrown038", "id": 35114142, "type": "User" }
[]
true
[]
1,770,333,296
5,982
404 on Datasets Documentation Page
### Describe the bug Getting a 404 from the Hugging Face Datasets docs page: https://huggingface.co/docs/datasets/index ### Steps to reproduce the bug 1. Go to URL https://huggingface.co/docs/datasets/index 2. Notice 404 not found ### Expected behavior URL should either show docs or redirect to new location #...
closed
https://github.com/huggingface/datasets/issues/5982
2023-06-22T20:14:57
2023-06-26T15:45:03
2023-06-26T15:45:03
{ "login": "kmulka-bloomberg", "id": 118509387, "type": "User" }
[]
false
[]
1,770,310,087
5,981
Only two cores are getting used in sagemaker with pytorch 3.10 kernel
### Describe the bug When using the newer pytorch 3.10 kernel, only 2 cores are being used by huggingface filter and map functions. The Pytorch 3.9 kernel would use as many cores as specified in the num_proc field. We have solved this in our own code by placing the following snippet in the code that is called insi...
closed
https://github.com/huggingface/datasets/issues/5981
2023-06-22T19:57:31
2023-10-30T06:17:40
2023-07-24T11:54:52
{ "login": "mmr-crexi", "id": 107141022, "type": "User" }
[]
false
[]
1,770,255,973
5,980
Viewing dataset card returns “502 Bad Gateway”
The url is: https://huggingface.co/datasets/Confirm-Labs/pile_ngrams_trigrams I am able to successfully view the “Files and versions” tab: [Confirm-Labs/pile_ngrams_trigrams at main](https://huggingface.co/datasets/Confirm-Labs/pile_ngrams_trigrams/tree/main) Any help would be appreciated! Thanks! I hope this is ...
closed
https://github.com/huggingface/datasets/issues/5980
2023-06-22T19:14:48
2023-06-27T08:38:19
2023-06-26T14:42:45
{ "login": "tbenthompson", "id": 4241811, "type": "User" }
[]
false
[]
1,770,198,250
5,979
set dev version
null
closed
https://github.com/huggingface/datasets/pull/5979
2023-06-22T18:32:14
2023-06-22T18:42:22
2023-06-22T18:32:22
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,770,187,053
5,978
Release: 2.13.1
null
closed
https://github.com/huggingface/datasets/pull/5978
2023-06-22T18:23:11
2023-06-22T18:40:24
2023-06-22T18:30:16
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,768,503,913
5,976
Avoid stuck map operation when subprocesses crashes
I've been using Dataset.map() with `num_proc=os.cpu_count()` to leverage multicore processing for my datasets, but from time to time I get stuck processes waiting forever. Apparently, when one of the subprocesses is abruptly killed (OOM killer, segfault, SIGKILL, etc), the main process keeps waiting for the async task ...
closed
https://github.com/huggingface/datasets/pull/5976
2023-06-21T21:18:31
2023-07-10T09:58:39
2023-07-10T09:50:07
{ "login": "pappacena", "id": 1213561, "type": "User" }
[]
true
[]
1,768,271,343
5,975
Streaming Dataset behind Proxy - FileNotFoundError
### Describe the bug When trying to stream a dataset i get the following error after a few minutes of waiting. ``` FileNotFoundError: https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json If the repo is private or gated, make sure to log in with `huggingface-cli login`. ``` I hav...
closed
https://github.com/huggingface/datasets/issues/5975
2023-06-21T19:10:02
2023-06-30T05:55:39
2023-06-30T05:55:38
{ "login": "Veluchs", "id": 135350576, "type": "User" }
[]
false
[]
1,767,981,231
5,974
Deprecate `errors` param in favor of `encoding_errors` in text builder
For consistency with the JSON builder and Pandas
closed
https://github.com/huggingface/datasets/pull/5974
2023-06-21T16:31:38
2023-06-26T10:34:43
2023-06-26T10:27:40
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,767,897,485
5,972
Filter unsupported extensions
I used a regex to filter the data files based on their extension for packaged builders. I tried, and a regex is 10x faster than using `in` to check if the extension is in the list of supported extensions. Supersedes https://github.com/huggingface/datasets/pull/5850 Close https://github.com/huggingface/datasets/...
closed
https://github.com/huggingface/datasets/pull/5972
2023-06-21T15:43:01
2023-06-22T14:23:29
2023-06-22T14:16:26
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,767,053,635
5,971
Docs: make "repository structure" easier to find
The page https://huggingface.co/docs/datasets/repository_structure explains how to create a simple repository structure without a dataset script. It's the simplest way to create a dataset and should be easier to find, particularly on the docs' first pages.
open
https://github.com/huggingface/datasets/issues/5971
2023-06-21T08:26:44
2023-07-05T06:51:38
null
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
false
[]
1,766,010,356
5,970
description disappearing from Info when Uploading a Dataset Created with `from_dict`
### Describe the bug When uploading a dataset created locally using `from_dict` with a specified `description` field, the description appears before upload but is missing after upload and re-download. ### Steps to reproduce the bug I think the most relevant pattern in the code might be the following lines: ``` descr...
open
https://github.com/huggingface/datasets/issues/5970
2023-06-20T19:18:26
2023-06-22T14:23:56
null
{ "login": "balisujohn", "id": 20377292, "type": "User" }
[]
false
[]
1,765,529,905
5,969
Add `encoding` and `errors` params to JSON loader
"Requested" in https://discuss.huggingface.co/t/utf-16-for-datasets/43828/3. `pd.read_json` also has these parameters, so it makes sense to be consistent.
closed
https://github.com/huggingface/datasets/pull/5969
2023-06-20T14:28:35
2023-06-21T13:39:50
2023-06-21T13:32:22
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,765,252,561
5,968
Common Voice datasets still need `use_auth_token=True`
### Describe the bug We don't need to pass `use_auth_token=True` anymore to download gated datasets or models, so the following should work if correctly logged in. ```py from datasets import load_dataset load_dataset("mozilla-foundation/common_voice_6_1", "tr", split="train+validation") ``` However it throw...
closed
https://github.com/huggingface/datasets/issues/5968
2023-06-20T11:58:37
2023-07-29T16:08:59
2023-07-29T16:08:58
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
false
[]
1,763,926,520
5,967
Config name / split name lost after map with multiproc
### Describe the bug Performing a `.map` method on a dataset loses it's config name / split name only if run with multiproc ### Steps to reproduce the bug ```python from datasets import Audio, load_dataset from transformers import AutoFeatureExtractor import numpy as np # load dummy dataset libri = load_datas...
open
https://github.com/huggingface/datasets/issues/5967
2023-06-19T17:27:36
2023-06-28T08:55:25
null
{ "login": "sanchit-gandhi", "id": 93869735, "type": "User" }
[]
false
[]
1,763,885,914
5,966
Fix JSON generation in benchmarks CI
Related to changes made in https://github.com/iterative/dvc/pull/9475
closed
https://github.com/huggingface/datasets/pull/5966
2023-06-19T16:56:06
2023-06-19T17:29:11
2023-06-19T17:22:10
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,763,648,540
5,965
"Couldn't cast array of type" in complex datasets
### Describe the bug When doing a map of a dataset with complex types, sometimes `datasets` is unable to interpret the valid schema of a returned datasets.map() function. This often comes from conflicting types, like when both empty lists and filled lists are competing for the same field value. This is prone to hap...
closed
https://github.com/huggingface/datasets/issues/5965
2023-06-19T14:16:14
2023-07-26T15:13:53
2023-07-26T15:13:53
{ "login": "piercefreeman", "id": 1712066, "type": "User" }
[]
false
[]
1,763,513,574
5,964
Always return list in `list_datasets`
Fix #5925 Plus, deprecate `list_datasets`/`inspect_dataset` in favor of `huggingface_hub.list_datasets`/"git clone workflow" (downloads data files)
closed
https://github.com/huggingface/datasets/pull/5964
2023-06-19T13:07:08
2023-06-19T17:29:37
2023-06-19T17:22:41
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,762,774,457
5,963
Got an error _pickle.PicklingError use Dataset.from_spark.
python 3.9.2 Got an error _pickle.PicklingError use Dataset.from_spark. Did the dataset import load data from spark dataframe using multi-node Spark cluster df = spark.read.parquet(args.input_data).repartition(50) ds = Dataset.from_spark(df, keep_in_memory=True, cache_dir="...
closed
https://github.com/huggingface/datasets/issues/5963
2023-06-19T05:30:35
2023-07-24T11:55:46
2023-07-24T11:55:46
{ "login": "yanzia12138", "id": 112800614, "type": "User" }
[]
false
[]
1,761,589,882
5,962
Issue with train_test_split maintaining the same underlying PyArrow Table
### Describe the bug I've been using the train_test_split method in the datasets module to split my HuggingFace Dataset into separate training, validation, and testing subsets. However, I've noticed an issue where the split datasets appear to maintain the same underlying PyArrow Table. ### Steps to reproduce the bug ...
open
https://github.com/huggingface/datasets/issues/5962
2023-06-17T02:19:58
2023-06-17T02:19:58
null
{ "login": "Oziel14", "id": 70730520, "type": "User" }
[]
false
[]
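The behavior reported in #5962 above — splits that appear to maintain the same underlying PyArrow table — can be illustrated with a rough pure-Python analog in which each split is just an index mapping over one shared backing table (hypothetical names, not the library's actual code):

```python
import random

class TableView:
    """A split as an index mapping over one shared backing table."""

    def __init__(self, table, indices):
        self.table = table      # shared, never copied
        self.indices = indices  # which rows belong to this split

    def __len__(self):
        return len(self.indices)

    def __getitem__(self, i):
        return self.table[self.indices[i]]

def split_view(table, test_size, seed=0):
    # shuffle row indices, then partition them — the table itself
    # is referenced by both views rather than duplicated
    idx = list(range(len(table)))
    random.Random(seed).shuffle(idx)
    n_test = int(len(table) * test_size)
    return TableView(table, idx[n_test:]), TableView(table, idx[:n_test])
```

Sharing the backing table keeps splitting cheap and zero-copy; the trade-off is exactly what the issue observes: both splits reference the same storage.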
1,758,525,111
5,961
IterableDataset: split by node and map may preprocess samples that will be skipped anyway
There are two ways an iterable dataset can be split by node: 1. if the number of shards is a factor of the number of GPUs: in that case the shards are evenly distributed per GPU 2. otherwise, each GPU iterates on the data and at the end keeps 1 sample out of n(GPUs), skipping the others. In case 2. it's ...
open
https://github.com/huggingface/datasets/issues/5961
2023-06-15T10:29:10
2023-09-01T10:35:11
null
{ "login": "johnchienbronci", "id": 27708347, "type": "User" }
[]
false
[]
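The two sharding strategies described in #5961 above can be sketched in pure Python (a rough illustration, not the library's actual implementation):

```python
def iterate_split_by_node(shards, rank, world_size):
    if len(shards) % world_size == 0:
        # case 1: shard count divides evenly — each node only
        # reads its own shards
        for shard in shards[rank::world_size]:
            yield from shard
    else:
        # case 2: every node iterates over ALL samples and keeps one
        # out of world_size, skipping the rest — which is why any
        # upstream `map` work done on the skipped samples is wasted
        for i, sample in enumerate(s for shard in shards for s in shard):
            if i % world_size == rank:
                yield sample
```

The issue's point is visible in the second branch: the generator still has to produce (and therefore preprocess) every sample before deciding to skip it.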
1,757,397,507
5,959
read metric glue.py from local file
### Describe the bug Currently, the server is off-line. I am using the glue metric from the local file downloaded from the hub. I downloaded / cached datasets using `load_dataset('glue','sst2', cache_dir='/xxx')` to cache them and then in the off-line mode, I use `load_dataset('xxx/glue.py','sst2', cache_dir='/xxx'...
closed
https://github.com/huggingface/datasets/issues/5959
2023-06-14T17:59:35
2023-06-14T18:04:16
2023-06-14T18:04:16
{ "login": "JiazhaoLi", "id": 31148397, "type": "User" }
[]
false
[]
1,757,265,971
5,958
set dev version
null
closed
https://github.com/huggingface/datasets/pull/5958
2023-06-14T16:26:34
2023-06-14T16:34:55
2023-06-14T16:26:51
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,757,252,466
5,957
Release: 2.13.0
null
closed
https://github.com/huggingface/datasets/pull/5957
2023-06-14T16:17:26
2023-06-14T16:33:39
2023-06-14T16:24:39
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,756,959,367
5,956
Fix ArrowExamplesIterable.shard_data_sources
ArrowExamplesIterable.shard_data_sources was outdated. I also fixed a warning message by not using format_type= in with_format()
closed
https://github.com/huggingface/datasets/pull/5956
2023-06-14T13:50:38
2023-06-14T14:43:12
2023-06-14T14:33:45
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,756,827,133
5,955
Strange bug in loading local JSON files, using load_dataset
### Describe the bug I am using `load_dataset` to load a JSON file, but I found a strange bug: an error will be reported when the length of the JSON file exceeds 160000 (uncertain exact number). I have checked the data through the following code and there are no issues. So I cannot determine the true reason for this err...
closed
https://github.com/huggingface/datasets/issues/5955
2023-06-14T12:46:00
2023-06-21T14:42:15
2023-06-21T14:42:15
{ "login": "Night-Quiet", "id": 73934131, "type": "User" }
[]
false
[]
1,756,572,994
5,954
Better filenotfound for gated
close https://github.com/huggingface/datasets/issues/5953 <img width="1292" alt="image" src="https://github.com/huggingface/datasets/assets/42851186/270fe5bc-1739-4878-b7bc-ab6d35336d4d">
closed
https://github.com/huggingface/datasets/pull/5954
2023-06-14T10:33:10
2023-06-14T12:33:27
2023-06-14T12:26:31
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,756,520,523
5,953
Bad error message when trying to download gated dataset
### Describe the bug When I attempt to download a model from the Hub that is gated without being logged in, I get a nice error message. E.g.: E.g. ```sh Repository Not Found for url: https://huggingface.co/api/models/DeepFloyd/IF-I-XL-v1.0. Please make sure you specified the correct `repo_id` and `repo_type`. I...
closed
https://github.com/huggingface/datasets/issues/5953
2023-06-14T10:03:39
2023-06-14T16:36:51
2023-06-14T12:26:32
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
false
[]
1,756,481,591
5,952
Add Arrow builder docs
following https://github.com/huggingface/datasets/pull/5944
closed
https://github.com/huggingface/datasets/pull/5952
2023-06-14T09:42:46
2023-06-14T14:42:31
2023-06-14T14:34:39
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,756,363,546
5,951
What is the Right way to use discofuse dataset??
[Click here for Dataset link](https://huggingface.co/datasets/discofuse/viewer/discofuse-wikipedia/train?row=6) **Below is the following way, as per my understanding , Is it correct :question: :question:** The **columns/features from `DiscoFuse dataset`** that will be the **input to the `encoder` and `decoder`** ar...
closed
https://github.com/huggingface/datasets/issues/5951
2023-06-14T08:38:39
2023-06-14T13:25:06
2023-06-14T12:10:16
{ "login": "akesh1235", "id": 125154243, "type": "User" }
[]
false
[]
1,755,197,946
5,950
Support for data with instance-wise dictionary as features
### Feature request I notice that when loading data instances with feature type of python dictionary, the dictionary keys would be broadcast so that every instance has the same set of keys. Please see an example in the Motivation section. It is possible to avoid this behavior, i.e., load dictionary features as it i...
open
https://github.com/huggingface/datasets/issues/5950
2023-06-13T15:49:00
2025-04-07T13:20:37
null
{ "login": "richardwth", "id": 33274336, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
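The key-broadcasting behavior described in #5950 above can be sketched in a few lines of pure Python (an illustration of the observed behavior, not the library's actual code):

```python
def broadcast_dict_keys(rows):
    # union of keys across all instances
    all_keys = sorted({key for row in rows for key in row})
    # every instance gets the full key set; missing values become None,
    # which is what makes per-instance dictionaries lose their shape
    return [{key: row.get(key) for key in all_keys} for row in rows]
```

This is what a fixed-schema columnar store has to do with ragged dictionaries, and it is exactly the behavior the feature request asks to be able to opt out of.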
1,754,843,717
5,949
Replace metadata utils with `huggingface_hub`'s RepoCard API
Use `huggingface_hub`'s RepoCard API instead of `DatasetMetadata` for modifying the card's YAML, and deprecate `datasets.utils.metadata` and `datasets.utils.readme`. After removing these modules, we can also delete `datasets.utils.resources` since the moon landing repo now stores its own version of these resources f...
closed
https://github.com/huggingface/datasets/pull/5949
2023-06-13T13:03:19
2023-06-27T16:47:51
2023-06-27T16:38:32
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,754,794,611
5,948
Fix sequence of array support for most dtype
Fixes #5936 Also, a related fix to #5927
closed
https://github.com/huggingface/datasets/pull/5948
2023-06-13T12:38:59
2023-06-14T15:11:55
2023-06-14T15:03:33
{ "login": "qgallouedec", "id": 45557362, "type": "User" }
[]
true
[]
1,754,359,316
5,947
Return the audio filename when decoding fails due to corrupt files
### Feature request Return the audio filename when the audio decoding fails. Although currently there are some checks for mp3 and opus formats with the library version, there are still cases when the audio decoding could fail, e.g. a corrupt file. ### Motivation When you try to load an object file dataset and the...
open
https://github.com/huggingface/datasets/issues/5947
2023-06-13T08:44:09
2023-06-14T12:45:01
null
{ "login": "wetdog", "id": 8949105, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
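The behavior requested in #5947 above amounts to wrapping the decoder so failures report which file broke. A minimal sketch, with hypothetical names:

```python
def decode_with_filename(path, decode_fn):
    """Run an audio decoder and attach the offending filename on failure."""
    try:
        return decode_fn(path)
    except Exception as err:
        # chain the original decoding error so no diagnostic detail is lost
        raise RuntimeError(f"Failed to decode audio file: {path!r}") from err
```

Chaining with `from err` preserves the original traceback while the new message tells the user exactly which sample in the dataset is corrupt.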
1,754,234,469
5,946
IndexError Not Solving -> IndexError: Invalid key: ?? is out of bounds for size 0 or ??
### Describe the bug A rich-formatted traceback from a notebook cell (`<cell line: 1>:1`) points into /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1537 in train ...
open
https://github.com/huggingface/datasets/issues/5946
2023-06-13T07:34:15
2023-07-14T12:04:48
null
{ "login": "syngokhan", "id": 70565543, "type": "User" }
[]
false
[]
1,754,084,577
5,945
Failing to upload dataset to the hub
### Describe the bug Trying to upload a dataset of hundreds of thousands of audio samples (the total volume is not very large, 60 gb) to the hub with push_to_hub, it doesn't work. From time to time one piece of the data (parquet) gets pushed and then I get RemoteDisconnected even though my internet is stable. Please...
closed
https://github.com/huggingface/datasets/issues/5945
2023-06-13T05:46:46
2023-07-24T11:56:40
2023-07-24T11:56:40
{ "login": "Ar770", "id": 77382661, "type": "User" }
[]
false
[]
1,752,882,200
5,944
Arrow dataset builder to be able to load and stream Arrow datasets
This adds an Arrow dataset builder to be able to load and stream from already preprocessed Arrow files. It's related to https://github.com/huggingface/datasets/issues/3035
closed
https://github.com/huggingface/datasets/pull/5944
2023-06-12T14:21:49
2023-06-13T17:36:02
2023-06-13T17:29:01
{ "login": "mariusz-jachimowicz-83", "id": 10278877, "type": "User" }
[]
true
[]
1,752,021,681
5,942
Pass datasets-cli additional args as kwargs to DatasetBuilder in `run_beam.py`
Hi, Following this <https://discuss.huggingface.co/t/how-to-preprocess-a-wikipedia-dataset-using-dataflowrunner/41991/3>, here is a simple PR to pass any additional args to datasets-cli as kwargs in the DatasetBuilder in `run_beam.py`. I also took the liberty to add missing setup steps to the `beam.mdx` docs in o...
open
https://github.com/huggingface/datasets/pull/5942
2023-06-12T06:50:50
2023-06-30T09:15:00
null
{ "login": "graelo", "id": 84066822, "type": "User" }
[]
true
[]
1,751,838,897
5,941
Load Data Sets Too Slow In Train Seq2seq Model
### Describe the bug step 'Generating train split' in load_dataset is too slow: ![image](https://github.com/huggingface/datasets/assets/19569322/d9b08eee-95fe-4741-a346-b70416c948f8) ### Steps to reproduce the bug Data: own data, 16K 16B Mono wav Official Script: [run_speech_recognition_seq2seq.py](https://github...
closed
https://github.com/huggingface/datasets/issues/5941
2023-06-12T03:58:43
2023-08-15T02:52:22
2023-08-15T02:52:22
{ "login": "xyx361100238", "id": 19569322, "type": "User" }
[]
false
[]
1,774,389,854
5,990
Pushing a large dataset on the hub consistently hangs
### Describe the bug Once I have locally built a large dataset that I want to push to hub, I use the recommended approach of .push_to_hub to get the dataset on the hub, and after pushing a few shards, it consistently hangs. This has happened over 40 times over the past week, and despite my best efforts to try and catc...
open
https://github.com/huggingface/datasets/issues/5990
2023-06-10T14:46:47
2025-02-15T09:29:10
null
{ "login": "AntreasAntoniou", "id": 10792502, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,749,955,883
5,939
.
null
closed
https://github.com/huggingface/datasets/issues/5939
2023-06-09T14:01:34
2023-06-12T12:19:34
2023-06-12T12:19:19
{ "login": "flckv", "id": 103381497, "type": "User" }
[]
false
[]
1,749,462,851
5,938
Make get_from_cache use custom temp filename that is locked
This PR ensures that the temporary filename created is the same as the one that is locked, while writing to the cache. This PR stops using `tempfile` to generate the temporary filename. Additionally, the behavior now is aligned for both `resume_download` `True` and `False`. Refactor temp_file_manager so that i...
closed
https://github.com/huggingface/datasets/pull/5938
2023-06-09T09:01:13
2023-06-14T13:35:38
2023-06-14T13:27:24
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,749,388,597
5,937
Avoid parallel redownload in cache
Avoid parallel redownload in cache by retrying inside the lock if path exists.
closed
https://github.com/huggingface/datasets/pull/5937
2023-06-09T08:18:36
2023-06-14T12:30:59
2023-06-14T12:23:57
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
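The fix described in #5937 above is a classic double-checked pattern: re-test for the cached file inside the lock, so a caller that waited while another thread downloaded does not download again. A rough sketch with hypothetical names (the real implementation uses file locks, not a thread lock):

```python
import os
import threading

_cache_lock = threading.Lock()

def get_from_cache(path, download_fn):
    if os.path.exists(path):  # fast path: no lock needed
        return path
    with _cache_lock:
        # re-check inside the lock: another caller may have finished
        # the download while we were waiting to acquire it
        if not os.path.exists(path):
            download_fn(path)
    return path
```

Without the second existence check, every caller that queued on the lock would redownload the same file one after another.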
1,748,424,388
5,936
Sequence of array not supported for most dtype
### Describe the bug Creating a dataset composed of sequences of arrays fails for most dtypes (see code below). ### Steps to reproduce the bug ```python from datasets import Sequence, Array2D, Features, Dataset import numpy as np for dtype in [ "bool", # ok "int8", # failed "int16", # failed ...
closed
https://github.com/huggingface/datasets/issues/5936
2023-06-08T18:18:07
2023-06-14T15:03:34
2023-06-14T15:03:34
{ "login": "qgallouedec", "id": 45557362, "type": "User" }
[]
false
[]
1,748,090,220
5,935
Better row group size in push_to_hub
This is a very simple change that improves `to_parquet` to use a more reasonable row group size for image and audio datasets. This is especially useful for `push_to_hub` and will provide a better experience with the dataset viewer on HF
closed
https://github.com/huggingface/datasets/pull/5935
2023-06-08T15:01:15
2023-06-09T17:47:37
2023-06-09T17:40:09
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,747,904,840
5,934
Modify levels of some logging messages
Some warning messages didn't quite sound like warnings so I modified their logging levels to info.
closed
https://github.com/huggingface/datasets/pull/5934
2023-06-08T13:31:44
2023-07-12T18:21:03
2023-07-12T18:21:02
{ "login": "Laurent2916", "id": 21087104, "type": "User" }
[]
true
[]
1,747,382,500
5,933
Fix `to_numpy` when None values in the sequence
Closes #5927 I've realized that the error was overlooked during testing due to the presence of only one None value in the sequence. Unfortunately, it was the only case where the function works as expected. When the sequence contained more than one None value, the function failed. Consequently, I've updated the tests...
closed
https://github.com/huggingface/datasets/pull/5933
2023-06-08T08:38:56
2023-06-09T13:49:41
2023-06-09T13:23:48
{ "login": "qgallouedec", "id": 45557362, "type": "User" }
[]
true
[]
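The `to_numpy` fix in #5933 above hinges on handling any number of `None` values in a sequence, not just the single-`None` case the original tests happened to cover. A dependency-free sketch of the core substitution step (hypothetical helper name):

```python
def nones_to_nan(sequence):
    # replace every None with NaN so the sequence can become a float
    # array; the logic must hold for ANY count of None values — the
    # one-None case was the only one the old tests exercised
    return [float("nan") if value is None else value for value in sequence]
```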