id: int64 (min 599M, max 3.26B)
number: int64 (min 1, max 7.7k)
title: string (length 1 to 290)
body: string (length 0 to 228k)
state: string (2 classes)
html_url: string (length 46 to 51)
created_at: timestamp[s] (2020-04-14 10:18:02 to 2025-07-23 08:04:53)
updated_at: timestamp[s] (2020-04-27 16:04:17 to 2025-07-23 18:53:44)
closed_at: timestamp[s] (2020-04-14 12:01:40 to 2025-07-23 16:44:42)
user: dict
labels: list (length 0 to 4)
is_pull_request: bool (2 classes)
comments: list (length 0 to 0)
2,635,813,932
7,279
Feature proposal: Stacking, potentially heterogeneous, datasets
### Introduction Hello there, I noticed that there are two ways to combine multiple datasets: either through `datasets.concatenate_datasets` or `datasets.interleave_datasets`. However, to my knowledge (please correct me if I am wrong), both approaches require the datasets being combined to have the same features....
open
https://github.com/huggingface/datasets/pull/7279
2024-11-05T15:40:50
2024-11-05T15:40:50
null
{ "login": "TimCares", "id": 96243987, "type": "User" }
[]
true
[]
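The proposal above asks for a way to stack datasets with potentially different features into one dataset whose rows nest each source under its own key. A minimal pure-Python sketch of that idea, independent of the `datasets` API (the function name `stack_rows` is hypothetical, not part of any library):

```python
def stack_rows(named_datasets):
    """Stack several equal-length datasets (lists of dicts) into one
    dataset whose rows nest each source under its own key, so the
    sources may have entirely different features."""
    lengths = {len(rows) for rows in named_datasets.values()}
    if len(lengths) != 1:
        raise ValueError("all datasets must have the same number of rows")
    (n,) = lengths
    return [
        {name: rows[i] for name, rows in named_datasets.items()}
        for i in range(n)
    ]

# Two "datasets" with heterogeneous features:
images = [{"pixels": [0, 1]}, {"pixels": [1, 0]}]
texts = [{"caption": "a"}, {"caption": "b"}]

stacked = stack_rows({"vision": images, "text": texts})
print(stacked[0])  # {'vision': {'pixels': [0, 1]}, 'text': {'caption': 'a'}}
```

Unlike `concatenate_datasets` (rows appended) or `interleave_datasets` (rows alternated), this keeps one row per index and sidesteps the same-features requirement by namespacing each source.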
2,633,436,151
7,278
Let soundfile directly read local audio files
- [x] Fixes #7276
open
https://github.com/huggingface/datasets/pull/7278
2024-11-04T17:41:13
2024-11-18T14:01:25
null
{ "login": "fawazahmed0", "id": 20347013, "type": "User" }
[]
true
[]
2,632,459,184
7,277
Add link to video dataset
This PR updates https://huggingface.co/docs/datasets/loading to also link to the new video loading docs. cc @mfarre
closed
https://github.com/huggingface/datasets/pull/7277
2024-11-04T10:45:12
2024-11-04T17:05:06
2024-11-04T17:05:06
{ "login": "NielsRogge", "id": 48327001, "type": "User" }
[]
true
[]
2,631,917,431
7,276
Accessing audio dataset value throws Format not recognised error
### Describe the bug Accessing audio dataset value throws `Format not recognised error` ### Steps to reproduce the bug **code:** ```py from datasets import load_dataset dataset = load_dataset("fawazahmed0/bug-audio") for data in dataset["train"]: print(data) ``` **output:** ```bash (mypy) ...
open
https://github.com/huggingface/datasets/issues/7276
2024-11-04T05:59:13
2024-11-09T18:51:52
null
{ "login": "fawazahmed0", "id": 20347013, "type": "User" }
[]
false
[]
2,631,713,397
7,275
load_dataset
### Describe the bug I am performing two operations I saw in a Hugging Face tutorial (Fine-tune a language model). I define everything inside the mapped functions, including some library imports, because nothing is recognized if it is defined outside the function where the dataset elements are being mapp...
open
https://github.com/huggingface/datasets/issues/7275
2024-11-04T03:01:44
2024-11-04T03:01:44
null
{ "login": "santiagobp99", "id": 46941974, "type": "User" }
[]
false
[]
2,629,882,821
7,274
[MINOR:TYPO] Fix typo in exception text
null
closed
https://github.com/huggingface/datasets/pull/7274
2024-11-01T21:15:29
2025-05-21T13:17:20
2025-05-21T13:17:20
{ "login": "cakiki", "id": 3664563, "type": "User" }
[]
true
[]
2,628,896,492
7,273
Raise error for incorrect JSON serialization
Raise error when `lines = False` and `batch_size < Dataset.num_rows` in `Dataset.to_json()`. Issue: #7037 Related PRs: #7039 #7181
closed
https://github.com/huggingface/datasets/pull/7273
2024-11-01T11:54:35
2024-11-18T11:25:01
2024-11-18T11:25:01
{ "login": "varadhbhatnagar", "id": 20443618, "type": "User" }
[]
true
[]
2,627,223,390
7,272
fix conda release workflow
null
closed
https://github.com/huggingface/datasets/pull/7272
2024-10-31T15:56:19
2024-10-31T15:58:35
2024-10-31T15:57:29
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,627,135,540
7,271
Set dev version
null
closed
https://github.com/huggingface/datasets/pull/7271
2024-10-31T15:22:51
2024-10-31T15:25:27
2024-10-31T15:22:59
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,627,107,016
7,270
Release: 3.1.0
null
closed
https://github.com/huggingface/datasets/pull/7270
2024-10-31T15:10:01
2024-10-31T15:14:23
2024-10-31T15:14:20
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,626,873,843
7,269
Memory leak when streaming
### Describe the bug I am trying to use a dataset with streaming=True; the issue I have is that RAM usage grows higher and higher until it is no longer sustainable. I understand that Hugging Face stores data in RAM during streaming, and the more workers the dataloader has, the more shards will be stored in ...
open
https://github.com/huggingface/datasets/issues/7269
2024-10-31T13:33:52
2024-11-18T11:46:07
null
{ "login": "Jourdelune", "id": 64205064, "type": "User" }
[]
false
[]
2,626,664,687
7,268
load_from_disk
### Describe the bug I have data saved with save_to_disk. The data is large (700GB). When I try loading it, the only option is load_from_disk, and this function copies the data to a tmp directory, causing me to run out of disk space. Is there an alternative solution to that? ### Steps to reproduce the bug when trying ...
open
https://github.com/huggingface/datasets/issues/7268
2024-10-31T11:51:56
2025-07-01T08:42:17
null
{ "login": "ghaith-mq", "id": 71670961, "type": "User" }
[]
false
[]
2,626,490,029
7,267
Source installation fails on Macintosh with python 3.10
### Describe the bug Hi, Decord is a dev dependency that has not been maintained for a couple of years. It does not have an ARM package available, rendering it uninstallable on non-Intel Macs. The suggestion is to move to eva-decord (https://github.com/georgia-tech-db/eva-decord), which doesn't have this problem. Happy to...
open
https://github.com/huggingface/datasets/issues/7267
2024-10-31T10:18:45
2024-11-04T22:18:06
null
{ "login": "mayankagarwals", "id": 39498938, "type": "User" }
[]
false
[]
2,624,666,087
7,266
The dataset viewer should be available soon. Please retry later.
### Describe the bug After waiting for 2 hours, it still presents "The dataset viewer should be available soon. Please retry later." ### Steps to reproduce the bug dataset link: https://huggingface.co/datasets/BryanW/HI_EDIT ### Expected behavior Present the dataset viewer. ### Environment info NA
closed
https://github.com/huggingface/datasets/issues/7266
2024-10-30T16:32:00
2024-10-31T03:48:11
2024-10-31T03:48:10
{ "login": "viiika", "id": 39821659, "type": "User" }
[]
false
[]
2,624,090,418
7,265
Disallow video push_to_hub
null
closed
https://github.com/huggingface/datasets/pull/7265
2024-10-30T13:21:55
2024-10-30T13:36:05
2024-10-30T13:36:02
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,624,047,640
7,264
fix docs relative links
null
closed
https://github.com/huggingface/datasets/pull/7264
2024-10-30T13:07:34
2024-10-30T13:10:13
2024-10-30T13:09:02
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,621,844,054
7,263
Small addition to video docs
null
closed
https://github.com/huggingface/datasets/pull/7263
2024-10-29T16:58:37
2024-10-29T17:01:05
2024-10-29T16:59:10
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,620,879,059
7,262
Allow video with disabled decoding without decord
for the viewer, this way it can use Video(decode=False) and doesn't need decord (which causes segfaults)
closed
https://github.com/huggingface/datasets/pull/7262
2024-10-29T10:54:04
2024-10-29T10:56:19
2024-10-29T10:55:37
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,620,510,840
7,261
Cannot load the cache when mapping the dataset
### Describe the bug I'm training the Flux ControlNet. train_dataset.map() takes a long time to finish. However, after killing one training process and trying to restart training with the same dataset, I can't reuse the mapped result even though I defined the cache dir for the dataset. with accelerator.main_process_...
open
https://github.com/huggingface/datasets/issues/7261
2024-10-29T08:29:40
2025-03-24T13:27:55
null
{ "login": "zhangn77", "id": 43033959, "type": "User" }
[]
false
[]
2,620,014,285
7,260
cache can't be cleaned or disabled
### Describe the bug I tried the following ways, but the cache can't be disabled. I have 2T of data, but I also get more than 2T of cache files, which puts pressure on storage. I need to disable the cache or clean it immediately after processing. None of the following ways work; please give some help! ```python from datasets import ...
open
https://github.com/huggingface/datasets/issues/7260
2024-10-29T03:15:28
2024-12-11T09:04:52
null
{ "login": "charliedream1", "id": 15007828, "type": "User" }
[]
false
[]
2,618,909,241
7,259
Don't embed videos
Don't include video bytes when running download_and_prepare(format="parquet"). This also affects push_to_hub, which will just upload the local paths of the videos, though.
closed
https://github.com/huggingface/datasets/pull/7259
2024-10-28T16:25:10
2024-10-28T16:27:34
2024-10-28T16:26:01
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,618,758,399
7,258
Always set non-null writer batch size
bug introduced in #7230, it was preventing the Viewer limit writes to work
closed
https://github.com/huggingface/datasets/pull/7258
2024-10-28T15:26:14
2024-10-28T15:28:41
2024-10-28T15:26:29
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,618,602,173
7,257
fix ci for pyarrow 18
null
closed
https://github.com/huggingface/datasets/pull/7257
2024-10-28T14:31:34
2024-10-28T14:34:05
2024-10-28T14:31:44
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,618,580,188
7,256
Retry all requests timeouts
as reported in https://github.com/huggingface/datasets/issues/6843
closed
https://github.com/huggingface/datasets/pull/7256
2024-10-28T14:23:16
2024-10-28T14:56:28
2024-10-28T14:56:26
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,618,540,355
7,255
fix decord import
delay the import until Video() is instantiated + also import duckdb first (otherwise importing duckdb later causes a segfault)
closed
https://github.com/huggingface/datasets/pull/7255
2024-10-28T14:08:19
2024-10-28T14:10:43
2024-10-28T14:09:14
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
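The fix in #7255 above delays the decord import until `Video()` is instantiated. The general lazy-import pattern it relies on can be sketched in pure Python; here the stdlib module `json` stands in for the heavy backend (decord), which cannot be assumed installed:

```python
import importlib

class Video:
    """Defer importing the decoding backend until a Video is created,
    so merely importing this module never pays (or crashes on) the
    backend's import cost."""
    _backend = None

    def __init__(self, path):
        if Video._backend is None:
            # 'json' stands in for the real backend (e.g. decord).
            Video._backend = importlib.import_module("json")
        self.path = path

# No backend import happens until the first instantiation:
v = Video("clip.mp4")
print(Video._backend.__name__)  # json
```

This is only an illustration of the deferred-import technique named in the PR description, not the actual implementation.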
2,616,174,996
7,254
mismatch for datatypes when providing `Features` with `Array2D` and user specified `dtype` and using with_format("numpy")
### Describe the bug If the user provides a `Features` type value to `datasets.Dataset` with members having `Array2D` with a value for `dtype`, it is not respected during `with_format("numpy")` which should return a `np.array` with `dtype` that the user provided for `Array2D`. It seems for floats, it will be set to `f...
open
https://github.com/huggingface/datasets/issues/7254
2024-10-26T22:06:27
2024-10-26T22:07:37
null
{ "login": "Akhil-CM", "id": 97193607, "type": "User" }
[]
false
[]
2,615,862,202
7,253
Unable to upload a large dataset zip either from command line or UI
### Describe the bug Unable to upload a large dataset zip from command line or UI. The UI simply says error. I am trying to upload a tar.gz file of 17GB. <img width="550" alt="image" src="https://github.com/user-attachments/assets/f9d29024-06c8-49c4-a109-0492cff79d34"> <img width="755" alt="image" src="https://githu...
open
https://github.com/huggingface/datasets/issues/7253
2024-10-26T13:17:06
2024-10-26T13:17:06
null
{ "login": "vakyansh", "id": 159609047, "type": "User" }
[]
false
[]
2,613,795,544
7,252
Add IterableDataset.shard()
Will be useful to distribute a dataset across workers (other than pytorch), like spark. I also renamed `.n_shards` -> `.num_shards` for consistency and kept the old name for backward compatibility. And a few changes in internal functions for consistency as well (rank, world_size -> num_shards, index). Breaking chang...
closed
https://github.com/huggingface/datasets/pull/7252
2024-10-25T11:07:12
2025-03-21T03:58:43
2024-10-25T15:45:22
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,612,097,435
7,251
Missing video docs
null
closed
https://github.com/huggingface/datasets/pull/7251
2024-10-24T16:45:12
2024-10-24T16:48:29
2024-10-24T16:48:27
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,612,041,969
7,250
Basic XML support (mostly copy pasted from text)
enable the viewer for datasets like https://huggingface.co/datasets/FrancophonIA/e-calm (there will be more and more apparently)
closed
https://github.com/huggingface/datasets/pull/7250
2024-10-24T16:14:50
2024-10-24T16:19:18
2024-10-24T16:19:16
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,610,136,636
7,249
How to debug
### Describe the bug I wanted to use my own script to handle the processing, and followed the tutorial documentation by rewriting the MyDatasetConfig and MyDatasetBuilder classes (which contain the _info, _split_generators and _generate_examples methods). Testing with simple data, I was able to output the results of the ...
open
https://github.com/huggingface/datasets/issues/7249
2024-10-24T01:03:51
2024-10-24T01:03:51
null
{ "login": "ShDdu", "id": 49576595, "type": "User" }
[]
false
[]
2,609,926,089
7,248
ModuleNotFoundError: No module named 'datasets.tasks'
### Describe the bug --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) [<ipython-input-9-13b5f31bd391>](https://bcb6shpazyu-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20241022-060119_R...
open
https://github.com/huggingface/datasets/issues/7248
2024-10-23T21:58:25
2024-10-24T17:00:19
null
{ "login": "shoowadoo", "id": 93593941, "type": "User" }
[]
false
[]
2,606,230,029
7,247
Adding a column with a dict structure when mapping leads to wrong order
### Describe the bug In the `map()` function, I want to add a new column with a dict structure. ``` def map_fn(example): example['text'] = {'user': ..., 'assistant': ...} return example ``` However, this leads to the wrong order `{'assistant':..., 'user':...}` in the dataset. Thus I can't concatenate two datasets ...
open
https://github.com/huggingface/datasets/issues/7247
2024-10-22T18:55:11
2024-10-22T18:55:23
null
{ "login": "chchch0109", "id": 114604968, "type": "User" }
[]
false
[]
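Issue 7247 above reports nested dict keys coming back in a different order than they were written, which then breaks concatenation. One user-side workaround is to normalize every example's nested dicts to a reference key order before combining datasets. A pure-Python sketch (`reorder_like` is a hypothetical helper, not a `datasets` API):

```python
def reorder_like(example, reference):
    """Recursively rebuild dicts with keys in the same order as a
    reference example, so two datasets built in different key orders
    end up with identical struct layouts."""
    if isinstance(example, dict):
        return {k: reorder_like(example[k], reference[k]) for k in reference}
    return example

ref = {"text": {"user": "", "assistant": ""}}
row = {"text": {"assistant": "hi", "user": "hello"}}  # keys in the wrong order

fixed = reorder_like(row, ref)
print(list(fixed["text"]))  # ['user', 'assistant']
```

This exploits the fact that Python dicts preserve insertion order, so rebuilding in reference order is enough to make the layouts match.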
2,605,734,447
7,246
Set dev version
null
closed
https://github.com/huggingface/datasets/pull/7246
2024-10-22T15:04:47
2024-10-22T15:07:31
2024-10-22T15:04:58
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,605,701,235
7,245
Release: 3.0.2
null
closed
https://github.com/huggingface/datasets/pull/7245
2024-10-22T14:53:34
2024-10-22T15:01:50
2024-10-22T15:01:47
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,605,461,515
7,244
use huggingface_hub offline mode
and better handling of LocalEntryNotfoundError cc @Wauplin follow up to #7234
closed
https://github.com/huggingface/datasets/pull/7244
2024-10-22T13:27:16
2024-10-22T14:10:45
2024-10-22T14:10:20
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,602,853,172
7,243
ArrayXD with None as leading dim incompatible with DatasetCardData
### Describe the bug Creating a dataset with ArrayXD features leads to errors when downloading from hub due to DatasetCardData removing the Nones @lhoestq ### Steps to reproduce the bug ```python import numpy as np from datasets import Array2D, Dataset, Features, load_dataset def examples_generator():...
open
https://github.com/huggingface/datasets/issues/7243
2024-10-21T15:08:13
2024-10-22T14:18:10
null
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[]
false
[]
2,599,899,156
7,241
`push_to_hub` overwrite argument
### Feature request Add an `overwrite` argument to the `push_to_hub` method. ### Motivation I want to overwrite a repo without deleting it on Hugging Face. Is this possible? I couldn't find anything in the documentation or tutorials. ### Your contribution I can create a PR.
closed
https://github.com/huggingface/datasets/issues/7241
2024-10-20T03:23:26
2024-10-24T17:39:08
2024-10-24T17:39:08
{ "login": "ceferisbarov", "id": 60838378, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,598,980,027
7,240
Feature Request: Add functionality to pass split types like train, test in DatasetDict.map
Hello datasets! We often encounter situations where we need to preprocess data differently depending on split types such as train, valid, and test. However, while DatasetDict.map has features to pass rank or index, there's no functionality to pass split types. Therefore, I propose adding a 'with_splits' parame...
closed
https://github.com/huggingface/datasets/pull/7240
2024-10-19T09:59:12
2025-01-06T08:04:08
2025-01-06T08:04:08
{ "login": "jp1924", "id": 93233241, "type": "User" }
[]
true
[]
2,598,409,993
7,238
incompatibility issue when using load_dataset with datasets==3.0.1
### Describe the bug There is a bug when using load_dataset with datasets version 3.0.1. Please see the "steps to reproduce the bug" below. To resolve the bug, I had to downgrade to version 2.21.0. OS: Ubuntu 24 (AWS instance) Python: same bug under 3.12 and 3.10 The error I had was: Traceback (most rec...
open
https://github.com/huggingface/datasets/issues/7238
2024-10-18T21:25:23
2024-12-09T09:49:32
null
{ "login": "jupiterMJM", "id": 74985234, "type": "User" }
[]
false
[]
2,597,358,525
7,236
[MINOR:TYPO] Update arrow_dataset.py
Fix wrong link. csv kwargs docstring link was pointing to pandas json docs.
closed
https://github.com/huggingface/datasets/pull/7236
2024-10-18T12:10:03
2024-10-24T15:06:43
2024-10-24T15:06:43
{ "login": "cakiki", "id": 3664563, "type": "User" }
[]
true
[]
2,594,220,624
7,234
No need for dataset_info
saves a useless call to /api/datasets/repo_id
closed
https://github.com/huggingface/datasets/pull/7234
2024-10-17T09:54:03
2024-10-22T12:30:40
2024-10-21T16:44:34
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,593,903,113
7,233
Issue with dataset sample count
### Describe the bug I am fine-tuning a large model here. When the dataset has 718 samples, the model fine-tunes normally, but as soon as I add a sample that already appears in the first 718, or add a new sample, an error is raised ### Steps to reproduce the bug 1. The last two samples of my dataset that still fine-tune successfully are: { "messages": [ { "role": "user", "content": "What work needs to be done after completing the compensator design?" }, { "role": "assistant", "content": "Once the compensator design is complete, the actual system tuning needs to be...
open
https://github.com/huggingface/datasets/issues/7233
2024-10-17T07:41:44
2024-10-17T07:41:44
null
{ "login": "want-well", "id": 180297268, "type": "User" }
[]
false
[]
2,593,720,548
7,232
(Super tiny doc update) Mention to_polars
polars is also quite popular now, thus this tiny update can tell users polars is supported
closed
https://github.com/huggingface/datasets/pull/7232
2024-10-17T06:08:53
2024-10-24T23:11:05
2024-10-24T15:06:16
{ "login": "fzyzcjy", "id": 5236035, "type": "User" }
[]
true
[]
2,592,011,737
7,231
Fix typo in image dataset docs
Fix typo in image dataset docs. Typo reported by @datavistics.
closed
https://github.com/huggingface/datasets/pull/7231
2024-10-16T14:05:46
2024-10-16T17:06:21
2024-10-16T17:06:19
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,589,531,942
7,230
Video support
(WIP and experimental) adding the `Video` type based on `VideoReader` from `decord` ```python >>> from datasets import load_dataset >>> ds = load_dataset("path/to/videos", split="train").with_format("torch") >>> print(ds[0]["video"]) <decord.video_reader.VideoReader object at 0x337a47910> >>> print(ds[0]["vid...
closed
https://github.com/huggingface/datasets/pull/7230
2024-10-15T18:17:29
2024-10-24T16:39:51
2024-10-24T16:39:50
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,588,847,398
7,229
handle config_name=None in push_to_hub
This caught me out - thought it might be better to explicitly handle None?
closed
https://github.com/huggingface/datasets/pull/7229
2024-10-15T13:48:57
2024-10-24T17:51:52
2024-10-24T17:51:52
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[]
true
[]
2,587,310,094
7,228
Composite (multi-column) features
### Feature request Structured data types (graphs etc.) might often be most efficiently stored as multiple columns, which then need to be combined during feature decoding Although it is currently possible to nest features as structs, my impression is that in particular when dealing with e.g. a feature composed of...
open
https://github.com/huggingface/datasets/issues/7228
2024-10-14T23:59:19
2024-10-15T11:17:15
null
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
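Feature request 7228 above asks for features whose decoded value is assembled from multiple storage columns. What such a composite feature could look like can be sketched in pure Python; `CompositeFeature` and its `decode_example` method are hypothetical illustrations of the request, not an existing `datasets` API:

```python
class CompositeFeature:
    """Decode one logical feature from several storage columns,
    e.g. a graph edge stored as separate 'src' and 'dst' columns."""

    def __init__(self, columns):
        self.columns = columns

    def decode_example(self, row):
        # Combine the raw columns into a single decoded object
        # (here simply a tuple; a real feature could build any object).
        return tuple(row[c] for c in self.columns)

edges = CompositeFeature(["src", "dst"])
row = {"src": 0, "dst": 3, "weight": 0.5}
print(edges.decode_example(row))  # (0, 3)
```

Storing the parts as flat columns keeps the on-disk layout efficient while the decode step reconstructs the structured value, which is the trade-off the request describes.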
2,587,048,312
7,227
fast array extraction
Implements #7210 using method suggested in https://github.com/huggingface/datasets/pull/7207#issuecomment-2411789307 ```python import numpy as np from datasets import Dataset, Features, Array3D features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32"), "array1": Array3D((None,10,10), dtype="float3...
open
https://github.com/huggingface/datasets/pull/7227
2024-10-14T20:51:32
2025-01-28T09:39:26
null
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[]
true
[]
2,586,920,351
7,226
Add R as a How to use from the Polars (R) Library as an option
### Feature request The boilerplate code to access a dataset via the Hugging Face file system is very useful. Please add ## Add Polars (R) option The equivalent code works, because the [Polars-R](https://github.com/pola-rs/r-polars) wrapper has Hugging Face functionality as well. ```r library(polars) ...
open
https://github.com/huggingface/datasets/issues/7226
2024-10-14T19:56:07
2024-10-14T19:57:13
null
{ "login": "ran-codes", "id": 45013044, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,586,229,216
7,225
Huggingface GIT returns null as Content-Type instead of application/x-git-receive-pack-result
### Describe the bug We push changes to our datasets programmatically. Our git client jGit reports that the hf git server returns null as Content-Type after a push. ### Steps to reproduce the bug A basic kotlin application: ``` val person = PersonIdent( "padmalcom", "padmalcom@sth.com" ) ...
open
https://github.com/huggingface/datasets/issues/7225
2024-10-14T14:33:06
2024-10-14T14:33:06
null
{ "login": "padmalcom", "id": 3961950, "type": "User" }
[]
false
[]
2,583,233,980
7,224
fallback to default feature casting in case custom features not available during dataset loading
a fix for #7223 in case datasets is happy to support this kind of extensibility! seems cool / powerful for allowing sharing of datasets with potentially different feature types
open
https://github.com/huggingface/datasets/pull/7224
2024-10-12T16:13:56
2024-10-12T16:13:56
null
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[]
true
[]
2,583,231,590
7,223
Fallback to arrow defaults when loading dataset with custom features that aren't registered locally
### Describe the bug Datasets allows users to create and register custom features. However if datasets are then pushed to the hub, this means that anyone calling load_dataset without registering the custom Features in the same way as the dataset creator will get an error message. It would be nice to offer a fall...
open
https://github.com/huggingface/datasets/issues/7223
2024-10-12T16:08:20
2024-10-12T16:08:20
null
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[]
false
[]
2,582,678,033
7,222
TypeError: Couldn't cast array of type string to null in long json
### Describe the bug In general, changing the type from string to null is allowed within a dataset — there are even examples of this in the documentation. However, if the dataset is large and unevenly distributed, this allowance stops working. The schema gets locked in after reading a chunk. Consequently, if al...
open
https://github.com/huggingface/datasets/issues/7222
2024-10-12T08:14:59
2025-07-21T03:07:32
null
{ "login": "nokados", "id": 5142577, "type": "User" }
[]
false
[]
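Issue 7222 above describes the column type getting locked in after the first JSON chunk, so an all-null first chunk can no longer widen to string later. The fix the report implies is type widening across chunks; a pure-Python sketch of that idea (the helpers `widen` and `infer_schema` are hypothetical, not the library's internals):

```python
def widen(a, b):
    """Combine two inferred column types; 'null' widens to anything."""
    if a == "null":
        return b
    if b == "null" or a == b:
        return a
    raise TypeError(f"cannot reconcile {a} and {b}")

def infer_schema(chunks):
    """Infer one column type across all chunks instead of locking in
    the type seen in the first chunk (the behaviour the issue hits)."""
    schema = {}
    for chunk in chunks:
        for col, values in chunk.items():
            t = "null" if all(v is None for v in values) else "string"
            schema[col] = widen(schema.get(col, "null"), t)
    return schema

# First chunk is all nulls, a later chunk has strings:
chunks = [{"a": [None, None]}, {"a": ["x", None]}]
print(infer_schema(chunks))  # {'a': 'string'}
```

With first-chunk locking, the same data would be typed null and the later `"x"` would fail to cast, which matches the reported error.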
2,582,114,631
7,221
add CustomFeature base class to support user-defined features with encoding/decoding logic
intended as fix for #7220 if this kind of extensibility is something that datasets is willing to support! ```python from datasets.features.features import CustomFeature class ListOfStrs(CustomFeature): requires_encoding = True def _encode_example(self, value): if isinstance(value, str): ...
closed
https://github.com/huggingface/datasets/pull/7221
2024-10-11T20:10:27
2025-01-28T09:40:29
2025-01-28T09:40:29
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[]
true
[]
2,582,036,110
7,220
Custom features not compatible with special encoding/decoding logic
### Describe the bug It is possible to register custom features using datasets.features.features.register_feature (https://github.com/huggingface/datasets/pull/6727) However such features are not compatible with Features.encode_example/decode_example if they require special encoding / decoding logic because encod...
open
https://github.com/huggingface/datasets/issues/7220
2024-10-11T19:20:11
2024-11-08T15:10:58
null
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[]
false
[]
2,581,708,084
7,219
bump fsspec
null
closed
https://github.com/huggingface/datasets/pull/7219
2024-10-11T15:56:36
2024-10-14T08:21:56
2024-10-14T08:21:55
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,581,095,098
7,217
ds.map(f, num_proc=10) is slower than df.apply
### Describe the bug pandas columns: song_id, song_name ds = Dataset.from_pandas(df) def has_cover(song_name): if song_name is None or pd.isna(song_name): return False return 'cover' in song_name.lower() df['has_cover'] = df.song_name.progress_apply(has_cover) ds = ds.map(lambda x: {'has_cov...
open
https://github.com/huggingface/datasets/issues/7217
2024-10-11T11:04:05
2025-02-28T21:21:01
null
{ "login": "lanlanlanlanlanlan365", "id": 178981231, "type": "User" }
[]
false
[]
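The predicate in report 7217 above can be reconstructed in plain Python (the `pd.isna` check is replaced by a plain `None` check so the sketch has no pandas dependency). A batched variant is also sketched, since batching the `map` call is the usual first step when it trails `DataFrame.apply`:

```python
def has_cover(song_name):
    """True if the song title mentions 'cover' (case-insensitive)."""
    if song_name is None:
        return False
    return "cover" in song_name.lower()

# Batched version, closer to what ds.map(..., batched=True) would call;
# one Python call per batch amortizes the per-example dispatch overhead.
def has_cover_batch(batch):
    return {"has_cover": [has_cover(s) for s in batch["song_name"]]}

batch = {"song_name": ["Yesterday (Cover)", None, "Original Mix"]}
print(has_cover_batch(batch))  # {'has_cover': [True, False, False]}
```

For a cheap per-row function like this, `num_proc=10` can also lose to a single process because the fork and serialization overhead dwarfs the work per row.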
2,579,942,939
7,215
Iterable dataset map with explicit features causes slowdown for Sequence features
### Describe the bug When performing map, it's nice to be able to pass the new feature type, and indeed required by interleave and concatenate datasets. However, this can cause a major slowdown for certain types of array features due to the features being re-encoded. This is separate to the slowdown reported i...
open
https://github.com/huggingface/datasets/issues/7215
2024-10-10T22:08:20
2024-10-10T22:10:32
null
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[]
false
[]
2,578,743,713
7,214
Formatted map + with_format(None) changes array dtype for iterable datasets
### Describe the bug When applying with_format -> map -> with_format(None), array dtypes seem to change, even if features are passed ### Steps to reproduce the bug ```python features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32")}) dataset = Dataset.from_dict({f"array0": [np.zeros((100,10,10...
open
https://github.com/huggingface/datasets/issues/7214
2024-10-10T12:45:16
2024-10-12T16:55:57
null
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[]
false
[]
2,578,675,565
7,213
Add with_rank to Dataset.from_generator
### Feature request Add `with_rank` to `Dataset.from_generator` similar to `Dataset.map` and `Dataset.filter`. ### Motivation As for `Dataset.map` and `Dataset.filter`, this is useful when creating cache files using multi-GPU, where the rank can be used to select GPU IDs. For now, rank can be added in the `ge...
open
https://github.com/huggingface/datasets/issues/7213
2024-10-10T12:15:29
2024-10-10T12:17:11
null
{ "login": "muthissar", "id": 17828087, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
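Request 7213 above notes that, as with `Dataset.map`, a rank would let each generator worker pick its own GPU. The shard-assignment rule such a `with_rank` would rely on is simple and can be shown in plain Python (`shards_for_rank` is a hypothetical helper, not a `datasets` API):

```python
def shards_for_rank(shards, rank, world_size):
    """Give every rank a disjoint, round-robin subset of the shards,
    e.g. so each rank can route its preprocessing to its own GPU."""
    return shards[rank::world_size]

shards = [f"shard-{i}" for i in range(7)]
assignment = {r: shards_for_rank(shards, r, 3) for r in range(3)}
print(assignment[1])  # ['shard-1', 'shard-4']
```

Until the feature exists, the same effect can be had by passing the shard subset through `gen_kwargs` per process, as the request itself hints.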
2,578,641,259
7,212
Windows does not support signal.alarm and signal.signal
### Describe the bug signal.alarm and signal.signal are used in the load.py module, but these are not supported by Windows. ### Steps to reproduce the bug lighteval accelerate --model_args "pretrained=gpt2,trust_remote_code=True" --tasks "community|kinit_sts" --custom_tasks "community_tasks/kinit_evals.py" --output...
open
https://github.com/huggingface/datasets/issues/7212
2024-10-10T12:00:19
2024-10-10T12:00:19
null
{ "login": "TomasJavurek", "id": 33832672, "type": "User" }
[]
false
[]
2,576,400,502
7,211
Describe only selected fields in README
### Feature request Hi Datasets team! Is it possible to add the ability to describe only selected fields of the dataset files in `README.md`? For example, I have this open dataset ([open-llm-leaderboard/results](https://huggingface.co/datasets/open-llm-leaderboard/results?row=0)) and I want to describe only some f...
open
https://github.com/huggingface/datasets/issues/7211
2024-10-09T16:25:47
2024-10-09T16:25:47
null
{ "login": "alozowski", "id": 67658835, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,575,883,939
7,210
Convert Array features to numpy arrays rather than lists by default
### Feature request It is currently quite easy to cause massive slowdowns when using datasets and not familiar with the underlying data conversions by e.g. making bad choices of formatting. Would it be more user-friendly to set defaults that avoid this as much as possible? e.g. format Array features as numpy arrays...
open
https://github.com/huggingface/datasets/issues/7210
2024-10-09T13:05:21
2024-10-09T13:05:21
null
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,575,526,651
7,209
Preserve features in iterable dataset.filter
Fixes example in #7208 - I'm not sure what other checks I should do? @lhoestq I also haven't thought hard about the concatenate / interleaving example iterables but think this might work assuming that features are either all identical or None?
closed
https://github.com/huggingface/datasets/pull/7209
2024-10-09T10:42:05
2024-10-16T11:27:22
2024-10-09T16:04:07
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[]
true
[]
2,575,484,256
7,208
Iterable dataset.filter should not override features
### Describe the bug When calling filter on an iterable dataset, the features get set to None ### Steps to reproduce the bug import numpy as np import time from datasets import Dataset, Features, Array3D ```python features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32"), "array1": Array3D((None,...
closed
https://github.com/huggingface/datasets/issues/7208
2024-10-09T10:23:45
2024-10-09T16:08:46
2024-10-09T16:08:45
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[]
false
[]
2,573,582,335
7,207
apply formatting after iter_arrow to speed up format -> map, filter for iterable datasets
I got to this by hacking around a bit, but it seems to solve #7206. I have no idea if this approach makes sense or would break something else. Could maybe work on a full PR if this looks reasonable, @lhoestq? I imagine the same issue might affect other iterable dataset methods?
closed
https://github.com/huggingface/datasets/pull/7207
2024-10-08T15:44:53
2025-01-14T18:36:03
2025-01-14T16:59:30
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[]
true
[]
2,573,567,467
7,206
Slow iteration for iterable dataset with numpy formatting for array data
### Describe the bug When working with large arrays, setting with_format to e.g. numpy then applying map causes a significant slowdown for iterable datasets. ### Steps to reproduce the bug ```python import numpy as np import time from datasets import Dataset, Features, Array3D features=Features(**{"array...
open
https://github.com/huggingface/datasets/issues/7206
2024-10-08T15:38:11
2024-10-17T17:14:52
null
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[]
false
[]
2,573,490,859
7,205
fix ci benchmark
We're not using the benchmarks anymore, and they were not working anyway due to token permissions. I keep the code in case we ever want to re-run the benchmark manually.
closed
https://github.com/huggingface/datasets/pull/7205
2024-10-08T15:06:18
2024-10-08T15:25:28
2024-10-08T15:25:25
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,573,289,063
7,204
fix unbatched arrow map for iterable datasets
Fixes the bug when applying map to an arrow-formatted iterable dataset described here: https://github.com/huggingface/datasets/issues/6833#issuecomment-2399903885 ```python from datasets import load_dataset ds = load_dataset("rotten_tomatoes", split="train", streaming=True) ds = ds.with_format("arrow").map(l...
closed
https://github.com/huggingface/datasets/pull/7204
2024-10-08T13:54:09
2024-10-08T14:19:47
2024-10-08T14:19:47
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[]
true
[]
2,573,154,222
7,203
with_format docstring
reported at https://github.com/huggingface/datasets/issues/3444
closed
https://github.com/huggingface/datasets/pull/7203
2024-10-08T13:05:19
2024-10-08T13:13:12
2024-10-08T13:13:05
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,572,583,798
7,202
`from_parquet` return type annotation
### Describe the bug As already posted in https://github.com/microsoft/pylance-release/issues/6534, the correct type hinting fails when building a dataset using the `from_parquet` constructor. Their suggestion is to comprehensively annotate the method's return type to better align with the docstring information. ###...
open
https://github.com/huggingface/datasets/issues/7202
2024-10-08T09:08:10
2024-10-08T09:08:10
null
{ "login": "saiden89", "id": 45285915, "type": "User" }
[]
false
[]
2,569,837,015
7,201
`load_dataset()` of images from a single directory where `train.png` image exists
### Describe the bug Hey! Firstly, thanks for maintaining such framework! I had a small issue, where I wanted to load a custom dataset of image+text captioning. I had all of my images in a single directory, and one of the images had the name `train.png`. Then, the loaded dataset had only this image. I guess it'...
open
https://github.com/huggingface/datasets/issues/7201
2024-10-07T09:14:17
2024-10-07T09:14:17
null
{ "login": "SagiPolaczek", "id": 56922146, "type": "User" }
[]
false
[]
2,567,921,694
7,200
Fix the environment variable for huggingface cache
Resolve #6256. As far as I tested, `HF_DATASETS_CACHE` was ignored and I could not specify the cache directory at all except for the default one by this environment variable. `HF_HOME` has worked. Perhaps the recent change on file downloading by `huggingface_hub` could affect this bug. In my testing, I could not sp...
closed
https://github.com/huggingface/datasets/pull/7200
2024-10-05T11:54:35
2024-10-30T23:10:27
2024-10-08T15:45:18
{ "login": "torotoki", "id": 989899, "type": "User" }
[]
true
[]
2,566,788,225
7,199
Add with_rank to Dataset.from_generator
Adds `with_rank` to `Dataset.from_generator`. As for `Dataset.map` and `Dataset.filter`, this is useful when creating cache files using multi-GPU.
open
https://github.com/huggingface/datasets/pull/7199
2024-10-04T16:51:53
2024-10-04T16:51:53
null
{ "login": "muthissar", "id": 17828087, "type": "User" }
[]
true
[]
2,566,064,849
7,198
Add repeat method to datasets
Following up on discussion in #6623 and #7198 I thought this would be pretty useful for my case so had a go at implementing. My main motivation is to be able to call iterable_dataset.repeat(None).take(samples_per_epoch) to safely avoid timeout issues in a distributed training setting. This would provide a straightfo...
closed
https://github.com/huggingface/datasets/pull/7198
2024-10-04T10:45:16
2025-02-05T16:32:31
2025-02-05T16:32:31
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[]
true
[]
2,565,924,788
7,197
ConnectionError: Couldn't reach 'allenai/c4' on the Hub (ConnectionError); the dataset won't download, what is going on?
### Describe the bug from datasets import load_dataset print("11") traindata = load_dataset('ptb_text_only', 'penn_treebank', split='train') print("22") valdata = load_dataset('ptb_text_only', 'penn_treebank', split='validation') ### Steps to reproduce the b...
open
https://github.com/huggingface/datasets/issues/7197
2024-10-04T09:33:25
2025-02-26T02:26:16
null
{ "login": "Mrgengli", "id": 114299344, "type": "User" }
[]
false
[]
2,564,218,566
7,196
concatenate_datasets does not preserve shuffling state
### Describe the bug After concatenating datasets on an iterable dataset, the shuffling state is destroyed, similar to #7156 This means concatenation can't be used for resolving uneven numbers of samples across devices when using iterable datasets in a distributed setting as discussed in #6623 I also noticed th...
open
https://github.com/huggingface/datasets/issues/7196
2024-10-03T14:30:38
2025-03-18T10:56:47
null
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[]
false
[]
2,564,070,809
7,195
Add support for 3D datasets
See https://huggingface.co/datasets/allenai/objaverse for example
open
https://github.com/huggingface/datasets/issues/7195
2024-10-03T13:27:44
2024-10-04T09:23:36
null
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,563,364,199
7,194
datasets.exceptions.DatasetNotFoundError for private dataset
### Describe the bug The following Python code tries to download a private dataset and fails with the error `datasets.exceptions.DatasetNotFoundError: Dataset 'ClimatePolicyRadar/all-document-text-data-weekly' doesn't exist on the Hub or cannot be accessed.`. Downloading a public dataset doesn't work. ``` py fro...
closed
https://github.com/huggingface/datasets/issues/7194
2024-10-03T07:49:36
2024-10-03T10:09:28
2024-10-03T10:09:28
{ "login": "kdutia", "id": 20212179, "type": "User" }
[]
false
[]
2,562,392,887
7,193
Support of num_workers (multiprocessing) in map for IterableDataset
### Feature request Currently, IterableDataset doesn't support setting num_worker in .map(), which results in slow processing here. Could we add support for it? As .map() can be run in the batch fashion (e.g., batch_size is default to 1000 in datasets), it seems to be doable for IterableDataset as the regular Dataset....
open
https://github.com/huggingface/datasets/issues/7193
2024-10-02T18:34:04
2024-10-03T09:54:15
null
{ "login": "getao", "id": 12735658, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,562,289,642
7,192
Add repeat() for iterable datasets
### Feature request It would be useful to be able to straightforwardly repeat iterable datasets indefinitely, to provide complete control over starting and ending of iteration to the user. An IterableDataset.repeat(n) function could do this automatically ### Motivation This feature was discussed in this iss...
closed
https://github.com/huggingface/datasets/issues/7192
2024-10-02T17:48:13
2025-03-18T10:48:33
2025-03-18T10:48:32
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,562,206,949
7,191
Solution to issue: #7080 Modified load_dataset function, so that it prompts the user to select a dataset when subdatasets or splits (train, test) are available
# Feel free to give suggestions please.. ### This PR is raised because of issue: https://github.com/huggingface/datasets/issues/7080 ![image](https://github.com/user-attachments/assets/8fbc604f-f0a5-4a59-a63e-aa4c26442c83) ### This PR gives solution to https://github.com/huggingface/datasets/issues/7080 1. ...
closed
https://github.com/huggingface/datasets/pull/7191
2024-10-02T17:02:45
2024-11-10T08:48:21
2024-11-10T08:48:21
{ "login": "negativenagesh", "id": 148525245, "type": "User" }
[]
true
[]
2,562,162,725
7,190
Datasets conflicts with fsspec 2024.9
### Describe the bug Installing both in latest versions are not possible `pip install "datasets==3.0.1" "fsspec==2024.9.0"` But using older version of datasets is ok `pip install "datasets==1.24.4" "fsspec==2024.9.0"` ### Steps to reproduce the bug `pip install "datasets==3.0.1" "fsspec==2024.9.0"` #...
open
https://github.com/huggingface/datasets/issues/7190
2024-10-02T16:43:46
2024-10-10T07:33:18
null
{ "login": "cw-igormorgado", "id": 162599174, "type": "User" }
[]
false
[]
2,562,152,845
7,189
Audio preview in dataset viewer for audio array data without a path/filename
### Feature request Huggingface has quite a comprehensive set of guides for [audio datasets](https://huggingface.co/docs/datasets/en/audio_dataset). It seems, however, all these guides assume the audio array data to be decoded/inserted into a HF dataset always originates from individual files. The [Audio-dataclass](...
open
https://github.com/huggingface/datasets/issues/7189
2024-10-02T16:38:38
2024-10-02T17:01:40
null
{ "login": "Lauler", "id": 7157234, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,560,712,689
7,188
Pin multiprocess<0.70.1 to align with dill<0.3.9
Pin multiprocess<0.70.1 to align with dill<0.3.9. Note that multiprocess-0.70.1 requires dill-0.3.9: https://github.com/uqfoundation/multiprocess/releases/tag/0.70.17 Fix #7186.
closed
https://github.com/huggingface/datasets/pull/7188
2024-10-02T05:40:18
2024-10-02T06:08:25
2024-10-02T06:08:23
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,560,501,308
7,187
shard_data_sources() got an unexpected keyword argument 'worker_id'
### Describe the bug ``` [rank0]: File "/home/qinghao/miniconda3/envs/doremi/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 238, in __iter__ [rank0]: for key_example in islice(self.generate_examples_fn(**gen_kwags), shard_example_idx_start, None): [rank0]: File "/home/qinghao/miniconda3/en...
open
https://github.com/huggingface/datasets/issues/7187
2024-10-02T01:26:35
2024-10-02T01:26:35
null
{ "login": "Qinghao-Hu", "id": 27758466, "type": "User" }
[]
false
[]
2,560,323,917
7,186
pinning `dill<0.3.9` without pinning `multiprocess`
### Describe the bug The [latest `multiprocess` release](https://github.com/uqfoundation/multiprocess/releases/tag/0.70.17) requires `dill>=0.3.9` which causes issues when installing `datasets` without backtracking during package version resolution. Is it possible to add a pin for multiprocess so something like `multi...
closed
https://github.com/huggingface/datasets/issues/7186
2024-10-01T22:29:32
2024-10-02T06:08:24
2024-10-02T06:08:24
{ "login": "shubhbapna", "id": 38372682, "type": "User" }
[]
false
[]
2,558,508,748
7,185
CI benchmarks are broken
Since Aug 30, 2024, CI benchmarks are broken: https://github.com/huggingface/datasets/actions/runs/11108421214/job/30861323975 ``` {"level":"error","message":"Resource not accessible by integration","name":"HttpError","request":{"body":"{\"body\":\"<details>\\n<summary>Show benchmarks</summary>\\n\\nPyArrow==8.0.0\\n...
closed
https://github.com/huggingface/datasets/issues/7185
2024-10-01T08:16:08
2024-10-09T16:07:48
2024-10-09T16:07:48
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "maintenance", "color": "d4c5f9" } ]
false
[]
2,556,855,150
7,184
Pin dill<0.3.9 to fix CI
Pin dill<0.3.9 to fix CI for deps-latest. Note that dill-0.3.9 was released yesterday Sep 29, 2024: - https://pypi.org/project/dill/0.3.9/ - https://github.com/uqfoundation/dill/releases/tag/0.3.9 Fix #7183.
closed
https://github.com/huggingface/datasets/pull/7184
2024-09-30T14:26:25
2024-09-30T14:38:59
2024-09-30T14:38:57
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,556,789,055
7,183
CI is broken for deps-latest
See: https://github.com/huggingface/datasets/actions/runs/11106149906/job/30853879890 ``` =========================== short test summary info ============================ FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_filter_caching_on_disk - AssertionError: Lists differ: [{'fi[44 chars] {'filename': '/...
closed
https://github.com/huggingface/datasets/issues/7183
2024-09-30T14:02:07
2024-09-30T14:38:58
2024-09-30T14:38:58
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
false
[]
2,556,333,671
7,182
Support features in metadata configs
Support features in metadata configs, like: ``` configs: - config_name: default features: - name: id dtype: int64 - name: name dtype: string - name: score dtype: float64 ``` This will allow to avoid inference of data types. Currently, we allow passing th...
closed
https://github.com/huggingface/datasets/pull/7182
2024-09-30T11:14:53
2024-10-09T16:03:57
2024-10-09T16:03:54
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,554,917,019
7,181
Fix datasets export to JSON
null
closed
https://github.com/huggingface/datasets/pull/7181
2024-09-29T12:45:20
2024-11-01T11:55:36
2024-11-01T11:55:36
{ "login": "varadhbhatnagar", "id": 20443618, "type": "User" }
[]
true
[]
2,554,244,750
7,180
Memory leak when wrapping datasets into PyTorch Dataset without explicit deletion
### Describe the bug I've encountered a memory leak when wrapping the HuggingFace dataset into a PyTorch Dataset. The RAM usage constantly increases during iteration if items are not explicitly deleted after use. ### Steps to reproduce the bug Steps to reproduce: Create a PyTorch Dataset wrapper f...
closed
https://github.com/huggingface/datasets/issues/7180
2024-09-28T14:00:47
2024-09-30T12:07:56
2024-09-30T12:07:56
{ "login": "iamwangyabin", "id": 38123329, "type": "User" }
[]
false
[]
2,552,387,980
7,179
Support Python 3.11
Support Python 3.11. Fix #7178.
closed
https://github.com/huggingface/datasets/pull/7179
2024-09-27T08:55:44
2024-10-08T16:21:06
2024-10-08T16:21:03
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,552,378,330
7,178
Support Python 3.11
Support Python 3.11: https://peps.python.org/pep-0664/
closed
https://github.com/huggingface/datasets/issues/7178
2024-09-27T08:50:47
2024-10-08T16:21:04
2024-10-08T16:21:04
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,552,371,082
7,177
Fix release instructions
Fix release instructions. During last release, I had to make this additional update.
closed
https://github.com/huggingface/datasets/pull/7177
2024-09-27T08:47:01
2024-09-27T08:57:35
2024-09-27T08:57:32
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,551,025,564
7,176
fix grammar in fingerprint.py
I see this error all the time and it was starting to get to me.
open
https://github.com/huggingface/datasets/pull/7176
2024-09-26T16:13:42
2024-09-26T16:13:42
null
{ "login": "jxmorris12", "id": 13238952, "type": "User" }
[]
true
[]
2,550,957,337
7,175
[FSTimeoutError] load_dataset
### Describe the bug When using `load_dataset`to load [HuggingFaceM4/VQAv2](https://huggingface.co/datasets/HuggingFaceM4/VQAv2), I am getting `FSTimeoutError`. ### Error ``` TimeoutError: The above exception was the direct cause of the following exception: FSTimeoutError Trac...
closed
https://github.com/huggingface/datasets/issues/7175
2024-09-26T15:42:29
2025-02-01T09:09:35
2024-09-30T17:28:35
{ "login": "cosmo3769", "id": 53268607, "type": "User" }
[]
false
[]
2,549,892,315
7,174
Set dev version
null
closed
https://github.com/huggingface/datasets/pull/7174
2024-09-26T08:30:11
2024-09-26T08:32:39
2024-09-26T08:30:21
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]