Column schema (value ranges as reported by the dataset viewer):

| column | dtype | min | max |
| --- | --- | --- | --- |
| id | int64 | 599M | 3.26B |
| number | int64 | 1 | 7.7k |
| title | string (lengths) | 1 | 290 |
| body | string (lengths) | 0 | 228k |
| state | string (classes) | 2 values | |
| html_url | string (lengths) | 46 | 51 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-07-23 08:04:53 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-07-23 18:53:44 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-07-23 16:44:42 |
| user | dict | | |
| labels | list (lengths) | 0 | 4 |
| is_pull_request | bool | 2 classes | |
| comments | list (lengths) | 0 | 0 |
1,468,352,562
5,312
Add DatasetDict.to_pandas
From discussions in https://github.com/huggingface/datasets/issues/5189, for tabular data it doesn't really make sense to have to do ```python df = load_dataset(...)["train"].to_pandas() ``` because many datasets are not split. In this PR I added `to_pandas` to `DatasetDict` which returns the DataFrame: If th...
closed
https://github.com/huggingface/datasets/pull/5312
2022-11-29T16:30:02
2023-09-24T10:06:19
2023-01-25T17:33:42
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
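The convenience this PR describes can be sketched in pure Python. `MiniDatasetDict` and `to_table` below are illustration names only, not the library's implementation: a stand-in for `to_pandas` that returns the single split's rows directly and refuses to guess when several splits exist.

```python
# Hypothetical sketch of the DatasetDict.to_pandas convenience discussed
# above; MiniDatasetDict and to_table are illustration names only.

class MiniDatasetDict(dict):
    def to_table(self):
        # With exactly one split, return its rows directly, so users
        # don't have to write ds["train"].to_pandas() for unsplit data.
        if len(self) == 1:
            return next(iter(self.values()))
        raise ValueError(
            f"dataset has {len(self)} splits; select one explicitly"
        )

ds = MiniDatasetDict(train=[{"a": 1}, {"a": 2}])
print(ds.to_table())  # [{'a': 1}, {'a': 2}]
```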
1,467,875,153
5,311
Add `features` param to `IterableDataset.map`
## Description As suggested by @lhoestq in #3888, we should be adding the param `features` to `IterableDataset.map` so that the features can be preserved (not turned into `None` as that's the default behavior) whenever the user passes those as param, so as to be consistent with `Dataset.map`, as it provides the `fea...
closed
https://github.com/huggingface/datasets/pull/5311
2022-11-29T11:08:34
2022-12-06T15:45:02
2022-12-06T15:42:04
{ "login": "alvarobartt", "id": 36760800, "type": "User" }
[]
true
[]
1,467,719,635
5,310
Support xPath for Windows pathnames
This PR implements a string representation of `xPath`, which is valid for local paths (also windows) and remote URLs. Additionally, some `os.path` methods are fixed for remote URLs on Windows machines. Now, on Windows machines: ```python In [2]: str(xPath("C:\\dir\\file.txt")) Out[2]: 'C:\\dir\\file.txt' In [...
closed
https://github.com/huggingface/datasets/pull/5310
2022-11-29T09:20:47
2022-11-30T12:00:09
2022-11-30T11:57:16
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,466,758,987
5,309
Close stream in `ArrowWriter.finalize` before inference error
Ensure the file stream is closed in `ArrowWriter.finalize` before raising the `SchemaInferenceError` to avoid the `PermissionError` on Windows in `incomplete_dir`'s `shutil.rmtree`.
closed
https://github.com/huggingface/datasets/pull/5309
2022-11-28T16:59:39
2022-12-07T12:55:20
2022-12-07T12:52:15
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,466,552,281
5,308
Support `topdown` parameter in `xwalk`
Add support for the `topdown` parameter in `xwalk` when `fsspec>=2022.11.0` is installed.
closed
https://github.com/huggingface/datasets/pull/5308
2022-11-28T14:42:41
2022-12-09T12:58:55
2022-12-09T12:55:59
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
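`xwalk` mirrors the standard `os.walk`, whose `topdown` parameter controls whether a directory is yielded before or after its children. A quick stdlib illustration using a temporary directory (no `datasets` involved):

```python
import os
import tempfile

# Build root/a/b and walk it in both directions.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "a", "b"))

top_down = [os.path.relpath(d, root) for d, _, _ in os.walk(root, topdown=True)]
bottom_up = [os.path.relpath(d, root) for d, _, _ in os.walk(root, topdown=False)]

# For a linear chain of directories, bottom-up is exactly the reverse order.
print(top_down[0])    # '.' comes first when topdown=True
print(bottom_up[-1])  # and last when topdown=False
```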
1,466,477,427
5,307
Use correct dataset type in `from_generator` docs
Use the correct dataset type in the `from_generator` docs (example with sharding).
closed
https://github.com/huggingface/datasets/pull/5307
2022-11-28T13:59:10
2022-11-28T15:30:37
2022-11-28T15:27:26
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,465,968,639
5,306
Can't use custom feature description when loading a dataset
### Describe the bug I have created a feature dictionary to describe my datasets' column types, to use when loading the dataset, following [the doc](https://huggingface.co/docs/datasets/main/en/about_dataset_features). It crashes at dataset load. ### Steps to reproduce the bug ```python # Creating features task_...
closed
https://github.com/huggingface/datasets/issues/5306
2022-11-28T07:55:44
2022-11-28T08:11:45
2022-11-28T08:11:44
{ "login": "clefourrier", "id": 22726840, "type": "User" }
[]
false
[]
1,465,627,826
5,305
Dataset joelito/mc4_legal does not work with multiple files
### Describe the bug The dataset https://huggingface.co/datasets/joelito/mc4_legal works for languages like bg with a single data file, but not for languages with multiple files like de. It shows zero rows for the de dataset. joelniklaus@Joels-MacBook-Pro ~/N/P/C/L/p/m/mc4_legal (main) [1]> python test_mc4_legal....
closed
https://github.com/huggingface/datasets/issues/5305
2022-11-28T00:16:16
2022-11-28T07:22:42
2022-11-28T07:22:42
{ "login": "JoelNiklaus", "id": 3775944, "type": "User" }
[]
false
[]
1,465,110,367
5,304
timit_asr doesn't load the test split.
### Describe the bug When I use the function ```timit = load_dataset('timit_asr', data_dir=data_dir)```, it only loads the train split, not the test split. I tried changing the directory and file names between lower case and upper case for the test split, but it does not work at all. ```python DatasetDict({ train: Datase...
closed
https://github.com/huggingface/datasets/issues/5304
2022-11-26T10:18:22
2023-02-10T16:33:21
2023-02-10T16:33:21
{ "login": "seyong92", "id": 17842800, "type": "User" }
[]
false
[]
1,464,837,251
5,303
Skip dataset verifications by default
Skip the dataset verifications (split and checksum verifications, duplicate keys check) by default unless a dataset is being tested (`datasets-cli test/run_beam`). The main goal is to avoid running the checksum check in the default case due to how expensive it can be for large datasets. PS: Maybe we should deprecate...
closed
https://github.com/huggingface/datasets/pull/5303
2022-11-25T18:39:09
2023-02-13T16:50:42
2023-02-13T16:43:47
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,464,778,901
5,302
Improve `use_auth_token` docstring and deprecate `use_auth_token` in `download_and_prepare`
Clarify in the docstrings what happens when `use_auth_token` is `None` and deprecate the `use_auth_token` param in `download_and_prepare`.
closed
https://github.com/huggingface/datasets/pull/5302
2022-11-25T17:09:21
2022-12-09T14:20:15
2022-12-09T14:17:20
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,464,749,156
5,301
Return a split Dataset in load_dataset
...instead of a DatasetDict. ```python # now supported ds = load_dataset("squad") ds[0] for example in ds: pass # still works ds["train"] ds["validation"] # new ds.splits # Dict[str, Dataset] | None # soon to be supported (not in this PR) ds = load_dataset("dataset_with_no_splits") ds[0] f...
closed
https://github.com/huggingface/datasets/pull/5301
2022-11-25T16:35:54
2023-09-24T10:06:15
2023-02-21T13:13:13
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,464,697,136
5,300
Use same `num_proc` for dataset download and generation
Use the same `num_proc` value for data download and generation. Additionally, do not set `num_proc` to 16 in `DownloadManager` by default (`num_proc` now has to be specified explicitly).
closed
https://github.com/huggingface/datasets/pull/5300
2022-11-25T15:37:42
2022-12-07T12:55:39
2022-12-07T12:52:51
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,464,695,091
5,299
Fix xopen for Windows pathnames
This PR fixes a bug in `xopen` function for Windows pathnames. Fix #5298.
closed
https://github.com/huggingface/datasets/pull/5299
2022-11-25T15:35:28
2022-11-29T08:23:58
2022-11-29T08:21:24
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,464,681,871
5,298
Bug in xopen with Windows pathnames
Currently, the `xopen` function has a bug with local Windows pathnames. From its implementation: ```python def xopen(file: str, mode="r", *args, **kwargs): file = _as_posix(PurePath(file)) main_hop, *rest_hops = file.split("::") if is_local_path(main_hop): return open(file, mode, *args, **kwarg...
closed
https://github.com/huggingface/datasets/issues/5298
2022-11-25T15:21:32
2022-11-29T08:21:25
2022-11-29T08:21:25
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,464,554,491
5,297
Fix xjoin for Windows pathnames
This PR fixes a bug in `xjoin` function with Windows pathnames. Fix #5296.
closed
https://github.com/huggingface/datasets/pull/5297
2022-11-25T13:30:17
2022-11-29T08:07:39
2022-11-29T08:05:12
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,464,553,580
5,296
Bug in xjoin with Windows pathnames
Currently, the `xjoin` function has a bug with local Windows pathnames: instead of returning the OS-dependent join pathname, it always returns it in POSIX format. ```python from datasets.download.streaming_download_manager import xjoin path = xjoin("C:\\Users\\USERNAME", "filename.txt") ``` Join path should be: ...
closed
https://github.com/huggingface/datasets/issues/5296
2022-11-25T13:29:33
2022-11-29T08:05:13
2022-11-29T08:05:13
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
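The POSIX-vs-Windows behavior this issue describes is easy to reproduce with the stdlib `pathlib` pure-path classes. These are not the `xjoin` internals, just an illustration of why separator-aware joining matters:

```python
from pathlib import PurePosixPath, PureWindowsPath

# Joining with Windows semantics keeps the native backslash separator...
win = PureWindowsPath("C:\\Users\\USERNAME") / "filename.txt"
print(str(win))    # C:\Users\USERNAME\filename.txt

# ...while POSIX semantics treat the whole drive path as one opaque
# component and append with a forward slash, as in the reported bug.
posix = PurePosixPath("C:\\Users\\USERNAME") / "filename.txt"
print(str(posix))  # C:\Users\USERNAME/filename.txt
```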
1,464,006,743
5,295
Extractions failed when .zip file located on read-only path (e.g., SageMaker FastFile mode)
### Describe the bug Hi, `load_dataset()` does not work with .zip files located in a read-only directory. It looks like it's because Dataset creates a lock file in the [same directory](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/utils/extract.py) as the .zip file. ...
closed
https://github.com/huggingface/datasets/issues/5295
2022-11-25T03:59:43
2023-07-21T14:39:09
2023-07-21T14:39:09
{ "login": "verdimrc", "id": 2340781, "type": "User" }
[]
false
[]
1,463,679,582
5,294
Support streaming datasets with pathlib.Path.with_suffix
This PR extends the support in streaming mode for datasets that use `pathlib.Path.with_suffix`. Fix #5293.
closed
https://github.com/huggingface/datasets/pull/5294
2022-11-24T18:04:38
2022-11-29T07:09:08
2022-11-29T07:06:32
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,463,669,201
5,293
Support streaming datasets with pathlib.Path.with_suffix
Extend support for streaming datasets that use `pathlib.Path.with_suffix`. This feature will be useful e.g. for datasets containing text files and annotated files with the same name but different extension.
closed
https://github.com/huggingface/datasets/issues/5293
2022-11-24T17:52:08
2022-11-29T07:06:33
2022-11-29T07:06:33
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
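The `pathlib.Path.with_suffix` pattern this issue wants supported in streaming is the stdlib behavior below, useful for pairing e.g. an audio file with a same-named annotation file:

```python
from pathlib import PurePosixPath

audio = PurePosixPath("data/sample001.wav")
annotation = audio.with_suffix(".txt")  # swap only the extension
print(annotation)  # data/sample001.txt
```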
1,463,053,832
5,292
Missing documentation build for versions 2.7.1 and 2.6.2
After the patch releases [2.7.1](https://github.com/huggingface/datasets/releases/tag/2.7.1) and [2.6.2](https://github.com/huggingface/datasets/releases/tag/2.6.2), the online docs were not properly built (the build_documentation workflow was not triggered). There was a fix by: - #5291 However, both documentati...
closed
https://github.com/huggingface/datasets/issues/5292
2022-11-24T09:42:10
2022-11-24T10:10:02
2022-11-24T10:10:02
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "maintenance", "color": "d4c5f9" } ]
false
[]
1,462,983,472
5,291
[build doc] for v2.7.1 & v2.6.2
Do NOT merge. Using this PR to build docs for [v2.7.1](https://github.com/huggingface/datasets/pull/5291/commits/f4914af20700f611b9331a9e3ba34743bbeff934) & [v2.6.2](https://github.com/huggingface/datasets/pull/5291/commits/025f85300a0874eeb90a20393c62f25ac0accaa0)
closed
https://github.com/huggingface/datasets/pull/5291
2022-11-24T08:54:47
2022-11-24T09:14:10
2022-11-24T09:11:15
{ "login": "mishig25", "id": 11827707, "type": "User" }
[]
true
[]
1,462,716,766
5,290
fix error where reading breaks when batch missing an assigned column feature
null
open
https://github.com/huggingface/datasets/pull/5290
2022-11-24T03:53:46
2022-11-25T03:21:54
null
{ "login": "eunseojo", "id": 12104720, "type": "User" }
[]
true
[]
1,462,543,139
5,289
Added support for JXL images.
JPEG-XL is the most advanced of the next-generation of image codecs, supporting both lossless and lossy files — with better compression and quality than PNG and JPG respectively. It has reduced the disk sizes and bandwidth required for many of the datasets I use. Pillow does not yet support JXL, but there's a plugi...
open
https://github.com/huggingface/datasets/pull/5289
2022-11-23T23:16:33
2022-11-29T18:49:46
null
{ "login": "alexjc", "id": 445208, "type": "User" }
[]
true
[]
1,462,134,067
5,288
Lossy json serialization - deserialization of dataset info
### Describe the bug Saving a dataset to disk as json (using `to_json`) and then loading it again (using `load_dataset`) results in features whose labels are not type-cast correctly. In the code snippet below, `features.label` should have a label of type `ClassLabel` but has type `Value` instead. ### Steps to re...
open
https://github.com/huggingface/datasets/issues/5288
2022-11-23T17:20:15
2022-11-25T12:53:51
null
{ "login": "anuragprat1k", "id": 57542204, "type": "User" }
[]
false
[]
1,461,971,889
5,287
Fix methods using `IterableDataset.map` that lead to `features=None`
Since `IterableDataset.map` currently sets `info.features` to `None` every time (we don't know the output of the dataset in advance), `IterableDataset` methods that internally use `map`, such as `rename_column`, `rename_columns`, and `remove_columns`, leave the features as `None`. This PR is related to #...
closed
https://github.com/huggingface/datasets/pull/5287
2022-11-23T15:33:25
2022-11-28T15:43:14
2022-11-28T12:53:22
{ "login": "alvarobartt", "id": 36760800, "type": "User" }
[]
true
[]
1,461,908,087
5,286
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json
### Describe the bug I follow the steps provided on the website [https://huggingface.co/datasets/wikipedia](https://huggingface.co/datasets/wikipedia) $ pip install apache_beam mwparserfromhell >>> from datasets import load_dataset >>> load_dataset("wikipedia", "20220301.en") however this results in the follo...
closed
https://github.com/huggingface/datasets/issues/5286
2022-11-23T14:54:15
2024-11-23T01:16:41
2022-11-25T11:33:14
{ "login": "roritol", "id": 32490135, "type": "User" }
[]
false
[]
1,461,521,215
5,285
Save file name in embed_storage
Having the file name is useful in case we need to check the extension of the file (e.g. mp3), or in general in case it includes some metadata information (track id, image id etc.) Related to https://github.com/huggingface/datasets/issues/5276
closed
https://github.com/huggingface/datasets/pull/5285
2022-11-23T10:55:54
2022-11-24T14:11:41
2022-11-24T14:08:37
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,461,519,733
5,284
Features of IterableDataset set to None by remove column
### Describe the bug The `remove_column` method of the IterableDataset sets the dataset features to None. ### Steps to reproduce the bug ```python from datasets import Audio, load_dataset # load LS in streaming mode dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True) ...
closed
https://github.com/huggingface/datasets/issues/5284
2022-11-23T10:54:59
2025-02-07T11:36:41
2022-11-28T12:53:24
{ "login": "sanchit-gandhi", "id": 93869735, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "streaming", "color": "fef2c0" } ]
false
[]
1,460,291,003
5,283
Release: 2.6.2
null
closed
https://github.com/huggingface/datasets/pull/5283
2022-11-22T17:36:24
2022-11-22T17:50:12
2022-11-22T17:47:02
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,460,238,928
5,282
Release: 2.7.1
null
closed
https://github.com/huggingface/datasets/pull/5282
2022-11-22T16:58:54
2022-11-22T17:21:28
2022-11-22T17:21:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,459,930,271
5,281
Support cloud storage in load_dataset
Would be nice to be able to do ```python data_files=["s3://..."] # or gs:// or any cloud storage path storage_options = {...} load_dataset(..., data_files=data_files, storage_options=storage_options) ``` The idea would be to use `fsspec` as in `download_and_prepare` and `save_to_disk`. This has been reque...
open
https://github.com/huggingface/datasets/issues/5281
2022-11-22T14:00:10
2024-11-15T15:03:41
null
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "good second issue", "color": "BDE59C" } ]
false
[]
1,459,823,179
5,280
Import error
https://github.com/huggingface/datasets/blob/cd3d8e637cfab62d352a3f4e5e60e96597b5f0e9/src/datasets/__init__.py#L28 Hi, I get an error at the above line. I have Python version 3.8.13, and the message says I need python>=3.7, which is true, so I think the if statement is not working properly (or the message is wrong).
closed
https://github.com/huggingface/datasets/issues/5280
2022-11-22T12:56:43
2022-12-15T19:57:40
2022-12-15T19:57:40
{ "login": "feketedavid1012", "id": 40760055, "type": "User" }
[]
false
[]
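A likely culprit for version checks like the one referenced is comparing versions as strings rather than as tuples; the snippet below illustrates that failure mode (an assumption on my part, not a diagnosis of the actual line):

```python
import sys

# Lexicographic string comparison misorders versions: '1' < '7' char-wise,
# so "3.10" sorts before "3.7" even though 3.10 is the newer release.
print("3.10" < "3.7")             # True

# Tuple comparison on sys.version_info is the robust check.
print(sys.version_info >= (3, 7))
```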
1,459,635,002
5,279
Warn about checksums
Computing the checksums takes a lot of time on big datasets, so we should at least add a warning to notify the user about this step. I also mentioned how to disable it, and added a tqdm bar (delay=5 seconds). cc @ola13
closed
https://github.com/huggingface/datasets/pull/5279
2022-11-22T10:58:48
2022-11-23T11:43:50
2022-11-23T09:47:02
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,459,574,490
5,278
load_dataset does not read jsonl metadata file properly
### Describe the bug Hi, I'm following [this page](https://huggingface.co/docs/datasets/image_dataset) to create a dataset of images and captions via an image folder and a metadata.json file, but I can't seem to get the dataloader to recognize the "text" column. It just spits out "image" and "label" as features. B...
closed
https://github.com/huggingface/datasets/issues/5278
2022-11-22T10:24:46
2023-02-14T14:48:16
2022-11-23T11:38:35
{ "login": "065294847", "id": 81414263, "type": "User" }
[]
false
[]
1,459,388,551
5,277
Remove YAML integer keys from class_label metadata
Fix partially #5275.
closed
https://github.com/huggingface/datasets/pull/5277
2022-11-22T08:34:07
2022-11-22T13:58:26
2022-11-22T13:55:49
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,459,363,442
5,276
Bug in downloading common_voice data and a small chunk of it to one's own hub
### Describe the bug I'm trying to load the common voice dataset. Currently there is no implementation to download just part of the data, and I need just one part of it without downloading the entire dataset. Help please? ![image](https://user-images.githubusercontent.com/48530104/203260511-26df766f-6013-4...
closed
https://github.com/huggingface/datasets/issues/5276
2022-11-22T08:17:53
2023-07-21T14:33:10
2023-07-21T14:33:10
{ "login": "capsabogdan", "id": 48530104, "type": "User" }
[]
false
[]
1,459,358,919
5,275
YAML integer keys are not preserved Hub server-side
After an internal discussion (https://github.com/huggingface/moon-landing/issues/4563): - YAML integer keys are not preserved server-side: they are transformed to strings - See for example this Hub PR: https://huggingface.co/datasets/acronym_identification/discussions/1/files - Original: ```yaml ...
closed
https://github.com/huggingface/datasets/issues/5275
2022-11-22T08:14:47
2023-01-26T10:52:35
2023-01-26T10:40:21
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
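One plausible mechanism for the stringification (an assumption here; the issue only reports the symptom) is a round-trip through a JSON-like layer, since JSON object keys must be strings:

```python
import json

# YAML permits integer mapping keys, e.g. class_label names {0: ..., 1: ...},
# but a JSON round-trip stringifies them, matching the behavior described.
names = {0: "B-long", 1: "B-short", 2: "I-long"}
round_tripped = json.loads(json.dumps(names))
print(round_tripped)  # {'0': 'B-long', '1': 'B-short', '2': 'I-long'}
```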
1,458,646,455
5,274
load_dataset possibly broken for gated datasets?
### Describe the bug When trying to download the [winoground dataset](https://huggingface.co/datasets/facebook/winoground), I get this error unless I roll back the version of huggingface-hub: ``` [/usr/local/lib/python3.7/dist-packages/huggingface_hub/utils/_validators.py](https://localhost:8080/#) in validate_rep...
closed
https://github.com/huggingface/datasets/issues/5274
2022-11-21T21:59:53
2023-05-27T00:06:14
2022-11-28T02:50:42
{ "login": "TristanThrush", "id": 20826878, "type": "User" }
[]
false
[]
1,458,018,050
5,273
download_mode="force_redownload" does not refresh cached dataset
### Describe the bug `load_dataset` does not refresh the dataset when features are imported from an external file, even with `download_mode="force_redownload"`. The bug is not limited to nested fields; however, it is more likely to occur with them. ### Steps to reproduce the bug To reproduce the bug 3 files are ne...
open
https://github.com/huggingface/datasets/issues/5273
2022-11-21T14:12:43
2022-11-21T14:13:03
null
{ "login": "nomisto", "id": 28439912, "type": "User" }
[]
false
[]
1,456,940,021
5,272
Use pyarrow Tensor dtype
### Feature request I was going through the discussion of converting tensors to lists. Is there a way to leverage pyarrow's Tensors for nested arrays / embeddings? For example: ```python import pyarrow as pa import numpy as np x = np.array([[2, 2, 4], [4, 5, 100]], np.int32) pa.Tensor.from_numpy(x, dim_names=["dim1...
open
https://github.com/huggingface/datasets/issues/5272
2022-11-20T15:18:41
2024-11-11T03:03:17
null
{ "login": "franz101", "id": 18228395, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,456,807,738
5,271
Fix #5269
``` $ datasets-cli convert --datasets_directory <TAB> datasets_directory benchmarks/ docs/ metrics/ notebooks/ src/ templates/ tests/ utils/ ```
closed
https://github.com/huggingface/datasets/pull/5271
2022-11-20T07:50:49
2022-11-21T15:07:19
2022-11-21T15:06:38
{ "login": "Freed-Wu", "id": 32936898, "type": "User" }
[]
true
[]
1,456,508,990
5,270
When len(_URLS) > 16, download will hang
### Describe the bug ```python In [9]: dataset = load_dataset('Freed-Wu/kodak', split='test') Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.53k/2.53k [00:00<00:00, 1.88MB/s] [1...
open
https://github.com/huggingface/datasets/issues/5270
2022-11-19T14:27:41
2022-11-21T15:27:16
null
{ "login": "Freed-Wu", "id": 32936898, "type": "User" }
[]
false
[]
1,456,485,799
5,269
Shell completions
### Feature request Like <https://github.com/huggingface/huggingface_hub/issues/1197>, datasets-cli maybe need it, too. ### Motivation See above. ### Your contribution Maybe.
closed
https://github.com/huggingface/datasets/issues/5269
2022-11-19T13:48:59
2022-11-21T15:06:15
2022-11-21T15:06:14
{ "login": "Freed-Wu", "id": 32936898, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,455,633,978
5,268
Sharded save_to_disk + multiprocessing
Added `num_shards=` and `num_proc=` to `save_to_disk()` EDIT: also added `max_shard_size=` to `save_to_disk()`, and also `num_shards=` to `push_to_hub` I also: - deprecated the fs parameter in favor of storage_options (for consistency with the rest of the lib) in save_to_disk and load_from_disk - always embed t...
closed
https://github.com/huggingface/datasets/pull/5268
2022-11-18T18:50:01
2022-12-14T18:25:52
2022-12-14T18:22:58
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
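Splitting a dataset into a fixed `num_shards` of near-equal contiguous ranges comes down to simple bookkeeping; here is a sketch of that arithmetic (not the library's actual code):

```python
def shard_bounds(n_rows: int, num_shards: int):
    # The first (n_rows % num_shards) shards get one extra row, so shard
    # sizes differ by at most one and the ranges stay contiguous.
    div, mod = divmod(n_rows, num_shards)
    bounds, start = [], 0
    for i in range(num_shards):
        end = start + div + (1 if i < mod else 0)
        bounds.append((start, end))
        start = end
    return bounds

print(shard_bounds(10, 3))  # [(0, 4), (4, 7), (7, 10)]
```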
1,455,466,464
5,267
Fix `max_shard_size` docs
null
closed
https://github.com/huggingface/datasets/pull/5267
2022-11-18T16:55:22
2022-11-18T17:28:58
2022-11-18T17:25:27
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,455,281,310
5,266
Specify arguments as keywords in librosa.resample to avoid future errors
Fixes a warning and future deprecation from `librosa.resample`: ``` FutureWarning: Pass orig_sr=16000, target_sr=48000 as keyword args. From version 0.10 passing these as positional arguments will result in an error array = librosa.resample(array, sampling_rate, self.sampling_rate, res_type="kaiser_best") ```
closed
https://github.com/huggingface/datasets/pull/5266
2022-11-18T14:58:47
2022-11-21T15:45:02
2022-11-21T15:41:57
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
1,455,274,864
5,265
Get an IterableDataset from a map-style Dataset
This is useful to leverage iterable-dataset-specific features like: - fast approximate shuffling - lazy map, filter, etc. Iterating over the resulting iterable dataset should be at least as fast as iterating over the map-style dataset. Here are some ideas regarding the API: ```python # 1. # - consistency wi...
closed
https://github.com/huggingface/datasets/issues/5265
2022-11-18T14:54:40
2023-02-01T16:36:03
2023-02-01T16:36:03
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "streaming", "color": "fef2c0" } ]
false
[]
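The core idea of this issue, wrapping an index-addressable (map-style) dataset so it is consumed lazily, can be sketched in a few lines. `IterableView` is an illustration name, not the API that was eventually adopted:

```python
class IterableView:
    """Lazily stream rows from an index-addressable dataset in batches."""

    def __init__(self, data, batch_size=2):
        self.data, self.batch_size = data, batch_size

    def __iter__(self):
        # Rows are fetched batch by batch, so a map/filter layered on top
        # can also run lazily instead of materializing everything.
        for i in range(0, len(self.data), self.batch_size):
            yield from self.data[i : i + self.batch_size]

rows = [{"id": n} for n in range(5)]
print(list(IterableView(rows)) == rows)  # True: same rows, streamed lazily
```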
1,455,252,906
5,264
`datasets` can't read a Parquet file in Python 3.9.13
### Describe the bug I have an error when trying to load this [dataset](https://huggingface.co/datasets/bigcode/the-stack-dedup-pjj) (it's private but I can add you to the bigcode org). `datasets` can't read one of the parquet files in the Java subset ```python from datasets import load_dataset ds = load_data...
closed
https://github.com/huggingface/datasets/issues/5264
2022-11-18T14:44:01
2023-05-07T09:52:59
2022-11-22T11:18:08
{ "login": "loubnabnl", "id": 44069155, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,455,252,626
5,263
Save a dataset in a determined number of shards
This is useful to distribute the shards to training nodes. This can be implemented in `save_to_disk` and can also leverage multiprocessing to speed up the process
closed
https://github.com/huggingface/datasets/issues/5263
2022-11-18T14:43:54
2022-12-14T18:22:59
2022-12-14T18:22:59
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,455,171,100
5,262
AttributeError: 'Value' object has no attribute 'names'
Hello, I'm trying to build a model for custom token classification. I followed the token classification course on Hugging Face while adapting the code to my work, and this message occurs: 'Value' object has no attribute 'names'. Here's my code: `raw_datasets` generates DatasetDict({ train: Datas...
closed
https://github.com/huggingface/datasets/issues/5262
2022-11-18T13:58:42
2022-11-22T10:09:24
2022-11-22T10:09:23
{ "login": "emnaboughariou", "id": 102913847, "type": "User" }
[]
false
[]
1,454,647,861
5,261
Add PubTables-1M
### Name PubTables-1M ### Paper https://openaccess.thecvf.com/content/CVPR2022/html/Smock_PubTables-1M_Towards_Comprehensive_Table_Extraction_From_Unstructured_Documents_CVPR_2022_paper.html ### Data https://github.com/microsoft/table-transformer ### Motivation Table Transformer is now available in 🤗 Transforme...
open
https://github.com/huggingface/datasets/issues/5261
2022-11-18T07:56:36
2022-11-18T08:02:18
null
{ "login": "NielsRogge", "id": 48327001, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,453,921,697
5,260
consumer-finance-complaints dataset not loading
### Describe the bug Error during dataset loading ### Steps to reproduce the bug ``` >>> import datasets >>> cf_raw = datasets.load_dataset("consumer-finance-complaints") Downloading builder script: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████...
open
https://github.com/huggingface/datasets/issues/5260
2022-11-17T20:10:26
2022-11-18T10:16:53
null
{ "login": "adiprasad", "id": 8098496, "type": "User" }
[]
false
[]
1,453,555,923
5,259
datasets 2.7 introduces sharding error
### Describe the bug dataset fails to load with runtime error `RuntimeError: Sharding is ambiguous for this dataset: we found several data sources lists of different lengths, and we don't know over which list we should parallelize: - key audio_files has length 46 - key data has length 0 To fix this, check the ...
closed
https://github.com/huggingface/datasets/issues/5259
2022-11-17T15:36:52
2022-12-24T01:44:02
2022-11-18T12:52:05
{ "login": "DCNemesis", "id": 3616964, "type": "User" }
[]
false
[]
1,453,516,636
5,258
Restore order of split names in dataset_info for canonical datasets
After a bulk edit of canonical datasets to create the YAML `dataset_info` metadata, the split names were accidentally sorted alphabetically. See for example: - https://huggingface.co/datasets/bc2gm_corpus/commit/2384629484401ecf4bb77cd808816719c424e57c Note that this order is the one appearing in the preview of the...
closed
https://github.com/huggingface/datasets/issues/5258
2022-11-17T15:13:15
2023-02-16T09:49:05
2022-11-19T06:51:37
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
false
[]
1,452,656,891
5,257
remove an unused statement
remove the unused statement: `input_pairs = list(zip())`
closed
https://github.com/huggingface/datasets/pull/5257
2022-11-17T04:00:50
2022-11-18T11:04:08
2022-11-18T11:04:08
{ "login": "WrRan", "id": 7569098, "type": "User" }
[]
true
[]
1,452,652,586
5,256
fix wrong print
print `encoded_dataset.column_names` not `dataset.column_names`
closed
https://github.com/huggingface/datasets/pull/5256
2022-11-17T03:54:26
2022-11-18T11:05:32
2022-11-18T11:05:32
{ "login": "WrRan", "id": 7569098, "type": "User" }
[]
true
[]
1,452,631,517
5,255
Add a Depth Estimation dataset - DIODE / NYUDepth / KITTI
### Name NYUDepth ### Paper http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf ### Data https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html ### Motivation Depth estimation is an important problem in computer vision. We have a couple of Depth Estimation models on Hub as well: * [GLPN...
closed
https://github.com/huggingface/datasets/issues/5255
2022-11-17T03:22:22
2022-12-17T12:20:38
2022-12-17T12:20:37
{ "login": "sayakpaul", "id": 22957388, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,452,600,088
5,254
typo
null
closed
https://github.com/huggingface/datasets/pull/5254
2022-11-17T02:39:57
2022-11-18T10:53:45
2022-11-18T10:53:45
{ "login": "WrRan", "id": 7569098, "type": "User" }
[]
true
[]
1,452,588,206
5,253
typo
null
closed
https://github.com/huggingface/datasets/pull/5253
2022-11-17T02:22:58
2022-11-18T10:53:11
2022-11-18T10:53:10
{ "login": "WrRan", "id": 7569098, "type": "User" }
[]
true
[]
1,451,765,838
5,252
Support for decoding Image/Audio types in map when format type is not default one
Add support for decoding the `Image`/`Audio` types in `map` for the formats (Numpy, TF, Jax, PyTorch) other than the default one (Python). Additional improvements: * make `Dataset`'s "iter" API cleaner by removing `_iter` and replacing `_iter_batches` with `iter(batch_size)` (also implemented for `IterableDataset`...
closed
https://github.com/huggingface/datasets/pull/5252
2022-11-16T15:02:13
2022-12-13T17:01:54
2022-12-13T16:59:04
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,451,761,321
5,251
Docs are not generated after latest release
After the latest `datasets` release, version 2.7.0, the docs were not generated. As we have changed the release procedure (so that we no longer push directly to the main branch), maybe we should also change the corresponding GitHub action: https://github.com/huggingface/datasets/blob/edf1902f954c5568daadebcd8754bdad4...
closed
https://github.com/huggingface/datasets/issues/5251
2022-11-16T14:59:31
2022-11-22T16:27:50
2022-11-22T16:27:50
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "maintenance", "color": "d4c5f9" } ]
false
[]
1,451,720,030
5,250
Change release procedure to use only pull requests
This PR changes the release procedure so that: - it only makes changes to the main branch via pull requests - it is no longer necessary to directly commit/push to the main branch Close #5251.
closed
https://github.com/huggingface/datasets/pull/5250
2022-11-16T14:35:32
2022-11-22T16:30:58
2022-11-22T16:27:48
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,451,692,247
5,249
Protect the main branch from inadvertent direct pushes
We have decided to implement a protection mechanism in this repository, so that nobody (not even administrators) can inadvertently push directly to the main branch. See context here: - d7c942228b8dcf4de64b00a3053dce59b335f618 To do: - [x] Protect main branch - Settings > Branches > Branch protec...
closed
https://github.com/huggingface/datasets/issues/5249
2022-11-16T14:19:03
2023-12-21T10:28:27
2023-12-21T10:28:26
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "maintenance", "color": "d4c5f9" } ]
false
[]
1,451,338,676
5,248
Complete doc migration
Reverts huggingface/datasets#5214 Everything is handled on the doc-builder side now 😊
closed
https://github.com/huggingface/datasets/pull/5248
2022-11-16T10:41:04
2022-11-16T15:06:50
2022-11-16T10:41:10
{ "login": "mishig25", "id": 11827707, "type": "User" }
[]
true
[]
1,451,297,749
5,247
Set dev version
null
closed
https://github.com/huggingface/datasets/pull/5247
2022-11-16T10:17:31
2022-11-16T10:22:20
2022-11-16T10:17:50
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,451,226,055
5,246
Release: 2.7.0
null
closed
https://github.com/huggingface/datasets/pull/5246
2022-11-16T09:32:44
2022-11-16T09:39:42
2022-11-16T09:37:03
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,450,376,433
5,245
Unable to rename columns in streaming dataset
### Describe the bug Trying to rename a column in a streaming dataset destroys the features object. ### Steps to reproduce the bug The following code illustrates the error: ``` from datasets import load_dataset dataset = load_dataset('mc4', 'en', streaming=True, split='train') dataset.info.features # {'text':...
closed
https://github.com/huggingface/datasets/issues/5245
2022-11-15T21:04:41
2022-11-28T12:53:24
2022-11-28T12:53:24
{ "login": "peregilk", "id": 9079808, "type": "User" }
[]
false
[]
1,450,019,225
5,244
Allow dataset streaming from a private source when loading a dataset with a dataset loading script
### Feature request Add arguments to the function _get_authentication_headers_for_url_ like custom_endpoint and custom_token in order to add flexibility when downloading files from a private source. It should also be possible to provide these arguments from the dataset loading script, maybe giving them to the dl_...
open
https://github.com/huggingface/datasets/issues/5244
2022-11-15T16:02:10
2022-11-23T14:02:30
null
{ "login": "bruno-hays", "id": 48770768, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,449,523,962
5,243
Download only split data
### Feature request Is it possible to download only the data that I am requesting and not the entire dataset? I run out of disk space as it seems to download the entire dataset, instead of only the part needed. common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", ...
open
https://github.com/huggingface/datasets/issues/5243
2022-11-15T10:15:54
2025-02-25T14:47:03
null
{ "login": "capsabogdan", "id": 48530104, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,449,069,382
5,242
Failed Data Processing upon upload with zip file full of images
I went to AutoTrain and, under image classification, arrived where it was time to prepare my dataset. Screenshot below: ![image](https://user-images.githubusercontent.com/82735473/201814099-3cc5ff8a-88dc-4f5f-8140-f19560641d83.png) I chose the method 2 option. I have a csv file with two columns. ~23,000 files. I...
open
https://github.com/huggingface/datasets/issues/5242
2022-11-15T02:47:52
2022-11-15T17:59:23
null
{ "login": "scrambled2", "id": 82735473, "type": "User" }
[]
false
[]
1,448,510,407
5,241
Support hfh rc version
Otherwise the code doesn't work for hfh 0.11.0rc0, following #5237
closed
https://github.com/huggingface/datasets/pull/5241
2022-11-14T18:05:47
2022-11-15T16:11:30
2022-11-15T16:09:31
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,448,478,617
5,240
Cleaner error tracebacks for dataset script errors
Make the traceback of the errors raised in `_generate_examples` cleaner for easier debugging. Additionally, initialize the `writer` in the for-loop to avoid the `ValueError` from `ArrowWriter.finalize` raised in the `finally` block when no examples are yielded before the `_generate_examples` error. <details> <s...
closed
https://github.com/huggingface/datasets/pull/5240
2022-11-14T17:42:02
2022-11-15T18:26:48
2022-11-15T18:24:38
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,448,211,373
5,239
Add num_proc to from_csv/generator/json/parquet/text
Allow multiprocessing in the from_* methods
closed
https://github.com/huggingface/datasets/pull/5239
2022-11-14T14:53:00
2022-12-06T15:39:10
2022-12-06T15:39:09
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,448,211,251
5,238
Make `Version` hashable
Add `__hash__` to the `Version` class to make it hashable (and remove the unneeded methods), as `Version("0.0.0")` is the default value of `BuilderConfig.version` and the default fields of a dataclass need to be hashable in Python 3.11. Fix https://github.com/huggingface/datasets/issues/5230
closed
https://github.com/huggingface/datasets/pull/5238
2022-11-14T14:52:55
2022-11-14T15:30:02
2022-11-14T15:27:35
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,448,202,491
5,237
Encode path only for old versions of hfh
The next version of `huggingface-hub` (0.11) does encode the `path`, and we don't want to encode it twice
closed
https://github.com/huggingface/datasets/pull/5237
2022-11-14T14:46:57
2022-11-14T17:38:18
2022-11-14T17:35:59
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,448,190,801
5,236
Handle ArrowNotImplementedError caused by try_type being Image or Audio in cast
Handle the `ArrowNotImplementedError` thrown when `try_type` is `Image` or `Audio` and the input array cannot be converted to their storage formats. Reproducer: ```python from datasets import Dataset from PIL import Image import requests ds = Dataset.from_dict({"image": [Image.open(requests.get("https://uploa...
closed
https://github.com/huggingface/datasets/pull/5236
2022-11-14T14:38:59
2022-11-14T16:04:29
2022-11-14T16:01:48
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,448,052,660
5,235
Pin `typer` version in tests to <0.5 to fix Windows CI
Otherwise `click` fails on Windows: ``` Traceback (most recent call last): File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\runpy.py", line 85, in _run_code exec(code, run_glob...
closed
https://github.com/huggingface/datasets/pull/5235
2022-11-14T13:17:02
2022-11-14T15:43:01
2022-11-14T13:41:12
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
1,447,999,062
5,234
fix: dataset path should be absolute
cache_file_name depends on the dataset's path. A simple way this could cause a problem: ``` import os import datasets def add_prefix(example): example["text"] = "Review: " + example["text"] return example ds = datasets.load_from_disk("a/relative/path") os.chdir("/tmp") ds_1 = ds.map(add_...
closed
https://github.com/huggingface/datasets/pull/5234
2022-11-14T12:47:40
2022-12-07T23:49:22
2022-12-07T23:46:34
{ "login": "vigsterkr", "id": 30353, "type": "User" }
[]
true
[]
1,447,906,868
5,233
Fix shards in IterableDataset.from_generator
Allow defining a sharded iterable dataset
closed
https://github.com/huggingface/datasets/pull/5233
2022-11-14T11:42:09
2022-11-14T14:16:03
2022-11-14T14:13:22
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,446,294,165
5,232
Incompatible dill versions in datasets 2.6.1
### Describe the bug datasets version 2.6.1 has a dependency on dill<0.3.6. This causes a conflict with dill>=0.3.6 used by the multiprocess dependency in datasets 2.6.1. This issue is already fixed in https://github.com/huggingface/datasets/pull/5166/files, but has not yet been released. Please release a new version of the...
closed
https://github.com/huggingface/datasets/issues/5232
2022-11-12T06:46:23
2022-11-14T08:24:43
2022-11-14T08:07:59
{ "login": "vinaykakade", "id": 10574123, "type": "User" }
[]
false
[]
1,445,883,267
5,231
Using `set_format(type='torch', columns=columns)` makes Array2D/3D columns stop formatting correctly
I have a Dataset with two Features defined as follows: ``` 'image': Array3D(dtype="int64", shape=(3, 224, 224)), 'bbox': Array2D(dtype="int64", shape=(512, 4)), ``` On said dataset, if I `dataset.set_format(type='torch')` and then use the dataset in a dataloader, these columns are correctly cast to Tensors of ...
closed
https://github.com/huggingface/datasets/issues/5231
2022-11-11T18:54:36
2022-11-11T20:42:29
2022-11-11T18:59:50
{ "login": "plamb-viso", "id": 99206017, "type": "User" }
[]
false
[]
1,445,507,580
5,230
dataclasses error when importing the library in python 3.11
### Describe the bug When I import datasets using python 3.11 the dataclasses standard library raises the following error: `ValueError: mutable default <class 'datasets.utils.version.Version'> for field version is not allowed: use default_factory` When I tried to import the library using the following jupyter note...
closed
https://github.com/huggingface/datasets/issues/5230
2022-11-11T13:53:49
2023-05-25T04:37:05
2022-11-14T15:27:37
{ "login": "yonikremer", "id": 76044840, "type": "User" }
[]
false
[]
1,445,121,028
5,229
Type error when calling `map` over dataset containing 0-d tensors
### Describe the bug 0-dimensional tensors in a dataset lead to `TypeError: iteration over a 0-d array` when calling `map`. It is easy to generate such tensors by using `.with_format("...")` on the whole dataset. ### Steps to reproduce the bug ``` ds = datasets.Dataset.from_list([{"a": 1}, {"a": 1}]).with_fo...
closed
https://github.com/huggingface/datasets/issues/5229
2022-11-11T08:27:28
2023-01-13T16:00:53
2023-01-13T16:00:53
{ "login": "phipsgabler", "id": 7878215, "type": "User" }
[]
false
[]
1,444,763,105
5,228
Loading a dataset from the hub fails if you happen to have a folder of the same name
### Describe the bug I'm not 100% sure this should be considered a bug, but it was certainly annoying to figure out the cause of. And perhaps I am just missing a specific argument needed to avoid this conflict. Basically I had a situation where multiple workers were downloading different parts of the glue dataset and ...
open
https://github.com/huggingface/datasets/issues/5228
2022-11-11T00:51:54
2023-05-03T23:23:04
null
{ "login": "dakinggg", "id": 43149077, "type": "User" }
[]
false
[]
1,444,620,094
5,227
datasets.data_files.EmptyDatasetError: The directory at wikisql doesn't contain any data files
### Describe the bug From these lines: from datasets import list_datasets, load_dataset dataset = load_dataset("wikisql","binary") I get the error message: datasets.data_files.EmptyDatasetError: The directory at wikisql doesn't contain any data files. And yet 'wikisql' is reported to exist via the list_datas...
closed
https://github.com/huggingface/datasets/issues/5227
2022-11-10T21:57:06
2023-10-07T05:04:41
2022-11-10T22:05:43
{ "login": "ScottM-wizard", "id": 102275116, "type": "User" }
[]
false
[]
1,444,385,148
5,226
Q: Memory release when removing the column?
### Describe the bug How do I release memory when I use methods like `.remove_columns()` or `clear()` in notebooks? ```python from datasets import load_dataset common_voice = load_dataset("mozilla-foundation/common_voice_11_0", "ja", use_auth_token=True) # check memory -> RAM Used (GB): 0.704 / Total (GB) 33.670...
closed
https://github.com/huggingface/datasets/issues/5226
2022-11-10T18:35:27
2022-11-29T15:10:10
2022-11-29T15:10:10
{ "login": "bayartsogt-ya", "id": 43239645, "type": "User" }
[]
false
[]
1,444,305,183
5,225
Add video feature
### Feature request Add a `Video` feature to the library so folks can include videos in their datasets. ### Motivation Being able to load Video data would be quite helpful. However, there are some challenges when it comes to videos: 1. Videos, unlike images, can end up being extremely large files 2. Often times ...
open
https://github.com/huggingface/datasets/issues/5225
2022-11-10T17:36:11
2022-12-02T15:13:15
null
{ "login": "nateraw", "id": 32437151, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "help wanted", "color": "008672" }, { "name": "vision", "color": "bfdadc" } ]
false
[]
1,443,640,867
5,224
Seems to freeze when loading audio dataset with wav files from local folder
### Describe the bug I'm following the instructions in [https://huggingface.co/docs/datasets/audio_load#audiofolder-with-metadata](url) to be able to load a dataset from a local folder. I have everything in one folder: a train folder with the audio files and the csv. When I try to load the dataset and run from term...
closed
https://github.com/huggingface/datasets/issues/5224
2022-11-10T10:29:31
2023-04-25T09:54:05
2022-11-22T11:24:19
{ "login": "uriii3", "id": 45894267, "type": "User" }
[]
false
[]
1,442,610,658
5,223
Add SQL guide
This PR adapts @nateraw's awesome SQL notebook as a guide for the docs!
closed
https://github.com/huggingface/datasets/pull/5223
2022-11-09T19:10:27
2022-11-15T17:40:25
2022-11-15T17:40:21
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[]
true
[]
1,442,412,507
5,222
HuggingFace website is incorrectly reporting that my datasets are pickled
### Describe the bug HuggingFace is incorrectly reporting that my datasets are pickled. They are not pickled; they are simple ZIP files containing PNG images. Hopefully this is the right location to report this bug. ### Steps to reproduce the bug Inspect my dataset repository here: https://huggingface.co/datasets...
closed
https://github.com/huggingface/datasets/issues/5222
2022-11-09T16:41:16
2022-11-09T18:10:46
2022-11-09T18:06:57
{ "login": "ProGamerGov", "id": 10626398, "type": "User" }
[]
false
[]
1,442,309,094
5,221
Cannot push
### Describe the bug I am facing an issue when I try to push a tar.gz file of around 11 GB to the Hub. ``` (venv) ╭─laptop@laptop ~/PersonalProjects/data/ulaanbal_v0 ‹main●› ╰─$ du -sh * 4.0K README.md 13G data 516K test.jsonl 18M train.jsonl 4.0K ulaanbal_v0.py 11G ulaanbal_v0.tar.gz 452K validation.jsonl...
closed
https://github.com/huggingface/datasets/issues/5221
2022-11-09T15:32:05
2022-11-10T18:11:21
2022-11-10T18:11:11
{ "login": "bayartsogt-ya", "id": 43239645, "type": "User" }
[]
false
[]
1,441,664,377
5,220
Implicit type conversion of lists in to_pandas
### Describe the bug ``` ds = Dataset.from_list([{'a':[1,2,3]}]) ds.to_pandas().a.values[0] ``` Results in `array([1, 2, 3])` -- a rather unexpected conversion of types, which made downstream tools that expect lists unhappy. ### Steps to reproduce the bug See snippet ### Expected behavior Keep the original typ...
closed
https://github.com/huggingface/datasets/issues/5220
2022-11-09T08:40:18
2022-11-10T16:12:26
2022-11-10T16:12:26
{ "login": "sanderland", "id": 48946947, "type": "User" }
[]
false
[]
1,441,255,910
5,219
Delta Tables usage using Datasets Library
### Feature request Add compatibility of the Datasets library with the Delta format, elevating the utilities of the Datasets library from a Machine Learning scope to a Data Engineering scope as well. ### Motivation We know the Datasets library can absorb csv, json, parquet, etc. file formats, but it would be great if the Datasets library...
open
https://github.com/huggingface/datasets/issues/5219
2022-11-09T02:43:56
2023-03-02T19:29:12
null
{ "login": "reichenbch", "id": 23002137, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,441,254,194
5,218
Delta Tables usage using Datasets Library
### Feature request Add compatibility of the Datasets library with the Delta format, elevating the utilities of the Datasets library from a Machine Learning scope to a Data Engineering scope as well. ### Motivation We know the Datasets library can absorb csv, json, parquet, etc. file formats, but it would be great if the Datasets library...
closed
https://github.com/huggingface/datasets/issues/5218
2022-11-09T02:42:18
2022-11-09T02:42:36
2022-11-09T02:42:36
{ "login": "rcv-koo", "id": 103188035, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,441,252,740
5,217
Reword E2E training and inference tips in the vision guides
Reference: https://github.com/huggingface/datasets/pull/5188#discussion_r1012148730
closed
https://github.com/huggingface/datasets/pull/5217
2022-11-09T02:40:01
2022-11-10T01:38:09
2022-11-10T01:36:09
{ "login": "sayakpaul", "id": 22957388, "type": "User" }
[]
true
[]
1,441,041,947
5,216
save_elasticsearch_index
Hi, I am new to Datasets and Elasticsearch. I was wondering whether there is any equivalent of save_faiss_index for saving an Elasticsearch index locally for later use, to remove the need to re-index a dataset?
open
https://github.com/huggingface/datasets/issues/5216
2022-11-08T23:06:52
2022-11-09T13:16:45
null
{ "login": "amobash2", "id": 12739718, "type": "User" }
[]
false
[]
1,440,334,978
5,214
Update github pr docs actions
null
closed
https://github.com/huggingface/datasets/pull/5214
2022-11-08T14:43:37
2022-11-08T15:39:58
2022-11-08T15:39:57
{ "login": "mishig25", "id": 11827707, "type": "User" }
[]
true
[]
1,440,037,534
5,213
Add support for different configs with `push_to_hub`
Will solve #5151. @lhoestq @albertvillanova @mariosasko This is still a super draft, so please ignore code issues, but I want to discuss some conceptually important things. I suggest a way to do `.push_to_hub("repo_id", "config_name")` by pushing parquet files to directories named as `config_name` (inside `data...
closed
https://github.com/huggingface/datasets/pull/5213
2022-11-08T11:45:47
2022-12-02T16:48:23
2022-12-02T16:44:07
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
true
[]
1,439,642,483
5,212
Fix CI require_beam maximum compatible dill version
A previous commit to the main branch introduced an additional requirement on the maximum `dill` version compatible with `apache-beam` in our CI `require_beam`: - d7c942228b8dcf4de64b00a3053dce59b335f618 - ec222b220b79f10c8d7b015769f0999b15959feb This PR fixes the maximum `dill` version compatible with `apache-beam`, which...
closed
https://github.com/huggingface/datasets/pull/5212
2022-11-08T07:30:01
2022-11-15T06:32:27
2022-11-15T06:32:26
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]