Schema (one GitHub issue/PR record per row; ranges as reported by the dataset viewer):

id: int64 (599M to 3.26B)
number: int64 (1 to 7.7k)
title: string (length 1 to 290)
body: string (length 0 to 228k)
state: string (2 values: open, closed)
html_url: string (length 46 to 51)
created_at: timestamp[s] (2020-04-14 10:18:02 to 2025-07-23 08:04:53)
updated_at: timestamp[s] (2020-04-27 16:04:17 to 2025-07-23 18:53:44)
closed_at: timestamp[s] (2020-04-14 12:01:40 to 2025-07-23 16:44:42)
user: dict
labels: list (length 0 to 4)
is_pull_request: bool (true/false)
comments: list (length 0 to 0)
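The schema above describes one GitHub issue or pull request per row. As a minimal sketch of working with such records — using plain Python dicts and two sample rows copied from further down in this dump (the same filtering could equally be done with the `datasets` library's `Dataset.filter`):

```python
# Two records following the schema above (trimmed to a few fields).
records = [
    {"number": 5828, "title": "Stream data concatenation issue",
     "state": "closed", "is_pull_request": False},
    {"number": 5826, "title": "Support working_dir in from_spark",
     "state": "closed", "is_pull_request": True},
]

# The is_pull_request flag separates genuine issues from pull requests,
# since GitHub's API returns both under the same "issues" schema.
issues = [r for r in records if not r["is_pull_request"]]
pulls = [r for r in records if r["is_pull_request"]]

print([r["number"] for r in issues])  # → [5828]
print([r["number"] for r in pulls])   # → [5826]
```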
id: 1,699,235,739 | number: 5,828
title: Stream data concatenation issue
body: ### Describe the bug I am not able to concatenate the augmentation of the stream data. I am using the latest version of dataset. ValueError: The features can't be aligned because the key audio of features {'audio_id': Value(dtype='string', id=None), 'audio': {'array': Sequence(feature=Value(dtype='float32', id=...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5828
created_at: 2023-05-07T21:02:54 | updated_at: 2023-06-29T20:07:56 | closed_at: 2023-05-10T05:05:47
user: { "login": "krishnapriya-18", "id": 48817796, "type": "User" }
labels: [] | comments: []

id: 1,698,891,246 | number: 5,827
title: load json dataset interrupts when a dtype cast problem occurs
body: ### Describe the bug i have a json like this: [ {"id": 1, "name": 1}, {"id": 2, "name": "Nan"}, {"id": 3, "name": 3}, .... ] ,which have several problematic rows of data like row 2, then i load it with datasets.load_dataset('json', data_files=['xx.json'], split='train'), it will report like this: ...
state: open | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5827
created_at: 2023-05-07T04:52:09 | updated_at: 2023-05-10T12:32:28 | closed_at: null
user: { "login": "1014661165", "id": 46060451, "type": "User" }
labels: [] | comments: []

id: 1,698,155,751 | number: 5,826
title: Support working_dir in from_spark
body: Accept `working_dir` as an argument to `Dataset.from_spark`. Setting a non-NFS working directory for Spark workers to materialize to will improve write performance.
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5826
created_at: 2023-05-05T20:22:40 | updated_at: 2023-05-25T17:45:54 | closed_at: 2023-05-25T08:46:15
user: { "login": "maddiedawson", "id": 106995444, "type": "User" }
labels: [] | comments: []

id: 1,697,327,483 | number: 5,825
title: FileNotFound even though exists
body: ### Describe the bug I'm trying to download https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl which works fine in my webbrowser, but somehow not with datasets. Am I doing sth wrong? ``` Downloading builder script: 100% 2.82k/2.8...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5825
created_at: 2023-05-05T09:49:55 | updated_at: 2023-08-16T10:02:01 | closed_at: 2023-08-16T10:02:01
user: { "login": "Muennighoff", "id": 62820084, "type": "User" }
labels: [] | comments: []
id: 1,697,152,148 | number: 5,824
title: Fix incomplete docstring for `BuilderConfig`
body: Fixes #5820 Also fixed a couple of typos I spotted
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5824
created_at: 2023-05-05T07:34:28 | updated_at: 2023-05-05T12:39:14 | closed_at: 2023-05-05T12:31:54
user: { "login": "Laurent2916", "id": 21087104, "type": "User" }
labels: [] | comments: []

id: 1,697,024,789 | number: 5,823
title: [2.12.0] DatasetDict.save_to_disk not saving to S3
body: ### Describe the bug When trying to save a `DatasetDict` to a private S3 bucket using `save_to_disk`, the artifacts are instead saved locally, and not in the S3 bucket. I have tried using the deprecated `fs` as well as the `storage_options` arguments and I get the same results. ### Steps to reproduce the bug 1. C...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5823
created_at: 2023-05-05T05:22:59 | updated_at: 2024-05-30T16:11:31 | closed_at: 2023-05-05T15:01:17
user: { "login": "thejamesmarq", "id": 5233185, "type": "User" }
labels: [] | comments: []

id: 1,696,627,308 | number: 5,822
title: Audio Dataset with_format torch problem
body: ### Describe the bug Common Voice v10 Delta (German) Dataset from here https://commonvoice.mozilla.org/de/datasets ``` audio_dataset = \ (Dataset .from_dict({"audio": ('/tmp/cv-corpus-10.0-delta-2022-07-04/de/clips/' + df.path).to_list()}) .cast_column("audio", Audio(sampling_rate=16_000)) .with...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5822
created_at: 2023-05-04T20:07:51 | updated_at: 2023-05-11T20:45:53 | closed_at: 2023-05-11T20:45:53
user: { "login": "paulbauriegel", "id": 20282916, "type": "User" }
labels: [] | comments: []

id: 1,696,400,343 | number: 5,821
title: IterableDataset Arrow formatting
body: Adding an optional `.iter_arrow` to examples iterable. This allows to use Arrow formatting in map/filter. This will also be useful for torch formatting, since we can reuse the TorchFormatter that converts Arrow data to torch tensors Related to https://github.com/huggingface/datasets/issues/5793 and https://github...
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5821
created_at: 2023-05-04T17:23:43 | updated_at: 2023-05-31T09:43:26 | closed_at: 2023-05-31T09:36:18
user: { "login": "lhoestq", "id": 42851186, "type": "User" }
labels: [] | comments: []
id: 1,695,892,811 | number: 5,820
title: Incomplete docstring for `BuilderConfig`
body: Hi guys ! I stumbled upon this docstring while working on a project. Some of the attributes have missing descriptions. https://github.com/huggingface/datasets/blob/bc5fef5b6d91f009e4101684adcb374df2c170f6/src/datasets/builder.py#L104-L117
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5820
created_at: 2023-05-04T12:14:34 | updated_at: 2023-05-05T12:31:56 | closed_at: 2023-05-05T12:31:56
user: { "login": "Laurent2916", "id": 21087104, "type": "User" }
labels: [ { "name": "good first issue", "color": "7057ff" } ] | comments: []

id: 1,695,536,738 | number: 5,819
title: Cannot pickle error in Dataset.from_generator()
body: ### Describe the bug I'm trying to use Dataset.from_generator() to generate a large dataset. ### Steps to reproduce the bug Code to reproduce: ``` from transformers import T5Tokenizer, T5ForConditionalGeneration, GenerationConfig import torch from tqdm import tqdm from datasets import load_dataset tokenizer...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5819
created_at: 2023-05-04T08:39:09 | updated_at: 2023-05-05T19:20:59 | closed_at: 2023-05-05T19:20:58
user: { "login": "xinghaow99", "id": 50691954, "type": "User" }
labels: [] | comments: []

id: 1,695,052,555 | number: 5,818
title: Ability to update a dataset
body: ### Feature request The ability to load a dataset, add or change something, and save it back to disk. Maybe it's possible, but I can't work out how to do it, e.g. this fails: ```py import datasets dataset = datasets.load_from_disk("data/test1") dataset = dataset.add_item({"text": "A new item"}) dataset.sav...
state: open | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5818
created_at: 2023-05-04T01:08:13 | updated_at: 2023-05-04T20:43:39 | closed_at: null
user: { "login": "davidgilbertson", "id": 4443482, "type": "User" }
labels: [ { "name": "enhancement", "color": "a2eeef" } ] | comments: []

id: 1,694,891,866 | number: 5,817
title: Setting `num_proc` errors when `.map` returns additional items.
body: ### Describe the bug I'm using a map function that returns more rows than are passed in. If I try to use `num_proc` I get: ``` File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 563, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kw...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5817
created_at: 2023-05-03T21:46:53 | updated_at: 2023-05-04T21:14:21 | closed_at: 2023-05-04T20:22:25
user: { "login": "davidgilbertson", "id": 4443482, "type": "User" }
labels: [] | comments: []
id: 1,694,590,856 | number: 5,816
title: Preserve `stopping_strategy` of shuffled interleaved dataset (random cycling case)
body: Preserve the `stopping_strategy` in the `RandomlyCyclingMultiSourcesExamplesIterable.shard_data_sources` to fix shuffling a dataset interleaved (from multiple sources) with probabilities. Fix #5812
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5816
created_at: 2023-05-03T18:34:18 | updated_at: 2023-05-04T14:31:55 | closed_at: 2023-05-04T14:24:49
user: { "login": "mariosasko", "id": 47462742, "type": "User" }
labels: [] | comments: []

id: 1,693,216,778 | number: 5,814
title: Repro windows crash
body: null
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5814
created_at: 2023-05-02T23:30:18 | updated_at: 2024-01-08T18:30:45 | closed_at: 2024-01-08T18:30:45
user: { "login": "maddiedawson", "id": 106995444, "type": "User" }
labels: [] | comments: []

id: 1,693,701,743 | number: 5,815
title: Easy way to create a Kaggle dataset from a Huggingface dataset?
body: I'm not sure whether this is more appropriately addressed with HuggingFace or Kaggle. I would like to somehow directly create a Kaggle dataset from a HuggingFace Dataset. While Kaggle does provide the option to create a dataset from a URI, that URI must point to a single file. For example: ![image](https://user...
state: open | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5815
created_at: 2023-05-02T21:43:33 | updated_at: 2023-07-26T16:13:31 | closed_at: null
user: { "login": "hrbigelow", "id": 5355286, "type": "User" }
labels: [] | comments: []

id: 1,691,908,535 | number: 5,813
title: [DO-NOT-MERGE] Debug Windows issue at #3
body: TBD
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5813
created_at: 2023-05-02T07:19:34 | updated_at: 2023-05-02T07:21:30 | closed_at: 2023-05-02T07:21:30
user: { "login": "HyukjinKwon", "id": 6477701, "type": "User" }
labels: [] | comments: []
id: 1,691,798,169 | number: 5,812
title: Cannot shuffle interleaved IterableDataset with "all_exhausted" stopping strategy
body: ### Describe the bug Shuffling interleaved `IterableDataset` with "all_exhausted" strategy yields non-exhaustive sampling. ### Steps to reproduce the bug ```py from datasets import IterableDataset, interleave_datasets def gen(bias, length): for i in range(length): yield dict(a=bias+i) seed = 42 ...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5812
created_at: 2023-05-02T05:26:17 | updated_at: 2023-05-04T14:24:51 | closed_at: 2023-05-04T14:24:51
user: { "login": "offchan42", "id": 15215732, "type": "User" }
labels: [ { "name": "bug", "color": "d73a4a" }, { "name": "streaming", "color": "fef2c0" } ] | comments: []

id: 1,689,919,046 | number: 5,811
title: load_dataset: TypeError: 'NoneType' object is not callable, on local dataset filename changes
body: ### Describe the bug I've adapted Databrick's [train_dolly.py](/databrickslabs/dolly/blob/master/train_dolly.py) to train using a local dataset, which has been working. Upon changing the filenames of the `.json` & `.py` files in my local dataset directory, `dataset = load_dataset(path_or_dataset)["train"]` throws th...
state: open | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5811
created_at: 2023-04-30T13:27:17 | updated_at: 2025-02-27T07:32:30 | closed_at: null
user: { "login": "durapensa", "id": 50685483, "type": "User" }
labels: [] | comments: []

id: 1,689,917,822 | number: 5,810
title: Add `fn_kwargs` to `map` and `filter` of `IterableDataset` and `IterableDatasetDict`
body: # Overview I've added an argument `fn_kwargs` for the map and filter methods of the `IterableDataset` and `IterableDatasetDict` classes. # Details Currently, the map and filter methods of some classes related to `IterableDataset` do not allow specifying the arguments passed to the function. This pull request adds `fn_kwargs...
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5810
created_at: 2023-04-30T13:23:01 | updated_at: 2023-05-22T08:12:39 | closed_at: 2023-05-22T08:05:31
user: { "login": "yuukicammy", "id": 3927621, "type": "User" }
labels: [] | comments: []

id: 1,689,797,293 | number: 5,809
title: wiki_dpr details for Open Domain Question Answering tasks
body: Hey guys! Thanks for creating the wiki_dpr dataset! I am currently trying to combine wiki_dpr and my own datasets, but I don't know how to compute the embedding value the same way as wiki_dpr. As an experiment, I embedded the text of id="7" of wiki_dpr, but this result was very different from wiki_dpr.
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5809
created_at: 2023-04-30T06:12:04 | updated_at: 2023-07-21T14:11:00 | closed_at: 2023-07-21T14:11:00
user: { "login": "yulgok22", "id": 64122846, "type": "User" }
labels: [] | comments: []
id: 1,688,977,237 | number: 5,807
title: Support parallelized downloading in load_dataset with Spark
body: As proposed in https://github.com/huggingface/datasets/issues/5798, this adds support to parallelized downloading in `load_dataset` with Spark, which can speed up the process by distributing the workload to worker nodes. Parallelizing dataset processing is not supported in this PR.
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5807
created_at: 2023-04-28T18:34:32 | updated_at: 2023-05-25T16:54:14 | closed_at: 2023-05-25T16:54:14
user: { "login": "es94129", "id": 12763339, "type": "User" }
labels: [] | comments: []

id: 1,688,598,095 | number: 5,806
title: Return the name of the currently loaded file in the load_dataset function.
body: ### Feature request Add an optional parameter return_file_name in the load_dataset function. When it is set to True, the function will include the name of the file corresponding to the current line as a feature in the returned output. ### Motivation When training large language models, machine problems may interrupt...
state: open | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5806
created_at: 2023-04-28T13:50:15 | updated_at: 2025-03-21T12:07:15 | closed_at: null
user: { "login": "s-JoL", "id": 16948304, "type": "User" }
labels: [ { "name": "enhancement", "color": "a2eeef" }, { "name": "good first issue", "color": "7057ff" } ] | comments: []

id: 1,688,558,577 | number: 5,805
title: Improve `Create a dataset` tutorial
body: Our [tutorial on how to create a dataset](https://huggingface.co/docs/datasets/create_dataset) is a bit misleading. 1. In **Folder-based builders** section it says that we have two folder-based builders as standard builders, but we also have similar builders (that can be created from directory with data of required f...
state: open | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5805
created_at: 2023-04-28T13:26:22 | updated_at: 2024-07-26T21:16:13 | closed_at: null
user: { "login": "polinaeterna", "id": 16348744, "type": "User" }
labels: [ { "name": "documentation", "color": "0075ca" } ] | comments: []

id: 1,688,285,666 | number: 5,804
title: Set dev version
body: null
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5804
created_at: 2023-04-28T10:10:01 | updated_at: 2023-04-28T10:18:51 | closed_at: 2023-04-28T10:10:29
user: { "login": "lhoestq", "id": 42851186, "type": "User" }
labels: [] | comments: []
id: 1,688,256,290 | number: 5,803
title: Release: 2.12.0
body: null
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5803
created_at: 2023-04-28T09:52:11 | updated_at: 2023-04-28T10:18:56 | closed_at: 2023-04-28T09:54:43
user: { "login": "lhoestq", "id": 42851186, "type": "User" }
labels: [] | comments: []

id: 1,686,509,799 | number: 5,802
title: Validate non-empty data_files
body: This PR adds validation of `data_files`, so that they are non-empty (str, list, or dict) or `None` (default). See: https://github.com/huggingface/datasets/pull/5787#discussion_r1178862327
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5802
created_at: 2023-04-27T09:51:36 | updated_at: 2023-04-27T14:59:47 | closed_at: 2023-04-27T14:51:40
user: { "login": "albertvillanova", "id": 8515462, "type": "User" }
labels: [] | comments: []

id: 1,686,348,096 | number: 5,800
title: Change downloaded file permission based on umask
body: This PR changes the permission of downloaded files to cache, so that the umask is taken into account. Related to: - #2157 Fix #5799. CC: @stas00
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5800
created_at: 2023-04-27T08:13:30 | updated_at: 2023-04-27T09:33:05 | closed_at: 2023-04-27T09:30:16
user: { "login": "albertvillanova", "id": 8515462, "type": "User" }
labels: [] | comments: []

id: 1,686,334,572 | number: 5,799
title: Files downloaded to cache do not respect umask
body: As reported by @stas00, files downloaded to the cache do not respect umask: ```bash $ ls -l /path/to/cache/datasets/downloads/ -rw------- 1 uername username 150M Apr 25 16:41 5e646c1d600f065adaeb134e536f6f2f296a6d804bd1f0e1fdcd20ee28c185c6 ``` Related to: - #2065
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5799
created_at: 2023-04-27T08:06:05 | updated_at: 2023-04-27T09:30:17 | closed_at: 2023-04-27T09:30:17
user: { "login": "albertvillanova", "id": 8515462, "type": "User" }
labels: [ { "name": "bug", "color": "d73a4a" } ] | comments: []
id: 1,685,904,526 | number: 5,798
title: Support parallelized downloading and processing in load_dataset with Spark
body: ### Feature request When calling `load_dataset` for datasets that have multiple files, support using Spark to distribute the downloading and processing job to worker nodes when `cache_dir` is a cloud file system shared among nodes. ```python load_dataset(..., use_spark=True) ``` ### Motivation Further speed up ...
state: open | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5798
created_at: 2023-04-27T00:16:11 | updated_at: 2023-05-25T14:11:41 | closed_at: null
user: { "login": "es94129", "id": 12763339, "type": "User" }
labels: [ { "name": "enhancement", "color": "a2eeef" } ] | comments: []

id: 1,685,501,199 | number: 5,797
title: load_dataset is case sensitive?
body: ### Describe the bug load_dataset() function is case sensitive? ### Steps to reproduce the bug The following two calls get totally different behavior. 1. load_dataset('mbzuai/bactrian-x','en') 2. load_dataset('MBZUAI/Bactrian-X','en') ### Expected behavior Compare 1 and 2. 1 will download all 52 subsets, sh...
state: open | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5797
created_at: 2023-04-26T18:19:04 | updated_at: 2023-04-27T11:56:58 | closed_at: null
user: { "login": "haonan-li", "id": 34729065, "type": "User" }
labels: [] | comments: []

id: 1,685,451,919 | number: 5,796
title: Spark docs
body: Added a "Use with Spark" doc page to document `Dataset.from_spark` following https://github.com/huggingface/datasets/pull/5701 cc @maddiedawson
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5796
created_at: 2023-04-26T17:39:43 | updated_at: 2023-04-27T16:41:50 | closed_at: 2023-04-27T16:34:45
user: { "login": "lhoestq", "id": 42851186, "type": "User" }
labels: [] | comments: []

id: 1,685,414,505 | number: 5,795
title: Fix spark imports
body: null
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5795
created_at: 2023-04-26T17:09:32 | updated_at: 2023-04-26T17:49:03 | closed_at: 2023-04-26T17:39:12
user: { "login": "lhoestq", "id": 42851186, "type": "User" }
labels: [] | comments: []
id: 1,685,196,061 | number: 5,794
title: CI ZeroDivisionError
body: Sometimes when running our CI on Windows, we get a ZeroDivisionError: ``` FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore - ZeroDivisionError: float division by zero ``` See for example: - https://github.com/huggingface/datasets/actions/runs/4809358266/jobs/8560513110 - https:/...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5794
created_at: 2023-04-26T14:55:23 | updated_at: 2024-05-17T09:12:11 | closed_at: 2024-05-17T09:12:11
user: { "login": "albertvillanova", "id": 8515462, "type": "User" }
labels: [ { "name": "bug", "color": "d73a4a" } ] | comments: []

id: 1,684,777,320 | number: 5,793
title: IterableDataset.with_format("torch") not working
body: ### Describe the bug After calling the with_format("torch") method on an IterableDataset instance, the data format is unchanged. ### Steps to reproduce the bug ```python from datasets import IterableDataset def gen(): for i in range(4): yield {"a": [i] * 4} dataset = IterableDataset.from_generator(g...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5793
created_at: 2023-04-26T10:50:23 | updated_at: 2023-06-13T15:57:06 | closed_at: 2023-06-13T15:57:06
user: { "login": "jiangwangyi", "id": 39762734, "type": "User" }
labels: [ { "name": "bug", "color": "d73a4a" }, { "name": "enhancement", "color": "a2eeef" }, { "name": "streaming", "color": "fef2c0" } ] | comments: []

id: 1,683,473,943 | number: 5,791
title: TIFF/TIF support
body: ### Feature request I currently have a dataset (with tiff and json files) where I have to do this: `wget path_to_data/images.zip && unzip images.zip` `wget path_to_data/annotations.zip && unzip annotations.zip` Would it make sense a contribution that supports these type of files? ### Motivation instead o...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5791
created_at: 2023-04-25T16:14:18 | updated_at: 2024-01-15T16:40:33 | closed_at: 2024-01-15T16:40:16
user: { "login": "sebasmos", "id": 31293221, "type": "User" }
labels: [ { "name": "enhancement", "color": "a2eeef" } ] | comments: []

id: 1,683,229,126 | number: 5,790
title: Allow to run CI on push to ci-branch
body: This PR allows to run the CI on push to a branch named "ci-*", without needing to open a PR. - This will allow to make CI tests without opening a PR, e.g., for future `huggingface-hub` releases, future dependency releases (like `fsspec`, `pandas`,...) Note that to build the documentation, we already allow it on pus...
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5790
created_at: 2023-04-25T13:57:26 | updated_at: 2023-04-26T13:43:08 | closed_at: 2023-04-26T13:35:47
user: { "login": "albertvillanova", "id": 8515462, "type": "User" }
labels: [] | comments: []
id: 1,682,611,179 | number: 5,789
title: Support streaming datasets that use jsonlines
body: Extend support for streaming datasets that use `jsonlines.open`. Currently, if `jsonlines` is installed, `datasets` raises a `FileNotFoundError`: ``` FileNotFoundError: [Errno 2] No such file or directory: 'https://...' ``` See: - https://huggingface.co/datasets/masakhane/afriqa/discussions/1
state: open | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5789
created_at: 2023-04-25T07:40:02 | updated_at: 2023-04-25T07:40:03 | closed_at: null
user: { "login": "albertvillanova", "id": 8515462, "type": "User" }
labels: [ { "name": "enhancement", "color": "a2eeef" } ] | comments: []

id: 1,681,136,256 | number: 5,788
title: Prepare tests for hfh 0.14
body: Related to the coming release of `huggingface_hub==0.14.0`. It will break some internal tests. The PR fixes these tests. Let's double-check the CI but I expect the fixed tests to be running fine with both `hfh<=0.13.4` and `hfh==0.14`. Worst case scenario, existing PRs will have to be rebased once this fix is merged. ...
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5788
created_at: 2023-04-24T12:13:03 | updated_at: 2023-04-25T14:32:56 | closed_at: 2023-04-25T14:25:30
user: { "login": "Wauplin", "id": 11801849, "type": "User" }
labels: [] | comments: []

id: 1,680,965,959 | number: 5,787
title: Fix inferring module for unsupported data files
body: This PR raises a FileNotFoundError instead: ``` FileNotFoundError: No (supported) data files or dataset script found in <dataset_name> ``` Fix #5785.
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5787
created_at: 2023-04-24T10:44:50 | updated_at: 2023-04-27T13:06:01 | closed_at: 2023-04-27T12:57:28
user: { "login": "albertvillanova", "id": 8515462, "type": "User" }
labels: [] | comments: []

id: 1,680,957,070 | number: 5,786
title: Multiprocessing in a `filter` or `map` function with a Pytorch model
body: ### Describe the bug I am trying to use a Pytorch model loaded on CPUs with multiple processes with a `.map` or a `.filter` method. Usually, when dealing with models that are non-pickable, creating a class such that the `map` function is the method `__call__`, and adding `reduce` helps to solve the problem. Howe...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5786
created_at: 2023-04-24T10:38:07 | updated_at: 2023-05-30T09:56:30 | closed_at: 2023-04-24T10:43:58
user: { "login": "HugoLaurencon", "id": 44556846, "type": "User" }
labels: [] | comments: []
id: 1,680,956,964 | number: 5,785
title: Unsupported data files raise TypeError: 'NoneType' object is not iterable
body: Currently, we raise a TypeError for unsupported data files: ``` TypeError: 'NoneType' object is not iterable ``` See: - https://github.com/huggingface/datasets-server/issues/1073 We should give a more informative error message.
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5785
created_at: 2023-04-24T10:38:03 | updated_at: 2023-04-27T12:57:30 | closed_at: 2023-04-27T12:57:30
user: { "login": "albertvillanova", "id": 8515462, "type": "User" }
labels: [ { "name": "bug", "color": "d73a4a" } ] | comments: []

id: 1,680,950,726 | number: 5,784
title: Raise subprocesses traceback when interrupting
body: When a subprocess hangs in `filter` or `map`, one should be able to get the subprocess' traceback when interrupting the main process. Right now it shows nothing. To do so I `.get()` the subprocesses async results even when the main process is stopped with e.g. `KeyboardInterrupt`. I added a timeout in case the subprocess...
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5784
created_at: 2023-04-24T10:34:03 | updated_at: 2023-04-26T16:04:42 | closed_at: 2023-04-26T15:54:44
user: { "login": "lhoestq", "id": 42851186, "type": "User" }
labels: [] | comments: []

id: 1,679,664,393 | number: 5,783
title: Offset overflow while doing regex on a text column
body: ### Describe the bug `ArrowInvalid: offset overflow while concatenating arrays` Same error as [here](https://github.com/huggingface/datasets/issues/615) ### Steps to reproduce the bug Steps to reproduce: (dataset is a few GB big so try in colab maybe) ``` import datasets import re ds = datasets.lo...
state: open | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5783
created_at: 2023-04-22T19:12:03 | updated_at: 2023-09-22T06:44:07 | closed_at: null
user: { "login": "nishanthcgit", "id": 5066268, "type": "User" }
labels: [] | comments: []

id: 1,679,622,367 | number: 5,782
title: Support for various audio-loading backends instead of always relying on SoundFile
body: ### Feature request Introduce an option to select from a variety of audio-loading backends rather than solely relying on the SoundFile library. For instance, if the ffmpeg library is installed, it can serve as a fallback loading option. ### Motivation - The SoundFile library, used in [features/audio.py](https://gith...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5782
created_at: 2023-04-22T17:09:25 | updated_at: 2023-05-10T20:23:04 | closed_at: 2023-05-10T20:23:04
user: { "login": "BoringDonut", "id": 129098876, "type": "User" }
labels: [ { "name": "enhancement", "color": "a2eeef" } ] | comments: []
id: 1,679,580,460 | number: 5,781
title: Error using `load_datasets`
body: ### Describe the bug I tried to load a dataset using the `datasets` library in a conda jupyter notebook and got the below error. ``` ImportError: dlopen(/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/_iterative.cpython-38-darwin.so, 0x0002): Library not ...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5781
created_at: 2023-04-22T15:10:44 | updated_at: 2023-05-02T23:41:25 | closed_at: 2023-05-02T23:41:25
user: { "login": "gjyoungjr", "id": 61463108, "type": "User" }
labels: [] | comments: []

id: 1,679,367,149 | number: 5,780
title: TypeError: 'NoneType' object does not support item assignment
body: command: ``` def load_datasets(formats, data_dir=datadir, data_files=datafile): dataset = load_dataset(formats, data_dir=datadir, data_files=datafile, split=split, streaming=True, **kwargs) return dataset raw_datasets = DatasetDict() raw_datasets["train"] = load_datasets("csv", args.datadir, "train.csv", s...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5780
created_at: 2023-04-22T06:22:43 | updated_at: 2023-04-23T08:49:18 | closed_at: 2023-04-23T08:49:18
user: { "login": "ben-8878", "id": 38179632, "type": "User" }
labels: [] | comments: []

id: 1,678,669,865 | number: 5,779
title: Call fs.makedirs in save_to_disk
body: We need to call `fs.makedirs` when saving a dataset using `save_to_disk`, because some fs implementations have actual directories (S3 and others don't) Close https://github.com/huggingface/datasets/issues/5775
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5779
created_at: 2023-04-21T15:04:28 | updated_at: 2023-04-26T12:20:01 | closed_at: 2023-04-26T12:11:15
user: { "login": "lhoestq", "id": 42851186, "type": "User" }
labels: [] | comments: []

id: 1,678,125,951 | number: 5,778
title: Schrödinger's dataset_dict
body: ### Describe the bug If you use load_dataset('json', data_files="path/test.json"), it will return DatasetDict({train:...}). And if you use load_dataset("path"), it will return DatasetDict({test:...}). Why can't the output behavior be unified? ### Steps to reproduce the bug as description above. ### Expected b...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5778
created_at: 2023-04-21T08:38:12 | updated_at: 2023-07-24T15:15:14 | closed_at: 2023-07-24T15:15:14
user: { "login": "liujuncn", "id": 902005, "type": "User" }
labels: [] | comments: []
id: 1,677,655,969 | number: 5,777
title: datasets.load_dataset("code_search_net", "python") : NotADirectoryError: [Errno 20] Not a directory
body: ### Describe the bug While checking out the [tokenizer tutorial](https://huggingface.co/course/chapter6/2?fw=pt), i noticed getting an error while initially downloading the python dataset used in the examples. The [collab with the error is here](https://colab.research.google.com/github/huggingface/notebooks/blob/ma...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5777
created_at: 2023-04-21T02:08:07 | updated_at: 2023-06-05T05:49:52 | closed_at: 2023-05-11T11:51:56
user: { "login": "ghost", "id": 10137, "type": "User" }
labels: [] | comments: []

id: 1,677,116,100 | number: 5,776
title: Use Pandas' `read_json` in the JSON builder
body: Instead of PyArrow's `read_json`, we should use `pd.read_json` in the JSON builder for consistency with the CSV and SQL builders (e.g., to address https://github.com/huggingface/datasets/issues/5725). In Pandas 2.0, to get the same performance, we can set the `engine` to "pyarrow". The issue is that Colab still doesn...
state: open | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5776
created_at: 2023-04-20T17:15:49 | updated_at: 2023-04-20T17:15:49 | closed_at: null
user: { "login": "mariosasko", "id": 47462742, "type": "User" }
labels: [ { "name": "enhancement", "color": "a2eeef" } ] | comments: []

id: 1,677,089,901 | number: 5,775
title: ArrowDataset.save_to_disk lost some logic of remote
body: ### Describe the bug https://github.com/huggingface/datasets/blob/e7ce0ac60c7efc10886471932854903a7c19f172/src/datasets/arrow_dataset.py#L1371 Here is the bug point, when I want to save from a `DatasetDict` class and the items of the instance is like `[('train', Dataset({features: ..., num_rows: ...}))]` , there ...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5775
created_at: 2023-04-20T16:58:01 | updated_at: 2023-04-26T12:11:36 | closed_at: 2023-04-26T12:11:17
user: { "login": "Zoupers", "id": 29817738, "type": "User" }
labels: [] | comments: []

id: 1,676,716,662 | number: 5,774
title: Fix style
body: Fix C419 issues
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5774
created_at: 2023-04-20T13:21:32 | updated_at: 2023-04-20T13:34:26 | closed_at: 2023-04-20T13:24:28
user: { "login": "lhoestq", "id": 42851186, "type": "User" }
labels: [] | comments: []
id: 1,675,984,633 | number: 5,773
title: train_dataset does not implement __len__
body: when training using data processed by datasets, I get the following warning, and it means I cannot set epoch numbers: `ValueError: The train_dataset does not implement __len__, max_steps has to be specified. The number of steps needs to be known in advance for the learning rate scheduler.`
state: open | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5773
created_at: 2023-04-20T04:37:05 | updated_at: 2023-07-19T20:33:13 | closed_at: null
user: { "login": "ben-8878", "id": 38179632, "type": "User" }
labels: [] | comments: []

id: 1,675,033,510 | number: 5,772
title: Fix JSON builder when missing keys in first row
body: Until now, the JSON builder only considered the keys present in the first element of the list: - Either explicitly: by passing index 0 in `dataset[0].keys()` - Or implicitly: `pa.Table.from_pylist(dataset)`, where "schema (default None): If not passed, will be inferred from the first row of the mapping values" Thi...
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5772
created_at: 2023-04-19T14:32:57 | updated_at: 2023-04-21T06:45:13 | closed_at: 2023-04-21T06:35:27
user: { "login": "albertvillanova", "id": 8515462, "type": "User" }
labels: [] | comments: []

id: 1,674,828,380 | number: 5,771
title: Support cloud storage for loading datasets
body: ### Feature request It seems that the current implementation supports cloud storage only for `load_from_disk`. It would be nice if a similar functionality existed in `load_dataset`. ### Motivation Motivation is pretty clear -- let users work with datasets located in the cloud. ### Your contribution ...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5771
created_at: 2023-04-19T12:43:53 | updated_at: 2023-05-07T17:47:41 | closed_at: 2023-05-07T17:47:41
user: { "login": "eli-osherovich", "id": 2437102, "type": "User" }
labels: [ { "name": "duplicate", "color": "cfd3d7" }, { "name": "enhancement", "color": "a2eeef" } ] | comments: []

id: 1,673,581,555 | number: 5,770
title: Add IterableDataset.from_spark
body: Follow-up from https://github.com/huggingface/datasets/pull/5701 Related issue: https://github.com/huggingface/datasets/issues/5678
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5770
created_at: 2023-04-18T17:47:53 | updated_at: 2023-05-17T14:07:32 | closed_at: 2023-05-17T14:00:38
user: { "login": "maddiedawson", "id": 106995444, "type": "User" }
labels: [] | comments: []
id: 1,673,441,182 | number: 5,769
title: Tiktoken tokenizers are not picklable
body: ### Describe the bug Since the tiktoken tokenizer is not picklable, it is not possible to use it inside `dataset.map()` with multiprocessing enabled. However, you [made](https://github.com/huggingface/datasets/issues/5536) tiktoken's tokenizers picklable in `datasets==2.10.0` for caching. For some reason, this logic does no...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5769
created_at: 2023-04-18T16:07:40 | updated_at: 2023-05-04T18:55:57 | closed_at: 2023-05-04T18:55:57
user: { "login": "markovalexander", "id": 22663468, "type": "User" }
labels: [] | comments: []

id: 1,672,494,561 | number: 5,768
title: load_dataset("squad") doesn't work in 2.7.1 and 2.10.1
body: ### Describe the bug There is an issue that seems to be unique to the "squad" dataset, in which it cannot be loaded using standard methods. This issue is most quickly reproduced from the command line, using the HF examples to verify a dataset is loaded properly. This is not a problem with "squad_v2" dataset for e...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5768
created_at: 2023-04-18T07:10:56 | updated_at: 2023-04-20T10:27:23 | closed_at: 2023-04-20T10:27:22
user: { "login": "yaseen157", "id": 57412770, "type": "User" }
labels: [] | comments: []

id: 1,672,433,979 | number: 5,767
title: How to use Distill-BERT with different datasets?
body: ### Describe the bug - `transformers` version: 4.11.3 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.12.0+cu102 (True) - Tensorflow version (GPU?): 2.10.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxL...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5767
created_at: 2023-04-18T06:25:12 | updated_at: 2023-04-20T16:52:05 | closed_at: 2023-04-20T16:52:05
user: { "login": "sauravtii", "id": 109907638, "type": "User" }
labels: [] | comments: []

id: 1,671,485,882 | number: 5,766
title: Support custom feature types
body: ### Feature request I think it would be nice to allow registering custom feature types with the 🤗 Datasets library. For example, allow to do something along the following lines: ``` from datasets.features import register_feature_type # this would be a new function @register_feature_type class CustomFeature...
state: open | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5766
created_at: 2023-04-17T15:46:41 | updated_at: 2024-03-10T11:11:22 | closed_at: null
user: { "login": "jmontalt", "id": 37540982, "type": "User" }
labels: [ { "name": "enhancement", "color": "a2eeef" } ] | comments: []
id: 1,671,388,824 | number: 5,765
title: ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['text']
body: ### Describe the bug Following is my code that I am trying to run, but facing an error (have attached the whole error below): My code: ``` from collections import OrderedDict import warnings import flwr as fl import torch import numpy as np import random from torch.utils.data import DataLoader from...
state: open | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5765
created_at: 2023-04-17T15:00:50 | updated_at: 2023-04-25T13:50:45 | closed_at: null
user: { "login": "sauravtii", "id": 109907638, "type": "User" }
labels: [] | comments: []

id: 1,670,740,198 | number: 5,764
title: ConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1
body: ### Describe the bug I want to use this (https://huggingface.co/datasets/josianem/imdb) dataset therefore I am trying to load it using the following code: ``` dataset = load_dataset("josianem/imdb") ``` The dataset is not getting loaded and gives the error message as the following: ``` Traceback (most rece...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5764
created_at: 2023-04-17T09:08:18 | updated_at: 2023-04-18T07:18:20 | closed_at: 2023-04-18T07:18:20
user: { "login": "sauravtii", "id": 109907638, "type": "User" }
labels: [] | comments: []

id: 1,670,476,302 | number: 5,763
title: fix typo: "mow" -> "now"
body: I noticed a typo as I was reading the datasets documentation. This PR contains a trivial fix changing "mow" to "now."
state: closed | is_pull_request: true
html_url: https://github.com/huggingface/datasets/pull/5763
created_at: 2023-04-17T06:03:44 | updated_at: 2023-04-17T15:01:53 | closed_at: 2023-04-17T14:54:46
user: { "login": "csris", "id": 1967608, "type": "User" }
labels: [] | comments: []

id: 1,670,326,470 | number: 5,762
title: Not able to load the pile
body: ### Describe the bug Got this error when I am trying to load the pile dataset ``` TypeError: Couldn't cast array of type struct<file: string, id: string> to {'id': Value(dtype='string', id=None)} ``` ### Steps to reproduce the bug Please visit the following sample notebook https://colab.research.goo...
state: closed | is_pull_request: false
html_url: https://github.com/huggingface/datasets/issues/5762
created_at: 2023-04-17T03:09:10 | updated_at: 2023-04-17T09:37:27 | closed_at: 2023-04-17T09:37:27
user: { "login": "surya-narayanan", "id": 17240858, "type": "User" }
labels: [] | comments: []
1,670,034,582
5,761
One or several metadata.jsonl were found, but not in the same directory or in a parent directory
### Describe the bug An attempt to generate a dataset from a zip archive using imagefolder and metadata.jsonl does not lead to the expected result. Tried all possible locations of the json file: the file in the archive is ignored (generated dataset contains only images), the file next to the archive like [here](http...
open
https://github.com/huggingface/datasets/issues/5761
2023-04-16T16:21:55
2023-04-19T11:53:24
null
{ "login": "blghtr", "id": 69686152, "type": "User" }
[]
false
[]
1,670,028,072
5,760
Multi-image loading in Imagefolder dataset
### Feature request Extend the `imagefolder` dataloading script to support loading multiple images per dataset entry. This only really makes sense if a metadata file is present. Currently you can use the following format (example `metadata.jsonl`): ``` {'file_name': 'path_to_image.png', 'metadata': ...} ... `...
open
https://github.com/huggingface/datasets/issues/5760
2023-04-16T16:01:05
2024-12-01T11:16:09
null
{ "login": "vvvm23", "id": 44398246, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,669,977,848
5,759
Can I load in list of list of dict format?
### Feature request my jsonl dataset has the following format: ``` [{'input':xxx, 'output':xxx},{'input':xxx, 'output':xxx},...] [{'input':xxx, 'output':xxx},{'input':xxx, 'output':xxx},...] ``` I try to use `datasets.load_dataset('json', data_files=path)` or `datasets.Dataset.from_json`, but it raises ``` File "site-p...
open
https://github.com/huggingface/datasets/issues/5759
2023-04-16T13:50:14
2023-04-19T12:04:36
null
{ "login": "LZY-the-boys", "id": 72137647, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
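One stdlib workaround for the format described in #5759 (a sketch under the assumption that each line of the file is itself a JSON *list* of dicts; `flatten_jsonl` is a hypothetical helper, not a `datasets` API) is to flatten the file into one plain list of dicts first and then build the dataset, e.g. with `Dataset.from_list`:

```python
import io
import json

def flatten_jsonl(lines):
    # each input line is a JSON *list* of dicts; extend them into one flat list
    rows = []
    for line in lines:
        line = line.strip()
        if line:
            rows.extend(json.loads(line))
    return rows

# stand-in for open("data.jsonl")
sample = io.StringIO(
    '[{"input": "a", "output": "b"}, {"input": "c", "output": "d"}]\n'
    '[{"input": "e", "output": "f"}]\n'
)
rows = flatten_jsonl(sample)
```

The flat `rows` list is in the shape the JSON loader already understands, so no change to the library is strictly required for this use case.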
1,669,920,923
5,758
Fixes #5757
Fixes the bug #5757
closed
https://github.com/huggingface/datasets/pull/5758
2023-04-16T11:56:01
2023-04-20T15:37:49
2023-04-20T15:30:48
{ "login": "eli-osherovich", "id": 2437102, "type": "User" }
[]
true
[]
1,669,910,503
5,757
Tilde (~) is not supported
### Describe the bug It seems that `~` is not recognized correctly in local paths. Whenever I try to use it I get an exception ### Steps to reproduce the bug ```python load_dataset("imagefolder", data_dir="~/data/my_dataset") ``` Will generate the following error: ``` EmptyDatasetError: The directory at ...
closed
https://github.com/huggingface/datasets/issues/5757
2023-04-16T11:48:10
2023-04-20T15:30:51
2023-04-20T15:30:51
{ "login": "eli-osherovich", "id": 2437102, "type": "User" }
[]
false
[]
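Until the fix in #5758 landed, the practical workaround for #5757 was to expand the tilde before handing the path to the library. A minimal stdlib sketch:

```python
import os.path

data_dir = "~/data/my_dataset"
expanded = os.path.expanduser(data_dir)
# "expanded" now begins with the user's home directory instead of "~",
# so it can be passed as data_dir= to loaders that do not expand tildes themselves
```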
1,669,678,080
5,756
Calling shuffle on an IterableDataset with streaming=True gives "ValueError: cannot reshape array"
### Describe the bug When calling shuffle on an IterableDataset with streaming=True, I get the following error: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/administrator/Documents/Projects/huggingface/jax-diffusers-sprint-consistency-models/virtualenv/lib/python3.1...
closed
https://github.com/huggingface/datasets/issues/5756
2023-04-16T04:59:47
2023-04-18T03:40:56
2023-04-18T03:40:56
{ "login": "rohfle", "id": 21077341, "type": "User" }
[]
false
[]
1,669,048,438
5,755
ImportError: cannot import name 'DeprecatedEnum' from 'datasets.utils.deprecation_utils'
### Describe the bug The module moved to new place? ### Steps to reproduce the bug in the import step, ```python from datasets.utils.deprecation_utils import DeprecatedEnum ``` error: ``` ImportError: cannot import name 'DeprecatedEnum' from 'datasets.utils.deprecation_utils' ``` ### Expected behavior...
closed
https://github.com/huggingface/datasets/issues/5755
2023-04-14T23:28:54
2023-04-14T23:36:19
2023-04-14T23:36:19
{ "login": "fivejjs", "id": 1405491, "type": "User" }
[]
false
[]
1,668,755,035
5,754
Minor tqdm fixes
`GeneratorBasedBuilder`'s TQDM bars were not used as context managers. This PR fixes that (missed these bars in https://github.com/huggingface/datasets/pull/5560). Also, this PR modifies the single-proc `save_to_disk` to fix the issue with the TQDM bar not accumulating the progress in the multi-shard setting (again...
closed
https://github.com/huggingface/datasets/pull/5754
2023-04-14T18:15:14
2023-04-20T15:27:58
2023-04-20T15:21:00
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,668,659,536
5,753
[IterableDatasets] Add column followed by interleave datasets gives bogus outputs
### Describe the bug If we add a new column to our iterable dataset using the hack described in #5752, when we then interleave datasets the new column is pinned to one value. ### Steps to reproduce the bug What we're going to do here is: 1. Load an iterable dataset in streaming mode (`original_dataset`) 2. A...
closed
https://github.com/huggingface/datasets/issues/5753
2023-04-14T17:32:31
2025-07-04T05:22:53
2023-04-14T17:36:37
{ "login": "sanchit-gandhi", "id": 93869735, "type": "User" }
[]
false
[]
1,668,574,209
5,752
Streaming dataset loses `.feature` method after `.add_column`
### Describe the bug After appending a new column to a streaming dataset using `.add_column`, we can no longer access the list of dataset features using the `.feature` method. ### Steps to reproduce the bug ```python from datasets import load_dataset original_dataset = load_dataset("librispeech_asr", "clean", sp...
open
https://github.com/huggingface/datasets/issues/5752
2023-04-14T16:39:50
2024-01-18T10:15:20
null
{ "login": "sanchit-gandhi", "id": 93869735, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,668,333,316
5,751
Consistent ArrayXD Python formatting + better NumPy/Pandas formatting
Return a list of lists instead of a list of NumPy arrays when converting the variable-shaped `ArrayXD` to Python. Additionally, improve the NumPy conversion by returning a numeric NumPy array when the offsets are equal or a NumPy object array when they aren't, and allow converting the variable-shaped `ArrayXD` to Panda...
closed
https://github.com/huggingface/datasets/pull/5751
2023-04-14T14:13:59
2023-04-20T14:43:20
2023-04-20T14:40:34
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,668,289,067
5,750
Fail to create datasets from a generator when using Google Big Query
### Describe the bug Creating a dataset from a generator using `Dataset.from_generator()` fails if the generator is the [Google Big Query Python client](https://cloud.google.com/python/docs/reference/bigquery/latest). The problem is that the Big Query client is not picklable. And the function `create_config_id` tries t...
closed
https://github.com/huggingface/datasets/issues/5750
2023-04-14T13:50:59
2023-04-17T12:20:43
2023-04-17T12:20:43
{ "login": "ivanprado", "id": 895720, "type": "User" }
[]
false
[]
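A common workaround for unpicklable state like the Big Query client (a sketch of the general pattern, not the fix adopted upstream; `FakeClient` is a made-up stand-in) is to construct the client lazily *inside* the generator function, so the function itself stays picklable:

```python
class FakeClient:
    """Stand-in for a non-picklable API client (e.g. one holding an open connection)."""
    def __getstate__(self):
        raise TypeError("FakeClient cannot be pickled")

    def rows(self):
        yield {"x": 1}
        yield {"x": 2}

def gen():
    # the client is created lazily inside the generator, so the generator
    # function captures no unpicklable state in a closure or default argument
    client = FakeClient()
    yield from client.rows()

rows = list(gen())
```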
1,668,016,321
5,749
AttributeError: 'Version' object has no attribute 'match'
### Describe the bug When I run from datasets import load_dataset data = load_dataset("visual_genome", 'region_descriptions_v1.2.0') AttributeError: 'Version' object has no attribute 'match' ### Steps to reproduce the bug from datasets import load_dataset data = load_dataset("visual_genome", 'region_descripti...
closed
https://github.com/huggingface/datasets/issues/5749
2023-04-14T10:48:06
2023-06-30T11:31:17
2023-04-18T12:57:08
{ "login": "gulnaz-zh", "id": 54584290, "type": "User" }
[]
false
[]
1,667,517,024
5,748
[BUG FIX] Issue 5739
A fix for https://github.com/huggingface/datasets/issues/5739
open
https://github.com/huggingface/datasets/pull/5748
2023-04-14T05:07:31
2023-04-14T05:07:31
null
{ "login": "airlsyn", "id": 1772912, "type": "User" }
[]
true
[]
1,667,270,412
5,747
[WIP] Add Dataset.to_spark
null
closed
https://github.com/huggingface/datasets/pull/5747
2023-04-13T23:20:03
2024-01-08T18:31:50
2024-01-08T18:31:50
{ "login": "maddiedawson", "id": 106995444, "type": "User" }
[]
true
[]
1,667,102,459
5,746
Fix link in docs
Fixes a broken link in the use_with_pytorch docs
closed
https://github.com/huggingface/datasets/pull/5746
2023-04-13T20:45:19
2023-04-14T13:15:38
2023-04-14T13:08:42
{ "login": "bbbxyz", "id": 7485661, "type": "User" }
[]
true
[]
1,667,086,143
5,745
[BUG FIX] Issue 5744
A temporal fix for https://github.com/huggingface/datasets/issues/5744.
open
https://github.com/huggingface/datasets/pull/5745
2023-04-13T20:29:55
2023-04-21T15:22:43
null
{ "login": "keyboardAnt", "id": 15572698, "type": "User" }
[]
true
[]
1,667,076,620
5,744
[BUG] With Pandas 2.0.0, `load_dataset` raises `TypeError: read_csv() got an unexpected keyword argument 'mangle_dupe_cols'`
The `load_dataset` function with Pandas `1.5.3` has no issue (just a FutureWarning) but crashes with Pandas `2.0.0`. For your convenience, I opened a draft Pull Request to fix it quickly: https://github.com/huggingface/datasets/pull/5745 --- * The FutureWarning mentioned above: ``` FutureWarning: the 'mangle_...
closed
https://github.com/huggingface/datasets/issues/5744
2023-04-13T20:21:28
2024-04-09T16:13:59
2023-07-06T17:01:59
{ "login": "keyboardAnt", "id": 15572698, "type": "User" }
[]
false
[]
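The general pattern behind fixing this kind of version drift — drop keyword arguments the installed function no longer accepts — can be sketched with the stdlib. This is an illustrative helper (`filter_supported_kwargs` and `new_read_csv` are hypothetical names, not the actual upstream patch):

```python
import inspect

def filter_supported_kwargs(func, kwargs):
    """Drop keyword arguments that `func` does not accept.

    Useful when a dependency (here: pandas 2.0 removing read_csv's
    `mangle_dupe_cols`) drops a keyword between releases."""
    params = inspect.signature(func).parameters
    # if func takes **kwargs we cannot tell what it accepts; pass everything
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return dict(kwargs)
    return {k: v for k, v in kwargs.items() if k in params}

def new_read_csv(path, *, sep=","):  # pandas-2-like signature: no mangle_dupe_cols
    return (path, sep)

call_kwargs = {"sep": ";", "mangle_dupe_cols": True}
result = new_read_csv("data.csv", **filter_supported_kwargs(new_read_csv, call_kwargs))
```

A version check against the installed pandas would work too; filtering on the actual signature avoids hard-coding version boundaries.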
1,666,843,832
5,743
dataclass.py in virtual environment is overriding the stdlib module "dataclasses"
### Describe the bug "e:\Krish_naik\FSDSRegression\venv\Lib\dataclasses.py" is overriding the stdlib module "dataclasses" ### Steps to reproduce the bug module issue ### Expected behavior overriding the stdlib module "dataclasses" ### Environment info VS code
closed
https://github.com/huggingface/datasets/issues/5743
2023-04-13T17:28:33
2023-04-17T12:23:18
2023-04-17T12:23:18
{ "login": "syedabdullahhassan", "id": 71216295, "type": "User" }
[]
false
[]
1,666,209,738
5,742
Warning specifying future change in to_tf_dataset behaviour
Warning specifying future changes happening to `to_tf_dataset` behaviour when #5602 is merged in
closed
https://github.com/huggingface/datasets/pull/5742
2023-04-13T11:10:00
2023-04-21T13:18:14
2023-04-21T13:11:09
{ "login": "amyeroberts", "id": 22614925, "type": "User" }
[]
true
[]
1,665,860,919
5,741
Fix CI warnings
Fix warnings in our CI tests.
closed
https://github.com/huggingface/datasets/pull/5741
2023-04-13T07:17:02
2023-04-13T09:48:10
2023-04-13T09:40:50
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,664,132,130
5,740
Fix CI mock filesystem fixtures
This PR fixes the fixtures of our CI mock filesystems. Before, we had to pass `clobber=True` to `fsspec.register_implementation` to overwrite the still present previously added "mock" filesystem. That meant that the mock filesystem fixture was not working properly, because the previously added "mock" filesystem, sho...
closed
https://github.com/huggingface/datasets/pull/5740
2023-04-12T08:52:35
2023-04-13T11:01:24
2023-04-13T10:54:13
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,663,762,901
5,739
weird result during dataset split when data path starts with `/data`
### Describe the bug The regex defined here https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/utils/py_utils.py#L158 will cause a weird result during dataset split when data path starts with `/data` ### Steps to reproduce the bug 1. clone dataset into local path ...
open
https://github.com/huggingface/datasets/issues/5739
2023-04-12T04:51:35
2023-04-21T14:20:59
null
{ "login": "airlsyn", "id": 1772912, "type": "User" }
[]
false
[]
1,663,477,690
5,738
load_dataset("text","dataset.txt") loads the wrong dataset!
### Describe the bug I am trying to load my own custom text dataset using the load_dataset function. My dataset is a bunch of ordered text, think along the lines of Shakespeare plays. However, after I load the dataset and I inspect it, the dataset is a table with a bunch of latitude and longitude values! What in th...
closed
https://github.com/huggingface/datasets/issues/5738
2023-04-12T01:07:46
2023-04-19T12:08:27
2023-04-19T12:08:27
{ "login": "Tylersuard", "id": 41713505, "type": "User" }
[]
false
[]
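The likely cause in #5738: the second positional argument of `load_dataset` is the *config name*, not a data file, so local files should be passed as `load_dataset("text", data_files="dataset.txt")`. What the `text` builder conceptually yields can be sketched with the stdlib (a sketch, not the actual implementation):

```python
import io

def read_text_lines(f):
    # roughly what the "text" builder produces: one {"text": ...} example per line
    return [{"text": line.rstrip("\n")} for line in f]

# stand-in for open("dataset.txt")
examples = read_text_lines(io.StringIO("To be, or not to be\nthat is the question\n"))
```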
1,662,919,811
5,737
ClassLabel Error
### Describe the bug I am still getting the error "__call__() takes 1 positional argument but 2 were given" even after ensuring that the value being passed to the label object is a single value and that the ClassLabel object has been created with the correct number of label classes ### Steps to reproduce the bug from...
closed
https://github.com/huggingface/datasets/issues/5737
2023-04-11T17:14:13
2023-04-13T16:49:57
2023-04-13T16:49:57
{ "login": "mrcaelumn", "id": 10896776, "type": "User" }
[]
false
[]
1,662,286,061
5,736
FORCE_REDOWNLOAD raises "Directory not empty" exception on second run
### Describe the bug Running `load_dataset(..., download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)` twice raises a `Directory not empty` exception on the second run. ### Steps to reproduce the bug I cannot test this on datasets v2.11.0 due to #5711, but this happens in v2.10.1. 1. Set up a script `my_dataset.p...
open
https://github.com/huggingface/datasets/issues/5736
2023-04-11T11:29:15
2023-11-30T07:16:58
null
{ "login": "rcasero", "id": 1219084, "type": "User" }
[]
false
[]
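The "Directory not empty" error in #5736 is what `os.rename` raises when the move target is an existing non-empty directory. A hedged workaround sketch (the general pattern only, not necessarily how the library resolves it; `replace_dir` is a hypothetical helper) removes the stale target before moving:

```python
import os
import shutil
import tempfile

def replace_dir(src, dst):
    # plain os.rename(src, dst) raises "Directory not empty" when dst
    # already exists and is non-empty; delete the stale copy first
    if os.path.isdir(dst):
        shutil.rmtree(dst)
    shutil.move(src, dst)

# demo: a fresh download replacing a stale cache directory
root = tempfile.mkdtemp()
src = os.path.join(root, "new_download")
dst = os.path.join(root, "cache")
os.makedirs(src)
os.makedirs(dst)
open(os.path.join(src, "data.arrow"), "w").close()
open(os.path.join(dst, "stale.arrow"), "w").close()
replace_dir(src, dst)
```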
1,662,150,903
5,735
Implement sharding on merged iterable datasets
This PR allows sharding of merged iterable datasets. Merged iterable datasets, built with for instance the `interleave_datasets` command, are comprised of multiple sub-iterables, one for each dataset that has been merged. With this PR, sharding a merged iterable will result in multiple merged datasets each comprised of sh...
closed
https://github.com/huggingface/datasets/pull/5735
2023-04-11T10:02:25
2023-04-27T16:39:04
2023-04-27T16:32:09
{ "login": "bruno-hays", "id": 48770768, "type": "User" }
[]
true
[]
1,662,058,028
5,734
Remove temporary pin of fsspec
Once root cause is found and fixed, remove the temporary pin introduced by: - #5731
closed
https://github.com/huggingface/datasets/issues/5734
2023-04-11T09:04:17
2023-04-11T11:04:52
2023-04-11T11:04:52
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,662,039,191
5,733
Unpin fsspec
In `fsspec==2023.4.0` the default value for `clobber` when registering an implementation was changed from True to False. See: - https://github.com/fsspec/filesystem_spec/pull/1237 This PR recovers the previous behavior by passing `clobber=True` when registering mock implementations. This PR also removes the temporary pin in...
closed
https://github.com/huggingface/datasets/pull/5733
2023-04-11T08:52:12
2023-04-11T11:11:45
2023-04-11T11:04:51
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,662,020,571
5,732
Enwik8 should support the standard split
### Feature request The HuggingFace Datasets library currently supports two BuilderConfigs for Enwik8. One config yields individual lines as examples, while the other config yields the entire dataset as a single example. Both support only a monolithic split: it is all grouped as "train". The HuggingFace Datasets l...
closed
https://github.com/huggingface/datasets/issues/5732
2023-04-11T08:38:53
2023-04-11T09:28:17
2023-04-11T09:28:16
{ "login": "lucaslingle", "id": 10287371, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
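The split requested in #5732 is the conventional one from the compression literature: the first 90M bytes are train, the next 5M validation, the next 5M test. A parametrized stdlib sketch (sizes scaled down in the demo so it runs on toy data):

```python
def contiguous_byte_splits(data, train=90_000_000, valid=5_000_000, test=5_000_000):
    # conventional enwik8 split: first `train` bytes, then `valid`, then `test`,
    # taken contiguously from the front of the file
    a, b = train, train + valid
    return data[:a], data[a:b], data[b:b + test]

# toy data with proportionally scaled split sizes
tr, va, te = contiguous_byte_splits(bytes(range(100)), train=90, valid=5, test=5)
```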
1,662,012,913
5,731
Temporarily pin fsspec
Fix #5730.
closed
https://github.com/huggingface/datasets/pull/5731
2023-04-11T08:33:15
2023-04-11T08:57:45
2023-04-11T08:47:55
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,662,007,926
5,730
CI is broken: ValueError: Name (mock) already in the registry and clobber is False
CI is broken for `test_py310`. See: https://github.com/huggingface/datasets/actions/runs/4665326892/jobs/8258580948 ``` =========================== short test summary info ============================ ERROR tests/test_builder.py::test_builder_with_filesystem_download_and_prepare - ValueError: Name (mock) already ...
closed
https://github.com/huggingface/datasets/issues/5730
2023-04-11T08:29:46
2023-04-11T08:47:56
2023-04-11T08:47:56
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,661,929,923
5,729
Fix nondeterministic sharded data split order
This PR makes the order of the split names deterministic. Before it was nondeterministic because we were iterating over `set` elements. Fix #5728.
closed
https://github.com/huggingface/datasets/pull/5729
2023-04-11T07:34:20
2023-04-26T15:12:25
2023-04-26T15:05:12
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
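The bug class fixed here — nondeterministic iteration over a `set` — has a simple stdlib remedy; this sketch shows the general pattern (deduplicate while preserving first-seen order), not the literal upstream patch:

```python
def dedupe_preserving_order(names):
    # dict preserves insertion order (Python 3.7+), so this deduplicates while
    # keeping a deterministic first-seen order, unlike iterating over set(names)
    return list(dict.fromkeys(names))

splits = dedupe_preserving_order(["train", "random", "train", "random"])
```

`sorted(set(names))` also gives a deterministic result, but changes the order to alphabetical rather than first-seen.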
1,661,925,932
5,728
The order of data split names is nondeterministic
After this CI error: https://github.com/huggingface/datasets/actions/runs/4639528358/jobs/8210492953?pr=5718 ``` FAILED tests/test_data_files.py::test_get_data_files_patterns[data_file_per_split4] - AssertionError: assert ['random', 'train'] == ['train', 'random'] At index 0 diff: 'random' != 'train' Full diff:...
closed
https://github.com/huggingface/datasets/issues/5728
2023-04-11T07:31:25
2023-04-26T15:05:13
2023-04-26T15:05:13
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,661,536,363
5,727
load_dataset fails with FileNotFound error on Windows
### Describe the bug Although I can import and run the datasets library in a Colab environment, I cannot successfully load any data on my own machine (Windows 10) despite following the install steps: (1) create conda environment (2) activate environment (3) install with: `conda install -c huggingface -c conda-...
closed
https://github.com/huggingface/datasets/issues/5727
2023-04-10T23:21:12
2023-07-21T14:08:20
2023-07-21T14:08:19
{ "login": "joelkowalewski", "id": 122648572, "type": "User" }
[]
false
[]
1,660,944,807
5,726
Fallback JSON Dataset loading does not load all values when features specified manually
### Describe the bug The fallback JSON dataset loader located here: https://github.com/huggingface/datasets/blob/1c4ec00511868bd881e84a6f7e0333648d833b8e/src/datasets/packaged_modules/json/json.py#L130-L153 does not load the values of features correctly when features are specified manually and not all features...
closed
https://github.com/huggingface/datasets/issues/5726
2023-04-10T15:22:14
2023-04-21T06:35:28
2023-04-21T06:35:28
{ "login": "myluki2000", "id": 3610788, "type": "User" }
[]
false
[]