| id (int64) | number (int64) | title (string) | body (string) | state (string) | html_url (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | user (dict) | labels (list) | is_pull_request (bool) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,525,733,818 | 5,414 | Sharding error with Multilingual LibriSpeech | ### Describe the bug
Loading the German Multilingual LibriSpeech dataset results in a RuntimeError regarding sharding with the following stacktrace:
```
Downloading and preparing dataset multilingual_librispeech/german to /home/nithin/datadrive/cache/huggingface/datasets/facebook___multilingual_librispeech/german/... | closed | https://github.com/huggingface/datasets/issues/5414 | 2023-01-09T14:45:31 | 2023-01-18T14:09:04 | 2023-01-18T14:09:04 | {
"login": "Nithin-Holla",
"id": 19574344,
"type": "User"
} | [] | false | [] |
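For context, a minimal reproduction sketch inferred from the cache path in the report above; the repo id is an assumption based on `facebook___multilingual_librispeech/german`:
```python
from datasets import load_dataset

# hypothetical repro: loading the German config is what triggers the sharding RuntimeError
mls_german = load_dataset("facebook/multilingual_librispeech", "german")
```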
1,524,591,837 | 5,413 | concatenate_datasets fails when two datasets with shards > 1 and unequal shard numbers | ### Describe the bug
When using `concatenate_datasets([dataset1, dataset2], axis = 1)` to concatenate two datasets with shards > 1, it fails:
```
File "/home/xzg/anaconda3/envs/tri-transfer/lib/python3.9/site-packages/datasets/combine.py", line 182, in concatenate_datasets
return _concatenate_map_style_data... | closed | https://github.com/huggingface/datasets/issues/5413 | 2023-01-08T17:01:52 | 2023-01-26T09:27:21 | 2023-01-26T09:27:21 | {
"login": "ZeguanXiao",
"id": 38279341,
"type": "User"
} | [] | false | [] |
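For reference, a minimal sketch of the `axis=1` call the issue above refers to; with single-shard datasets like these it succeeds, and the report concerns datasets each backed by more than one shard, with unequal shard counts:
```python
from datasets import Dataset, concatenate_datasets

d1 = Dataset.from_dict({"a": list(range(10))})
d2 = Dataset.from_dict({"b": list(range(10))})
# axis=1 concatenates column-wise instead of appending rows
merged = concatenate_datasets([d1, d2], axis=1)
print(merged.column_names)  # ['a', 'b']
```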
1,524,250,269 | 5,412 | load_dataset() cannot find dataset_info.json with multiple training runs in parallel | ### Describe the bug
I have a custom local dataset in JSON form. I am trying to do multiple training runs in parallel. The first training run runs with no issue. However, when I start another run on another GPU, the following code throws this error.
If there is a workaround to ignore the cache I think that would ... | closed | https://github.com/huggingface/datasets/issues/5412 | 2023-01-08T00:44:32 | 2023-01-19T20:28:43 | 2023-01-19T20:28:43 | {
"login": "mtoles",
"id": 7139344,
"type": "User"
} | [] | false | [] |
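A possible workaround (an assumption on my part, not the reporter's fix) is to isolate each parallel run's cache so the runs don't race on the same `dataset_info.json`:
```python
import os
from datasets import load_dataset

# hypothetical: give every parallel run its own cache directory
rank = int(os.environ.get("LOCAL_RANK", "0"))
ds = load_dataset("json", data_files="data.json", cache_dir=f"./hf_cache/run_{rank}")
```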
1,523,297,786 | 5,411 | Update docs of S3 filesystem with async aiobotocore | [s3fs has migrated to all async calls](https://github.com/fsspec/s3fs/commit/0de2c6fb3d87c08ea694de96dca0d0834034f8bf).
Updating documentation to use `AioSession` while using s3fs for download manager as well as working with datasets | closed | https://github.com/huggingface/datasets/pull/5411 | 2023-01-06T23:19:17 | 2023-01-18T11:18:59 | 2023-01-18T11:12:04 | {
"login": "maheshpec",
"id": 5677912,
"type": "User"
} | [] | true | [] |
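A sketch of the async-session usage the PR above documents, assuming a named AWS profile (`my-profile` and the bucket are hypothetical):
```python
import s3fs
from aiobotocore.session import AioSession

# s3fs is all-async now, so an AioSession is passed instead of a botocore session
storage_options = {"session": AioSession(profile="my-profile")}
fs = s3fs.S3FileSystem(**storage_options)
print(fs.ls("my-bucket"))
```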
1,521,168,032 | 5,410 | Map-style Dataset to IterableDataset | Added `ds.to_iterable()` to get an iterable dataset from a map-style arrow dataset.
It also has a `num_shards` argument to split the dataset before converting to an iterable dataset. Sharding is important to enable efficient shuffling and parallel loading of iterable datasets.
TODO:
- [x] tests
- [x] docs
Fi... | closed | https://github.com/huggingface/datasets/pull/5410 | 2023-01-05T18:12:17 | 2023-02-01T18:11:45 | 2023-02-01T16:36:01 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
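A sketch of the conversion described in the PR above; the method is written here as `to_iterable_dataset`, the name it ended up with in later releases (the PR text calls it `ds.to_iterable()`):
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(1000))})
# num_shards pre-shards the dataset to enable efficient shuffling and parallel loading
iterable_ds = ds.to_iterable_dataset(num_shards=4)
for example in iterable_ds:
    print(example)
    break
```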
1,520,374,219 | 5,409 | Fix deprecation warning when use_auth_token passed to download_and_prepare | The `DatasetBuilder.download_and_prepare` argument `use_auth_token` was deprecated in:
- #5302
However, `use_auth_token` is still passed to `download_and_prepare` in our built-in `io` readers (csv, json, parquet,...).
This PR fixes it, so that no deprecation warning is raised.
Fix #5407. | closed | https://github.com/huggingface/datasets/pull/5409 | 2023-01-05T09:10:58 | 2023-01-06T11:06:16 | 2023-01-06T10:59:13 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,519,890,752 | 5,408 | dataset map function could not be hashed properly | ### Describe the bug
I followed the [blog post](https://huggingface.co/blog/fine-tune-whisper#building-a-demo) to fine-tune a Cantonese transcription model.
When using the map function to prepare the dataset, the following warning pops up:
`common_voice = common_voice.map(prepare_dataset,
remove_... | closed | https://github.com/huggingface/datasets/issues/5408 | 2023-01-05T01:59:59 | 2023-01-06T13:22:19 | 2023-01-06T13:22:18 | {
"login": "Tungway1990",
"id": 68179274,
"type": "User"
} | [] | false | [] |
1,519,797,345 | 5,407 | Datasets.from_sql() generates deprecation warning | ### Describe the bug
Calling `Datasets.from_sql()` generates a warning:
`.../site-packages/datasets/builder.py:712: FutureWarning: 'use_auth_token' was deprecated in version 2.7.1 and will be removed in 3.0.0. Pass 'use_auth_token' to the initializer/'load_dataset_builder' instead.`
### Steps to reproduce the ... | closed | https://github.com/huggingface/datasets/issues/5407 | 2023-01-05T00:43:17 | 2023-01-06T10:59:14 | 2023-01-06T10:59:14 | {
"login": "msummerfield",
"id": 21002157,
"type": "User"
} | [] | false | [] |
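The pattern the warning recommends, sketched with a hypothetical repo id: pass `use_auth_token` when creating the builder, not to `download_and_prepare`:
```python
from datasets import load_dataset_builder

builder = load_dataset_builder("my-org/private-dataset", use_auth_token=True)  # hypothetical repo
builder.download_and_prepare()  # no use_auth_token here, so no deprecation warning
```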
1,519,140,544 | 5,406 | [2.6.1][2.7.0] Upgrade `datasets` to fix `TypeError: can only concatenate str (not "int") to str` | `datasets` 2.6.1 and 2.7.0 stopped supporting datasets like IMDB, CoNLL or MNIST.
When loading a dataset using 2.6.1 or 2.7.0, you may see this error for certain datasets:
```python
TypeError: can only concatenate str (not "int") to str
```
This is because we started to update the metadat... | open | https://github.com/huggingface/datasets/issues/5406 | 2023-01-04T15:10:04 | 2023-06-21T18:45:38 | null | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | false | [] |
1,517,879,386 | 5,405 | size_in_bytes the same for all splits | ### Describe the bug
Hi, it looks like whenever you pull a dataset and get size_in_bytes, it returns the same size for all splits (and that size is the combined size of all splits). It seems like this shouldn't be the intended behavior since it is misleading. Here's an example:
```
>>> from datasets import load_da... | open | https://github.com/huggingface/datasets/issues/5405 | 2023-01-03T20:25:48 | 2023-01-04T09:22:59 | null | {
"login": "Breakend",
"id": 1609857,
"type": "User"
} | [] | false | [] |
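A sketch of the mismatch being reported (the dataset name is a hypothetical stand-in): `info.size_in_bytes` repeats the combined size for every split, while the per-split sizes live in the `SplitInfo` entries:
```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes")  # hypothetical example dataset
for name, split in ds.items():
    # same combined value for every split (the reported behavior) ...
    print(name, split.info.size_in_bytes)
    # ... while per-split sizes are recorded under info.splits
    print(name, split.info.splits[name].num_bytes)
```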
1,517,566,331 | 5,404 | Better integration of BIG-bench | ### Feature request
Ideally, it would be nice to have a maintained PyPI package for `bigbench`.
### Motivation
We'd like to allow anyone to access, explore and use any task.
### Your contribution
@lhoestq has opened an issue in their repo:
- https://github.com/google/BIG-bench/issues/906 | open | https://github.com/huggingface/datasets/issues/5404 | 2023-01-03T15:37:57 | 2023-02-09T20:30:26 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,517,466,492 | 5,403 | Replace one letter import in docs | This PR updates a code example for consistency across the docs based on [feedback from this comment](https://github.com/huggingface/transformers/pull/20925/files/9fda31634d203a47d3212e4e8d43d3267faf9808#r1058769500):
"In terms of style we usually stay away from one-letter imports like this (even if the community use... | closed | https://github.com/huggingface/datasets/pull/5403 | 2023-01-03T14:26:32 | 2023-01-03T15:06:18 | 2023-01-03T14:59:01 | {
"login": "MKhalusova",
"id": 1065417,
"type": "User"
} | [] | true | [] |
1,517,409,429 | 5,402 | Missing state.json when creating a cloud dataset using a dataset_builder | ### Describe the bug
Using `load_dataset_builder` to create a builder and running `download_and_prepare` to upload it to S3. However, when trying to load it, the `state.json` files are missing. Complete example:
```python
from aiobotocore.session import AioSession as Session
from datasets import load_from_disk, load_da... | open | https://github.com/huggingface/datasets/issues/5402 | 2023-01-03T13:39:59 | 2023-01-04T17:23:57 | null | {
"login": "danielfleischer",
"id": 22022514,
"type": "User"
} | [] | false | [] |
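A sketch of the flow the reporter describes, with a hypothetical bucket and credentials; note the original snippet passes an `AioSession`, while this sketch assumes the later `storage_options` style:
```python
from datasets import load_dataset_builder, load_from_disk

storage_options = {"key": "...", "secret": "..."}  # hypothetical S3 credentials

builder = load_dataset_builder("json", data_files="data.json")
builder.download_and_prepare("s3://my-bucket/my-dataset", storage_options=storage_options)

# loading it back is where the missing state.json error is reported
ds = load_from_disk("s3://my-bucket/my-dataset", storage_options=storage_options)
```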
1,517,160,935 | 5,401 | Support Dataset conversion from/to Spark | This PR implements Spark integration by supporting `Dataset` conversion from/to Spark `DataFrame`. | open | https://github.com/huggingface/datasets/pull/5401 | 2023-01-03T09:57:40 | 2023-01-05T14:21:33 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,517,032,972 | 5,400 | Support streaming datasets with os.path.exists and Path.exists | Support streaming datasets with `os.path.exists` and `pathlib.Path.exists`. | closed | https://github.com/huggingface/datasets/pull/5400 | 2023-01-03T07:42:37 | 2023-01-06T10:42:44 | 2023-01-06T10:35:44 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,515,548,427 | 5,399 | Got disconnected from remote data host. Retrying in 5sec [2/20] | ### Describe the bug
While trying to upload my image dataset (stored as a CSV file) to Hugging Face by running the code below, I kept getting disconnected. The dataset consists of a little over 100k image-caption pairs.
### Steps to reproduce the bug
```
df = pd.read_csv('x.csv', encoding='utf-8-sig')
features = Features({
'link': Ima... | closed | https://github.com/huggingface/datasets/issues/5399 | 2023-01-01T13:00:11 | 2023-01-02T07:21:52 | 2023-01-02T07:21:52 | {
"login": "alhuri",
"id": 46427957,
"type": "User"
} | [] | false | [] |
1,514,425,231 | 5,398 | Unpin pydantic | Once `pydantic` fixes their issue in their 1.10.3 version, unpin it.
See issue:
- #5394
See temporary fix:
- #5395 | closed | https://github.com/huggingface/datasets/issues/5398 | 2022-12-30T10:37:31 | 2022-12-30T10:43:41 | 2022-12-30T10:43:41 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | false | [] |
1,514,412,246 | 5,397 | Unpin pydantic test dependency | Once pydantic-1.10.3 has been yanked, we can unpin it: https://pypi.org/project/pydantic/1.10.3/
See reply by pydantic team https://github.com/pydantic/pydantic/issues/4885#issuecomment-1367819807
```
v1.10.3 has been yanked.
```
in response to spacy request: https://github.com/pydantic/pydantic/issues/4885#issu... | closed | https://github.com/huggingface/datasets/pull/5397 | 2022-12-30T10:22:09 | 2022-12-30T10:53:11 | 2022-12-30T10:43:40 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,514,002,934 | 5,396 | Fix checksum verification | The expected checksum was being verified against the checksum dict (not the checksum value). | closed | https://github.com/huggingface/datasets/pull/5396 | 2022-12-29T19:45:17 | 2023-02-13T11:11:22 | 2023-02-13T11:11:22 | {
"login": "daskol",
"id": 9336514,
"type": "User"
} | [] | true | [] |
1,513,997,335 | 5,395 | Temporarily pin pydantic test dependency | Temporarily pin `pydantic` until a permanent solution is found.
Fix #5394. | closed | https://github.com/huggingface/datasets/pull/5395 | 2022-12-29T19:34:19 | 2022-12-30T06:36:57 | 2022-12-29T21:00:26 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,513,976,229 | 5,394 | CI error: TypeError: dataclass_transform() got an unexpected keyword argument 'field_specifiers' | ### Describe the bug
While installing the dependencies, the CI raises a TypeError:
```
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/runpy.py", line 183, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/opt/hoste... | closed | https://github.com/huggingface/datasets/issues/5394 | 2022-12-29T18:58:44 | 2022-12-30T10:40:51 | 2022-12-29T21:00:27 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | false | [] |
1,512,908,613 | 5,393 | Finish deprecating the fs argument | See #5385 for some discussion on this
The `fs=` arg was deprecated from `Dataset.save_to_disk` and `Dataset.load_from_disk` in `2.8.0` (to be removed in `3.0.0`). There are a few other places where the `fs=` arg was still used (functions/methods in `datasets.info` and `datasets.load`). This PR adds a similar beha...
"login": "dconathan",
"id": 15098095,
"type": "User"
} | [] | true | [] |
1,512,712,529 | 5,392 | Fix Colab notebook link | Fix notebook link to open in Colab. | closed | https://github.com/huggingface/datasets/pull/5392 | 2022-12-28T11:44:53 | 2023-01-03T15:36:14 | 2023-01-03T15:27:31 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,510,350,400 | 5,391 | Whisper Event - RuntimeError: The size of tensor a (504) must match the size of tensor b (448) at non-singleton dimension 1 100% 1000/1000 [2:52:21<00:00, 10.34s/it] | Done in a VM with a GPU (Ubuntu) following the [Whisper Event - PYTHON](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event#python-script) instructions.
Attempted using [RuntimeError: The size of tensor a (504) must match the size of tensor b (448) at non-singleton dimension 1 100% 1...
"login": "catswithbats",
"id": 12885107,
"type": "User"
} | [] | false | [] |
1,509,357,553 | 5,390 | Error when pushing to the CI hub | ### Describe the bug
Note that this is a special case where the Hub URL is "https://hub-ci.huggingface.co"; the error does not appear if we do the same on the Hub (https://huggingface.co).
The call to `dataset.push_to_hub(` fails:
```
Pushing dataset shards to the dataset hub: 100%|██████████████████████████████████... | closed | https://github.com/huggingface/datasets/issues/5390 | 2022-12-23T13:36:37 | 2022-12-23T20:29:02 | 2022-12-23T20:29:02 | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [] | false | [] |
1,509,348,626 | 5,389 | Fix link in `load_dataset` docstring | Fix https://github.com/huggingface/datasets/issues/5387, fix https://github.com/huggingface/datasets/issues/4566 | closed | https://github.com/huggingface/datasets/pull/5389 | 2022-12-23T13:26:31 | 2023-01-25T19:00:43 | 2023-01-24T16:33:38 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,509,042,348 | 5,388 | Getting Value Error while loading a dataset.. | ### Describe the bug
I am trying to load a dataset using the Hugging Face Datasets `load_dataset` method. I am getting the value error as shown below. Can someone help with this? I am using a Windows laptop and a Google Colab notebook.
```
WARNING:datasets.builder:Using custom data configuration default-a1d9e8eaedd958cd
---... | closed | https://github.com/huggingface/datasets/issues/5388 | 2022-12-23T08:16:43 | 2022-12-29T08:36:33 | 2022-12-27T17:59:09 | {
"login": "valmetisrinivas",
"id": 51160232,
"type": "User"
} | [] | false | [] |
1,508,740,177 | 5,387 | Missing documentation page : improve-performance | ### Describe the bug
Trying to access https://huggingface.co/docs/datasets/v2.8.0/en/package_reference/cache#improve-performance, the page is missing.
The link is in here : https://huggingface.co/docs/datasets/v2.8.0/en/package_reference/loading_methods#datasets.load_dataset.keep_in_memory
### Steps to reproduce t... | closed | https://github.com/huggingface/datasets/issues/5387 | 2022-12-23T01:12:57 | 2023-01-24T16:33:40 | 2023-01-24T16:33:40 | {
"login": "astariul",
"id": 43774355,
"type": "User"
} | [] | false | [] |
1,508,592,918 | 5,386 | `max_shard_size` in `datasets.push_to_hub()` breaks with large files | ### Describe the bug
`max_shard_size` parameter for `datasets.push_to_hub()` works unreliably with large files, generating shard files that are way past the specified limit.
In my private dataset, which contains unprocessed images of all sizes (up to `~100MB` per file), I've encountered cases where `max_shard_siz... | closed | https://github.com/huggingface/datasets/issues/5386 | 2022-12-22T21:50:58 | 2022-12-26T23:45:51 | 2022-12-26T23:45:51 | {
"login": "salieri",
"id": 1086393,
"type": "User"
} | [] | false | [] |
1,508,535,532 | 5,385 | Is `fs=` deprecated in `load_from_disk()` as well? | ### Describe the bug
The `fs=` argument was deprecated from `Dataset.save_to_disk` and `Dataset.load_from_disk` in favor of automagically figuring it out via fsspec:
https://github.com/huggingface/datasets/blob/9a7272cd4222383a5b932b0083a4cc173fda44e8/src/datasets/arrow_dataset.py#L1339-L1340
Is there a reason the... | closed | https://github.com/huggingface/datasets/issues/5385 | 2022-12-22T21:00:45 | 2023-01-23T10:50:05 | 2023-01-23T10:50:04 | {
"login": "dconathan",
"id": 15098095,
"type": "User"
} | [] | false | [] |
1,508,152,598 | 5,384 | Handle 0-dim tensors in `cast_to_python_objects` | Fix #5229 | closed | https://github.com/huggingface/datasets/pull/5384 | 2022-12-22T16:15:30 | 2023-01-13T16:10:15 | 2023-01-13T16:00:52 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,507,293,968 | 5,383 | IterableDataset missing column_names, differs from Dataset interface | ### Describe the bug
The documentation on [Stream](https://huggingface.co/docs/datasets/v1.18.2/stream.html) seems to imply that IterableDataset behaves just like a Dataset. However, examples like
```
dataset.map(augment_data, batched=True, remove_columns=dataset.column_names, ...)
```
will not work because `.colu... | closed | https://github.com/huggingface/datasets/issues/5383 | 2022-12-22T05:27:02 | 2023-03-13T19:03:33 | 2023-03-13T19:03:33 | {
"login": "iceboundflame",
"id": 933687,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "good first issue",
"color": "7057ff"
}
] | false | [] |
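A hedged workaround sketch (dataset name hypothetical): a streaming dataset often exposes its schema via `features`, and when it doesn't, the keys of the first example can stand in for `column_names`:
```python
from datasets import load_dataset

ids = load_dataset("rotten_tomatoes", split="train", streaming=True)

# features can be None for a streaming dataset, so fall back to peeking at one example
if ids.features is not None:
    column_names = list(ids.features.keys())
else:
    column_names = list(next(iter(ids)).keys())

def augment_data(batch):
    # placeholder transform: copy the first column under a new name
    return {"augmented": batch[column_names[0]]}

ids = ids.map(augment_data, batched=True, remove_columns=column_names)
```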
1,504,788,691 | 5,382 | Raise from disconnect error in xopen | this way we can know the cause of the disconnect
related to https://github.com/huggingface/datasets/issues/5374 | closed | https://github.com/huggingface/datasets/pull/5382 | 2022-12-20T15:52:44 | 2023-01-26T09:51:13 | 2023-01-26T09:42:45 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,504,498,387 | 5,381 | Wrong URL for the_pile dataset | ### Describe the bug
When trying to load `the_pile` dataset from the library, I get a `FileNotFound` error.
### Steps to reproduce the bug
Steps to reproduce:
Run:
```
from datasets import load_dataset
dataset = load_dataset("the_pile")
```
I get the output:
"name": "FileNotFoundError",
"message... | closed | https://github.com/huggingface/datasets/issues/5381 | 2022-12-20T12:40:14 | 2023-02-15T16:24:57 | 2023-02-15T16:24:57 | {
"login": "LeoGrin",
"id": 45738728,
"type": "User"
} | [] | false | [] |
1,504,404,043 | 5,380 | Improve dataset `.skip()` speed in streaming mode | ### Feature request
Add extra information to the `dataset_infos.json` file to include the number of samples/examples in each shard, for example in a new field `num_examples` alongside `num_bytes`. The `.skip()` function could use this information to ignore the download of a shard when in streaming mode, which AFAICT... | open | https://github.com/huggingface/datasets/issues/5380 | 2022-12-20T11:25:23 | 2023-03-08T10:47:12 | null | {
"login": "versae",
"id": 173537,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "good second issue",
"color": "BDE59C"
}
] | false | [] |
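For reference, the call pattern whose cost this request targets, matching the C4 numbers mentioned in the related PR #5373 below (loading C4 here is illustrative):
```python
from datasets import load_dataset

ids = load_dataset("c4", "en", split="train", streaming=True)
# without per-shard num_examples metadata, this streams (and downloads) every
# skipped example instead of jumping over whole shards
ids = ids.skip(100_000_000)
```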
1,504,010,639 | 5,379 | feat: depth estimation dataset guide. | This PR adds a guide for prepping datasets for depth estimation.
PR to add documentation images is up here: https://huggingface.co/datasets/huggingface/documentation-images/discussions/22 | closed | https://github.com/huggingface/datasets/pull/5379 | 2022-12-20T05:32:11 | 2023-01-13T12:30:31 | 2023-01-13T12:23:34 | {
"login": "sayakpaul",
"id": 22957388,
"type": "User"
} | [] | true | [] |
1,503,887,508 | 5,378 | The dataset "the_pile", subset "enron_emails" , load_dataset() failure | ### Describe the bug
Running `datasets.load_dataset("the_pile", "enron_emails")` fails:

### Steps to reproduce the bug
Run below code in python cli:
>>> import datasets
>>> datasets.load_dataset(... | closed | https://github.com/huggingface/datasets/issues/5378 | 2022-12-20T02:19:13 | 2022-12-20T07:52:54 | 2022-12-20T07:52:54 | {
"login": "shaoyuta",
"id": 52023469,
"type": "User"
} | [] | false | [] |
1,503,477,833 | 5,377 | Add a parallel implementation of to_tf_dataset() | Hey all! Here's a first draft of the PR to add a multiprocessing implementation for `to_tf_dataset()`. It worked in some quick testing for me, but obviously I need to do some much more rigorous testing/benchmarking, and add some proper library tests.
The core idea is that we do everything using `multiprocessing` and... | closed | https://github.com/huggingface/datasets/pull/5377 | 2022-12-19T19:40:27 | 2023-01-25T16:28:44 | 2023-01-25T16:21:40 | {
"login": "Rocketknight1",
"id": 12866554,
"type": "User"
} | [] | true | [] |
1,502,730,559 | 5,376 | set dev version | null | closed | https://github.com/huggingface/datasets/pull/5376 | 2022-12-19T10:56:56 | 2022-12-19T11:01:55 | 2022-12-19T10:57:16 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,502,720,404 | 5,375 | Release: 2.8.0 | null | closed | https://github.com/huggingface/datasets/pull/5375 | 2022-12-19T10:48:26 | 2022-12-19T10:55:43 | 2022-12-19T10:53:15 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,501,872,945 | 5,374 | Using too many threads results in: Got disconnected from remote data host. Retrying in 5sec | ### Describe the bug
`streaming_download_manager` seems to disconnect if too many runs access the same underlying dataset 🧐
The code works fine for me if I have ~100 runs in parallel, but disconnects once scaling to 200.
Possibly related:
- https://github.com/huggingface/datasets/pull/3100
- https://github.com/... | closed | https://github.com/huggingface/datasets/issues/5374 | 2022-12-18T11:38:58 | 2023-07-24T15:23:07 | 2023-07-24T15:23:07 | {
"login": "Muennighoff",
"id": 62820084,
"type": "User"
} | [] | false | [] |
1,501,484,197 | 5,373 | Simplify skipping | Was hoping to find a way to speed up the skipping as I'm running into bottlenecks skipping 100M examples on C4 (it takes 12 hours to skip), but didn't find anything better than this small change :(
Maybe there's a way to directly skip whole shards to speed it up? 🧐 | closed | https://github.com/huggingface/datasets/pull/5373 | 2022-12-17T17:23:52 | 2022-12-18T21:43:31 | 2022-12-18T21:40:21 | {
"login": "Muennighoff",
"id": 62820084,
"type": "User"
} | [] | true | [] |
1,501,377,802 | 5,372 | Fix streaming pandas.read_excel | This PR fixes `xpandas_read_excel`:
- Support passing a path string, besides a file-like object
- Support passing `use_auth_token`
- First assumes the host server supports HTTP range requests; only if a ValueError is thrown (Cannot seek streaming HTTP file), then it preserves previous behavior (see [#3355](https://g... | closed | https://github.com/huggingface/datasets/pull/5372 | 2022-12-17T12:58:52 | 2023-01-06T11:50:58 | 2023-01-06T11:43:37 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,501,369,036 | 5,371 | Add a robustness benchmark dataset for vision | ### Name
ImageNet-C
### Paper
Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
### Data
https://github.com/hendrycks/robustness
### Motivation
It's a known fact that vision models are brittle when they meet with slightly corrupted and perturbed data. This is also corre... | open | https://github.com/huggingface/datasets/issues/5371 | 2022-12-17T12:35:13 | 2022-12-20T06:21:41 | null | {
"login": "sayakpaul",
"id": 22957388,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,500,622,276 | 5,369 | Distributed support | To split your dataset across your training nodes, you can use the new [`datasets.distributed.split_dataset_by_node`]:
```python
import os
from datasets.distributed import split_dataset_by_node
ds = split_dataset_by_node(ds, rank=int(os.environ["RANK"]), world_size=int(os.environ["WORLD_SIZE"]))
```
This wor... | closed | https://github.com/huggingface/datasets/pull/5369 | 2022-12-16T17:43:47 | 2023-07-25T12:00:31 | 2023-01-16T13:33:32 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,500,322,973 | 5,368 | Align remove columns behavior and input dict mutation in `map` with previous behavior | Align the `remove_columns` behavior and input dict mutation in `map` with the behavior before https://github.com/huggingface/datasets/pull/5252. | closed | https://github.com/huggingface/datasets/pull/5368 | 2022-12-16T14:28:47 | 2022-12-16T16:28:08 | 2022-12-16T16:25:12 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,499,174,749 | 5,367 | Fix remove columns from lazy dict | This was introduced in https://github.com/huggingface/datasets/pull/5252 and causing the transformers CI to break: https://app.circleci.com/pipelines/github/huggingface/transformers/53886/workflows/522faf2e-a053-454c-94f8-a617fde33393/jobs/648597
Basically this code should return a dataset with only one column:
`... | closed | https://github.com/huggingface/datasets/pull/5367 | 2022-12-15T22:04:12 | 2022-12-15T22:27:53 | 2022-12-15T22:24:50 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,498,530,851 | 5,366 | ExamplesIterable fixes | fix typing and ExamplesIterable.shard_data_sources | closed | https://github.com/huggingface/datasets/pull/5366 | 2022-12-15T14:23:05 | 2022-12-15T14:44:47 | 2022-12-15T14:41:45 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,498,422,466 | 5,365 | fix: image array should support other formats than uint8 | Currently images that are provided as ndarrays, but not in `uint8` format are going to loose data. Namely, for example in a depth image where the data is in float32 format, the type-casting to uint8 will basically make the whole image blank.
`PIL.Image.fromarray` [does support mode `F`](https://pillow.readthedocs.io/e... | closed | https://github.com/huggingface/datasets/pull/5365 | 2022-12-15T13:17:50 | 2023-01-26T18:46:45 | 2023-01-26T18:39:36 | {
"login": "vigsterkr",
"id": 30353,
"type": "User"
} | [] | true | [] |
1,498,360,628 | 5,364 | Support for writing arrow files directly with BeamWriter | Make it possible to write Arrow files directly with `BeamWriter` rather than converting from Parquet to Arrow, which is sub-optimal, especially for big datasets for which Beam is primarily used. | closed | https://github.com/huggingface/datasets/pull/5364 | 2022-12-15T12:38:05 | 2024-01-11T14:52:33 | 2024-01-11T14:45:15 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,498,171,317 | 5,363 | Dataset.from_generator() crashes on simple example | null | closed | https://github.com/huggingface/datasets/issues/5363 | 2022-12-15T10:21:28 | 2022-12-15T11:51:33 | 2022-12-15T11:51:33 | {
"login": "villmow",
"id": 2743060,
"type": "User"
} | [] | false | [] |
1,497,643,744 | 5,362 | Run 'GPT-J' failure due to download dataset fail (' ConnectionError: Couldn't reach http://eaidata.bmk.sh/data/enron_emails.jsonl.zst ' ) | ### Describe the bug
Run model "GPT-J" with dataset "the_pile" fail.
The fail out is as below:

Looks like which is due to "http://eaidata.bmk.sh/data/enron_emails.jsonl.zst" unreachable .
### Steps to ... | closed | https://github.com/huggingface/datasets/issues/5362 | 2022-12-15T01:23:03 | 2022-12-15T07:45:54 | 2022-12-15T07:45:53 | {
"login": "shaoyuta",
"id": 52023469,
"type": "User"
} | [] | false | [] |
1,497,153,889 | 5,361 | How concatenate `Audio` elements using batch mapping | ### Describe the bug
I am trying to concatenate audios in a dataset, e.g. `google/fleurs`.
```python
print(dataset)
# Dataset({
# features: ['path', 'audio'],
# num_rows: 24
# })
def mapper_function(batch):
# to merge every 3 audio
# np.concatenate(audios[i: i+3]) for i in range(i, len(batc...
"login": "bayartsogt-ya",
"id": 43239645,
"type": "User"
} | [] | false | [] |
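A hedged sketch of one way to do the merge with batched mapping, assuming the `audio` column is decoded and all clips share a sampling rate (the config and output column names are illustrative):
```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset("google/fleurs", "hy_am", split="train")  # config name is illustrative

def merge_audio(batch):
    arrays = [a["array"] for a in batch["audio"]]
    # one merged example per batch of 3 inputs
    return {"merged_audio": [np.concatenate(arrays)]}

merged = dataset.map(
    merge_audio,
    batched=True,
    batch_size=3,  # each group of 3 rows becomes a single merged row
    remove_columns=dataset.column_names,
)
```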
1,496,947,177 | 5,360 | IterableDataset returns duplicated data using PyTorch DDP | As mentioned in https://github.com/huggingface/datasets/issues/3423, when using PyTorch DDP the dataset ends up with duplicated data. We already check for the PyTorch `worker_info` for single node, but we should also check for `torch.distributed.get_world_size()` and `torch.distributed.get_rank()` | closed | https://github.com/huggingface/datasets/issues/5360 | 2022-12-14T16:06:19 | 2023-06-15T09:51:13 | 2023-01-16T13:33:33 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | false | [] |
1,495,297,857 | 5,359 | Raise error if ClassLabel names is not python list | Checks type of names provided to ClassLabel to avoid easy and hard to debug errors (closes #5332 - see for discussion) | closed | https://github.com/huggingface/datasets/pull/5359 | 2022-12-13T23:04:06 | 2022-12-22T16:35:49 | 2022-12-22T16:32:49 | {
"login": "freddyheppell",
"id": 1475568,
"type": "User"
} | [] | true | [] |
1,495,270,822 | 5,358 | Fix `fs.open` resource leaks | Invoking `{load,save}_from_dict` results in resource leak warnings, this should fix.
Introduces no significant logic changes. | closed | https://github.com/huggingface/datasets/pull/5358 | 2022-12-13T22:35:51 | 2023-01-05T16:46:31 | 2023-01-05T15:59:51 | {
"login": "tkukurin",
"id": 297847,
"type": "User"
} | [] | true | [] |
1,495,029,602 | 5,357 | Support torch dataloader without torch formatting | In https://github.com/huggingface/datasets/pull/5084 we make the torch formatting consistent with the map-style datasets formatting: a torch formatted iterable dataset will yield torch tensors.
The previous behavior of the torch formatting for iterable dataset was simply to make the iterable dataset inherit from `to... | closed | https://github.com/huggingface/datasets/pull/5357 | 2022-12-13T19:39:24 | 2023-01-04T12:45:40 | 2022-12-15T19:15:54 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
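A sketch of what this enables (dataset name hypothetical): an iterable dataset can be passed straight to a torch `DataLoader`, without `.with_format("torch")`:
```python
from torch.utils.data import DataLoader
from datasets import load_dataset

ids = load_dataset("rotten_tomatoes", split="train", streaming=True)
loader = DataLoader(ids, batch_size=8)  # no .with_format("torch") needed
for batch in loader:
    print(batch.keys())
    break
```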
1,494,961,609 | 5,356 | Clean filesystem and logging docstrings | This PR cleans the `Filesystems` and `Logging` docstrings. | closed | https://github.com/huggingface/datasets/pull/5356 | 2022-12-13T18:54:09 | 2022-12-14T17:25:58 | 2022-12-14T17:22:16 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [] | true | [] |
1,493,076,860 | 5,355 | Clean up Table class docstrings | This PR cleans up the `Table` class docstrings :) | closed | https://github.com/huggingface/datasets/pull/5355 | 2022-12-13T00:29:47 | 2022-12-13T18:17:56 | 2022-12-13T18:14:42 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [] | true | [] |
1,492,174,125 | 5,354 | Consider using "Sequence" instead of "List" | ### Feature request
Hi, please consider using `Sequence` type annotation instead of `List` in function arguments such as in [`Dataset.from_parquet()`](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L1088). It leads to type checking errors, see below.
**How to reproduce**
```py
... | open | https://github.com/huggingface/datasets/issues/5354 | 2022-12-12T15:39:45 | 2025-06-21T13:56:58 | null | {
"login": "tranhd95",
"id": 15568078,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "good first issue",
"color": "7057ff"
}
] | false | [] |
1,491,880,500 | 5,353 | Support remote file systems for `Audio` | ### Feature request
Hi there!
It would be super cool if `Audio()`, and potentially other features, could read files from a remote file system.
### Motivation
Large amounts of data is often stored in buckets. `load_from_disk` is able to retrieve data from cloud storage but to my knowledge actually copies the datas... | closed | https://github.com/huggingface/datasets/issues/5353 | 2022-12-12T13:22:13 | 2022-12-12T13:37:14 | 2022-12-12T13:37:14 | {
"login": "OllieBroadhurst",
"id": 46894149,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,490,796,414 | 5,352 | __init__() got an unexpected keyword argument 'input_size' | ### Describe the bug
I tried to define a custom configuration with an input_size attribute following the instructions under "Specifying several dataset configurations" in https://huggingface.co/docs/datasets/v1.2.1/add_dataset.html
But when I load the dataset, I get an error "__init__() got an unexpected keyword argument... | open | https://github.com/huggingface/datasets/issues/5352 | 2022-12-12T02:52:03 | 2022-12-19T01:38:48 | null | {
"login": "J-shel",
"id": 82662111,
"type": "User"
} | [] | false | [] |
1,490,659,504 | 5,351 | Do we need to implement `_prepare_split`? | ### Describe the bug
I'm not sure if this is a bug, just missing documentation, or if I'm not doing something correctly, but I'm subclassing `DatasetBuilder` and getting the following error because on the `DatasetBuilder` class the `_prepare_split` method is abstract (as are the others we are required to im...
"login": "jmwoloso",
"id": 7530947,
"type": "User"
} | [] | false | [] |
1,487,559,904 | 5,350 | Clean up Loading methods docstrings | Clean up for the docstrings in Loading methods! | closed | https://github.com/huggingface/datasets/pull/5350 | 2022-12-09T22:25:30 | 2022-12-12T17:27:20 | 2022-12-12T17:24:01 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [] | true | [] |
1,487,396,780 | 5,349 | Clean up remaining Main Classes docstrings | This PR cleans up the remaining docstrings in Main Classes (`IterableDataset`, `IterableDatasetDict`, and `Features`). | closed | https://github.com/huggingface/datasets/pull/5349 | 2022-12-09T20:17:15 | 2022-12-12T17:27:17 | 2022-12-12T17:24:13 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [] | true | [] |
1,486,975,626 | 5,348 | The data downloaded in the download folder of the cache does not respect `umask` | ### Describe the bug
For a project on a cluster, several users share the same cache for the datasets library, and we have a problem with the permissions on the data downloaded in the cache.
Indeed, it seems that the data is downloaded by giving read and write permissions only to the user launching the com... | open | https://github.com/huggingface/datasets/issues/5348 | 2022-12-09T15:46:27 | 2022-12-09T17:21:26 | null | {
"login": "SaulLu",
"id": 55560583,
"type": "User"
} | [] | false | [] |
1,486,920,261 | 5,347 | Force soundfile to return float32 instead of the default float64 | (Fixes issue #5345) | open | https://github.com/huggingface/datasets/pull/5347 | 2022-12-09T15:10:24 | 2023-01-17T16:12:49 | null | {
"login": "qmeeus",
"id": 25608944,
"type": "User"
} | [] | true | [] |
1,486,884,983 | 5,346 | [Quick poll] Give your opinion on the future of the Hugging Face Open Source ecosystem! | Thanks to all of you, Datasets is just about to pass 15k stars!
Since the last survey, a lot has happened: the [diffusers](https://github.com/huggingface/diffusers), [evaluate](https://github.com/huggingface/evaluate) and [skops](https://github.com/skops-dev/skops) libraries were born. `timm` joined the Hugging Face... | closed | https://github.com/huggingface/datasets/issues/5346 | 2022-12-09T14:48:02 | 2023-06-02T20:24:44 | 2023-01-25T19:35:40 | {
"login": "LysandreJik",
"id": 30755778,
"type": "User"
} | [] | false | [] |
1,486,555,384 | 5,345 | Wrong dtype for array in audio features | ### Describe the bug
When concatenating/interleaving different datasets, I stumble into an error because the features can't be aligned. After some investigation, I understood that the audio arrays had different dtypes, namely `float32` and `float64`. Consequently, the datasets cannot be merged.
### Steps to repro... | open | https://github.com/huggingface/datasets/issues/5345 | 2022-12-09T11:05:11 | 2023-02-10T14:39:28 | null | {
"login": "qmeeus",
"id": 25608944,
"type": "User"
} | [] | false | [] |
1,485,628,319 | 5,344 | Clean up Dataset and DatasetDict | This PR cleans up the docstrings for the other half of the methods in `Dataset` and finishes `DatasetDict`. | closed | https://github.com/huggingface/datasets/pull/5344 | 2022-12-09T00:02:08 | 2022-12-13T00:56:07 | 2022-12-13T00:53:02 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [] | true | [] |
1,485,297,823 | 5,343 | T5 for Q&A produces truncated sentence | Dear all, I am fine-tuning T5 for Q&A task using the MedQuAD ([GitHub - abachaa/MedQuAD: Medical Question Answering Dataset of 47,457 QA pairs created from 12 NIH websites](https://github.com/abachaa/MedQuAD)) dataset. In the dataset, there are many long answers with thousands of words. I have used pytorch_lightning to... | closed | https://github.com/huggingface/datasets/issues/5343 | 2022-12-08T19:48:46 | 2022-12-08T19:57:17 | 2022-12-08T19:57:17 | {
"login": "junyongyou",
"id": 13484072,
"type": "User"
} | [] | false | [] |
1,485,244,178 | 5,342 | Emotion dataset cannot be downloaded | ### Describe the bug
The emotion dataset gives a FileNotFoundError. The full error is: `FileNotFoundError: Couldn't find file at https://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt?dl=1`.
It was working yesterday (December 7, 2022), but stopped working today (December 8, 2022).
### Steps to reproduce the bug
... | closed | https://github.com/huggingface/datasets/issues/5342 | 2022-12-08T19:07:09 | 2023-02-23T19:13:19 | 2022-12-09T10:46:11 | {
"login": "cbarond",
"id": 78887193,
"type": "User"
} | [
{
"name": "duplicate",
"color": "cfd3d7"
}
] | false | [] |
1,484,376,644 | 5,341 | Remove tasks.json | After discussions in https://github.com/huggingface/datasets/pull/5335 we should remove this file that is not used anymore. We should update https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts instead. | closed | https://github.com/huggingface/datasets/pull/5341 | 2022-12-08T11:04:35 | 2022-12-09T12:26:21 | 2022-12-09T12:23:20 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,483,182,158 | 5,340 | Clean up DatasetInfo and Dataset docstrings | This PR cleans up the docstrings for `DatasetInfo` and about half of the methods in `Dataset`. | closed | https://github.com/huggingface/datasets/pull/5340 | 2022-12-08T00:17:53 | 2022-12-08T19:33:14 | 2022-12-08T19:30:10 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [] | true | [] |
1,482,817,424 | 5,339 | Add Video feature, videofolder, and video-classification task | This PR does the following:
- Adds `Video` feature (Resolves #5225 )
- Adds `video-classification` task
- Adds `videofolder` packaged module for easy loading of local video classification datasets
TODO:
- [ ] add tests
- [ ] add docs | closed | https://github.com/huggingface/datasets/pull/5339 | 2022-12-07T20:48:34 | 2024-01-11T06:30:24 | 2023-10-11T09:13:11 | {
"login": "nateraw",
"id": 32437151,
"type": "User"
} | [] | true | [] |
1,482,646,151 | 5,338 | `map()` stops every 1000 steps | ### Describe the bug
I am passing the following `prepare_dataset` function to `Dataset.map` (code adapted from [here](https://github.com/huggingface/community-events/blob/main/whisper-fine-tuning-event/run_speech_recognition_seq2seq_streaming.py#L454))
```python3
def prepare_dataset(batch):
# load and res... | closed | https://github.com/huggingface/datasets/issues/5338 | 2022-12-07T19:09:40 | 2025-02-14T18:10:07 | 2022-12-10T00:39:28 | {
"login": "bayartsogt-ya",
"id": 43239645,
"type": "User"
} | [] | false | [] |
1,481,692,156 | 5,337 | Support webdataset format | Webdataset is an efficient format for iterable datasets. It would be nice to support it in `datasets`, as discussed in https://github.com/rom1504/img2dataset/issues/234.
In particular it would be awesome to be able to load one using `load_dataset` in streaming mode (either from a local directory, or from a dataset o... | closed | https://github.com/huggingface/datasets/issues/5337 | 2022-12-07T11:32:25 | 2024-03-06T14:39:29 | 2024-03-06T14:39:28 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | false | [] |
1,479,649,900 | 5,336 | Set `IterableDataset.map` param `batch_size` typing as optional | This PR solves #5325
~Indeed we're using the typing for optional values as `Union[type, None]` as it's similar to how Python 3.10 handles optional values as `type | None`, instead of using `Optional[type]`.~
~Do we want to start using `Union[type, None]` for type-hinting optional values or just keep on using `Op... | closed | https://github.com/huggingface/datasets/pull/5336 | 2022-12-06T17:08:10 | 2022-12-07T14:14:56 | 2022-12-07T14:06:27 | {
"login": "alvarobartt",
"id": 36760800,
"type": "User"
} | [] | true | [] |
1,478,890,788 | 5,335 | Update tasks.json | Context:
* https://github.com/huggingface/datasets/issues/5255#issuecomment-1339107195
Cc: @osanseviero | closed | https://github.com/huggingface/datasets/pull/5335 | 2022-12-06T11:37:57 | 2023-09-24T10:06:42 | 2022-12-07T12:46:03 | {
"login": "sayakpaul",
"id": 22957388,
"type": "User"
} | [] | true | [] |
1,477,421,927 | 5,334 | Clean up docstrings | As raised by @polinaeterna in #5324, some of the docstrings are a bit of a mess because it has both Markdown and Sphinx syntax. This PR fixes the docstring for `DatasetBuilder`.
I'll start working on cleaning up the rest of the docstrings and removing the old Sphinx syntax (let me know if you prefer one big PR with... | closed | https://github.com/huggingface/datasets/pull/5334 | 2022-12-05T20:56:08 | 2022-12-09T01:44:25 | 2022-12-09T01:41:44 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
1,476,890,156 | 5,333 | fix: 🐛 pass the token to get the list of config names | Otherwise, get_dataset_infos doesn't work on gated or private datasets, even with the correct token. | closed | https://github.com/huggingface/datasets/pull/5333 | 2022-12-05T16:06:09 | 2022-12-06T08:25:17 | 2022-12-06T08:22:49 | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [] | true | [] |
1,476,513,072 | 5,332 | Passing numpy array to ClassLabel names causes ValueError | ### Describe the bug
If a numpy array is passed to the names argument of ClassLabel, creating a dataset with those features causes an error.
### Steps to reproduce the bug
https://colab.research.google.com/drive/1cV_es1PWZiEuus17n-2C-w0KEoEZ68IX
TLDR:
If I define my classes as:
```
my_classes = np.array(['on... | closed | https://github.com/huggingface/datasets/issues/5332 | 2022-12-05T12:59:03 | 2022-12-22T16:32:50 | 2022-12-22T16:32:50 | {
"login": "freddyheppell",
"id": 1475568,
"type": "User"
} | [] | false | [] |
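The fix adopted in #5359 above is to reject non-list `names` early; on the user side, a plain-list conversion avoids the error, sketched here:
```python
import numpy as np
from datasets import ClassLabel

my_classes = np.array(["one", "two", "three"])
# passing the ndarray directly triggers the reported ValueError downstream;
# converting to a Python list avoids it
labels = ClassLabel(names=my_classes.tolist())
```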
1,473,146,738 | 5,331 | Support for multiple configs in packaged modules via metadata yaml info | will solve https://github.com/huggingface/datasets/issues/5209 and https://github.com/huggingface/datasets/issues/5151 and many other...
Config parameters for packaged builders are parsed from `“builder_config”` field in README.md file (separate firs-level field, not part of “dataset_info”), example:
```yaml
---
... | closed | https://github.com/huggingface/datasets/pull/5331 | 2022-12-02T16:43:44 | 2023-07-24T15:49:54 | 2023-07-13T13:27:56 | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [] | true | [] |
1,471,999,125 | 5,329 | Clarify imagefolder is for small datasets | Based on feedback from [here](https://github.com/huggingface/datasets/issues/5317#issuecomment-1334108824), this PR adds a note to the `imagefolder` loading and creating docs that `imagefolder` is designed for small scale image datasets. | closed | https://github.com/huggingface/datasets/pull/5329 | 2022-12-01T21:47:29 | 2022-12-06T17:20:04 | 2022-12-06T17:16:53 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [] | true | [] |
1,471,661,437 | 5,328 | Fix docs building for main | This PR reverts the triggering event for building documentation introduced by:
- #5250
Fix #5326. | closed | https://github.com/huggingface/datasets/pull/5328 | 2022-12-01T17:07:45 | 2022-12-02T16:29:00 | 2022-12-02T16:26:00 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,471,657,247 | 5,327 | Avoid unwanted behaviour when splits from script and metadata are not matching because of outdated metadata | will fix #5315 | open | https://github.com/huggingface/datasets/pull/5327 | 2022-12-01T17:05:23 | 2023-01-23T12:48:29 | null | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [] | true | [] |
1,471,634,168 | 5,326 | No documentation for main branch is built | Since:
- #5250
- Commit: 703b84311f4ead83c7f79639f2dfa739295f0be6
the docs for main branch are no longer built.
The change introduced only triggers the docs building for releases. | closed | https://github.com/huggingface/datasets/issues/5326 | 2022-12-01T16:50:58 | 2022-12-02T16:26:01 | 2022-12-02T16:26:01 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,471,536,822 | 5,325 | map(...batch_size=None) for IterableDataset | ### Feature request
Dataset.map(...) allows batch_size to be None. It would be nice if IterableDataset did too.
### Motivation
Although it may seem a bit of a spurious request given that `IterableDataset` is meant for larger-than-memory datasets, there are a couple of reasons why this might be nice.
One is th... | closed | https://github.com/huggingface/datasets/issues/5325 | 2022-12-01T15:43:42 | 2022-12-07T15:54:43 | 2022-12-07T15:54:42 | {
"login": "frankier",
"id": 299380,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "good first issue",
"color": "7057ff"
}
] | false | [] |
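For reference, a minimal runnable sketch of the map-style behavior the request wants mirrored: `batch_size=None` feeds the whole dataset to the function as one batch:
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3, 4]})
ds = ds.map(
    lambda batch: {"total": [sum(batch["x"])] * len(batch["x"])},
    batched=True,
    batch_size=None,  # single batch containing the full dataset
)
print(ds["total"])  # [10, 10, 10, 10]
```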
1,471,524,512 | 5,324 | Fix docstrings and types in documentation that appears on the website | While I was working on https://github.com/huggingface/datasets/pull/5313 I've noticed that we have a mess in how we annotate types and format args and return values in the code. And some of it is displayed in the [Reference section](https://huggingface.co/docs/datasets/package_reference/builder_classes) of the document... | open | https://github.com/huggingface/datasets/issues/5324 | 2022-12-01T15:34:53 | 2024-01-23T16:21:54 | null | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | false | [] |
1,471,518,803 | 5,323 | Duplicated Keys in Taskmaster-2 Dataset | ### Describe the bug
Loading certain splits () of the taskmaster-2 dataset fails because of a DuplicatedKeysError. This occurs for the following domains: `'hotels', 'movies', 'music', 'sports'`. The domains `'flights', 'food-ordering', 'restaurant-search'` load fine.
Output:
### Steps to reproduce the bug
```
... | closed | https://github.com/huggingface/datasets/issues/5323 | 2022-12-01T15:31:06 | 2022-12-01T16:26:06 | 2022-12-01T16:26:06 | {
"login": "liaeh",
"id": 52380283,
"type": "User"
} | [] | false | [] |
1,471,502,162 | 5,322 | Raise error for `.tar` archives in the same way as for `.tar.gz` and `.tgz` in `_get_extraction_protocol` | Currently `download_and_extract` doesn't throw an error when it is used with files with `.tar` extension in streaming mode because `_get_extraction_protocol` doesn't do it (like it does for `tar.gz` and `tgz`). `_get_extraction_protocol` returns a formatted URL as if we supported the tar protocol, but we don't.
That means tha... | closed | https://github.com/huggingface/datasets/pull/5322 | 2022-12-01T15:19:28 | 2022-12-14T16:37:16 | 2022-12-14T16:33:30 | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [] | true | [] |
1,471,430,667 | 5,321 | Fix loading from HF GCP cache | As reported in https://discuss.huggingface.co/t/error-loading-wikipedia-dataset/26599/4 it's not possible to download a cached version of Wikipedia from the HF GCP cache
I fixed it and added an integration test (runs in 10sec) | closed | https://github.com/huggingface/datasets/pull/5321 | 2022-12-01T14:39:06 | 2022-12-01T16:10:09 | 2022-12-01T16:07:02 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,471,360,910 | 5,320 | [Extract] Place the lock file next to the destination directory | Previously it was placed next to the archive to extract, but the archive can be in a read-only directory as noticed in https://github.com/huggingface/datasets/issues/5295
Therefore I moved the lock location to be next to the destination directory, which is required to have write permissions | closed | https://github.com/huggingface/datasets/pull/5320 | 2022-12-01T13:55:49 | 2022-12-01T15:36:44 | 2022-12-01T15:33:58 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,470,945,515 | 5,319 | Fix Text sample_by paragraph | Fix #5316. | closed | https://github.com/huggingface/datasets/pull/5319 | 2022-12-01T09:08:09 | 2022-12-01T15:21:44 | 2022-12-01T15:19:00 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,470,749,750 | 5,318 | Origin/fix missing features error | This fixes the problem of when the dataset_load function reads a function with "features" provided but some read batches don't have columns that later show up. For instance, the provided "features" requires columns A,B,C but only columns B,C show. This fixes this by adding the column A with nulls. | closed | https://github.com/huggingface/datasets/pull/5318 | 2022-12-01T06:18:39 | 2022-12-12T19:06:42 | 2022-12-04T05:49:39 | {
"login": "eunseojo",
"id": 12104720,
"type": "User"
} | [] | true | [] |
1,470,390,164 | 5,317 | `ImageFolder` performs poorly with large datasets | ### Describe the bug
While testing image dataset creation, I'm seeing significant performance bottlenecks with imagefolders when scanning a directory structure with a large number of images.
## Setup
* Nested directories (5 levels deep)
* 3M+ images
* 1 `metadata.jsonl` file
## Performance Degradation Point... | open | https://github.com/huggingface/datasets/issues/5317 | 2022-12-01T00:04:21 | 2022-12-01T21:49:26 | null | {
"login": "salieri",
"id": 1086393,
"type": "User"
} | [] | false | [] |
1,470,115,681 | 5,316 | Bug in sample_by="paragraph" | ### Describe the bug
I think [this line](https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/text/text.py#L96) is wrong and should be `batch = f.read(self.config.chunksize)`. Otherwise it will never terminate because even when `f` is finished reading, `batch` will still be truthy from the l... | closed | https://github.com/huggingface/datasets/issues/5316 | 2022-11-30T19:24:13 | 2022-12-01T15:19:02 | 2022-12-01T15:19:02 | {
"login": "adampauls",
"id": 1243668,
"type": "User"
} | [] | false | [] |
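A hypothetical reconstruction of the loop shape being described (not the actual source), to make the non-termination concrete:
```python
import io

chunksize = 4
f = io.StringIO("para one\n\npara two\n")

batch = f.read(chunksize)
while batch:
    # ... split `batch` on "\n\n" and yield complete paragraphs here ...
    # buggy form per the report: `batch += f.read(chunksize)` appends "" at EOF,
    # so `batch` stays truthy and the loop never ends; reading fresh fixes it:
    batch = f.read(chunksize)
```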
1,470,026,797 | 5,315 | Adding new splits to a dataset script with existing old splits info in metadata's `dataset_info` fails | ### Describe the bug
If you first create a custom dataset with a specific set of splits, generate metadata with `datasets-cli test ... --save_info`, then change your script to include more splits, it fails.
That's what happened in https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/discussions/2#6385f... | open | https://github.com/huggingface/datasets/issues/5315 | 2022-11-30T18:02:15 | 2022-12-02T07:02:53 | null | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,469,685,118 | 5,314 | Datasets: classification_report() got an unexpected keyword argument 'suffix' | https://github.com/huggingface/datasets/blob/main/metrics/seqeval/seqeval.py
import datasets
predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
seqeval = datasets.load_metri... | closed | https://github.com/huggingface/datasets/issues/5314 | 2022-11-30T14:01:03 | 2023-07-21T14:40:31 | 2023-07-21T14:40:31 | {
"login": "JonathanAlis",
"id": 42126634,
"type": "User"
} | [] | false | [] |
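For reference, the documented way to call the metric with the inputs quoted above; the TypeError presumably surfaces when the metric forwards `suffix` to a `seqeval` package version whose `classification_report` doesn't accept that keyword:
```python
import datasets

predictions = [["O", "O", "B-MISC", "I-MISC", "I-MISC", "I-MISC", "O"], ["B-PER", "I-PER", "O"]]
references = [["O", "O", "O", "B-MISC", "I-MISC", "I-MISC", "O"], ["B-PER", "I-PER", "O"]]

seqeval = datasets.load_metric("seqeval")
results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_f1"])
```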
1,468,484,136 | 5,313 | Fix description of streaming in the docs | We say that "the data is being downloaded progressively" which is not true, it's just streamed, so I fixed it. Probably I missed some other places where it is written?
Also changed docstrings for `StreamingDownloadManager`'s `download` and `extract` to reflect the same, as these docstrings are displayed in the docu... | closed | https://github.com/huggingface/datasets/pull/5313 | 2022-11-29T18:00:28 | 2022-12-01T14:55:30 | 2022-12-01T14:00:34 | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [] | true | [] |