id (int64) | number (int64) | title (string) | body (string ⌀) | state (string, 2 classes) | html_url (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s] ⌀) | user (dict) | labels (list) | is_pull_request (bool) | comments (list)
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,288,029,377 | 4,592 | Issue with jalFaizy/detect_chess_pieces when running datasets-cli test | ### Link
https://huggingface.co/datasets/jalFaizy/detect_chess_pieces
### Description
I am trying to write an appropriate data loader for [a custom dataset](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces) using [this script](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/blob/main/detect_c... | closed | https://github.com/huggingface/datasets/issues/4592 | 2022-06-29T00:15:54 | 2022-06-29T10:30:03 | 2022-06-29T07:49:27 | {
"login": "faizankshaikh",
"id": 8406903,
"type": "User"
} | [] | false | [] |
1,288,021,332 | 4,591 | Can't push Images to hub with manual Dataset | ## Describe the bug
If I create a dataset that includes an 'Image' feature manually, the decoded images are not pushed when pushing to the Hub;
instead, it looks for the image at the local path where it is (or used to be).
This doesn't (or at least didn't use to) happen with imagefolder. I want to build the dataset manually because it is compli... | closed | https://github.com/huggingface/datasets/issues/4591 | 2022-06-29T00:01:23 | 2022-07-08T12:01:36 | 2022-07-08T12:01:35 | {
"login": "cceyda",
"id": 15624271,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,287,941,058 | 4,590 | Generalize meta_path json file creation in load.py [#4540] | # What does this PR do?
## Summary
*In function `_copy_script_and_other_resources_in_importable_dir`, using string split when generating `meta_path` throws an error in the edge case raised in #4540.*
## Additions
-
## Changes
- Changed meta_path to use `os.path.splitext` instead of using `str.split` to gener... | closed | https://github.com/huggingface/datasets/pull/4590 | 2022-06-28T21:48:06 | 2022-07-08T14:55:13 | 2022-07-07T13:17:45 | {
"login": "VijayKalmath",
"id": 20517962,
"type": "User"
} | [] | true | [] |
1,287,600,029 | 4,589 | Permission denied: '/home/.cache' when load_dataset with local script | null | closed | https://github.com/huggingface/datasets/issues/4589 | 2022-06-28T16:26:03 | 2022-06-29T06:26:28 | 2022-06-29T06:25:08 | {
"login": "jiangh0",
"id": 24559732,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,287,368,751 | 4,588 | Host head_qa data on the Hub and fix NonMatchingChecksumError | This PR:
- Hosts head_qa data on the Hub instead of Google Drive
- Fixes NonMatchingChecksumError
Fix https://huggingface.co/datasets/head_qa/discussions/1 | closed | https://github.com/huggingface/datasets/pull/4588 | 2022-06-28T13:39:28 | 2022-07-05T16:01:15 | 2022-07-05T15:49:52 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,287,291,494 | 4,587 | Validate new_fingerprint passed by user | Users can pass the dataset fingerprint they want in `map` and other dataset transforms.
However the fingerprint is used to name cache files so we need to make sure it doesn't contain bad characters as mentioned in https://github.com/huggingface/datasets/issues/1718, and that it's not too long | closed | https://github.com/huggingface/datasets/pull/4587 | 2022-06-28T12:46:21 | 2022-06-28T14:11:57 | 2022-06-28T14:00:44 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,287,105,636 | 4,586 | Host pn_summary data on the Hub instead of Google Drive | Fix #4581. | closed | https://github.com/huggingface/datasets/pull/4586 | 2022-06-28T10:05:05 | 2022-06-28T14:52:56 | 2022-06-28T14:42:03 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,287,064,929 | 4,585 | Host multi_news data on the Hub instead of Google Drive | Host data files of multi_news dataset on the Hub.
They were on Google Drive.
Fix #4580. | closed | https://github.com/huggingface/datasets/pull/4585 | 2022-06-28T09:32:06 | 2022-06-28T14:19:35 | 2022-06-28T14:08:48 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,286,911,993 | 4,584 | Add binary classification task IDs | As a precursor to aligning the task IDs in `datasets` and AutoTrain, we need a way to distinguish binary vs multiclass vs multilabel classification.
This PR adds binary classification to the task IDs to enable this.
Related AutoTrain issue: https://github.com/huggingface/autonlp-backend/issues/597
cc @abhishek... | closed | https://github.com/huggingface/datasets/pull/4584 | 2022-06-28T07:30:39 | 2023-09-24T10:04:04 | 2023-01-26T09:27:52 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | true | [] |
1,286,790,871 | 4,583 | <code> implementation of FLAC support using torchaudio | I have added FLAC audio support with torchaudio, given that Librosa and SoundFile can cause problems. Also, FLAC is used as the audio format in https://mlcommons.org/en/peoples-speech/ | closed | https://github.com/huggingface/datasets/pull/4583 | 2022-06-28T05:24:21 | 2022-06-28T05:47:02 | 2022-06-28T05:47:02 | {
"login": "rafael-ariascalles",
"id": 45745870,
"type": "User"
} | [] | true | [] |
1,286,517,060 | 4,582 | add_column should preserve _indexes | https://github.com/huggingface/datasets/issues/3769#issuecomment-1167146126
Doing `.add_column("x", x_data)` also removed any `_indexes` on the dataset; we decided this shouldn't be the case.
This was because `add_column` was creating a new `Dataset(...)`, and it wasn't possible to pass indexes on init.
With this PR now... | open | https://github.com/huggingface/datasets/pull/4582 | 2022-06-27T22:35:47 | 2022-07-06T15:19:54 | null | {
"login": "cceyda",
"id": 15624271,
"type": "User"
} | [] | true | [] |
1,286,362,907 | 4,581 | Dataset Viewer issue for pn_summary | ### Link
https://huggingface.co/datasets/pn_summary/viewer/1.0.0/validation
### Description
Getting an index error on the `validation` and `test` splits:
```
Server error
Status code: 400
Exception: IndexError
Message: list index out of range
```
### Owner
No | closed | https://github.com/huggingface/datasets/issues/4581 | 2022-06-27T20:56:12 | 2022-06-28T14:42:03 | 2022-06-28T14:42:03 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,286,312,912 | 4,580 | Dataset Viewer issue for multi_news | ### Link
https://huggingface.co/datasets/multi_news
### Description
Not sure what the index error is referring to here:
```
Status code: 400
Exception: IndexError
Message: list index out of range
```
### Owner
No | closed | https://github.com/huggingface/datasets/issues/4580 | 2022-06-27T20:25:25 | 2022-06-28T14:08:48 | 2022-06-28T14:08:48 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,286,106,285 | 4,579 | Support streaming cfq dataset | Support streaming cfq dataset. | closed | https://github.com/huggingface/datasets/pull/4579 | 2022-06-27T17:11:23 | 2022-07-04T19:35:01 | 2022-07-04T19:23:57 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,286,086,400 | 4,578 | [Multi Configs] Use directories to differentiate between subsets/configurations | Currently to define several subsets/configurations of your dataset, you need to use a dataset script.
However, it would be nice to have a no-code way to do this.
For example we could specify different configurations of a dataset (for example, if a dataset contains different languages) with one directory per confi... | open | https://github.com/huggingface/datasets/issues/4578 | 2022-06-27T16:55:11 | 2023-06-14T15:43:05 | null | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,285,703,775 | 4,577 | Add authentication tip to `load_dataset` | Add an authentication tip similar to the one in transformers' `PreTrainedModel.from_pretrained` to `load_dataset`/`load_dataset_builder`. | closed | https://github.com/huggingface/datasets/pull/4577 | 2022-06-27T12:05:34 | 2022-07-04T13:13:15 | 2022-07-04T13:01:30 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,285,698,576 | 4,576 | Include `metadata.jsonl` in resolved data files | Include `metadata.jsonl` in resolved data files.
Fix #4548
@lhoestq ~~https://github.com/huggingface/datasets/commit/d94336d30eef17fc9abc67f67fa1c139661f4e75 adds support for metadata files placed at the root, and https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 accounts fo... | closed | https://github.com/huggingface/datasets/pull/4576 | 2022-06-27T12:01:29 | 2022-07-01T12:44:55 | 2022-06-30T10:15:32 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,285,446,700 | 4,575 | Problem about wmt17 zh-en dataset | It seems that in subset casia2015, some samples are like `{'c[hn]':'xxx', 'en': 'aa'}`.
So when using `data = load_dataset('wmt17', "zh-en")` to load the wmt17 zh-en dataset, which will raise the exception:
```
Traceback (most recent call last):
File "train.py", line 78, in <module>
data = load_dataset(args.... | closed | https://github.com/huggingface/datasets/issues/4575 | 2022-06-27T08:35:42 | 2022-08-23T10:01:02 | 2022-08-23T10:00:21 | {
"login": "winterfell2021",
"id": 85819194,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,285,380,616 | 4,574 | Support streaming mlsum dataset | Support streaming mlsum dataset.
This PR:
- pins `fsspec` min version with fixed BlockSizeError: `fsspec[http]>=2021.11.1`
- https://github.com/fsspec/filesystem_spec/pull/830
- unpins `s3fs==2021.08.1` to align it with `fsspec` requirement: `s3fs>=2021.11.1`
> s3fs 2021.8.1 requires fsspec==2021.08.1
- s... | closed | https://github.com/huggingface/datasets/pull/4574 | 2022-06-27T07:37:03 | 2022-07-21T13:37:30 | 2022-07-21T12:40:00 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,285,023,629 | 4,573 | Fix evaluation metadata for ncbi_disease | This PR fixes the task in the evaluation metadata and removes the metrics info as we've decided this is not a great way to propagate this information downstream. | closed | https://github.com/huggingface/datasets/pull/4573 | 2022-06-26T20:29:32 | 2023-09-24T09:35:07 | 2022-09-23T09:38:02 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,285,022,499 | 4,572 | Dataset Viewer issue for mlsum | ### Link
https://huggingface.co/datasets/mlsum/viewer/de/train
### Description
There seems to be a problem with the download / streaming of this dataset:
```
Server error
Status code: 400
Exception: BadZipFile
Message: File is not a zip file
```
### Owner
No | closed | https://github.com/huggingface/datasets/issues/4572 | 2022-06-26T20:24:17 | 2022-07-21T12:40:01 | 2022-07-21T12:40:01 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,284,883,289 | 4,571 | move under the facebook org? | ### Link
https://huggingface.co/datasets/gsarti/flores_101
### Description
It seems like streaming isn't supported for this dataset:
```
Server Error
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://dl.fbaipublicfiles.com/flores101/dataset... | open | https://github.com/huggingface/datasets/issues/4571 | 2022-06-26T11:19:09 | 2023-09-25T12:05:18 | null | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | false | [] |
1,284,846,168 | 4,570 | Dataset sharding non-contiguous? | ## Describe the bug
I'm not sure if this is a bug; it's more likely normal behavior, but I wanted to double-check.
Is it normal that `datasets.shard` does not produce chunks that, when concatenated, reproduce the original ordering of the sharded dataset?
This might be related to this pull request (https://github.com/huggi... | closed | https://github.com/huggingface/datasets/issues/4570 | 2022-06-26T08:34:05 | 2022-06-30T11:00:47 | 2022-06-26T14:36:20 | {
"login": "cakiki",
"id": 3664563,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,284,833,694 | 4,569 | Dataset Viewer issue for sst2 | ### Link
https://huggingface.co/datasets/sst2
### Description
Not sure what is causing this, however it seems that `load_dataset("sst2")` also hangs (even though it downloads the files without problem):
```
Status code: 400
Exception: Exception
Message: Give up after 5 attempts with Connectio... | closed | https://github.com/huggingface/datasets/issues/4569 | 2022-06-26T07:32:54 | 2022-06-27T06:37:48 | 2022-06-27T06:37:48 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,284,655,624 | 4,568 | XNLI cache reload is very slow | ### Reproduce
Using `2.3.3.dev0`
`from datasets import load_dataset`
`load_dataset("xnli", "en")`
Turn off Internet
`load_dataset("xnli", "en")`
I eventually cancelled the second `load_dataset` because it took too long. It would be great to have something to specify, e.g. `only_load_from_cache`, and avoid the ... | closed | https://github.com/huggingface/datasets/issues/4568 | 2022-06-25T16:43:56 | 2022-07-04T14:29:40 | 2022-07-04T14:29:40 | {
"login": "Muennighoff",
"id": 62820084,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,284,528,474 | 4,567 | Add evaluation data for amazon_reviews_multi | null | closed | https://github.com/huggingface/datasets/pull/4567 | 2022-06-25T09:40:52 | 2023-09-24T09:35:22 | 2022-09-23T09:37:23 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,284,397,594 | 4,566 | Document link #load_dataset_enhancing_performance points to nowhere | ## Describe the bug
A clear and concise description of what the bug is.

The [load_dataset_enhancing_performance](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#load_dat... | closed | https://github.com/huggingface/datasets/issues/4566 | 2022-06-25T01:18:19 | 2023-01-24T16:33:40 | 2023-01-24T16:33:40 | {
"login": "subercui",
"id": 11674033,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,284,141,666 | 4,565 | Add UFSC OCPap dataset | ## Adding a Dataset
- **Name:** UFSC OCPap: Papanicolaou Stained Oral Cytology Dataset (v4)
- **Description:** The UFSC OCPap dataset comprises 9,797 labeled images of 1200x1600 pixels acquired from 5 slides of cancer diagnosed and 3 healthy of oral brush samples, from distinct patients.
- **Paper:** https://dx.doi.... | closed | https://github.com/huggingface/datasets/issues/4565 | 2022-06-24T20:07:54 | 2022-07-06T19:03:02 | 2022-07-06T19:03:02 | {
"login": "johnnv1",
"id": 20444345,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,283,932,333 | 4,564 | Support streaming bookcorpus dataset | Support streaming bookcorpus dataset. | closed | https://github.com/huggingface/datasets/pull/4564 | 2022-06-24T16:13:39 | 2022-07-06T09:34:48 | 2022-07-06T09:23:04 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,283,914,383 | 4,563 | Support streaming allocine dataset | Support streaming allocine dataset.
Fix #4562. | closed | https://github.com/huggingface/datasets/pull/4563 | 2022-06-24T15:55:03 | 2022-06-24T16:54:57 | 2022-06-24T16:44:41 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,283,779,557 | 4,562 | Dataset Viewer issue for allocine | ### Link
https://huggingface.co/datasets/allocine
### Description
Not sure if this is a problem with `bz2` compression, but I thought these datasets could be streamed:
```
Status code: 400
Exception: AttributeError
Message: 'TarContainedFile' object has no attribute 'readable'
```
### Owner
No | closed | https://github.com/huggingface/datasets/issues/4562 | 2022-06-24T13:50:38 | 2022-06-27T06:39:32 | 2022-06-24T16:44:41 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,283,624,242 | 4,561 | Add evaluation data to acronym_identification | null | closed | https://github.com/huggingface/datasets/pull/4561 | 2022-06-24T11:17:33 | 2022-06-27T09:37:55 | 2022-06-27T08:49:22 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | true | [] |
1,283,558,873 | 4,560 | Add evaluation metadata to imagenet-1k | null | closed | https://github.com/huggingface/datasets/pull/4560 | 2022-06-24T10:12:41 | 2023-09-24T09:35:32 | 2022-09-23T09:37:03 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,283,544,937 | 4,559 | Add action names in schema_guided_dstc8 dataset card | As aseked in https://huggingface.co/datasets/schema_guided_dstc8/discussions/1, I added the action names in the dataset card | closed | https://github.com/huggingface/datasets/pull/4559 | 2022-06-24T10:00:01 | 2022-06-24T10:54:28 | 2022-06-24T10:43:47 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,283,479,650 | 4,558 | Add evaluation metadata to wmt14 | null | closed | https://github.com/huggingface/datasets/pull/4558 | 2022-06-24T09:08:54 | 2023-09-24T09:35:39 | 2022-09-23T09:36:50 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,283,473,889 | 4,557 | Add evaluation metadata to wmt16 | Just to confirm: we should add this metadata via GitHub and not Hub PRs for canonical datasets right? | closed | https://github.com/huggingface/datasets/pull/4557 | 2022-06-24T09:04:23 | 2023-09-24T09:35:49 | 2022-09-23T09:36:32 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,283,462,881 | 4,556 | Dataset Viewer issue for conll2003 | ### Link
https://huggingface.co/datasets/conll2003/viewer/conll2003/test
### Description
Seems like a cache problem with this config / split:
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/conll... | closed | https://github.com/huggingface/datasets/issues/4556 | 2022-06-24T08:55:18 | 2022-06-24T09:50:39 | 2022-06-24T09:50:39 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,283,451,651 | 4,555 | Dataset Viewer issue for xtreme | ### Link
https://huggingface.co/datasets/xtreme/viewer/PAN-X.de/test
### Description
There seems to be a problem with the cache of this config / split:
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/data... | closed | https://github.com/huggingface/datasets/issues/4555 | 2022-06-24T08:46:08 | 2022-06-24T09:50:45 | 2022-06-24T09:50:45 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,283,369,453 | 4,554 | Fix WMT dataset loading issue and docs update (Re-opened) | This PR is a fix for #4354
Changes are made for `wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t`. And READMEs are updated for the corresponding datasets.
Let me know, if any additional changes are required.
Thanks | closed | https://github.com/huggingface/datasets/pull/4554 | 2022-06-24T07:26:16 | 2022-07-08T15:39:20 | 2022-07-08T15:27:44 | {
"login": "khushmeeet",
"id": 8711912,
"type": "User"
} | [] | true | [] |
1,282,779,560 | 4,553 | Stop dropping columns in to_tf_dataset() before we load batches | `to_tf_dataset()` dropped unnecessary columns before loading batches from the dataset, but this is causing problems when using a transform, because the dropped columns might be needed to compute the transform. Since there's no real way to check which columns the transform might need, we skip dropping columns and instea... | closed | https://github.com/huggingface/datasets/pull/4553 | 2022-06-23T18:21:05 | 2022-07-04T19:00:13 | 2022-07-04T18:49:01 | {
"login": "Rocketknight1",
"id": 12866554,
"type": "User"
} | [] | true | [] |
1,282,615,646 | 4,552 | Tell users to upload on the hub directly | As noted in https://github.com/huggingface/datasets/pull/4534, it is still not clear that it is recommended to add datasets on the Hugging Face Hub directly instead of GitHub, so I updated some docs.
Moreover since users won't be able to get reviews from us on the Hub, I added a paragraph to tell users that they can... | closed | https://github.com/huggingface/datasets/pull/4552 | 2022-06-23T15:47:52 | 2022-06-26T15:49:46 | 2022-06-26T15:39:11 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,282,534,807 | 4,551 | Perform hidden file check on relative data file path | Fix #4549 | closed | https://github.com/huggingface/datasets/pull/4551 | 2022-06-23T14:49:11 | 2022-06-30T14:49:20 | 2022-06-30T14:38:18 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,282,374,441 | 4,550 | imdb source error | ## Describe the bug
imdb dataset not loading
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("imdb")
```
## Expected results
## Actual results
```bash
06/23/2022 14:45:18 - INFO - datasets.builder - Dataset not on Hf google storage. Downloading and pr... | closed | https://github.com/huggingface/datasets/issues/4550 | 2022-06-23T13:02:52 | 2022-06-23T13:47:05 | 2022-06-23T13:47:04 | {
"login": "Muhtasham",
"id": 20128202,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,282,312,975 | 4,549 | FileNotFoundError when passing a data_file inside a directory starting with double underscores | Bug experienced in the `accelerate` CI: https://github.com/huggingface/accelerate/runs/7016055148?check_suite_focus=true
This is related to https://github.com/huggingface/datasets/pull/4505 and the changes from https://github.com/huggingface/datasets/pull/4412 | closed | https://github.com/huggingface/datasets/issues/4549 | 2022-06-23T12:19:24 | 2022-06-30T14:38:18 | 2022-06-30T14:38:18 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,282,218,096 | 4,548 | Metadata.jsonl for Imagefolder is ignored if it's in a parent directory to the splits directories/do not have "{split}_" prefix | If data contains a single `metadata.jsonl` file for several splits, it won't be included in a dataset's `data_files` and therefore ignored.
This happens when a directory is structured as follows:
```
train/
file_1.jpg
file_2.jpg
test/
file_3.jpg
file_4.jpg
metadata.jsonl
```
or as follows:... | closed | https://github.com/huggingface/datasets/issues/4548 | 2022-06-23T10:58:57 | 2022-06-30T10:15:32 | 2022-06-30T10:15:32 | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [] | false | [] |
1,282,160,517 | 4,547 | [CI] Fix some warnings | There are some warnings in the CI that are annoying, I tried to remove most of them | closed | https://github.com/huggingface/datasets/pull/4547 | 2022-06-23T10:10:49 | 2022-06-28T14:10:57 | 2022-06-28T13:59:54 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,282,093,288 | 4,546 | [CI] fixing seqeval install in ci by pinning setuptools-scm | The latest setuptools-scm version supported on 3.6 is 6.4.2. However for some reason circleci has version 7, which doesn't work.
I fixed this by pinning the version of setuptools-scm in the circleci job
Fix https://github.com/huggingface/datasets/issues/4544 | closed | https://github.com/huggingface/datasets/pull/4546 | 2022-06-23T09:24:37 | 2022-06-23T10:24:16 | 2022-06-23T10:13:44 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,280,899,028 | 4,545 | Make DuplicateKeysError more user friendly [For Issue #2556] | # What does this PR do?
## Summary
*The DuplicateKeysError does not provide any information regarding the examples which have the same key.*
*This information is very helpful for debugging the dataset generator script.*
## Additions
-
## Changes
- Changed `DuplicateKeysError Class` in `src/datase... | closed | https://github.com/huggingface/datasets/pull/4545 | 2022-06-22T21:01:34 | 2022-06-28T09:37:06 | 2022-06-28T09:26:04 | {
"login": "VijayKalmath",
"id": 20517962,
"type": "User"
} | [] | true | [] |
1,280,500,340 | 4,544 | [CI] seqeval installation fails sometimes on python 3.6 | The CI sometimes fails to install seqeval, which causes the `seqeval` metric tests to fail.
The installation fails because of this error:
```
Collecting seqeval
Downloading seqeval-1.2.2.tar.gz (43 kB)
|███████▌ | 10 kB 42.1 MB/s eta 0:00:01
|███████████████ ... | closed | https://github.com/huggingface/datasets/issues/4544 | 2022-06-22T16:35:23 | 2022-06-23T10:13:44 | 2022-06-23T10:13:44 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | false | [] |
1,280,379,781 | 4,543 | [CI] Fix upstream hub test url | Some tests were still using moon-staging instead of hub-ci.
I also updated the token to use one dedicated to `datasets` | closed | https://github.com/huggingface/datasets/pull/4543 | 2022-06-22T15:34:27 | 2022-06-22T16:37:40 | 2022-06-22T16:27:37 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,280,269,445 | 4,542 | [to_tf_dataset] Use Feather for better compatibility with TensorFlow ? | To have better performance in TensorFlow, it is important to provide lists of data files in supported formats. For example sharded TFRecords datasets are extremely performant. This is because tf.data can better leverage parallelism in this case, and load one file at a time in memory.
It seems that using `tensorflow_... | open | https://github.com/huggingface/datasets/issues/4542 | 2022-06-22T14:42:00 | 2022-10-11T08:45:45 | null | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "generic discussion",
"color": "c5def5"
}
] | false | [] |
1,280,161,436 | 4,541 | Fix timestamp conversion from Pandas to Python datetime in streaming mode | Arrow accepts both pd.Timestamp and datetime.datetime objects to create timestamp arrays.
However a timestamp array is always converted to datetime.datetime objects.
This created an inconsistency between streaming and non-streaming, e.g. the `ett` dataset outputs datetime.datetime objects in non-streaming but pd.tim... | closed | https://github.com/huggingface/datasets/pull/4541 | 2022-06-22T13:40:01 | 2022-06-22T16:39:27 | 2022-06-22T16:29:09 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,280,142,942 | 4,540 | Avoid splitting by` .py` for the file. | https://github.com/huggingface/datasets/blob/90b3a98065556fc66380cafd780af9b1814b9426/src/datasets/load.py#L272
Hello,
Thank you for this library.
I was using it and hit one edge case: my home folder name ends with `.py` (it is `/home/espoir.py`), so anytime I am running the code to load a local module thi... | closed | https://github.com/huggingface/datasets/issues/4540 | 2022-06-22T13:26:55 | 2022-07-07T13:17:44 | 2022-07-07T13:17:44 | {
"login": "espoirMur",
"id": 18573157,
"type": "User"
} | [
{
"name": "good first issue",
"color": "7057ff"
}
] | false | [] |
1,279,779,829 | 4,539 | Replace deprecated logging.warn with logging.warning | Replace `logging.warn` (deprecated in [Python 2.7, 2011](https://github.com/python/cpython/commit/04d5bc00a219860c69ea17eaa633d3ab9917409f)) with `logging.warning` (added in [Python 2.3, 2003](https://github.com/python/cpython/commit/6fa635df7aa88ae9fd8b41ae42743341316c90f7)).
* https://docs.python.org/3/library/log... | closed | https://github.com/huggingface/datasets/pull/4539 | 2022-06-22T08:32:29 | 2022-06-22T13:43:23 | 2022-06-22T12:51:51 | {
"login": "hugovk",
"id": 1324225,
"type": "User"
} | [] | true | [] |
1,279,409,786 | 4,538 | Dataset Viewer issue for Pile of Law | ### Link
https://huggingface.co/datasets/pile-of-law/pile-of-law
### Description
Hi, I would like to turn off the dataset viewer for our dataset without enabling access requests. To comply with upstream dataset creator requests/licenses, we would like to make sure that the data is not indexed by search engines... | closed | https://github.com/huggingface/datasets/issues/4538 | 2022-06-22T02:48:40 | 2022-06-27T07:30:23 | 2022-06-26T22:26:22 | {
"login": "Breakend",
"id": 1609857,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,279,144,310 | 4,537 | Fix WMT dataset loading issue and docs update | This PR is a fix for #4354
Changes are made for `wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t`. And READMEs are updated for the corresponding datasets.
As I am on an M1 Mac, I am not able to create a virtual `dev` environment using `pip install -e ".[dev]"`. The issue is with `tensorflow-text` not... | closed | https://github.com/huggingface/datasets/pull/4537 | 2022-06-21T21:48:02 | 2022-06-24T07:05:43 | 2022-06-24T07:05:10 | {
"login": "khushmeeet",
"id": 8711912,
"type": "User"
} | [] | true | [] |
1,278,734,727 | 4,536 | Properly raise FileNotFound even if the dataset is private | `tests/test_load.py::test_load_streaming_private_dataset` was failing because the hub now returns 401 when getting the HfApi.dataset_info of a dataset without authentication. `load_dataset` was raising ConnectionError, while it should be FileNotFoundError since it first checks for local files before checking the Hub.
... | closed | https://github.com/huggingface/datasets/pull/4536 | 2022-06-21T17:05:50 | 2022-06-28T10:46:51 | 2022-06-28T10:36:10 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,278,365,039 | 4,535 | Add `batch_size` parameter when calling `add_faiss_index` and `add_faiss_index_from_external_arrays` | Currently, even though the `batch_size` when adding vectors to the FAISS index can be tweaked in `FaissIndex.add_vectors()`, the function `ArrowDataset.add_faiss_index` doesn't have either the parameter `batch_size` to be propagated to the nested `FaissIndex.add_vectors` function or `*args, **kwargs`, so on, this PR ad... | closed | https://github.com/huggingface/datasets/pull/4535 | 2022-06-21T12:18:49 | 2022-06-27T16:25:09 | 2022-06-27T16:14:36 | {
"login": "alvarobartt",
"id": 36760800,
"type": "User"
} | [] | true | [] |
1,277,897,197 | 4,534 | Add `tldr_news` dataset | This PR aims at adding support for a news dataset: `tldr news`.
This dataset is based on the daily [tldr tech newsletter](https://tldr.tech/newsletter) and contains a `headline` as well as a `content` for every piece of news contained in a newsletter. | closed | https://github.com/huggingface/datasets/pull/4534 | 2022-06-21T05:02:43 | 2022-06-23T14:33:54 | 2022-06-21T14:21:11 | {
"login": "JulesBelveze",
"id": 32683010,
"type": "User"
} | [] | true | [] |
1,277,211,490 | 4,533 | Timestamp not returned as datetime objects in streaming mode | As reported in (internal) https://github.com/huggingface/datasets-server/issues/397
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("ett", name="h2", split="test", streaming=True)
>>> d = next(iter(dataset))
>>> d['start']
Timestamp('2016-07-01 00:00:00')
```
while loading in non-... | closed | https://github.com/huggingface/datasets/issues/4533 | 2022-06-20T17:28:47 | 2022-06-22T16:29:09 | 2022-06-22T16:29:09 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "streaming",
"color": "fef2c0"
}
] | false | [] |
1,277,167,129 | 4,532 | Add Video feature | The following adds a `Video` feature for encoding/decoding videos on the fly from in memory bytes. It uses my own `encoded-video` library which is basically `pytorchvideo`'s encoded video but with all the `torch` specific stuff stripped out. Because of that, and because the tool I used under the hood is not very mature... | closed | https://github.com/huggingface/datasets/pull/4532 | 2022-06-20T16:36:41 | 2022-11-10T16:59:51 | 2022-11-10T16:59:51 | {
"login": "nateraw",
"id": 32437151,
"type": "User"
} | [] | true | [] |
1,277,054,172 | 4,531 | Dataset Viewer issue for CSV datasets | ### Link
https://huggingface.co/datasets/scikit-learn/breast-cancer-wisconsin
### Description
I'm populating CSV datasets [here](https://huggingface.co/scikit-learn), but the viewer is not enabled: it looks for a dataset loading script, and the datasets aren't in the processing queue either.
You can replicate the problem by sim... | closed | https://github.com/huggingface/datasets/issues/4531 | 2022-06-20T14:56:24 | 2022-06-21T08:28:46 | 2022-06-21T08:28:27 | {
"login": "merveenoyan",
"id": 53175384,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,276,884,962 | 4,530 | Add AudioFolder packaged loader | will close #3964
AudioFolder is almost identical to ImageFolder, except that inferring labels is not the default behavior (`drop_labels` is set to True in the config); the option of inferring them is preserved, though.
The weird thing is happening with the `test_data_files_with_metadata_and_archives` when `streaming` i... | closed | https://github.com/huggingface/datasets/pull/4530 | 2022-06-20T12:54:02 | 2022-08-22T14:36:49 | 2022-08-22T14:20:40 | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | true | [] |
1,276,729,303 | 4,529 | Ecoset | ## Adding a Dataset
- **Name:** *Ecoset*
- **Description:** *https://www.kietzmannlab.org/ecoset/*
- **Paper:** *https://doi.org/10.1073/pnas.2011417118*
- **Data:** *https://codeocean.com/capsule/9570390/tree/v1*
- **Motivation:**
**Ecoset** was created as a clean and ecologically valid alternative to **Imagen... | closed | https://github.com/huggingface/datasets/issues/4529 | 2022-06-20T10:39:34 | 2023-10-26T09:12:32 | 2023-10-04T18:19:52 | {
"login": "DiGyt",
"id": 34550289,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,276,679,155 | 4,528 | Memory leak when iterating a Dataset | ## Describe the bug
It seems that memory never gets freed after iterating a `Dataset` (using `.map()` or a simple `for` loop)
## Steps to reproduce the bug
```python
import gc
import logging
import time
import pyarrow
from datasets import load_dataset
from tqdm import trange
import os, psutil
logging.ba... | closed | https://github.com/huggingface/datasets/issues/4528 | 2022-06-20T10:03:14 | 2022-09-12T08:51:39 | 2022-09-12T08:51:39 | {
"login": "NouamaneTazi",
"id": 29777165,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,276,583,536 | 4,527 | Dataset Viewer issue for vadis/sv-ident | ### Link
https://huggingface.co/datasets/vadis/sv-ident
### Description
The dataset preview does not work:
```
Server Error
Status code: 400
Exception: Status400Error
Message: The dataset does not exist.
```
However, the dataset is streamable and works locally:
```python
In [1]: from dataset... | closed | https://github.com/huggingface/datasets/issues/4527 | 2022-06-20T08:47:42 | 2022-06-21T16:42:46 | 2022-06-21T16:42:45 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,276,580,185 | 4,526 | split cache used when processing different split | ## Describe the bug
```
ds1 = load_dataset('squad', split='validation')
ds2 = load_dataset('squad', split='train')
ds1 = ds1.map(some_function)
ds2 = ds2.map(some_function)
assert ds1 == ds2
```
This happens when ds1 and ds2 are created in `pytorch_lightning.DataModule` through
```
class myDataModule:
... | open | https://github.com/huggingface/datasets/issues/4526 | 2022-06-20T08:44:58 | 2022-06-28T14:04:58 | null | {
"login": "gpucce",
"id": 32967787,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,276,491,386 | 4,525 | Out of memory error on workers while running Beam+Dataflow | ## Describe the bug
While running the preprocessing of the natural_question dataset (see PR #4368), there is an issue for the "default" config (train+dev files).
Previously we ran the preprocessing for the "dev" config (only dev files) with success.
Train data files are larger than dev ones and apparently worker... | closed | https://github.com/huggingface/datasets/issues/4525 | 2022-06-20T07:28:12 | 2024-10-09T16:09:50 | 2024-10-09T16:09:50 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,275,909,186 | 4,524 | Downloading via Apache Pipeline, client cancelled (org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException) | ## Describe the bug
When downloading some `wikipedia` languages (in particular, I'm having a hard time with Spanish, Cebuano, and Russian) via FlinkRunner, I encounter the exception in the title. I have been playing with package versions a lot, because unfortunately, the different dependencies required by these packag... | open | https://github.com/huggingface/datasets/issues/4524 | 2022-06-18T23:36:45 | 2022-06-21T00:38:20 | null | {
"login": "ddegenaro",
"id": 45244059,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,275,002,639 | 4,523 | Update download url and improve card of `cats_vs_dogs` dataset | Improve the download URL (reported here: https://huggingface.co/datasets/cats_vs_dogs/discussions/1), remove the `image_file_path` column (not used in Transformers, so it should be safe) and add more info to the card. | closed | https://github.com/huggingface/datasets/pull/4523 | 2022-06-17T12:59:44 | 2022-06-21T14:23:26 | 2022-06-21T14:13:08 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,274,929,328 | 4,522 | Try to reduce the number of datasets that require manual download | > Currently, 41 canonical datasets require manual download. I checked their scripts and I'm pretty sure this number can be reduced to ≈ 30 by not relying on bash scripts to download data, hosting data directly on the Hub when the license permits, etc. Then, we will mostly be left with datasets with restricted access, w... | open | https://github.com/huggingface/datasets/issues/4522 | 2022-06-17T11:42:03 | 2022-06-17T11:52:48 | null | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [] | false | [] |
1,274,919,437 | 4,521 | Datasets method `.map` not hashing | ## Describe the bug
Datasets method `.map` not hashing, even with an empty no-op function
## Steps to reproduce the bug
```python
from datasets import load_dataset
# download 9MB dummy dataset
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean")
def prepare_dataset(batch):
return(b... | closed | https://github.com/huggingface/datasets/issues/4521 | 2022-06-17T11:31:10 | 2022-08-04T12:08:16 | 2022-06-28T13:23:05 | {
"login": "sanchit-gandhi",
"id": 93869735,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
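The hashing failures reported in this issue come down to `datasets`' fingerprinting, which serializes the mapped function to derive a cache key. A rough pure-Python sketch of the idea (the real library uses a custom pickler in `datasets.fingerprint`; the names and logic below only illustrate why an unpicklable function defeats caching):

```python
import hashlib
import pickle

def fingerprint(obj):
    """Toy stand-in for datasets' fingerprinting: pickle, then hash.
    When pickling fails, the real library falls back to a *random* hash,
    so .map() never finds its cached result on the next run."""
    try:
        return hashlib.md5(pickle.dumps(obj)).hexdigest()
    except Exception:
        return None  # stands in for the random-hash fallback

print(fingerprint({"do_lower_case": True}) is not None)  # True: plain data hashes fine
print(fingerprint(lambda batch: batch))  # None: stdlib pickle can't serialize lambdas
```

The same deterministic hash on every run is what lets `.map` reuse its cache; anything that breaks pickling (lambdas, closures over unpicklable state) silently disables it.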
1,274,879,180 | 4,520 | Failure to hash `dataclasses` - results in functions that cannot be hashed or cached in `.map` | Dataclasses cannot be hashed. As a result, they cannot be hashed or cached if used in the `.map` method. Dataclasses are used extensively in Transformers examples scripts: (c.f. [CTC example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py)). Since... | closed | https://github.com/huggingface/datasets/issues/4520 | 2022-06-17T10:47:17 | 2022-06-28T14:47:17 | 2022-06-28T14:04:29 | {
"login": "sanchit-gandhi",
"id": 93869735,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,274,110,623 | 4,519 | Create new sections for audio and vision in guides | This PR creates separate sections in the guides for audio, vision, text, and general usage so it is easier for users to find loading, processing, or sharing guides specific to the dataset type they're working with. It'll also allow us to scale the docs to additional dataset types - like time series, tabular, etc. - whi... | closed | https://github.com/huggingface/datasets/pull/4519 | 2022-06-16T21:38:24 | 2022-07-07T15:36:37 | 2022-07-07T15:24:58 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
1,274,010,628 | 4,518 | Patch tests for hfh v0.8.0 | This PR patches testing utilities that would otherwise fail with hfh v0.8.0. | closed | https://github.com/huggingface/datasets/pull/4518 | 2022-06-16T19:45:32 | 2022-06-17T16:15:57 | 2022-06-17T16:06:07 | {
"login": "LysandreJik",
"id": 30755778,
"type": "User"
} | [] | true | [] |
1,273,960,476 | 4,517 | Add tags for task_ids:summarization-* and task_categories:summarization* | The YAML header at the top of the README.md file was edited to add task tags, because I couldn't find the existing tags in the JSON.
A separate pull request will modify dataset_infos.json to add these tags.
The Enron dataset (dataset id aeslc) is only tagged with:
arxiv:1906.03497
languages:en
pretty_name:AESLC
... | closed | https://github.com/huggingface/datasets/pull/4517 | 2022-06-16T18:52:25 | 2022-07-08T15:14:23 | 2022-07-08T15:02:31 | {
"login": "hobson",
"id": 292855,
"type": "User"
} | [] | true | [] |
1,273,825,640 | 4,516 | Fix hashing for python 3.9 | In python 3.9, pickle hashes the `glob_ids` dictionary in addition to the `globs` of a function.
Therefore the test at `tests/test_fingerprint.py::RecurseDumpTest::test_recurse_dump_for_function_with_shuffled_globals` is currently failing for python 3.9
To make hashing deterministic when the globals are not in th... | closed | https://github.com/huggingface/datasets/pull/4516 | 2022-06-16T16:42:31 | 2022-06-28T13:33:46 | 2022-06-28T13:23:06 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,273,626,131 | 4,515 | Add uppercased versions of image file extensions for automatic module inference | Adds the uppercased versions of the image file extensions to the supported extensions.
Another approach would be to call `.lower()` on extensions while resolving data files, but uppercased extensions are not something we want to encourage out of the box IMO unless they are commonly used (as they are in the vision d... | closed | https://github.com/huggingface/datasets/pull/4515 | 2022-06-16T14:14:49 | 2022-06-16T17:21:53 | 2022-06-16T17:11:41 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
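The approach this PR describes can be sketched in a few lines; `IMAGE_EXTENSIONS` below is a hypothetical stand-in for the library's internal extension list, not the actual constant:

```python
IMAGE_EXTENSIONS = [".jpg", ".jpeg", ".png"]  # hypothetical subset of the real list
# The PR's approach: add uppercased variants up front instead of
# lower-casing every candidate path during data file resolution.
SUPPORTED_EXTENSIONS = IMAGE_EXTENSIONS + [ext.upper() for ext in IMAGE_EXTENSIONS]

def has_supported_extension(filename: str) -> bool:
    return any(filename.endswith(ext) for ext in SUPPORTED_EXTENSIONS)

print(has_supported_extension("n01440764_10026.JPEG"))  # True: ImageNet-style names
print(has_supported_extension("photo.jpg"))             # True
print(has_supported_extension("Photo.Jpg"))             # False: mixed case stays out
```

Note how this differs from calling `.lower()` during resolution: only the all-uppercase convention (common in ImageNet) is accepted, while arbitrary mixed-case extensions remain unsupported.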
1,273,505,230 | 4,514 | Allow .JPEG as a file extension | ## Describe the bug
When loading image data, HF datasets seems to recognize `.jpg` and `.jpeg` file extensions, but not e.g. .JPEG. As the naming convention .JPEG is used in important datasets such as imagenet, I would welcome if according extensions like .JPEG or .JPG would be allowed.
## Steps to reproduce the bu... | closed | https://github.com/huggingface/datasets/issues/4514 | 2022-06-16T12:36:20 | 2022-06-20T08:18:46 | 2022-06-16T17:11:40 | {
"login": "DiGyt",
"id": 34550289,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,273,450,338 | 4,513 | Update Google Cloud Storage documentation and add Azure Blob Storage example | While I was going through the 🤗 Datasets documentation of the Cloud storage filesystems at https://huggingface.co/docs/datasets/filesystems, I realized that the Google Cloud Storage documentation could be improved e.g. bullet point says "Load your dataset" when the actual call was to "Save your dataset", in-line code ... | closed | https://github.com/huggingface/datasets/pull/4513 | 2022-06-16T11:46:09 | 2022-06-23T17:05:11 | 2022-06-23T16:54:59 | {
"login": "alvarobartt",
"id": 36760800,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
1,273,378,129 | 4,512 | Add links to vision tasks scripts in ADD_NEW_DATASET template | Add links to vision dataset scripts in the ADD_NEW_DATASET template. | closed | https://github.com/huggingface/datasets/pull/4512 | 2022-06-16T10:35:35 | 2022-07-08T14:07:50 | 2022-07-08T13:56:23 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,273,336,874 | 4,511 | Support all negative values in ClassLabel | We usually use -1 to represent a missing label, but we should also support any negative values (some users use -100 for example). This is a regression from `datasets` 2.3
Fix https://github.com/huggingface/datasets/issues/4508 | closed | https://github.com/huggingface/datasets/pull/4511 | 2022-06-16T09:59:39 | 2025-07-23T18:38:15 | 2022-06-16T13:54:07 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
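A minimal sketch of the relaxed check, assuming nothing about the real `ClassLabel` internals: any negative integer passes through as a missing label, instead of only `-1`:

```python
def validate_label(value: int, num_classes: int) -> int:
    """Sketch of ClassLabel-style validation after the fix: any negative
    value (e.g. -1 or -100) is treated as "missing" and kept as-is."""
    if value < 0:
        return value
    if value >= num_classes:
        raise ValueError(f"label {value} is out of range for {num_classes} classes")
    return value

print(validate_label(-1, 2))    # -1: the usual "missing" sentinel
print(validate_label(-100, 2))  # -100: common "ignore index" now accepted too
print(validate_label(1, 2))     # 1: ordinary in-range label
```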
1,273,260,396 | 4,510 | Add regression test for `ArrowWriter.write_batch` when batch is empty | As spotted by @cccntu in #4502, there's a logic bug in `ArrowWriter.write_batch` as the if-statement to handle the empty batches as detailed in the docstrings of the function ("Ignores the batch if it appears to be empty, preventing a potential schema update of unknown types."), the current if-statement is not handling... | closed | https://github.com/huggingface/datasets/pull/4510 | 2022-06-16T08:53:51 | 2022-06-16T12:38:02 | 2022-06-16T12:28:19 | {
"login": "alvarobartt",
"id": 36760800,
"type": "User"
} | [] | true | [] |
1,273,227,760 | 4,509 | Support skipping Parquet to Arrow conversion when using Beam | null | closed | https://github.com/huggingface/datasets/pull/4509 | 2022-06-16T08:25:38 | 2022-11-07T16:22:41 | 2022-11-07T16:22:41 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,272,718,921 | 4,508 | cast_storage method from datasets.features | ## Describe the bug
A bug occurs when mapping a function to a dataset object. I ran the same code with the same data yesterday and it worked just fine. It works when I run locally on an old version of datasets.
## Steps to reproduce the bug
Steps are:
- load whatever dataset
- write a preprocessing function such ... | closed | https://github.com/huggingface/datasets/issues/4508 | 2022-06-15T20:47:22 | 2022-06-16T13:54:07 | 2022-06-16T13:54:07 | {
"login": "romainremyb",
"id": 67968596,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,272,615,932 | 4,507 | How to let `load_dataset` return a `Dataset` instead of `DatasetDict` in customized loading script | If the dataset does not need splits (i.e., no training and validation split; it is more like a table), how can I let the `load_dataset` function return a `Dataset` object directly rather than a `DatasetDict` object with only one key-value pair?
Or I can paraphrase the question in the following way: how to skip `_spl... | closed | https://github.com/huggingface/datasets/issues/4507 | 2022-06-15T18:56:34 | 2022-06-16T10:40:08 | 2022-06-16T10:40:08 | {
"login": "liyucheng09",
"id": 27999909,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,272,516,895 | 4,506 | Failure to hash (and cache) a `.map(...)` (almost always) - using this method can produce incorrect results | ## Describe the bug
Sometimes I get messages about not being able to hash a method:
`Parameter 'function'=<function StupidDataModule._separate_speaker_id_from_dialogue at 0x7f1b27180d30> of the transform datasets.arrow_dataset.Dataset.
_map_single couldn't be hashed properly, a random hash was used instead. Make sur... | closed | https://github.com/huggingface/datasets/issues/4506 | 2022-06-15T17:11:31 | 2023-02-16T03:14:32 | 2022-06-28T13:23:05 | {
"login": "DrMatters",
"id": 22641583,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,272,477,226 | 4,505 | Fix double dots in data files | As mentioned in https://github.com/huggingface/transformers/pull/17715 `data_files` can't find a file if the path contains double dots `/../`. This has been introduced in https://github.com/huggingface/datasets/pull/4412, by trying to ignore hidden files and directories (i.e. if they start with a dot)
I fixed this a... | closed | https://github.com/huggingface/datasets/pull/4505 | 2022-06-15T16:31:04 | 2022-06-15T17:15:58 | 2022-06-15T17:05:53 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
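The shape of the bug and fix described in this PR can be sketched in pure Python (this is not the library's actual resolution code; the function names here are illustrative):

```python
from pathlib import PurePosixPath

def is_hidden_naive(path: str) -> bool:
    # The over-eager rule: any component starting with "." is "hidden" —
    # which wrongly catches the ".." parent-directory component too.
    return any(part.startswith(".") for part in PurePosixPath(path).parts)

def is_hidden_fixed(path: str) -> bool:
    # Exclude the special "." and ".." components from the hidden check.
    return any(
        part.startswith(".") and part not in (".", "..")
        for part in PurePosixPath(path).parts
    )

print(is_hidden_naive("data/../train.csv"))   # True: the reported bug, file ignored
print(is_hidden_fixed("data/../train.csv"))   # False: file is resolvable again
print(is_hidden_fixed("data/.cache/a.csv"))   # True: real hidden dirs still skipped
```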
1,272,418,480 | 4,504 | Can you please add the Stanford dog dataset? | ## Adding a Dataset
- **Name:** *Stanford dog dataset*
- **Description:** *The dataset is about 120 classes for a total of 20.580 images. You can find the dataset here http://vision.stanford.edu/aditya86/ImageNetDogs/*
- **Paper:** *http://vision.stanford.edu/aditya86/ImageNetDogs/*
- **Data:** *[link to the Github... | closed | https://github.com/huggingface/datasets/issues/4504 | 2022-06-15T15:39:35 | 2024-12-09T15:44:11 | 2023-10-18T18:55:30 | {
"login": "dgrnd4",
"id": 69434832,
"type": "User"
} | [
{
"name": "good first issue",
"color": "7057ff"
},
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,272,367,055 | 4,503 | Refactor and add metadata to fever dataset | Related to: #4452 and #3792. | closed | https://github.com/huggingface/datasets/pull/4503 | 2022-06-15T14:59:47 | 2022-07-06T11:54:15 | 2022-07-06T11:41:30 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,272,353,700 | 4,502 | Logic bug in arrow_writer? | https://github.com/huggingface/datasets/blob/88a902d6474fae8d793542d57a4f3b0d187f3c5b/src/datasets/arrow_writer.py#L475-L488
I got an error, and I found it's caused by `batch_examples` being `{}`. I wonder if the code should be as follows:
```
- if batch_examples and len(next(iter(batch_examples.values())... | closed | https://github.com/huggingface/datasets/issues/4502 | 2022-06-15T14:50:00 | 2022-06-18T15:15:51 | 2022-06-18T15:15:51 | {
"login": "changjonathanc",
"id": 31893406,
"type": "User"
} | [] | false | [] |
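The failure mode this issue reports is easy to reproduce in isolation: `next(iter({}.values()))` raises `StopIteration` before `len` is ever called. A sketch of the condition before and after the suggested guard (helper names here are illustrative, not the library's):

```python
def batch_is_nonempty_buggy(batch_examples: dict) -> bool:
    # Mirrors the reported condition: blows up when the batch dict is empty.
    return len(next(iter(batch_examples.values()))) > 0

def batch_is_nonempty_fixed(batch_examples: dict) -> bool:
    # Check the dict itself first, as the issue suggests.
    return bool(batch_examples) and len(next(iter(batch_examples.values()))) > 0

print(batch_is_nonempty_fixed({}))               # False: empty batch safely ignored
print(batch_is_nonempty_fixed({"col": [1, 2]}))  # True: batch has rows to write
try:
    batch_is_nonempty_buggy({})
except StopIteration:
    print("buggy check raises StopIteration on an empty batch")
```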
1,272,300,646 | 4,501 | Corrected broken links in doc | null | closed | https://github.com/huggingface/datasets/pull/4501 | 2022-06-15T14:12:17 | 2022-06-15T15:11:05 | 2022-06-15T15:00:56 | {
"login": "clefourrier",
"id": 22726840,
"type": "User"
} | [] | true | [] |
1,272,281,992 | 4,500 | Add `concatenate_datasets` for iterable datasets | `concatenate_datasets` currently only supports lists of `datasets.Dataset`, not lists of `datasets.IterableDataset` like `interleave_datasets`
Fix https://github.com/huggingface/datasets/issues/2564
I also moved `_interleave_map_style_datasets` from combine.py to arrow_dataset.py, since the logic depends a lot on... | closed | https://github.com/huggingface/datasets/pull/4500 | 2022-06-15T13:58:50 | 2022-06-28T21:25:39 | 2022-06-28T21:15:04 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
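At its core, concatenating streaming datasets means exhausting each source in turn; the real PR also reconciles features and dataset info, but the iteration logic alone can be sketched as:

```python
from itertools import chain

def concatenate_iterable_datasets(sources):
    """Sketch of concatenation for streaming datasets: yield every example
    from the first source, then the second, and so on (contrast with
    interleave_datasets, which alternates between sources)."""
    return chain.from_iterable(sources)

a = ({"text": t} for t in ["a", "b"])
b = ({"text": t} for t in ["c"])
print(list(concatenate_iterable_datasets([a, b])))
# [{'text': 'a'}, {'text': 'b'}, {'text': 'c'}]
```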
1,272,118,162 | 4,499 | fix ETT m1/m2 test/val dataset | https://huggingface.co/datasets/ett/discussions/1 | closed | https://github.com/huggingface/datasets/pull/4499 | 2022-06-15T11:51:02 | 2022-06-15T14:55:56 | 2022-06-15T14:45:13 | {
"login": "kashif",
"id": 8100,
"type": "User"
} | [] | true | [] |
1,272,100,549 | 4,498 | WER and CER > 1 | ## Describe the bug
It seems that in some cases, when the `prediction` is longer than the `reference`, we may get a word/character error rate higher than 1, which is a bit odd.
If it's a real bug I think I can solve it with a PR changing [this](https://github.com/huggingface/datasets/blob/master/metrics/wer/wer.py#... | closed | https://github.com/huggingface/datasets/issues/4498 | 2022-06-15T11:35:12 | 2022-06-15T16:38:05 | 2022-06-15T16:38:05 | {
"login": "sadrasabouri",
"id": 43045767,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
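The behavior this issue reports is actually expected for edit-distance-based WER: insertions count toward the numerator while the denominator is only the reference length, so a long hallucinated prediction can push the rate above 1. A small self-contained sketch (not the actual `jiwer`-backed metric):

```python
def word_error_rate(reference: str, prediction: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref, hyp = reference.split(), prediction.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("hello world", "hello world"))  # 0.0: perfect match
# 5 insertions against a 2-word reference -> WER of 2.5, i.e. > 1
print(word_error_rate("hello world", "hello world it is a long day"))
```

So a WER above 1 is a property of the standard formula, not necessarily a bug in the metric implementation.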
1,271,964,338 | 4,497 | Re-add download_manager module in utils | https://github.com/huggingface/datasets/pull/4384 moved `datasets.utils.download_manager` to `datasets.download.download_manager`
This breaks `evaluate` which imports `DownloadMode` from `datasets.utils.download_manager`
This PR re-adds `datasets.utils.download_manager` without circular imports.
We could also... | closed | https://github.com/huggingface/datasets/pull/4497 | 2022-06-15T09:44:33 | 2022-06-15T10:33:28 | 2022-06-15T10:23:44 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,271,945,704 | 4,496 | Replace `assertEqual` with `assertTupleEqual` in unit tests for verbosity | As detailed in #4419 and as suggested by @mariosasko, we could replace the `assertEqual` assertions with `assertTupleEqual` when the assertion is between Tuples, in order to make the tests more verbose. | closed | https://github.com/huggingface/datasets/pull/4496 | 2022-06-15T09:29:16 | 2022-07-07T17:06:51 | 2022-07-07T16:55:48 | {
"login": "alvarobartt",
"id": 36760800,
"type": "User"
} | [] | true | [] |
1,271,851,025 | 4,495 | Fix patching module that doesn't exist | Reported in https://github.com/huggingface/huggingface_hub/runs/6894703718?check_suite_focus=true
When trying to patch `scipy.io.loadmat`:
```python
ModuleNotFoundError: No module named 'scipy'
```
Instead it shouldn't raise an error and do nothing
Bug introduced by #4375
Fix https://github.com/hugging... | closed | https://github.com/huggingface/datasets/pull/4495 | 2022-06-15T08:17:50 | 2022-06-15T16:40:49 | 2022-06-15T08:54:09 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,271,850,599 | 4,494 | Patching fails for modules that are not installed or don't exist | Reported in https://github.com/huggingface/huggingface_hub/runs/6894703718?check_suite_focus=true
When trying to patch `scipy.io.loadmat`:
```python
ModuleNotFoundError: No module named 'scipy'
```
Instead it shouldn't raise an error and do nothing
We use patching to extend such functions to support remot... | closed | https://github.com/huggingface/datasets/issues/4494 | 2022-06-15T08:17:29 | 2022-06-15T08:54:09 | 2022-06-15T08:54:09 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | false | [] |
1,271,306,385 | 4,493 | Add `@transmit_format` in `flatten` | As suggested by @mariosasko in https://github.com/huggingface/datasets/pull/4411, we should include the `@transmit_format` decorator to `flatten`, `rename_column`, and `rename_columns` so as to ensure that the value of `_format_columns` in an `ArrowDataset` is properly updated.
**Edit**: according to @mariosasko com... | closed | https://github.com/huggingface/datasets/pull/4493 | 2022-06-14T20:09:09 | 2022-09-27T11:37:25 | 2022-09-27T10:48:54 | {
"login": "alvarobartt",
"id": 36760800,
"type": "User"
} | [] | true | [] |