| id | number | title | body | state | html_url | created_at | updated_at | closed_at | user | labels | is_pull_request | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,209,429,743 | 4,185 | Librispeech documentation, clarification on format | https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53
> Note that in order to limit the required storage for preparing this dataset, the audio
> is stored in the .flac format and is not converted to a float32 array. To convert, the audi... | open | https://github.com/huggingface/datasets/issues/4185 | 2022-04-20T09:35:55 | 2022-04-21T11:00:53 | null | {
"login": "albertz",
"id": 59132,
"type": "User"
} | [] | false | [] |
1,208,592,669 | 4,184 | [Librispeech] Add 'all' config | Add `"all"` config to Librispeech
Closed #4179 | closed | https://github.com/huggingface/datasets/pull/4184 | 2022-04-19T16:27:56 | 2024-08-02T05:03:04 | 2022-04-22T09:45:17 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
1,208,449,335 | 4,183 | Document librispeech configs | Added an example of how to load one config or the other | closed | https://github.com/huggingface/datasets/pull/4183 | 2022-04-19T14:26:59 | 2023-09-24T10:02:24 | 2022-04-19T15:15:20 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
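The two pull requests above (#4183 and #4184) are about choosing a Librispeech configuration when loading. A minimal sketch of that usage, assuming the `clean` config and the `"all"` config added in #4184:

```python
from datasets import load_dataset

# Load a single subset, e.g. the 100-hour clean training split...
clean_train = load_dataset("librispeech_asr", "clean", split="train.100")

# ...or, with the "all" config from #4184, load every subset at once.
everything = load_dataset("librispeech_asr", "all")
```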
1,208,285,235 | 4,182 | Zenodo.org download is not responding | ## Describe the bug
Source download_url from zenodo.org does not respond.
`_DOWNLOAD_URL = "https://zenodo.org/record/2787612/files/SICK.zip?download=1"`
Other datasets also use zenodo.org to store data, and they cannot be downloaded either.
It would be better to use a more reliable way to store the original ... | closed | https://github.com/huggingface/datasets/issues/4182 | 2022-04-19T12:26:57 | 2022-04-20T07:11:05 | 2022-04-20T07:11:05 | {
"login": "dkajtoch",
"id": 32985207,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,208,194,805 | 4,181 | Support streaming FLEURS dataset | ## Dataset viewer issue for '*name of the dataset*'
https://huggingface.co/datasets/google/fleurs
```
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implemented in str... | closed | https://github.com/huggingface/datasets/issues/4181 | 2022-04-19T11:09:56 | 2022-07-25T11:44:02 | 2022-07-25T11:44:02 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
1,208,042,320 | 4,180 | Add some iteration method on a dataset column (specific for inference) | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
Currently, `dataset["audio"]` will load EVERY element in the dataset in RAM, which can be quite big for an audio dataset.
Having an iterator (or sequence) type of object, would make inference ... | closed | https://github.com/huggingface/datasets/issues/4180 | 2022-04-19T09:15:45 | 2025-06-17T13:08:50 | 2025-06-17T13:08:50 | {
"login": "Narsil",
"id": 204321,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
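Issue #4180 above asks for a memory-friendly way to iterate over a single column. A rough sketch of the behaviour it contrasts, with the dataset name purely illustrative:

```python
from datasets import load_dataset

ds = load_dataset("librispeech_asr", "clean", split="validation")

# ds["audio"] materializes every decoded audio example in RAM at once;
# iterating row by row instead decodes one example per step.
for example in ds:
    audio_array = example["audio"]["array"]
    # run inference on `audio_array` here
```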
1,208,001,118 | 4,179 | Dataset librispeech_asr fails to load | ## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc](https://huggingface.co/datasets/libris... | closed | https://github.com/huggingface/datasets/issues/4179 | 2022-04-19T08:45:48 | 2022-07-27T16:10:00 | 2022-07-27T16:10:00 | {
"login": "albertz",
"id": 59132,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,207,787,073 | 4,178 | [feat] Add ImageNet dataset | To use the dataset, download the tar file
[imagenet_object_localization_patched2019.tar.gz](https://www.kaggle.com/competitions/imagenet-object-localization-challenge/data?select=imagenet_object_localization_patched2019.tar.gz) from Kaggle and then point the datasets library to it by using:
```py
from datasets impo... | closed | https://github.com/huggingface/datasets/pull/4178 | 2022-04-19T06:01:35 | 2022-04-29T21:43:59 | 2022-04-29T21:37:08 | {
"login": "apsdehal",
"id": 3616806,
"type": "User"
} | [] | true | [] |
1,207,535,920 | 4,177 | Adding missing subsets to the `SemEval-2018 Task 1` dataset | This dataset for the [1st task of SemEval-2018](https://competitions.codalab.org/competitions/17751) competition was missing all subtasks except for subtask 5. I added another two subtasks (subtask 1 and 2), which are each comprised of 12 additional data subsets: for each language in En, Es, Ar, there are 4 datasets, b... | open | https://github.com/huggingface/datasets/pull/4177 | 2022-04-18T22:59:30 | 2022-10-05T10:38:16 | null | {
"login": "micahcarroll",
"id": 11460267,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,206,515,563 | 4,176 | Very slow between two operations | Hello, in the processing stage I use two operations. The first one, map + filter, is very fast and uses all cores, while the second step is very slow and does not use all cores.
Also, there is a significant lag between them. Am I missing something?
```
raw_datasets = raw_datasets.map(split_func... | closed | https://github.com/huggingface/datasets/issues/4176 | 2022-04-17T23:52:29 | 2022-04-18T00:03:00 | 2022-04-18T00:03:00 | {
"login": "yanan1116",
"id": 26405281,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,205,589,842 | 4,175 | Add WIT Dataset | closes #2981 #2810
@nateraw @hassiahk I've listed you guys as co-authors as you've contributed previously to this dataset | closed | https://github.com/huggingface/datasets/pull/4175 | 2022-04-15T13:42:32 | 2023-09-24T10:02:38 | 2022-05-02T14:26:41 | {
"login": "thomasw21",
"id": 24695242,
"type": "User"
} | [] | true | [] |
1,205,575,941 | 4,174 | Fix when map function modifies input in-place | When `function` modifies input in-place, the guarantee that columns in `remove_columns` are contained in `input` doesn't hold true anymore. Therefore we need to relax the way we pop elements by checking whether that column exists. | closed | https://github.com/huggingface/datasets/pull/4174 | 2022-04-15T13:23:15 | 2022-04-15T14:52:07 | 2022-04-15T14:45:58 | {
"login": "thomasw21",
"id": 24695242,
"type": "User"
} | [] | true | [] |
1,204,657,114 | 4,173 | Stream private zipped images | As mentioned in https://github.com/huggingface/datasets/issues/4139 it's currently not possible to stream private/gated zipped images from the Hub.
This is because `Image.decode_example` does not handle authentication. Indeed, decoding requires accessing and downloading the file from the private repository.
In this P... | closed | https://github.com/huggingface/datasets/pull/4173 | 2022-04-14T15:15:07 | 2022-05-05T14:05:54 | 2022-05-05T13:58:35 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,204,433,160 | 4,172 | Update assin2 dataset_infos.json | Following comments in https://github.com/huggingface/datasets/issues/4003 we found that it was outdated and causing an error when loading the dataset | closed | https://github.com/huggingface/datasets/pull/4172 | 2022-04-14T11:53:06 | 2022-04-15T14:47:42 | 2022-04-15T14:41:22 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,204,413,620 | 4,170 | to_tf_dataset rewrite | This PR rewrites almost all of `to_tf_dataset()`, which makes it kind of hard to list all the changes, but the most critical ones are:
- Much better stability and no more dropping unexpected column names (Sorry @NielsRogge)
- Doesn't clobber custom transforms on the data (Sorry @NielsRogge again)
- Much better han... | closed | https://github.com/huggingface/datasets/pull/4170 | 2022-04-14T11:30:58 | 2022-06-06T14:31:12 | 2022-06-06T14:22:09 | {
"login": "Rocketknight1",
"id": 12866554,
"type": "User"
} | [] | true | [] |
1,203,995,869 | 4,169 | Timit_asr dataset cannot be previewed recently | ## Dataset viewer issue for '*timit_asr*'
**Link:** *https://huggingface.co/datasets/timit_asr*
Issue: The timit-asr dataset cannot be previewed recently.
Am I the one who added this dataset ? Yes-No
No | closed | https://github.com/huggingface/datasets/issues/4169 | 2022-04-14T03:28:31 | 2023-02-03T04:54:57 | 2022-05-06T16:06:51 | {
"login": "YingLi001",
"id": 75192317,
"type": "User"
} | [] | false | [] |
1,203,867,540 | 4,168 | Add code examples to API docs | This PR adds code examples for functions related to the base Datasets class to highlight usage. Most of the examples use the `rotten_tomatoes` dataset since it is nice and small. Several things I would appreciate feedback on:
- Do you think it is clearer to make every code example fully reproducible so when users co... | closed | https://github.com/huggingface/datasets/pull/4168 | 2022-04-13T23:03:38 | 2022-04-27T18:53:37 | 2022-04-27T18:48:34 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
1,203,761,614 | 4,167 | Avoid rate limit in update hub repositories | use http.extraHeader to avoid rate limit | closed | https://github.com/huggingface/datasets/pull/4167 | 2022-04-13T20:32:17 | 2022-04-13T20:56:41 | 2022-04-13T20:50:32 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,203,758,004 | 4,166 | Fix exact match | Clarify docs and add clarifying example to the exact_match metric | closed | https://github.com/huggingface/datasets/pull/4166 | 2022-04-13T20:28:06 | 2022-05-03T12:23:31 | 2022-05-03T12:16:27 | {
"login": "emibaylor",
"id": 27527747,
"type": "User"
} | [] | true | [] |
1,203,730,187 | 4,165 | Fix google bleu typos, examples | null | closed | https://github.com/huggingface/datasets/pull/4165 | 2022-04-13T19:59:54 | 2022-05-03T12:23:52 | 2022-05-03T12:16:44 | {
"login": "emibaylor",
"id": 27527747,
"type": "User"
} | [] | true | [] |
1,203,661,346 | 4,164 | Fix duplicate key in multi_news | To merge after this job succeeded: https://github.com/huggingface/datasets/runs/6012207928 | closed | https://github.com/huggingface/datasets/pull/4164 | 2022-04-13T18:48:24 | 2022-04-13T21:04:16 | 2022-04-13T20:58:02 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,203,539,268 | 4,163 | Optional Content Warning for Datasets | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
We now have hate speech datasets on the hub, like this one: https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild
I'm wondering if there is an option to select a content warning messa... | open | https://github.com/huggingface/datasets/issues/4163 | 2022-04-13T16:38:01 | 2022-06-09T20:39:02 | null | {
"login": "TristanThrush",
"id": 20826878,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,203,421,909 | 4,162 | Add Conceptual 12M | null | closed | https://github.com/huggingface/datasets/pull/4162 | 2022-04-13T14:57:23 | 2022-04-15T08:13:01 | 2022-04-15T08:06:25 | {
"login": "thomasw21",
"id": 24695242,
"type": "User"
} | [] | true | [] |
1,203,230,485 | 4,161 | Add Visual Genome | null | closed | https://github.com/huggingface/datasets/pull/4161 | 2022-04-13T12:25:24 | 2022-04-21T15:42:49 | 2022-04-21T13:08:52 | {
"login": "thomasw21",
"id": 24695242,
"type": "User"
} | [] | true | [] |
1,202,845,874 | 4,160 | RGBA images not showing | ## Dataset viewer issue for ceyda/smithsonian_butterflies_transparent
[**Link:** *link to the dataset viewer page*](https://huggingface.co/datasets/ceyda/smithsonian_butterflies_transparent)

Am I the... | closed | https://github.com/huggingface/datasets/issues/4160 | 2022-04-13T06:59:23 | 2022-06-21T16:43:11 | 2022-06-21T16:43:11 | {
"login": "cceyda",
"id": 15624271,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
},
{
"name": "dataset-viewer-rgba-images",
"color": "6C5FC0"
}
] | false | [] |
1,202,522,153 | 4,159 | Add `TruthfulQA` dataset | null | closed | https://github.com/huggingface/datasets/pull/4159 | 2022-04-12T23:19:04 | 2022-06-08T15:51:33 | 2022-06-08T14:43:34 | {
"login": "jon-tow",
"id": 41410219,
"type": "User"
} | [] | true | [] |
1,202,376,843 | 4,158 | Add AUC ROC Metric | null | closed | https://github.com/huggingface/datasets/pull/4158 | 2022-04-12T20:53:28 | 2022-04-26T19:41:50 | 2022-04-26T19:35:22 | {
"login": "emibaylor",
"id": 27527747,
"type": "User"
} | [] | true | [] |
1,202,239,622 | 4,157 | Fix formatting in BLEU metric card | Fix #4148 | closed | https://github.com/huggingface/datasets/pull/4157 | 2022-04-12T18:29:51 | 2022-04-13T14:30:25 | 2022-04-13T14:16:34 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,202,220,531 | 4,156 | Adding STSb-TR dataset | Semantic Textual Similarity benchmark Turkish (STSb-TR) dataset introduced in our paper [Semantic Similarity Based Evaluation for Abstractive News Summarization](https://aclanthology.org/2021.gem-1.3.pdf) added. | closed | https://github.com/huggingface/datasets/pull/4156 | 2022-04-12T18:10:05 | 2022-10-03T09:36:25 | 2022-10-03T09:36:25 | {
"login": "figenfikri",
"id": 12762065,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,202,183,608 | 4,155 | Make HANS dataset streamable | Fix #4133 | closed | https://github.com/huggingface/datasets/pull/4155 | 2022-04-12T17:34:13 | 2022-04-13T12:03:46 | 2022-04-13T11:57:35 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,202,145,721 | 4,154 | Generate tasks.json taxonomy from `huggingface_hub` | null | closed | https://github.com/huggingface/datasets/pull/4154 | 2022-04-12T17:12:46 | 2022-04-14T10:32:32 | 2022-04-14T10:26:13 | {
"login": "julien-c",
"id": 326577,
"type": "User"
} | [] | true | [] |
1,202,040,506 | 4,153 | Adding Text-based NP Enrichment (TNE) dataset | Added the [TNE](https://github.com/yanaiela/TNE) dataset to the library | closed | https://github.com/huggingface/datasets/pull/4153 | 2022-04-12T15:47:03 | 2022-05-03T14:05:48 | 2022-05-03T14:05:48 | {
"login": "yanaiela",
"id": 8031035,
"type": "User"
} | [] | true | [] |
1,202,034,115 | 4,152 | ArrayND error in pyarrow 5 | As found in https://github.com/huggingface/datasets/pull/3903, The ArrayND features fail on pyarrow 5:
```python
import pyarrow as pa
from datasets import Array2D
from datasets.table import cast_array_to_feature
arr = pa.array([[[0]]])
feature_type = Array2D(shape=(1, 1), dtype="int64")
cast_array_to_feature(a... | closed | https://github.com/huggingface/datasets/issues/4152 | 2022-04-12T15:41:40 | 2022-05-04T09:29:46 | 2022-05-04T09:29:46 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | false | [] |
1,201,837,999 | 4,151 | Add missing label for emotion description | null | closed | https://github.com/huggingface/datasets/pull/4151 | 2022-04-12T13:17:37 | 2022-04-12T13:58:50 | 2022-04-12T13:58:50 | {
"login": "lijiazheng99",
"id": 44396506,
"type": "User"
} | [] | true | [] |
1,201,689,730 | 4,150 | Inconsistent splits generation for datasets without loading script (packaged dataset puts everything into a single split) | ## Describe the bug
Splits for dataset loaders without scripts are prepared inconsistently. I think it might be confusing for users.
## Steps to reproduce the bug
* If you load a packaged datasets from Hub, it infers splits from directory structure / filenames (check out the data [here](https://huggingface.co/data... | closed | https://github.com/huggingface/datasets/issues/4150 | 2022-04-12T11:15:55 | 2022-04-28T21:02:44 | 2022-04-28T21:02:44 | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,201,389,221 | 4,149 | load_dataset for winoground returning decoding error | ## Describe the bug
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/winoground', use_auth_token=token)
```
## Expected res... | closed | https://github.com/huggingface/datasets/issues/4149 | 2022-04-12T08:16:16 | 2022-05-04T23:40:38 | 2022-05-04T23:40:38 | {
"login": "odellus",
"id": 4686956,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,201,169,242 | 4,148 | fix confusing bleu metric example | **Is your feature request related to a problem? Please describe.**
I would like to see the example in "Metric Card for BLEU" changed.
The 0th element in the predictions list is not closed in square brackets, and the 1st list is missing a comma.
The BLEU scores are calculated correctly, but it is difficult to understa... | closed | https://github.com/huggingface/datasets/issues/4148 | 2022-04-12T06:18:26 | 2022-04-13T14:16:34 | 2022-04-13T14:16:34 | {
"login": "aizawa-naoki",
"id": 6253193,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,200,756,008 | 4,147 | Adjust path to datasets tutorial in How-To | The link in the How-To overview page to the Datasets tutorials is currently broken. This is just a small adjustment to make it match the format used in https://github.com/huggingface/datasets/blob/master/docs/source/tutorial.md.
(Edit to add: The link in the PR deployment (https://moon-ci-docs.huggingface.co/docs/da... | closed | https://github.com/huggingface/datasets/pull/4147 | 2022-04-12T01:20:34 | 2022-04-12T08:32:24 | 2022-04-12T08:26:02 | {
"login": "NimaBoscarino",
"id": 6765188,
"type": "User"
} | [] | true | [] |
1,200,215,789 | 4,146 | SAMSum dataset viewer not working | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| closed | https://github.com/huggingface/datasets/issues/4146 | 2022-04-11T16:22:57 | 2022-04-29T16:26:09 | 2022-04-29T16:26:09 | {
"login": "aakashnegi10",
"id": 39906333,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,200,209,781 | 4,145 | Redirect TIMIT download from LDC | LDC data is protected under US copyright laws and under various legal agreements between the Linguistic Data Consortium/the University of Pennsylvania and data providers which prohibit redistribution of that data by anyone other than LDC. Similarly, LDC's membership agreements, non-member user agreement and various cor... | closed | https://github.com/huggingface/datasets/pull/4145 | 2022-04-11T16:17:55 | 2022-04-13T15:39:31 | 2022-04-13T15:33:04 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,200,016,983 | 4,144 | Fix splits in local packaged modules, local datasets without script and hub datasets without script | fixes #4150
I suggest to infer splits structure from files when `data_dir` is passed with `get_patterns_locally`, analogous to what's done in `LocalDatasetModuleFactoryWithoutScript` with `self.path`, instead of generating files with `data_dir/**` patterns and putting them all into a single default (train) split.
... | closed | https://github.com/huggingface/datasets/pull/4144 | 2022-04-11T13:57:33 | 2022-04-29T09:12:14 | 2022-04-28T21:02:45 | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [] | true | [] |
1,199,937,961 | 4,143 | Unable to download `Wikipedia` 20220301.en version | ## Describe the bug
Unable to download `Wikipedia` dataset, 20220301.en version
## Steps to reproduce the bug
```python
!pip install apache_beam mwparserfromhell
dataset_wikipedia = load_dataset("wikipedia", "20220301.en")
```
## Actual results
```
ValueError: BuilderConfig 20220301.en not found.
Avail... | closed | https://github.com/huggingface/datasets/issues/4143 | 2022-04-11T13:00:14 | 2022-08-17T00:37:55 | 2022-04-21T17:04:14 | {
"login": "beyondguo",
"id": 37113676,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
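For reports like #4143 above, the usual cause (an assumption here, not confirmed in the truncated body) is an older `datasets` release whose wikipedia script predates the `20220301.*` configs; a sketch of the upgrade-then-load sequence:

```python
# pip install -U datasets apache_beam mwparserfromhell
from datasets import load_dataset

# "20220301.en" is the config name from the issue; it only exists in newer
# versions of the wikipedia loading script.
wiki = load_dataset("wikipedia", "20220301.en")
```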
1,199,794,750 | 4,142 | Add ObjectFolder 2.0 dataset | ## Adding a Dataset
- **Name:** ObjectFolder 2.0
- **Description:** ObjectFolder 2.0 is a dataset of 1,000 objects in the form of implicit representations. It contains 1,000 Object Files each containing the complete multisensory profile for an object instance.
- **Paper:** [*link to the dataset paper if available*](... | open | https://github.com/huggingface/datasets/issues/4142 | 2022-04-11T10:57:51 | 2022-10-05T10:30:49 | null | {
"login": "osanseviero",
"id": 7246357,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,199,610,885 | 4,141 | Why is the dataset not visible under the dataset preview section? | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| closed | https://github.com/huggingface/datasets/issues/4141 | 2022-04-11T08:36:42 | 2022-04-11T18:55:32 | 2022-04-11T17:09:49 | {
"login": "Nid989",
"id": 75028682,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,199,492,356 | 4,140 | Error loading arxiv data set | ## Describe the bug
A clear and concise description of what the bug is.
I met the error below when loading arxiv dataset via `nlp.load_dataset('scientific_papers', 'arxiv',)`.
```
Traceback (most recent call last):
File "scripts/summarization.py", line 354, in <module>
main(args)
File "scripts/summari... | closed | https://github.com/huggingface/datasets/issues/4140 | 2022-04-11T07:06:34 | 2022-04-12T16:24:08 | 2022-04-12T16:24:08 | {
"login": "yjqiu",
"id": 5383918,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
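The traceback in #4140 above comes from the long-deprecated `nlp` package; a minimal sketch of the equivalent call with the maintained `datasets` package, using the same dataset and config names as the report:

```python
from datasets import load_dataset

arxiv = load_dataset("scientific_papers", "arxiv")
```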
1,199,443,822 | 4,139 | Dataset viewer issue for Winoground | ## Dataset viewer issue for 'Winoground'
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to authorization, but I can access the files f... | closed | https://github.com/huggingface/datasets/issues/4139 | 2022-04-11T06:11:41 | 2022-06-21T16:43:58 | 2022-06-21T16:43:58 | {
"login": "alcinos",
"id": 7438704,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
},
{
"name": "dataset-viewer-gated",
"color": "51F745"
}
] | false | [] |
1,199,291,730 | 4,138 | Incorrect Russian filenames encoding after extraction by datasets.DownloadManager.download_and_extract() | ## Dataset viewer issue for 'MalakhovIlya/RuREBus'
**Link:** https://huggingface.co/datasets/MalakhovIlya/RuREBus
**Description**
Using os.walk(topdown=False) in DatasetBuilder causes the following error:
Status code: 400
Exception: TypeError
Message: xwalk() got an unexpected keyword argument 'topdow... | closed | https://github.com/huggingface/datasets/issues/4138 | 2022-04-11T02:07:13 | 2022-04-19T03:15:46 | 2022-04-16T15:46:29 | {
"login": "iluvvatar",
"id": 55381086,
"type": "User"
} | [] | false | [] |
1,199,000,453 | 4,137 | Add single dataset citations for TweetEval | This PR adds single data citations as per request of the original creators of the TweetEval dataset.
This is a recent email from the creator:
> Could I ask you a favor? Would you be able to add at the end of the README the citations of the single datasets as well? You can just copy our readme maybe? https://githu... | closed | https://github.com/huggingface/datasets/pull/4137 | 2022-04-10T11:51:54 | 2022-04-12T07:57:22 | 2022-04-12T07:51:15 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | true | [] |
1,198,307,610 | 4,135 | Support streaming xtreme dataset for PAN-X config | Support streaming xtreme dataset for PAN-X config. | closed | https://github.com/huggingface/datasets/pull/4135 | 2022-04-09T06:19:48 | 2022-05-06T08:39:40 | 2022-04-11T06:54:14 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,197,937,146 | 4,134 | ELI5 supporting documents | If I am using dense search to create supporting documents for ELI5, how much time will it take? I read somewhere that it takes about 18 hours. | open | https://github.com/huggingface/datasets/issues/4134 | 2022-04-08T23:36:27 | 2022-04-13T13:52:46 | null | {
"login": "saurabh-0077",
"id": 69015896,
"type": "User"
} | [
{
"name": "question",
"color": "d876e3"
}
] | false | [] |
1,197,830,623 | 4,133 | HANS dataset preview broken | ## Dataset viewer issue for '*hans*'
**Link:** [https://huggingface.co/datasets/hans](https://huggingface.co/datasets/hans)
HANS dataset preview is broken with error 400
Am I the one who added this dataset ? No
| closed | https://github.com/huggingface/datasets/issues/4133 | 2022-04-08T21:06:15 | 2022-04-13T11:57:34 | 2022-04-13T11:57:34 | {
"login": "pietrolesci",
"id": 61748653,
"type": "User"
} | [
{
"name": "streaming",
"color": "fef2c0"
}
] | false | [] |
1,197,661,720 | 4,132 | Support streaming xtreme dataset for PAWS-X config | Support streaming xtreme dataset for PAWS-X config. | closed | https://github.com/huggingface/datasets/pull/4132 | 2022-04-08T18:25:32 | 2022-05-06T08:39:42 | 2022-04-08T21:02:44 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,197,472,249 | 4,131 | Support streaming xtreme dataset for udpos config | Support streaming xtreme dataset for udpos config. | closed | https://github.com/huggingface/datasets/pull/4131 | 2022-04-08T15:30:49 | 2022-05-06T08:39:46 | 2022-04-08T16:28:07 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,197,456,857 | 4,130 | Add SBU Captions Photo Dataset | null | closed | https://github.com/huggingface/datasets/pull/4130 | 2022-04-08T15:17:39 | 2022-04-12T10:47:31 | 2022-04-12T10:41:29 | {
"login": "thomasw21",
"id": 24695242,
"type": "User"
} | [] | true | [] |
1,197,376,796 | 4,129 | dataset metadata for reproducibility | When pulling a dataset from the hub, it would be useful to have some metadata about the specific dataset and version that is used. The metadata could then be passed to the `Trainer` which could then be saved to a model card. This is useful for people who run many experiments on different versions (commits/branches) of ... | open | https://github.com/huggingface/datasets/issues/4129 | 2022-04-08T14:17:28 | 2023-09-29T09:23:56 | null | {
"login": "nbroad1881",
"id": 24982805,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,197,326,311 | 4,128 | More robust `cast_to_python_objects` in `TypedSequence` | Adds a fallback to run an expensive version of `cast_to_python_objects` which exhaustively checks entire lists to avoid the `ArrowInvalid: Could not convert` error in `TypedSequence`. Currently, this error can happen in situations where only some images are decoded in `map`, in which case `cast_to_python_objects` fails... | closed | https://github.com/huggingface/datasets/pull/4128 | 2022-04-08T13:33:35 | 2022-04-13T14:07:41 | 2022-04-13T14:01:16 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,197,297,756 | 4,127 | Add configs with processed data in medical_dialog dataset | There exist processed data files that do not require parsing the raw data files (which can take long time).
Fix #4122. | closed | https://github.com/huggingface/datasets/pull/4127 | 2022-04-08T13:08:16 | 2022-05-06T08:39:50 | 2022-04-08T16:20:51 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,196,665,194 | 4,126 | dataset viewer issue for common_voice | ## Dataset viewer issue for 'common_voice'
**Link:** https://huggingface.co/datasets/common_voice
Server Error
Status code: 400
Exception: TypeError
Message: __init__() got an unexpected keyword argument 'audio_column'
Am I the one who added this dataset ? No
| closed | https://github.com/huggingface/datasets/issues/4126 | 2022-04-07T23:34:28 | 2022-04-25T13:42:17 | 2022-04-25T13:42:16 | {
"login": "laphang",
"id": 24724502,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
},
{
"name": "audio_column",
"color": "F83ACF"
}
] | false | [] |
1,196,633,936 | 4,125 | BIG-bench | This PR adds all BIG-bench json tasks to huggingface/datasets. | closed | https://github.com/huggingface/datasets/pull/4125 | 2022-04-07T22:33:30 | 2022-06-08T17:57:48 | 2022-06-08T17:32:32 | {
"login": "andersjohanandreassen",
"id": 43357549,
"type": "User"
} | [] | true | [] |
1,196,469,842 | 4,124 | Image decoding often fails when transforming Image datasets | ## Describe the bug
When transforming/modifying images in an image dataset using the `map` function the PIL images often fail to decode in time for the image transforms, causing errors.
Using a debugger it is easy to see what the problem is, the Image decode invocation does not take place and the resulting image pa... | closed | https://github.com/huggingface/datasets/issues/4124 | 2022-04-07T19:17:25 | 2022-04-13T14:01:16 | 2022-04-13T14:01:16 | {
"login": "RafayAK",
"id": 17025191,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,196,367,512 | 4,123 | Building C4 takes forever | ## Describe the bug
C4-en is a 300 GB dataset. However, when I try to download it through the hub it takes over _six hours_ to generate the train/test split from the downloaded files. This is an absurd amount of time and an unnecessary waste of resources.
## Steps to reproduce the bug
```python
c4 = datasets.load... | closed | https://github.com/huggingface/datasets/issues/4123 | 2022-04-07T17:41:30 | 2023-06-26T22:01:29 | 2023-06-26T22:01:29 | {
"login": "StellaAthena",
"id": 15899312,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
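A common way around the multi-hour split generation described in #4123 is streaming, which skips preparing local Arrow files entirely; a hedged sketch (the `"c4"` / `"en"` names are taken from the issue, and streaming trades preparation time for per-example network reads):

```python
from datasets import load_dataset

c4_stream = load_dataset("c4", "en", split="train", streaming=True)
for example in c4_stream:
    print(example["text"][:100])
    break
```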
1,196,095,072 | 4,122 | medical_dialog zh has very slow _generate_examples | ## Describe the bug
After downloading the files from Google Drive, `load_dataset("medical_dialog", "zh", data_dir="./")` takes an unreasonable amount of time. Generating the train/test split for 33% of the dataset takes over 4.5 hours.
## Steps to reproduce the bug
The easiest way I've found to download files from... | closed | https://github.com/huggingface/datasets/issues/4122 | 2022-04-07T14:00:51 | 2022-04-08T16:20:51 | 2022-04-08T16:20:51 | {
"login": "nbroad1881",
"id": 24982805,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,196,000,018 | 4,121 | datasets.load_metric cannot load a local metric | ## Describe the bug
No matter how hard I try to tell load_metric that I want to load a local metric file, it still tries to fetch things from the Internet, and unfortunately it says 'ConnectionError: Couldn't reach'. However, I can download this file without a connection error and point load_metric to its local directory. A... | closed | https://github.com/huggingface/datasets/issues/4121 | 2022-04-07T12:48:56 | 2023-01-18T14:30:46 | 2022-04-07T13:53:27 | {
"login": "SadGare",
"id": 51749469,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
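For #4121 above, `load_metric` also accepts a local path to a metric script; a minimal sketch where the path is purely illustrative:

```python
from datasets import load_metric

# Point load_metric at a local copy of the metric script instead of a hub name.
metric = load_metric("./metrics/accuracy/accuracy.py")
```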
1,195,887,430 | 4,120 | Representing dictionaries (json) objects as features | In the process of adding a new dataset to the hub, I stumbled upon the inability to represent dictionaries that contain different key names, unknown in advance (and may differ between samples), originally asked in the [forum](https://discuss.huggingface.co/t/representing-nested-dictionary-with-different-keys/16442).
F... | open | https://github.com/huggingface/datasets/issues/4120 | 2022-04-07T11:07:41 | 2022-04-07T11:07:41 | null | {
"login": "yanaiela",
"id": 8031035,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
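One workaround sometimes used for the request in #4120 (not a feature of the library itself, just a sketch) is to store the free-form dictionary as a JSON-encoded string column and decode it on access:

```python
import json
from datasets import Dataset, Features, Value

features = Features({"metadata": Value("string")})
ds = Dataset.from_dict(
    {"metadata": [json.dumps({"a": 1}), json.dumps({"b": 2, "c": 3})]},
    features=features,
)
record = json.loads(ds[0]["metadata"])  # keys can differ per sample
```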
1,195,641,298 | 4,119 | Hotfix failing CI tests on Windows | This PR makes a hotfix for our CI Windows tests: https://app.circleci.com/pipelines/github/huggingface/datasets/11092/workflows/9cfdb1dd-0fec-4fe0-8122-5f533192ebdc/jobs/67414
Fix #4118
I guess this issue is related to this PR:
- huggingface/huggingface_hub#815 | closed | https://github.com/huggingface/datasets/pull/4119 | 2022-04-07T07:38:46 | 2022-04-07T09:47:24 | 2022-04-07T07:57:13 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,195,638,944 | 4,118 | Failing CI tests on Windows | ## Describe the bug
Our CI Windows tests are failing from yesterday: https://app.circleci.com/pipelines/github/huggingface/datasets/11092/workflows/9cfdb1dd-0fec-4fe0-8122-5f533192ebdc/jobs/67414
| closed | https://github.com/huggingface/datasets/issues/4118 | 2022-04-07T07:36:25 | 2022-04-07T07:57:13 | 2022-04-07T07:57:13 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,195,552,406 | 4,117 | AttributeError: module 'huggingface_hub' has no attribute 'hf_api' | ## Describe the bug
Could you help me please? I got the following error.
AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
## Steps to reproduce the bug
when I imported the datasets
# Sample code to reproduce the bug
from datasets import list_datasets, load_dataset, list_metrics, load_metr... | closed | https://github.com/huggingface/datasets/issues/4117 | 2022-04-07T05:52:36 | 2024-05-07T09:24:35 | 2022-04-19T15:36:35 | {
"login": "arymbe",
"id": 4567991,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,194,926,459 | 4,116 | Pretty print dataset info files | Adds indentation to the `dataset_infos.json` file when saving for nicer diffs.
(suggested by @julien-c)
This PR also updates the info files of the GH datasets. Note that this change adds more than **10 MB** to the repo size (the total file size before the change: 29.672298 MB, after: 41.666475 MB), so I'm not sur... | closed | https://github.com/huggingface/datasets/pull/4116 | 2022-04-06T17:40:48 | 2022-04-08T11:28:01 | 2022-04-08T11:21:53 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,194,907,555 | 4,115 | ImageFolder add option to ignore some folders like '.ipynb_checkpoints' | **Is your feature request related to a problem? Please describe.**
I sometimes like to peek at the dataset images from JupyterLab, thus a '.ipynb_checkpoints' folder appears where my dataset is and (I just realized) leads to accidental duplicate image additions. I think this is an easy enough thing to miss, especially if t... | closed | https://github.com/huggingface/datasets/issues/4115 | 2022-04-06T17:29:43 | 2022-06-01T13:04:16 | 2022-06-01T13:04:16 | {
"login": "cceyda",
"id": 15624271,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,194,855,345 | 4,114 | Allow downloading just some columns of a dataset | **Is your feature request related to a problem? Please describe.**
Some people are interested in doing label analysis of a CV dataset without downloading all the images. Downloading the whole dataset does not always make sense for this kind of use case
**Describe the solution you'd like**
Be able to just download... | open | https://github.com/huggingface/datasets/issues/4114 | 2022-04-06T16:38:46 | 2025-02-17T15:10:56 | null | {
"login": "osanseviero",
"id": 7246357,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,194,843,532 | 4,113 | Multiprocessing with FileLock fails in python 3.9 | On python 3.9, this code hangs:
```python
from multiprocessing import Pool
from filelock import FileLock
def run(i):
print(f"got the lock in multi process [{i}]")
with FileLock("tmp.lock"):
with Pool(2) as pool:
pool.map(run, range(2))
```
This is because the subprocesses try to ac... | closed | https://github.com/huggingface/datasets/issues/4113 | 2022-04-06T16:27:09 | 2022-11-28T11:49:14 | 2022-11-28T11:49:14 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,194,752,765 | 4,112 | ImageFolder with Grayscale images dataset | Hi, I'm facing a problem with a grayscale images dataset I have uploaded [here](https://huggingface.co/datasets/ChainYo/rvl-cdip) (RVL-CDIP)
I'm getting an error while I want to use images for training a model with PyTorch DataLoader. Here is the full traceback:
```bash
AttributeError: Caught AttributeError in D... | closed | https://github.com/huggingface/datasets/issues/4112 | 2022-04-06T15:10:00 | 2022-04-22T10:21:53 | 2022-04-22T10:21:52 | {
"login": "chainyo",
"id": 50595514,
"type": "User"
} | [] | false | [] |
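A frequent fix for the grayscale problem in #4112 is converting images to RGB on the fly; a hedged sketch where the `data_dir` path is illustrative:

```python
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="path/to/rvl-cdip")["train"]

def to_rgb(batch):
    # Single-channel PIL images become 3-channel RGB, which is what most
    # vision models and default collators expect.
    batch["image"] = [img.convert("RGB") for img in batch["image"]]
    return batch

ds = ds.with_transform(to_rgb)
```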
1,194,660,699 | 4,111 | Update security policy | null | closed | https://github.com/huggingface/datasets/pull/4111 | 2022-04-06T13:59:51 | 2022-04-07T09:46:30 | 2022-04-07T09:40:27 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,194,581,375 | 4,110 | Matthews Correlation Metric Card | null | closed | https://github.com/huggingface/datasets/pull/4110 | 2022-04-06T12:59:35 | 2022-05-03T13:43:17 | 2022-05-03T13:36:13 | {
"login": "emibaylor",
"id": 27527747,
"type": "User"
} | [] | true | [] |
1,194,579,257 | 4,109 | Add Spearmanr Metric Card | null | closed | https://github.com/huggingface/datasets/pull/4109 | 2022-04-06T12:57:53 | 2022-05-03T16:50:26 | 2022-05-03T16:43:37 | {
"login": "emibaylor",
"id": 27527747,
"type": "User"
} | [] | true | [] |
1,194,578,584 | 4,108 | Perplexity Speedup | This PR makes necessary changes to perplexity such that:
- it runs much faster (via batching)
- it throws an error when input is empty, or when input is one word without <BOS> token
- it adds the option to add a <BOS> token
Issues:
- The values returned are extremely high, and I'm worried they aren't correct. Ev... | closed | https://github.com/huggingface/datasets/pull/4108 | 2022-04-06T12:57:21 | 2022-04-20T13:00:54 | 2022-04-20T12:54:42 | {
"login": "emibaylor",
"id": 27527747,
"type": "User"
} | [] | true | [] |
1,194,484,885 | 4,107 | Unable to view the dataset and loading the same dataset throws the error - ArrowInvalid: Exceeded maximum rows | ## Dataset viewer issue - -ArrowInvalid: Exceeded maximum rows
**Link:** *https://huggingface.co/datasets/Pavithree/explainLikeImFive*
*This is the subset of original eli5 dataset https://huggingface.co/datasets/vblagoje/lfqa. I just filtered the data samples which belongs to one particular subreddit thread. How... | closed | https://github.com/huggingface/datasets/issues/4107 | 2022-04-06T11:37:15 | 2022-04-08T07:13:07 | 2022-04-06T14:39:55 | {
"login": "Pavithree",
"id": 23344465,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,194,393,892 | 4,106 | Support huggingface_hub 0.5 | Following https://github.com/huggingface/datasets/issues/4105
`huggingface_hub` deprecated some parameters in `HfApi` in 0.5. This PR updates all the calls to HfApi to remove all the deprecations, <s>and I set the `hugginface_hub` requirement to `>=0.5.0`</s>
cc @adrinjalali @LysandreJik | closed | https://github.com/huggingface/datasets/pull/4106 | 2022-04-06T10:15:25 | 2022-04-08T10:28:43 | 2022-04-08T10:22:23 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,194,297,119 | 4,105 | push to hub fails with huggingface-hub 0.5.0 | ## Describe the bug
`ds.push_to_hub` is failing when updating a dataset in the form "org_id/repo_id"
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("rubrix/news_test")
ds.push_to_hub("<your-user>/news_test", token="<your-token>")
```
## Expected results
The data... | closed | https://github.com/huggingface/datasets/issues/4105 | 2022-04-06T08:59:57 | 2022-04-13T14:30:47 | 2022-04-13T14:30:47 | {
"login": "frascuchon",
"id": 2518789,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,194,072,966 | 4,104 | Add time series data - stock market | ## Adding a Time Series Dataset
- **Name:** 2min ticker data for stock market
- **Description:** 8 stocks' data collected for 1 month after the Ukraine-Russia war began. 4 NSE stocks and 4 NASDAQ stocks, along with technical indicators (additional features) as shown in the image below
- **Data:** Collected by myself from investing... | open | https://github.com/huggingface/datasets/issues/4104 | 2022-04-06T05:46:58 | 2024-07-21T16:54:30 | null | {
"login": "rozeappletree",
"id": 45640029,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,193,987,104 | 4,103 | Add the `GSM8K` dataset | null | closed | https://github.com/huggingface/datasets/pull/4103 | 2022-04-06T04:07:52 | 2022-04-12T15:38:28 | 2022-04-12T10:21:16 | {
"login": "jon-tow",
"id": 41410219,
"type": "User"
} | [] | true | [] |
1,193,616,722 | 4,102 | [hub] Fix `api.create_repo` call? | null | closed | https://github.com/huggingface/datasets/pull/4102 | 2022-04-05T19:21:52 | 2023-09-24T10:01:14 | 2022-04-12T08:41:46 | {
"login": "julien-c",
"id": 326577,
"type": "User"
} | [] | true | [] |
1,193,399,204 | 4,101 | How can I download only the train and test split for full numbers using load_dataset()? | How can I download only the train and test split for full numbers using load_dataset()?
I do not need the extra split, and it takes 40 minutes just to download in Colab. I have very little time in hand. Please help. | open | https://github.com/huggingface/datasets/issues/4101 | 2022-04-05T16:00:15 | 2022-04-06T13:09:01 | null | {
"login": "Nakkhatra",
"id": 64383902,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
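For questions like #4101 above, `load_dataset` can return only the named splits; a sketch with an illustrative dataset name (note that, depending on the dataset script, the files for other splits may still be downloaded even if they are not prepared):

```python
from datasets import load_dataset

train_ds, test_ds = load_dataset("mnist", split=["train", "test"])
```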
1,193,393,959 | 4,100 | Improve RedCaps dataset card | This PR modifies the RedCaps card to:
* fix the formatting of the Point of Contact fields on the Hub
* speed up the image fetching logic (aligns it with the [img2dataset](https://github.com/rom1504/img2dataset) tool) and make it more robust (return None if **any** exception is thrown) | closed | https://github.com/huggingface/datasets/pull/4100 | 2022-04-05T15:57:14 | 2022-04-13T14:08:54 | 2022-04-13T14:02:26 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,193,253,768 | 4,099 | UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128) | ## Describe the bug
Error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)" is thrown when downloading dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset("nielsr/XFUN", "xfun.ja")
```
## Expected resu... | closed | https://github.com/huggingface/datasets/issues/4099 | 2022-04-05T14:42:38 | 2022-04-06T06:37:44 | 2022-04-06T06:35:54 | {
"login": "andreybond",
"id": 20210017,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,193,245,522 | 4,098 | Proposing WikiSplit metric card | Pinging @lhoestq to ensure that my distinction between the dataset and the metric are clear :sweat_smile: | closed | https://github.com/huggingface/datasets/pull/4098 | 2022-04-05T14:36:34 | 2022-10-11T09:10:21 | 2022-04-05T15:42:28 | {
"login": "sashavor",
"id": 14205986,
"type": "User"
} | [] | true | [] |
1,193,205,751 | 4,097 | Updating FrugalScore metric card | removing duplicate paragraph | closed | https://github.com/huggingface/datasets/pull/4097 | 2022-04-05T14:09:24 | 2022-04-05T15:07:35 | 2022-04-05T15:01:46 | {
"login": "sashavor",
"id": 14205986,
"type": "User"
} | [] | true | [] |
1,193,165,229 | 4,096 | Add support for streaming Zarr stores for hosted datasets | **Is your feature request related to a problem? Please describe.**
Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support streaming in data in Zarr format as far as I can tell. Zarr s... | closed | https://github.com/huggingface/datasets/issues/4096 | 2022-04-05T13:38:32 | 2023-12-07T09:01:49 | 2022-04-21T08:12:58 | {
"login": "jacobbieker",
"id": 7170359,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,192,573,353 | 4,095 | fix typo in rename_column error message | I feel bad submitting such a tiny change as a PR but it confused me today 😄 | closed | https://github.com/huggingface/datasets/pull/4095 | 2022-04-05T03:55:56 | 2022-04-05T08:54:46 | 2022-04-05T08:45:53 | {
"login": "hunterlang",
"id": 680821,
"type": "User"
} | [] | true | [] |
1,192,534,414 | 4,094 | Helo Mayfrends | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reas... | closed | https://github.com/huggingface/datasets/issues/4094 | 2022-04-05T02:42:57 | 2022-04-05T07:16:42 | 2022-04-05T07:16:42 | {
"login": "Budigming",
"id": 102933353,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,192,523,161 | 4,093 | elena-soare/crawled-ecommerce: missing dataset | elena-soare/crawled-ecommerce
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| closed | https://github.com/huggingface/datasets/issues/4093 | 2022-04-05T02:25:19 | 2022-04-12T09:34:53 | 2022-04-12T09:34:53 | {
"login": "seevaratnam",
"id": 17519354,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,192,499,903 | 4,092 | Fix dataset `amazon_us_reviews` metadata - 4/4/2022 | Fixes #4048 by running `dataset-cli test` to reprocess data and regenerate metadata. Additionally I've updated the README to include up-to-date counts for the subsets. | closed | https://github.com/huggingface/datasets/pull/4092 | 2022-04-05T01:39:45 | 2022-04-08T12:35:41 | 2022-04-08T12:29:31 | {
"login": "trentonstrong",
"id": 191985,
"type": "User"
} | [] | true | [] |
1,192,023,855 | 4,091 | Build a Dataset One Example at a Time Without Loading All Data Into Memory | **Is your feature request related to a problem? Please describe.**
I have a very large dataset stored on disk in a custom format. I have some custom code that reads one data example at a time and yields it in the form of a dictionary. I want to construct a `Dataset` with all examples, and then save it to disk. I la... | closed | https://github.com/huggingface/datasets/issues/4091 | 2022-04-04T16:19:24 | 2022-04-20T14:31:00 | 2022-04-20T14:31:00 | {
"login": "aravind-tonita",
"id": 99340348,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
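Newer releases of `datasets` cover the request in #4091 with `Dataset.from_generator`, which writes examples to Arrow as they are yielded; a minimal sketch with a hypothetical reader function:

```python
from datasets import Dataset

def read_examples():
    # Replace with custom code that yields one example dict at a time.
    for i in range(1000):
        yield {"id": i, "text": f"example {i}"}

ds = Dataset.from_generator(read_examples)
ds.save_to_disk("my_dataset")
```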
1,191,956,734 | 4,090 | Avoid writing empty license files | This PR avoids the creation of empty `LICENSE` files. | closed | https://github.com/huggingface/datasets/pull/4090 | 2022-04-04T15:23:37 | 2022-04-07T12:46:45 | 2022-04-07T12:40:43 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,191,915,196 | 4,089 | Create metric card for Frugal Score | Proposing metric card for Frugal Score.
@albertvillanova or @lhoestq -- there are certain aspects that I'm not 100% sure on (such as how exactly the distillation between BertScore and FrugalScore is done) -- so if you find that something isn't clear, please let me know! | closed | https://github.com/huggingface/datasets/pull/4089 | 2022-04-04T14:53:49 | 2022-04-05T14:14:46 | 2022-04-05T14:06:50 | {
"login": "sashavor",
"id": 14205986,
"type": "User"
} | [] | true | [] |
1,191,901,172 | 4,088 | Remove unused legacy Beam utils | This PR removes unused legacy custom `WriteToParquet`, once official Apache Beam includes the patch since version 2.22.0:
- Patch PR: https://github.com/apache/beam/pull/11699
- Issue: https://issues.apache.org/jira/browse/BEAM-10022
In relation with:
- #204 | closed | https://github.com/huggingface/datasets/pull/4088 | 2022-04-04T14:43:51 | 2022-04-05T15:23:27 | 2022-04-05T15:17:41 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,191,819,805 | 4,087 | Fix BeamWriter output Parquet file | Until now, the `BeamWriter` saved a Parquet file with a simplified schema, where each field value was serialized to JSON. That resulted in Parquet files larger than Arrow files.
This PR:
- writes Parquet file preserving original schema and without serialization, thus avoiding serialization overhead and resulting in... | closed | https://github.com/huggingface/datasets/pull/4087 | 2022-04-04T13:46:50 | 2022-04-05T15:00:40 | 2022-04-05T14:54:48 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,191,373,374 | 4,086 | Dataset viewer issue for McGill-NLP/feedbackQA | ## Dataset viewer issue for '*McGill-NLP/feedbackQA*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/McGill-NLP/feedbackQA)*
*short description of the issue*
The dataset can be loaded correctly with `load_dataset` but the preview doesn't work. Error message:
```
Status code: 4... | closed | https://github.com/huggingface/datasets/issues/4086 | 2022-04-04T07:27:20 | 2022-04-04T22:29:53 | 2022-04-04T08:01:45 | {
"login": "cslizc",
"id": 54827718,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,190,621,345 | 4,085 | datasets.set_progress_bar_enabled(False) not working in datasets v2 | ## Describe the bug
datasets.set_progress_bar_enabled(False) not working in datasets v2
## Steps to reproduce the bug
```python
datasets.set_progress_bar_enabled(False)
```
## Expected results
datasets not using any progress bar
## Actual results
AttributeError: module 'datasets' has no attribute 'se... | closed | https://github.com/huggingface/datasets/issues/4085 | 2022-04-02T12:40:10 | 2022-09-17T02:18:03 | 2022-04-04T06:44:34 | {
"login": "virilo",
"id": 3381112,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
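The replacement API for #4085 in datasets v2 is an enable/disable pair; a short sketch, assuming the helpers exposed in `datasets.utils.logging`:

```python
from datasets.utils.logging import disable_progress_bar, enable_progress_bar

disable_progress_bar()
# ... run processing without progress bars ...
enable_progress_bar()
```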
1,190,060,415 | 4,084 | Errors in `Train with Datasets` Tensorflow code section on Huggingface.co | ## Describe the bug
Hi
### Error 1
Running the TensorFlow code on [Huggingface](https://huggingface.co/docs/datasets/use_dataset) gives a TypeError: __init__() got an unexpected keyword argument 'return_tensors'
### Error 2
`DataCollatorWithPadding` isn't imported
## Steps to reproduce the bug
```python
impo... | closed | https://github.com/huggingface/datasets/issues/4084 | 2022-04-01T17:02:47 | 2022-04-04T07:24:37 | 2022-04-04T07:21:31 | {
"login": "blackhat-coder",
"id": 57095771,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
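A hedged sketch of the two fixes #4084 asks for in the documentation snippet: importing `DataCollatorWithPadding` and passing `return_tensors="tf"` to the collator (this assumes a transformers release recent enough for the collator to accept that argument):

```python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
```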