| id (int64) | number (int64) | title (string) | body (string) | state (string) | html_url (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | user (dict) | labels (list) | is_pull_request (bool) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
962,994,198 | 2,766 | fix typo (ShuffingConfig -> ShufflingConfig) | pretty straightforward, it should be Shuffling instead of Shuffing | closed | https://github.com/huggingface/datasets/pull/2766 | 2021-08-06T19:31:40 | 2021-08-10T14:17:03 | 2021-08-10T14:17:02 | {
"login": "daleevans",
"id": 4944007,
"type": "User"
} | [] | true | [] |
962,861,395 | 2,765 | BERTScore Error | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
from datasets import load_metric

predictions = ["hello there", "general kenobi"]
references = ["hello there", "general kenobi"]
bert = load_metric('bertscore')
bert.compute(predictions=predictions, references=references, lang='en')
... | closed | https://github.com/huggingface/datasets/issues/2765 | 2021-08-06T15:58:57 | 2021-08-09T11:16:25 | 2021-08-09T11:16:25 | {
"login": "gagan3012",
"id": 49101362,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
962,554,799 | 2,764 | Add DER metric for SUPERB speaker diarization task | null | closed | https://github.com/huggingface/datasets/pull/2764 | 2021-08-06T09:12:36 | 2023-07-11T09:35:23 | 2023-07-11T09:35:23 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "transfer-to-evaluate",
"color": "E3165C"
}
] | true | [] |
961,895,523 | 2,763 | English Wikipedia dataset is not clean | ## Describe the bug
Wikipedia English dumps contain many Wikipedia paragraphs like "References", "Category:" and "See Also" that should not be used for training.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
w = load_dataset('wikipedia', '20200501.e... | closed | https://github.com/huggingface/datasets/issues/2763 | 2021-08-05T14:37:24 | 2023-07-25T17:43:04 | 2023-07-25T17:43:04 | {
"login": "lucadiliello",
"id": 23355969,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
961,652,046 | 2,762 | Add RVL-CDIP dataset | ## Adding a Dataset
- **Name:** RVL-CDIP
- **Description:** The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The image... | closed | https://github.com/huggingface/datasets/issues/2762 | 2021-08-05T09:57:05 | 2022-04-21T17:15:41 | 2022-04-21T17:15:41 | {
"login": "NielsRogge",
"id": 48327001,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "vision",
"color": "bfdadc"
}
] | false | [] |
961,568,287 | 2,761 | Error loading C4 realnewslike dataset | ## Describe the bug
Error loading C4 realnewslike dataset. Validation part mismatch
## Steps to reproduce the bug
```python
from datasets import load_dataset

raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir)
```
## Expected results
success on data loading
## Actual results
Downloading: 100%|███████████████████████... | closed | https://github.com/huggingface/datasets/issues/2761 | 2021-08-05T08:16:58 | 2021-08-08T19:44:34 | 2021-08-08T19:44:34 | {
"login": "danshirron",
"id": 32061512,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
961,372,667 | 2,760 | Add Nuswide dataset | ## Adding a Dataset
- **Name:** *NUSWIDE*
- **Description:** *[A Real-World Web Image Dataset from National University of Singapore](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/NUS-WIDE.html)*
- **Paper:** *[here](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/nuswide-c... | open | https://github.com/huggingface/datasets/issues/2760 | 2021-08-05T03:00:41 | 2021-12-08T12:06:23 | null | {
"login": "shivangibithel",
"id": 19774925,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "vision",
"color": "bfdadc"
}
] | false | [] |
960,206,575 | 2,758 | Raise ManualDownloadError when loading a dataset that requires previous manual download | This PR implements the raising of a `ManualDownloadError` when loading a dataset that requires previous manual download, and this is missing.
The `ManualDownloadError` is raised whether the dataset is loaded in normal or streaming mode.
Close #2749.
cc: @severo | closed | https://github.com/huggingface/datasets/pull/2758 | 2021-08-04T10:19:55 | 2021-08-04T11:36:30 | 2021-08-04T11:36:30 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
959,984,081 | 2,757 | Unexpected type after `concatenate_datasets` | ## Describe the bug
I am trying to concatenate two `Dataset` objects using `concatenate_datasets`, but it turns out that after concatenation the features are cast from `torch.Tensor` to `list`.
It then leads to weird tensors when trying to convert it to a `DataLoader`. However, if I use each `Dataset` separately everythi... | closed | https://github.com/huggingface/datasets/issues/2757 | 2021-08-04T07:10:39 | 2021-08-04T16:01:24 | 2021-08-04T16:01:23 | {
"login": "JulesBelveze",
"id": 32683010,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
959,255,646 | 2,756 | Fix metadata JSON for ubuntu_dialogs_corpus dataset | Related to #2743. | closed | https://github.com/huggingface/datasets/pull/2756 | 2021-08-03T15:48:59 | 2021-08-04T09:43:25 | 2021-08-04T09:43:25 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
959,115,888 | 2,755 | Fix metadata JSON for turkish_movie_sentiment dataset | Related to #2743. | closed | https://github.com/huggingface/datasets/pull/2755 | 2021-08-03T13:25:44 | 2021-08-04T09:06:54 | 2021-08-04T09:06:53 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
959,105,577 | 2,754 | Generate metadata JSON for telugu_books dataset | Related to #2743. | closed | https://github.com/huggingface/datasets/pull/2754 | 2021-08-03T13:14:52 | 2021-08-04T08:49:02 | 2021-08-04T08:49:02 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
959,036,995 | 2,753 | Generate metadata JSON for reclor dataset | Related to #2743. | closed | https://github.com/huggingface/datasets/pull/2753 | 2021-08-03T11:52:29 | 2021-08-04T08:07:15 | 2021-08-04T08:07:15 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
959,023,608 | 2,752 | Generate metadata JSON for lm1b dataset | Related to #2743. | closed | https://github.com/huggingface/datasets/pull/2752 | 2021-08-03T11:34:56 | 2021-08-04T06:40:40 | 2021-08-04T06:40:39 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
959,021,262 | 2,751 | Update metadata for wikihow dataset | Update metadata for wikihow dataset:
- Remove leading new line character in description and citation
- Update metadata JSON
- Remove no longer necessary `urls_checksums/checksums.txt` file
Related to #2748. | closed | https://github.com/huggingface/datasets/pull/2751 | 2021-08-03T11:31:57 | 2021-08-03T15:52:09 | 2021-08-03T15:52:09 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
958,984,730 | 2,750 | Second concatenation of datasets produces errors | Hi,
I need to concatenate my dataset with others several times, and after I concatenate it for the second time, the features of features (e.g. tag names) are collapsed. This hinders, for instance, the usage of a tokenize function with `data.map`.
```
from datasets import load_dataset, concatenate_datasets
d... | closed | https://github.com/huggingface/datasets/issues/2750 | 2021-08-03T10:47:04 | 2022-01-19T14:23:43 | 2022-01-19T14:19:05 | {
"login": "Aktsvigun",
"id": 36672861,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
958,968,748 | 2,749 | Raise a proper exception when trying to stream a dataset that requires to manually download files | ## Describe the bug
At least for 'reclor', 'telugu_books', 'turkish_movie_sentiment', 'ubuntu_dialogs_corpus', 'wikihow', trying to `load_dataset` in streaming mode raises a `TypeError` without any detail about why it fails.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = ... | closed | https://github.com/huggingface/datasets/issues/2749 | 2021-08-03T10:26:27 | 2021-08-09T08:53:35 | 2021-08-04T11:36:30 | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
958,889,041 | 2,748 | Generate metadata JSON for wikihow dataset | Related to #2743. | closed | https://github.com/huggingface/datasets/pull/2748 | 2021-08-03T08:55:40 | 2021-08-03T10:17:51 | 2021-08-03T10:17:51 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
958,867,627 | 2,747 | add multi-proc in `to_json` | Closes #2663. I've tried adding multiprocessing in `to_json`. Here's some benchmarking I did to compare the timings of current version (say v1) and multi-proc version (say v2). I did this with `cpu_count` 4 (2015 Macbook Air)
1. Dataset name: `ascent_kb` - 8.9M samples (all samples were used, reporting this for a si... | closed | https://github.com/huggingface/datasets/pull/2747 | 2021-08-03T08:30:13 | 2021-10-19T18:24:21 | 2021-09-13T13:56:37 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [] | true | [] |
958,551,619 | 2,746 | Cannot load `few-nerd` dataset | ## Describe the bug
Cannot load `few-nerd` dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset('few-nerd', 'supervised')
```
## Actual results
Executing above code will give the following error:
```
Using the latest cached version of the module from /Users... | closed | https://github.com/huggingface/datasets/issues/2746 | 2021-08-02T22:18:57 | 2021-11-16T08:51:34 | 2021-08-03T19:45:43 | {
"login": "Mehrad0711",
"id": 28717374,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
958,269,579 | 2,745 | added semeval18_emotion_classification dataset | I added the data set of SemEval 2018 Task 1 (Subtask 5) for emotion detection in three languages.
```
datasets-cli test datasets/semeval18_emotion_classification/ --save_infos --all_configs
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_semeval18_emotion_classification
... | closed | https://github.com/huggingface/datasets/pull/2745 | 2021-08-02T15:39:55 | 2021-10-29T09:22:05 | 2021-09-21T09:48:35 | {
"login": "maxpel",
"id": 31095360,
"type": "User"
} | [] | true | [] |
958,146,637 | 2,744 | Fix key by recreating metadata JSON for journalists_questions dataset | Close #2743. | closed | https://github.com/huggingface/datasets/pull/2744 | 2021-08-02T13:27:53 | 2021-08-03T09:25:34 | 2021-08-03T09:25:33 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
958,119,251 | 2,743 | Dataset JSON is incorrect | ## Describe the bug
The JSON file generated for https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/journalists_questions.py is https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/dataset... | closed | https://github.com/huggingface/datasets/issues/2743 | 2021-08-02T13:01:26 | 2021-08-03T10:06:57 | 2021-08-03T09:25:33 | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
958,114,064 | 2,742 | Improve detection of streamable file types | **Is your feature request related to a problem? Please describe.**
```python
from datasets import load_dataset_builder
from datasets.utils.streaming_download_manager import StreamingDownloadManager
builder = load_dataset_builder("journalists_questions", name="plain_text")
builder._split_generators(StreamingDownl... | closed | https://github.com/huggingface/datasets/issues/2742 | 2021-08-02T12:55:09 | 2021-11-12T17:18:10 | 2021-11-12T17:18:10 | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
957,979,559 | 2,741 | Add Hypersim dataset | ## Adding a Dataset
- **Name:** Hypersim
- **Description:** photorealistic synthetic dataset for holistic indoor scene understanding
- **Paper:** *link to the dataset paper if available*
- **Data:** https://github.com/apple/ml-hypersim
Instructions to add a new dataset can be found [here](https://github.com/hugg... | open | https://github.com/huggingface/datasets/issues/2741 | 2021-08-02T10:06:50 | 2021-12-08T12:06:51 | null | {
"login": "osanseviero",
"id": 7246357,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "vision",
"color": "bfdadc"
}
] | false | [] |
957,911,035 | 2,740 | Update release instructions | Update release instructions. | closed | https://github.com/huggingface/datasets/pull/2740 | 2021-08-02T08:46:00 | 2021-08-02T14:39:56 | 2021-08-02T14:39:56 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
957,751,260 | 2,739 | Pass tokenize to sacrebleu only if explicitly passed by user | Next `sacrebleu` release (v2.0.0) will remove `sacrebleu.DEFAULT_TOKENIZER`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15
This PR passes `tokenize` to `sacrebleu` only if explicitly passed by the user, otherwise it will not pass it (a... | closed | https://github.com/huggingface/datasets/pull/2739 | 2021-08-02T05:09:05 | 2021-08-03T04:23:37 | 2021-08-03T04:23:37 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
957,517,746 | 2,738 | Sunbird AI Ugandan low resource language dataset | Multi-way parallel text corpus of 5 key Ugandan languages for the task of machine translation. | closed | https://github.com/huggingface/datasets/pull/2738 | 2021-08-01T15:18:00 | 2022-10-03T09:37:30 | 2022-10-03T09:37:30 | {
"login": "ak3ra",
"id": 12105163,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
957,124,881 | 2,737 | SacreBLEU update | With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken and raises an error:
AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER'
This happens because the new version of sacrebleu no longer has `DEFAULT_TOKENIZER`, but sacrebleu.py tries... | closed | https://github.com/huggingface/datasets/issues/2737 | 2021-07-30T23:53:08 | 2021-09-22T10:47:41 | 2021-08-03T04:23:37 | {
"login": "devrimcavusoglu",
"id": 46989091,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
956,895,199 | 2,736 | Add Microsoft Building Footprints dataset | ## Adding a Dataset
- **Name:** Microsoft Building Footprints
- **Description:** With the goal to increase the coverage of building footprint data available as open data for OpenStreetMap and humanitarian efforts, we have released millions of building footprints as open data available to download free of charge.
- *... | open | https://github.com/huggingface/datasets/issues/2736 | 2021-07-30T16:17:08 | 2021-12-08T12:09:03 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "vision",
"color": "bfdadc"
}
] | false | [] |
956,889,365 | 2,735 | Add Open Buildings dataset | ## Adding a Dataset
- **Name:** Open Buildings
- **Description:** A dataset of building footprints to support social good applications.
Building footprints are useful for a range of important applications, from population estimation, urban planning and humanitarian response, to environmental and climate science.... | open | https://github.com/huggingface/datasets/issues/2735 | 2021-07-30T16:08:39 | 2021-07-31T05:01:25 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
956,844,874 | 2,734 | Update BibTeX entry | Update BibTeX entry. | closed | https://github.com/huggingface/datasets/pull/2734 | 2021-07-30T15:22:51 | 2021-07-30T15:47:58 | 2021-07-30T15:47:58 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
956,725,476 | 2,733 | Add missing parquet known extension | This code was failing because the parquet extension wasn't recognized:
```python
from datasets import load_dataset
base_url = "https://storage.googleapis.com/huggingface-nlp/cache/datasets/wikipedia/20200501.en/1.0.0/"
data_files = {"train": base_url + "wikipedia-train.parquet"}
wiki = load_dataset("parquet", da... | closed | https://github.com/huggingface/datasets/pull/2733 | 2021-07-30T13:01:20 | 2021-07-30T13:24:31 | 2021-07-30T13:24:30 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
956,676,360 | 2,732 | Updated TTC4900 Dataset | - The source address of the TTC4900 dataset of [@savasy](https://github.com/savasy) has been updated for direct download.
- Updated readme. | closed | https://github.com/huggingface/datasets/pull/2732 | 2021-07-30T11:52:14 | 2021-07-30T16:00:51 | 2021-07-30T15:58:14 | {
"login": "yavuzKomecoglu",
"id": 5150963,
"type": "User"
} | [] | true | [] |
956,087,452 | 2,731 | Adding to_tf_dataset method | Oh my **god** do not merge this yet, it's just a draft.
I've added a method (via a mixin) to the `arrow_dataset.Dataset` class that automatically converts our Dataset classes to TF Dataset classes ready for training. It hopefully has most of the features we want, including streaming from disk (no need to load the wh... | closed | https://github.com/huggingface/datasets/pull/2731 | 2021-07-29T18:10:25 | 2021-09-16T13:50:54 | 2021-09-16T13:50:54 | {
"login": "Rocketknight1",
"id": 12866554,
"type": "User"
} | [] | true | [] |
955,987,834 | 2,730 | Update CommonVoice with new release | ## Adding a Dataset
- **Name:** CommonVoice mid-2021 release
- **Description:** more data in CommonVoice: Languages that have increased the most by percentage are Thai (almost 20x growth, from 12 hours to 250 hours), Luganda (almost 9x growth, from 8 to 80), Esperanto (7x growth, from 100 to 840), and Tamil (almost 8... | open | https://github.com/huggingface/datasets/issues/2730 | 2021-07-29T15:59:59 | 2021-08-07T16:19:19 | null | {
"login": "yjernite",
"id": 10469459,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
955,920,489 | 2,729 | Fix IndexError while loading Arabic Billion Words dataset | Catch `IndexError` and ignore that record.
Close #2727. | closed | https://github.com/huggingface/datasets/pull/2729 | 2021-07-29T14:47:02 | 2021-07-30T13:03:55 | 2021-07-30T13:03:55 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | true | [] |
955,892,970 | 2,728 | Concurrent use of same dataset (already downloaded) | ## Describe the bug
Launching several jobs at the same time that load the same dataset triggers some errors (see last comments).
## Steps to reproduce the bug
export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets
for MODEL in "bert-base-uncased" "roberta-base" "distilbert-base-cased"; do # "bert-base-uncased" ... | open | https://github.com/huggingface/datasets/issues/2728 | 2021-07-29T14:18:38 | 2021-08-02T07:25:57 | null | {
"login": "PierreColombo",
"id": 22492839,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
955,812,149 | 2,727 | Error in loading the Arabic Billion Words Corpus | ## Describe the bug
I get `IndexError: list index out of range` when trying to load the `Techreen` and `Almustaqbal` configs of the dataset.
## Steps to reproduce the bug
```python
load_dataset("arabic_billion_words", "Techreen")
load_dataset("arabic_billion_words", "Almustaqbal")
```
## Expected results
Th... | closed | https://github.com/huggingface/datasets/issues/2727 | 2021-07-29T12:53:09 | 2021-07-30T13:03:55 | 2021-07-30T13:03:55 | {
"login": "M-Salti",
"id": 9285264,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
955,674,388 | 2,726 | Typo fix `tokenize_exemple` | There is a small typo in the main README.md | closed | https://github.com/huggingface/datasets/pull/2726 | 2021-07-29T10:03:37 | 2021-07-29T12:00:25 | 2021-07-29T12:00:25 | {
"login": "shabie",
"id": 30535146,
"type": "User"
} | [] | true | [] |
955,020,776 | 2,725 | Pass use_auth_token to request_etags | Fix #2724. | closed | https://github.com/huggingface/datasets/pull/2725 | 2021-07-28T16:13:29 | 2021-07-28T16:38:02 | 2021-07-28T16:38:02 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
954,919,607 | 2,724 | 404 Error when loading remote data files from private repo | ## Describe the bug
When loading remote data files from a private repo, a 404 error is raised.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from huggingface_hub import hf_hub_url

url = hf_hub_url("lewtun/asr-preds-test", "preds.jsonl", repo_type="dataset")
dset = load_dataset("json", data_files=url, use_auth_token=True)
# HTTPError: 404 Client Error: Not... | closed | https://github.com/huggingface/datasets/issues/2724 | 2021-07-28T14:24:23 | 2021-07-29T04:58:49 | 2021-07-28T16:38:01 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
954,864,104 | 2,723 | Fix en subset by modifying dataset_info with correct validation infos | - Related to: #2682
We correct the values of the `en` subset concerning the expected validation values (both `num_bytes` and `num_examples`).
Instead of having:
`{"name": "validation", "num_bytes": 828589180707, "num_examples": 364868892, "dataset_name": "c4"}`
We replace with correct values:
`{"name": "vali... | closed | https://github.com/huggingface/datasets/pull/2723 | 2021-07-28T13:36:19 | 2021-07-28T15:22:23 | 2021-07-28T15:22:23 | {
"login": "thomasw21",
"id": 24695242,
"type": "User"
} | [] | true | [] |
954,446,053 | 2,722 | Missing cache file | Strangely missing cache file after I restart my program again.
`glue_dataset = datasets.load_dataset('glue', 'sst2')`
`FileNotFoundError: [Errno 2] No such file or directory: /Users/chris/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96d6053ad/dataset_info.json... | closed | https://github.com/huggingface/datasets/issues/2722 | 2021-07-28T03:52:07 | 2022-03-21T08:27:51 | 2022-03-21T08:27:51 | {
"login": "PosoSAgapo",
"id": 33200481,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
954,238,230 | 2,721 | Deal with the bad check in test_load.py | This PR removes a check that's been added in #2684. My intention with this check was to capture an URL in the error message, but instead, it captures a substring of the previous regex match in the test function. Another option would be to replace this check with:
```python
m_paths = re.findall(r"\S*_dummy/_dummy.py\b... | closed | https://github.com/huggingface/datasets/pull/2721 | 2021-07-27T20:23:23 | 2021-07-28T09:58:34 | 2021-07-28T08:53:18 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
954,024,426 | 2,720 | fix: 🐛 fix two typos | closed | https://github.com/huggingface/datasets/pull/2720 | 2021-07-27T15:50:17 | 2021-07-27T18:38:17 | 2021-07-27T18:38:16 | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [] | true | [] | |
953,932,416 | 2,719 | Use ETag in streaming mode to detect resource updates | **Is your feature request related to a problem? Please describe.**
I want to cache data I generate from processing a dataset I've loaded in streaming mode, but I currently have no way to know whether the remote data has been updated, so I don't know when to invalidate my cache.
**Describe the solution you'd lik... | open | https://github.com/huggingface/datasets/issues/2719 | 2021-07-27T14:17:09 | 2021-10-22T09:36:08 | null | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
953,360,663 | 2,718 | New documentation structure | Organize Datasets documentation into four documentation types to improve clarity and discoverability of content.
**Content to add in the very short term (feel free to add anything I'm missing):**
- A discussion on why Datasets uses Arrow that includes some context and background about why we use Arrow. Would also b... | closed | https://github.com/huggingface/datasets/pull/2718 | 2021-07-26T23:15:13 | 2021-09-13T17:20:53 | 2021-09-13T17:20:52 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [] | true | [] |
952,979,976 | 2,717 | Fix shuffle on IterableDataset that disables batching in case any functions were mapped | Made a very minor change to fix the issue#2716. Added the missing argument in the constructor call.
As discussed in the bug report, the change is made to prevent the `shuffle` method call from resetting the value of `batched` attribute in `MappedExamplesIterable`
Fix #2716. | closed | https://github.com/huggingface/datasets/pull/2717 | 2021-07-26T14:42:22 | 2021-07-26T18:04:14 | 2021-07-26T16:30:06 | {
"login": "amankhandelia",
"id": 7098967,
"type": "User"
} | [] | true | [] |
952,902,778 | 2,716 | Calling shuffle on IterableDataset will disable batching in case any functions were mapped | When using dataset in streaming mode, if one applies `shuffle` method on the dataset and `map` method for which `batched=True` than the batching operation will not happen, instead `batched` will be set to `False`
I did RCA on the dataset codebase, the problem is emerging from [this line of code](https://github.com/h... | closed | https://github.com/huggingface/datasets/issues/2716 | 2021-07-26T13:24:59 | 2021-07-26T18:04:43 | 2021-07-26T18:04:43 | {
"login": "amankhandelia",
"id": 7098967,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
952,845,229 | 2,715 | Update PAN-X data URL in XTREME dataset | Related to #2710, #2691. | closed | https://github.com/huggingface/datasets/pull/2715 | 2021-07-26T12:21:17 | 2021-07-26T13:27:59 | 2021-07-26T13:27:59 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
952,580,820 | 2,714 | add more precise information for size | For the import into ELG, we would like a more precise description of the size of the dataset, instead of the current size categories. The size can be expressed in bytes, or any other preferred size unit. As suggested in the slack channel, perhaps this could be computed with a regex for existing datasets. | open | https://github.com/huggingface/datasets/issues/2714 | 2021-07-26T07:11:03 | 2021-07-26T09:16:25 | null | {
"login": "pennyl67",
"id": 1493902,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
952,515,256 | 2,713 | Enumerate all ner_tags values in WNUT 17 dataset | This PR does:
- Enumerate all ner_tags in dataset card Data Fields section
- Add all metadata tags to dataset card
Close #2709. | closed | https://github.com/huggingface/datasets/pull/2713 | 2021-07-26T05:22:16 | 2021-07-26T09:30:55 | 2021-07-26T09:30:55 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
951,723,326 | 2,710 | Update WikiANN data URL | WikiANN data source URL is no longer accessible: 404 error from Dropbox.
We have decided to host it at Hugging Face. This PR updates the data source URL, the metadata JSON file and the dataset card.
Close #2691. | closed | https://github.com/huggingface/datasets/pull/2710 | 2021-07-23T16:29:21 | 2021-07-26T09:34:23 | 2021-07-26T09:34:23 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
951,534,757 | 2,709 | Missing documentation for wnut_17 (ner_tags) | On the info page of the wnut_17 data set (https://huggingface.co/datasets/wnut_17), the model output of ner-tags is only documented for these 5 cases:
`ner_tags: a list of classification labels, with possible values including O (0), B-corporation (1), I-corporation (2), B-creative-work (3), I-creative-work (4).`
... | closed | https://github.com/huggingface/datasets/issues/2709 | 2021-07-23T12:25:32 | 2021-07-26T09:30:55 | 2021-07-26T09:30:55 | {
"login": "maxpel",
"id": 31095360,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
951,092,660 | 2,708 | QASC: incomplete training set | ## Describe the bug
The training instances are not loaded properly.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("qasc", script_version='1.10.2')
def load_instances(split):
instances = dataset[split]
print(f"split: {split} - size: {len(instanc... | closed | https://github.com/huggingface/datasets/issues/2708 | 2021-07-22T21:59:44 | 2021-07-23T13:30:07 | 2021-07-23T13:30:07 | {
"login": "danyaljj",
"id": 2441454,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
950,812,945 | 2,707 | 404 Not Found Error when loading LAMA dataset | The [LAMA](https://huggingface.co/datasets/viewer/?dataset=lama) probing dataset is not available for download:
Steps to Reproduce:
1. `from datasets import load_dataset`
2. `dataset = load_dataset('lama', 'trex')`.
Results:
`FileNotFoundError: Couldn't find file locally at lama/lama.py, or remotely ... | closed | https://github.com/huggingface/datasets/issues/2707 | 2021-07-22T15:52:33 | 2021-07-26T14:29:07 | 2021-07-26T14:29:07 | {
"login": "dwil2444",
"id": 26467159,
"type": "User"
} | [] | false | [] |
950,606,561 | 2,706 | Update BibTeX entry | Update BibTeX entry. | closed | https://github.com/huggingface/datasets/pull/2706 | 2021-07-22T12:29:29 | 2021-07-22T12:43:00 | 2021-07-22T12:43:00 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
950,488,583 | 2,705 | 404 not found error on loading WIKIANN dataset | ## Describe the bug
Unable to retrieve the wikiann English dataset
## Steps to reproduce the bug
```python
from datasets import list_datasets, load_dataset, list_metrics, load_metric
WIKIANN = load_dataset("wikiann","en")
```
## Expected results
Colab notebook should display successful download status
## Act... | closed | https://github.com/huggingface/datasets/issues/2705 | 2021-07-22T09:55:50 | 2021-07-23T08:07:32 | 2021-07-23T08:07:32 | {
"login": "ronbutan",
"id": 39296659,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
950,483,980 | 2,704 | Fix pick default config name message | The error message to tell which config name to load is not displayed.
This is because in the code it was considering the config kwargs to be non-empty, which is a special case for custom configs created on the fly. It appears after this change: https://github.com/huggingface/datasets/pull/2659
I fixed that by ma... | closed | https://github.com/huggingface/datasets/pull/2704 | 2021-07-22T09:49:43 | 2021-07-22T10:02:41 | 2021-07-22T10:02:40 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
950,482,284 | 2,703 | Bad message when config name is missing | When loading a dataset that has several configurations, we expect to see an error message if the user doesn't specify a config name.
However in `datasets` 1.10.0 and 1.10.1 it doesn't show the right message:
```python
import datasets
datasets.load_dataset("glue")
```
raises
```python
AttributeError: 'Bui... | closed | https://github.com/huggingface/datasets/issues/2703 | 2021-07-22T09:47:23 | 2021-07-22T10:02:40 | 2021-07-22T10:02:40 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | false | [] |
950,448,159 | 2,702 | Update BibTeX entry | Update BibTeX entry. | closed | https://github.com/huggingface/datasets/pull/2702 | 2021-07-22T09:04:39 | 2021-07-22T09:17:39 | 2021-07-22T09:17:38 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
950,422,403 | 2,701 | Fix download_mode docstrings | Fix `download_mode` docstrings. | closed | https://github.com/huggingface/datasets/pull/2701 | 2021-07-22T08:30:25 | 2021-07-22T09:33:31 | 2021-07-22T09:33:31 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
950,276,325 | 2,700 | from datasets import Dataset is failing | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import Dataset
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or... | closed | https://github.com/huggingface/datasets/issues/2700 | 2021-07-22T03:51:23 | 2021-07-22T07:23:45 | 2021-07-22T07:09:07 | {
"login": "kswamy15",
"id": 5582286,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
950,221,226 | 2,699 | cannot combine splits merging and streaming? | this does not work:
`dataset = datasets.load_dataset('mc4','iw',split='train+validation',streaming=True)`
with error:
`ValueError: Bad split: train+validation. Available splits: ['train', 'validation']`
these work:
`dataset = datasets.load_dataset('mc4','iw',split='train+validation')`
`dataset = datasets.load_d... | open | https://github.com/huggingface/datasets/issues/2699 | 2021-07-22T01:13:25 | 2024-04-08T13:26:46 | null | {
"login": "eyaler",
"id": 4436747,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
950,159,867 | 2,698 | Ignore empty batch when writing | This prevents an schema update with unknown column types, as reported in #2644.
This is my first attempt at fixing the issue. I tested the following:
- First batch returned by a batched map operation is empty.
- An intermediate batch is empty.
- `python -m unittest tests.test_arrow_writer` passes.
However, `ar... | closed | https://github.com/huggingface/datasets/pull/2698 | 2021-07-21T22:35:30 | 2021-07-26T14:56:03 | 2021-07-26T13:25:26 | {
"login": "pcuenca",
"id": 1177582,
"type": "User"
} | [] | true | [] |
950,021,623 | 2,697 | Fix import on Colab | Fix #2695, fix #2700. | closed | https://github.com/huggingface/datasets/pull/2697 | 2021-07-21T19:03:38 | 2021-07-22T07:09:08 | 2021-07-22T07:09:07 | {
"login": "nateraw",
"id": 32437151,
"type": "User"
} | [] | true | [] |
949,901,726 | 2,696 | Add support for disable_progress_bar on Windows | This PR is a continuation of #2667 and adds support for `utils.disable_progress_bar()` on Windows when using multiprocessing. This [answer](https://stackoverflow.com/a/6596695/14095927) on SO explains it nicely why the current approach (with calling `utils.is_progress_bar_enabled()` inside `Dataset._map_single`) would ... | closed | https://github.com/huggingface/datasets/pull/2696 | 2021-07-21T16:34:53 | 2021-07-26T13:31:14 | 2021-07-26T09:38:37 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
949,864,823 | 2,695 | Cannot import load_dataset on Colab | ## Describe the bug
I got a tqdm concurrent module-not-found error when importing load_dataset from datasets.
## Steps to reproduce the bug
Here is a [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error
On colab:
```python
!pip install dataset... | closed | https://github.com/huggingface/datasets/issues/2695 | 2021-07-21T15:52:51 | 2021-07-22T07:26:25 | 2021-07-22T07:09:07 | {
"login": "bayartsogt-ya",
"id": 43239645,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
949,844,722 | 2,694 | fix: 🐛 change string format to allow copy/paste to work in bash | Before: copy/paste resulted in an error because the square bracket
characters `[]` are special characters in bash | closed | https://github.com/huggingface/datasets/pull/2694 | 2021-07-21T15:30:40 | 2021-07-22T10:41:47 | 2021-07-22T10:41:47 | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [] | true | [] |
949,797,014 | 2,693 | Fix OSCAR Esperanto | The Esperanto part (original) of OSCAR has the wrong number of examples:
```python
from datasets import load_dataset
raw_datasets = load_dataset("oscar", "unshuffled_original_eo")
```
raises
```python
NonMatchingSplitsSizesError:
[{'expected': SplitInfo(name='train', num_bytes=314188336, num_examples=121171, da... | closed | https://github.com/huggingface/datasets/pull/2693 | 2021-07-21T14:43:50 | 2021-07-21T14:53:52 | 2021-07-21T14:53:51 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
949,765,484 | 2,692 | Update BibTeX entry | Update BibTeX entry | closed | https://github.com/huggingface/datasets/pull/2692 | 2021-07-21T14:23:35 | 2021-07-21T15:31:41 | 2021-07-21T15:31:40 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
949,758,379 | 2,691 | xtreme / pan-x cannot be downloaded | ## Describe the bug
Dataset xtreme / pan-x cannot be loaded
Seems related to https://github.com/huggingface/datasets/pull/2326
## Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = load_dataset("xtreme", "PAN-X.fr")
```
## Expected results
Load the dataset
## Actual results
```
FileNotFoundError:... | closed | https://github.com/huggingface/datasets/issues/2691 | 2021-07-21T14:18:05 | 2021-07-26T09:34:22 | 2021-07-26T09:34:22 | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
949,574,500 | 2,690 | Docs details | Some comments here:
- the code samples assume the expected libraries have already been installed. Maybe add a section at start, or add it to every code sample. Something like `pip install datasets transformers torch 'datasets[streaming]'` (maybe just link to https://huggingface.co/docs/datasets/installation.html + ... | closed | https://github.com/huggingface/datasets/pull/2690 | 2021-07-21T10:43:14 | 2021-07-27T18:40:54 | 2021-07-27T18:40:54 | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [] | true | [] |
949,447,104 | 2,689 | cannot save the dataset to disk after rename_column | ## Describe the bug
If you use `rename_column` and do no other modification, you will be unable to save the dataset using `save_to_disk`
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
In [1]: from datasets import Dataset, load_from_disk
In [5]: dataset=Dataset.from_dict({'foo': [0]})... | closed | https://github.com/huggingface/datasets/issues/2689 | 2021-07-21T08:13:40 | 2025-02-11T23:23:17 | 2021-07-21T13:11:04 | {
"login": "PaulLerner",
"id": 25532159,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
949,182,074 | 2,688 | hebrew language codes he and iw should be treated as aliases | https://huggingface.co/datasets/mc4 not listed when searching for hebrew datasets (he) as it uses the older language code iw, preventing discoverability. | closed | https://github.com/huggingface/datasets/issues/2688 | 2021-07-20T23:13:52 | 2021-07-21T16:34:53 | 2021-07-21T16:34:53 | {
"login": "eyaler",
"id": 4436747,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
948,890,481 | 2,687 | Minor documentation fix | Currently, [Writing a dataset loading script](https://huggingface.co/docs/datasets/add_dataset.html) page has a small error. A link to `matinf` dataset in [_Dataset scripts of reference_](https://huggingface.co/docs/datasets/add_dataset.html#dataset-scripts-of-reference) section actually leads to `xsquad`, instead. Thi... | closed | https://github.com/huggingface/datasets/pull/2687 | 2021-07-20T17:43:23 | 2021-07-21T13:04:55 | 2021-07-21T13:04:55 | {
"login": "slowwavesleep",
"id": 44175589,
"type": "User"
} | [] | true | [] |
948,811,669 | 2,686 | Fix bad config ids that name cache directories | `data_dir=None` was considered a dataset config parameter, hence creating a special config_id for all dataset being loaded.
Since the config_id is used to name the cache directories, this leaded to datasets being regenerated for users.
I fixed this by ignoring the value of `data_dir` when it's `None` when computing... | closed | https://github.com/huggingface/datasets/pull/2686 | 2021-07-20T16:00:45 | 2021-07-20T16:27:15 | 2021-07-20T16:27:15 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
948,791,572 | 2,685 | Fix Blog Authorship Corpus dataset | This PR:
- Update the JSON metadata file, which previously was raising a `NonMatchingSplitsSizesError`
- Fix the codec of the data files (`latin_1` instead of `utf-8`), which previously was raising `UnicodeDecodeError` for some files
Close #2679. | closed | https://github.com/huggingface/datasets/pull/2685 | 2021-07-20T15:44:50 | 2021-07-21T13:11:58 | 2021-07-21T13:11:58 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
948,771,753 | 2,684 | Print absolute local paths in load_dataset error messages | Use absolute local paths in the error messages of `load_dataset` as per @stas00's suggestion in https://github.com/huggingface/datasets/pull/2500#issuecomment-874891223 | closed | https://github.com/huggingface/datasets/pull/2684 | 2021-07-20T15:28:28 | 2021-07-22T20:48:19 | 2021-07-22T14:01:10 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
948,721,379 | 2,683 | Cache directories changed due to recent changes in how config kwargs are handled | Since #2659 I can see weird cache directory names with hashes in the config id, even though no additional config kwargs are passed. For example:
```python
from datasets import load_dataset_builder
c4_builder = load_dataset_builder("c4", "en")
print(c4_builder.cache_dir)
# /Users/quentinlhoest/.cache/huggingfac... | closed | https://github.com/huggingface/datasets/issues/2683 | 2021-07-20T14:37:57 | 2021-07-20T16:27:15 | 2021-07-20T16:27:15 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | false | [] |
948,713,137 | 2,682 | Fix c4 expected files | Some files were not registered in the list of expected files to download
Fix https://github.com/huggingface/datasets/issues/2677 | closed | https://github.com/huggingface/datasets/pull/2682 | 2021-07-20T14:29:31 | 2021-07-20T14:38:11 | 2021-07-20T14:38:10 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
948,708,645 | 2,681 | 5 duplicate datasets | ## Describe the bug
In 5 cases, I could find a dataset on Paperswithcode which references two Hugging Face datasets as dataset loaders. They are:
- https://paperswithcode.com/dataset/multinli -> https://huggingface.co/datasets/multi_nli and https://huggingface.co/datasets/multi_nli_mismatch
<img width="838... | closed | https://github.com/huggingface/datasets/issues/2681 | 2021-07-20T14:25:00 | 2021-07-20T15:44:17 | 2021-07-20T15:44:17 | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
948,649,716 | 2,680 | feat: 🎸 add paperswithcode id for qasper dataset | The reverse reference exists on paperswithcode:
https://paperswithcode.com/dataset/qasper | closed | https://github.com/huggingface/datasets/pull/2680 | 2021-07-20T13:22:29 | 2021-07-20T14:04:10 | 2021-07-20T14:04:10 | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [] | true | [] |
948,506,638 | 2,679 | Cannot load the blog_authorship_corpus due to codec errors | ## Describe the bug
A codec error is raised while loading the blog_authorship_corpus.
## Steps to reproduce the bug
```
from datasets import load_dataset
raw_datasets = load_dataset("blog_authorship_corpus")
```
## Expected results
Loading the dataset without errors.
## Actual results
An error simila... | closed | https://github.com/huggingface/datasets/issues/2679 | 2021-07-20T10:13:20 | 2021-07-21T17:02:21 | 2021-07-21T13:11:58 | {
"login": "izaskr",
"id": 38069449,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
948,471,222 | 2,678 | Import Error in Kaggle notebook | ## Describe the bug
Not able to import the datasets library in Kaggle notebooks
## Steps to reproduce the bug
```python
!pip install datasets
import datasets
```
## Expected results
No such error
## Actual results
```
ImportError Traceback (most recent call last)
<ipython-inp... | closed | https://github.com/huggingface/datasets/issues/2678 | 2021-07-20T09:28:38 | 2021-07-21T13:59:26 | 2021-07-21T13:03:02 | {
"login": "prikmm",
"id": 47216475,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
948,429,788 | 2,677 | Error when downloading C4 | Hi,
I am trying to download the `en` corpus from the C4 dataset. However, I get an error caused by the validation files download (see image). My code is very primitive:
`datasets.load_dataset('c4', 'en')`
Is this a bug or do I have some configurations missing on my server?
Thanks!
<img width="1014" alt="Снимок экрана 2... | closed | https://github.com/huggingface/datasets/issues/2677 | 2021-07-20T08:37:30 | 2021-07-20T14:41:31 | 2021-07-20T14:38:10 | {
"login": "Aktsvigun",
"id": 36672861,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
947,734,909 | 2,676 | Increase json reader block_size automatically | Currently some files can't be read with the default parameters of the JSON lines reader.
For example this one:
https://huggingface.co/datasets/thomwolf/codeparrot/resolve/main/file-000000000006.json.gz
raises a pyarrow error:
```python
ArrowInvalid: straddling object straddles two block boundaries (try to increa... | closed | https://github.com/huggingface/datasets/pull/2676 | 2021-07-19T14:51:14 | 2021-07-19T17:51:39 | 2021-07-19T17:51:38 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
947,657,732 | 2,675 | Parallelize ETag requests | Since https://github.com/huggingface/datasets/pull/2628 we use the ETag or the remote data files to compute the directory in the cache where a dataset is saved. This is useful in order to reload the dataset from the cache only if the remote files haven't changed.
In this I made the ETag requests parallel using multi... | closed | https://github.com/huggingface/datasets/pull/2675 | 2021-07-19T13:30:42 | 2021-07-19T19:33:25 | 2021-07-19T19:33:25 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
947,338,202 | 2,674 | Fix sacrebleu parameter name | DONE:
- Fix parameter name: `smooth` to `smooth_method`.
- Improve kwargs description.
- Align docs on using a metric.
- Add example of passing additional arguments in using metrics.
Related to #2669. | closed | https://github.com/huggingface/datasets/pull/2674 | 2021-07-19T07:07:26 | 2021-07-19T08:07:03 | 2021-07-19T08:07:03 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
947,300,008 | 2,673 | Fix potential DuplicatedKeysError in SQuAD | DONE:
- Fix potential DuplicatedKeysError by ensuring keys are unique.
- Align examples in the docs with the SQuAD code.
We should promote as a good practice that keys should be programmatically generated as unique, instead of read from data (which might not be unique). | closed | https://github.com/huggingface/datasets/pull/2673 | 2021-07-19T06:08:00 | 2021-07-19T07:08:03 | 2021-07-19T07:08:03 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
947,294,605 | 2,672 | Fix potential DuplicatedKeysError in LibriSpeech | DONE:
- Fix unnecessary path join.
- Fix potential DuplicatedKeysError by ensuring keys are unique.
We should promote as a good practice that keys should be programmatically generated as unique, instead of read from data (which might not be unique). | closed | https://github.com/huggingface/datasets/pull/2672 | 2021-07-19T06:00:49 | 2021-07-19T06:28:57 | 2021-07-19T06:28:56 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
947,273,875 | 2,671 | Mesinesp development and training data sets have been added. | https://zenodo.org/search?page=1&size=20&q=mesinesp, Mesinesp has Medical Semantic Indexed records in Spanish. Indexing is done using DeCS codes, a sort of Spanish equivalent to MeSH terms.
The Mesinesp (Spanish BioASQ track, see https://temu.bsc.es/mesinesp) development set has a total of 750 records.
The Mesinesp ... | closed | https://github.com/huggingface/datasets/pull/2671 | 2021-07-19T05:14:38 | 2021-07-19T07:32:28 | 2021-07-19T06:45:50 | {
"login": "aslihanuysall",
"id": 32900185,
"type": "User"
} | [] | true | [] |
947,120,709 | 2,670 | Using sharding to parallelize indexing | **Is your feature request related to a problem? Please describe.**
Creating an Elasticsearch index on a large dataset can take quite long and cannot be parallelized across shards (the index creations collide).
**Describe the solution you'd like**
When working on dataset shards, if an index already exists, its mapping ... | open | https://github.com/huggingface/datasets/issues/2670 | 2021-07-18T21:26:26 | 2021-10-07T13:33:25 | null | {
"login": "ggdupont",
"id": 5583410,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
946,982,998 | 2,669 | Metric kwargs are not passed to underlying external metric f1_score | ## Describe the bug
When I want to use F1 score with average="min", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) throws an error telling me so.
## Steps to... | closed | https://github.com/huggingface/datasets/issues/2669 | 2021-07-18T08:32:31 | 2021-07-18T18:36:05 | 2021-07-18T11:19:04 | {
"login": "BramVanroy",
"id": 2779410,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
946,867,622 | 2,668 | Add Russian SuperGLUE | Hi,
This adds the [Russian SuperGLUE](https://russiansuperglue.com/) dataset. For the most part I reused the code for the original SuperGLUE, although there are some relatively minor differences in the structure that I accounted for. | closed | https://github.com/huggingface/datasets/pull/2668 | 2021-07-17T17:41:28 | 2021-07-29T11:50:31 | 2021-07-29T11:50:31 | {
"login": "slowwavesleep",
"id": 44175589,
"type": "User"
} | [] | true | [] |
946,861,908 | 2,667 | Use tqdm from tqdm_utils | This PR replaces `tqdm` from the `tqdm` lib with `tqdm` from `datasets.utils.tqdm_utils`. With this change, it's possible to disable progress bars just by calling `disable_progress_bar`. Note this doesn't work on Windows when using multiprocessing due to how global variables are shared between processes. Currently, the... | closed | https://github.com/huggingface/datasets/pull/2667 | 2021-07-17T17:06:35 | 2021-07-19T17:39:10 | 2021-07-19T17:32:00 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
946,825,140 | 2,666 | Adds CodeClippy dataset [WIP] | CodeClippy is an opensource code dataset scrapped from github during flax-jax-community-week
https://the-eye.eu/public/AI/training_data/code_clippy_data/ | closed | https://github.com/huggingface/datasets/pull/2666 | 2021-07-17T13:32:04 | 2023-07-26T23:06:01 | 2022-10-03T09:37:35 | {
"login": "arampacha",
"id": 69807323,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
946,822,036 | 2,665 | Adds APPS dataset to the hub [WIP] | A loading script for [APPS dataset](https://github.com/hendrycks/apps) | closed | https://github.com/huggingface/datasets/pull/2665 | 2021-07-17T13:13:17 | 2022-10-03T09:38:10 | 2022-10-03T09:38:10 | {
"login": "arampacha",
"id": 69807323,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
946,552,273 | 2,663 | [`to_json`] add multi-proc sharding support | As discussed on slack it appears that `to_json` is quite slow on huge datasets like OSCAR.
I implemented sharded saving, which is much much faster - but the tqdm bars all overwrite each other, so it's hard to make sense of the progress, so if possible ideally this multi-proc support could be implemented internally i... | closed | https://github.com/huggingface/datasets/issues/2663 | 2021-07-16T19:41:50 | 2021-09-13T13:56:37 | 2021-09-13T13:56:37 | {
"login": "stas00",
"id": 10676103,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
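A minimal sketch of exploring records like the ones above with the same `datasets` library the issues describe. The Hub repository id `user/github-issues` below is a placeholder assumption, not a path stated in this preview:

```python
from datasets import load_dataset

# Placeholder repo id: substitute the actual Hub path of this issues dump.
issues = load_dataset("user/github-issues", split="train")

# Separate pull requests from plain issues using the is_pull_request flag.
prs = issues.filter(lambda row: row["is_pull_request"])

# Keep only issues carrying the "bug" label (labels is a list of dicts
# with "name" and "color" keys, as shown in the rows above).
bugs = issues.filter(
    lambda row: not row["is_pull_request"]
    and any(label["name"] == "bug" for label in row["labels"])
)
print(f"{len(prs)} pull requests, {len(bugs)} bug reports")
```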