Dataset schema (column, dtype, min/max or classes):

id               int64         599M to 3.26B
number           int64         1 to 7.7k
title            string        lengths 1 to 290
body             string        lengths 0 to 228k
state            string        2 classes
html_url         string        lengths 46 to 51
created_at       timestamp[s]  2020-04-14 10:18:02 to 2025-07-23 08:04:53
updated_at       timestamp[s]  2020-04-27 16:04:17 to 2025-07-23 18:53:44
closed_at        timestamp[s]  2020-04-14 12:01:40 to 2025-07-23 16:44:42
user             dict
labels           list          lengths 0 to 4
is_pull_request  bool          2 classes
comments         list          lengths 0 to 0
1,123,402,426
3,678
Add code example in wikipedia card
Close #3292.
closed
https://github.com/huggingface/datasets/pull/3678
2022-02-03T18:09:02
2022-02-21T09:14:56
2022-02-04T13:21:39
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,123,192,866
3,677
Discovery cannot be streamed anymore
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python from datasets import load_dataset iterable_dataset = load_dataset("discovery", name="discovery", split="train", streaming=True) list(iterable_dataset.take(1)) ``` ## Expected results The first ...
closed
https://github.com/huggingface/datasets/issues/3677
2022-02-03T15:02:03
2022-02-10T16:51:24
2022-02-10T16:51:24
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,123,096,362
3,676
`None` replaced by `[]` after first batch in map
Sometimes `None` can be replaced by `[]` when running map: ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(4)}) ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"]) print(ds.to_pandas()) # b # 0 [None, [0]] # 1 [[], [0]] # ...
closed
https://github.com/huggingface/datasets/issues/3676
2022-02-03T13:36:48
2022-10-28T13:13:20
2022-10-28T13:13:20
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
1,123,078,408
3,675
Add CodeContests dataset
## Adding a Dataset - **Name:** CodeContests - **Description:** CodeContests is a competitive programming dataset for machine-learning. - **Paper:** - **Data:** https://github.com/deepmind/code_contests - **Motivation:** This dataset was used when training [AlphaCode](https://deepmind.com/blog/article/Competitive-...
closed
https://github.com/huggingface/datasets/issues/3675
2022-02-03T13:20:00
2022-07-20T11:07:05
2022-07-20T11:07:05
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,123,027,874
3,674
Add FrugalScore metric
This pull request adds the FrugalScore metric for NLG system evaluation. FrugalScore is a reference-based metric for evaluating NLG models. It is based on a distillation approach that learns a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance. Paper: https:...
closed
https://github.com/huggingface/datasets/pull/3674
2022-02-03T12:28:52
2022-02-21T15:58:44
2022-02-21T15:58:44
{ "login": "moussaKam", "id": 28675016, "type": "User" }
[]
true
[]
1,123,010,520
3,673
`load_dataset("snli")` is different from dataset viewer
## Describe the bug The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded dataset shows the encoded labels (i.e., 0, 1, 2). Is t...
closed
https://github.com/huggingface/datasets/issues/3673
2022-02-03T12:10:43
2022-02-16T11:22:31
2022-02-11T17:01:21
{ "login": "pietrolesci", "id": 61748653, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
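The mismatch reported above comes down to the `label` column being stored as integers with the string names carried alongside, so the downloaded data shows 0/1/2 while the viewer shows the names. A minimal pure-Python stand-in (a hypothetical `ClassLabelLike`, not the real `datasets.ClassLabel` class, and the label order is assumed for illustration) sketches the int/string round trip:

```python
class ClassLabelLike:
    """Minimal stand-in for a ClassLabel-style int2str/str2int mapping."""

    def __init__(self, names):
        self.names = list(names)
        self._index = {n: i for i, n in enumerate(self.names)}

    def int2str(self, i):
        return self.names[i]

    def str2int(self, s):
        return self._index[s]


# Label order assumed for illustration only
label = ClassLabelLike(["entailment", "neutral", "contradiction"])
assert label.int2str(0) == "entailment"
assert label.str2int("contradiction") == 2
```

With such a mapping, the encoded labels in the downloaded dataset and the decoded names in the viewer describe the same underlying data.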
1,122,980,556
3,672
Prioritize `module.builder_kwargs` over defaults in `TestCommand`
This fixes a bug in the `TestCommand` where multiple kwargs for `name` were passed if it was set in both default and `module.builder_kwargs`. Example error: ```Python Traceback (most recent call last): File "create_metadata.py", line 96, in <module> main(**vars(args)) File "create_metadata.py", line 86, ...
closed
https://github.com/huggingface/datasets/pull/3672
2022-02-03T11:38:42
2022-02-04T12:37:20
2022-02-04T12:37:19
{ "login": "lvwerra", "id": 8264887, "type": "User" }
[]
true
[]
1,122,864,253
3,671
Give an estimate of the dataset size in DatasetInfo
**Is your feature request related to a problem? Please describe.** Currently, only part of the datasets provide `dataset_size`, `download_size`, `size_in_bytes` (and `num_bytes` and `num_examples` inside `splits`). I would want to get this information, or an estimation, for all the datasets. **Describe the soluti...
open
https://github.com/huggingface/datasets/issues/3671
2022-02-03T09:47:10
2022-02-03T09:47:10
null
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,122,439,827
3,670
feat: 🎸 generate info if dataset_infos.json does not exist
in get_dataset_infos(). Also: add the `use_auth_token` parameter, and create get_dataset_config_info() ✅ Closes: #3013
closed
https://github.com/huggingface/datasets/pull/3670
2022-02-02T22:11:56
2022-02-21T15:57:11
2022-02-21T15:57:10
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
1,122,335,622
3,669
Common voice validated partition
This patch adds access to the 'validated' partitions of CommonVoice datasets (provided by the dataset creators but not available in the HuggingFace interface yet). As 'validated' contains significantly more data than 'train' (although it contains both test and validation, so one needs to be careful there), it can be u...
closed
https://github.com/huggingface/datasets/pull/3669
2022-02-02T20:04:43
2022-02-08T17:26:52
2022-02-08T17:23:12
{ "login": "shalymin-amzn", "id": 98762373, "type": "User" }
[]
true
[]
1,122,261,736
3,668
Couldn't cast array of type string error with cast_column
## Describe the bug In OVH cloud during Huggingface Robust-speech-recognition event on a AI training notebook instance using jupyter lab and running jupyter notebook When using the dataset.cast_column("audio",Audio(sampling_rate=16_000)) method I get error ![image](https://user-images.githubusercontent.com/25264...
closed
https://github.com/huggingface/datasets/issues/3668
2022-02-02T18:33:29
2022-07-19T13:36:24
2022-07-19T13:36:24
{ "login": "R4ZZ3", "id": 25264037, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,122,060,630
3,667
Process .opus files with torchaudio
@anton-l suggested processing .opus files with `torchaudio` instead of `soundfile`, as it's faster: ![opus](https://user-images.githubusercontent.com/16348744/152177816-2df6076c-f28b-4aef-a08d-b499b921414d.png) (moreover, I didn't manage to load .opus files with `soundfile` / `librosa` locally on any of my machines an...
closed
https://github.com/huggingface/datasets/pull/3667
2022-02-02T15:23:14
2022-02-04T15:29:38
2022-02-04T15:29:38
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
1,122,058,894
3,666
process .opus files (for Multilingual Spoken Words)
Opus files require `libsndfile>=1.0.30`. Add a check for this version, plus tests. **outdated:** Add [Multillingual Spoken Words dataset](https://mlcommons.org/en/multilingual-spoken-words/) You can specify multiple languages for downloading 😌: ```python ds = load_dataset("datasets/ml_spoken_words", languages=...
closed
https://github.com/huggingface/datasets/pull/3666
2022-02-02T15:21:48
2022-02-22T10:04:03
2022-02-22T10:03:53
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
1,121,753,385
3,665
Fix MP3 resampling when a dataset's audio files have different sampling rates
The resampler needs to be updated if the `orig_freq` doesn't match the audio file sampling rate Fix https://github.com/huggingface/datasets/issues/3662
closed
https://github.com/huggingface/datasets/pull/3665
2022-02-02T10:31:45
2022-02-02T10:52:26
2022-02-02T10:52:26
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
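The pattern underlying this fix: look up or rebuild the resampler whenever the incoming sampling rate changes, instead of reusing the first one created. A hedged pure-Python sketch with a placeholder `Resampler` class (hypothetical names, not the torchaudio API):

```python
class Resampler:
    """Placeholder for a real resampler (e.g. torchaudio's)."""

    def __init__(self, orig_freq, new_freq):
        self.orig_freq = orig_freq
        self.new_freq = new_freq


_resamplers = {}


def get_resampler(orig_freq, new_freq=16_000):
    # Key the cache on the source rate so a 16 kHz file never reuses
    # a resampler that was built for a 32 kHz file (the bug above).
    key = (orig_freq, new_freq)
    if key not in _resamplers:
        _resamplers[key] = Resampler(orig_freq, new_freq)
    return _resamplers[key]


assert get_resampler(32_000).orig_freq == 32_000
assert get_resampler(16_000).orig_freq == 16_000  # not stuck at 32 kHz
```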
1,121,233,301
3,664
[WIP] Return local paths to Common Voice
Fixes https://github.com/huggingface/datasets/issues/3663 This is a proposed way of returning the old local file-based generator while keeping the new streaming generator intact. TODO: - [ ] brainstorm a bit more on https://github.com/huggingface/datasets/issues/3663 to see if we can do better - [ ] refactor th...
closed
https://github.com/huggingface/datasets/pull/3664
2022-02-01T21:48:27
2022-02-22T09:14:06
2022-02-22T09:14:06
{ "login": "anton-l", "id": 26864830, "type": "User" }
[]
true
[]
1,121,067,647
3,663
[Audio] Path of Common Voice cannot be used for audio loading anymore
## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(ds[0]["audio"]["path"]) load(ds[0]["path"]) ``` ## Expected results ...
closed
https://github.com/huggingface/datasets/issues/3663
2022-02-01T18:40:10
2022-09-21T15:03:09
2022-09-21T14:56:22
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,121,024,403
3,662
[Audio] MP3 resampling is incorrect when dataset's audio files have different sampling rates
The Audio feature resampler for MP3 gets stuck with the first original frequencies it meets, which leads to subsequent decoding to be incorrect. Here is a code to reproduce the issue: Let's first consider two audio files with different sampling rates 32000 and 16000: ```python # first download a mp3 file with s...
closed
https://github.com/huggingface/datasets/issues/3662
2022-02-01T17:55:04
2022-02-02T10:52:25
2022-02-02T10:52:25
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
1,121,000,251
3,661
Remove unnecessary 'r' arg in
Originally from #3489
closed
https://github.com/huggingface/datasets/pull/3661
2022-02-01T17:29:27
2022-02-07T16:57:27
2022-02-07T16:02:42
{ "login": "bryant1410", "id": 3905501, "type": "User" }
[]
true
[]
1,120,982,671
3,660
Change HTTP links to HTTPS
I tested the links. I also fixed some typos. Originally from #3489
open
https://github.com/huggingface/datasets/pull/3660
2022-02-01T17:12:51
2022-09-21T15:16:32
null
{ "login": "bryant1410", "id": 3905501, "type": "User" }
[]
true
[]
1,120,913,672
3,659
push_to_hub but preview not working
## Dataset viewer issue for '*happifyhealth/twitter_pnn*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/happifyhealth/twitter_pnn)* I used ``` dataset.push_to_hub("happifyhealth/twitter_pnn") ``` but the preview is not working. Am I the one who added this dataset ? Yes
closed
https://github.com/huggingface/datasets/issues/3659
2022-02-01T16:23:57
2022-02-09T08:00:37
2022-02-09T08:00:37
{ "login": "thomas-happify", "id": 66082334, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,120,880,395
3,658
Dataset viewer issue for *P3*
## Dataset viewer issue for '*P3*' **Link: https://huggingface.co/datasets/bigscience/P3** ``` Status code: 400 Exception: SplitsNotFoundError Message: The split names could not be parsed from the dataset config. ``` Am I the one who added this dataset ? No
closed
https://github.com/huggingface/datasets/issues/3658
2022-02-01T15:57:56
2023-09-25T12:16:21
2023-09-25T12:16:21
{ "login": "jeffistyping", "id": 22351555, "type": "User" }
[]
false
[]
1,120,602,620
3,657
Extend dataset builder for streaming in `get_dataset_split_names`
Currently, `get_dataset_split_names` doesn't extend a builder module to support streaming, even though it uses `StreamingDownloadManager` to download data. This PR fixes that. To test the change, run the following: ```bash pip install git+https://github.com/huggingface/datasets.git@fix-get_dataset_split_names-stre...
closed
https://github.com/huggingface/datasets/pull/3657
2022-02-01T12:21:24
2022-02-03T22:49:06
2022-02-02T11:22:01
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,120,510,823
3,656
checksum error subjqa dataset
## Describe the bug I get a checksum error when loading the `subjqa` dataset (used in the transformers book). ## Steps to reproduce the bug ```python from datasets import load_dataset subjqa = load_dataset("subjqa","electronics") ``` ## Expected results Loading the dataset ## Actual results ``` ---...
closed
https://github.com/huggingface/datasets/issues/3656
2022-02-01T10:53:33
2022-02-10T10:56:59
2022-02-10T10:56:38
{ "login": "RensDimmendaal", "id": 9828683, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
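Checksum errors like this one usually mean the file at the recorded source URL changed after its expected digest was stored. The comparison itself is simple; a sketch with `hashlib` (illustrative only, not the `datasets` internals):

```python
import hashlib


def verify_checksum(data: bytes, expected_sha256: str) -> bool:
    """Compare the digest of downloaded bytes to a recorded digest."""
    return hashlib.sha256(data).hexdigest() == expected_sha256


payload = b"example download"
recorded = hashlib.sha256(payload).hexdigest()

assert verify_checksum(payload, recorded)
# If the host silently replaces the file, verification fails:
assert not verify_checksum(b"changed upstream", recorded)
```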
1,119,801,077
3,655
Pubmed dataset not reachable
## Describe the bug Trying to use the `pubmed` dataset fails to reach / download the source files. ## Steps to reproduce the bug ```python pubmed_train = datasets.load_dataset('pubmed', split='train') ``` ## Expected results Should begin downloading the pubmed dataset. ## Actual results ``` ConnectionEr...
closed
https://github.com/huggingface/datasets/issues/3655
2022-01-31T18:45:47
2022-12-19T19:18:10
2022-02-14T14:15:41
{ "login": "abhi-mosaic", "id": 77638579, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,119,717,475
3,654
Better TQDM output
This PR does the following: * if `dataset_infos.json` exists for a dataset, uses `num_examples` to print the total number of examples that needs to be generated (in `builder.py`) * fixes `tqdm` + multiprocessing in Jupyter Notebook/Colab (the issue stems from this commit in the `tqdm` repo: https://github.com/tqdm/tq...
closed
https://github.com/huggingface/datasets/pull/3654
2022-01-31T17:22:43
2022-02-03T15:55:34
2022-02-03T15:55:33
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,119,186,952
3,653
`to_json` in multiprocessing fashion sometimes deadlock
## Describe the bug `to_json` in multiprocessing fashion sometimes deadlock, instead of raising exceptions. Temporary solution is to see that it deadlocks, and then reduce the number of processes or batch size in order to reduce the memory footprint. As @lhoestq pointed out, this might be related to https://bugs....
open
https://github.com/huggingface/datasets/issues/3653
2022-01-31T09:35:07
2022-01-31T09:35:07
null
{ "login": "thomasw21", "id": 24695242, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,118,808,738
3,652
sp. Columbia => Colombia
"Columbia" is various places in North America. The country is "Colombia".
closed
https://github.com/huggingface/datasets/pull/3652
2022-01-31T00:41:03
2022-02-09T16:55:25
2022-01-31T08:29:07
{ "login": "serapio", "id": 3781280, "type": "User" }
[]
true
[]
1,118,597,647
3,651
Update link in wiki_bio dataset
Fixes #3580 and makes the wiki_bio dataset work again. I changed the link and some documentation, and all the tests pass. Thanks @lhoestq for uploading the dataset to the HuggingFace data bucket. @lhoestq -- all the tests pass, but I'm still not able to import the dataset, as the old Google Drive link is cached some...
closed
https://github.com/huggingface/datasets/pull/3651
2022-01-30T16:28:54
2022-01-31T14:50:48
2022-01-31T08:38:09
{ "login": "jxmorris12", "id": 13238952, "type": "User" }
[]
true
[]
1,118,537,429
3,650
Allow 'to_json' to run in unordered fashion in order to lower memory footprint
I'm using `to_json(..., num_proc=num_proc, compression='gzip')` with `num_proc>1`. I'm having an issue where things seem to deadlock at some point. Eventually I see OOM. I'm guessing it's an issue where one process starts to take a long time for a specific batch, and so other processes keep accumulating their results in...
closed
https://github.com/huggingface/datasets/pull/3650
2022-01-30T13:23:19
2023-09-25T06:28:51
2023-09-24T16:45:48
{ "login": "thomasw21", "id": 24695242, "type": "User" }
[]
true
[]
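The idea behind the unordered mode: consume shard results in completion order, so one slow worker doesn't force the others' finished output to pile up in memory while waiting for ordered delivery. A thread-based sketch of that pattern (illustrative only, not the actual `to_json` implementation; `encode_shard` is a hypothetical stand-in):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def encode_shard(shard_id):
    # Stand-in for serializing one batch of rows to JSON lines.
    return shard_id, f"shard-{shard_id}.jsonl"


with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(encode_shard, i) for i in range(8)]
    # Completion order, not submission order: finished shards can be
    # written out immediately instead of queueing behind a slow one.
    done = [f.result() for f in as_completed(futures)]

assert sorted(done) == [(i, f"shard-{i}.jsonl") for i in range(8)]
```

`multiprocessing.Pool.imap_unordered` offers the same completion-order semantics for process-based workers.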
1,117,502,250
3,649
Add IGLUE dataset
## Adding a Dataset - **Name:** IGLUE - **Description:** IGLUE brings together 4 vision-and-language tasks across 20 languages (Twitter [thread](https://twitter.com/ebugliarello/status/1487045497583976455?s=20&t=SB4LZGDhhkUW83ugcX_m5w)) - **Paper:** https://arxiv.org/abs/2201.11732 - **Data:** https://github.com/e-...
open
https://github.com/huggingface/datasets/issues/3649
2022-01-28T14:59:41
2022-01-28T15:02:35
null
{ "login": "lewtun", "id": 26859204, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "multimodal", "color": "19E633" } ]
false
[]
1,117,465,505
3,648
Fix Windows CI: bump python to 3.7
Python>=3.7 is needed to install `tokenizers` 0.11
closed
https://github.com/huggingface/datasets/pull/3648
2022-01-28T14:24:54
2022-01-28T14:40:39
2022-01-28T14:40:39
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,117,383,675
3,647
Fix `add_column` on datasets with indices mapping
My initial idea was to avoid the `flatten_indices` call and reorder a new column instead, but in the end I decided to follow `concatenate_datasets` and use `flatten_indices` to avoid padding when `dataset._indices.num_rows != dataset._data.num_rows`. Fix #3599
closed
https://github.com/huggingface/datasets/pull/3647
2022-01-28T13:06:29
2022-01-28T15:35:58
2022-01-28T15:35:58
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,116,544,627
3,646
Fix streaming datasets that are not reset correctly
Streaming datasets that use `StreamingDownloadManager.iter_archive` and `StreamingDownloadManager.iter_files` had some issues. Indeed, if you try to iterate over such a dataset twice, the second time it will be empty. This is because the two methods above are generator functions. I fixed this by making them return...
closed
https://github.com/huggingface/datasets/pull/3646
2022-01-27T17:21:02
2022-01-28T16:34:29
2022-01-28T16:34:28
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,116,541,298
3,645
Streaming dataset based on dl_manager.iter_archive/iter_files are not reset correctly
Hi ! When iterating over a streaming dataset once, it's not reset correctly because of some issues with `dl_manager.iter_archive` and `dl_manager.iter_files`. Indeed they are generator functions (so the iterator that is returned can be exhausted). They should be iterables instead, and be reset if we do a for loop again...
closed
https://github.com/huggingface/datasets/issues/3645
2022-01-27T17:17:41
2022-01-28T16:34:28
2022-01-28T16:34:28
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
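The distinction at the heart of this fix: a generator function returns an exhaustible iterator, while an object whose `__iter__` is a generator produces a fresh iterator on every `for` loop. A minimal sketch (names are illustrative, not the `datasets` internals):

```python
def iter_files(paths):
    """Generator function: its return value can be consumed only once."""
    for p in paths:
        yield p


class FilesIterable:
    """Iterable: every for-loop calls __iter__ and gets a fresh pass."""

    def __init__(self, paths):
        self.paths = paths

    def __iter__(self):
        yield from self.paths


gen = iter_files(["a.txt", "b.txt"])
assert list(gen) == ["a.txt", "b.txt"]
assert list(gen) == []  # exhausted on the second pass

files = FilesIterable(["a.txt", "b.txt"])
assert list(files) == ["a.txt", "b.txt"]
assert list(files) == ["a.txt", "b.txt"]  # resets correctly
```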
1,116,519,670
3,644
Add a GROUP BY operator
**Is your feature request related to a problem? Please describe.** Using batch mapping, we can easily split examples. However, we lack an appropriate option for merging them back together by some key. Consider this example: ```python # features: # { # "example_id": datasets.Value("int32"), # "text": datas...
open
https://github.com/huggingface/datasets/issues/3644
2022-01-27T16:57:54
2025-01-28T11:39:48
null
{ "login": "felix-schneider", "id": 208336, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
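Until such an operator exists, the merge can be emulated outside the library. A hedged pure-Python sketch of grouping a batch of example dicts back together by a key column (a hypothetical helper, not a `datasets` API; column names follow the feature request's example):

```python
from collections import defaultdict


def group_by(examples, key):
    """Merge a list of example dicts into one row per key value."""
    groups = defaultdict(list)
    for ex in examples:
        groups[ex[key]].append(ex["text"])
    return [{key: k, "texts": v} for k, v in groups.items()]


batch = [
    {"example_id": 0, "text": "first"},
    {"example_id": 0, "text": "second"},
    {"example_id": 1, "text": "third"},
]
grouped = group_by(batch, "example_id")
assert grouped == [
    {"example_id": 0, "texts": ["first", "second"]},
    {"example_id": 1, "texts": ["third"]},
]
```

For large datasets, converting with `Dataset.to_pandas()` and using pandas' `groupby` is another workaround.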
1,116,417,428
3,643
Fix sem_eval_2018_task_1 download location
As discussed with @lhoestq in https://github.com/huggingface/datasets/issues/3549#issuecomment-1020176931_ this is the new pull request to fix the download location.
closed
https://github.com/huggingface/datasets/pull/3643
2022-01-27T15:45:00
2022-02-04T15:15:26
2022-02-04T15:15:26
{ "login": "maxpel", "id": 31095360, "type": "User" }
[]
true
[]
1,116,306,986
3,642
Fix dataset slicing with negative bounds when indices mapping is not `None`
Fix #3611
closed
https://github.com/huggingface/datasets/pull/3642
2022-01-27T14:45:53
2022-01-27T18:16:23
2022-01-27T18:16:22
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
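The core of such a fix is normalizing negative bounds against the length of the indices mapping before applying them; Python's `slice.indices` does exactly that normalization. A sketch (not the actual implementation):

```python
def resolve_slice(indices, start, stop):
    """Apply a possibly-negative slice to an indices mapping
    (the kind produced by Dataset.select or Dataset.shuffle)."""
    start, stop, _ = slice(start, stop).indices(len(indices))
    return indices[start:stop]


mapping = [4, 2, 0, 3, 1]  # hypothetical order after a shuffle
assert resolve_slice(mapping, 0, 3) == [4, 2, 0]
assert resolve_slice(mapping, -2, None) == [3, 1]  # negative bound resolved
```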
1,116,284,268
3,641
Fix numpy rngs when seed is None
Fixes the NumPy RNG when `seed` is `None`. The problem becomes obvious after reading the NumPy notes on RNG (returned by `np.random.get_state()`): > The MT19937 state vector consists of a 624-element array of 32-bit unsigned integers plus a single integer value between 0 and 624 that indexes the current position wi...
closed
https://github.com/huggingface/datasets/pull/3641
2022-01-27T14:29:09
2022-01-27T18:16:08
2022-01-27T18:16:07
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,116,133,769
3,640
Issues with custom dataset in Wav2Vec2
We are training Wav2Vec2 using the run_speech_recognition_ctc_bnb.py script. This is working fine with Common Voice, however using our custom dataset and data loader at [NbAiLab/NPSC]( https://huggingface.co/datasets/NbAiLab/NPSC) it crashes after roughly 1 epoch with the following stack trace: ![image](https://us...
closed
https://github.com/huggingface/datasets/issues/3640
2022-01-27T12:09:05
2022-01-27T12:29:48
2022-01-27T12:29:48
{ "login": "peregilk", "id": 9079808, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,116,021,420
3,639
same value of precision, recall, f1 score at each epoch for classification task.
**1st Epoch:** 1/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/f1/default/default_experiment-1-0.arrow.59it/s] 01/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/precision/default/default_experiment-1-0.arrow 01/27/2022 09:3...
closed
https://github.com/huggingface/datasets/issues/3639
2022-01-27T10:14:16
2022-02-24T09:02:18
2022-02-24T09:02:17
{ "login": "Dhanachandra", "id": 10828657, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,115,725,703
3,638
AutoTokenizer hash value got change after datasets.map
## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import load_dataset from datasets.fingerprint import Hasher tok...
open
https://github.com/huggingface/datasets/issues/3638
2022-01-27T03:19:03
2024-03-11T13:56:15
null
{ "login": "tshu-w", "id": 13161779, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,115,526,438
3,637
[TypeError: Couldn't cast array of type] Cannot load dataset in v1.18
## Describe the bug I am trying to load the [`GEM/RiSAWOZ` dataset](https://huggingface.co/datasets/GEM/RiSAWOZ) in `datasets` v1.18.1 and am running into a type error when casting the features. The strange thing is that I can load the dataset with v1.17.0. Note that the error is also present if I install from `master...
closed
https://github.com/huggingface/datasets/issues/3637
2022-01-26T21:38:02
2022-02-09T16:15:53
2022-02-09T16:15:53
{ "login": "lewtun", "id": 26859204, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,115,362,702
3,636
Update index.rst
null
closed
https://github.com/huggingface/datasets/pull/3636
2022-01-26T18:43:09
2022-01-26T18:44:55
2022-01-26T18:44:54
{ "login": "VioletteLepercq", "id": 95622912, "type": "User" }
[]
true
[]
1,115,333,219
3,635
Make `ted_talks_iwslt` dataset streamable
null
closed
https://github.com/huggingface/datasets/pull/3635
2022-01-26T18:07:56
2022-10-04T09:36:23
2022-10-03T09:44:47
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,115,133,279
3,634
Dataset.shuffle(seed=None) gives fixed row permutation
## Describe the bug Repeated attempts to `shuffle` a dataset without specifying a seed give the same results. ## Steps to reproduce the bug ```python import datasets # Some toy example data = datasets.Dataset.from_dict( {"feature": [1, 2, 3, 4, 5], "label": ["a", "b", "c", "d", "e"]} ) # Doesn't work...
closed
https://github.com/huggingface/datasets/issues/3634
2022-01-26T15:13:08
2022-01-27T18:16:07
2022-01-27T18:16:07
{ "login": "elisno", "id": 18127060, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
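The contract the report asks for: with `seed=None` every call should draw fresh entropy, while an explicit seed stays reproducible. Python's stdlib `random` illustrates the semantics (a sketch of the expected behavior, not the `datasets` implementation, which uses NumPy's RNG):

```python
import random


def shuffled(rows, seed=None):
    """Return a shuffled copy; Random(None) seeds itself from OS
    entropy, so repeated unseeded calls give varying permutations."""
    rng = random.Random(seed)
    out = list(rows)
    rng.shuffle(out)
    return out


# An explicit seed is reproducible across calls...
assert shuffled(range(100), seed=42) == shuffled(range(100), seed=42)
# ...while seed=None should not repeat one fixed permutation
# (not asserted here, since independent draws could in principle collide).
```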
1,115,040,174
3,633
Mirror canonical datasets in prod
Push the datasets changes to the Hub in production by setting `HF_USE_PROD=1` I also added a fix that makes the script ignore the json, csv, text, parquet and pandas dataset builders. cc @SBrandeis
closed
https://github.com/huggingface/datasets/pull/3633
2022-01-26T13:49:37
2022-01-26T13:56:21
2022-01-26T13:56:21
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,115,027,185
3,632
Adding CC-100: Monolingual Datasets from Web Crawl Data (Datasets links are invalid)
## Describe the bug The dataset links are no longer valid for CC-100. It seems that the website which was keeping these files are no longer accessible and therefore this dataset became unusable. Check out the dataset [homepage](http://data.statmt.org/cc-100/) which isn't accessible. Also the URLs for dataset file ...
closed
https://github.com/huggingface/datasets/issues/3632
2022-01-26T13:35:37
2022-02-10T06:58:11
2022-02-10T06:58:11
{ "login": "AnzorGozalishvili", "id": 55232459, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,114,833,662
3,631
Labels conflict when loading a local CSV file.
## Describe the bug I am trying to load a local CSV file with a separate file containing label names. It is successfully loaded for the first time, but when I try to load it again, there is a conflict between provided labels and the cached dataset info. Disabling caching globally and/or using `download_mode="force_red...
closed
https://github.com/huggingface/datasets/issues/3631
2022-01-26T10:00:33
2022-02-11T23:02:31
2022-02-11T23:02:31
{ "login": "pichljan", "id": 8571301, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,114,578,625
3,630
DuplicatedKeysError of NewsQA dataset
After processing the dataset following official [NewsQA](https://github.com/Maluuba/newsqa), I used datasets to load it: ``` a = load_dataset('newsqa', data_dir='news') ``` and the following error occurred: ``` Using custom data configuration default-data_dir=news Downloading and preparing dataset newsqa/defaul...
closed
https://github.com/huggingface/datasets/issues/3630
2022-01-26T03:05:49
2022-02-14T08:37:19
2022-02-14T08:37:19
{ "login": "StevenTang1998", "id": 37647985, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,113,971,575
3,629
Fix Hub repos update when there's a new release
It was not listing the full list of datasets correctly; cc @SBrandeis, this is why it failed for 1.18.0. We should be good now!
closed
https://github.com/huggingface/datasets/pull/3629
2022-01-25T14:39:45
2022-01-25T14:55:46
2022-01-25T14:55:46
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,113,930,644
3,628
Dataset Card Creator drops information for "Additional Information" Section
First of all, the card creator is a great addition and really helpful for streamlining dataset cards! ## Describe the bug I encountered an inconvenient bug when entering "Additional Information" in the react app, which drops already entered text when switching to a previous section, and then back again to "Addition...
open
https://github.com/huggingface/datasets/issues/3628
2022-01-25T14:06:17
2022-01-25T14:09:01
null
{ "login": "dennlinger", "id": 26013491, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,113,556,837
3,627
Fix host URL in The Pile datasets
This PR fixes the host URL in The Pile datasets, once they have mirrored their data in another server. Fix #3626.
closed
https://github.com/huggingface/datasets/pull/3627
2022-01-25T08:11:28
2022-07-20T20:54:42
2022-02-14T08:40:58
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,113,534,436
3,626
The Pile cannot connect to host
## Describe the bug The Pile had issues with their previous host server and have mirrored its content to another server. The new URL server should be updated.
closed
https://github.com/huggingface/datasets/issues/3626
2022-01-25T07:43:33
2022-02-14T08:40:58
2022-02-14T08:40:58
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,113,017,522
3,625
Add a metadata field for when source data was produced
**Is your feature request related to a problem? Please describe.** The current problem is that information about when source data was produced is not easily visible. Though there are a variety of metadata fields available in the dataset viewer, time period information is not included. This feature request suggests mak...
open
https://github.com/huggingface/datasets/issues/3625
2022-01-24T18:52:39
2022-06-28T13:54:49
null
{ "login": "davanstrien", "id": 8995957, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,112,835,239
3,623
Extend support for streaming datasets that use os.path.relpath
This PR extends the support in streaming mode for datasets that use `os.path.relpath`, by patching that function. This feature will also be useful to yield the relative path of audio or image files, within an archive or parent dir. Close #3622.
closed
https://github.com/huggingface/datasets/pull/3623
2022-01-24T16:00:52
2022-02-04T14:03:55
2022-02-04T14:03:54
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
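Patching here means swapping `os.path.relpath` for a URL-aware variant when a dataset script runs in streaming mode. A hedged sketch of what such a wrapper could look like (hypothetical `xrelpath`; the real patched helper may differ):

```python
import os
from urllib.parse import urlparse


def xrelpath(path, start):
    """os.path.relpath that also accepts URLs, comparing only the
    path components when the argument carries a scheme."""
    parsed = urlparse(path)
    if parsed.scheme in ("", "file"):
        return os.path.relpath(path, start)
    return os.path.relpath(parsed.path, urlparse(start).path)


rel = xrelpath("https://host/data/audio/a.wav", "https://host/data")
assert rel == os.path.join("audio", "a.wav")
```

This is the property needed to yield the relative path of audio or image files inside an archive or parent directory, whether the source is local or remote.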
1,112,831,661
3,622
Extend support for streaming datasets that use os.path.relpath
Extend support for streaming datasets that use `os.path.relpath`. This feature will also be useful to yield the relative path of audio or image files.
closed
https://github.com/huggingface/datasets/issues/3622
2022-01-24T15:58:23
2022-02-04T14:03:54
2022-02-04T14:03:54
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,112,720,434
3,621
Consider adding `ipywidgets` as a dependency.
When I install `datasets` in a fresh virtualenv with jupyterlab I always see this error. ``` ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html ``` It's a bit of a nuisance, because I need to shut down the jupyterlab ser...
closed
https://github.com/huggingface/datasets/issues/3621
2022-01-24T14:27:11
2022-02-24T09:04:36
2022-02-24T09:04:36
{ "login": "koaning", "id": 1019791, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,112,677,252
3,620
Add Fon language tag
Add Fon language tag to resources.
closed
https://github.com/huggingface/datasets/pull/3620
2022-01-24T13:52:26
2022-02-04T14:04:36
2022-02-04T14:04:35
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,112,611,415
3,619
fix meta in mls
The `monolingual` value of the `multilinguality` param in the YAML meta was changed to `multilingual` :)
closed
https://github.com/huggingface/datasets/pull/3619
2022-01-24T12:54:38
2022-01-24T20:53:22
2022-01-24T20:53:22
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
1,112,123,365
3,618
TIMIT Dataset not working with GPU
## Describe the bug I am working trying to use the TIMIT dataset in order to fine-tune Wav2Vec2 model and I am unable to load the "audio" column from the dataset when working with a GPU. I am working on Amazon Sagemaker Studio, on the Python 3 (PyTorch 1.8 Python 3.6 GPU Optimized) environment, with a single ml.g4...
closed
https://github.com/huggingface/datasets/issues/3618
2022-01-24T03:26:03
2023-07-25T15:20:20
2023-07-25T15:20:20
{ "login": "TheSeamau5", "id": 3227869, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,111,938,691
3,617
PR for the CFPB Consumer Complaints dataset
Think I followed all the steps, but please let me know if anything needs changing or any improvements I can make to the code quality.
closed
https://github.com/huggingface/datasets/pull/3617
2022-01-23T17:47:12
2022-02-07T21:08:31
2022-02-07T21:08:31
{ "login": "kayvane1", "id": 42403093, "type": "User" }
[]
true
[]
1,111,587,861
3,616
Make streamable the BnL Historical Newspapers dataset
I've refactored the code in order to make the dataset streamable and to avoid it takes too long: - I've used `iter_files` Close #3615
closed
https://github.com/huggingface/datasets/pull/3616
2022-01-22T14:52:36
2022-02-04T14:05:23
2022-02-04T14:05:21
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,111,576,876
3,615
Dataset BnL Historical Newspapers does not work in streaming mode
## Describe the bug When trying to load in streaming mode, it "hangs"... ## Steps to reproduce the bug ```python ds = load_dataset("bnl_newspapers", split="train", streaming=True) ``` ## Expected results The code should be optimized, so that it works fast in streaming mode. CC: @davanstrien
closed
https://github.com/huggingface/datasets/issues/3615
2022-01-22T14:12:59
2022-02-04T14:05:21
2022-02-04T14:05:21
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,110,736,657
3,614
Minor fixes
This PR: * adds "desc" to the `ignore_kwargs` list in `Dataset.filter` * fixes the default value of `id` in `DatasetDict.prepare_for_task`
closed
https://github.com/huggingface/datasets/pull/3614
2022-01-21T17:48:44
2022-01-24T12:45:49
2022-01-24T12:45:49
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,110,684,015
3,613
Files not updating in dataset viewer
## Dataset viewer issue for '*name of the dataset*' **Link:** Some examples: * https://huggingface.co/datasets/abidlabs/crowdsourced-speech4 * https://huggingface.co/datasets/abidlabs/test-audio-13 *short description of the issue* It seems that the dataset viewer is reading a cached version of the dataset and...
closed
https://github.com/huggingface/datasets/issues/3613
2022-01-21T16:47:20
2022-01-22T08:13:13
2022-01-22T08:13:13
{ "login": "abidlabs", "id": 1778297, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,110,506,466
3,612
wikifix
This should get the wikipedia dataloading script back up and running - at least I hope so (tested with languages ff and ii)
closed
https://github.com/huggingface/datasets/pull/3612
2022-01-21T14:05:11
2022-02-03T17:58:16
2022-02-03T17:58:16
{ "login": "apergo-ai", "id": 68908804, "type": "User" }
[]
true
[]
1,110,399,096
3,611
Indexing bug after dataset.select()
## Describe the bug A clear and concise description of what the bug is. Dataset indexing is not working as expected after `dataset.select(range(100))` ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import datasets task_to_keys = { "cola": ("sentence", None), "mnli":...
closed
https://github.com/huggingface/datasets/issues/3611
2022-01-21T12:09:30
2022-01-27T18:16:22
2022-01-27T18:16:22
{ "login": "kamalkraj", "id": 17096858, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,109,777,314
3,610
Checksum error when trying to load amazon_review dataset
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug I am getting the issue when trying to load dataset using ``` dataset = load_dataset("amazon_polarity") ``` ## Expected results dataset loaded ## Actual results ``` -------------------------------------...
closed
https://github.com/huggingface/datasets/issues/3610
2022-01-20T21:20:32
2022-01-21T13:22:31
2022-01-21T13:22:31
{ "login": "ghost", "id": 10137, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,109,579,112
3,609
Fixes to pubmed dataset download function
Pubmed has updated its settings for 2022 and thus the existing download script does not work.
closed
https://github.com/huggingface/datasets/pull/3609
2022-01-20T17:31:35
2022-03-03T16:18:52
2022-03-03T14:23:35
{ "login": "spacemanidol", "id": 3886120, "type": "User" }
[]
true
[]
1,109,310,981
3,608
Add support for continuous metrics (RMSE, MAE)
**Is your feature request related to a problem? Please describe.** I am uploading our dataset and models for the "Constructing interval measures" method we've developed, which uses item response theory to convert multiple discrete labels into a continuous spectrum for hate speech. Once we have this outcome our NLP m...
closed
https://github.com/huggingface/datasets/issues/3608
2022-01-20T13:35:36
2022-03-09T17:18:20
2022-03-09T17:18:20
{ "login": "ck37", "id": 50770, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "good first issue", "color": "7057ff" } ]
false
[]
1,109,218,370
3,607
Add MIT Scene Parsing Benchmark
Add MIT Scene Parsing Benchmark (a subset of ADE20k). TODOs: * [x] add dummy data * [x] add dataset card * [x] generate `dataset_info.json`
closed
https://github.com/huggingface/datasets/pull/3607
2022-01-20T12:03:07
2022-02-18T12:51:01
2022-02-18T12:51:00
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,108,918,701
3,606
audio column not saved correctly after resampling
## Describe the bug After resampling the audio column, saving with save_to_disk doesn't seem to save with the correct type. ## Steps to reproduce the bug - load a subset of common voice dataset (48Khz) - resample audio column to 16Khz - save with save_to_disk() - load with load_from_disk() ## Expected resul...
closed
https://github.com/huggingface/datasets/issues/3606
2022-01-20T06:37:10
2022-01-23T01:41:01
2022-01-23T01:24:14
{ "login": "laphang", "id": 24724502, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,108,738,561
3,605
Adding Turkic X-WMT evaluation set for machine translation
This dataset is a human-translated evaluation set for MT, crowdsourced and provided by the [Turkic Interlingua](https://turkic-interlingua.org) community. It contains eval sets for 8 Turkic languages covering 88 language directions. Languages being covered are: Azerbaijani (az) Bashkir (ba) English (en) Karakalpak (kaa) ...
closed
https://github.com/huggingface/datasets/pull/3605
2022-01-20T01:40:29
2022-01-31T09:50:57
2022-01-31T09:50:57
{ "login": "mirzakhalov", "id": 26018417, "type": "User" }
[]
true
[]
1,108,477,316
3,604
Dataset Viewer not showing Previews for Private Datasets
## Dataset viewer issue for 'abidlabs/test-audio-13' It seems that the dataset viewer does not show previews for `private` datasets, even for the user whose private dataset it is. See [1] for example. If I change the visibility to public, then it does show, but it would be useful to have the viewer even for private ...
closed
https://github.com/huggingface/datasets/issues/3604
2022-01-19T19:29:26
2022-09-26T08:04:43
2022-09-26T08:04:43
{ "login": "abidlabs", "id": 1778297, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,108,392,141
3,603
Add British Library books dataset
This pull request adds a dataset of text from digitised (primarily 19th Century) books from the British Library. This collection has previously been used for training language models, e.g. https://github.com/dbmdz/clef-hipe/blob/main/hlms.md. It would be nice to make this dataset more accessible for others to use throu...
closed
https://github.com/huggingface/datasets/pull/3603
2022-01-19T17:53:05
2022-01-31T17:22:51
2022-01-31T17:01:49
{ "login": "davanstrien", "id": 8995957, "type": "User" }
[]
true
[]
1,108,247,870
3,602
Update url for conll2003
Following https://github.com/huggingface/datasets/issues/3582 I'm changing the download URL of the conll2003 data files, since the previous host doesn't have the authorization to redistribute the data
closed
https://github.com/huggingface/datasets/pull/3602
2022-01-19T15:35:04
2022-01-20T16:23:03
2022-01-19T15:43:53
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,108,207,131
3,601
Add conll2003 licensing
Following https://github.com/huggingface/datasets/issues/3582, this PR updates the licensing section of the CoNLL2003 dataset.
closed
https://github.com/huggingface/datasets/pull/3601
2022-01-19T15:00:41
2022-01-19T17:17:28
2022-01-19T17:17:28
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,108,131,878
3,600
Use old url for conll2003
As reported in https://github.com/huggingface/datasets/issues/3582 the CoNLL2003 data files are not available in the master branch of the repo that used to host them. For now we can use the URL from an older commit to access the data files
closed
https://github.com/huggingface/datasets/pull/3600
2022-01-19T13:56:49
2022-01-19T14:16:28
2022-01-19T14:16:28
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,108,111,607
3,599
The `add_column()` method does not work if used on dataset sliced with `select()`
Hello, I posted this as a question on the forums ([here](https://discuss.huggingface.co/t/add-column-does-not-work-if-used-on-dataset-sliced-with-select/13893)): I have a dataset with 2000 entries > dataset = Dataset.from_dict({'colA': list(range(2000))}) and from which I want to extract the first one thousan...
closed
https://github.com/huggingface/datasets/issues/3599
2022-01-19T13:36:50
2022-01-28T15:35:57
2022-01-28T15:35:57
{ "login": "ThGouzias", "id": 59422506, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,108,107,199
3,598
Readme info not being parsed to show on Dataset card page
## Describe the bug The info contained in the README.md file is not being shown on the dataset main page. Basic info and table of contents are properly formatted in the README. ## Steps to reproduce the bug # Sample code to reproduce the bug The README file is this one: https://huggingface.co/datasets/softcatal...
closed
https://github.com/huggingface/datasets/issues/3598
2022-01-19T13:32:29
2022-01-21T10:20:01
2022-01-21T10:20:01
{ "login": "davidcanovas", "id": 79796807, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,108,092,864
3,597
ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: /content
## Bug The install of the streaming dataset is giving the following error. ## Steps to reproduce the bug ```python ! git clone https://github.com/huggingface/datasets.git ! cd datasets ! pip install -e ".[streaming]" ``` ## Actual results Cloning into 'datasets'... remote: Enumerating objects: 50816, done. remot...
closed
https://github.com/huggingface/datasets/issues/3597
2022-01-19T13:19:28
2022-08-05T12:35:51
2022-02-14T08:46:34
{ "login": "amitkml", "id": 49492030, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,107,345,338
3,596
Loss of cast `Image` feature on certain dataset method
## Describe the bug When a column is cast to an `Image` feature, the cast type appears to be lost during certain operations. I first noticed this when using the `push_to_hub` method on a dataset that contained URLs pointing to images which had been cast to an `Image`. This also happens when using select on a data...
closed
https://github.com/huggingface/datasets/issues/3596
2022-01-18T20:44:01
2022-01-21T18:07:28
2022-01-21T18:07:28
{ "login": "davanstrien", "id": 8995957, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,107,260,527
3,595
Add ImageNet toy datasets from fastai
Adds the ImageNet toy datasets from FastAI: Imagenette, Imagewoof and Imagewang. TODOs: * [ ] add dummy data * [ ] add dataset card * [ ] generate `dataset_info.json`
closed
https://github.com/huggingface/datasets/pull/3595
2022-01-18T19:03:35
2023-09-24T09:39:07
2022-09-30T14:39:35
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,107,174,619
3,594
fix multiple language downloading in mC4
If we try to access multiple languages of the [mC4 dataset](https://github.com/huggingface/datasets/tree/master/datasets/mc4), it will throw an error. For example, if we do ```python mc4_subset_two_langs = load_dataset("mc4", languages=["st", "su"]) ``` we got ``` FileNotFoundError: Couldn't find file at https:/...
closed
https://github.com/huggingface/datasets/pull/3594
2022-01-18T17:25:19
2022-01-19T11:22:57
2022-01-18T19:10:22
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
1,107,070,852
3,593
Update README.md
Towards license of Tweet Eval parts
closed
https://github.com/huggingface/datasets/pull/3593
2022-01-18T15:52:16
2022-01-20T17:14:53
2022-01-20T17:14:53
{ "login": "borgr", "id": 6416600, "type": "User" }
[]
true
[]
1,107,026,723
3,592
Add QuickDraw dataset
Add the QuickDraw dataset. TODOs: * [x] add dummy data * [x] add dataset card * [x] generate `dataset_info.json`
closed
https://github.com/huggingface/datasets/pull/3592
2022-01-18T15:13:39
2022-06-09T10:04:54
2022-06-09T09:56:13
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,106,928,613
3,591
Add support for time, date, duration, and decimal dtypes
Add support for the pyarrow time (maps to `datetime.time` in python), date (maps to `datetime.date` in python), duration (maps to `datetime.timedelta` in python), and decimal (maps to `decimal.Decimal` in python) dtypes. This should be helpful when writing scripts for time-series datasets.
closed
https://github.com/huggingface/datasets/pull/3591
2022-01-18T13:46:05
2022-01-31T18:29:34
2022-01-20T17:37:33
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,106,784,860
3,590
Update ANLI README.md
Update license and little things concerning ANLI
closed
https://github.com/huggingface/datasets/pull/3590
2022-01-18T11:22:53
2022-01-20T16:58:41
2022-01-20T16:58:41
{ "login": "borgr", "id": 6416600, "type": "User" }
[]
true
[]
1,106,766,114
3,589
Pin torchmetrics to fix the COMET test
Torchmetrics 0.7.0 got released and has issues with `transformers` (see https://github.com/PyTorchLightning/metrics/issues/770) I'm pinning it to 0.6.0 in the CI, since 0.7.0 makes the COMET metric test fail. COMET requires torchmetrics==0.6.0 anyway.
closed
https://github.com/huggingface/datasets/pull/3589
2022-01-18T11:03:49
2022-01-18T11:04:56
2022-01-18T11:04:55
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,106,749,000
3,588
Update HellaSwag README.md
Adding information from the git repo and paper that were missing
closed
https://github.com/huggingface/datasets/pull/3588
2022-01-18T10:46:15
2022-01-20T16:57:43
2022-01-20T16:57:43
{ "login": "borgr", "id": 6416600, "type": "User" }
[]
true
[]
1,106,719,182
3,587
No module named 'fsspec.archive'
## Describe the bug Cannot import datasets after installation. ## Steps to reproduce the bug ```shell $ python Python 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> import datasets Traceback (most recent...
closed
https://github.com/huggingface/datasets/issues/3587
2022-01-18T10:17:01
2022-08-11T09:57:54
2022-01-18T10:33:10
{ "login": "shuuchen", "id": 13246825, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,106,455,672
3,586
Revisit `enable/disable_` toggle function prefix
As discussed in https://github.com/huggingface/transformers/pull/15167, we should revisit the `enable/disable_` toggle function prefix, potentially in favor of `set_enabled_`. Concretely, this translates to - De-deprecating `disable_progress_bar()` - Adding `enable_progress_bar()` - On the caching side, adding `en...
closed
https://github.com/huggingface/datasets/issues/3586
2022-01-18T04:09:55
2022-03-14T15:01:08
2022-03-14T15:01:08
{ "login": "jaketae", "id": 25360440, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,105,821,470
3,585
Datasets streaming + map doesn't work for `Audio`
## Describe the bug When using audio datasets in streaming mode, applying a `map(...)` before iterating leads to an error as the key `array` does not exist anymore. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("common_voice", "en", streaming=True, split="train")...
closed
https://github.com/huggingface/datasets/issues/3585
2022-01-17T12:55:42
2022-01-20T13:28:00
2022-01-20T13:28:00
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "duplicate", "color": "cfd3d7" } ]
false
[]
1,105,231,768
3,584
https://huggingface.co/datasets/huggingface/transformers-metadata
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
closed
https://github.com/huggingface/datasets/issues/3584
2022-01-17T00:18:14
2022-02-14T08:51:27
2022-02-14T08:51:27
{ "login": "ecankirkic", "id": 37082592, "type": "User" }
[ { "name": "wontfix", "color": "ffffff" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,105,195,144
3,583
Add The Medical Segmentation Decathlon Dataset
## Adding a Dataset - **Name:** *The Medical Segmentation Decathlon Dataset* - **Description:** The underlying data set was designed to explore the axis of difficulties typically encountered when dealing with medical images, such as small data sets, unbalanced labels, multi-site data, and small objects. - **Paper:*...
open
https://github.com/huggingface/datasets/issues/3583
2022-01-16T21:42:25
2022-03-18T10:44:42
null
{ "login": "omarespejel", "id": 4755430, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "vision", "color": "bfdadc" } ]
false
[]
1,104,877,303
3,582
conll 2003 dataset source url is no longer valid
## Describe the bug Loading `conll2003` dataset fails because it was removed (just yesterday 1/14/2022) from the location it is looking for. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("conll2003") ``` ## Expected results The dataset should load. ## Actual r...
closed
https://github.com/huggingface/datasets/issues/3582
2022-01-15T23:04:17
2022-07-20T13:06:40
2022-01-21T16:57:32
{ "login": "rcanand", "id": 303900, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,104,857,822
3,581
Unable to create a dataset from a parquet file in S3
## Describe the bug Trying to create a dataset from a parquet file in S3. ## Steps to reproduce the bug ```python import s3fs from datasets import Dataset s3 = s3fs.S3FileSystem(anon=False) with s3.open(PATH_LTR_TOY_CLEAN_DATASET, 'rb') as s3file: dataset = Dataset.from_parquet(s3file) ``` ## Expe...
open
https://github.com/huggingface/datasets/issues/3581
2022-01-15T21:34:16
2022-02-14T08:52:57
null
{ "login": "regCode", "id": 18012903, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,104,663,242
3,580
Bug in wiki bio load
wiki_bio is failing to load because of a failing drive link. Can someone fix this? ![7E90023B-A3B1-4930-BA25-45CCCB4E1710](https://user-images.githubusercontent.com/3104771/149617870-5a32a2da-2c78-483b-bff6-d7534215a423.png) ![653C1C76-C725-4A04-A0D8-084373BA612F](https://user-images.githubusercontent.com...
closed
https://github.com/huggingface/datasets/issues/3580
2022-01-15T10:04:33
2022-01-31T08:38:09
2022-01-31T08:38:09
{ "login": "tuhinjubcse", "id": 3104771, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,103,451,118
3,579
Add Text2log Dataset
Adding the text2log dataset used for training FOL sentence translating models
closed
https://github.com/huggingface/datasets/pull/3579
2022-01-14T10:45:01
2022-01-20T17:09:44
2022-01-20T17:09:44
{ "login": "apergo-ai", "id": 68908804, "type": "User" }
[]
true
[]
1,103,403,287
3,578
label information gets lost after parquet serialization
## Describe the bug In the *dataset_info.json* file, information about the label gets lost after dataset serialization. ## Steps to reproduce the bug ```python from datasets import load_dataset # normal save dataset = load_dataset('glue', 'sst2', split='train') dataset.save_to_disk("normal_save") # save ...
closed
https://github.com/huggingface/datasets/issues/3578
2022-01-14T10:10:38
2023-07-25T15:44:53
2023-07-25T15:44:53
{ "login": "Tudyx", "id": 56633664, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]