| id | number | title | body | state | html_url | created_at | updated_at | closed_at | user | labels | is_pull_request | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,352,539,075 | 4,903 | Fix CI reporting | Fix CI so that it reports the defaults (failed and error) in addition to the custom statuses (xfailed and xpassed) in the test summary.
This PR fixes a regression introduced by:
- #4845
That PR introduced the reporting of xfailed and xpassed, but mistakenly removed the reporting of the default failed and error statuses. | closed | https://github.com/huggingface/datasets/pull/4903 | 2022-08-26T17:16:30 | 2022-08-26T17:49:33 | 2022-08-26T17:46:59 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,352,469,196 | 4,902 | Name the default config `default` | Currently, if a dataset has no configuration, a default configuration is created from the dataset name.
For example, for a dataset loaded from the hub repository, such as https://huggingface.co/datasets/user/dataset (repo id is `user/dataset`), the default configuration will be `user--dataset`.
It might be easier... | closed | https://github.com/huggingface/datasets/issues/4902 | 2022-08-26T16:16:22 | 2023-07-24T21:15:31 | 2023-07-24T21:15:31 | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "question",
"color": "d876e3"
}
] | false | [] |
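The naming behavior discussed in this issue can be sketched in plain Python; the helper functions below are hypothetical illustrations of the current and proposed behaviors, not the actual `datasets` implementation:

```python
def current_default_config_name(repo_id: str) -> str:
    # Behavior described above: the default config name is derived
    # from the repo id, with "/" replaced by "--".
    return repo_id.replace("/", "--")

def proposed_default_config_name(repo_id: str) -> str:
    # Proposed behavior: one fixed name for datasets with no configuration.
    return "default"

print(current_default_config_name("user/dataset"))   # -> user--dataset
print(proposed_default_config_name("user/dataset"))  # -> default
```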
1,352,438,915 | 4,901 | Raise ManualDownloadError from get_dataset_config_info | This PR raises a specific `ManualDownloadError` when `get_dataset_config_info` is called for a dataset that requires manual download.
Related to:
- #4898
CC: @severo | closed | https://github.com/huggingface/datasets/pull/4901 | 2022-08-26T15:45:56 | 2022-08-30T10:42:21 | 2022-08-30T10:40:04 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,352,405,855 | 4,900 | Dataset Viewer issue for asaxena1990/Dummy_dataset | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | closed | https://github.com/huggingface/datasets/issues/4900 | 2022-08-26T15:15:44 | 2023-07-24T15:42:09 | 2023-07-24T15:42:09 | {
"login": "ankurcl",
"id": 56627657,
"type": "User"
} | [] | false | [] |
1,352,031,286 | 4,899 | Re-add code and und language tags | This PR fixes the removal of 2 language tags done by:
- #4882
The tags are:
- "code": this is not a IANA tag but needed
- "und": this is one of the special scoped tags removed by 0d53202b9abce6fd0358cb00d06fcfd904b875af
- used in "mc4" and "udhr" datasets | closed | https://github.com/huggingface/datasets/pull/4899 | 2022-08-26T09:48:57 | 2022-08-26T10:27:18 | 2022-08-26T10:24:20 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,351,851,254 | 4,898 | Dataset Viewer issue for timit_asr | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | closed | https://github.com/huggingface/datasets/issues/4898 | 2022-08-26T07:12:05 | 2022-10-03T12:40:28 | 2022-10-03T12:40:27 | {
"login": "InayatUllah932",
"id": 91126978,
"type": "User"
} | [] | false | [] |
1,351,784,727 | 4,897 | datasets generate large arrow file | While checking large files on disk, I found a large cache file in the cifar10 data directory:

As we know, the size of the cifar10 dataset is ~130 MB, but the cache file is almost 30 GB in size; there may be so... | closed | https://github.com/huggingface/datasets/issues/4897 | 2022-08-26T05:51:16 | 2022-09-18T05:07:52 | 2022-09-18T05:07:52 | {
"login": "jax11235",
"id": 18533904,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,351,180,409 | 4,896 | Fix missing tags in dataset cards | Fix missing tags in dataset cards:
- anli
- coarse_discourse
- commonsense_qa
- cos_e
- ilist
- lc_quad
- web_questions
- xsum
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891 | closed | https://github.com/huggingface/datasets/pull/4896 | 2022-08-25T16:41:43 | 2022-09-22T14:37:16 | 2022-08-26T04:41:48 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,350,798,527 | 4,895 | load_dataset method returns Unknown split "validation" even if this dir exists | ## Describe the bug
The `datasets.load_dataset` returns a `ValueError: Unknown split "validation". Should be one of ['train', 'test'].` when running `load_dataset(local_data_dir_path, split="validation")` even if the `validation` sub-directory exists in the local data path.
The data directories are as follows and a... | closed | https://github.com/huggingface/datasets/issues/4895 | 2022-08-25T12:11:00 | 2024-03-26T16:47:48 | 2022-09-29T08:07:50 | {
"login": "SamSamhuns",
"id": 13418507,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,350,667,270 | 4,894 | Add citation information to makhzan dataset | This PR adds the citation information to `makhzan` dataset, once they have replied to our request for that information:
- https://github.com/zeerakahmed/makhzan/issues/43 | closed | https://github.com/huggingface/datasets/pull/4894 | 2022-08-25T10:16:40 | 2022-08-30T06:21:54 | 2022-08-25T13:19:41 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,350,655,674 | 4,893 | Oversampling strategy for iterable datasets in `interleave_datasets` | In https://github.com/huggingface/datasets/pull/4831 @ylacombe added an oversampling strategy for `interleave_datasets`. However right now it doesn't work for datasets loaded using `load_dataset(..., streaming=True)`, which are `IterableDataset` objects.
It would be nice to expand `interleave_datasets` for iterable ... | closed | https://github.com/huggingface/datasets/issues/4893 | 2022-08-25T10:06:55 | 2022-10-03T12:37:46 | 2022-10-03T12:37:46 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "good second issue",
"color": "BDE59C"
}
] | false | [] |
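The oversampling strategy requested for iterable datasets can be sketched with plain Python iterators; this illustrates the intended semantics (stop once every input has been fully consumed at least once, restarting shorter inputs) and is not the `datasets` implementation:

```python
def interleave_oversample(*iterables):
    """Round-robin over the inputs; a shorter input restarts from the
    beginning until every input has been fully consumed at least once."""
    iterators = [iter(it) for it in iterables]
    exhausted = [False] * len(iterators)
    while True:
        for i, it in enumerate(iterators):
            try:
                value = next(it)
            except StopIteration:
                exhausted[i] = True
                if all(exhausted):          # every input seen in full: stop
                    return
                iterators[i] = iter(iterables[i])  # restart a shorter input
                value = next(iterators[i])
            yield value

print(list(interleave_oversample([1, 2, 3], ["a"])))  # -> [1, 'a', 2, 'a', 3, 'a']
```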
1,350,636,499 | 4,892 | Add citation to ro_sts and ro_sts_parallel datasets | This PR adds the citation information to the `ro_sts` and `ro_sts_parallel` datasets, once they have replied to our request for that information:
- https://github.com/dumitrescustefan/RO-STS/issues/4 | closed | https://github.com/huggingface/datasets/pull/4892 | 2022-08-25T09:51:06 | 2022-08-25T10:49:56 | 2022-08-25T10:49:56 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,350,589,813 | 4,891 | Fix missing tags in dataset cards | Fix missing tags in dataset cards:
- aslg_pc12
- librispeech_lm
- mwsc
- opus100
- qasc
- quail
- squadshifts
- winograd_wsc
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
| closed | https://github.com/huggingface/datasets/pull/4891 | 2022-08-25T09:14:17 | 2022-09-22T14:39:02 | 2022-08-25T13:43:34 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,350,578,029 | 4,890 | add Dataset.from_list | As discussed in #4885
I initially added this bit at the end, thinking that filling this field was necessary, as is done in `from_dict`.
However, it seems the constructor takes care of filling info when it is empty.
```
if info.features is None:
info.features = Features(
{
col: generate_from_arro... | closed | https://github.com/huggingface/datasets/pull/4890 | 2022-08-25T09:05:58 | 2022-09-02T10:22:59 | 2022-09-02T10:20:33 | {
"login": "sanderland",
"id": 48946947,
"type": "User"
} | [] | true | [] |
1,349,758,525 | 4,889 | torchaudio 11.0 yields different results than torchaudio 12.1 when loading MP3 | ## Describe the bug
When loading Common Voice with torchaudio 0.11.0, the results differ from those with 0.12.1, which leads to problems in transformers; see: https://github.com/huggingface/transformers/pull/18749
## Steps to reproduce the bug
If you run the following code once with `torchaudio==0.11.0+cu102` and `torc... | closed | https://github.com/huggingface/datasets/issues/4889 | 2022-08-24T16:54:43 | 2023-03-02T15:33:05 | 2023-03-02T15:33:04 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,349,447,521 | 4,888 | Dataset Viewer issue for subjqa | ### Link
https://huggingface.co/datasets/subjqa
### Description
Getting the following error for this dataset:
```
Status code: 500
Exception: Status500Error
Message: 2 or more items returned, instead of 1
```
Not sure what's causing it though 🤔
### Owner
Yes | closed | https://github.com/huggingface/datasets/issues/4888 | 2022-08-24T13:26:20 | 2022-09-08T08:23:42 | 2022-09-08T08:23:42 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,349,426,693 | 4,887 | Add "cc-by-nc-sa-2.0" to list of licenses | Datasets side of https://github.com/huggingface/hub-docs/pull/285 | closed | https://github.com/huggingface/datasets/pull/4887 | 2022-08-24T13:11:49 | 2022-08-26T10:31:32 | 2022-08-26T10:29:20 | {
"login": "osanseviero",
"id": 7246357,
"type": "User"
} | [] | true | [] |
1,349,285,569 | 4,886 | Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid | ## Describe the bug
Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('huggan/CelebA-HQ')
```
## Expected results
See https://colab.research.google.com/drive/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing#... | open | https://github.com/huggingface/datasets/issues/4886 | 2022-08-24T11:24:21 | 2023-02-02T02:40:53 | null | {
"login": "JeanKaddour",
"id": 11850255,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,349,181,448 | 4,885 | Create dataset from list of dicts | I often find myself with data from a variety of sources, and a list of dicts is very common among these.
However, converting this to a Dataset is a little awkward, requiring either
```Dataset.from_pandas(pd.DataFrame(formatted_training_data))```
which can error out on some more exotic values such as 2-d arrays for reas... | closed | https://github.com/huggingface/datasets/issues/4885 | 2022-08-24T10:01:24 | 2022-09-08T16:02:52 | 2022-09-08T16:02:52 | {
"login": "sanderland",
"id": 48946947,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
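The conversion this issue asks for (a list of dicts into the dict-of-lists shape that `Dataset.from_dict` expects) can be sketched in a few lines; the helper name and sample data are illustrative, not library code:

```python
def rows_to_columns(rows):
    """Convert a list of dicts (one per example) into a dict of lists,
    the shape expected by Dataset.from_dict."""
    columns = {key: [] for key in rows[0]}
    for row in rows:
        for key, values in columns.items():
            values.append(row[key])
    return columns

formatted_training_data = [
    {"text": "hello", "label": 0},
    {"text": "world", "label": 1},
]
print(rows_to_columns(formatted_training_data))
# Dataset.from_dict(rows_to_columns(formatted_training_data)) would then
# avoid the pandas round-trip described above.
```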
1,349,105,946 | 4,884 | Fix documentation card of math_qa dataset | Fix documentation card of math_qa dataset. | closed | https://github.com/huggingface/datasets/pull/4884 | 2022-08-24T09:00:56 | 2022-08-24T11:33:17 | 2022-08-24T11:33:16 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,349,083,235 | 4,883 | With dataloader RSS memory consumed by HF datasets monotonically increases | ## Describe the bug
When the HF datasets is used in conjunction with PyTorch Dataloader, the RSS memory of the process keeps on increasing when it should stay constant.
## Steps to reproduce the bug
Run and observe the output of this snippet which logs RSS memory.
```python
import psutil
import os
from transf... | open | https://github.com/huggingface/datasets/issues/4883 | 2022-08-24T08:42:54 | 2024-01-23T12:42:40 | null | {
"login": "apsdehal",
"id": 3616806,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
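The issue's reproduction snippet logs RSS with `psutil`; a dependency-free way to log process memory on Unix, useful for spotting the monotonic growth described above, is the stdlib `resource` module (this sketch is not the reporter's original code):

```python
import resource
import sys

def max_rss_mb() -> float:
    """Peak resident set size of the current process, in MiB.
    ru_maxrss is reported in KiB on Linux and in bytes on macOS."""
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    divisor = 1024 if sys.platform != "darwin" else 1024 * 1024
    return rss / divisor

# Log memory periodically (e.g. every N batches) to spot monotonic growth:
for step in range(3):
    print(f"step {step}: peak RSS = {max_rss_mb():.1f} MiB")
```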
1,348,913,665 | 4,882 | Fix language tags resource file | This PR fixes/updates/adds ALL language tags from IANA (as of 2022-08-08).
This PR also removes all BCP47 suffixes (the languages file only contains language subtags, i.e. ISO 639-1 or 639-2 codes; no script/region/variant suffixes). See:
- #4753 | closed | https://github.com/huggingface/datasets/pull/4882 | 2022-08-24T06:06:01 | 2022-08-24T13:58:33 | 2022-08-24T13:58:30 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,348,495,777 | 4,881 | Language names and language codes: connecting to a big database (rather than slow enrichment of custom list) | **The problem:**
Language diversity is an important dimension of the diversity of datasets. To find one's way around datasets, being able to search by language name and by standardized codes appears crucial.
Currently the list of language codes is [here](https://github.com/huggingface/datasets/blob/main/src/datase... | open | https://github.com/huggingface/datasets/issues/4881 | 2022-08-23T20:14:24 | 2024-04-22T15:57:28 | null | {
"login": "alexis-michaud",
"id": 6072524,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,348,452,776 | 4,880 | Added names of less-studied languages | Added names of less-studied languages (nru – Narua and jya – Japhug) for existing datasets. | closed | https://github.com/huggingface/datasets/pull/4880 | 2022-08-23T19:32:38 | 2022-08-24T12:52:46 | 2022-08-24T12:52:46 | {
"login": "BenjaminGalliot",
"id": 23100612,
"type": "User"
} | [] | true | [] |
1,348,346,407 | 4,879 | Fix Citation Information section in dataset cards | Fix Citation Information section in dataset cards:
- cc_news
- conllpp
- datacommons_factcheck
- gnad10
- id_panl_bppt
- jigsaw_toxicity_pred
- kinnews_kirnews
- kor_sarcasm
- makhzan
- reasoning_bg
- ro_sts
- ro_sts_parallel
- sanskrit_classic
- telugu_news
- thaiqa_squad
- wiki_movies
This PR parti... | closed | https://github.com/huggingface/datasets/pull/4879 | 2022-08-23T18:06:43 | 2022-09-27T14:04:45 | 2022-08-24T04:09:07 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,348,270,141 | 4,878 | [not really a bug] `identical_ok` is deprecated in huggingface-hub's `upload_file` | In the huggingface-hub dependency, the `identical_ok` argument has no effect in `upload_file` (and it will be removed soon)
See
https://github.com/huggingface/huggingface_hub/blob/43499582b19df1ed081a5b2bd7a364e9cacdc91d/src/huggingface_hub/hf_api.py#L2164-L2169
It's used here:
https://github.com/huggingfac... | closed | https://github.com/huggingface/datasets/issues/4878 | 2022-08-23T17:09:55 | 2022-09-13T14:00:06 | 2022-09-13T14:00:05 | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [
{
"name": "help wanted",
"color": "008672"
},
{
"name": "question",
"color": "d876e3"
}
] | false | [] |
1,348,246,755 | 4,877 | Fix documentation card of covid_qa_castorini dataset | Fix documentation card of covid_qa_castorini dataset. | closed | https://github.com/huggingface/datasets/pull/4877 | 2022-08-23T16:52:33 | 2022-08-23T18:05:01 | 2022-08-23T18:05:00 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,348,202,678 | 4,876 | Move DatasetInfo from `datasets_infos.json` to the YAML tags in `README.md` | Currently there are two places to find metadata for datasets:
- datasets_infos.json, which contains **per dataset config**
- description
- citation
- license
- splits and sizes
- checksums of the data files
- feature types
- and more
- YAML tags, which contain
- license
- language
- trai... | closed | https://github.com/huggingface/datasets/issues/4876 | 2022-08-23T16:16:41 | 2022-10-03T09:11:13 | 2022-10-03T09:11:13 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | false | [] |
1,348,095,686 | 4,875 | `_resolve_features` ignores the token | ## Describe the bug
When calling [`_resolve_features()`](https://github.com/huggingface/datasets/blob/54b532a8a2f5353fdb0207578162153f7b2da2ec/src/datasets/iterable_dataset.py#L1255) on a gated dataset, ie. a dataset which requires a token to be loaded, the token seems to be ignored even if it has been provided to `... | open | https://github.com/huggingface/datasets/issues/4875 | 2022-08-23T14:57:36 | 2022-10-17T13:45:47 | null | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [] | false | [] |
1,347,618,197 | 4,874 | [docs] Some tiny doc tweaks | null | closed | https://github.com/huggingface/datasets/pull/4874 | 2022-08-23T09:19:40 | 2022-08-24T17:27:57 | 2022-08-24T17:27:56 | {
"login": "julien-c",
"id": 326577,
"type": "User"
} | [] | true | [] |
1,347,592,022 | 4,873 | Multiple dataloader memory error | For multiple datasets and tasks, we use more than 200 dataloaders, then pass them into `dataloader1, dataloader2, ..., dataloader200=accelerate.prepare(dataloader1, dataloader2, ..., dataloader200)`
This causes a memory error when generating batches. Any solutions?
```bash
File "/home/xxx/... | open | https://github.com/huggingface/datasets/issues/4873 | 2022-08-23T08:59:50 | 2023-01-26T02:01:11 | null | {
"login": "cyk1337",
"id": 13767887,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,347,180,765 | 4,872 | Docs for creating an audio dataset | This PR is a first draft of how to create audio datasets (`AudioFolder` and loading script). Feel free to let me know if there are any specificities I'm missing for this. 🙂 | closed | https://github.com/huggingface/datasets/pull/4872 | 2022-08-23T01:07:09 | 2022-09-22T17:19:13 | 2022-09-21T10:27:04 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
1,346,703,568 | 4,871 | Fix: wmt datasets - fix CWMT zh subsets | Fix https://github.com/huggingface/datasets/issues/4575
TODO: run `datasets-cli test`:
- [x] wmt17
- [x] wmt18
- [x] wmt19 | closed | https://github.com/huggingface/datasets/pull/4871 | 2022-08-22T16:42:09 | 2022-08-23T10:00:20 | 2022-08-23T10:00:19 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,346,160,498 | 4,870 | audio folder check CI | null | closed | https://github.com/huggingface/datasets/pull/4870 | 2022-08-22T10:15:53 | 2022-11-02T11:54:35 | 2022-08-22T12:19:40 | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [] | true | [] |
1,345,513,758 | 4,869 | Fix typos in documentation | null | closed | https://github.com/huggingface/datasets/pull/4869 | 2022-08-21T15:10:03 | 2022-08-22T09:25:39 | 2022-08-22T09:09:58 | {
"login": "fl-lo",
"id": 85993954,
"type": "User"
} | [] | true | [] |
1,345,191,322 | 4,868 | adding mafand to datasets | I'm adding the MAFAND dataset by Masakhane, based on the paper/repository below:
Paper: https://aclanthology.org/2022.naacl-main.223/
Code: https://github.com/masakhane-io/lafand-mt
Please help merge this.
Everything works except for creating the dummy data file. | closed | https://github.com/huggingface/datasets/pull/4868 | 2022-08-20T15:26:14 | 2022-08-22T11:00:50 | 2022-08-22T08:52:23 | {
"login": "dadelani",
"id": 23586676,
"type": "User"
} | [
{
"name": "wontfix",
"color": "ffffff"
}
] | true | [] |
1,344,982,646 | 4,867 | Complete tags of superglue dataset card | Related to #4479 . | closed | https://github.com/huggingface/datasets/pull/4867 | 2022-08-19T23:44:39 | 2022-08-22T09:14:03 | 2022-08-22T08:58:31 | {
"login": "richarddwang",
"id": 17963619,
"type": "User"
} | [] | true | [] |
1,344,809,132 | 4,866 | amend docstring for dunder | display dunder method in docsting with underlines an not bold markdown. | open | https://github.com/huggingface/datasets/pull/4866 | 2022-08-19T19:09:15 | 2022-09-09T16:33:11 | null | {
"login": "schafsam",
"id": 37704298,
"type": "User"
} | [] | true | [] |
1,344,552,626 | 4,865 | Dataset Viewer issue for MoritzLaurer/multilingual_nli | ### Link
_No response_
### Description
I've just uploaded a new dataset to the hub and the viewer does not work for some reason, see here: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli
It displays the error:
```
Status code: 400
Exception: Status400Error
Message: The dataset... | closed | https://github.com/huggingface/datasets/issues/4865 | 2022-08-19T14:55:20 | 2022-08-22T14:47:14 | 2022-08-22T06:13:20 | {
"login": "MoritzLaurer",
"id": 41862082,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,344,410,043 | 4,864 | Allow pathlib PosixPath in Dataset.read_json | **Is your feature request related to a problem? Please describe.**
```
from pathlib import Path
from datasets import Dataset
ds = Dataset.read_json(Path('data.json'))
```
causes an error
```
AttributeError: 'PosixPath' object has no attribute 'decode'
```
**Describe the solution you'd like**
It should be... | open | https://github.com/huggingface/datasets/issues/4864 | 2022-08-19T12:59:17 | 2025-04-11T17:22:48 | null | {
"login": "changjonathanc",
"id": 31893406,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
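Until `Path` objects are accepted directly, a workaround for the error above is to convert the path to a plain string before passing it in; `os.fspath` is the idiomatic conversion. This is a hedged sketch of the workaround, not a fix in `datasets` itself:

```python
import os
from pathlib import Path

path = Path("data.json")
path_str = os.fspath(path)  # equivalent to str(path) for pathlib paths
print(path_str)
# ds = Dataset.from_json(path_str)  # the actual load; needs `datasets` installed
```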
1,343,737,668 | 4,863 | TFDS wiki_dialog dataset to Huggingface dataset | ## Adding a Dataset
- **Name:** *Wiki_dialog*
- **Description: https://github.com/google-research/dialog-inpainting#:~:text=JSON%20object%2C%20for-,example,-%3A
- **Paper: https://arxiv.org/abs/2205.09073
- **Data: https://github.com/google-research/dialog-inpainting
- **Motivation:** *Research and Development on ... | closed | https://github.com/huggingface/datasets/issues/4863 | 2022-08-18T23:06:30 | 2022-08-22T09:41:45 | 2022-08-22T05:18:53 | {
"login": "djaym7",
"id": 12378820,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,343,464,699 | 4,862 | Got "AttributeError: 'xPath' object has no attribute 'read'" when loading an excel dataset with my own code | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
# The dataset function is as follows:
from pathlib import Path
from typing import Dict, List, Tuple
import datasets
import pandas as pd
_CITATION = """\
"""... | closed | https://github.com/huggingface/datasets/issues/4862 | 2022-08-18T18:36:14 | 2022-08-31T09:25:08 | 2022-08-31T09:25:08 | {
"login": "yana-xuyan",
"id": 38536635,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,343,260,220 | 4,861 | Using disk for memory with the method `from_dict` | **Is your feature request related to a problem? Please describe.**
I start with an empty dataset. In a loop, at each iteration, I create a new dataset with the method `from_dict` (based on some data I load) and I concatenate this new dataset with the one at the previous iteration. After some iterations, I have an OOM ... | open | https://github.com/huggingface/datasets/issues/4861 | 2022-08-18T15:18:18 | 2023-01-26T18:36:28 | null | {
"login": "HugoLaurencon",
"id": 44556846,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
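A memory-friendly pattern for the loop described above is to append each chunk to an on-disk JSON-lines file instead of concatenating an ever-growing in-memory dataset, then load the file once at the end (e.g. with `load_dataset("json", data_files=path)`). The file name and chunk contents below are illustrative:

```python
import json
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "accumulated.jsonl")
with open(path, "a", encoding="utf-8") as f:
    for iteration in range(3):
        # Stand-in for the data loaded at each iteration:
        chunk = [{"iteration": iteration, "value": v} for v in range(2)]
        for row in chunk:
            f.write(json.dumps(row) + "\n")

# Load everything back once, instead of concatenating datasets in RAM:
with open(path, encoding="utf-8") as f:
    rows = [json.loads(line) for line in f]
print(len(rows))  # -> 6
```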
1,342,311,540 | 4,860 | Add collection3 dataset | null | closed | https://github.com/huggingface/datasets/pull/4860 | 2022-08-17T21:31:42 | 2022-08-23T20:02:45 | 2022-08-22T09:08:59 | {
"login": "pefimov",
"id": 16446994,
"type": "User"
} | [
{
"name": "wontfix",
"color": "ffffff"
}
] | true | [] |
1,342,231,016 | 4,859 | can't install using conda on Windows 10 | ## Describe the bug
I wanted to install using conda or Anaconda navigator. That didn't work, so I had to install using pip.
## Steps to reproduce the bug
conda install -c huggingface -c conda-forge datasets
## Expected results
Should have indicated successful installation.
## Actual results
Solving environ... | open | https://github.com/huggingface/datasets/issues/4859 | 2022-08-17T19:57:37 | 2022-08-17T19:57:37 | null | {
"login": "xoffey",
"id": 22627691,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,340,859,853 | 4,858 | map() function removes columns when input_columns is not None | ## Describe the bug
The map function removes features from the dataset that are not present in the _input_columns_ list of columns, despite the removed columns not being mentioned in the _remove_columns_ argument.
## Steps to reproduce the bug
```python
from datasets import Dataset
ds = Dataset.from_dict({"a" : [... | closed | https://github.com/huggingface/datasets/issues/4858 | 2022-08-16T20:42:30 | 2022-09-22T13:55:24 | 2022-09-22T13:55:24 | {
"login": "pramodith",
"id": 16939722,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
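The behavior the reporter expected from `map(..., input_columns=...)` can be sketched in pure Python: the function sees only the selected columns, but unselected columns are kept unless listed in `remove_columns`. This is an illustration of the expected semantics, not `datasets` code:

```python
def map_rows(rows, fn, input_columns, remove_columns=()):
    """Apply fn to the selected columns of each row, keeping all other
    columns unless they are explicitly listed in remove_columns."""
    out = []
    for row in rows:
        updates = fn(*(row[c] for c in input_columns))
        new_row = {k: v for k, v in row.items() if k not in remove_columns}
        new_row.update(updates)
        out.append(new_row)
    return out

rows = [{"a": 1, "b": 10}, {"a": 2, "b": 20}]
print(map_rows(rows, lambda a: {"a": a * 2}, input_columns=["a"]))
# -> [{'a': 2, 'b': 10}, {'a': 4, 'b': 20}]  (column "b" is preserved)
```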
1,340,397,153 | 4,857 | No preprocessed wikipedia is working on huggingface/datasets | ## Describe the bug
The 20220301 wikipedia dump has been deprecated, so there is now no working wikipedia dump on huggingface
https://huggingface.co/datasets/wikipedia
https://dumps.wikimedia.org/enwiki/
| closed | https://github.com/huggingface/datasets/issues/4857 | 2022-08-16T13:55:33 | 2022-08-17T13:35:08 | 2022-08-17T13:35:08 | {
"login": "aninrusimha",
"id": 30733039,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,339,779,957 | 4,856 | file missing when load_dataset with openwebtext on windows | ## Describe the bug
0015896-b1054262f7da52a0518521e29c8e352c.txt is missing when I run run_mlm.py with openwebtext. I checked the cache_path and cannot find 0015896-b1054262f7da52a0518521e29c8e352c.txt, but I can find this file in the 17ecf461bfccd469a1fbc264ccb03731f8606eea7b3e2e8b86e13d18040bf5b3/urlsf_subset00-16_da... | closed | https://github.com/huggingface/datasets/issues/4856 | 2022-08-16T04:04:22 | 2023-01-04T03:39:12 | 2023-01-04T03:39:12 | {
"login": "xi-loong",
"id": 10361976,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,339,699,975 | 4,855 | Dataset Viewer issue for super_glue | ### Link
https://huggingface.co/datasets/super_glue
### Description
can't view super_glue dataset on the web page
### Owner
_No response_ | closed | https://github.com/huggingface/datasets/issues/4855 | 2022-08-16T01:34:56 | 2022-08-22T10:08:01 | 2022-08-22T10:07:45 | {
"login": "wzsxxa",
"id": 54366859,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,339,456,490 | 4,853 | Fix bug and checksums in exams dataset | Fix #4852. | closed | https://github.com/huggingface/datasets/pull/4853 | 2022-08-15T20:17:57 | 2022-08-16T06:43:57 | 2022-08-16T06:29:06 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,339,450,991 | 4,852 | Bug in multilingual_with_para config of exams dataset and checksums error | ## Describe the bug
There is a bug in the "multilingual_with_para" config of the exams dataset:
```python
ds = load_dataset("./datasets/exams", split="train")
```
raises:
```
KeyError: 'choices'
```
Moreover, there is a NonMatchingChecksumError:
```
NonMatchingChecksumError: Checksums didn't match for dataset so... | closed | https://github.com/huggingface/datasets/issues/4852 | 2022-08-15T20:14:52 | 2022-09-16T09:50:55 | 2022-08-16T06:29:07 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,339,085,917 | 4,851 | Fix license tag and Source Data section in billsum dataset card | Fixed the data source and license fields | closed | https://github.com/huggingface/datasets/pull/4851 | 2022-08-15T14:37:00 | 2022-08-22T13:56:24 | 2022-08-22T13:40:59 | {
"login": "kashif",
"id": 8100,
"type": "User"
} | [] | true | [] |
1,338,702,306 | 4,850 | Fix test of _get_extraction_protocol for TAR files | While working in another PR, I discovered an xpass test (a test that is supposed to xfail but nevertheless passes) when testing `_get_extraction_protocol`: https://github.com/huggingface/datasets/runs/7818845285?check_suite_focus=true
```
XPASS tests/test_streaming_download_manager.py::test_streaming_dl_manager_get_e... | closed | https://github.com/huggingface/datasets/pull/4850 | 2022-08-15T08:37:58 | 2022-08-15T09:42:56 | 2022-08-15T09:28:46 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,338,273,900 | 4,849 | 1.18.x | null | closed | https://github.com/huggingface/datasets/pull/4849 | 2022-08-14T15:09:19 | 2022-08-14T15:10:02 | 2022-08-14T15:10:02 | {
"login": "Mr-Robot-001",
"id": 49282718,
"type": "User"
} | [] | true | [] |
1,338,271,833 | 4,848 | a | null | closed | https://github.com/huggingface/datasets/pull/4848 | 2022-08-14T15:01:16 | 2022-08-14T15:09:59 | 2022-08-14T15:09:59 | {
"login": "Mr-Robot-001",
"id": 49282718,
"type": "User"
} | [] | true | [] |
1,338,270,636 | 4,847 | Test win ci | aa | closed | https://github.com/huggingface/datasets/pull/4847 | 2022-08-14T14:57:00 | 2023-09-24T10:04:13 | 2022-08-14T14:57:45 | {
"login": "Mr-Robot-001",
"id": 49282718,
"type": "User"
} | [] | true | [] |
1,337,979,897 | 4,846 | Update documentation card of miam dataset | Hi !
Paper has been published at EMNLP. | closed | https://github.com/huggingface/datasets/pull/4846 | 2022-08-13T14:38:55 | 2022-08-17T00:50:04 | 2022-08-14T10:26:08 | {
"login": "PierreColombo",
"id": 22492839,
"type": "User"
} | [] | true | [] |
1,337,928,283 | 4,845 | Mark CI tests as xfail if Hub HTTP error | In order to make testing more robust (and avoid merges to master with red tests), we could mark tests as xfailed (instead of failed) when the Hub raises some temporary HTTP errors.
This PR:
- marks tests as xfailed only if the Hub raises a 500 error for:
- test_upstream_hub
- makes pytest report the xfailed/xpa... | closed | https://github.com/huggingface/datasets/pull/4845 | 2022-08-13T10:45:11 | 2022-08-23T04:57:12 | 2022-08-23T04:42:26 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,337,878,249 | 4,844 | Add 'val' to VALIDATION_KEYWORDS. | This PR fixes #4839 by adding the word `"val"` to the `VALIDATION_KEYWORDS` so that the `load_dataset()` method with `imagefolder` (and probably, some other directives as well) reads folders named `"val"` as well.
I think the supported keywords have to be mentioned in the documentation as well, but I couldn't think ... | closed | https://github.com/huggingface/datasets/pull/4844 | 2022-08-13T06:49:41 | 2022-08-30T10:17:35 | 2022-08-30T10:14:54 | {
"login": "akt42",
"id": 98386959,
"type": "User"
} | [] | true | [] |
1,337,668,699 | 4,843 | Fix typo in streaming docs | null | closed | https://github.com/huggingface/datasets/pull/4843 | 2022-08-12T20:18:21 | 2022-08-14T11:43:30 | 2022-08-14T11:02:09 | {
"login": "flozi00",
"id": 47894090,
"type": "User"
} | [] | true | [] |
1,337,527,764 | 4,842 | Update stackexchange license | The correct license of the stackexchange subset of the Pile is `cc-by-sa-4.0`, as can for example be seen here: https://stackoverflow.com/help/licensing | closed | https://github.com/huggingface/datasets/pull/4842 | 2022-08-12T17:39:06 | 2022-08-14T10:43:18 | 2022-08-14T10:28:49 | {
"login": "cakiki",
"id": 3664563,
"type": "User"
} | [] | true | [] |
1,337,401,243 | 4,841 | Update ted_talks_iwslt license to include ND | Excerpt from the paper's abstract: "Aside from its cultural and social relevance, this content, which is published under the Creative Commons BY-NC-ND license, also represents a precious language resource for the machine translation research community" | closed | https://github.com/huggingface/datasets/pull/4841 | 2022-08-12T16:14:52 | 2022-08-14T11:15:22 | 2022-08-14T11:00:22 | {
"login": "cakiki",
"id": 3664563,
"type": "User"
} | [] | true | [] |
1,337,342,672 | 4,840 | Dataset Viewer issue for darragh/demo_data_raw3 | ### Link
https://huggingface.co/datasets/darragh/demo_data_raw3
### Description
```
Exception: ValueError
Message: Arrow type extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>> does not have a datasets dtype equivalent.
```
reported by @NielsRogge
### Owner
No | open | https://github.com/huggingface/datasets/issues/4840 | 2022-08-12T15:22:58 | 2022-09-08T07:55:44 | null | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [] | false | [] |
1,337,206,377 | 4,839 | ImageFolder dataset builder does not read the validation data set if it is named as "val" | **Is your feature request related to a problem? Please describe.**
Currently, the `'imagefolder'` data set builder in [`load_dataset()`](https://github.com/huggingface/datasets/blob/2.4.0/src/datasets/load.py#L1541) only [supports](https://github.com/huggingface/datasets/blob/6c609a322da994de149b2c938f19439bca9940... | closed | https://github.com/huggingface/datasets/issues/4839 | 2022-08-12T13:26:00 | 2022-08-30T10:14:55 | 2022-08-30T10:14:55 | {
"login": "akt42",
"id": 98386959,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,337,194,918 | 4,838 | Fix documentation card of adv_glue dataset | Fix documentation card of adv_glue dataset. | closed | https://github.com/huggingface/datasets/pull/4838 | 2022-08-12T13:15:26 | 2022-08-15T10:17:14 | 2022-08-15T10:02:11 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,337,079,723 | 4,837 | Add support for CSV metadata files to ImageFolder | Fix #4814 | closed | https://github.com/huggingface/datasets/pull/4837 | 2022-08-12T11:19:18 | 2022-08-31T12:01:27 | 2022-08-31T11:59:07 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,337,067,632 | 4,836 | Is it possible to pass multiple links to a split in load script? | **Is your feature request related to a problem? Please describe.**
I wanted to use a python loading script in hugging face datasets that use different sources of text (it's somehow a compilation of multiple datasets + my own dataset) based on how `load_dataset` [works](https://huggingface.co/docs/datasets/loading) I a... | open | https://github.com/huggingface/datasets/issues/4836 | 2022-08-12T11:06:11 | 2022-08-12T11:06:11 | null | {
"login": "sadrasabouri",
"id": 43045767,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,336,994,835 | 4,835 | Fix documentation card of ethos dataset | Fix documentation card of ethos dataset. | closed | https://github.com/huggingface/datasets/pull/4835 | 2022-08-12T09:51:06 | 2022-08-12T13:13:55 | 2022-08-12T12:59:39 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,336,993,511 | 4,834 | Fix documentation card of recipe_nlg dataset | Fix documentation card of recipe_nlg dataset | closed | https://github.com/huggingface/datasets/pull/4834 | 2022-08-12T09:49:39 | 2022-08-12T11:28:18 | 2022-08-12T11:13:40 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,336,946,965 | 4,833 | Fix missing tags in dataset cards | Fix missing tags in dataset cards:
- boolq
- break_data
- definite_pronoun_resolution
- emo
- kor_nli
- pg19
- quartz
- sciq
- squad_es
- wmt14
- wmt15
- wmt16
- wmt17
- wmt18
- wmt19
- wmt_t2t
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task... | closed | https://github.com/huggingface/datasets/pull/4833 | 2022-08-12T09:04:52 | 2022-09-22T14:41:23 | 2022-08-12T09:45:55 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,336,727,389 | 4,832 | Fix tags in dataset cards | Fix wrong tags in dataset cards. | closed | https://github.com/huggingface/datasets/pull/4832 | 2022-08-12T04:11:23 | 2022-08-12T04:41:55 | 2022-08-12T04:27:24 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,336,199,643 | 4,831 | Add oversampling strategies to interleave datasets | Hello everyone,
Here is a proposal to improve `interleave_datasets` function.
Following Issue #3064, and @lhoestq [comment](https://github.com/huggingface/datasets/issues/3064#issuecomment-1022333385), I propose here a code that performs oversampling when interleaving a `Dataset` list.
I have myself encountered t... | closed | https://github.com/huggingface/datasets/pull/4831 | 2022-08-11T16:24:51 | 2023-07-11T15:57:48 | 2022-08-24T16:46:07 | {
"login": "ylacombe",
"id": 52246514,
"type": "User"
} | [] | true | [] |
1,336,177,937 | 4,830 | Fix task tags in dataset cards | null | closed | https://github.com/huggingface/datasets/pull/4830 | 2022-08-11T16:06:06 | 2022-08-11T16:37:27 | 2022-08-11T16:23:00 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,336,068,068 | 4,829 | Misalignment between card tag validation and docs | ## Describe the bug
As pointed out in other issue: https://github.com/huggingface/datasets/pull/4827#discussion_r943536284
the validation of the dataset card tags is not aligned with its documentation: e.g.
- implementation: `license: List[str]`
- docs: `license: Union[str, List[str]]`
They should be aligned.
... | open | https://github.com/huggingface/datasets/issues/4829 | 2022-08-11T14:44:45 | 2023-07-21T15:38:02 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,336,040,168 | 4,828 | Support PIL Image objects in `add_item`/`add_column` | Fix #4796
PS: We should also improve the type inference in `OptimizedTypeSequence` to make it possible to also infer the complex types (only `Image` currently) in nested arrays (e.g. `[[pil_image], [pil_image, pil_image]]` or `[{"img": pil_image}`]), but I plan to address this in a separate PR. | open | https://github.com/huggingface/datasets/pull/4828 | 2022-08-11T14:25:45 | 2023-09-24T10:15:33 | null | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,335,994,312 | 4,827 | Add license metadata to pg19 | As reported over email by Roy Rijkers | closed | https://github.com/huggingface/datasets/pull/4827 | 2022-08-11T13:52:20 | 2022-08-11T15:01:03 | 2022-08-11T14:46:38 | {
"login": "julien-c",
"id": 326577,
"type": "User"
} | [] | true | [] |
1,335,987,583 | 4,826 | Fix language tags in dataset cards | Fix language tags in all dataset cards, so that they are validated (aligned with our `languages.json` resource). | closed | https://github.com/huggingface/datasets/pull/4826 | 2022-08-11T13:47:14 | 2022-08-11T14:17:48 | 2022-08-11T14:03:12 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,335,856,882 | 4,825 | [Windows] Fix Access Denied when using os.rename() | In this PR, we are including an additional step when `os.rename()` raises a PermissionError.
Basically, we will use `shutil.move()` on the temp files.
Fix #2937 | closed | https://github.com/huggingface/datasets/pull/4825 | 2022-08-11T11:57:15 | 2022-08-24T13:09:07 | 2022-08-24T13:09:07 | {
"login": "DougTrajano",
"id": 8703022,
"type": "User"
} | [] | true | [] |
1,335,826,639 | 4,824 | Fix titles in dataset cards | Fix all the titles in the dataset cards, so that they conform to the required format. | closed | https://github.com/huggingface/datasets/pull/4824 | 2022-08-11T11:27:48 | 2022-08-11T13:46:11 | 2022-08-11T12:56:49 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,335,687,033 | 4,823 | Update data URL in mkqa dataset | Update data URL in mkqa dataset.
Fix #4817. | closed | https://github.com/huggingface/datasets/pull/4823 | 2022-08-11T09:16:13 | 2022-08-11T09:51:50 | 2022-08-11T09:37:52 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,335,664,588 | 4,821 | Fix train_test_split docs | I saw that `stratify` is added to the `train_test_split` method as per #4322, hence the docs can be updated. | closed | https://github.com/huggingface/datasets/pull/4821 | 2022-08-11T08:55:45 | 2022-08-11T09:59:29 | 2022-08-11T09:45:40 | {
"login": "NielsRogge",
"id": 48327001,
"type": "User"
} | [] | true | [] |
1,335,117,132 | 4,820 | Terminating: fork() called from a process already using GNU OpenMP, this is unsafe. | Hi, when i try to run prepare_dataset function in [fine tuning ASR tutorial 4](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb) , i got this error.
I got this error
Terminating: fork() called from a process already using GNU OpenMP, this is un... | closed | https://github.com/huggingface/datasets/issues/4820 | 2022-08-10T19:42:33 | 2022-08-10T19:53:10 | 2022-08-10T19:53:10 | {
"login": "talhaanwarch",
"id": 37379131,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,335,064,449 | 4,819 | Add missing language tags to resources | Add missing language tags to resources, required by existing datasets on GitHub. | closed | https://github.com/huggingface/datasets/pull/4819 | 2022-08-10T19:06:42 | 2022-08-10T19:45:49 | 2022-08-10T19:32:15 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,334,941,810 | 4,818 | Add add cc-by-sa-2.5 license tag | - [ ] add it to moon-landing
- [ ] add it to hub-docs | closed | https://github.com/huggingface/datasets/pull/4818 | 2022-08-10T17:18:39 | 2022-10-04T13:47:24 | 2022-10-04T13:47:24 | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [] | true | [] |
1,334,572,163 | 4,817 | Outdated Link for mkqa Dataset | ## Describe the bug
The URL used to download the mkqa dataset is outdated. It seems the URL to download the dataset is currently https://github.com/apple/ml-mkqa/blob/main/dataset/mkqa.jsonl.gz instead of https://github.com/apple/ml-mkqa/raw/master/dataset/mkqa.jsonl.gz (master branch has been renamed to main).
## ... | closed | https://github.com/huggingface/datasets/issues/4817 | 2022-08-10T12:45:45 | 2022-08-11T09:37:52 | 2022-08-11T09:37:52 | {
"login": "liaeh",
"id": 52380283,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,334,099,454 | 4,816 | Update version of opus_paracrawl dataset | This PR updates OPUS ParaCrawl from 7.1 to 9 version.
Fix #4815. | closed | https://github.com/huggingface/datasets/pull/4816 | 2022-08-10T05:39:44 | 2022-08-12T14:32:29 | 2022-08-12T14:17:56 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,334,078,303 | 4,815 | Outdated loading script for OPUS ParaCrawl dataset | ## Describe the bug
Our loading script for OPUS ParaCrawl loads its 7.1 version. Current existing version is 9.
| closed | https://github.com/huggingface/datasets/issues/4815 | 2022-08-10T05:12:34 | 2022-08-12T14:17:57 | 2022-08-12T14:17:57 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
1,333,356,230 | 4,814 | Support CSV as metadata file format in AudioFolder/ImageFolder | Requested here: https://discuss.huggingface.co/t/how-to-structure-an-image-dataset-repo-using-the-image-folder-approach/21004. CSV is also used in AutoTrain for specifying metadata in image datasets. | closed | https://github.com/huggingface/datasets/issues/4814 | 2022-08-09T14:36:49 | 2022-08-31T11:59:08 | 2022-08-31T11:59:08 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,333,287,756 | 4,813 | Fix loading example in opus dataset cards | This PR:
- fixes the examples to load the datasets, with the corrected dataset name, in their dataset cards for:
- opus_dgt
- opus_paracrawl
- opus_wikipedia
- fixes their dataset cards with the missing required information: title, data instances/fields/splits
- enumerates the supported languages
- adds a ... | closed | https://github.com/huggingface/datasets/pull/4813 | 2022-08-09T13:47:38 | 2022-08-09T17:52:15 | 2022-08-09T17:38:18 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,333,051,730 | 4,812 | Fix bug in function validate_type for Python >= 3.9 | Fix `validate_type` function, so that it uses `get_origin` instead. This makes the function forward compatible.
This fixes #4811 because:
```python
In [4]: typing.Optional[str]
Out[4]: typing.Optional[str]
In [5]: get_origin(typing.Optional[str])
Out[5]: typing.Union
```
Fix #4811. | closed | https://github.com/huggingface/datasets/pull/4812 | 2022-08-09T10:32:42 | 2022-08-12T13:41:23 | 2022-08-12T13:27:04 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,333,043,421 | 4,811 | Bug in function validate_type for Python >= 3.9 | ## Describe the bug
The function `validate_type` assumes that the type `typing.Optional[str]` is automatically transformed to `typing.Union[str, NoneType]`.
```python
In [4]: typing.Optional[str]
Out[4]: typing.Union[str, NoneType]
```
However, this is not the case for Python 3.9:
```python
In [3]: typing.Opt... | closed | https://github.com/huggingface/datasets/issues/4811 | 2022-08-09T10:25:21 | 2022-08-12T13:27:05 | 2022-08-12T13:27:05 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,333,038,702 | 4,810 | Add description to hellaswag dataset | null | closed | https://github.com/huggingface/datasets/pull/4810 | 2022-08-09T10:21:14 | 2022-09-23T11:35:38 | 2022-09-23T11:33:44 | {
"login": "julien-c",
"id": 326577,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,332,842,747 | 4,809 | Complete the mlqa dataset card | I fixed the issue #4808
Details of PR:
- Added languages included in the dataset.
- Added task id and task category.
- Updated the citation information.
Fix #4808. | closed | https://github.com/huggingface/datasets/pull/4809 | 2022-08-09T07:38:06 | 2022-08-09T16:26:21 | 2022-08-09T13:26:43 | {
"login": "el2e10",
"id": 7940237,
"type": "User"
} | [] | true | [] |
1,332,840,217 | 4,808 | Add more information to the dataset card of mlqa dataset | null | closed | https://github.com/huggingface/datasets/issues/4808 | 2022-08-09T07:35:42 | 2022-08-09T13:33:23 | 2022-08-09T13:33:23 | {
"login": "el2e10",
"id": 7940237,
"type": "User"
} | [] | false | [] |
1,332,784,110 | 4,807 | document fix in opus_gnome dataset | I fixed issue #4805.
I changed `"gnome"` to `"opus_gnome"` in[ README.md](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary). | closed | https://github.com/huggingface/datasets/pull/4807 | 2022-08-09T06:38:13 | 2022-08-09T07:28:03 | 2022-08-09T07:28:03 | {
"login": "gojiteji",
"id": 38291975,
"type": "User"
} | [] | true | [] |
1,332,664,038 | 4,806 | Fix opus_gnome dataset card | I fixed issue #4805.
I changed `"gnome"` to `"opus_gnome"` in[ README.md](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary).
Fix #4805 | closed | https://github.com/huggingface/datasets/pull/4806 | 2022-08-09T03:40:15 | 2022-08-09T12:06:46 | 2022-08-09T11:52:04 | {
"login": "gojiteji",
"id": 38291975,
"type": "User"
} | [] | true | [] |
1,332,653,531 | 4,805 | Wrong example in opus_gnome dataset card | ## Describe the bug
I found that [the example on the opus_gnome dataset](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary) doesn't work.
## Steps to reproduce the bug
```python
load_dataset("gnome", lang1="it", lang2="pl")
```
`"gnome"` should be `"opus_gnome"`
## Expected r... | closed | https://github.com/huggingface/datasets/issues/4805 | 2022-08-09T03:21:27 | 2022-08-09T11:52:05 | 2022-08-09T11:52:05 | {
"login": "gojiteji",
"id": 38291975,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,332,630,358 | 4,804 | streaming dataset with concatenating splits raises an error | ## Describe the bug
streaming dataset with concatenating splits raises an error
## Steps to reproduce the bug
```python
from datasets import load_dataset
# no error
repo = "nateraw/ade20k-tiny"
dataset = load_dataset(repo, split="train+validation")
```
```python
from datasets import load_dataset
# er... | open | https://github.com/huggingface/datasets/issues/4804 | 2022-08-09T02:41:56 | 2023-11-25T14:52:09 | null | {
"login": "Bing-su",
"id": 37621276,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,332,079,562 | 4,803 | Support `pipeline` argument in inspect.py functions | **Is your feature request related to a problem? Please describe.**
The `wikipedia` dataset requires a `pipeline` argument to build the list of splits:
https://huggingface.co/datasets/wikipedia/blob/main/wikipedia.py#L937
But this is currently not supported in `get_dataset_config_info`:
https://github.com/hu... | open | https://github.com/huggingface/datasets/issues/4803 | 2022-08-08T16:01:24 | 2023-09-25T12:21:35 | null | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,331,676,691 | 4,802 | `with_format` behavior is inconsistent on different datasets | ## Describe the bug
I found a case where `with_format` does not transform the dataset to the requested format.
## Steps to reproduce the bug
Run:
```python
from transformers import AutoTokenizer, AutoFeatureExtractor
from datasets import load_dataset
raw = load_dataset("glue", "sst2", split="train")
raw =... | open | https://github.com/huggingface/datasets/issues/4802 | 2022-08-08T10:41:34 | 2022-08-09T16:49:09 | null | {
"login": "fxmarty",
"id": 9808326,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |