| html_url (string) | title (string) | comments (string) | body (string) | comment_length (int64) | text (string) | embeddings (list) |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/3473 | Iterating over a vision dataset doesn't decode the images | Thanks @NielsRogge for the context.
So IMO everything is working as expected.
I'm closing this issue. Feel free to reopen it if further changes to the specs need to be addressed. | ## Describe the bug
If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned.
## Steps to reproduce the bug
```python
from datasets import load_dataset
import PIL
mnist = load_dataset("mnist", split="train")
first_image = mnist[0]["image"... | 31 | Iterating over a vision dataset doesn't decode the images
## Describe the bug
If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned.
## Steps to reproduce the bug
```python
from datasets import load_dataset
import PIL
mnist = load_datas... | [
-0.0911608562,
-0.3595888913,
-0.050832998,
0.408655256,
0.1412411779,
-0.0136508355,
0.12589553,
0.0593585148,
-0.006152519,
0.1949329674,
0.1792143136,
0.5321470499,
-0.0439153425,
-0.1239933521,
-0.0646307319,
-0.156975314,
0.0175925158,
0.2997390926,
0.0252632573,
-0.106763... |
https://github.com/huggingface/datasets/issues/3473 | Iterating over a vision dataset doesn't decode the images | Thanks for the details :)
I still think that it's unexpected to get different results when doing
```python
for i in range(len(dataset)):
sample = dataset[i]
```
and
```python
for sample in dataset:
pass
```
even though I understand that if you don't need to decode the data, then decoding image or a... | ## Describe the bug
If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned.
## Steps to reproduce the bug
```python
from datasets import load_dataset
import PIL
mnist = load_dataset("mnist", split="train")
first_image = mnist[0]["image"... | 83 | Iterating over a vision dataset doesn't decode the images
## Describe the bug
If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned.
## Steps to reproduce the bug
```python
from datasets import load_dataset
import PIL
mnist = load_datas... | [
-0.0234937239,
-0.3361244798,
-0.1152203232,
0.4020479321,
0.1095666438,
-0.0673545823,
0.1837366521,
0.0540369786,
-0.0070808269,
0.2113573551,
0.1559633166,
0.5565046668,
-0.0202431064,
-0.1345619261,
-0.0524968803,
-0.1830283254,
0.0578490198,
0.2923582494,
-0.0028657613,
-0... |
https://github.com/huggingface/datasets/issues/3465 | Unable to load 'cnn_dailymail' dataset | Hi @talha1503, thanks for reporting.
It seems there is an issue with one of the data files hosted at Google Drive:
```
Google Drive - Quota exceeded
Sorry, you can't view or download this file at this time.
Too many users have viewed or downloaded this file recently. Please try accessing the file again later... | ## Describe the bug
I wanted to load cnn_dailymail dataset from huggingface datasets on Google Colab, but I am getting an error while loading it.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0', ignore_verifications = True)
```
## Expe... | 141 | Unable to load 'cnn_dailymail' dataset
## Describe the bug
I wanted to load cnn_dailymail dataset from huggingface datasets on Google Colab, but I am getting an error while loading it.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0', ign... | [
0.0230112392,
-0.2250259519,
-0.0139150498,
0.5434482098,
0.193967402,
0.155185923,
0.3652308881,
-0.1235437617,
0.1347780377,
0.3054012954,
-0.2102699578,
-0.1074482426,
-0.3181639016,
0.3879696429,
0.0041485336,
-0.0061607179,
-0.0407173187,
-0.131753847,
0.169201389,
0.06247... |
https://github.com/huggingface/datasets/issues/3464 | struct.error: 'i' format requires -2147483648 <= number <= 2147483647 | Hi ! Can you try setting `datasets.config.MAX_TABLE_NBYTES_FOR_PICKLING` to a smaller value than `4 << 30` (4GiB), for example `500 << 20` (500MiB) ? It should reduce the maximum size of the arrow table being pickled during multiprocessing.
If it fixes the issue, we can consider lowering the default value for everyo... | ## Describe the bug
A clear and concise description of what the bug is.
using the latest datasets release (datasets-1.16.1-py3-none-any.whl)
I process my own multilingual dataset with the following code; the dataset has 306000 rows in total and the max_length of each sentence is 256:
The motion file is the AMC file (Acclaim Motion Capture data).
Some questions :
1. How do we go about representing these features using datasets.Features and generating examples?
2. The dataset download link ... | ## Adding a Dataset
- **Name:** CMU Graphics Lab Motion Capture database
- **Description:** The database contains free motions which you can download and use.
- **Data:** http://mocap.cs.cmu.edu/
- **Motivation:** Nice motion capture dataset
Instructions to add a new dataset can be found [here](https://github.c... | 149 | Add CMU Graphics Lab Motion Capture dataset
## Adding a Dataset
- **Name:** CMU Graphics Lab Motion Capture database
- **Description:** The database contains free motions which you can download and use.
- **Data:** http://mocap.cs.cmu.edu/
- **Motivation:** Nice motion capture dataset
Instructions to add a ne... | [
-0.1400220692,
-0.0276257154,
0.0362781696,
0.1367378086,
0.2119433433,
0.2255604565,
0.1493569613,
0.0857909173,
-0.4835159481,
-0.1561218351,
-0.0873227566,
0.0038940383,
-0.2831748128,
0.2164120823,
0.0632067174,
-0.3122251928,
-0.0616576299,
-0.1458100677,
-0.0537849516,
-0... |
https://github.com/huggingface/datasets/issues/3457 | Add CMU Graphics Lab Motion Capture dataset | Hi @dnaveenr! Thanks for working on this!
1. We can use the `Sequence(Value("string"))` feature type for the subject's AMC files and `Value("string")` for the subject's ASF file (`Value("string")` represents the file paths) + the types for categories/subcategories and descriptions.
2. We can use this URL to downloa... | ## Adding a Dataset
- **Name:** CMU Graphics Lab Motion Capture database
- **Description:** The database contains free motions which you can download and use.
- **Data:** http://mocap.cs.cmu.edu/
- **Motivation:** Nice motion capture dataset
Instructions to add a new dataset can be found [here](https://github.c... | 181 | Add CMU Graphics Lab Motion Capture dataset
## Adding a Dataset
- **Name:** CMU Graphics Lab Motion Capture database
- **Description:** The database contains free motions which you can download and use.
- **Data:** http://mocap.cs.cmu.edu/
- **Motivation:** Nice motion capture dataset
Instructions to add a ne... | [
-0.4960854053,
-0.1240224764,
0.0027483257,
-0.1107788831,
0.154397741,
0.113965854,
0.1740245968,
0.3626153171,
-0.2517265975,
0.013264128,
-0.0316175297,
0.201133877,
-0.2203564793,
0.2996661961,
0.1233676746,
-0.2626331747,
-0.1171883643,
0.0168646649,
0.0039789062,
0.050758... |
https://github.com/huggingface/datasets/issues/3457 | Add CMU Graphics Lab Motion Capture dataset | Hi @mariosasko ,
1. Thanks for this, so we can add the file paths.
2. Yes, I had already emailed the authors a couple of days back, asking for the metadata details (i.e. category, sub-category and motion description). They are yet to respond; I will wait for a couple of days and try to follow up with... | ## Adding a Dataset
- **Name:** CMU Graphics Lab Motion Capture database
- **Description:** The database contains free motions which you can download and use.
- **Data:** http://mocap.cs.cmu.edu/
- **Motivation:** Nice motion capture dataset
Instructions to add a new dataset can be found [here](https://github.c... | 107 | Add CMU Graphics Lab Motion Capture dataset
## Adding a Dataset
- **Name:** CMU Graphics Lab Motion Capture database
- **Description:** The database contains free motions which you can download and use.
- **Data:** http://mocap.cs.cmu.edu/
- **Motivation:** Nice motion capture dataset
Instructions to add a ne... | [
-0.4981459677,
-0.0221617501,
-0.0469948798,
0.0064928937,
0.1337008923,
0.0644237995,
0.1049067527,
0.4042607248,
-0.2334994823,
-0.0190540943,
0.0281625725,
0.1850056797,
-0.1908474416,
0.3392908275,
0.1693346649,
-0.3363628685,
-0.0530750602,
0.1396868527,
-0.0843887553,
0.0... |
https://github.com/huggingface/datasets/issues/3455 | Easier information editing | Hi ! I guess you are talking about the dataset cards that are in this repository on github ?
I think GitHub allows you to submit a PR even for a single line through the `Edit file` button on the page of the dataset card.
Maybe let's mention this in `CONTRIBUTING.md`? | **Is your feature request related to a problem? Please describe.**
It requires a lot of effort to improve a datasheet.
**Describe the solution you'd like**
UI or at least a link to the place where the code that needs to be edited is (and an easy way to edit this code directly from the site, without cloning, branc... | 50 | Easier information editing
**Is your feature request related to a problem? Please describe.**
It requires a lot of effort to improve a datasheet.
**Describe the solution you'd like**
UI or at least a link to the place where the code that needs to be edited is (and an easy way to edit this code directly from the... | [
-0.0376411676,
0.1618793309,
-0.1659287065,
-0.3433541059,
0.0584444888,
0.0013939644,
0.0719456673,
0.2496085018,
-0.3263020217,
0.4076212943,
0.2005190998,
0.3080872297,
0.3768406808,
0.224368006,
0.0078362301,
-0.0981670469,
-0.0289735068,
0.092659533,
0.1429606527,
0.142679... |
https://github.com/huggingface/datasets/issues/3452 | why the stratify option is omitted from test_train_split function? | Hi ! It's simply not added yet :)
If someone wants to contribute to add the `stratify` parameter I'd be happy to give some pointers.
In the meantime, I guess you can use `sklearn` or other tools to do a stratified train/test split over the **indices** of your dataset and then do
```
train_dataset = dataset.sele... | why the stratify option is omitted from test_train_split function?
is there any other way to implement the stratify option while splitting the dataset? It is an important point to consider while splitting the dataset. | 60 | why the stratify option is omitted from test_train_split function?
why the stratify option is omitted from test_train_split function?
is there any other way to implement the stratify option while splitting the dataset? It is an important point to consider while splitting the dataset.
Hi ! It's simply not added... | [
-0.5293185115,
-0.0548487902,
-0.1174931154,
-0.1046918258,
0.2126727104,
0.0128880581,
0.336581856,
0.254406631,
-0.0457450524,
0.362811774,
0.1107298806,
0.3568530977,
-0.0795329884,
0.3337858617,
-0.0435331874,
-0.3662008941,
-0.1260603964,
0.0729326308,
-0.146767363,
0.1772... |
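The comment on issue 3452 above suggests doing the stratified split over the dataset's indices with an external tool and then calling `.select()`. A minimal sketch of that workaround, assuming scikit-learn is available and that the dataset has a `"label"` column (the dataset name and split size below are placeholders):

```python
from datasets import load_dataset
from sklearn.model_selection import train_test_split

dataset = load_dataset("imdb", split="train")  # placeholder dataset with a "label" column

# Stratify over the *indices* of the dataset, as suggested in the comment above.
indices = list(range(len(dataset)))
train_indices, test_indices = train_test_split(
    indices,
    test_size=0.2,               # placeholder split size
    stratify=dataset["label"],   # assumed label column
    random_state=42,
)

train_dataset = dataset.select(train_indices)
test_dataset = dataset.select(test_indices)
```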
https://github.com/huggingface/datasets/issues/3452 | why the stratify option is omitted from test_train_split function? | Hi @lhoestq I would like to add `stratify` parameter, can you give me some pointers for adding the same ? | why the stratify option is omitted from test_train_split function?
is there any other way to implement the stratify option while splitting the dataset? It is an important point to consider while splitting the dataset. | 20 | why the stratify option is omitted from test_train_split function?
why the stratify option is omitted from test_train_split function?
is there any other way to implement the stratify option while splitting the dataset? It is an important point to consider while splitting the dataset.
Hi @lhoestq I would like t... | [
-0.5134284496,
-0.1317567527,
-0.115856044,
-0.1088105962,
0.2143979073,
-0.0632685944,
0.3055258095,
0.1485875547,
-0.0885695368,
0.3909530342,
0.078435339,
0.3760671616,
-0.0534927361,
0.2609143257,
-0.0250584893,
-0.4552244842,
-0.0951922089,
-0.0083258748,
-0.1576342285,
0.... |
https://github.com/huggingface/datasets/issues/3452 | why the stratify option is omitted from test_train_split function? | Hi ! Sure :)
The `train_test_split` method is defined here:
https://github.com/huggingface/datasets/blob/dc62232fa1b3bcfe2fbddcb721f2d141f8908943/src/datasets/arrow_dataset.py#L3253-L3253
and inside `train_test_split ` we need to create the right `train_indices` and `test_indices` that are passed here to `.se... | why the stratify option is omitted from test_train_split function?
is there any other way to implement the stratify option while splitting the dataset? It is an important point to consider while splitting the dataset. | 114 | why the stratify option is omitted from test_train_split function?
why the stratify option is omitted from test_train_split function?
is there any other way to implement the stratify option while splitting the dataset? It is an important point to consider while splitting the dataset.
Hi ! Sure :)
The `train... | [
-0.3376852274,
-0.1832867414,
-0.1047078744,
0.1171168387,
0.2018857002,
-0.0858201459,
0.3129179776,
0.1023204327,
-0.0346965976,
0.3047149181,
-0.0190207548,
0.5014281869,
-0.0940354541,
0.2779452801,
-0.0283282306,
-0.5074386597,
-0.0960934237,
0.0319260135,
-0.2141888887,
0... |
https://github.com/huggingface/datasets/issues/3450 | Unexpected behavior doing Split + Filter | Hi ! This is an issue with `datasets` 1.12. Sorry for the inconvenience. Can you update to `>=1.13` ?
see https://github.com/huggingface/datasets/issues/3190
Maybe we should also backport the bug fix to `1.12` (in a new version `1.12.2`) | ## Describe the bug
I observed unexpected behavior when applying 'train_test_split' followed by 'filter' on dataset. Elements of the training dataset eventually end up in the test dataset (after applying the 'filter')
## Steps to reproduce the bug
```
from datasets import Dataset
import pandas as pd
dic = {'x'... | 36 | Unexpected behavior doing Split + Filter
## Describe the bug
I observed unexpected behavior when applying 'train_test_split' followed by 'filter' on dataset. Elements of the training dataset eventually end up in the test dataset (after applying the 'filter')
## Steps to reproduce the bug
```
from datasets impo... | [
-0.1384005994,
-0.1338323355,
-0.1100934669,
0.0964838117,
0.1088930294,
0.0105679287,
0.1856798977,
0.2552303672,
-0.1076799408,
-0.0520347208,
-0.1723319888,
0.4741823375,
0.0075506228,
0.403278321,
-0.0472604558,
-0.1393702477,
0.027355304,
-0.0092402995,
0.0107643297,
-0.12... |
https://github.com/huggingface/datasets/issues/3449 | Add `__add__()`, `__iadd__()` and similar to `Dataset` class | I was going through the codebase, and I believe the implementation of __add__() and __iadd__() will be similar to concatenate_datasets() after the elimination of code for arguments other than the list of datasets (info, split, axis).
(Assuming elimination of axis means concatenating over axis 1.) | **Is your feature request related to a problem? Please describe.**
No.
**Describe the solution you'd like**
I would like to be able to concatenate datasets as follows:
```python
>>> dataset["train"] += dataset["validation"]
```
... instead of using `concatenate_datasets()`:
```python
>>> raw_datasets["trai... | 45 | Add `__add__()`, `__iadd__()` and similar to `Dataset` class
**Is your feature request related to a problem? Please describe.**
No.
**Describe the solution you'd like**
I would like to be able to concatenate datasets as follows:
```python
>>> dataset["train"] += dataset["validation"]
```
... instead of usi... | [
-0.2595470846,
0.0174407102,
-0.1195520759,
-0.0258937329,
0.297565043,
0.4441085458,
0.4135001302,
0.1891679317,
-0.2233495861,
0.1560035944,
-0.1052789316,
0.3455277681,
-0.1738570482,
0.2189385146,
-0.0191160589,
-0.2958265543,
0.0940745622,
0.1944159269,
-0.2048512399,
-0.0... |
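The comment on issue 3449 above notes that `__add__()`/`__iadd__()` would essentially reduce to `concatenate_datasets()` once the extra arguments (info, split, axis) are stripped away. A rough, hypothetical sketch of that idea, not the library's actual implementation:

```python
from datasets import Dataset, concatenate_datasets

ds_a = Dataset.from_dict({"text": ["a", "b"]})
ds_b = Dataset.from_dict({"text": ["c"]})


def add_datasets(left: Dataset, right: Dataset) -> Dataset:
    # A hypothetical __add__/__iadd__ would essentially delegate to this call,
    # concatenating the rows of `right` after the rows of `left`.
    return concatenate_datasets([left, right])


combined = add_datasets(ds_a, ds_b)
print(len(combined))  # 3
```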
https://github.com/huggingface/datasets/issues/3448 | JSONDecodeError with HuggingFace dataset viewer | Hi ! I think the issue comes from the dataset_infos.json file: it has the "flat" field twice.
Can you try deleting this file and regenerating it please ? | ## Dataset viewer issue for 'pubmed_neg'
**Link:** https://huggingface.co/datasets/IGESML/pubmed_neg
I am getting the error:
Status code: 400
Exception: JSONDecodeError
Message: Expecting property name enclosed in double quotes: line 61 column 2 (char 1202)
I have checked all files - I am not u... | 28 | JSONDecodeError with HuggingFace dataset viewer
## Dataset viewer issue for 'pubmed_neg'
**Link:** https://huggingface.co/datasets/IGESML/pubmed_neg
I am getting the error:
Status code: 400
Exception: JSONDecodeError
Message: Expecting property name enclosed in double quotes: line 61 column 2 (c... | [
0.2295418084,
-0.1201282293,
0.0490055047,
0.2303445041,
0.1695690006,
0.0577233173,
0.0852913558,
0.1623660028,
-0.0017490645,
0.0962924287,
0.1668467969,
0.2956420183,
-0.1245155856,
0.0679474473,
-0.1284822077,
-0.257044971,
0.1415823102,
0.2197294384,
0.1280412525,
-0.07734... |
https://github.com/huggingface/datasets/issues/3448 | JSONDecodeError with HuggingFace dataset viewer | Thanks! That fixed that, but now I am getting:
Server Error
Status code: 400
Exception: KeyError
Message: 'feature'
I checked the dataset_infos.json and pubmed_neg.py script; I don't use 'feature' anywhere as a key. Is the dataset viewer expecting that I do? | ## Dataset viewer issue for 'pubmed_neg'
**Link:** https://huggingface.co/datasets/IGESML/pubmed_neg
I am getting the error:
Status code: 400
Exception: JSONDecodeError
Message: Expecting property name enclosed in double quotes: line 61 column 2 (char 1202)
I have checked all files - I am not u... | 41 | JSONDecodeError with HuggingFace dataset viewer
## Dataset viewer issue for 'pubmed_neg'
**Link:** https://huggingface.co/datasets/IGESML/pubmed_neg
I am getting the error:
Status code: 400
Exception: JSONDecodeError
Message: Expecting property name enclosed in double quotes: line 61 column 2 (c... | [
0.3349670172,
-0.1692841202,
0.0579741821,
0.1783903539,
0.1159864739,
0.0022668086,
0.165877685,
0.0293862037,
-0.0354914218,
0.0884651765,
0.2415665239,
0.2807991207,
-0.1743422896,
0.1949502826,
-0.0547123365,
-0.1336168945,
0.1672874391,
0.280290097,
0.1399284005,
-0.096214... |
https://github.com/huggingface/datasets/issues/3448 | JSONDecodeError with HuggingFace dataset viewer | It seems that the `feature` key is missing from some feature type definition in your dataset_infos.json:
```json
"tokens": {
"dtype": "list",
"id": null,
"_type": "Sequence"
},
"tags": {
"dtype": "list",
"id": null,
"_type": "Sequence"
}
```
They should be
```json
"... | ## Dataset viewer issue for 'pubmed_neg'
**Link:** https://huggingface.co/datasets/IGESML/pubmed_neg
I am getting the error:
Status code: 400
Exception: JSONDecodeError
Message: Expecting property name enclosed in double quotes: line 61 column 2 (char 1202)
I have checked all files - I am not u... | 98 | JSONDecodeError with HuggingFace dataset viewer
## Dataset viewer issue for 'pubmed_neg'
**Link:** https://huggingface.co/datasets/IGESML/pubmed_neg
I am getting the error:
Status code: 400
Exception: JSONDecodeError
Message: Expecting property name enclosed in double quotes: line 61 column 2 (c... | [
0.1093126163,
-0.0348637216,
0.0341626778,
0.1241071299,
0.1576343179,
0.0917591006,
0.0911152512,
0.1819448918,
-0.0062630931,
0.1343883127,
0.160380289,
0.3906168342,
-0.2260689139,
0.177926749,
-0.0963281766,
-0.2384369373,
0.153585434,
0.2733956277,
0.1301555037,
-0.0759228... |
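For context on the fix suggested in the comment above, here is a minimal sketch of how the two columns could be declared in the loading script so that the regenerated dataset_infos.json contains the nested `feature` entry; treating both `tokens` and `tags` as sequences of plain strings is an assumption about this particular dataset:

```python
from datasets import Features, Sequence, Value

# Assumed feature declaration for the pubmed_neg loading script.
features = Features(
    {
        "tokens": Sequence(Value("string")),
        "tags": Sequence(Value("string")),  # assumption: tags stored as plain strings
    }
)
print(features)
```

Regenerating dataset_infos.json with features declared this way should serialize each `Sequence` together with its inner `feature` definition, which is what the viewer expects.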
https://github.com/huggingface/datasets/issues/3447 | HF_DATASETS_OFFLINE=1 didn't stop datasets.builder from downloading | Hi ! Indeed it says "downloading and preparing" but in your case it didn't need to download anything since you used local files (it would have thrown an error otherwise). I think we can improve the logging to make it clearer in this case | ## Describe the bug
According to https://huggingface.co/docs/datasets/loading_datasets.html#loading-a-dataset-builder, setting HF_DATASETS_OFFLINE to 1 should make datasets to "run in full offline mode". It didn't work for me. At the very beginning, datasets still tried to download "custom data configuration" for JSON... | 44 | HF_DATASETS_OFFLINE=1 didn't stop datasets.builder from downloading
## Describe the bug
According to https://huggingface.co/docs/datasets/loading_datasets.html#loading-a-dataset-builder, setting HF_DATASETS_OFFLINE to 1 should make datasets to "run in full offline mode". It didn't work for me. At the very beginning... | [
-0.3771060407,
0.3645550311,
0.0545568578,
0.297896266,
0.2478350103,
-0.0242792517,
0.2448200583,
0.1876754463,
0.2710855901,
0.0606375188,
-0.0563665815,
0.0012010799,
-0.0477959551,
0.2244563699,
-0.042479936,
0.1671870351,
0.0634073466,
0.0005657274,
-0.0805838034,
-0.07237... |
https://github.com/huggingface/datasets/issues/3447 | HF_DATASETS_OFFLINE=1 didn't stop datasets.builder from downloading | @lhoestq Thank you for explaining. I am sorry but I was not clear about my intention. I didn't want to kill internet traffic; I wanted to kill all write activity. In other words, you can imagine that my storage has only read access but crashes on write.
When run_clm.py is invoked with the same parameters, the hash i... | ## Describe the bug
According to https://huggingface.co/docs/datasets/loading_datasets.html#loading-a-dataset-builder, setting HF_DATASETS_OFFLINE to 1 should make datasets to "run in full offline mode". It didn't work for me. At the very beginning, datasets still tried to download "custom data configuration" for JSON... | 201 | HF_DATASETS_OFFLINE=1 didn't stop datasets.builder from downloading
## Describe the bug
According to https://huggingface.co/docs/datasets/loading_datasets.html#loading-a-dataset-builder, setting HF_DATASETS_OFFLINE to 1 should make datasets to "run in full offline mode". It didn't work for me. At the very beginning... | [
-0.3771060407,
0.3645550311,
0.0545568578,
0.297896266,
0.2478350103,
-0.0242792517,
0.2448200583,
0.1876754463,
0.2710855901,
0.0606375188,
-0.0563665815,
0.0012010799,
-0.0477959551,
0.2244563699,
-0.042479936,
0.1671870351,
0.0634073466,
0.0005657274,
-0.0805838034,
-0.07237... |
https://github.com/huggingface/datasets/issues/3447 | HF_DATASETS_OFFLINE=1 didn't stop datasets.builder from downloading | Hi ! `load_dataset` may re-generate your dataset if some parameters changed indeed. If you want to freeze a dataset loaded with `load_dataset`, I think the best solution is just to save it somewhere on your disk with `.save_to_disk(my_dataset_dir)` and reload it with `load_from_disk(my_dataset_dir)`. This way you will ... | ## Describe the bug
According to https://huggingface.co/docs/datasets/loading_datasets.html#loading-a-dataset-builder, setting HF_DATASETS_OFFLINE to 1 should make datasets to "run in full offline mode". It didn't work for me. At the very beginning, datasets still tried to download "custom data configuration" for JSON... | 58 | HF_DATASETS_OFFLINE=1 didn't stop datasets.builder from downloading
## Describe the bug
According to https://huggingface.co/docs/datasets/loading_datasets.html#loading-a-dataset-builder, setting HF_DATASETS_OFFLINE to 1 should make datasets to "run in full offline mode". It didn't work for me. At the very beginning... | [
-0.3771060407,
0.3645550311,
0.0545568578,
0.297896266,
0.2478350103,
-0.0242792517,
0.2448200583,
0.1876754463,
0.2710855901,
0.0606375188,
-0.0563665815,
0.0012010799,
-0.0477959551,
0.2244563699,
-0.042479936,
0.1671870351,
0.0634073466,
0.0005657274,
-0.0805838034,
-0.07237... |
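A minimal sketch of the freezing approach described in the comment above; the dataset name and directory are placeholders:

```python
from datasets import load_dataset, load_from_disk

my_dataset_dir = "./frozen_dataset"  # placeholder path

# Prepare the dataset once, while downloads and cache writes are still possible...
dataset = load_dataset("squad", split="train")  # placeholder dataset
dataset.save_to_disk(my_dataset_dir)

# ...then reload it later without re-running the builder or touching the cache layout.
dataset = load_from_disk(my_dataset_dir)
```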
https://github.com/huggingface/datasets/issues/3444 | Align the Dataset and IterableDataset processing API | Yes I agree, these should be as aligned as possible. Maybe we can also check the feedback in the survey at http://hf.co/oss-survey and see if people mentioned related things on the API (in particular if we go the breaking change way, it would be good to be sure we are taking the right direction for the community). | ## Intro
Currently the two classes have two distinct API for processing:
### The `.map()` method
Both have those parameters in common: function, batched, batch_size
- IterableDataset is missing those parameters:
with_indices, with_rank, input_columns, drop_last_batch, remove_columns, features, disable_null... | 57 | Align the Dataset and IterableDataset processing API
## Intro
Currently the two classes have two distinct API for processing:
### The `.map()` method
Both have those parameters in common: function, batched, batch_size
- IterableDataset is missing those parameters:
with_indices, with_rank, input_columns, ... | [
-0.3938247263,
0.0599105097,
-0.1768295169,
0.1296566725,
0.1461639255,
0.1594101638,
0.1987354755,
0.4363477528,
-0.1442689151,
0.1145086735,
-0.1134143472,
0.4282385409,
-0.004529018,
0.0938546732,
-0.1481654942,
-0.2814331353,
0.2069459409,
0.2346381545,
-0.2922790945,
-0.08... |
https://github.com/huggingface/datasets/issues/3444 | Align the Dataset and IterableDataset processing API | I like this proposal.
> There is also an important difference in terms of behavior:
Dataset.map adds new columns (with dict.update)
BUT
IterableDataset discards previous columns (it overwrites the dict)
IMO the two methods should have the same behavior. This would be an important breaking change though.
> The... | ## Intro
Currently the two classes have two distinct API for processing:
### The `.map()` method
Both have those parameters in common: function, batched, batch_size
- IterableDataset is missing those parameters:
with_indices, with_rank, input_columns, drop_last_batch, remove_columns, features, disable_null... | 322 | Align the Dataset and IterableDataset processing API
## Intro
Currently the two classes have two distinct API for processing:
### The `.map()` method
Both have those parameters in common: function, batched, batch_size
- IterableDataset is missing those parameters:
with_indices, with_rank, input_columns, ... | [
-0.3938247263,
0.0599105097,
-0.1768295169,
0.1296566725,
0.1461639255,
0.1594101638,
0.1987354755,
0.4363477528,
-0.1442689151,
0.1145086735,
-0.1134143472,
0.4282385409,
-0.004529018,
0.0938546732,
-0.1481654942,
-0.2814331353,
0.2069459409,
0.2346381545,
-0.2922790945,
-0.08... |
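A small sketch illustrating the behavioral difference called out in the comment above: `Dataset.map` merges the returned dict into the existing columns, whereas (at the time of this issue) `IterableDataset.map` kept only the keys returned by the function:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})

# Dataset.map keeps the existing columns and adds the new one (dict.update behavior).
mapped = ds.map(lambda example: {"length": len(example["text"])})
print(mapped.column_names)  # ['text', 'length']

# According to the discussion above, the streaming IterableDataset.map of that era
# would instead yield examples containing only the "length" key, which is the
# discrepancy this issue proposes to align.
```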
https://github.com/huggingface/datasets/issues/3444 | Align the Dataset and IterableDataset processing API | > If I understand this part correctly, the idea would be for Dataset.map to behave similarly to Dataset.with_transform (lazy processing) and to have an option to cache processed data (with .cache()). This idea is really nice because it can also be applied to IterableDataset to fix #3142 (again we get the aligned APIs).... | ## Intro
Currently the two classes have two distinct API for processing:
### The `.map()` method
Both have those parameters in common: function, batched, batch_size
- IterableDataset is missing those parameters:
with_indices, with_rank, input_columns, drop_last_batch, remove_columns, features, disable_null... | 105 | Align the Dataset and IterableDataset processing API
## Intro
Currently the two classes have two distinct API for processing:
### The `.map()` method
Both have those parameters in common: function, batched, batch_size
- IterableDataset is missing those parameters:
with_indices, with_rank, input_columns, ... | [
-0.3938247263,
0.0599105097,
-0.1768295169,
0.1296566725,
0.1461639255,
0.1594101638,
0.1987354755,
0.4363477528,
-0.1442689151,
0.1145086735,
-0.1134143472,
0.4282385409,
-0.004529018,
0.0938546732,
-0.1481654942,
-0.2814331353,
0.2069459409,
0.2346381545,
-0.2922790945,
-0.08... |
https://github.com/huggingface/datasets/issues/3444 | Align the Dataset and IterableDataset processing API | Yes indeed, thanks. I added it to the list of methods to align in the first post | ## Intro
Currently the two classes have two distinct API for processing:
### The `.map()` method
Both have those parameters in common: function, batched, batch_size
- IterableDataset is missing those parameters:
with_indices, with_rank, input_columns, drop_last_batch, remove_columns, features, disable_null... | 17 | Align the Dataset and IterableDataset processing API
## Intro
Currently the two classes have two distinct API for processing:
### The `.map()` method
Both have those parameters in common: function, batched, batch_size
- IterableDataset is missing those parameters:
with_indices, with_rank, input_columns, ... | [
-0.3938247263,
0.0599105097,
-0.1768295169,
0.1296566725,
0.1461639255,
0.1594101638,
0.1987354755,
0.4363477528,
-0.1442689151,
0.1145086735,
-0.1134143472,
0.4282385409,
-0.004529018,
0.0938546732,
-0.1481654942,
-0.2814331353,
0.2069459409,
0.2346381545,
-0.2922790945,
-0.08... |
https://github.com/huggingface/datasets/issues/3440 | datasets keeps reading from cached files, although I disabled it | Hi ! What version of `datasets` are you using ? Can you also provide the logs you get before it raises the error ? | ## Describe the bug
Hi,
I am trying to prevent the datasets library from using cached files, but I get the following bug when it tries to read the cached files. I tried the following:
```
from datasets import set_caching_enabled
set_caching_enabled(False)
```
also force redownload:
```
download_mode='force_redownloa... | 24 | datasets keeps reading from cached files, although I disabled it
## Describe the bug
Hi,
I am trying to prevent the datasets library from using cached files, but I get the following bug when it tries to read the cached files. I tried the following:
```
from datasets import set_caching_enabled
set_caching_enabled(False)... | [
-0.134791404,
0.0252022874,
0.0009025426,
0.5154468417,
0.4938995242,
0.2646331489,
0.057791438,
0.325937748,
0.0154794231,
-0.0155137088,
-0.2143104076,
0.2182268202,
-0.085602425,
-0.3043366373,
-0.1368986666,
0.2771289349,
0.1166641265,
0.0681685731,
0.0084792469,
-0.0709792... |
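A runnable version of the two knobs shown (truncated) in the issue body above; the dataset name is a placeholder:

```python
from datasets import load_dataset, set_caching_enabled

# Disable reuse of cached processed files (e.g. results of .map), as in the issue body.
set_caching_enabled(False)

# Force the raw data to be downloaded and the dataset to be prepared again,
# instead of reusing whatever is already in the cache.
dataset = load_dataset("squad", split="train", download_mode="force_redownload")  # placeholder dataset
```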
https://github.com/huggingface/datasets/issues/3431 | Unable to resolve any data file after loading once | Hi ! `load_dataset` accepts as input either a local dataset directory or a dataset name from the Hugging Face Hub.
So here you are getting this error the second time because it tries to load the local `wiki_dpr` directory, instead of `wiki_dpr` from the Hub. It doesn't work since it's a **cache** directory, not a **... | when I rerun my program, this error occurs:
" Unable to resolve any data file that matches '['**train*']' at /data2/whr/lzy/open_domain_data/retrieval/wiki_dpr with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'zip']", so how could i deal with this problem?
thx.
And below is my code.
... | 70 | Unable to resolve any data file after loading once
when I rerun my program, this error occurs:
" Unable to resolve any data file that matches '['**train*']' at /data2/whr/lzy/open_domain_data/retrieval/wiki_dpr with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'zip']", so how could i d... | [
0.0235010572,
-0.0923303813,
-0.0853217915,
0.5271120667,
0.3798449636,
0.14113383,
0.0799056143,
0.5025756359,
0.2032744288,
0.075701423,
0.1105002686,
-0.0108606387,
0.1759342104,
0.0473020822,
-0.0830672532,
-0.1982478648,
-0.1227090135,
0.2124536932,
0.1570390165,
0.0565082... |
https://github.com/huggingface/datasets/issues/3425 | Getting configs names takes too long | It looks like it's currently calling `HfFileSystem.ls()` ~8 times at the root and for each subdirectory:
- ""
- "en.noblocklist"
- "en.noclean"
- "en"
- "multilingual"
- "realnewslike"
Currently `ls` is slow because it iterates on all the files inside the repository.
An easy optimization would be to cache t... |
## Steps to reproduce the bug
```python
from datasets import get_dataset_config_names
get_dataset_config_names("allenai/c4")
```
## Expected results
I would expect to get the answer quickly, at least in less than 10s
## Actual results
It takes about 45s on my environment
## Environment info
- `d... | 76 | Getting configs names takes too long
## Steps to reproduce the bug
```python
from datasets import get_dataset_config_names
get_dataset_config_names("allenai/c4")
```
## Expected results
I would expect to get the answer quickly, at least in less than 10s
## Actual results
It takes about 45s on my env... | [
-0.3283521235,
0.1722298414,
-0.1880414635,
0.377035141,
0.199928537,
-0.1003855318,
0.1385930628,
0.4915502965,
-0.0290571153,
0.3562119901,
-0.1960751116,
0.352930218,
0.132097736,
-0.2432373315,
-0.2165948004,
0.0623897761,
-0.0295607448,
0.1501358598,
0.0637908578,
-0.24292... |
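The comment above suggests that an easy optimization would be to cache the file listing so that `ls` is not recomputed for the root and for every subdirectory. A generic, hypothetical sketch of that idea (the helpers below are for illustration only and are not the viewer backend's real code):

```python
from functools import lru_cache

from huggingface_hub import HfApi


@lru_cache(maxsize=None)
def cached_repo_files(repo_id: str) -> tuple:
    # One listing call per repository: enumerate every file once and memoize it.
    return tuple(HfApi().list_repo_files(repo_id, repo_type="dataset"))


def ls(repo_id: str, path: str = "") -> list:
    # Cheap local filtering of the cached listing for a given subdirectory.
    prefix = f"{path}/" if path else ""
    return [f for f in cached_repo_files(repo_id) if f.startswith(prefix)]


print(ls("allenai/c4", "realnewslike")[:5])
```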
https://github.com/huggingface/datasets/issues/3423 | data duplicate when setting num_works > 1 with streaming data | Hi ! Thanks for reporting :)
When using a PyTorch data loader with `num_workers>1` and an iterable dataset, each worker streams the exact same data by default, resulting in duplicate data when iterating using the data loader.
We can probably fix this in `datasets` by checking `torch.utils.data.get_worker_info()... | ## Describe the bug
The data is repeated num_works times when we call load_dataset with streaming and set num_works > 1 when constructing the dataloader
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import pandas as pd
import numpy as np
import os
from datasets import load_dataset
from tor... | 55 | data duplicate when setting num_works > 1 with streaming data
## Describe the bug
The data is repeated num_works times when we call load_dataset with streaming and set num_works > 1 when constructing the dataloader
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import pandas as pd
import numpy... | [
-0.232575953,
-0.3752961457,
-0.0161686987,
0.3272433281,
0.254571557,
-0.1762158871,
0.5691007376,
0.3358851373,
-0.2572708726,
0.421199739,
0.0956361741,
0.2009149492,
-0.0080959164,
0.1235197484,
0.144700557,
0.0096041495,
0.0426976569,
0.1493969113,
-0.2690355778,
0.1793185... |
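A generic sketch of the fix direction mentioned in the comment above: use `torch.utils.data.get_worker_info()` inside the iterable dataset so each worker yields a disjoint shard. This is the standard PyTorch pattern, not the patch that eventually landed in `datasets`:

```python
from torch.utils.data import DataLoader, IterableDataset, get_worker_info


class ShardedStream(IterableDataset):
    """Toy iterable dataset whose worker replicas coordinate via get_worker_info()."""

    def __init__(self, n_items: int):
        self.n_items = n_items

    def __iter__(self):
        info = get_worker_info()
        worker_id = info.id if info is not None else 0
        num_workers = info.num_workers if info is not None else 1
        # Each worker yields a disjoint slice of the stream, so the DataLoader no
        # longer produces num_workers copies of every sample.
        for i in range(worker_id, self.n_items, num_workers):
            yield {"idx": i}


if __name__ == "__main__":
    loader = DataLoader(ShardedStream(8), num_workers=2, batch_size=None)
    print(sorted(sample["idx"] for sample in loader))  # every index appears exactly once
```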
https://github.com/huggingface/datasets/issues/3423 | data duplicate when setting num_works > 1 with streaming data | > Hi ! Thanks for reporting :)
>
> When using a PyTorch data loader with `num_workers>1` and an iterable dataset, each worker streams the exact same data by default, resulting in duplicate data when iterating using the data loader.
>
> We can probably fix this in `datasets` by checking `torch.utils.data.get_wor... | ## Describe the bug
The data is repeated num_works times when we call load_dataset with streaming and set num_works > 1 when constructing the dataloader
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import pandas as pd
import numpy as np
import os
from datasets import load_dataset
from tor... | 74 | data duplicate when setting num_works > 1 with streaming data
## Describe the bug
The data is repeated num_works times when we call load_dataset with streaming and set num_works > 1 when constructing the dataloader
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import pandas as pd
import numpy... | [
-0.232575953,
-0.3752961457,
-0.0161686987,
0.3272433281,
0.254571557,
-0.1762158871,
0.5691007376,
0.3358851373,
-0.2572708726,
0.421199739,
0.0956361741,
0.2009149492,
-0.0080959164,
0.1235197484,
0.144700557,
0.0096041495,
0.0426976569,
0.1493969113,
-0.2690355778,
0.1793185... |
https://github.com/huggingface/datasets/issues/3423 | data duplicate when setting num_works > 1 with streaming data | Isn’t that somehow a bug on PyTorch side? (Just asking because this behavior seems quite general and maybe not what would be intended) | ## Describe the bug
The data is repeated num_works times when we call load_dataset with streaming and set num_works > 1 when constructing the dataloader
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import pandas as pd
import numpy as np
import os
from datasets import load_dataset
from tor... | 23 | data duplicate when setting num_works > 1 with streaming data
## Describe the bug
The data is repeated num_works times when we call load_dataset with streaming and set num_works > 1 when constructing the dataloader
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import pandas as pd
import numpy... | [
-0.232575953,
-0.3752961457,
-0.0161686987,
0.3272433281,
0.254571557,
-0.1762158871,
0.5691007376,
0.3358851373,
-0.2572708726,
0.421199739,
0.0956361741,
0.2009149492,
-0.0080959164,
0.1235197484,
0.144700557,
0.0096041495,
0.0426976569,
0.1493969113,
-0.2690355778,
0.1793185... |
https://github.com/huggingface/datasets/issues/3423 | data duplicate when setting num_works > 1 with streaming data | From PyTorch's documentation [here](https://pytorch.org/docs/stable/data.html#dataset-types):
> When using an IterableDataset with multi-process data loading. The same dataset object is replicated on each worker process, and thus the replicas must be configured differently to avoid duplicated data. See [IterableData... | ## Describe the bug
The data is repeated num_works times when we call load_dataset with streaming and set num_works > 1 when constructing the dataloader
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import pandas as pd
import numpy as np
import os
from datasets import load_dataset
from tor... | 127 | data duplicate when setting num_works > 1 with streaming data
## Describe the bug
The data is repeated num_works times when we call load_dataset with streaming and set num_works > 1 when constructing the dataloader
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import pandas as pd
import numpy... | [
-0.232575953,
-0.3752961457,
-0.0161686987,
0.3272433281,
0.254571557,
-0.1762158871,
0.5691007376,
0.3358851373,
-0.2572708726,
0.421199739,
0.0956361741,
0.2009149492,
-0.0080959164,
0.1235197484,
0.144700557,
0.0096041495,
0.0426976569,
0.1493969113,
-0.2690355778,
0.1793185... |
https://github.com/huggingface/datasets/issues/3422 | Error about load_metric | Hi ! I wasn't able to reproduce your error.
Can you try to clear your cache at `~/.cache/huggingface/modules` and try again ? | ## Describe the bug
File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1371, in load_metric
metric = metric_cls(
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
metric = load_metric("glue", "sst2")
```
## Environment info
- `datasets` version: ... | 22 | Error about load_metric
## Describe the bug
File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1371, in load_metric
metric = metric_cls(
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
metric = load_metric("glue", "sst2")
```
## Environment in... | [
-0.1543193907,
-0.3634535074,
0.0099821575,
0.3949095309,
0.4780282378,
-0.0076498594,
0.1825974137,
0.1899198741,
0.1398091912,
0.1264668852,
-0.2365793586,
0.0703849867,
0.0390485004,
0.2697517574,
0.2777209878,
-0.1408575922,
-0.2320583612,
0.1285481155,
-0.3948494792,
0.137... |
https://github.com/huggingface/datasets/issues/3419 | `.to_json` is extremely slow after `.select` | Hi ! It's slower indeed because a dataset on which `select`/`shard`/`train_test_split`/`shuffle` has been called has to do additional steps to retrieve the data of the dataset table in the right order.
Indeed, if you call `dataset.select([0, 5, 10])`, the underlying table of the dataset is not altered to keep the e... | ## Describe the bug
Saving a dataset to JSON with `to_json` is extremely slow after using `.select` on the original dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
original = load_dataset("squad", split="train")
original.to_json("from_original.json") # Takes 0 seconds
se... | 208 | `.to_json` is extremely slow after `.select`
## Describe the bug
Saving a dataset to JSON with `to_json` is extremely slow after using `.select` on the original dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
original = load_dataset("squad", split="train")
original.to_json... | [
-0.3134348094,
0.1887502223,
-0.0610732362,
-0.0651996881,
0.2008450627,
0.0744217187,
0.084237203,
0.4034846723,
-0.2616819143,
0.2137250602,
0.0064629111,
0.7067224979,
-0.007227724,
-0.0352007858,
0.0019335635,
-0.0267549418,
0.1643504947,
-0.1066153273,
0.0210075192,
-0.126... |
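Given the explanation above (an indices mapping is kept around after `.select`), one possible user-side mitigation is to materialize the selection before exporting. `flatten_indices()` is an existing `datasets` method; whether the extra rewrite pays off for a given dataset size is an assumption to verify:

```python
from datasets import load_dataset

original = load_dataset("squad", split="train")
selected = original.select(range(len(original)))

# Rewrite the underlying Arrow table in the selected order and drop the indices
# mapping, so the export below no longer pays the per-row lookup cost.
selected = selected.flatten_indices()
selected.to_json("from_select_flattened.json")
```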
https://github.com/huggingface/datasets/issues/3419 | `.to_json` is extremely slow after `.select` | Hi, thanks for the response!
I still don't understand why it is so much slower than iterating and saving:
```python
from datasets import load_dataset
original = load_dataset("squad", split="train")
original.to_json("from_original.json") # Takes 0 seconds
selected_subset1 = original.select([i for i in range(l... | ## Describe the bug
Saving a dataset to JSON with `to_json` is extremely slow after using `.select` on the original dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
original = load_dataset("squad", split="train")
original.to_json("from_original.json") # Takes 0 seconds
se... | 157 | `.to_json` is extremely slow after `.select`
## Describe the bug
Saving a dataset to JSON with `to_json` is extremely slow after using `.select` on the original dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
original = load_dataset("squad", split="train")
original.to_json... | [
-0.141525194,
0.0571966134,
-0.0722180232,
0.0681989118,
0.136124745,
0.2194975615,
0.046746891,
0.3875999153,
-0.2274019718,
0.0900387391,
0.0242867842,
0.5975733995,
0.1035956293,
-0.005102756,
-0.0279693883,
-0.086014539,
0.2221941054,
-0.0900991112,
0.1746011972,
-0.1625322... |
https://github.com/huggingface/datasets/issues/3419 | `.to_json` is extremely slow after `.select` | There are slight differences between what you're doing and what `to_json` is actually doing.
In particular `to_json` currently converts batches of rows (as an arrow table) to a pandas dataframe, and then to JSON Lines. From your benchmark it looks like it's faster if we don't use pandas.
Thanks for investigating, I... | ## Describe the bug
Saving a dataset to JSON with `to_json` is extremely slow after using `.select` on the original dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
original = load_dataset("squad", split="train")
original.to_json("from_original.json") # Takes 0 seconds
se... | 62 | `.to_json` is extremely slow after `.select`
## Describe the bug
Saving a dataset to JSON with `to_json` is extremely slow after using `.select` on the original dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
original = load_dataset("squad", split="train")
original.to_json... | [
-0.2503959835,
0.2378024757,
-0.1249384433,
-0.1713117659,
0.1673329324,
0.0435040705,
0.0635808259,
0.6106277108,
-0.2218094766,
0.0939173326,
-0.0214232244,
0.6983178258,
0.0707772151,
-0.079195939,
-0.0524651781,
0.0165274143,
0.1349955648,
-0.0152917076,
0.0161851514,
-0.13... |
https://github.com/huggingface/datasets/issues/3419 | `.to_json` is extremely slow after `.select` | Thanks for your observations, @eladsegal! I spent some time with this and tried different approaches. Turns out that https://github.com/huggingface/datasets/blob/bb13373637b1acc55f8a468a8927a56cf4732230/src/datasets/io/json.py#L100 is giving the problem when we use `to_json` after `select`. This is when `indices` param... | ## Describe the bug
Saving a dataset to JSON with `to_json` is extremely slow after using `.select` on the original dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
original = load_dataset("squad", split="train")
original.to_json("from_original.json") # Takes 0 seconds
se... | 100 | `.to_json` is extremely slow after `.select`
## Describe the bug
Saving a dataset to JSON with `to_json` is extremely slow after using `.select` on the original dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
original = load_dataset("squad", split="train")
original.to_json... | [
-0.2212164849,
0.1027248725,
-0.0186260473,
0.1187605038,
0.2550524175,
0.1281235218,
-0.0699113682,
0.4468907118,
-0.1404294074,
0.0742776841,
-0.0600850955,
0.7489500046,
0.1289311498,
0.0481083617,
0.0308408998,
0.1059495285,
0.0500632152,
0.0200785641,
-0.0194915552,
-0.068... |
https://github.com/huggingface/datasets/issues/3419 | `.to_json` is extremely slow after `.select` | Posting it in @eladsegal's format:
For `squad`:
Saving examples using current `to_json` in 3.63 secs
Saving examples to `from_select1_fast.json` in 5.00 secs
Saving examples to `from_select2_fast.json` in 2.45 secs
Saving examples to `from_select3_fast.json` in 2.50 secs
For `squad_v2`:
Saving examples using... | ## Describe the bug
Saving a dataset to JSON with `to_json` is extremely slow after using `.select` on the original dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
original = load_dataset("squad", split="train")
original.to_json("from_original.json") # Takes 0 seconds
se... | 67 | `.to_json` is extremely slow after `.select`
## Describe the bug
Saving a dataset to JSON with `to_json` is extremely slow after using `.select` on the original dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
original = load_dataset("squad", split="train")
original.to_json... | [
-0.3101400733,
0.0925761536,
-0.1085358784,
-0.0349015072,
0.1821510494,
0.138945356,
-0.0513339117,
0.4654997885,
-0.1702192128,
0.0859459564,
0.0617815703,
0.6600276232,
0.0525261126,
0.0195463039,
-0.104922533,
-0.0219948236,
0.0947584137,
-0.0787242055,
0.1732742488,
-0.082... |
https://github.com/huggingface/datasets/issues/3416 | disaster_response_messages unavailable | Hi, thanks for reporting! This is a duplicate of https://github.com/huggingface/datasets/issues/3240. We are working on a fix.
| ## Dataset viewer issue for '* disaster_response_messages*'
**Link:** https://huggingface.co/datasets/disaster_response_messages
Dataset unavailable. Link dead: https://datasets.appen.com/appen_datasets/disaster_response_data/disaster_response_messages_training.csv
Am I the one who added this dataset? No
| 16 | disaster_response_messages unavailable
## Dataset viewer issue for '* disaster_response_messages*'
**Link:** https://huggingface.co/datasets/disaster_response_messages
Dataset unavailable. Link dead: https://datasets.appen.com/appen_datasets/disaster_response_data/disaster_response_messages_training.csv
Am I... | [
-0.0299732853,
-0.5003219843,
-0.0671462715,
0.319308728,
0.2122155577,
0.2117061615,
0.048752252,
-0.0459239557,
0.0785185397,
0.0380444042,
-0.1369879991,
0.0033189463,
-0.3648077548,
0.0728738829,
0.3103375733,
-0.1563293934,
-0.0478129722,
0.027724579,
-0.1398403943,
0.1422... |
https://github.com/huggingface/datasets/issues/3415 | Non-deterministic tests: CI tests randomly fail | I think it might come from two different issues:
1. Google Drive is an unreliable host, mainly because of quota limitations
2. the staging environment can sometimes raise some errors
For Google Drive tests we could set up some retries with backup URLs if necessary I guess.
For staging on the other hand, I guess w... | ## Describe the bug
Some CI tests fail randomly.
1. In https://github.com/huggingface/datasets/pull/3375/commits/c10275fe36085601cb7bdb9daee9a8f1fc734f48, there were 3 failing tests, only on Linux:
```
=========================== short test summary info ============================
FAILED tests/test_str... | 67 | Non-deterministic tests: CI tests randomly fail
## Describe the bug
Some CI tests fail randomly.
1. In https://github.com/huggingface/datasets/pull/3375/commits/c10275fe36085601cb7bdb9daee9a8f1fc734f48, there were 3 failing tests, only on Linux:
```
=========================== short test summary info ====... | [
-0.1584395617,
-0.123356171,
-0.0278884545,
-0.0405436046,
0.2528170943,
-0.0344202034,
0.4490726888,
-0.1110161319,
0.0352990367,
0.5563315749,
0.1367921531,
0.0470314212,
-0.1694109738,
0.3488437533,
-0.0115375202,
-0.1670829505,
-0.2844315469,
-0.0986223891,
0.0157413092,
-0... |
https://github.com/huggingface/datasets/issues/3400 | Improve Wikipedia loading script | Thanks! See https://public.paws.wmcloud.org/User:Isaac_(WMF)/HuggingFace%20Wikipedia%20Processing.ipynb for more implementation details / some data around the overhead induced by adding the extra preprocessing steps (stripping link prefixes and magic words) | As reported by @geohci, the "wikipedia" processing/loading script could be improved by some additional small suggested processing functions:
- _extract_content(filepath):
- Replace .startswith("#redirect") with a more structured approach: if elem.find(f"./{namespace}redirect") is None: continue
- _parse_and_clean_wi... | 26 | Improve Wikipedia loading script
As reported by @geohci, the "wikipedia" processing/loading script could be improved by some additional small suggested processing functions:
- _extract_content(filepath):
- Replace .startswith("#redirect") with more structured approach: if elem.find(f"./{namespace}redirect") is No... | [
0.1939674467,
0.0157280955,
-0.1693702787,
-0.0298059471,
-0.1310296953,
0.0113722272,
0.1996670663,
0.5238221288,
0.4024954438,
-0.0102350758,
0.0286973957,
0.0813336745,
0.172223419,
-0.0693214908,
0.0228339527,
-0.1620891839,
0.0589284673,
0.075187549,
-0.119600378,
-0.02074... |
https://github.com/huggingface/datasets/issues/3398 | Add URL field to Wikimedia dataset instances: wikipedia,... | @geohci, I think the field "url" does not appear in the Wikimedia dumps. Therefore I guess we should generate it, using the "title" field and making some transformation of it (replacing spaces with underscores) and prepending the domain (created using the language)? | As reported by @geohci, in order to host pre-processed data in the Hub, we should add the full URL to data instances (new field "url"), so that we conform to proper attribution from license requirement. See, e.g.: https://fair-trec.github.io/docs/Fair_Ranking_2021_Participant_Instructions.pdf#subsection.3.2
This sho... | 42 | Add URL field to Wikimedia dataset instances: wikipedia,...
As reported by @geohci, in order to host pre-processed data in the Hub, we should add the full URL to data instances (new field "url"), so that we conform to proper attribution from license requirement. See, e.g.: https://fair-trec.github.io/docs/Fair_Rankin... | [
0.1571048498,
0.2313973308,
0.0553592704,
-0.0971824825,
-0.03358436,
0.1205123737,
0.0998200104,
-0.0271696579,
0.1562780887,
0.1611171365,
0.1287598759,
0.3992244303,
0.242775768,
-0.0161260925,
0.1180695519,
-0.0430261418,
0.054670684,
-0.1464303583,
0.07935974,
-0.026732899... |
https://github.com/huggingface/datasets/issues/3398 | Add URL field to Wikimedia dataset instances: wikipedia,... | Indeed:
> To re-distribute text on Wikipedia in any form, provide credit to the authors either by including a) a [hyperlink](https://en.wikipedia.org/wiki/Hyperlink) (where possible) or [URL](https://en.wikipedia.org/wiki/URL) to the page or pages you are re-using, b) a hyperlink (where possible) or URL to an altern... | As reported by @geohci, in order to host pre-processed data in the Hub, we should add the full URL to data instances (new field "url"), so that we conform to proper attribution from license requirement. See, e.g.: https://fair-trec.github.io/docs/Fair_Ranking_2021_Participant_Instructions.pdf#subsection.3.2
This sho... | 190 | Add URL field to Wikimedia dataset instances: wikipedia,...
As reported by @geohci, in order to host pre-processed data in the Hub, we should add the full URL to data instances (new field "url"), so that we conform to proper attribution from license requirement. See, e.g.: https://fair-trec.github.io/docs/Fair_Rankin... | [
0.1469034106,
0.1563643664,
0.0456905849,
-0.0973032638,
-0.0687778667,
0.088216275,
0.2351887822,
0.1012056246,
0.2732242048,
0.0965474546,
-0.0006463797,
0.5451566577,
0.3293744922,
-0.0347850844,
0.1449322999,
0.0359689258,
-0.0703827739,
-0.0078177322,
0.090093933,
-0.11893... |
https://github.com/huggingface/datasets/issues/3398 | Add URL field to Wikimedia dataset instances: wikipedia,... | yep, sorry forgot that that wasn't already in the dumps. specifically `f"https://{language}.wikipedia.org/wiki/{title.replace(' ', '_')}` should do it | As reported by @geohci, in order to host pre-processed data in the Hub, we should add the full URL to data instances (new field "url"), so that we conform to proper attribution from license requirement. See, e.g.: https://fair-trec.github.io/docs/Fair_Ranking_2021_Participant_Instructions.pdf#subsection.3.2
This sho... | 17 | Add URL field to Wikimedia dataset instances: wikipedia,...
As reported by @geohci, in order to host pre-processed data in the Hub, we should add the full URL to data instances (new field "url"), so that we conform to proper attribution from license requirement. See, e.g.: https://fair-trec.github.io/docs/Fair_Rankin... | [
0.1569753885,
0.1533058435,
-0.0069044665,
-0.1684149206,
0.0327236876,
0.1310461462,
0.0735662058,
0.1554722637,
0.2668457925,
0.151658386,
0.0103205713,
0.550291419,
0.3304097354,
-0.0482819565,
-0.0146247707,
0.0482313037,
0.0911027268,
-0.0532020852,
0.1225777715,
-0.080034... |
https://github.com/huggingface/datasets/issues/3398 | Add URL field to Wikimedia dataset instances: wikipedia,... | Thanks @geohci.
I had already been looking for information about the conversion from title to URL and I found that, apart from replacing blanks with underscores, some other special characters must also be percent-encoded (e.g. `"` to `%22`): https://meta.wikimedia.org/wiki/Help:URL
Therefore, I have finally used `u... | As reported by @geohci, in order to host pre-processed data in the Hub, we should add the full URL to data instances (new field "url"), so that we conform to proper attribution from license requirement. See, e.g.: https://fair-trec.github.io/docs/Fair_Ranking_2021_Participant_Instructions.pdf#subsection.3.2
This sho... | 123 | Add URL field to Wikimedia dataset instances: wikipedia,...
As reported by @geohci, in order to host pre-processed data in the Hub, we should add the full URL to data instances (new field "url"), so that we conform to proper attribution from license requirement. See, e.g.: https://fair-trec.github.io/docs/Fair_Rankin... | [
0.2340035588,
0.201874122,
0.0728768408,
-0.1125444248,
0.1974134296,
0.1343038082,
0.1067444161,
0.0891974717,
0.0614389367,
0.1533166468,
-0.0374546908,
0.6567925215,
0.2475124598,
-0.0832400024,
0.0288755186,
-0.1070964336,
0.0564479306,
-0.1294415295,
0.1043172181,
-0.02201... |
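A small sketch combining the f-string proposed earlier in this thread with the percent-encoding concern raised in the comment above; using `urllib.parse.quote` here is an assumption about the final implementation:

```python
from urllib.parse import quote


def title_to_url(title: str, language: str = "en") -> str:
    # Page names use underscores instead of spaces, and characters such as '"'
    # must be percent-encoded (see https://meta.wikimedia.org/wiki/Help:URL).
    return f"https://{language}.wikipedia.org/wiki/{quote(title.replace(' ', '_'))}"


print(title_to_url('"Hello, World!" program'))
# https://en.wikipedia.org/wiki/%22Hello%2C_World%21%22_program
```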
https://github.com/huggingface/datasets/issues/3396 | Install Audio dependencies to support audio decoding | https://huggingface.co/datasets/projecte-aina/parlament_parla -> works (but we still have to show an audio player)
https://huggingface.co/datasets/openslr -> another issue: `Message: [Errno 2] No such file or directory: '/home/hf/datasets-preview-backend/zip:/asr_javanese/data/00/00004fe6aa.flac'` | ## Dataset viewer issue for '*openslr*', '*projecte-aina/parlament_parla*'
**Link:** *https://huggingface.co/datasets/openslr*
**Link:** *https://huggingface.co/datasets/projecte-aina/parlament_parla*
Error:
```
Status code: 400
Exception: ImportError
Message: To support decoding audio files, ple... | 25 | Install Audio dependencies to support audio decoding
## Dataset viewer issue for '*openslr*', '*projecte-aina/parlament_parla*'
**Link:** *https://huggingface.co/datasets/openslr*
**Link:** *https://huggingface.co/datasets/projecte-aina/parlament_parla*
Error:
```
Status code: 400
Exception: ImportErr... | [
-0.2341928035,
0.0829561949,
-0.1228213683,
0.2838071287,
0.1610811651,
0.0163821392,
-0.0711146891,
-0.0265360139,
-0.3266075253,
0.2293248028,
-0.3968032897,
0.3755854666,
-0.0406985134,
0.0464402959,
-0.1093372628,
-0.2251796871,
0.091982089,
0.3634328246,
-0.1583887637,
-0.... |
https://github.com/huggingface/datasets/issues/3396 | Install Audio dependencies to support audio decoding | But https://huggingface.co/datasets/openslr/viewer does not work
<img width="678" alt="Capture d’écran 2022-04-12 à 13 59 46" src="https://user-images.githubusercontent.com/1676121/162958013-e31ef2ae-f886-47b7-9f27-664ed3d4b5a1.png">
Same issue as #4126:
```
Status code: 400
Exception: TypeError
Mes... | ## Dataset viewer issue for '*openslr*', '*projecte-aina/parlament_parla*'
**Link:** *https://huggingface.co/datasets/openslr*
**Link:** *https://huggingface.co/datasets/projecte-aina/parlament_parla*
Error:
```
Status code: 400
Exception: ImportError
Message: To support decoding audio files, ple... | 34 | Install Audio dependencies to support audio decoding
## Dataset viewer issue for '*openslr*', '*projecte-aina/parlament_parla*'
**Link:** *https://huggingface.co/datasets/openslr*
**Link:** *https://huggingface.co/datasets/projecte-aina/parlament_parla*
Error:
```
Status code: 400
Exception: ImportErr... | [
-0.2965098321,
0.1264424175,
-0.0777170882,
0.2870647907,
0.151958853,
0.0553317517,
0.1448055059,
0.0143223889,
-0.28875947,
0.2315077037,
-0.3266481757,
0.4537605643,
-0.1193321347,
0.0719230473,
-0.1263938099,
-0.1828403771,
0.0877652392,
0.493055284,
-0.3374743462,
-0.12848... |
https://github.com/huggingface/datasets/issues/3394 | Preserve all feature types when saving a dataset on the Hub with `push_to_hub` | According to this [comment in the forum](https://discuss.huggingface.co/t/save-datasetdict-to-huggingface-hub/12075/8?u=lhoestq), using `push_to_hub` on a dataset with `ClassLabel` can also make the feature simply disappear when it's reloaded ! | Currently, if one of the dataset features is of type `ClassLabel`, saving the dataset with `push_to_hub` and reloading the dataset with `load_dataset` will return the feature of type `Value`. To fix this, we should do something similar to `save_to_disk` (which correctly preserves the types) and not only push the parque... | 25 | Preserve all feature types when saving a dataset on the Hub with `push_to_hub`
Currently, if one of the dataset features is of type `ClassLabel`, saving the dataset with `push_to_hub` and reloading the dataset with `load_dataset` will return the feature of type `Value`. To fix this, we should do something similar to ... | [
-0.2111832201,
-0.421753943,
0.0569307096,
0.3006316423,
0.2904073596,
-0.0126073407,
0.1880396903,
0.2776143551,
0.01033042,
0.0954096094,
-0.4370885491,
0.4137707353,
-0.1050539538,
0.633710146,
-0.0238210876,
0.1002005562,
0.2585051358,
0.0915538147,
0.1439446658,
-0.1857997... |
https://github.com/huggingface/datasets/issues/3394 | Preserve all feature types when saving a dataset on the Hub with `push_to_hub` | Maybe we can also fix https://github.com/huggingface/datasets/issues/3035 while working on this because, as pointed out in my initial post, `save_to_disk` also saves the `dataset_info.json` file. | Currently, if one of the dataset features is of type `ClassLabel`, saving the dataset with `push_to_hub` and reloading the dataset with `load_dataset` will return the feature of type `Value`. To fix this, we should do something similar to `save_to_disk` (which correctly preserves the types) and not only push the parque... | 24 | Preserve all feature types when saving a dataset on the Hub with `push_to_hub`
Currently, if one of the dataset features is of type `ClassLabel`, saving the dataset with `push_to_hub` and reloading the dataset with `load_dataset` will return the feature of type `Value`. To fix this, we should do something similar to ... | [
-0.3021219969,
-0.2699122429,
0.0240607709,
0.3232356012,
0.3119442165,
-0.0222277734,
0.216617316,
0.3837988377,
0.0547654517,
0.0118870577,
-0.3860481381,
0.4930749834,
-0.0736066401,
0.6211599112,
0.0524834841,
0.0989935249,
0.2516206205,
0.0786844417,
0.1469024569,
-0.15521... |
https://github.com/huggingface/datasets/issues/3392 | Dataset viewer issue for `dansbecker/hackernews_hiring_posts` | This issue was fixed by me calling `all_datasets.push_to_hub("hackernews_hiring_posts")`.
The previous problems were from calling `all_datasets.save_to_disk` and then pushing with `my_repo.git_add` and `my_repo.push_to_hub`.
| ## Dataset viewer issue for `dansbecker/hackernews_hiring_posts`
**Link:** https://huggingface.co/datasets/dansbecker/hackernews_hiring_posts
*short description of the issue*
Dataset preview not showing for uploaded DatasetDict. See https://discuss.huggingface.co/t/dataset-preview-not-showing-for-uploaded-data... | 22 | Dataset viewer issue for `dansbecker/hackernews_hiring_posts`
## Dataset viewer issue for `dansbecker/hackernews_hiring_posts`
**Link:** https://huggingface.co/datasets/dansbecker/hackernews_hiring_posts
*short description of the issue*
Dataset preview not showing for uploaded DatasetDict. See https://discus... | [
-0.5134432912,
0.1532768458,
0.0064593675,
0.237880677,
0.0082575986,
0.0074706464,
0.2240114957,
0.32554093,
0.0523446687,
0.0919944271,
-0.0460804813,
0.3024221957,
-0.0253489986,
0.0762419403,
0.1139024124,
0.2501677871,
0.1825874299,
0.0068217758,
-0.168663919,
0.0667678714... |
https://github.com/huggingface/datasets/issues/3385 | None batched `with_transform`, `set_transform` | Hi ! Thanks for the suggestion :)
It makes sense to me, and it can surely be implemented by wrapping the user's function to make it a batched function. However I'm not a big fan of the inconsistency it would create with `map`: `with_transform` is batched by default while `map` isn't.
Is there something you would li... | **Is your feature request related to a problem? Please describe.**
A `torch.utils.data.Dataset.__getitem__` operates on a single example.
But 🤗 `Datasets.with_transform` doesn't seem to allow non-batched transform.
**Describe the solution you'd like**
Have a `batched=True` argument in `Datasets.with_transfor... | 69 | None batched `with_transform`, `set_transform`
**Is your feature request related to a problem? Please describe.**
A `torch.utils.data.Dataset.__getitem__` operates on a single example.
But 🤗 `Datasets.with_transform` doesn't seem to allow non-batched transform.
**Describe the solution you'd like**
Have a `... | [
-0.4295309484,
-0.3341989815,
-0.0814519078,
-0.2455640137,
0.1934489012,
-0.1620332301,
0.4565648735,
0.3031438887,
-0.0238189008,
0.0276807845,
0.0125372317,
0.5297407508,
-0.3673674166,
-0.2638860345,
-0.0991920158,
-0.0699826404,
0.1374474168,
0.0499798283,
-0.4633925259,
-... |
https://github.com/huggingface/datasets/issues/3385 | None batched `with_transform`, `set_transform` | Hi @lhoestq ,
Sorry I missed your reply.
I would love to contribute. But I don't know which solution would be the best for this repo.
> However I'm not a big fan of the inconsistency it would create with map: with_transform is batched by default while map isn't.
I agree. What do you think about the alternativ... | **Is your feature request related to a problem? Please describe.**
A `torch.utils.data.Dataset.__getitem__` operates on a single example.
But 🤗 `Datasets.with_transform` doesn't seem to allow non-batched transform.
**Describe the solution you'd like**
Have a `batched=True` argument in `Datasets.with_transfor... | 315 | None batched `with_transform`, `set_transform`
**Is your feature request related to a problem? Please describe.**
A `torch.utils.data.Dataset.__getitem__` operates on a single example.
But 🤗 `Datasets.with_transform` doesn't seem to allow non-batched transform.
**Describe the solution you'd like**
Have a `... | [
-0.4711225033,
-0.3246157765,
-0.0549607687,
-0.1955652386,
0.1590392888,
-0.1022731364,
0.6502953172,
0.3004758656,
0.0288872514,
0.0306590348,
-0.0858125314,
0.6519104838,
-0.2961898148,
-0.3571740985,
-0.0741108432,
-0.0310693197,
0.092642203,
-0.0295144226,
-0.4197269976,
-... |
https://github.com/huggingface/datasets/issues/3385 | None batched `with_transform`, `set_transform` | I like the idea of lazy map. On the other hand we should only have either lazy map or `with_transform` (not both). That's why I'd rather stick with `with_transform` for now (but maybe we can consider it for later major releases like `datasets` v2).
I understand the issue with `with_transform` and `with_format` being... | **Is your feature request related to a problem? Please describe.**
A `torch.utils.data.Dataset.__getitem__` operates on a single example.
But 🤗 `Datasets.with_transform` doesn't seem to allow non-batched transform.
**Describe the solution you'd like**
Have a `batched=True` argument in `Datasets.with_transfor... | 83 | None batched `with_transform`, `set_transform`
**Is your feature request related to a problem? Please describe.**
A `torch.utils.data.Dataset.__getitem__` operates on a single example.
But 🤗 `Datasets.with_transform` doesn't seem to allow non-batched transform.
**Describe the solution you'd like**
Have a `... | [
-0.4687542617,
-0.312884897,
-0.0656989068,
-0.2338286489,
0.2658573389,
-0.1204679683,
0.486851126,
0.4093423486,
-0.1548074335,
-0.0003178421,
-0.1138283834,
0.5173217654,
-0.3530977368,
-0.1026493981,
-0.0778419897,
-0.1407481879,
0.2095866948,
0.0911348164,
-0.4605179131,
0... |
https://github.com/huggingface/datasets/issues/3381 | Unable to load audio_features from common_voice dataset | Hi ! Feel free to access `batch["audio"]["array"]` and `batch["audio"]["sampling_rate"]` instead
`datasets` 1.16 introduced some changes in `common_voice` and now the `path` field is no longer a path to a local file (but rather the path to the file in the archive it's extracted from) | ## Describe the bug
I am not able to load audio features from common_voice dataset
## Steps to reproduce the bug
```
from datasets import load_dataset
import torchaudio
test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
def spe... | 44 | Unable to load audio_features from common_voice dataset
## Describe the bug
I am not able to load audio features from common_voice dataset
## Steps to reproduce the bug
```
from datasets import load_dataset
import torchaudio
test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")
resampler =... | [
-0.2775656283,
-0.3776993155,
-0.0082472647,
0.4291621149,
0.3689377308,
-0.1360182017,
0.2537411451,
0.2359366417,
-0.0651013181,
0.1566340774,
-0.4660598338,
0.40844661,
-0.0926309898,
-0.199382022,
-0.2011776865,
-0.3714813292,
-0.0002535676,
0.0689454377,
-0.0112086162,
-0.... |
https://github.com/huggingface/datasets/issues/3374 | NonMatchingChecksumError for the CLUE:cluewsc2020, chid, c3 and tnews | Seems like the issue still exists:
`Downloading and preparing dataset clue/chid (download: 127.15 MiB, generated: 259.71 MiB, post-processed: Unknown size, total: 386.86 MiB) to /mnt/cache/tanhaochen/.cache/huggingface/datasets/clue/chid/1.0.0/e55b490cb7809dcd8db31b9a87119f2e2ec87cdc060da8a9ac070b070ca3e379...
Trace... | Hi, it seems like there are updates in cluewsc2020, chid, c3 and tnews, since i could not load them due to the checksum error. | 80 | NonMatchingChecksumError for the CLUE:cluewsc2020, chid, c3 and tnews
Hi, it seems like there are updates in cluewsc2020, chid, c3 and tnews, since i could not load them due to the checksum error.
Seems like the issue still exists:
`Downloading and preparing dataset clue/chid (download: 127.15 MiB, generated: 259... | [
0.0205239393,
0.0295573045,
-0.0395496488,
0.0844482556,
0.0949641019,
-0.0198280979,
0.0076568038,
0.3946432173,
-0.0262058005,
0.2592139542,
-0.1515903771,
0.1983184665,
0.0484179668,
-0.0398282781,
-0.2823028564,
0.4482048154,
0.1787071675,
0.0810302421,
-0.2217120677,
-0.00... |
https://github.com/huggingface/datasets/issues/3369 | [Audio] Allow resampling for audio datasets in streaming mode | This requires implementing `cast_column` for iterable datasets, it could be a very nice addition !
<s>It can also be useful to be able to disable the audio/image decoding for the dataset viewer (see PR https://github.com/huggingface/datasets/pull/3430) cc @severo </s>
EDIT: actually following https://github.com/hug... | Many audio datasets like Common Voice always need to be resampled. This can very easily be done in non-streaming mode as follows:
```python
from datasets import load_dataset
ds = load_dataset("common_voice", "ab", split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```
However in strea... | 50 | [Audio] Allow resampling for audio datasets in streaming mode
Many audio datasets like Common Voice always need to be resampled. This can very easily be done in non-streaming mode as follows:
```python
from datasets import load_dataset
ds = load_dataset("common_voice", "ab", split="test")
ds = ds.cast_colum... | [
-0.5598964691,
-0.0399098508,
-0.0683553293,
-0.1406607926,
0.4281196594,
0.0794278383,
-0.0197002161,
0.3021991849,
-0.033088278,
0.3280914128,
-0.4938262999,
0.2927232385,
-0.2910311222,
0.2312466949,
0.0701237321,
-0.5171606541,
0.1303917319,
0.2717285156,
-0.3455243707,
0.1... |
https://github.com/huggingface/datasets/issues/3369 | [Audio] Allow resampling for audio datasets in streaming mode | Just to clarify a bit. This feature is **always** needed when using the common voice dataset in streaming mode. So I think it's quite important | Many audio datasets like Common Voice always need to be resampled. This can very easily be done in non-streaming mode as follows:
```python
from datasets import load_dataset
ds = load_dataset("common_voice", "ab", split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```
However in strea... | 25 | [Audio] Allow resampling for audio datasets in streaming mode
Many audio datasets like Common Voice always need to be resampled. This can very easily be done in non-streaming mode as follows:
```python
from datasets import load_dataset
ds = load_dataset("common_voice", "ab", split="test")
ds = ds.cast_colum... | [
-0.6024314761,
-0.0808148235,
-0.0422009751,
-0.2135421187,
0.3806146681,
0.1127745584,
0.0144633567,
0.2519489825,
0.0138039561,
0.2485650629,
-0.3896241188,
0.1837809682,
-0.3210382164,
0.1334297508,
0.1642042547,
-0.5897549987,
0.0861243829,
0.2284358144,
-0.229306072,
0.167... |
https://github.com/huggingface/datasets/issues/3358 | add new field, and get errors | Hi,
could you please post this question on our [Forum](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests? | after adding new field **tokenized_examples["example_id"]**, and get errors below,
I think it is due to changing data to tensor, and **tokenized_examples["example_id"]** is string list
**all fields**
```
***************** train_dataset 1: Dataset({
features: ['attention_mask', 'end_positions', 'example_id', '... | 19 | add new field, and get errors
after adding new field **tokenized_examples["example_id"]**, and get errors below,
I think it is due to changing data to tensor, and **tokenized_examples["example_id"]** is string list
**all fields**
```
***************** train_dataset 1: Dataset({
features: ['attention_mask', ... | [
0.0446220003,
-0.4576034844,
-0.0584968813,
0.2161904573,
0.3088058829,
0.2219411433,
0.5154497623,
0.3203382194,
-0.0904990807,
0.1890572011,
0.3385350406,
-0.0441404544,
-0.0989081934,
0.1437216103,
0.1367520243,
-0.3522278368,
0.2309164554,
-0.0397231579,
0.359590739,
0.0779... |
https://github.com/huggingface/datasets/issues/3358 | add new field, and get errors | > Hi,
>
> could you please post this question on our [Forum](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests?
ok. | after adding new field **tokenized_examples["example_id"]**, and get errors below,
I think it is due to changing data to tensor, and **tokenized_examples["example_id"]** is string list
**all fields**
```
***************** train_dataset 1: Dataset({
features: ['attention_mask', 'end_positions', 'example_id', '... | 23 | add new field, and get errors
after adding new field **tokenized_examples["example_id"]**, and get errors below,
I think it is due to changing data to tensor, and **tokenized_examples["example_id"]** is string list
**all fields**
```
***************** train_dataset 1: Dataset({
features: ['attention_mask', ... | [
0.0494639538,
-0.4595842957,
-0.0416366979,
0.2271222323,
0.2988955081,
0.2176527977,
0.5312832594,
0.3263185918,
-0.1052214578,
0.1934479922,
0.3319590092,
-0.0534846149,
-0.0954505578,
0.14284724,
0.1321327686,
-0.376542449,
0.2242110372,
-0.0373567082,
0.3674286008,
0.077319... |
https://github.com/huggingface/datasets/issues/3353 | add one field "example_id", but I can't see it in the "comput_loss" function | Hi ! Your function looks fine, I used it to map `squad` locally and it indeed added the `example_id` field correctly.
However I think that in the `compute_loss` method only a subset of the fields are available: the model inputs. Since `example_id` is not a model input (it's not passed as a parameter to the model), the ... | Hi, I add one field **example_id**, but I can't see it in the **comput_loss** function, how can I do this? below is the information of inputs
```
*********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
... | 118 | add one field "example_id", but I can't see it in the "comput_loss" function
Hi, I add one field **example_id**, but I can't see it in the **comput_loss** function, how can I do this? below is the information of inputs
```
*********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
... | [
-0.1465506107,
-0.5799238086,
-0.274071902,
0.1202487126,
0.1545111835,
0.0243577175,
0.518656373,
0.1191996783,
0.2118413895,
0.4652202725,
0.3532586992,
0.3118036389,
0.2009701878,
-0.1237443164,
0.3787087202,
0.0022589611,
-0.0379148796,
0.224128589,
0.2628958523,
-0.1642495... |
https://github.com/huggingface/datasets/issues/3353 | add one field "example_id", but I can't see it in the "comput_loss" function | Hi, I have set **args.remove_unused_columns=False** and **training_args.remove_unused_columns=False**, but the field still isn't included.
```
def main():
argp = HfArgumentParser(TrainingArguments)
# The HfArgumentParser object collects command-line arguments into an object (and provides default valu... | Hi, I add one field **example_id**, but I can't see it in the **comput_loss** function, how can I do this? below is the information of inputs
```
*********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
... | 373 | add one field "example_id", but I can't see it in the "comput_loss" function
Hi, I add one field **example_id**, but I can't see it in the **comput_loss** function, how can I do this? below is the information of inputs
```
*********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
... | [
-0.1465506107,
-0.5799238086,
-0.274071902,
0.1202487126,
0.1545111835,
0.0243577175,
0.518656373,
0.1191996783,
0.2118413895,
0.4652202725,
0.3532586992,
0.3118036389,
0.2009701878,
-0.1237443164,
0.3787087202,
0.0022589611,
-0.0379148796,
0.224128589,
0.2628958523,
-0.1642495... |
https://github.com/huggingface/datasets/issues/3353 | add one field "example_id", but I can't see it in the "comput_loss" function | Hi, I printed the values; they are all set to False, but it still doesn't work.
```
********************* training_args: TrainingArguments(
_n_gpu=1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_find_unused_parameter... | Hi, I add one field **example_id**, but I can't see it in the **comput_loss** function, how can I do this? below is the information of inputs
```
*********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
... | 247 | add one field "example_id", but I can't see it in the "comput_loss" function
Hi, I add one field **example_id**, but I can't see it in the **comput_loss** function, how can I do this? below is the information of inputs
```
*********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
... | [
-0.1465506107,
-0.5799238086,
-0.274071902,
0.1202487126,
0.1545111835,
0.0243577175,
0.518656373,
0.1191996783,
0.2118413895,
0.4652202725,
0.3532586992,
0.3118036389,
0.2009701878,
-0.1237443164,
0.3787087202,
0.0022589611,
-0.0379148796,
0.224128589,
0.2628958523,
-0.1642495... |
https://github.com/huggingface/datasets/issues/3353 | add one field "example_id", but I can't see it in the "comput_loss" function | Hmmm, it might be because the default data collator removes all the fields with `string` type:
https://github.com/huggingface/transformers/blob/4c0dd199c8305903564c2edeae23d294edd4b321/src/transformers/data/data_collator.py#L107-L112
I guess you also need a custom data collator that doesn't remove them. | Hi, I add one field **example_id**, but I can't see it in the **comput_loss** function, how can I do this? below is the information of inputs
```
*********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
... | 30 | add one field "example_id", but I can't see it in the "comput_loss" function
Hi, I add one field **example_id**, but I can't see it in the **comput_loss** function, how can I do this? below is the information of inputs
```
*********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
... | [
-0.1465506107,
-0.5799238086,
-0.274071902,
0.1202487126,
0.1545111835,
0.0243577175,
0.518656373,
0.1191996783,
0.2118413895,
0.4652202725,
0.3532586992,
0.3118036389,
0.2009701878,
-0.1237443164,
0.3787087202,
0.0022589611,
-0.0379148796,
0.224128589,
0.2628958523,
-0.1642495... |
https://github.com/huggingface/datasets/issues/3353 | add one field "example_id", but I can't see it in the "comput_loss" function | I overrode **get_train_dataloader** and removed **_remove_unused_columns**, but it doesn't work.
```
def get_train_dataloader(self) -> DataLoader:
"""
Returns the training :class:`~torch.utils.data.DataLoader`.
Will use no sampler if :obj:`self.train_dataset` does not implement :ob... | Hi, I add one field **example_id**, but I can't see it in the **comput_loss** function, how can I do this? below is the information of inputs
```
*********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
... | 116 | add one field "example_id", but I can't see it in the "comput_loss" function
Hi, I add one field **example_id**, but I can't see it in the **comput_loss** function, how can I do this? below is the information of inputs
```
*********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
... | [
-0.1465506107,
-0.5799238086,
-0.274071902,
0.1202487126,
0.1545111835,
0.0243577175,
0.518656373,
0.1191996783,
0.2118413895,
0.4652202725,
0.3532586992,
0.3118036389,
0.2009701878,
-0.1237443164,
0.3787087202,
0.0022589611,
-0.0379148796,
0.224128589,
0.2628958523,
-0.1642495... |
https://github.com/huggingface/datasets/issues/3353 | add one field "example_id", but I can't see it in the "comput_loss" function | Hi, it works now, thank you.
1. **args.remove_unused_columns=False** and **training_args.remove_unused_columns=False**
2. overwrite **get_train_dataloader**, and remove **_remove_unused_columns**
3. add new fields, which can then be accessed in **inputs**.
```
*********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
... | 26 | add one field "example_id", but I can't see it in the "comput_loss" function
Hi, I add one field **example_id**, but I can't see it in the **comput_loss** function, how can I do this? below is the information of inputs
```
*********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
... | [
-0.1465506107,
-0.5799238086,
-0.274071902,
0.1202487126,
0.1545111835,
0.0243577175,
0.518656373,
0.1191996783,
0.2118413895,
0.4652202725,
0.3532586992,
0.3118036389,
0.2009701878,
-0.1237443164,
0.3787087202,
0.0022589611,
-0.0379148796,
0.224128589,
0.2628958523,
-0.1642495... |
https://github.com/huggingface/datasets/issues/3346 | Failed to convert `string` with pyarrow for QED since 1.15.0 | Actually, re-opening this issue because the error persists
```python
>>> load_dataset("qed")
Downloading and preparing dataset qed/qed (download: 13.43 MiB, generated: 9.70 MiB, post-processed: Unknown size, total: 23.14 MiB) to /home/victor_huggingface_co/.cache/huggingface/datasets/qed/qed/1.0.0/47d8b6f033393aa520... | ## Describe the bug
Loading QED was fine until 1.15.0.
related: bigscience-workshop/promptsource#659, bigscience-workshop/promptsource#670
Not sure where the root cause is, but here are some candidates:
- #3158
- #3120
- #3196
- #2891
## Steps to reproduce the bug
```python
load_dataset("qed")
```
## ... | 222 | Failed to convert `string` with pyarrow for QED since 1.15.0
## Describe the bug
Loading QED was fine until 1.15.0.
related: bigscience-workshop/promptsource#659, bigscience-workshop/promptsource#670
Not sure where the root cause is, but here are some candidates:
- #3158
- #3120
- #3196
- #2891
## Steps t... | [
-0.3966264427,
-0.0293868184,
-0.0101254024,
-0.0253717583,
0.437127471,
-0.2806428671,
0.3386761248,
0.556179285,
-0.4660204649,
0.315726459,
0.0433580056,
0.4963358641,
-0.044431705,
0.2531987429,
-0.0515403748,
-0.0324558951,
0.3282588124,
0.0855189115,
-0.1907031983,
0.1468... |
https://github.com/huggingface/datasets/issues/3345 | Failed to download species_800 from Google Drive zip file | Hi,
the dataset is downloaded normally on my machine. Maybe the URL was down at the time of your download. Could you try again? | ## Describe the bug
One can manually download the zip file on Google Drive, but `load_dataset()` cannot.
related: #3248
## Steps to reproduce the bug
```shell
> python
Python 3.7.12 (default, Sep 5 2021, 08:34:29)
[Clang 11.0.3 (clang-1103.0.32.62)] on darwin
Type "help", "copyright", "credits" or "license" ... | 24 | Failed to download species_800 from Google Drive zip file
## Describe the bug
One can manually download the zip file on Google Drive, but `load_dataset()` cannot.
related: #3248
## Steps to reproduce the bug
```shell
> python
Python 3.7.12 (default, Sep 5 2021, 08:34:29)
[Clang 11.0.3 (clang-1103.0.32.62)] ... | [
-0.3141798079,
-0.0384752676,
-0.0492311679,
0.202085346,
0.1646693647,
0.1617155075,
0.2258754224,
0.2475837171,
0.1551792473,
-0.010348252,
-0.1063770503,
0.2193597704,
-0.0180688519,
0.2488591969,
0.2162835598,
-0.0912972242,
-0.1193725467,
-0.0873610005,
0.1564017832,
0.045... |
https://github.com/huggingface/datasets/issues/3345 | Failed to download species_800 from Google Drive zip file | > Hi,
>
> the dataset is downloaded normally on my machine. Maybe the URL was down at the time of your download. Could you try again?
I have tried that many times with both load_dataset() and a browser almost simultaneously. The browser always works for me while load_dataset() fails. | ## Describe the bug
One can manually download the zip file on Google Drive, but `load_dataset()` cannot.
related: #3248
## Steps to reproduce the bug
```shell
> python
Python 3.7.12 (default, Sep 5 2021, 08:34:29)
[Clang 11.0.3 (clang-1103.0.32.62)] on darwin
Type "help", "copyright", "credits" or "license" ... | 50 | Failed to download species_800 from Google Drive zip file
## Describe the bug
One can manually download the zip file on Google Drive, but `load_dataset()` cannot.
related: #3248
## Steps to reproduce the bug
```shell
> python
Python 3.7.12 (default, Sep 5 2021, 08:34:29)
[Clang 11.0.3 (clang-1103.0.32.62)] ... | [
-0.3141798079,
-0.0384752676,
-0.0492311679,
0.202085346,
0.1646693647,
0.1617155075,
0.2258754224,
0.2475837171,
0.1551792473,
-0.010348252,
-0.1063770503,
0.2193597704,
-0.0180688519,
0.2488591969,
0.2162835598,
-0.0912972242,
-0.1193725467,
-0.0873610005,
0.1564017832,
0.045... |
https://github.com/huggingface/datasets/issues/3345 | Failed to download species_800 from Google Drive zip file | @mariosasko
> the dataset is downloaded normally on my machine. Maybe the URL was down at the time of your download. Could you try again?
I tried yet again just a moment ago. This time I realized that the step `(... post-processed: Unknown size, total: 20.89 MiB) to /Users/mike/.cache/huggingface/datasets/speci... | ## Describe the bug
One can manually download the zip file on Google Drive, but `load_dataset()` cannot.
related: #3248
## Steps to reproduce the bug
```shell
> python
Python 3.7.12 (default, Sep 5 2021, 08:34:29)
[Clang 11.0.3 (clang-1103.0.32.62)] on darwin
Type "help", "copyright", "credits" or "license" ... | 102 | Failed to download species_800 from Google Drive zip file
## Describe the bug
One can manually download the zip file on Google Drive, but `load_dataset()` cannot.
related: #3248
## Steps to reproduce the bug
```shell
> python
Python 3.7.12 (default, Sep 5 2021, 08:34:29)
[Clang 11.0.3 (clang-1103.0.32.62)] ... | [
-0.3141798079,
-0.0384752676,
-0.0492311679,
0.202085346,
0.1646693647,
0.1617155075,
0.2258754224,
0.2475837171,
0.1551792473,
-0.010348252,
-0.1063770503,
0.2193597704,
-0.0180688519,
0.2488591969,
0.2162835598,
-0.0912972242,
-0.1193725467,
-0.0873610005,
0.1564017832,
0.045... |
https://github.com/huggingface/datasets/issues/3341 | Mirror the canonical datasets to the Hugging Face Hub | I created a GitHub project to keep track of what needs to be done:
https://github.com/huggingface/datasets/projects/3
I also store my code in a (private for now) repository at https://github.com/huggingface/mirror_canonical_datasets_on_hub | - [ ] create a repo on https://hf.co/datasets for every canonical dataset
- [ ] on every commit related to a dataset, update the hf.co repo
See https://github.com/huggingface/moon-landing/pull/1562
@SBrandeis: I let you edit this description if needed to precise the intent. | 28 | Mirror the canonical datasets to the Hugging Face Hub
- [ ] create a repo on https://hf.co/datasets for every canonical dataset
- [ ] on every commit related to a dataset, update the hf.co repo
See https://github.com/huggingface/moon-landing/pull/1562
@SBrandeis: I let you edit this description if needed to pr... | [
-0.1399023533,
-0.2282226682,
-0.0447379313,
-0.0688015744,
-0.0952057987,
-0.0315396152,
0.2022988647,
0.3611529469,
0.2504813671,
0.2474522144,
-0.2225716561,
-0.0466736741,
-0.0990656614,
0.4645624757,
0.1602292359,
0.1148871407,
0.1948557794,
0.1492443234,
-0.3303130269,
-0... |
https://github.com/huggingface/datasets/issues/3341 | Mirror the canonical datasets to the Hugging Face Hub | I understand that the datasets are mirrored on the Hub now, right? Might I close @lhoestq @SBrandeis? | - [ ] create a repo on https://hf.co/datasets for every canonical dataset
- [ ] on every commit related to a dataset, update the hf.co repo
See https://github.com/huggingface/moon-landing/pull/1562
@SBrandeis: I let you edit this description if needed to precise the intent. | 17 | Mirror the canonical datasets to the Hugging Face Hub
- [ ] create a repo on https://hf.co/datasets for every canonical dataset
- [ ] on every commit related to a dataset, update the hf.co repo
See https://github.com/huggingface/moon-landing/pull/1562
@SBrandeis: I let you edit this description if needed to pr... | [
-0.0218214002,
-0.2926822007,
-0.0275672581,
-0.1817803234,
-0.1919669509,
0.0057732016,
0.216266185,
0.2935411334,
0.2686594427,
0.1937051266,
-0.3167091608,
-0.1450394541,
0.0045881737,
0.3757871985,
0.0550457276,
0.1298285872,
0.1976609677,
0.1898720264,
-0.2652083933,
-0.28... |
https://github.com/huggingface/datasets/issues/3339 | to_tf_dataset fails on TPU | This might be related to https://github.com/tensorflow/tensorflow/issues/38762 , what do you think @Rocketknight1 ?
> Dataset.from_generator is expected to not work with TPUs as it uses py_function underneath which is incompatible with Cloud TPU 2VM setup. If you would like to read from large datasets, maybe try to ma... | Using `to_tf_dataset` to create a dataset and then putting it in `model.fit` results in an internal error on TPUs. I've only tried on Colab and Kaggle TPUs, not GCP TPUs.
## Steps to reproduce the bug
I made a colab to show the error. https://colab.research.google.com/drive/12x_PFKzGouFxqD4OuWfnycW_1TaT276z?usp=s... | 55 | to_tf_dataset fails on TPU
Using `to_tf_dataset` to create a dataset and then putting it in `model.fit` results in an internal error on TPUs. I've only tried on Colab and Kaggle TPUs, not GCP TPUs.
## Steps to reproduce the bug
I made a colab to show the error. https://colab.research.google.com/drive/12x_PFKzGo... | [
-0.2377156913,
-0.1358828992,
0.1592501402,
0.2036827803,
0.4153629541,
0.0100355959,
0.3643489182,
0.0257144328,
-0.4569853246,
0.1801205426,
-0.0554240905,
0.2911762893,
0.2514830828,
0.4929235876,
0.1305931658,
-0.0885017514,
-0.14083004,
0.0494023189,
-0.0877252072,
-0.1319... |
https://github.com/huggingface/datasets/issues/3339 | to_tf_dataset fails on TPU | Hi @lhoestq @nbroad1881, I think it's very similar, yes. Unfortunately `to_tf_dataset` uses `tf.numpy_function` which can't be compiled - this is a necessary evil to load from the underlying Arrow dataset. We need to update the notebooks/examples to clarify that this won't work, or to identify a workaround. You may be ... | Using `to_tf_dataset` to create a dataset and then putting it in `model.fit` results in an internal error on TPUs. I've only tried on Colab and Kaggle TPUs, not GCP TPUs.
## Steps to reproduce the bug
I made a colab to show the error. https://colab.research.google.com/drive/12x_PFKzGouFxqD4OuWfnycW_1TaT276z?usp=s... | 73 | to_tf_dataset fails on TPU
Using `to_tf_dataset` to create a dataset and then putting it in `model.fit` results in an internal error on TPUs. I've only tried on Colab and Kaggle TPUs, not GCP TPUs.
## Steps to reproduce the bug
I made a colab to show the error. https://colab.research.google.com/drive/12x_PFKzGo... | [
-0.2377156913,
-0.1358828992,
0.1592501402,
0.2036827803,
0.4153629541,
0.0100355959,
0.3643489182,
0.0257144328,
-0.4569853246,
0.1801205426,
-0.0554240905,
0.2911762893,
0.2514830828,
0.4929235876,
0.1305931658,
-0.0885017514,
-0.14083004,
0.0494023189,
-0.0877252072,
-0.1319... |
https://github.com/huggingface/datasets/issues/3339 | to_tf_dataset fails on TPU | Thank you for the explanation. I didn't realize the nuances of `tf.numpy_function`. In this scenario, would it be better to use `export(format='tfrecord')` ? It's not quite the same, but for very large datasets that don't fit in memory it looks like it is the only option. I haven't used `export` before, but I do recal... | Using `to_tf_dataset` to create a dataset and then putting it in `model.fit` results in an internal error on TPUs. I've only tried on Colab and Kaggle TPUs, not GCP TPUs.
## Steps to reproduce the bug
I made a colab to show the error. https://colab.research.google.com/drive/12x_PFKzGouFxqD4OuWfnycW_1TaT276z?usp=s... | 163 | to_tf_dataset fails on TPU
Using `to_tf_dataset` to create a dataset and then putting it in `model.fit` results in an internal error on TPUs. I've only tried on Colab and Kaggle TPUs, not GCP TPUs.
## Steps to reproduce the bug
I made a colab to show the error. https://colab.research.google.com/drive/12x_PFKzGo... | [
-0.2377156913,
-0.1358828992,
0.1592501402,
0.2036827803,
0.4153629541,
0.0100355959,
0.3643489182,
0.0257144328,
-0.4569853246,
0.1801205426,
-0.0554240905,
0.2911762893,
0.2514830828,
0.4929235876,
0.1305931658,
-0.0885017514,
-0.14083004,
0.0494023189,
-0.0877252072,
-0.1319... |
https://github.com/huggingface/datasets/issues/3339 | to_tf_dataset fails on TPU | Yeah, this is something we really should have a proper guide on. I'll make a note to test some things and make a 'TF TPU best practices' notebook at some point, but in the meantime I think your solution of exporting TFRecords will probably work. | Using `to_tf_dataset` to create a dataset and then putting it in `model.fit` results in an internal error on TPUs. I've only tried on Colab and Kaggle TPUs, not GCP TPUs.
## Steps to reproduce the bug
I made a colab to show the error. https://colab.research.google.com/drive/12x_PFKzGouFxqD4OuWfnycW_1TaT276z?usp=s... | 45 | to_tf_dataset fails on TPU
Using `to_tf_dataset` to create a dataset and then putting it in `model.fit` results in an internal error on TPUs. I've only tried on Colab and Kaggle TPUs, not GCP TPUs.
## Steps to reproduce the bug
I made a colab to show the error. https://colab.research.google.com/drive/12x_PFKzGo... | [
-0.2377156913,
-0.1358828992,
0.1592501402,
0.2036827803,
0.4153629541,
0.0100355959,
0.3643489182,
0.0257144328,
-0.4569853246,
0.1801205426,
-0.0554240905,
0.2911762893,
0.2514830828,
0.4929235876,
0.1305931658,
-0.0885017514,
-0.14083004,
0.0494023189,
-0.0877252072,
-0.1319... |
https://github.com/huggingface/datasets/issues/3337 | Typing of Dataset.__getitem__ could be improved. | Hi ! Thanks for the suggestion, I didn't know about this decorator.
If you are interested in contributing, feel free to open a pull request to add the overload methods for each typing combination :) To assign yourself to this issue, you can comment `#self-assign` in this thread.
`Dataset.__getitem__` is defined right... | ## Describe the bug
The newly added typing for Dataset.__getitem__ is Union[Dict, List]. This makes tools like mypy a bit awkward to use as we need to check the type manually. We could use type overloading to make this easier. [Documentation](https://docs.python.org/3/library/typing.html#typing.overload)
## Steps... | 54 | Typing of Dataset.__getitem__ could be improved.
## Describe the bug
The newly added typing for Dataset.__getitem__ is Union[Dict, List]. This makes tools like mypy a bit awkward to use as we need to check the type manually. We could use type overloading to make this easier. [Documentation](https://docs.python.org... | [
-0.2796022892,
-0.0664095283,
-0.0474868305,
0.262614429,
0.2142491788,
0.0547710843,
0.2549815774,
0.2996378839,
-0.0091785342,
0.0392907523,
0.0025685763,
0.4478225708,
-0.0814645141,
-0.0109510459,
-0.1863645613,
0.1429159939,
-0.0522088856,
-0.0558121763,
-0.0211947504,
-0.... |
https://github.com/huggingface/datasets/issues/3333 | load JSON files, get the errors | Hi ! The message you're getting is not an error. It simply says that your JSON dataset is being prepared to a location in `/root/.cache/huggingface/datasets` | Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs/datasets/loading.html
`dataset = ... | 25 | load JSON files, get the errors
Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs... | [
0.1340622455,
-0.435693115,
-0.0010808157,
0.4408900142,
0.4953264296,
0.118478559,
0.0806621313,
0.37969926,
0.1051829904,
-0.0702866912,
-0.0352998264,
0.1957522631,
-0.0973738283,
0.3266998827,
-0.1525483578,
-0.0933607519,
0.0823304802,
0.074096337,
-0.1414859295,
0.0792015... |
https://github.com/huggingface/datasets/issues/3333 | load JSON files, get the errors | >
but I want to load local JSON file by command
`python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
**squad-retrain-data/train-v2.0.json** is the local JSON file, how to load it and map it to a special structure? | Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs/datasets/loading.html
`dataset = ... | 37 | load JSON files, get the errors
Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs... | [
0.1912293732,
-0.3675380945,
-0.0029149048,
0.5285809636,
0.3912636936,
0.0750025883,
0.0855832845,
0.4642633796,
0.2954299152,
-0.0933140218,
-0.1045986786,
0.2233937085,
0.004662368,
0.2920066416,
-0.0838339925,
-0.1504721642,
0.0743770301,
0.101179719,
-0.0999642834,
0.07780... |
https://github.com/huggingface/datasets/issues/3333 | load JSON files, get the errors | You can load it with `dataset = datasets.load_dataset('json', data_files=args.dataset)` as you said.
Then if you need to apply additional processing to map it to a special structure, you can rename columns or use `dataset.map`. For more information, you can check the documentation here: https://huggingface.co/docs...
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs/datasets/loading.html
`dataset = ... | 59 | load JSON files, get the errors
Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs... | [
0.0703725219,
-0.3985922933,
0.025265472,
0.4247381091,
0.493349582,
0.1074073315,
0.0497881137,
0.4321736097,
0.103668198,
-0.0652427748,
-0.1320444494,
0.3324909508,
-0.0747636333,
0.4676976204,
-0.113608636,
-0.1721077263,
0.0741180331,
0.0452481471,
-0.2256746143,
0.1473302... |
https://github.com/huggingface/datasets/issues/3333 | load JSON files, get the errors | ```
# Dataset selection
if args.dataset.endswith('.json') or args.dataset.endswith('.jsonl'):
dataset_id = None
# Load from local json/jsonl file
dataset = datasets.load_dataset('json', data_files=args.dataset)
# By default, the "json" dataset loader places all examples in the ... | Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs/datasets/loading.html
`dataset = ... | 136 | load JSON files, get the errors
Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs... | [
0.115154922,
-0.4231142402,
0.0051270495,
0.5085046291,
0.3968496621,
0.1231366545,
0.1628357172,
0.3834851384,
0.1097671911,
-0.162881285,
-0.0096488977,
0.3028328419,
-0.1650662273,
0.2532165647,
-0.2283037752,
-0.135631308,
0.1069716215,
0.0491518341,
-0.0499630086,
0.104918... |
https://github.com/huggingface/datasets/issues/3333 | load JSON files, get the errors | If your JSON has the same format as the SQuAD dataset, then you need to pass `field="data"` to `load_dataset`, since the SQuAD format is one big JSON object in which the "data" field contains the list of questions and answers.
```python
dataset = datasets.load_dataset('json', data_files=args.dataset, field="data")
`... | Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs/datasets/loading.html
`dataset = ... | 54 | load JSON files, get the errors
Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs... | [
0.1509884447,
-0.4040397704,
0.009308829,
0.4606098533,
0.5046644211,
0.107641615,
0.1017882675,
0.380048871,
0.0705014169,
-0.0852152482,
-0.0908838287,
0.3143799901,
-0.0701697394,
0.2513686121,
-0.1658848226,
-0.0779561773,
0.0888331905,
0.0130545571,
-0.0961667895,
0.057455... |
https://github.com/huggingface/datasets/issues/3333 | load JSON files, get the errors | Yes, the code works, but the format is not as expected.
```
dataset = datasets.load_dataset('json', data_files=args.dataset, field="data")
```
```
python3 run.py --do_train --task qa --dataset squad --output_dir ./re_trained_model/
```
************ train_dataset: Dataset({
features: ['id', 'title', 'context', ... | Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs/datasets/loading.html
`dataset = ... | 88 | load JSON files, get the errors
Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs... | [
0.2061130106,
-0.4305946231,
0.0149481921,
0.5158005357,
0.5084050298,
0.1291912496,
0.1452688873,
0.4161996543,
0.0887658671,
-0.1716812849,
-0.1046591774,
0.2278075069,
-0.0519552827,
0.2885661125,
-0.1512390822,
-0.1212860569,
0.1154834852,
0.0552953891,
-0.1520932615,
0.095... |
https://github.com/huggingface/datasets/issues/3333 | load JSON files, get the errors | Ok I see, you have the paragraphs so you just need to process them to extract the questions and answers. I think you can process the SQuAD-like data this way:
```python
def process_squad(articles):
out = {
"title": [],
"context": [],
"question": [],
"id": [],
"answers... | Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs/datasets/loading.html
`dataset = ... | 135 | load JSON files, get the errors
Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs... | [
0.2765028477,
-0.3245532811,
-0.0041768397,
0.503570497,
0.491323024,
0.1213673949,
0.123728402,
0.4133349955,
-0.0751962215,
-0.0259798802,
-0.1301123351,
0.3398238719,
-0.0528954938,
0.1487220079,
-0.1727153063,
-0.0546034798,
0.1016581357,
-0.0482199714,
-0.0415603146,
0.007... |
https://github.com/huggingface/datasets/issues/3333 | load JSON files, get the errors | Yes, this works. But how can I get the training output during training on SQuAD with the **Trainer**
for example https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/trainer_qa.py
I want the training inputs, labels, outputs for every epoch and step to produce the training dynamic grap... | Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs/datasets/loading.html
`dataset = ... | 37 | load JSON files, get the errors
Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs... | [
0.163686946,
-0.5511723757,
0.0138233257,
0.5805276036,
0.5453881025,
0.0155081423,
0.1837683618,
0.3460627198,
-0.1402783394,
-0.0444165915,
-0.1037509963,
0.4005067945,
-0.0872832015,
0.3356801867,
-0.0048746583,
-0.1373547316,
-0.0322655141,
0.0883197039,
-0.2120637745,
-0.0... |
https://github.com/huggingface/datasets/issues/3333 | load JSON files, get the errors | I think you may need to implement your own Trainer, from the `QuestionAnsweringTrainer` for example.
This way you can have the flexibility of saving all the inputs/output used at each step | Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs/datasets/loading.html
`dataset = ... | 31 | load JSON files, get the errors
Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs... | [
0.1112693399,
-0.3947230875,
0.0247266795,
0.4891625047,
0.5235649943,
0.0685908943,
0.1418220997,
0.4170047343,
0.050956022,
0.0277666785,
-0.0371602811,
0.2889541984,
-0.2010877877,
0.4082460403,
-0.1074439138,
-0.1509508342,
0.0424837321,
0.1274573952,
-0.2073004991,
0.07874... |
https://github.com/huggingface/datasets/issues/3333 | load JSON files, get the errors | > does there have any function to be overwritten to do this?
ok, I overwrote the compute_loss, thank you. | Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs/datasets/loading.html
`dataset = ... | 19 | load JSON files, get the errors
Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs... | [
0.081510745,
-0.3851127923,
-0.0033681933,
0.5282001495,
0.4719850421,
0.068586424,
0.0534329824,
0.3249766231,
0.1042508632,
-0.0913522989,
-0.0549960695,
0.2146299928,
-0.0638782531,
0.3335618079,
-0.1426368058,
-0.1661891937,
0.1353709847,
0.0852735117,
-0.1390956789,
0.0327... |
https://github.com/huggingface/datasets/issues/3333 | load JSON files, get the errors | Hi, I add one field **example_id**, but I can't see it in the **comput_loss** function, how can I do this? below is the information of inputs
```
*********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
... | Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs/datasets/loading.html
`dataset = ... | 576 | load JSON files, get the errors
Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
change the dateset to load json by refering to https://huggingface.co/docs... | [
0.0734394193,
-0.4256197512,
-0.025549721,
0.5651884675,
0.4744012356,
0.2006401718,
0.1329942197,
0.3065040112,
0.2205163687,
0.0062531428,
0.0286499094,
0.2669722438,
0.0389592573,
0.1274508685,
-0.0082309134,
-0.1110081747,
0.0925866067,
0.1436215639,
-0.0355725028,
0.044425... |
https://github.com/huggingface/datasets/issues/3331 | AttributeError: 'CommunityDatasetModuleFactoryWithoutScript' object has no attribute 'path' | Hi,
the fix was merged and will be available in the next release of `datasets`.
In the meantime, you can use it by installing `datasets` directly from master as follows:
```
pip install git+https://github.com/huggingface/datasets.git
``` | ## Describe the bug
I add a new question answering dataset to huggingface datasets manually. Here is the link: [luozhouyang/question-answering-datasets](https://huggingface.co/datasets/luozhouyang/question-answering-datasets)
But when I load the dataset, an error raised:
```bash
AttributeError: 'CommunityDatas... | 35 | AttributeError: 'CommunityDatasetModuleFactoryWithoutScript' object has no attribute 'path'
## Describe the bug
I add a new question answering dataset to huggingface datasets manually. Here is the link: [luozhouyang/question-answering-datasets](https://huggingface.co/datasets/luozhouyang/question-answering-datasets)... | [
-0.2787340283,
0.0727966651,
0.1128334329,
0.4117526412,
0.4317401648,
-0.0501596369,
0.1598860174,
0.0534961484,
0.1838313639,
0.1735432744,
-0.0451257862,
0.5018624067,
-0.2404233962,
-0.0016095933,
0.0799399465,
-0.0443504453,
-0.0731380209,
0.0257308185,
-0.3315460682,
-0.1... |
https://github.com/huggingface/datasets/issues/3329 | Map function: Type error on iter #999 | Hi, thanks for reporting.
It would be really helpful if you could provide the actual code of the `text_numbers_to_int` function so we can reproduce the error. | ## Describe the bug
Using the map function, it throws a type error on iter #999
Here is the code I am calling:
```
dataset = datasets.load_dataset('squad')
dataset['validation'].map(text_numbers_to_int, input_columns=['context'], fn_kwargs={'column': 'context'})
```
text_numbers_to_int returns the input text ... | 26 | Map function: Type error on iter #999
## Describe the bug
Using the map function, it throws a type error on iter #999
Here is the code I am calling:
```
dataset = datasets.load_dataset('squad')
dataset['validation'].map(text_numbers_to_int, input_columns=['context'], fn_kwargs={'column': 'context'})
```
tex... | [
-0.0695655048,
-0.2543754578,
-0.029409619,
0.3593807817,
0.1476682872,
0.0972299725,
0.3781249523,
0.3375345767,
0.4425684214,
-0.09956678,
0.1078727767,
0.6538008451,
0.1616187543,
0.1190466881,
-0.0926601514,
0.0629927963,
0.0863813534,
0.0088729793,
-0.1706884652,
0.1339479... |
https://github.com/huggingface/datasets/issues/3329 | Map function: Type error on iter #999 | ```
def text_numbers_to_int(text, column=""):
"""
Convert text numbers to int.
:param text: text numbers
:return: int
"""
try:
numbers = find_numbers(text)
if not numbers:
return text
result = ""
i, j = 0, 0
while i < len(text):
... | ## Describe the bug
Using the map function, it throws a type error on iter #999
Here is the code I am calling:
```
dataset = datasets.load_dataset('squad')
dataset['validation'].map(text_numbers_to_int, input_columns=['context'], fn_kwargs={'column': 'context'})
```
text_numbers_to_int returns the input text ... | 91 | Map function: Type error on iter #999
## Describe the bug
Using the map function, it throws a type error on iter #999
Here is the code I am calling:
```
dataset = datasets.load_dataset('squad')
dataset['validation'].map(text_numbers_to_int, input_columns=['context'], fn_kwargs={'column': 'context'})
```
tex... | [
-0.1012620106,
-0.2614865601,
-0.0429167934,
0.360131681,
0.1728987098,
0.0626525655,
0.35847646,
0.3256571293,
0.5001993775,
-0.0529093295,
0.0537561253,
0.6227624416,
0.1381005049,
0.0644423664,
-0.1447207034,
0.016932277,
0.0577133074,
-0.001892627,
-0.1074982211,
0.11142396... |
https://github.com/huggingface/datasets/issues/3329 | Map function: Type error on iter #999 | Maybe this is because of the `return text` line ? I think it should return a dictionary rather than a string | ## Describe the bug
Using the map function, it throws a type error on iter #999
Here is the code I am calling:
```
dataset = datasets.load_dataset('squad')
dataset['validation'].map(text_numbers_to_int, input_columns=['context'], fn_kwargs={'column': 'context'})
```
text_numbers_to_int returns the input text ... | 21 | Map function: Type error on iter #999
## Describe the bug
Using the map function, it throws a type error on iter #999
Here is the code I am calling:
```
dataset = datasets.load_dataset('squad')
dataset['validation'].map(text_numbers_to_int, input_columns=['context'], fn_kwargs={'column': 'context'})
```
tex... | [
-0.0686894432,
-0.2696540356,
-0.0422955789,
0.365426451,
0.1748293936,
0.0543161184,
0.3326847553,
0.3805121481,
0.4761452973,
-0.0788380057,
0.108046487,
0.6700221896,
0.1724000275,
0.1302317828,
-0.1137837693,
0.0445300005,
0.0352328643,
0.0365198441,
-0.1528561115,
0.074783... |
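The comment in the row above points at the likely fix: a function passed to `Dataset.map` has to return a dict of column updates, not a bare string. Below is a minimal sketch of that pattern; the body of `text_numbers_to_int` is a placeholder, since the original implementation only appears truncated above.

```python
from datasets import load_dataset

def text_numbers_to_int(text, column="context"):
    # Placeholder transform: the real find_numbers/replacement logic from the
    # truncated snippet above would go here.
    converted = text
    # `Dataset.map` expects a dict mapping column names to updated values;
    # returning a plain string is the likely cause of the type error reported above.
    return {column: converted}

dataset = load_dataset("squad")
updated = dataset["validation"].map(
    text_numbers_to_int,
    input_columns=["context"],       # the function receives the column value positionally
    fn_kwargs={"column": "context"},
)
```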
https://github.com/huggingface/datasets/issues/3317 | Add desc parameter to Dataset filter method | Hi,
`Dataset.map` allows more generic transforms compared to `Dataset.filter`, whose purpose is very specific (to filter examples based on a condition). That's why I don't think we need the `desc` parameter there for consistency. #3196 has added descriptions to the `Dataset` methods that call `.map` internally, but... | **Is your feature request related to a problem? Please describe.**
As I was filtering very large datasets I noticed the filter method doesn't have the desc parameter which is available in the map method. Why don't we add a desc parameter to the filter method both for consistency and it's nice to give some feedback to ... | 80 | Add desc parameter to Dataset filter method
**Is your feature request related to a problem? Please describe.**
As I was filtering very large datasets I noticed the filter method doesn't have the desc parameter which is available in the map method. Why don't we add a desc parameter to the filter method both for consi... | [
-0.1674710065,
0.142106086,
-0.1380548179,
-0.3212193847,
0.1586352289,
-0.21651645,
-0.0088287583,
0.2359305024,
-0.1663409323,
0.3248026073,
0.245251447,
0.4766329825,
0.0076421858,
0.0727190673,
-0.0954583138,
0.0160770603,
-0.1457529068,
0.0721857771,
-0.15230079,
0.1824496... |
https://github.com/huggingface/datasets/issues/3317 | Add desc parameter to Dataset filter method | I'm personally ok with adding the `desc` parameter actually. Let's say you have different filters: it can be nice to differentiate between the different filters while they're running, no? | **Is your feature request related to a problem? Please describe.**
As I was filtering very large datasets I noticed the filter method doesn't have the desc parameter which is available in the map method. Why don't we add a desc parameter to the filter method both for consistency and it's nice to give some feedback to ... | 30 | Add desc parameter to Dataset filter method
**Is your feature request related to a problem? Please describe.**
As I was filtering very large datasets I noticed the filter method doesn't have the desc parameter which is available in the map method. Why don't we add a desc parameter to the filter method both for consi... | [
-0.1440145373,
0.0003157938,
-0.1841550171,
-0.2966308594,
0.1535660923,
-0.2308935076,
0.0006400757,
0.0876240283,
0.0169918686,
0.4104134738,
0.2275137007,
0.3330960274,
-0.0531127267,
0.1219301149,
-0.1807418466,
-0.0010676922,
-0.2211546749,
-0.0066084312,
0.1080233678,
0.0... |
https://github.com/huggingface/datasets/issues/3317 | Add desc parameter to Dataset filter method | @mariosasko the use case is filtering a dataset prior to tokenization and subsequent training. As the dataset is huge, it's just a matter of giving a user (model trainer) some feedback on what's going on. Otherwise, feedback is given for all steps in training preparation but not for filtering, and the filtering in my ... | **Is your feature request related to a problem? Please describe.**
As I was filtering very large datasets I noticed the filter method doesn't have the desc parameter which is available in the map method. Why don't we add a desc parameter to the filter method both for consistency and it's nice to give some feedback to ... | 96 | Add desc parameter to Dataset filter method
**Is your feature request related to a problem? Please describe.**
As I was filtering very large datasets I noticed the filter method doesn't have the desc parameter which is available in the map method. Why don't we add a desc parameter to the filter method both for consi... | [
-0.2420123369,
0.0004968568,
-0.1222747043,
-0.3864940405,
0.2071845084,
-0.2034219503,
-0.028897021,
0.1976792365,
0.0098760705,
0.3720795214,
0.3573766947,
0.2891174555,
-0.1399621069,
0.1705904007,
-0.0199872553,
0.0162452497,
-0.2642205358,
0.004869265,
0.014310305,
0.14865... |
https://github.com/huggingface/datasets/issues/3317 | Add desc parameter to Dataset filter method | I don't have a strong opinion on that, so having `desc` as a parameter is also OK. | **Is your feature request related to a problem? Please describe.**
As I was filtering very large datasets I noticed the filter method doesn't have the desc parameter which is available in the map method. Why don't we add a desc parameter to the filter method both for consistency and it's nice to give some feedback to ... | 17 | Add desc parameter to Dataset filter method
**Is your feature request related to a problem? Please describe.**
As I was filtering very large datasets I noticed the filter method doesn't have the desc parameter which is available in the map method. Why don't we add a desc parameter to the filter method both for consi... | [
-0.1287522465,
-0.0055353492,
-0.1885538846,
-0.3054766655,
0.1359180063,
-0.272687912,
0.0132565005,
0.1285357624,
0.0402684659,
0.364382267,
0.2256278396,
0.322019726,
-0.0392239057,
0.1355443448,
-0.1702383906,
0.0217238124,
-0.1972773373,
0.0151643213,
0.0741908476,
0.06931... |
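For reference, the usage this thread converges on would look roughly like the sketch below, assuming `Dataset.filter` gains a `desc` keyword analogous to the one `Dataset.map` already exposes; the predicates and descriptions are illustrative only.

```python
from datasets import load_dataset

dataset = load_dataset("squad", split="validation")

# With per-call descriptions, each filtering pass gets its own progress-bar
# label, which makes long preprocessing pipelines easier to follow.
non_empty = dataset.filter(
    lambda example: len(example["context"]) > 0,
    desc="Dropping empty contexts",
)
short_contexts = non_empty.filter(
    lambda example: len(example["context"]) < 2000,
    desc="Dropping very long contexts",
)
```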
https://github.com/huggingface/datasets/issues/3313 | TriviaQA License Mismatch | Hi! You're completely right; this must be mentioned in the dataset card.
If you're interested in contributing, feel free to open a pull request to mention this in the `trivia_qa` dataset card in the "Licensing Information" section at https://github.com/huggingface/datasets/blob/master/datasets/trivia_qa/README.md | ## Describe the bug
TriviaQA Webpage at http://nlp.cs.washington.edu/triviaqa/ says they do not own the copyright to the data. However, Huggingface datasets at https://huggingface.co/datasets/trivia_qa mentions that the dataset is released under Apache License
Is the License Information on HuggingFace correct? | 40 | TriviaQA License Mismatch
## Describe the bug
TriviaQA Webpage at http://nlp.cs.washington.edu/triviaqa/ says they do not own the copyright to the data. However, Huggingface datasets at https://huggingface.co/datasets/trivia_qa mentions that the dataset is released under Apache License
Is the License Informatio... | [
0.098267749,
-0.066150479,
-0.0082703521,
0.3446457088,
-0.202952683,
0.1115984097,
0.0691819564,
0.2062563002,
-0.201384902,
-0.2456243336,
0.0544378459,
0.1735272706,
0.2805500329,
0.0263393838,
0.0973243192,
-0.1761307567,
0.1311206073,
-0.252302587,
-0.1003351584,
-0.058407... |
https://github.com/huggingface/datasets/issues/3310 | Fatal error condition occurred in aws-c-io | Hi! Are you having this issue only with this specific dataset, or does it also happen with other ones like `squad`? | ## Describe the bug
Fatal error when using the library
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
```
## Expected results
No fatal errors
## Actual results
```
Fatal error condition occurred in D:\bld\aws-c-io_1633633258269\work\sou... | 22 | Fatal error condition occurred in aws-c-io
## Describe the bug
Fatal error when using the library
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
```
## Expected results
No fatal errors
## Actual results
```
Fatal error condition occur... | [
-0.4823270142,
-0.0941876471,
-0.1207199469,
0.0980168432,
0.0666415095,
-0.0746589378,
0.2302931845,
0.080716446,
0.0498347208,
0.2464572787,
0.0314400084,
0.6151476502,
0.0799872056,
0.0899449214,
-0.2150688022,
0.0799752995,
0.1774080247,
0.2226745933,
-0.383779645,
0.150604... |
https://github.com/huggingface/datasets/issues/3310 | Fatal error condition occurred in aws-c-io | @lhoestq It also happens on `squad`. It successfully downloads the whole dataset and then crashes with:
```
Fatal error condition occurred in D:\bld\aws-c-io_1633633258269\work\source\event_loop.c:74: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCES... | ## Describe the bug
Fatal error when using the library
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
```
## Expected results
No fatal errors
## Actual results
```
Fatal error condition occurred in D:\bld\aws-c-io_1633633258269\work\sou... | 61 | Fatal error condition occurred in aws-c-io
## Describe the bug
Fatal error when using the library
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
```
## Expected results
No fatal errors
## Actual results
```
Fatal error condition occur... | [
-0.5945265889,
-0.0300624389,
-0.0512790903,
0.2207445651,
0.0420040414,
0.0443693697,
0.0953230858,
0.1442599148,
0.1222059354,
0.1434797347,
-0.0616347119,
0.5850592256,
0.0375699773,
0.1205552444,
-0.3080869317,
0.0348106101,
0.1885121018,
0.2404175103,
-0.4595812559,
0.1869... |
https://github.com/huggingface/datasets/issues/3310 | Fatal error condition occurred in aws-c-io | I see the same error on Windows-10.0.19042 as of a few days ago:
`Fatal error condition occurred in D:\bld\aws-c-io_1633633258269\work\source\event_loop.c:74: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS`
python 3.8.12 ... | ## Describe the bug
Fatal error when using the library
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
```
## Expected results
No fatal errors
## Actual results
```
Fatal error condition occurred in D:\bld\aws-c-io_1633633258269\work\sou... | 95 | Fatal error condition occurred in aws-c-io
## Describe the bug
Fatal error when using the library
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
```
## Expected results
No fatal errors
## Actual results
```
Fatal error condition occur... | [
-0.5499545336,
-0.0519863255,
-0.09508048,
0.0924665034,
0.0359084643,
-0.0551969595,
0.2764567733,
0.1207169741,
0.0653902963,
0.1175083295,
-0.0487395078,
0.6524448991,
0.0359029174,
0.133851558,
-0.2451501042,
0.0767988414,
0.2522391081,
0.2151824683,
-0.4482079744,
0.237342... |
https://github.com/huggingface/datasets/issues/3310 | Fatal error condition occurred in aws-c-io | I'm not sure what `datasets` has to do with a crash that seems related to `aws-c-io`; could it be an issue with your environment? | ## Describe the bug
Fatal error when using the library
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
```
## Expected results
No fatal errors
## Actual results
```
Fatal error condition occurred in D:\bld\aws-c-io_1633633258269\work\sou... | 25 | Fatal error condition occurred in aws-c-io
## Describe the bug
Fatal error when using the library
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
```
## Expected results
No fatal errors
## Actual results
```
Fatal error condition occur... | [
-0.5580906868,
-0.0866051167,
-0.1800108254,
0.2331554592,
0.1390397102,
-0.0578074157,
0.1020180956,
0.1604058444,
0.1959947348,
0.2191379368,
0.0472115204,
0.5035427213,
-0.08148662,
0.1524396539,
-0.0512856543,
-0.0705785975,
0.2270471007,
0.1326992959,
-0.3635615408,
0.2618... |
https://github.com/huggingface/datasets/issues/3310 | Fatal error condition occurred in aws-c-io | > I'm not sure what `datasets` has to do with a crash that seems related to `aws-c-io`; could it be an issue with your environment?
Agreed, this issue is not likely a bug in datasets, since I get the identical error without datasets installed. | ## Describe the bug
Fatal error when using the library
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
```
## Expected results
No fatal errors
## Actual results
```
Fatal error condition occurred in D:\bld\aws-c-io_1633633258269\work\sou... | 45 | Fatal error condition occurred in aws-c-io
## Describe the bug
Fatal error when using the library
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
```
## Expected results
No fatal errors
## Actual results
```
Fatal error condition occur... | [
-0.5877727866,
-0.098560214,
-0.1441142559,
0.2184793502,
0.0923991203,
-0.0479464978,
0.1061487943,
0.1226170287,
0.2112212479,
0.1895913482,
0.0708782971,
0.5784412622,
-0.0558514819,
0.142910853,
-0.1018204689,
-0.060227938,
0.2509334981,
0.148444593,
-0.3987596035,
0.290788... |
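One way to check the observation above, that the crash does not need `datasets` at all, is to exercise the AWS C libraries bundled with `pyarrow` directly. This is a hedged sketch rather than a confirmed minimal reproduction: `pyarrow.fs.S3FileSystem` is only assumed to initialize the same `aws-c-io` event loop that the error message points to.

```python
# Sketch: on an affected conda/Windows environment, touching the S3 filesystem
# layer and exiting may be enough to hit the same fatal error in aws-c-io's
# event_loop.c at interpreter shutdown, with no `datasets` import involved.
from pyarrow.fs import S3FileSystem

fs = S3FileSystem(anonymous=True)  # initializes the bundled AWS event loop group
print("S3 filesystem created:", fs)
# If the environment is affected, the crash happens while the process tears
# down the AWS event loop at exit, not inside the code above.
```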
https://github.com/huggingface/datasets/issues/3310 | Fatal error condition occurred in aws-c-io | Will close this issue: a bug in `aws-c-io` shouldn't be tracked in the `datasets` repo. Nevertheless, it can be useful to know that it happens. Thanks @leehaust @lhoestq | ## Describe the bug
Fatal error when using the library
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
```
## Expected results
No fatal errors
## Actual results
```
Fatal error condition occurred in D:\bld\aws-c-io_1633633258269\work\sou... | 25 | Fatal error condition occurred in aws-c-io
## Describe the bug
Fatal error when using the library
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
```
## Expected results
No fatal errors
## Actual results
```
Fatal error condition occur... | [
-0.4938253164,
-0.0435249135,
-0.1619390398,
0.1124572083,
0.0005887501,
0.0266229678,
0.1630836129,
0.1379013211,
0.1356272846,
0.1860264689,
0.0934361219,
0.5520917177,
0.0804899633,
0.135408476,
-0.1256678998,
0.0155789685,
0.2077867985,
0.2217728347,
-0.3824179471,
0.195904... |