html_url stringlengths 48 51 | title stringlengths 5 268 | comments stringlengths 63 51.8k | body stringlengths 0 36.2k ⌀ | comment_length int64 16 1.52k | text stringlengths 164 54.1k | embeddings list |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/2552 | Keys should be unique error on code_search_net | Hi! Got same error when loading other dataset:
```python3
load_dataset('wikicorpus', 'raw_en')
```
tb:
```pytb
---------------------------------------------------------------------------
DuplicatedKeysError Traceback (most recent call last)
/opt/conda/lib/python3.8/site-packages/datasets... | ## Describe the bug
Loading `code_search_net` seems not possible at the moment.
## Steps to reproduce the bug
```python
>>> load_dataset('code_search_net')
Downloading: 8.50kB [00:00, 3.09MB/s] ... | 91 | Keys should be unique error on code_search_net
## Describe the bug
Loading `code_search_net` seems not possible at the moment.
## Steps to reproduce the bug
```python
>>> load_dataset('code_search_net')
Downloading: 8.50kB [00:00, 3.09MB/s] ... | [
-0.0506566651,
-0.0265310016,
-0.0967086107,
0.3376675844,
0.0938269347,
-0.0051849312,
0.2096367329,
0.2723225057,
0.0978668928,
0.0846779123,
-0.0767667219,
0.3797585964,
-0.1855771989,
0.1312499642,
0.1376766562,
0.0704791695,
-0.0451736115,
0.1832595468,
0.1311877221,
-0.03... |
https://github.com/huggingface/datasets/issues/2552 | Keys should be unique error on code_search_net | The wikicorpus issue has been fixed by https://github.com/huggingface/datasets/pull/2844
We'll do a new release of `datasets` soon :) | ## Describe the bug
Loading `code_search_net` seems not possible at the moment.
## Steps to reproduce the bug
```python
>>> load_dataset('code_search_net')
Downloading: 8.50kB [00:00, 3.09MB/s] ... | 17 | Keys should be unique error on code_search_net
## Describe the bug
Loading `code_search_net` seems not possible at the moment.
## Steps to reproduce the bug
```python
>>> load_dataset('code_search_net')
Downloading: 8.50kB [00:00, 3.09MB/s] ... | [
-0.074281022,
-0.0351796523,
-0.0866822228,
0.358160466,
0.0623262562,
-0.0354926847,
0.1684751809,
0.2639012933,
0.0927660167,
0.1236142963,
-0.0445141606,
0.3784417808,
-0.1519160122,
0.1347225904,
0.1137825102,
0.0393824168,
-0.0278353263,
0.1998657137,
0.100810267,
-0.03489... |
https://github.com/huggingface/datasets/issues/2549 | Handling unlabeled datasets | Hi @nelson-liu,
You can pass the parameter `features` to `load_dataset`: https://huggingface.co/docs/datasets/_modules/datasets/load.html#load_dataset
If you look at the code of the MNLI script you referred in your question (https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi_nli.py#L62-L... | Hi!
Is there a way for datasets to produce unlabeled instances (e.g., the `ClassLabel` can be nullable).
For example, I want to use the MNLI dataset reader ( https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi_nli.py ) on a file that doesn't have the `gold_label` field. I tried setting `"... | 55 | Handling unlabeled datasets
Hi!
Is there a way for datasets to produce unlabeled instances (e.g., the `ClassLabel` can be nullable).
For example, I want to use the MNLI dataset reader ( https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi_nli.py ) on a file that doesn't have the `gold_la... | [
0.0588131547,
0.0781008825,
0.1316252798,
0.3273363113,
0.2503512204,
0.1380116791,
0.7898839712,
-0.1512292027,
0.0572320037,
0.1499076784,
-0.2000777423,
0.5356730223,
-0.2677469552,
0.2440220714,
-0.1644506454,
0.4176734984,
-0.1134414151,
0.226728797,
0.2766295671,
-0.25992... |
https://github.com/huggingface/datasets/issues/2548 | Field order issue in loading json | Hi @luyug, thanks for reporting.
The good news is that we fixed this issue only 9 days ago: #2507.
The patch is already in the master branch of our repository and it will be included in our next `datasets` release version 1.9.0.
Feel free to reopen the issue if the problem persists. | ## Describe the bug
The `load_dataset` function expects columns in alphabetical order when loading json files.
Similar bug was previously reported for csv in #623 and fixed in #684.
## Steps to reproduce the bug
For a json file `j.json`,
```
{"c":321, "a": 1, "b": 2}
```
Running the following,
```
f= data... | 52 | Field order issue in loading json
## Describe the bug
The `load_dataset` function expects columns in alphabetical order when loading json files.
Similar bug was previously reported for csv in #623 and fixed in #684.
## Steps to reproduce the bug
For a json file `j.json`,
```
{"c":321, "a": 1, "b": 2}
```
... | [
0.1825092882,
0.2316933274,
-0.015927596,
0.2105988413,
0.3290199041,
-0.063003853,
0.2235180736,
0.4288537204,
0.009034506,
0.0226683468,
0.0754535049,
0.6574708819,
0.3253485262,
-0.0124665564,
-0.0245099533,
-0.1077707484,
-0.0028558748,
0.2404137999,
-0.0928332657,
0.095793... |
https://github.com/huggingface/datasets/issues/2547 | Dataset load_from_disk is too slow | Hi ! It looks like an issue with the virtual disk you are using.
We load datasets using memory mapping. In general it makes it possible to load very big files instantaneously since it doesn't have to read the file (it just assigns virtual memory to the file on disk).
However there happens to be issues with virtual ... | @lhoestq
## Describe the bug
It's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk, as there are no preprocessing steps, it's only loading it with load_from_disk. I have 96 cpus, however only 1 is used for this, which is inefficient. Moreover, its usage is at 1%... This is happening in t... | 121 | Dataset load_from_disk is too slow
@lhoestq
## Describe the bug
It's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk, as there are no preprocessing steps, it's only loading it with load_from_disk. I have 96 cpus, however only 1 is used for this, which is inefficient. Moreover, its usa... | [
-0.4744635224,
-0.3652115166,
-0.0910321325,
0.507807076,
0.2949646711,
0.0777807012,
0.0872955397,
0.2941163778,
0.5636073947,
0.1137159169,
0.103027083,
0.3100512028,
-0.0364946052,
-0.1310181618,
-0.0510298498,
0.0120531274,
0.2169386297,
0.0340657011,
0.1016837656,
-0.16221... |
https://github.com/huggingface/datasets/issues/2547 | Dataset load_from_disk is too slow | Okay, that's exactly my case, with spot instances... Therefore this isn't something we can change in any way to be able to load the dataset faster? I mean, what do you do internally at huggingface for being able to use spot instances with datasets efficiently? | @lhoestq
## Describe the bug
It's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk, as there are no preprocessing steps, it's only loading it with load_from_disk. I have 96 cpus, however only 1 is used for this, which is inefficient. Moreover, its usage is at 1%... This is happening in t... | 45 | Dataset load_from_disk is too slow
@lhoestq
## Describe the bug
It's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk, as there are no preprocessing steps, it's only loading it with load_from_disk. I have 96 cpus, however only 1 is used for this, which is inefficient. Moreover, its usa... | [
-0.3196371794,
-0.5587564707,
-0.062026605,
0.5160353184,
0.2328600287,
0.0203787778,
0.1246296093,
0.137935251,
0.5566673875,
0.1127060205,
-0.0614802353,
0.3583335876,
-0.0153431511,
0.2113302946,
0.0768190995,
-0.0829385146,
0.2018540651,
0.0174786821,
0.0967766568,
-0.03347... |
https://github.com/huggingface/datasets/issues/2547 | Dataset load_from_disk is too slow | There are no solutions yet unfortunately.
We're still trying to figure out a way to make the loading instantaneous on such disks, I'll keep you posted | @lhoestq
## Describe the bug
It's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk, as there are no preprocessing steps, it's only loading it with load_from_disk. I have 96 cpus, however only 1 is used for this, which is inefficient. Moreover, its usage is at 1%... This is happening in t... | 26 | Dataset load_from_disk is too slow
@lhoestq
## Describe the bug
It's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk, as there are no preprocessing steps, it's only loading it with load_from_disk. I have 96 cpus, however only 1 is used for this, which is inefficient. Moreover, its usa... | [
-0.4694786966,
-0.2423021346,
-0.1178940311,
0.3941161036,
0.2421811819,
0.0529477745,
0.1993057132,
0.2657708228,
0.4733240306,
0.1050991639,
0.0533881672,
0.4494889379,
-0.0244760048,
0.013596829,
-0.0266762953,
-0.013558263,
0.2004589438,
0.0515529886,
0.209393084,
-0.121018... |
https://github.com/huggingface/datasets/issues/2543 | switching some low-level log.info's to log.debug? | Hi @stas00, thanks for pointing out this issue with logging.
I agree that `datasets` can sometimes be too verbose... I can create a PR and we could discuss there the choice of the log levels for different parts of the code. | In https://github.com/huggingface/transformers/pull/12276 we are now changing the examples to have `datasets` on the same log level as `transformers`, so that one setting can do a consistent logging across all involved components.
The trouble is that now we get a ton of these:
```
06/23/2021 12:15:31 - INFO - da... | 41 | switching some low-level log.info's to log.debug?
In https://github.com/huggingface/transformers/pull/12276 we are now changing the examples to have `datasets` on the same log level as `transformers`, so that one setting can do a consistent logging across all involved components.
The trouble is that now we get a t... | [
0.1725228131,
-0.3411974609,
0.0871564299,
0.2024419308,
0.2560845613,
0.1124865711,
0.552028954,
0.3101121783,
-0.0718619227,
-0.3018712401,
-0.0416376032,
0.1757283807,
-0.2178721428,
0.28082937,
0.0503805801,
-0.163519755,
-0.0827341378,
-0.0110471826,
-0.2270590812,
-0.0930... |
https://github.com/huggingface/datasets/issues/2542 | `datasets.keyhash.DuplicatedKeysError` for `drop` and `adversarial_qa/adversarialQA` | Hi @VictorSanh, thank you for reporting this issue with duplicated keys.
- The issue with "adversarial_qa" was fixed 23 days ago: #2433. Current version of `datasets` (1.8.0) includes the patch.
- I am investigating the issue with `drop`. I'll ping you to keep you informed. | ## Describe the bug
Failure to generate the datasets (`drop` and subset `adversarialQA` from `adversarial_qa`) because of duplicate keys.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("drop")
load_dataset("adversarial_qa", "adversarialQA")
```
## Expected results
Th... | 45 | `datasets.keyhash.DuplicatedKeysError` for `drop` and `adversarial_qa/adversarialQA`
## Describe the bug
Failure to generate the datasets (`drop` and subset `adversarialQA` from `adversarial_qa`) because of duplicate keys.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("... | [
-0.090508908,
-0.1160068065,
0.0094552077,
0.470907867,
0.0840875059,
0.0286584627,
0.3028094172,
0.2397652566,
0.1277972013,
0.1814350933,
-0.1439214945,
0.505746901,
-0.0482960269,
0.1198931336,
0.0266698692,
-0.0327850543,
0.0012849018,
0.0136726182,
-0.2982800603,
-0.076210... |
https://github.com/huggingface/datasets/issues/2542 | `datasets.keyhash.DuplicatedKeysError` for `drop` and `adversarial_qa/adversarialQA` | Hi @VictorSanh, the issue is already fixed and merged into master branch and will be included in our next release version 1.9.0. | ## Describe the bug
Failure to generate the datasets (`drop` and subset `adversarialQA` from `adversarial_qa`) because of duplicate keys.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("drop")
load_dataset("adversarial_qa", "adversarialQA")
```
## Expected results
Th... | 22 | `datasets.keyhash.DuplicatedKeysError` for `drop` and `adversarial_qa/adversarialQA`
## Describe the bug
Failure to generate the datasets (`drop` and subset `adversarialQA` from `adversarial_qa`) because of duplicate keys.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("... | [
-0.090508908,
-0.1160068065,
0.0094552077,
0.470907867,
0.0840875059,
0.0286584627,
0.3028094172,
0.2397652566,
0.1277972013,
0.1814350933,
-0.1439214945,
0.505746901,
-0.0482960269,
0.1198931336,
0.0266698692,
-0.0327850543,
0.0012849018,
0.0136726182,
-0.2982800603,
-0.076210... |
https://github.com/huggingface/datasets/issues/2538 | Loading partial dataset when debugging | Hi ! `load_dataset` downloads the full dataset once and caches it, so that subsequent calls to `load_dataset` just reloads the dataset from your disk.
Then when you specify a `split` in `load_dataset`, it will just load the requested split from the disk. If your specified split is a sliced split (e.g. `"train[:10]"`),... | I am using PyTorch Lightning along with datasets (thanks for so many datasets already prepared and the great splits).
Every time I execute load_dataset for the imdb dataset it takes some time even if I specify a split involving very few samples. I guess this due to hashing as per the other issues.
Is there a wa... | 98 | Loading partial dataset when debugging
I am using PyTorch Lightning along with datasets (thanks for so many datasets already prepared and the great splits).
Every time I execute load_dataset for the imdb dataset it takes some time even if I specify a split involving very few samples. I guess this due to hashing ... | [
-0.3671889305,
-0.1431687474,
-0.0216083545,
0.3622044325,
-0.0675304532,
0.2756692767,
0.5740377307,
0.4168040752,
0.2612754405,
-0.0835102722,
-0.0828445926,
0.1403205097,
-0.173921138,
0.182088986,
0.0995866358,
0.1087071821,
-0.1426477581,
0.3165221214,
0.0734355748,
-0.078... |
https://github.com/huggingface/datasets/issues/2538 | Loading partial dataset when debugging | Hi @reachtarunhere.
Besides the above insights provided by @lhoestq and @thomwolf, there is also a Dataset feature in progress (I plan to finish it this week): #2249, which will allow you, when calling `load_dataset`, to pass the option to download/preprocess/cache only some specific split(s), which will definitely ... | I am using PyTorch Lightning along with datasets (thanks for so many datasets already prepared and the great splits).
Every time I execute load_dataset for the imdb dataset it takes some time even if I specify a split involving very few samples. I guess this due to hashing as per the other issues.
Is there a wa... | 71 | Loading partial dataset when debugging
I am using PyTorch Lightning along with datasets (thanks for so many datasets already prepared and the great splits).
Every time I execute load_dataset for the imdb dataset it takes some time even if I specify a split involving very few samples. I guess this due to hashing ... | [
-0.4255476296,
-0.1098934934,
-0.0348532721,
0.3032792509,
-0.0704116672,
0.2823928893,
0.5759146214,
0.4311379492,
0.3011429608,
-0.0897591561,
-0.0946434364,
0.1700279862,
-0.1675224304,
0.2383287847,
0.047568839,
0.0998504087,
-0.1617254913,
0.3242846727,
0.0387710556,
-0.06... |
https://github.com/huggingface/datasets/issues/2538 | Loading partial dataset when debugging | Thanks all for responding.
Hey @albertvillanova
Thanks. Yes, I would be interested.
@lhoestq I think even if a small split is specified it loads up the full dataset from the disk (please correct me if this is not the case). Because it does seem to be slow to me even on subsequent calls. There is no repeated d... | I am using PyTorch Lightning along with datasets (thanks for so many datasets already prepared and the great splits).
Every time I execute load_dataset for the imdb dataset it takes some time even if I specify a split involving very few samples. I guess this due to hashing as per the other issues.
Is there a wa... | 85 | Loading partial dataset when debugging
I am using PyTorch Lightning along with datasets (thanks for so many datasets already prepared and the great splits).
Every time I execute load_dataset for the imdb dataset it takes some time even if I specify a split involving very few samples. I guess this due to hashing ... | [
-0.4159125686,
-0.1481461972,
-0.0125231082,
0.3571515083,
-0.0992515311,
0.2774940729,
0.5004059076,
0.4125295281,
0.3049232364,
-0.1691030711,
-0.0966694504,
0.120726794,
-0.1135294139,
0.2518862188,
0.1217294335,
0.0623837635,
-0.1812787652,
0.3625116944,
0.0558306128,
-0.12... |
https://github.com/huggingface/datasets/issues/2532 | Tokenizer's normalization preprocessor cause misalignment in return_offsets_mapping for tokenizer classification task | Hi @jerryIsHere, thanks for reporting the issue. But are you sure this is a bug in HuggingFace **Datasets**? | [This colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) implements a token classification input pipeline extending the logic from [this hugging example](https://huggingface.co/transformers/custom_datasets.html#tok-ner).
The pipeline works fine with most instance i... | 18 | Tokenizer's normalization preprocessor cause misalignment in return_offsets_mapping for tokenizer classification task
[This colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) implements a token classification input pipeline extending the logic from [this hugging exa... | [
-0.1536965072,
0.0790130571,
0.073781766,
-0.0382317975,
0.0921574235,
-0.3246813118,
-0.0899813995,
0.1060318649,
-0.4617609084,
0.1986739784,
-0.2085372806,
0.3074204922,
0.2766765952,
0.0308850538,
-0.2191912085,
-0.1632775217,
0.1479932666,
0.219057247,
0.0056233373,
0.0611... |
https://github.com/huggingface/datasets/issues/2532 | Tokenizer's normalization preprocessor cause misalignment in return_offsets_mapping for tokenizer classification task | > Hi @jerryIsHere, thanks for reporting the issue. But are you sure this is a bug in HuggingFace **Datasets**?
Oh, I am sorry
I would reopen the post on huggingface/transformers | [This colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) implements a token classification input pipeline extending the logic from [this hugging example](https://huggingface.co/transformers/custom_datasets.html#tok-ner).
The pipeline works fine with most instance i... | 30 | Tokenizer's normalization preprocessor cause misalignment in return_offsets_mapping for tokenizer classification task
[This colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) implements a token classification input pipeline extending the logic from [this hugging exa... | [
-0.1536965072,
0.0790130571,
0.073781766,
-0.0382317975,
0.0921574235,
-0.3246813118,
-0.0899813995,
0.1060318649,
-0.4617609084,
0.1986739784,
-0.2085372806,
0.3074204922,
0.2766765952,
0.0308850538,
-0.2191912085,
-0.1632775217,
0.1479932666,
0.219057247,
0.0056233373,
0.0611... |
https://github.com/huggingface/datasets/issues/2526 | Add COCO datasets | I'm currently adding it, the entire dataset is quite big around 30 GB so I add splits separately. You can take a look here https://huggingface.co/datasets/merve/coco | ## Adding a Dataset
- **Name:** COCO
- **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset.
- **Paper + website:** https://cocodataset.org/#home
- **Data:** https://cocodataset.org/#download
- **Motivation:** It would be great to have COCO available in HuggingFace datasets... | 25 | Add COCO datasets
## Adding a Dataset
- **Name:** COCO
- **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset.
- **Paper + website:** https://cocodataset.org/#home
- **Data:** https://cocodataset.org/#download
- **Motivation:** It would be great to have COCO available in ... | [
-0.215303123,
-0.4916904569,
-0.1370099187,
0.2117820531,
0.1525026858,
0.0362498686,
0.1267591417,
0.1192202494,
-0.1120572686,
0.2191454768,
-0.5324268937,
0.1477140337,
-0.2628600895,
0.4540187418,
0.1149193421,
-0.3077385724,
0.0416929424,
-0.0030285954,
-0.370855242,
0.015... |
https://github.com/huggingface/datasets/issues/2526 | Add COCO datasets | I talked to @lhoestq and it's best if I download this dataset through TensorFlow datasets instead, so I'll be implementing that one really soon.
@NielsRogge | ## Adding a Dataset
- **Name:** COCO
- **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset.
- **Paper + website:** https://cocodataset.org/#home
- **Data:** https://cocodataset.org/#download
- **Motivation:** It would be great to have COCO available in HuggingFace datasets... | 25 | Add COCO datasets
## Adding a Dataset
- **Name:** COCO
- **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset.
- **Paper + website:** https://cocodataset.org/#home
- **Data:** https://cocodataset.org/#download
- **Motivation:** It would be great to have COCO available in ... | [
-0.2278995365,
-0.5181818604,
-0.1593757719,
0.1203790456,
0.178519994,
0.0045879027,
0.1941074729,
0.1587902308,
-0.062192224,
0.2586833239,
-0.4073513746,
0.1429158896,
-0.2488185465,
0.3926751316,
0.1611342579,
-0.2805984318,
0.0450157374,
0.0291468985,
-0.3613948822,
-0.007... |
https://github.com/huggingface/datasets/issues/2526 | Add COCO datasets | I started adding COCO, will be done tomorrow EOD
my work so far https://github.com/merveenoyan/datasets (my fork) | ## Adding a Dataset
- **Name:** COCO
- **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset.
- **Paper + website:** https://cocodataset.org/#home
- **Data:** https://cocodataset.org/#download
- **Motivation:** It would be great to have COCO available in HuggingFace datasets... | 16 | Add COCO datasets
## Adding a Dataset
- **Name:** COCO
- **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset.
- **Paper + website:** https://cocodataset.org/#home
- **Data:** https://cocodataset.org/#download
- **Motivation:** It would be great to have COCO available in ... | [
-0.1830970645,
-0.4954968989,
-0.1293637007,
0.1958060712,
0.2031092346,
0.0522426553,
0.0611664914,
0.1078286245,
-0.1408274025,
0.2152936608,
-0.5061154962,
0.1604238898,
-0.2287673801,
0.4129447937,
0.1122533679,
-0.227375716,
0.0648131445,
-0.0308302697,
-0.3308284283,
-0.0... |
https://github.com/huggingface/datasets/issues/2526 | Add COCO datasets | Hi Merve @merveenoyan , thank you so much for your great contribution! May I ask about the current progress of your implementation? Cuz I see the pull request is still in progess here. Or can I just run the COCO scripts in your fork repo? | ## Adding a Dataset
- **Name:** COCO
- **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset.
- **Paper + website:** https://cocodataset.org/#home
- **Data:** https://cocodataset.org/#download
- **Motivation:** It would be great to have COCO available in HuggingFace datasets... | 45 | Add COCO datasets
## Adding a Dataset
- **Name:** COCO
- **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset.
- **Paper + website:** https://cocodataset.org/#home
- **Data:** https://cocodataset.org/#download
- **Motivation:** It would be great to have COCO available in ... | [
-0.1481827646,
-0.6077491045,
-0.1135477349,
0.1493353993,
0.1304032803,
-0.0674014911,
0.0679598898,
0.0632960498,
-0.1546793282,
0.1748440117,
-0.4082759321,
0.1341958195,
-0.3136062622,
0.4300085306,
0.1826981455,
-0.2286396027,
0.1053287089,
-0.0457623377,
-0.3508669138,
-0... |
https://github.com/huggingface/datasets/issues/2526 | Add COCO datasets | Hello @yixuanren I had another prioritized project about to be merged, but I'll start continuing today will finish up soon. | ## Adding a Dataset
- **Name:** COCO
- **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset.
- **Paper + website:** https://cocodataset.org/#home
- **Data:** https://cocodataset.org/#download
- **Motivation:** It would be great to have COCO available in HuggingFace datasets... | 20 | Add COCO datasets
## Adding a Dataset
- **Name:** COCO
- **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset.
- **Paper + website:** https://cocodataset.org/#home
- **Data:** https://cocodataset.org/#download
- **Motivation:** It would be great to have COCO available in ... | [
-0.2072724253,
-0.5107597113,
-0.1322675347,
0.1920367628,
0.2045855969,
0.0509704351,
0.1211445183,
0.1154923365,
-0.0961643904,
0.2260365784,
-0.4832244515,
0.1559068859,
-0.2563147247,
0.4645494819,
0.1836660206,
-0.3129406571,
0.0753298402,
-0.0195131768,
-0.3688814938,
-0.... |
https://github.com/huggingface/datasets/issues/2526 | Add COCO datasets | > Hello @yixuanren I had another prioritized project about to be merged, but I'll start continuing today will finish up soon.
It's really nice of you!! I see you've commited another version just now | ## Adding a Dataset
- **Name:** COCO
- **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset.
- **Paper + website:** https://cocodataset.org/#home
- **Data:** https://cocodataset.org/#download
- **Motivation:** It would be great to have COCO available in HuggingFace datasets... | 34 | Add COCO datasets
## Adding a Dataset
- **Name:** COCO
- **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset.
- **Paper + website:** https://cocodataset.org/#home
- **Data:** https://cocodataset.org/#download
- **Motivation:** It would be great to have COCO available in ... | [
-0.19347018,
-0.5309682488,
-0.1402341127,
0.1794767529,
0.2048459351,
0.0255118106,
0.1151226386,
0.1192391142,
-0.1104230583,
0.2151785046,
-0.4630050957,
0.1605530083,
-0.2300151885,
0.4517652094,
0.146490857,
-0.3006081283,
0.0694587007,
-0.0106533365,
-0.3615534902,
-0.037... |
https://github.com/huggingface/datasets/issues/2522 | Documentation Mistakes in Dataset: emotion | Hi,
this issue has been already reported in the dataset repo (https://github.com/dair-ai/emotion_dataset/issues/2), so this is a bug on their side. | As per documentation,
Dataset: emotion
Homepage: https://github.com/dair-ai/emotion_dataset
Dataset: https://github.com/huggingface/datasets/blob/master/datasets/emotion/emotion.py
Permalink: https://huggingface.co/datasets/viewer/?dataset=emotion
Emotion is a dataset of English Twitter messages with eight b... | 20 | Documentation Mistakes in Dataset: emotion
As per documentation,
Dataset: emotion
Homepage: https://github.com/dair-ai/emotion_dataset
Dataset: https://github.com/huggingface/datasets/blob/master/datasets/emotion/emotion.py
Permalink: https://huggingface.co/datasets/viewer/?dataset=emotion
Emotion is a dat... | [
0.2212707996,
-0.3492064178,
-0.1087544784,
0.5762393475,
0.2677911222,
0.1737760752,
0.2942092419,
0.1110397875,
-0.1283250749,
0.2038234323,
-0.1274427325,
0.0429524854,
-0.2023336142,
-0.1131505519,
-0.0055955262,
-0.1862745881,
0.0835630596,
-0.0872550011,
0.31988585,
-0.11... |
https://github.com/huggingface/datasets/issues/2522 | Documentation Mistakes in Dataset: emotion | The documentation has another bug in the dataset card [here](https://huggingface.co/datasets/emotion).
In the dataset summary **six** emotions are mentioned: *"six basic emotions: anger, fear, joy, love, sadness, and surprise"*, however, in the datafields section we have only **five**:
```
label: a classification... | As per documentation,
Dataset: emotion
Homepage: https://github.com/dair-ai/emotion_dataset
Dataset: https://github.com/huggingface/datasets/blob/master/datasets/emotion/emotion.py
Permalink: https://huggingface.co/datasets/viewer/?dataset=emotion
Emotion is a dataset of English Twitter messages with eight b... | 57 | Documentation Mistakes in Dataset: emotion
As per documentation,
Dataset: emotion
Homepage: https://github.com/dair-ai/emotion_dataset
Dataset: https://github.com/huggingface/datasets/blob/master/datasets/emotion/emotion.py
Permalink: https://huggingface.co/datasets/viewer/?dataset=emotion
Emotion is a dat... | [
0.2278897613,
-0.3761100173,
-0.0679694563,
0.5594589114,
0.2020364553,
0.2261342257,
0.1969555318,
0.022524396,
0.0012619502,
0.1313014328,
-0.0691814423,
0.011118819,
-0.2228649557,
-0.0502625071,
-0.013823553,
-0.2161300778,
0.069133915,
-0.0917544737,
0.3126777112,
-0.26968... |
https://github.com/huggingface/datasets/issues/2516 | datasets.map pickle issue resulting in invalid mapping function | Hi ! `map` calls `__getstate__` using `dill` to hash your map function. This is used by the caching mechanism to recover previously computed results. That's why you don't see any `__setstate__` call.
Why do you change an attribute of your tokenizer when `__getstate__` is called ? | I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it's correct but it fails when I use it inside a function which is m... | 46 | datasets.map pickle issue resulting in invalid mapping function
I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it... | [
-0.2248876691,
0.1931208223,
0.1127529368,
0.0287751779,
0.0341957025,
-0.323770225,
0.138536334,
0.1950848252,
0.3468084633,
-0.1000181139,
0.1921814382,
0.7253851891,
-0.1929601431,
0.2086699158,
-0.0016191815,
0.0595790707,
0.0705188662,
-0.0354074687,
-0.0051376354,
-0.0640... |
https://github.com/huggingface/datasets/issues/2516 | datasets.map pickle issue resulting in invalid mapping function | @lhoestq because if I try to pickle my custom tokenizer (it contains a pure python pretokenization step in an otherwise rust backed tokenizer) I get
> Exception: Error while attempting to pickle Tokenizer: Custom PreTokenizer cannot be serialized
So I remove the Custom PreTokenizer in `__getstate__` and then rest... | I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it's correct but it fails when I use it inside a function which is m... | 121 | datasets.map pickle issue resulting in invalid mapping function
I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it... | [
-0.2248876691,
0.1931208223,
0.1127529368,
0.0287751779,
0.0341957025,
-0.323770225,
0.138536334,
0.1950848252,
0.3468084633,
-0.1000181139,
0.1921814382,
0.7253851891,
-0.1929601431,
0.2086699158,
-0.0016191815,
0.0595790707,
0.0705188662,
-0.0354074687,
-0.0051376354,
-0.0640... |
https://github.com/huggingface/datasets/issues/2516 | datasets.map pickle issue resulting in invalid mapping function | Actually, maybe I need to deep copy `self.__dict__`? That way `self` isn't modified. That was my intention and I thought it was working - I'll double-check after the weekend. | I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it's correct but it fails when I use it inside a function which is m... | 29 | datasets.map pickle issue resulting in invalid mapping function
I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it... | [
-0.2248876691,
0.1931208223,
0.1127529368,
0.0287751779,
0.0341957025,
-0.323770225,
0.138536334,
0.1950848252,
0.3468084633,
-0.1000181139,
0.1921814382,
0.7253851891,
-0.1929601431,
0.2086699158,
-0.0016191815,
0.0595790707,
0.0705188662,
-0.0354074687,
-0.0051376354,
-0.0640... |
https://github.com/huggingface/datasets/issues/2516 | datasets.map pickle issue resulting in invalid mapping function | Doing a deep copy results in the warning:
> 06/20/2021 16:02:15 - WARNING - datasets.fingerprint - Parameter 'function'=<function tokenize_function at 0x7f1e95f05d40> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms a... | I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it's correct but it fails when I use it inside a function which is m... | 114 | datasets.map pickle issue resulting in invalid mapping function
I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it... | [
-0.2248876691,
0.1931208223,
0.1127529368,
0.0287751779,
0.0341957025,
-0.323770225,
0.138536334,
0.1950848252,
0.3468084633,
-0.1000181139,
0.1921814382,
0.7253851891,
-0.1929601431,
0.2086699158,
-0.0016191815,
0.0595790707,
0.0705188662,
-0.0354074687,
-0.0051376354,
-0.0640... |
https://github.com/huggingface/datasets/issues/2516 | datasets.map pickle issue resulting in invalid mapping function | Looks like there is still an object that is not pickable in your `tokenize_function` function.
You can test if an object can be pickled and hashed by using
```python
from datasets.fingerprint import Hasher
Hasher.hash(my_object)
```
Under the hood it pickles the object to compute its hash, so it calls `__g... | I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it's correct but it fails when I use it inside a function which is m... | 52 | datasets.map pickle issue resulting in invalid mapping function
I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it... | [
-0.2248876691,
0.1931208223,
0.1127529368,
0.0287751779,
0.0341957025,
-0.323770225,
0.138536334,
0.1950848252,
0.3468084633,
-0.1000181139,
0.1921814382,
0.7253851891,
-0.1929601431,
0.2086699158,
-0.0016191815,
0.0595790707,
0.0705188662,
-0.0354074687,
-0.0051376354,
-0.0640... |
https://github.com/huggingface/datasets/issues/2516 | datasets.map pickle issue resulting in invalid mapping function | I figured it out, the problem is deep copy itself uses pickle (unless you implement `__deepcopy__`). So when I changed `__getstate__` it started throwing an error.
I'm sure there's a better way of doing this, but in order to return the `__dict__` without the non-pikelable pre-tokeniser and without modifying self I r... | I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it's correct but it fails when I use it inside a function which is m... | 126 | datasets.map pickle issue resulting in invalid mapping function
I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it... | [
-0.2248876691,
0.1931208223,
0.1127529368,
0.0287751779,
0.0341957025,
-0.323770225,
0.138536334,
0.1950848252,
0.3468084633,
-0.1000181139,
0.1921814382,
0.7253851891,
-0.1929601431,
0.2086699158,
-0.0016191815,
0.0595790707,
0.0705188662,
-0.0354074687,
-0.0051376354,
-0.0640... |
https://github.com/huggingface/datasets/issues/2516 | datasets.map pickle issue resulting in invalid mapping function | I'm glad you figured something out :)
Regarding hashing: we're not using hashing for the same purpose as the python `__hash__` purpose (which is in general for dictionary lookups). For example it is allowed for python hashing to not return the same hash across sessions, while our hashing must return the same hashes ... | I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it's correct but it fails when I use it inside a function which is m... | 61 | datasets.map pickle issue resulting in invalid mapping function
I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it... | [
-0.2248876691,
0.1931208223,
0.1127529368,
0.0287751779,
0.0341957025,
-0.323770225,
0.138536334,
0.1950848252,
0.3468084633,
-0.1000181139,
0.1921814382,
0.7253851891,
-0.1929601431,
0.2086699158,
-0.0016191815,
0.0595790707,
0.0705188662,
-0.0354074687,
-0.0051376354,
-0.0640... |
https://github.com/huggingface/datasets/issues/2514 | Can datasets remove duplicated rows? | Hi ! For now this is probably the best option.
We might add a feature like this in the feature as well.
Do you know any deduplication method that works on arbitrary big datasets without filling up RAM ?
Otherwise we can have do the deduplication in memory like pandas but I feel like this is going to be limiting fo... | **Is your feature request related to a problem? Please describe.**
i find myself more and more relying on datasets just to do all the preprocessing. One thing however, for removing duplicated rows, I couldn't find out how and am always converting datasets to pandas to do that..
**Describe the solution you'd like*... | 63 | Can datasets remove duplicated rows?
**Is your feature request related to a problem? Please describe.**
i find myself more and more relying on datasets just to do all the preprocessing. One thing however, for removing duplicated rows, I couldn't find out how and am always converting datasets to pandas to do that..
... | [
0.1406852454,
-0.1654824317,
-0.1650884897,
0.1540047079,
0.0489432141,
0.2394908667,
0.2769091427,
0.063439168,
-0.3376515508,
0.0192050375,
-0.0341104977,
0.2346978635,
0.1190901622,
0.2062146366,
-0.0816839635,
0.0102524804,
0.0978407338,
0.1727189124,
-0.0850970969,
0.03569... |
https://github.com/huggingface/datasets/issues/2514 | Can datasets remove duplicated rows? | Yes, I'd like to work on this feature once I'm done with #2500, but first I have to do some research, and see if the implementation wouldn't be too complex.
In the meantime, maybe [this lib](https://github.com/TomScheffers/pyarrow_ops) can help. However, note that this lib operates directly on pyarrow tables and rel... | **Is your feature request related to a problem? Please describe.**
i find myself more and more relying on datasets just to do all the preprocessing. One thing however, for removing duplicated rows, I couldn't find out how and am always converting datasets to pandas to do that..
**Describe the solution you'd like*... | 80 | Can datasets remove duplicated rows?
**Is your feature request related to a problem? Please describe.**
i find myself more and more relying on datasets just to do all the preprocessing. One thing however, for removing duplicated rows, I couldn't find out how and am always converting datasets to pandas to do that..
... | [
0.1044108942,
0.0183884911,
-0.1353912205,
0.0949540585,
0.0331350788,
0.1407127231,
0.3450478315,
0.2536357045,
-0.6958327293,
0.0018974398,
-0.0936517492,
0.3721430004,
0.0341933221,
0.1650431752,
0.1942413449,
0.0286320914,
0.1286832541,
0.2465854436,
-0.1124957502,
0.116292... |
https://github.com/huggingface/datasets/issues/2514 | Can datasets remove duplicated rows? | > Hi ! For now this is probably the best option.
> We might add a feature like this in the feature as well.
>
> Do you know any deduplication method that works on arbitrary big datasets without filling up RAM ?
> Otherwise we can have do the deduplication in memory like pandas but I feel like this is going to be l... | **Is your feature request related to a problem? Please describe.**
i find myself more and more relying on datasets just to do all the preprocessing. One thing however, for removing duplicated rows, I couldn't find out how and am always converting datasets to pandas to do that..
**Describe the solution you'd like*... | 119 | Can datasets remove duplicated rows?
**Is your feature request related to a problem? Please describe.**
i find myself more and more relying on datasets just to do all the preprocessing. One thing however, for removing duplicated rows, I couldn't find out how and am always converting datasets to pandas to do that..
... | [
0.1012985483,
-0.1910532117,
-0.1392701715,
0.2864695489,
0.0751882717,
0.203661263,
0.2715583146,
0.0986697078,
-0.3335005939,
-0.0163344052,
-0.0008101578,
0.2826297879,
0.040186666,
0.1977716833,
-0.0249465592,
-0.0297983903,
0.1026339084,
0.1306527257,
-0.1036724672,
0.0501... |
https://github.com/huggingface/datasets/issues/2514 | Can datasets remove duplicated rows? | Hello,
I'm also interested in this feature.
Has there been progress on this issue?
Could we use a similar trick as above, but with a better hashing algorithm like SHA?
We could also use a [bloom filter](https://en.wikipedia.org/wiki/Bloom_filter), should we care a lot about collision in this case? | **Is your feature request related to a problem? Please describe.**
i find myself more and more relying on datasets just to do all the preprocessing. One thing however, for removing duplicated rows, I couldn't find out how and am always converting datasets to pandas to do that..
**Describe the solution you'd like*... | 47 | Can datasets remove duplicated rows?
**Is your feature request related to a problem? Please describe.**
i find myself more and more relying on datasets just to do all the preprocessing. One thing however, for removing duplicated rows, I couldn't find out how and am always converting datasets to pandas to do that..
... | [
0.2065140158,
-0.0604593195,
-0.1443625987,
0.1692011207,
0.0345458202,
0.2040782422,
0.4904040694,
0.1726583242,
-0.3908337951,
-0.0726336539,
0.0128693487,
0.1752269715,
-0.0061636837,
0.2583808899,
0.0183521584,
-0.0009968603,
0.1563222557,
0.094084315,
-0.093430087,
0.02505... |
https://github.com/huggingface/datasets/issues/2514 | Can datasets remove duplicated rows? | For reference, we can get a solution fairly easily if we assume that we can hold in memory all unique values.
```python
from datasets import Dataset
from itertools import cycle
from functools import partial
memory = set()
def is_unique(elem:Any , column: str, memory: set) -> bool:
if elem[column] in mem... | **Is your feature request related to a problem? Please describe.**
i find myself more and more relying on datasets just to do all the preprocessing. One thing however, for removing duplicated rows, I couldn't find out how and am always converting datasets to pandas to do that..
**Describe the solution you'd like*... | 117 | Can datasets remove duplicated rows?
**Is your feature request related to a problem? Please describe.**
i find myself more and more relying on datasets just to do all the preprocessing. One thing however, for removing duplicated rows, I couldn't find out how and am always converting datasets to pandas to do that..
... | [
0.091190882,
-0.1682117432,
-0.112867251,
0.1837811172,
-0.0014150817,
0.3015861809,
0.3887412548,
0.0179417357,
-0.3053526282,
-0.0860437155,
-0.1428192407,
0.2531287074,
-0.0621695705,
0.1245172843,
-0.1006836668,
-0.0454652235,
0.0753492489,
0.0675856248,
-0.0565774441,
-0.0... |
https://github.com/huggingface/datasets/issues/2514 | Can datasets remove duplicated rows? | An approach that works assuming you can hold the all the unique document hashes in memory:
```python
from datasets import load_dataset
def get_hash(example):
"""Get hash of content field."""
return {"hash": hash(example["content"])} # can use any hashing function here
def check_uniques(example, un... | **Is your feature request related to a problem? Please describe.**
i find myself more and more relying on datasets just to do all the preprocessing. One thing however, for removing duplicated rows, I couldn't find out how and am always converting datasets to pandas to do that..
**Describe the solution you'd like*... | 105 | Can datasets remove duplicated rows?
**Is your feature request related to a problem? Please describe.**
i find myself more and more relying on datasets just to do all the preprocessing. One thing however, for removing duplicated rows, I couldn't find out how and am always converting datasets to pandas to do that..
... | [
0.1261447668,
-0.0498199277,
-0.105399482,
0.2112496048,
0.1417158544,
0.3746893108,
0.3247411847,
0.1965274066,
-0.2755320668,
-0.1827452183,
-0.0420246869,
0.314861387,
-0.1531117558,
0.0484705642,
0.0363268927,
0.0288591795,
0.0412531197,
0.0360550955,
-0.0045451298,
-0.0276... |
https://github.com/huggingface/datasets/issues/2511 | Add C4 | Update on this: I'm computing the checksums of the data files. It will be available soon | ## Adding a Dataset
- **Name:** *C4*
- **Description:** *https://github.com/allenai/allennlp/discussions/5056*
- **Paper:** *https://arxiv.org/abs/1910.10683*
- **Data:** *https://huggingface.co/datasets/allenai/c4*
- **Motivation:** *Used a lot for pretraining*
Instructions to add a new dataset can be found [h... | 16 | Add C4
## Adding a Dataset
- **Name:** *C4*
- **Description:** *https://github.com/allenai/allennlp/discussions/5056*
- **Paper:** *https://arxiv.org/abs/1910.10683*
- **Data:** *https://huggingface.co/datasets/allenai/c4*
- **Motivation:** *Used a lot for pretraining*
Instructions to add a new dataset can be... | [
-0.2586115897,
-0.2368932813,
-0.2156255096,
0.0744191855,
0.2085537314,
-0.08748018,
0.1077217609,
0.3031299114,
0.0643012747,
0.3231355846,
-0.2132708281,
0.0666779876,
-0.1542896479,
0.4332844317,
-0.0734489784,
0.030804947,
0.0055631911,
0.2454855293,
-0.2516650259,
-0.0612... |
https://github.com/huggingface/datasets/issues/2508 | Load Image Classification Dataset from Local | Hi ! Is this folder structure a standard, a bit like imagenet ?
In this case maybe we can consider having a dataset loader for cifar-like, imagenet-like, squad-like, conll-like etc. datasets ?
```python
from datasets import load_dataset
my_custom_cifar = load_dataset("cifar_like", data_dir="path/to/data/dir")
``... | **Is your feature request related to a problem? Please describe.**
Yes - we would like to load an image classification dataset with datasets without having to write a custom data loader.
**Describe the solution you'd like**
Given a folder structure with images of each class in each folder, the ability to load th... | 48 | Load Image Classification Dataset from Local
**Is your feature request related to a problem? Please describe.**
Yes - we would like to load an image classification dataset with datasets without having to write a custom data loader.
**Describe the solution you'd like**
Given a folder structure with images of e... | [
-0.1711522639,
-0.1667466462,
0.0313529558,
0.4532006681,
0.2632548511,
-0.0811185464,
0.2285080701,
0.0517845564,
0.3194645047,
0.2681328058,
-0.1980352104,
0.0934314877,
-0.3347459137,
0.394392103,
0.0378464721,
-0.2031165808,
-0.2279978245,
0.2942554057,
-0.1917820275,
-0.21... |
https://github.com/huggingface/datasets/issues/2508 | Load Image Classification Dataset from Local | @lhoestq I think we'll want a generic `image-folder` dataset (same as 'imagenet-like'). This is like `torchvision.datasets.ImageFolder`, and is something vision folks are used to seeing. | **Is your feature request related to a problem? Please describe.**
Yes - we would like to load an image classification dataset with datasets without having to write a custom data loader.
**Describe the solution you'd like**
Given a folder structure with images of each class in each folder, the ability to load th... | 25 | Load Image Classification Dataset from Local
**Is your feature request related to a problem? Please describe.**
Yes - we would like to load an image classification dataset with datasets without having to write a custom data loader.
**Describe the solution you'd like**
Given a folder structure with images of e... | [
-0.2203831226,
-0.1735642999,
0.0379922651,
0.4659894407,
0.1979662925,
-0.0509575307,
0.1801863462,
0.0725657344,
0.3113462031,
0.240524292,
-0.1453477442,
0.141892612,
-0.2927548885,
0.2843792737,
0.0848558024,
-0.2747692764,
-0.2471708357,
0.331314981,
-0.2056817263,
-0.1957... |
https://github.com/huggingface/datasets/issues/2508 | Load Image Classification Dataset from Local | Opening this back up, since I'm planning on tackling this. Already posted a quick version of it on my account on the hub.
```python
from datasets import load_dataset
ds = load_dataset('nateraw/image-folder', data_files='PetImages/')
``` | **Is your feature request related to a problem? Please describe.**
Yes - we would like to load an image classification dataset with datasets without having to write a custom data loader.
**Describe the solution you'd like**
Given a folder structure with images of each class in each folder, the ability to load th... | 33 | Load Image Classification Dataset from Local
**Is your feature request related to a problem? Please describe.**
Yes - we would like to load an image classification dataset with datasets without having to write a custom data loader.
**Describe the solution you'd like**
Given a folder structure with images of e... | [
-0.2027493864,
-0.1886907667,
0.0131783234,
0.4011636078,
0.2593049705,
-0.0279741269,
0.197597295,
0.1451176703,
0.311989516,
0.2857657373,
-0.1935353577,
0.1235473379,
-0.3047413826,
0.3270210028,
0.0650775135,
-0.2292605042,
-0.2278255969,
0.3378815651,
-0.1562045217,
-0.247... |
https://github.com/huggingface/datasets/issues/2503 | SubjQA wrong boolean values in entries | @arnaudstiegler I have just checked that these mismatches are already present in the original dataset: https://github.com/megagonlabs/SubjQA
We are going to contact the dataset owners to report this. | ## Describe the bug
SubjQA seems to have a boolean that's consistently wrong.
It defines:
- question_subj_level: The subjectiviy level of the question (on a 1 to 5 scale with 1 being the most subjective).
- is_ques_subjective: A boolean subjectivity label derived from question_subj_level (i.e., scores below 4 are... | 27 | SubjQA wrong boolean values in entries
## Describe the bug
SubjQA seems to have a boolean that's consistently wrong.
It defines:
- question_subj_level: The subjectiviy level of the question (on a 1 to 5 scale with 1 being the most subjective).
- is_ques_subjective: A boolean subjectivity label derived from ques... | [
0.123565875,
0.0693044141,
0.0270316787,
0.1645201892,
-0.3365960717,
-0.0068990169,
0.0865070522,
0.0452396125,
-0.092797786,
0.1864389032,
-0.0919595808,
0.4202955663,
0.2149701566,
0.2549098432,
-0.2184266746,
0.1229734123,
0.2286489606,
0.1842407286,
-0.13635993,
-0.1547205... |
https://github.com/huggingface/datasets/issues/2503 | SubjQA wrong boolean values in entries | I have:
- opened an issue in their repo: https://github.com/megagonlabs/SubjQA/issues/3
- written an email to all the paper authors | ## Describe the bug
SubjQA seems to have a boolean that's consistently wrong.
It defines:
- question_subj_level: The subjectiviy level of the question (on a 1 to 5 scale with 1 being the most subjective).
- is_ques_subjective: A boolean subjectivity label derived from question_subj_level (i.e., scores below 4 are... | 19 | SubjQA wrong boolean values in entries
## Describe the bug
SubjQA seems to have a boolean that's consistently wrong.
It defines:
- question_subj_level: The subjectiviy level of the question (on a 1 to 5 scale with 1 being the most subjective).
- is_ques_subjective: A boolean subjectivity label derived from ques... | [
0.1710736901,
0.058735244,
0.044212278,
0.0991450548,
-0.318543911,
-0.1043599397,
0.0993456393,
0.0244080443,
-0.1466469467,
0.2234867066,
-0.0614028908,
0.3840119243,
0.227516219,
0.2279732376,
-0.2728486657,
0.1101417467,
0.2479696572,
0.1666609347,
-0.0700225309,
-0.1800619... |
https://github.com/huggingface/datasets/issues/2499 | Python Programming Puzzles | Thanks @VictorSanh!
There's also a [notebook](https://aka.ms/python_puzzles) and [demo](https://aka.ms/python_puzzles_study) available now to try out some of the puzzles | ## Adding a Dataset
- **Name:** Python Programming Puzzles
- **Description:** Programming challenge called programming puzzles, as an objective and comprehensive evaluation of program synthesis
- **Paper:** https://arxiv.org/pdf/2106.05784.pdf
- **Data:** https://github.com/microsoft/PythonProgrammingPuzzles ([Scro... | 17 | Python Programming Puzzles
## Adding a Dataset
- **Name:** Python Programming Puzzles
- **Description:** Programming challenge called programming puzzles, as an objective and comprehensive evaluation of program synthesis
- **Paper:** https://arxiv.org/pdf/2106.05784.pdf
- **Data:** https://github.com/microsoft/P... | [
-0.0690371767,
-0.1561272889,
-0.2759868503,
-0.052750729,
-0.0006478101,
0.0701650679,
0.0195630565,
0.2562189996,
0.0034625744,
0.1276417524,
0.0386163816,
0.3095811903,
-0.3876285553,
0.3575747907,
0.2331802845,
-0.3480177224,
-0.042529773,
-0.004457009,
-0.0990919024,
0.049... |
https://github.com/huggingface/datasets/issues/2498 | Improve torch formatting performance | That’s interesting thanks, let’s see what we can do. Can you detail your last sentence? I’m not sure I understand it well. | **Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia an... | 22 | Improve torch formatting performance
**Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ec... | [
-0.3501353264,
-0.2402968705,
-0.073668085,
0.1903572977,
0.1027541161,
0.1477066576,
0.2599889934,
0.6868165135,
-0.1211307198,
-0.0641197935,
-0.356641978,
0.2431405336,
-0.1145986915,
-0.0332219154,
0.0663354322,
-0.3101110458,
0.0007660564,
0.1003078073,
-0.0678958595,
-0.1... |
https://github.com/huggingface/datasets/issues/2498 | Improve torch formatting performance | Hi ! I just re-ran a quick benchmark and using `to_numpy()` seems to be faster now:
```python
import pyarrow as pa # I used pyarrow 3.0.0
import numpy as np
n, max_length = 1_000, 512
low, high, size = 0, 2 << 16, (n, max_length)
table = pa.Table.from_pydict({
"input_ids": np.random.default_rng(42).in... | **Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia an... | 150 | Improve torch formatting performance
**Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ec... | [
-0.3501353264,
-0.2402968705,
-0.073668085,
0.1903572977,
0.1027541161,
0.1477066576,
0.2599889934,
0.6868165135,
-0.1211307198,
-0.0641197935,
-0.356641978,
0.2431405336,
-0.1145986915,
-0.0332219154,
0.0663354322,
-0.3101110458,
0.0007660564,
0.1003078073,
-0.0678958595,
-0.1... |
https://github.com/huggingface/datasets/issues/2498 | Improve torch formatting performance | Sounds like a plan @lhoestq If you create a PR I'll pick it up and try it out right away! | **Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia an... | 20 | Improve torch formatting performance
**Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ec... | [
-0.3501353264,
-0.2402968705,
-0.073668085,
0.1903572977,
0.1027541161,
0.1477066576,
0.2599889934,
0.6868165135,
-0.1211307198,
-0.0641197935,
-0.356641978,
0.2431405336,
-0.1145986915,
-0.0332219154,
0.0663354322,
-0.3101110458,
0.0007660564,
0.1003078073,
-0.0678958595,
-0.1... |
https://github.com/huggingface/datasets/issues/2498 | Improve torch formatting performance | I’m not exactly sure how to read the graph but it seems that to_categorical take a lot of time here. Could you share more informations on the features/stats of your datasets so we could maybe design a synthetic datasets that looks more similar for debugging testing? | **Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia an... | 46 | Improve torch formatting performance
**Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ec... | [
-0.3501353264,
-0.2402968705,
-0.073668085,
0.1903572977,
0.1027541161,
0.1477066576,
0.2599889934,
0.6868165135,
-0.1211307198,
-0.0641197935,
-0.356641978,
0.2431405336,
-0.1145986915,
-0.0332219154,
0.0663354322,
-0.3101110458,
0.0007660564,
0.1003078073,
-0.0678958595,
-0.1... |
https://github.com/huggingface/datasets/issues/2498 | Improve torch formatting performance | > I’m not exactly sure how to read the graph but it seems that to_categorical take a lot of time here. Could you share more informations on the features/stats of your datasets so we could maybe design a synthetic datasets that looks more similar for debugging testing?
@thomwolf starting from the top, each rectangle ... | **Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia an... | 140 | Improve torch formatting performance
**Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ec... | [
-0.3501353264,
-0.2402968705,
-0.073668085,
0.1903572977,
0.1027541161,
0.1477066576,
0.2599889934,
0.6868165135,
-0.1211307198,
-0.0641197935,
-0.356641978,
0.2431405336,
-0.1145986915,
-0.0332219154,
0.0663354322,
-0.3101110458,
0.0007660564,
0.1003078073,
-0.0678958595,
-0.1... |
https://github.com/huggingface/datasets/issues/2498 | Improve torch formatting performance | @lhoestq the proposed branch is faster, but overall training speedup is a few percentage points. I couldn't figure out how to include the GitHub branch into setup.py, so I couldn't start NVidia optimized Docker-based pre-training run. But on bare metal, there is a slight improvement. I'll do some more performance trac... | **Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia an... | 51 | Improve torch formatting performance
**Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ec... | [
-0.3501353264,
-0.2402968705,
-0.073668085,
0.1903572977,
0.1027541161,
0.1477066576,
0.2599889934,
0.6868165135,
-0.1211307198,
-0.0641197935,
-0.356641978,
0.2431405336,
-0.1145986915,
-0.0332219154,
0.0663354322,
-0.3101110458,
0.0007660564,
0.1003078073,
-0.0678958595,
-0.1... |
https://github.com/huggingface/datasets/issues/2498 | Improve torch formatting performance | Hi @vblagoje, to install Datasets from @lhoestq PR reference #2505, you can use:
```shell
pip install git+ssh://git@github.com/huggingface/datasets.git@refs/pull/2505/head#egg=datasets
``` | **Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia an... | 18 | Improve torch formatting performance
**Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ec... | [
-0.3501353264,
-0.2402968705,
-0.073668085,
0.1903572977,
0.1027541161,
0.1477066576,
0.2599889934,
0.6868165135,
-0.1211307198,
-0.0641197935,
-0.356641978,
0.2431405336,
-0.1145986915,
-0.0332219154,
0.0663354322,
-0.3101110458,
0.0007660564,
0.1003078073,
-0.0678958595,
-0.1... |
https://github.com/huggingface/datasets/issues/2498 | Improve torch formatting performance | Hey @albertvillanova yes thank you, I am aware, I can easily pull it from a terminal command line but then I can't automate docker image builds as dependencies are picked up from setup.py and for some reason setup.py doesn't accept this string format. | **Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia an... | 43 | Improve torch formatting performance
**Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ec... | [
-0.3501353264,
-0.2402968705,
-0.073668085,
0.1903572977,
0.1027541161,
0.1477066576,
0.2599889934,
0.6868165135,
-0.1211307198,
-0.0641197935,
-0.356641978,
0.2431405336,
-0.1145986915,
-0.0332219154,
0.0663354322,
-0.3101110458,
0.0007660564,
0.1003078073,
-0.0678958595,
-0.1... |
https://github.com/huggingface/datasets/issues/2498 | Improve torch formatting performance | @vblagoje in that case, you can add this to your `setup.py`:
```python
install_requires=[
"datasets @ git+ssh://git@github.com/huggingface/datasets.git@refs/pull/2505/head",
``` | **Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia an... | 17 | Improve torch formatting performance
**Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ec... | [
-0.3501353264,
-0.2402968705,
-0.073668085,
0.1903572977,
0.1027541161,
0.1477066576,
0.2599889934,
0.6868165135,
-0.1211307198,
-0.0641197935,
-0.356641978,
0.2431405336,
-0.1145986915,
-0.0332219154,
0.0663354322,
-0.3101110458,
0.0007660564,
0.1003078073,
-0.0678958595,
-0.1... |
https://github.com/huggingface/datasets/issues/2498 | Improve torch formatting performance | @lhoestq @thomwolf @albertvillanova The new approach is definitely faster, dataloader now takes less than 3% cumulative time (pink rectangle two rectangles to the right of tensor.py backward invocation)
 for example. | **Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia an... | 20 | Improve torch formatting performance
**Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ec... | [
-0.3501353264,
-0.2402968705,
-0.073668085,
0.1903572977,
0.1027541161,
0.1477066576,
0.2599889934,
0.6868165135,
-0.1211307198,
-0.0641197935,
-0.356641978,
0.2431405336,
-0.1145986915,
-0.0332219154,
0.0663354322,
-0.3101110458,
0.0007660564,
0.1003078073,
-0.0678958595,
-0.1... |
https://github.com/huggingface/datasets/issues/2481 | Delete extracted files to save disk space | My suggestion for this would be to have this enabled by default.
Plus I don't know if there should be a dedicated issue to that is another functionality. But I propose layered building rather than all at once. That is:
1. uncompress a handful of files via a generator enough to generate one arrow file
2. process ... | As discussed with @stas00 and @lhoestq, allowing the deletion of extracted files would save a great amount of disk space to typical user. | 164 | Delete extracted files to save disk space
As discussed with @stas00 and @lhoestq, allowing the deletion of extracted files would save a great amount of disk space to typical user.
My suggestion for this would be to have this enabled by default.
Plus I don't know if there should be a dedicated issue to that is an... | [
-0.135620892,
-0.3015793264,
-0.1714524329,
0.1553417742,
-0.0760239288,
-0.0403154157,
-0.1220022589,
0.5078598261,
-0.0759533718,
0.5109491348,
0.1839046776,
0.3984466791,
-0.3721125722,
0.2519609332,
-0.1107843146,
-0.191850692,
-0.0977597237,
0.3879250586,
0.085256353,
0.16... |
https://github.com/huggingface/datasets/issues/2480 | Set download/extracted paths configurable | For example to be able to send uncompressed and temp build files to another volume/partition, so that the user gets the minimal disk usage on their primary setup - and ends up with just the downloaded compressed data + arrow files, but outsourcing the huge files and building to another partition. e.g. on JZ there is a... | As discussed with @stas00 and @lhoestq, setting these paths configurable may allow to overcome disk space limitation on different partitions/drives.
TODO:
- [x] Set configurable extracted datasets path: #2487
- [x] Set configurable downloaded datasets path: #2488
- [ ] Set configurable "incomplete" datasets path? | 85 | Set download/extracted paths configurable
As discussed with @stas00 and @lhoestq, setting these paths configurable may allow to overcome disk space limitation on different partitions/drives.
TODO:
- [x] Set configurable extracted datasets path: #2487
- [x] Set configurable downloaded datasets path: #2488
- [ ] ... | [
-0.2962553203,
-0.1055006906,
-0.1821804792,
0.189564541,
0.1187054664,
-0.0878328532,
-0.0424846075,
0.2916614413,
0.0462338626,
0.3479492068,
-0.1803241819,
0.1907629669,
0.0141953118,
0.3997288048,
0.0212998558,
-0.0809284151,
-0.2167209834,
0.2052153945,
-0.2652952969,
0.18... |
https://github.com/huggingface/datasets/issues/2474 | cache_dir parameter for load_from_disk ? | Hi ! `load_from_disk` doesn't move the data. If you specify a local path to your mounted drive, then the dataset is going to be loaded directly from the arrow file in this directory. The cache files that result from `map` operations are also stored in the same directory by default.
However note than writing data to ... | **Is your feature request related to a problem? Please describe.**
When using Google Colab big datasets can be an issue, as they won't fit on the VM's disk. Therefore mounting google drive could be a possible solution. Unfortunatly when loading my own dataset by using the _load_from_disk_ function, the data gets cache... | 84 | cache_dir parameter for load_from_disk ?
**Is your feature request related to a problem? Please describe.**
When using Google Colab big datasets can be an issue, as they won't fit on the VM's disk. Therefore mounting google drive could be a possible solution. Unfortunatly when loading my own dataset by using the _lo... | [
-0.2717246413,
-0.12313582,
-0.0702049285,
0.1266332269,
0.2155743241,
-0.0214381535,
0.2847646773,
0.0848473161,
0.4296783507,
0.3610469997,
-0.1760807782,
0.2474941611,
-0.2383665591,
0.2422405332,
0.2703162432,
0.0099472301,
0.0024950539,
-0.0644661486,
-0.0686380789,
-0.133... |
https://github.com/huggingface/datasets/issues/2474 | cache_dir parameter for load_from_disk ? | Thanks for your answer! I am a little surprised since I just want to read the dataset.
After debugging a bit, I noticed that the VM’s disk fills up when the tables (generator) are converted to a list:
https://github.com/huggingface/datasets/blob/5ba149773d23369617563d752aca922081277ec2/src/datasets/table.py#L850
... | **Is your feature request related to a problem? Please describe.**
When using Google Colab big datasets can be an issue, as they won't fit on the VM's disk. Therefore mounting google drive could be a possible solution. Unfortunatly when loading my own dataset by using the _load_from_disk_ function, the data gets cache... | 69 | cache_dir parameter for load_from_disk ?
**Is your feature request related to a problem? Please describe.**
When using Google Colab big datasets can be an issue, as they won't fit on the VM's disk. Therefore mounting google drive could be a possible solution. Unfortunatly when loading my own dataset by using the _lo... | [
-0.3292522728,
-0.1360468566,
-0.0741690025,
0.3395152092,
0.3054477274,
0.0242110845,
0.1128010526,
0.1102483347,
0.3817077875,
0.4735017121,
-0.1144265458,
0.3308243454,
-0.3873734772,
0.2628678083,
0.3451521695,
0.1399834305,
0.0198713522,
0.0477789417,
-0.0935732573,
-0.179... |
https://github.com/huggingface/datasets/issues/2474 | cache_dir parameter for load_from_disk ? | Indeed reading the data shouldn't increase the VM's disk. Not sure what google colab does under the hood for that to happen | **Is your feature request related to a problem? Please describe.**
When using Google Colab big datasets can be an issue, as they won't fit on the VM's disk. Therefore mounting google drive could be a possible solution. Unfortunatly when loading my own dataset by using the _load_from_disk_ function, the data gets cache... | 22 | cache_dir parameter for load_from_disk ?
**Is your feature request related to a problem? Please describe.**
When using Google Colab big datasets can be an issue, as they won't fit on the VM's disk. Therefore mounting google drive could be a possible solution. Unfortunatly when loading my own dataset by using the _lo... | [
-0.2947949171,
-0.204613179,
-0.1245888248,
0.2407285273,
0.1985379606,
0.0009989187,
0.209419623,
0.0836257339,
0.35598737,
0.4815081656,
-0.1554522514,
0.2485088557,
-0.3452038765,
0.2828160822,
0.2444072217,
0.0840862393,
-0.0638254508,
-0.1021248922,
-0.0527959913,
-0.15004... |
https://github.com/huggingface/datasets/issues/2474 | cache_dir parameter for load_from_disk ? | Apparently, Colab uses a local cache of the data files read/written from Google Drive. See:
- https://github.com/googlecolab/colabtools/issues/2087#issuecomment-860818457
- https://github.com/googlecolab/colabtools/issues/1915#issuecomment-804234540
- https://github.com/googlecolab/colabtools/issues/2147#issuecommen... | **Is your feature request related to a problem? Please describe.**
When using Google Colab big datasets can be an issue, as they won't fit on the VM's disk. Therefore mounting google drive could be a possible solution. Unfortunatly when loading my own dataset by using the _load_from_disk_ function, the data gets cache... | 21 | cache_dir parameter for load_from_disk ?
**Is your feature request related to a problem? Please describe.**
When using Google Colab big datasets can be an issue, as they won't fit on the VM's disk. Therefore mounting google drive could be a possible solution. Unfortunatly when loading my own dataset by using the _lo... | [
-0.3244538009,
-0.1029563844,
-0.074411422,
0.1464906037,
0.1778115034,
-0.0287650023,
0.3205730617,
0.075773336,
0.2688013911,
0.4276996553,
-0.0916887075,
0.3176705539,
-0.2790685296,
0.2631691694,
0.133872062,
0.1159453765,
-0.0612637587,
-0.0113294683,
-0.0055348058,
-0.147... |
https://github.com/huggingface/datasets/issues/2472 | Fix automatic generation of Zenodo DOI | I have received a reply from Zenodo support:
> We are currently investigating and fixing this issue related to GitHub releases. As soon as we have solved it we will reach back to you. | After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as "Received", instead of in green as "Published".
I have contacted Zenodo support to fix this issue.
TODO:
- [x] Check with Zenodo to fix the issue
- [x] Check BibTeX entry is right | 34 | Fix automatic generation of Zenodo DOI
After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as "Received", instead of in green as "Published".
I have contacted Zenodo support to fix this issue.
TODO:
- [x] Check with Zenodo to fix the issue
- [x] ... | [
-0.0967247561,
0.370500803,
-0.0376104824,
0.0805336908,
0.1489465237,
-0.1063509658,
0.3863852024,
0.4031184614,
-0.0611241087,
0.2544852495,
-0.0367732197,
0.1469545811,
0.2090732902,
-0.0825825334,
-0.1057560444,
0.0061599673,
-0.0564027019,
0.3477693498,
-0.0588134006,
-0.3... |
https://github.com/huggingface/datasets/issues/2472 | Fix automatic generation of Zenodo DOI | Other repo maintainers had the same problem with Zenodo.
There is an open issue on their GitHub repo: zenodo/zenodo#2181 | After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as "Received", instead of in green as "Published".
I have contacted Zenodo support to fix this issue.
TODO:
- [x] Check with Zenodo to fix the issue
- [x] Check BibTeX entry is right | 19 | Fix automatic generation of Zenodo DOI
After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as "Received", instead of in green as "Published".
I have contacted Zenodo support to fix this issue.
TODO:
- [x] Check with Zenodo to fix the issue
- [x] ... | [
-0.0932824537,
0.3942429721,
-0.0347407386,
0.0991305783,
0.1498730779,
-0.1066873372,
0.3806300759,
0.3882215619,
-0.0758683383,
0.2535549998,
-0.0645630807,
0.2056422085,
0.2068251818,
-0.0944910571,
-0.1203861758,
0.0168307163,
-0.0314425193,
0.386361748,
-0.0573519394,
-0.3... |
https://github.com/huggingface/datasets/issues/2472 | Fix automatic generation of Zenodo DOI | I have received the following request from Zenodo support:
> Could you send us the link to the repository as well as the release tag?
My reply:
> Sure, here it is:
> - Link to the repository: https://github.com/huggingface/datasets
> - Link to the repository at the release tag: https://github.com/huggingface/dat... | After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as "Received", instead of in green as "Published".
I have contacted Zenodo support to fix this issue.
TODO:
- [x] Check with Zenodo to fix the issue
- [x] Check BibTeX entry is right | 55 | Fix automatic generation of Zenodo DOI
After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as "Received", instead of in green as "Published".
I have contacted Zenodo support to fix this issue.
TODO:
- [x] Check with Zenodo to fix the issue
- [x] ... | [
0.0188328288,
0.2643164992,
-0.0537577644,
0.1654240191,
0.1444076598,
-0.048619885,
0.2956548333,
0.3008949757,
0.0125508206,
0.242185846,
-0.0125960149,
0.1527001113,
0.1531535238,
-0.0105605274,
-0.0150588481,
-0.0113823153,
-0.1902421862,
0.383312881,
0.0192256179,
-0.27282... |
https://github.com/huggingface/datasets/issues/2470 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`. | Hi ! It looks like the issue comes from pyarrow. What version of pyarrow are you using ? How did you install it ? | ## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any ti... | 24 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`.
## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure ... | [
-0.4569717646,
-0.2173909098,
0.0228779931,
0.4219457507,
0.3113080263,
-0.2264820784,
0.2361347973,
0.1525053084,
0.0431260504,
0.3643296063,
0.5303295851,
0.2979788184,
-0.3346508443,
-0.0465700179,
0.0477297306,
-0.0265676696,
0.3659131527,
-0.2182353288,
0.2511512637,
0.358... |
https://github.com/huggingface/datasets/issues/2470 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`. | Thank you for the quick reply! I have `pyarrow==4.0.0`, and I am installing with `pip`. It's not one of my explicit dependencies, so I assume it came along with something else. | ## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any ti... | 31 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`.
## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure ... | [
-0.4569717646,
-0.2173909098,
0.0228779931,
0.4219457507,
0.3113080263,
-0.2264820784,
0.2361347973,
0.1525053084,
0.0431260504,
0.3643296063,
0.5303295851,
0.2979788184,
-0.3346508443,
-0.0465700179,
0.0477297306,
-0.0265676696,
0.3659131527,
-0.2182353288,
0.2511512637,
0.358... |
https://github.com/huggingface/datasets/issues/2470 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`. | Could you trying reinstalling pyarrow with pip ?
I'm not sure why it would check in your multicurtural-sc directory for source files. | ## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any ti... | 22 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`.
## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure ... | [
-0.4569717646,
-0.2173909098,
0.0228779931,
0.4219457507,
0.3113080263,
-0.2264820784,
0.2361347973,
0.1525053084,
0.0431260504,
0.3643296063,
0.5303295851,
0.2979788184,
-0.3346508443,
-0.0465700179,
0.0477297306,
-0.0265676696,
0.3659131527,
-0.2182353288,
0.2511512637,
0.358... |
https://github.com/huggingface/datasets/issues/2470 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`. | Sure! I tried reinstalling to get latest. pip was mad because it looks like Datasets currently wants <4.0.0 (which is interesting, because apparently I ended up with 4.0.0 already?), but I gave it a shot anyway:
```bash
$ pip install --upgrade --force-reinstall pyarrow
Collecting pyarrow
Downloading pyarrow-4.0... | ## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any ti... | 305 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`.
## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure ... | [
-0.4569717646,
-0.2173909098,
0.0228779931,
0.4219457507,
0.3113080263,
-0.2264820784,
0.2361347973,
0.1525053084,
0.0431260504,
0.3643296063,
0.5303295851,
0.2979788184,
-0.3346508443,
-0.0465700179,
0.0477297306,
-0.0265676696,
0.3659131527,
-0.2182353288,
0.2511512637,
0.358... |
https://github.com/huggingface/datasets/issues/2470 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`. | Good catch ! Not sure why it could raise such a weird issue from pyarrow though
We should definitely reduce num_proc to the length of the dataset if needed and log a warning. | ## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any ti... | 33 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`.
## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure ... | [
-0.4569717646,
-0.2173909098,
0.0228779931,
0.4219457507,
0.3113080263,
-0.2264820784,
0.2361347973,
0.1525053084,
0.0431260504,
0.3643296063,
0.5303295851,
0.2979788184,
-0.3346508443,
-0.0465700179,
0.0477297306,
-0.0265676696,
0.3659131527,
-0.2182353288,
0.2511512637,
0.358... |
https://github.com/huggingface/datasets/issues/2470 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`. | This has been fixed in #2566, thanks @connor-mccarthy !
We'll make a new release soon that includes the fix ;) | ## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any ti... | 20 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`.
## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure ... | [
-0.4569717646,
-0.2173909098,
0.0228779931,
0.4219457507,
0.3113080263,
-0.2264820784,
0.2361347973,
0.1525053084,
0.0431260504,
0.3643296063,
0.5303295851,
0.2979788184,
-0.3346508443,
-0.0465700179,
0.0477297306,
-0.0265676696,
0.3659131527,
-0.2182353288,
0.2511512637,
0.358... |
https://github.com/huggingface/datasets/issues/2450 | BLUE file not found | Hi ! The `blue` metric doesn't exist, but the `bleu` metric does.
You can get the full list of metrics [here](https://github.com/huggingface/datasets/tree/master/metrics) or by running
```python
from datasets import list_metrics
print(list_metrics())
``` | Hi, I'm having the following issue when I try to load the `blue` metric.
```shell
import datasets
metric = datasets.load_metric('blue')
Traceback (most recent call last):
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 320, in prepare_module
local... | 31 | BLUE file not found
Hi, I'm having the following issue when I try to load the `blue` metric.
```shell
import datasets
metric = datasets.load_metric('blue')
Traceback (most recent call last):
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 320, in pre... | [
-0.2038621902,
-0.2862857282,
-0.0685010776,
0.4027132094,
0.3037271202,
0.0655113086,
0.2102442086,
0.3386648595,
0.088597402,
0.1891394854,
-0.2280899137,
-0.1502069682,
0.0800001025,
-0.3868106604,
0.2086266726,
-0.083835043,
-0.0976856574,
0.2933503389,
0.1404933035,
-0.011... |
https://github.com/huggingface/datasets/issues/2447 | dataset adversarial_qa has no answers in the "test" set | Hi ! I'm pretty sure that the answers are not made available for the test set on purpose because it is part of the DynaBench benchmark, for which you can submit your predictions on the website.
In any case we should mention this in the dataset card of this dataset. | ## Describe the bug
When loading the adversarial_qa dataset the 'test' portion has no answers. Only the 'train' and 'validation' portions do. This occurs with all four of the configs ('adversarialQA', 'dbidaf', 'dbert', 'droberta')
## Steps to reproduce the bug
```
from datasets import load_dataset
examples ... | 50 | dataset adversarial_qa has no answers in the "test" set
## Describe the bug
When loading the adversarial_qa dataset the 'test' portion has no answers. Only the 'train' and 'validation' portions do. This occurs with all four of the configs ('adversarialQA', 'dbidaf', 'dbert', 'droberta')
## Steps to reproduce th... | [
-0.1097809374,
0.1781137437,
-0.1390380561,
0.3103133142,
0.1634771526,
-0.0822485611,
0.3094849288,
0.4499256909,
-0.0519626811,
0.1181876883,
0.1611240059,
0.3690260053,
-0.1847234368,
-0.2479674518,
0.0857575238,
0.109060511,
-0.1185330003,
0.0875693634,
-0.07204514,
-0.3029... |
https://github.com/huggingface/datasets/issues/2447 | dataset adversarial_qa has no answers in the "test" set | Makes sense, but not intuitive for someone searching through the datasets. Thanks for adding the note to clarify. | ## Describe the bug
When loading the adversarial_qa dataset the 'test' portion has no answers. Only the 'train' and 'validation' portions do. This occurs with all four of the configs ('adversarialQA', 'dbidaf', 'dbert', 'droberta')
## Steps to reproduce the bug
```
from datasets import load_dataset
examples ... | 18 | dataset adversarial_qa has no answers in the "test" set
## Describe the bug
When loading the adversarial_qa dataset the 'test' portion has no answers. Only the 'train' and 'validation' portions do. This occurs with all four of the configs ('adversarialQA', 'dbidaf', 'dbert', 'droberta')
## Steps to reproduce th... | [
-0.0380188413,
0.1775164753,
-0.1093790531,
0.3759938776,
0.198221162,
-0.0062383912,
0.3613941371,
0.3695279956,
-0.0641322657,
0.0352284722,
0.1037120819,
0.3792203665,
-0.1990858912,
-0.1972341985,
0.0432112925,
0.1114881411,
-0.0155536504,
0.1072036996,
-0.0602785684,
-0.31... |
https://github.com/huggingface/datasets/issues/2446 | `yelp_polarity` is broken | ```
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/script_runner.py", line 332, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 233, in <module>
configs = get_confs(option)
File "/home/sasha/.local/share/virtualenvs/lib-og... | 
| 118 | `yelp_polarity` is broken

```
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/script_runner.py", line 332, in _run_script
exec(code, module.__dict__)
... | [
-0.1024580374,
-0.4164357185,
-0.159715727,
0.1106381193,
0.2700802088,
-0.1342438161,
0.062041603,
0.3248960972,
0.1150244698,
0.0208609719,
-0.073303543,
0.0981359109,
-0.0775284246,
0.0881604329,
-0.0272905901,
-0.0547003821,
-0.0670887977,
0.5007215142,
-0.2798650861,
-0.10... |
https://github.com/huggingface/datasets/issues/2444 | Sentence Boundaries missing in Dataset: xtreme / udpos | Hi,
This is a known issue. More info on this issue can be found in #2061. If you are looking for an open-source contribution, there are step-by-step instructions in the linked issue that you can follow to fix it. | I was browsing through annotation guidelines, as suggested by the datasets introduction.
The guidlines saids "There must be exactly one blank line after every sentence, including the last sentence in the file. Empty sentences are not allowed." in the [Sentence Boundaries and Comments section](https://universaldepend... | 39 | Sentence Boundaries missing in Dataset: xtreme / udpos
I was browsing through annotation guidelines, as suggested by the datasets introduction.
The guidlines saids "There must be exactly one blank line after every sentence, including the last sentence in the file. Empty sentences are not allowed." in the [Sentence... | [
0.2943786979,
-0.3165703118,
-0.000339701,
-0.0429260433,
0.2085691839,
-0.2181831747,
0.2861591876,
-0.2059638798,
-0.0860842392,
0.0388183184,
-0.1333659738,
-0.0321441293,
0.0991283879,
0.0123276748,
-0.0314270444,
-0.3733593524,
0.0662140846,
-0.0248681419,
-0.089078851,
-0... |
https://github.com/huggingface/datasets/issues/2443 | Some tests hang on Windows | Hi ! That would be nice indeed to at least have a warning, since we don't handle the max path length limit.
Also if we could have an error instead of an infinite loop I'm sure windows users will appreciate that | Currently, several tests hang on Windows if the max path limit of 260 characters is not disabled. This happens due to the changes introduced by #2223 that cause an infinite loop in `WindowsFileLock` described in #2220. This can be very tricky to debug, so I think now is a good time to address these issues/PRs. IMO thr... | 41 | Some tests hang on Windows
Currently, several tests hang on Windows if the max path limit of 260 characters is not disabled. This happens due to the changes introduced by #2223 that cause an infinite loop in `WindowsFileLock` described in #2220. This can be very tricky to debug, so I think now is a good time to addr... | [
-0.2391643077,
-0.0217614975,
-0.1179871187,
0.0013240133,
0.3143849075,
-0.0502431653,
0.1832035482,
0.1566535532,
0.1643405706,
0.3078767359,
0.705950141,
-0.0965390503,
-0.4796690047,
0.0425141789,
-0.1620286405,
-0.1105980948,
0.1247817501,
-0.1221860573,
-0.0834780037,
0.2... |
https://github.com/huggingface/datasets/issues/2443 | Some tests hang on Windows | Unfortunately, I know this problem very well... 😅
I remember having proposed to throw an error instead of hanging in an infinite loop #2220: 60c7d1b6b71469599a27147a08100f594e7a3f84, 8c8ab60018b00463edf1eca500e434ff061546fc
but @lhoestq told me:
> Note that the filelock module comes from this project that hasn'... | Currently, several tests hang on Windows if the max path limit of 260 characters is not disabled. This happens due to the changes introduced by #2223 that cause an infinite loop in `WindowsFileLock` described in #2220. This can be very tricky to debug, so I think now is a good time to address these issues/PRs. IMO thr... | 85 | Some tests hang on Windows
Currently, several tests hang on Windows if the max path limit of 260 characters is not disabled. This happens due to the changes introduced by #2223 that cause an infinite loop in `WindowsFileLock` described in #2220. This can be very tricky to debug, so I think now is a good time to addr... | [
-0.131635502,
-0.0140704662,
-0.1183957011,
0.0093429284,
0.2163329124,
0.009574499,
0.2583824098,
0.1399551034,
0.2193910778,
0.1932308376,
0.6185331345,
-0.2044838816,
-0.4562078714,
-0.0727648363,
-0.1960127503,
-0.0045185229,
0.1505008936,
-0.1235168204,
-0.0943985805,
0.23... |
https://github.com/huggingface/datasets/issues/2443 | Some tests hang on Windows | @albertvillanova Thanks for additional info on this issue.
Yes, I think the best option is to throw an error instead of suppressing it in a loop. I've considered 2 more options, but I don't really like them:
1. create a temporary file with a filename longer than 255 characters on import; if this fails, long paths a... | Currently, several tests hang on Windows if the max path limit of 260 characters is not disabled. This happens due to the changes introduced by #2223 that cause an infinite loop in `WindowsFileLock` described in #2220. This can be very tricky to debug, so I think now is a good time to address these issues/PRs. IMO thr... | 109 | Some tests hang on Windows
Currently, several tests hang on Windows if the max path limit of 260 characters is not disabled. This happens due to the changes introduced by #2223 that cause an infinite loop in `WindowsFileLock` described in #2220. This can be very tricky to debug, so I think now is a good time to addr... | [
-0.2172691524,
0.011509384,
-0.1549203843,
0.0968996361,
0.2777681351,
-0.1149823889,
0.2952308655,
0.1591923684,
0.2075245231,
0.3304066062,
0.5141474605,
-0.1415563822,
-0.370552063,
0.0012732256,
-0.1112246066,
0.0127635123,
0.1575828493,
-0.009089658,
-0.1111329496,
0.24195... |
https://github.com/huggingface/datasets/issues/2441 | DuplicatedKeysError on personal dataset | Hi ! In your dataset script you must be yielding examples like
```python
for line in file:
...
yield key, {...}
```
Since `datasets` 1.7.0 we enforce the keys to be unique.
However it looks like your examples generator creates duplicate keys: at least two examples have key 0.
You can fix that by mak... | ## Describe the bug
Ever since today, I have been getting a DuplicatedKeysError while trying to load my dataset from my own script.
Error returned when running this line: `dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_trimmed_and_nered/goodreadsnered.py')`
Note ... | 104 | DuplicatedKeysError on personal dataset
## Describe the bug
Ever since today, I have been getting a DuplicatedKeysError while trying to load my dataset from my own script.
Error returned when running this line: `dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_tri... | [
-0.1864342839,
-0.0565234423,
0.0734841898,
0.2193855345,
0.1425481737,
0.0292863324,
0.5267678499,
0.2207098752,
0.0588229001,
0.0872556865,
-0.1790748984,
0.2924932539,
-0.1353885829,
-0.1258907616,
0.3225170374,
0.1644581854,
0.0292238574,
-0.0048960675,
-0.1378866285,
0.056... |
https://github.com/huggingface/datasets/issues/2440 | Remove `extended` field from dataset tagger | The tagger also doesn't insert the value for the `size_categories` field automatically, so this should be fixed too | ## Describe the bug
While working on #2435 I used the [dataset tagger](https://huggingface.co/datasets/tagging/) to generate the missing tags for the YAML metadata of each README.md file. However, it seems that our CI raises an error when the `extended` field is included:
```
dataset_name = 'arcd'
@pytest.m... | 18 | Remove `extended` field from dataset tagger
## Describe the bug
While working on #2435 I used the [dataset tagger](https://huggingface.co/datasets/tagging/) to generate the missing tags for the YAML metadata of each README.md file. However, it seems that our CI raises an error when the `extended` field is included:
... | [
-0.0470578372,
-0.0278829467,
-0.0063309604,
0.2230905741,
0.3980687559,
0.3846163452,
0.3813193142,
0.2038032115,
0.1331928223,
0.3070669174,
0.3498387635,
0.3464971185,
-0.3376170099,
0.0101224435,
-0.1240958497,
0.0616647974,
0.1870363951,
-0.084820196,
0.2828484774,
0.17965... |
https://github.com/huggingface/datasets/issues/2440 | Remove `extended` field from dataset tagger | Thanks for reporting. Indeed the `extended` tag doesn't exist. Not sure why we had that in the tagger.
The repo of the tagger is here if someone wants to give this a try: https://github.com/huggingface/datasets-tagging
Otherwise I can probably fix it next week | ## Describe the bug
While working on #2435 I used the [dataset tagger](https://huggingface.co/datasets/tagging/) to generate the missing tags for the YAML metadata of each README.md file. However, it seems that our CI raises an error when the `extended` field is included:
```
dataset_name = 'arcd'
@pytest.m... | 42 | Remove `extended` field from dataset tagger
## Describe the bug
While working on #2435 I used the [dataset tagger](https://huggingface.co/datasets/tagging/) to generate the missing tags for the YAML metadata of each README.md file. However, it seems that our CI raises an error when the `extended` field is included:
... | [
0.0003136369,
-0.0987944975,
-0.003311205,
0.1912124455,
0.4241887927,
0.4057804346,
0.3773061037,
0.2321251631,
0.1182759181,
0.2808995843,
0.3130054772,
0.4294628799,
-0.2918014526,
0.0003101898,
-0.0962808207,
0.0519277975,
0.2192724496,
-0.0698823035,
0.2905522883,
0.149889... |
https://github.com/huggingface/datasets/issues/2434 | Extend QuestionAnsweringExtractive template to handle nested columns | this is also the case for the following datasets and configurations:
* `mlqa` with config `mlqa-translate-train.ar`
| Currently the `QuestionAnsweringExtractive` task template and `preprare_for_task` only support "flat" features. We should extend the functionality to cover QA datasets like:
* `iapp_wiki_qa_squad`
* `parsinlu_reading_comprehension`
where the nested features differ with those from `squad` and trigger an `ArrowNot... | 16 | Extend QuestionAnsweringExtractive template to handle nested columns
Currently the `QuestionAnsweringExtractive` task template and `preprare_for_task` only support "flat" features. We should extend the functionality to cover QA datasets like:
* `iapp_wiki_qa_squad`
* `parsinlu_reading_comprehension`
where the ... | [
-0.40049842,
-0.3374340832,
-0.0784154907,
0.21249336,
0.0623007566,
-0.0605395511,
0.3427180052,
0.6537571549,
0.2787898183,
0.164431259,
-0.3850080371,
0.8013216853,
0.09380541,
0.1116524562,
-0.2240298539,
-0.1465635896,
0.028561933,
0.190527156,
-0.069093354,
0.1011490077,
... |
https://github.com/huggingface/datasets/issues/2431 | DuplicatedKeysError when trying to load adversarial_qa | Thanks for reporting !
#2433 fixed the issue, thanks @mariosasko :)
We'll do a patch release soon of the library.
In the meantime, you can use the fixed version of adversarial_qa by adding `script_version="master"` in `load_dataset` | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
dataset = load_dataset('adversarial_qa', 'adversarialQA')
```
## Expected results
The dataset should be loaded into memory
## Actual results
>DuplicatedKeysError: FAILURE TO GENERATE DATASET ... | 36 | DuplicatedKeysError when trying to load adversarial_qa
## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
dataset = load_dataset('adversarial_qa', 'adversarialQA')
```
## Expected results
The dataset should be loaded into memory
## Actual resu... | [
-0.1690550894,
0.0245383233,
0.0173125062,
0.2362532467,
0.2499978542,
-0.2299037576,
0.3607740998,
0.2238567322,
0.0596654937,
0.1077194065,
0.1362491399,
0.5518652797,
-0.1071598083,
-0.1683702618,
0.1626173854,
0.0517195016,
-0.0379656926,
0.1883040816,
0.0392817147,
0.02876... |
https://github.com/huggingface/datasets/issues/2426 | Saving Graph/Structured Data in Datasets | It should probably work out of the box to save structured data. If you want to show an example we can help you. | Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python value type when inferring an Arrow data ty... | 23 | Saving Graph/Structured Data in Datasets
Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python... | [
-0.2739241719,
0.0389148779,
-0.0391023457,
0.249229461,
0.1765463054,
-0.1106186658,
-0.0422584936,
0.0588871501,
0.038546361,
0.1575485468,
-0.1895997822,
0.4191501439,
-0.4267216027,
0.6438037157,
-0.1479444951,
-0.2593743503,
0.2712562978,
0.0387915708,
0.2738685906,
0.0248... |
https://github.com/huggingface/datasets/issues/2426 | Saving Graph/Structured Data in Datasets | An example of a toy dataset is like:
```json
[
{
"name": "mike",
"friends": [
"tom",
"lily"
],
"articles": [
{
"title": "aaaaa",
"reader": [
"tom",
"lucy"
... | Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python value type when inferring an Arrow data ty... | 131 | Saving Graph/Structured Data in Datasets
Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python... | [
-0.2233349234,
0.1035752371,
-0.0550302714,
0.2553001046,
0.1664750725,
-0.0883491561,
-0.0420414098,
0.0489242189,
0.1316881031,
0.0172997229,
-0.1817379594,
0.3413712084,
-0.4227152467,
0.5882749557,
-0.2004153728,
-0.2558908463,
0.2201135457,
-0.0427170433,
0.3299938738,
0.0... |
https://github.com/huggingface/datasets/issues/2426 | Saving Graph/Structured Data in Datasets | Hi,
you can do the following to load this data into a `Dataset`:
```python
from datasets import Dataset
examples = [
{
"name": "mike",
"friends": [
"tom",
"lily"
],
"articles": [
{
"title": "aaaaa",
"... | Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python value type when inferring an Arrow data ty... | 93 | Saving Graph/Structured Data in Datasets
Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python... | [
-0.2856689095,
0.0489123799,
-0.0369233713,
0.2954734862,
0.1761306673,
-0.0925180763,
0.0076428913,
0.0941089541,
0.1752268672,
0.0359997824,
-0.2266490012,
0.414647758,
-0.4315072596,
0.632694304,
-0.168003723,
-0.1835945249,
0.2540252805,
-0.0057011368,
0.2815617025,
0.06755... |
https://github.com/huggingface/datasets/issues/2426 | Saving Graph/Structured Data in Datasets | Thank you so much, and that works! I also have a question that if the dataset is very large, that cannot be loaded into the memory. How to create the Dataset? | Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python value type when inferring an Arrow data ty... | 31 | Saving Graph/Structured Data in Datasets
Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python... | [
-0.2723234296,
0.0077557601,
-0.0335844457,
0.3313357234,
0.1474173963,
-0.0770561546,
-0.0660822466,
0.0120474836,
0.0752711296,
0.1229879707,
-0.1731628031,
0.3133798838,
-0.4319546223,
0.5779836178,
-0.0695674494,
-0.238957569,
0.2762146294,
0.0760911107,
0.2164651603,
-0.00... |
https://github.com/huggingface/datasets/issues/2426 | Saving Graph/Structured Data in Datasets | If your dataset doesn't fit in memory, store it in a local file and load it from there. Check out [this chapter](https://huggingface.co/docs/datasets/master/loading_datasets.html#from-local-files) in the docs for more info. | Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python value type when inferring an Arrow data ty... | 28 | Saving Graph/Structured Data in Datasets
Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python... | [
-0.2378104776,
-0.1662292182,
0.0330215506,
0.3780059814,
0.2134671658,
-0.0944956616,
-0.0819413587,
0.0058012605,
0.2306799144,
0.1115747318,
-0.3286621273,
0.2727148831,
-0.4143138528,
0.7541980743,
0.034412615,
-0.2124169767,
0.2468241304,
-0.0094741555,
0.2130974084,
0.072... |
https://github.com/huggingface/datasets/issues/2424 | load_from_disk and save_to_disk are not compatible with each other | Hi,
`load_dataset` returns an instance of `DatasetDict` if `split` is not specified, so instead of `Dataset.load_from_disk`, use `DatasetDict.load_from_disk` to load the dataset from disk. | ## Describe the bug
load_from_disk and save_to_disk are not compatible. When I use save_to_disk to save a dataset to disk it works perfectly but given the same directory load_from_disk throws an error that it can't find state.json. looks like the load_from_disk only works on one split
## Steps to reproduce the bug
... | 24 | load_from_disk and save_to_disk are not compatible with each other
## Describe the bug
load_from_disk and save_to_disk are not compatible. When I use save_to_disk to save a dataset to disk it works perfectly but given the same directory load_from_disk throws an error that it can't find state.json. looks like the loa... | [
-0.1792235225,
-0.2252514809,
-0.1031945199,
0.4437658787,
0.1700458229,
-0.0056648673,
0.288072139,
0.3563304245,
0.3407510221,
0.0764620453,
0.0730308965,
0.375390321,
0.084836103,
0.1554040909,
-0.1350736618,
-0.039158199,
0.2747603953,
0.0480071679,
0.2421992719,
-0.1150248... |
https://github.com/huggingface/datasets/issues/2424 | load_from_disk and save_to_disk are not compatible with each other | Though I see a stream of issues open by people lost between datasets and datasets dicts so maybe there is here something that could be better in terms of UX. Could be better error handling or something else smarter to even avoid said errors but maybe we should think about this. Reopening to use this issue as a discussi... | ## Describe the bug
load_from_disk and save_to_disk are not compatible. When I use save_to_disk to save a dataset to disk it works perfectly but given the same directory load_from_disk throws an error that it can't find state.json. looks like the load_from_disk only works on one split
## Steps to reproduce the bug
... | 73 | load_from_disk and save_to_disk are not compatible with each other
## Describe the bug
load_from_disk and save_to_disk are not compatible. When I use save_to_disk to save a dataset to disk it works perfectly but given the same directory load_from_disk throws an error that it can't find state.json. looks like the loa... | [
-0.1793289036,
-0.1415982097,
-0.1055233702,
0.500690341,
0.232578963,
-0.0382465646,
0.257823348,
0.2903445959,
0.338676244,
0.0991070271,
0.1047386676,
0.3404803276,
0.0760166794,
0.1851237863,
-0.2164249271,
-0.046750471,
0.3020483851,
-0.0183788072,
0.1567265093,
-0.0449476... |
https://github.com/huggingface/datasets/issues/2424 | load_from_disk and save_to_disk are not compatible with each other | We should probably improve the error message indeed.
Also note that there exists a function `load_from_disk` that can load a Dataset or a DatasetDict. Under the hood it calls either `Dataset.load_from_disk` or `DatasetDict.load_from_disk`:
```python
from datasets import load_from_disk
dataset_dict = load_fr... | ## Describe the bug
load_from_disk and save_to_disk are not compatible. When I use save_to_disk to save a dataset to disk it works perfectly but given the same directory load_from_disk throws an error that it can't find state.json. looks like the load_from_disk only works on one split
## Steps to reproduce the bug
... | 45 | load_from_disk and save_to_disk are not compatible with each other
## Describe the bug
load_from_disk and save_to_disk are not compatible. When I use save_to_disk to save a dataset to disk it works perfectly but given the same directory load_from_disk throws an error that it can't find state.json. looks like the loa... | [
-0.2114285082,
-0.1918449998,
-0.1080046222,
0.4175552726,
0.1788411438,
0.009406195,
0.2517886758,
0.3598579466,
0.3484732211,
0.1281710118,
0.0906295031,
0.3500767946,
0.0675619841,
0.1234151274,
-0.1204767898,
-0.0371128246,
0.2396690249,
0.0431478694,
0.2499335706,
-0.10753... |
https://github.com/huggingface/datasets/issues/2415 | Cached dataset not loaded | It actually seems to happen all the time in above configuration:
* the function `filter_by_duration` correctly loads cached processed dataset
* the function `prepare_dataset` is always reexecuted
I end up solving the issue by saving to disk my dataset at the end but I'm still wondering if it's a bug or limitation ... | ## Describe the bug
I have a large dataset (common_voice, english) where I use several map and filter functions.
Sometimes my cached datasets after specific functions are not loaded.
I always use the same arguments, same functions, no seed…
## Steps to reproduce the bug
```python
def filter_by_duration(batch):
... | 53 | Cached dataset not loaded
## Describe the bug
I have a large dataset (common_voice, english) where I use several map and filter functions.
Sometimes my cached datasets after specific functions are not loaded.
I always use the same arguments, same functions, no seed…
## Steps to reproduce the bug
```python
def... | [
-0.2978541553,
-0.0423547663,
-0.0758302212,
0.4236163795,
-0.0719462335,
-0.0276885871,
0.2326417416,
0.3540042043,
0.2089607716,
-0.0697673634,
0.0410522372,
0.1886657476,
-0.2849697471,
-0.2252907753,
0.1128702685,
0.2975532115,
0.0528037809,
0.1180420443,
-0.1090572774,
-0.... |
https://github.com/huggingface/datasets/issues/2415 | Cached dataset not loaded | Hi ! The hash used for caching `map` results is the fingerprint of the resulting dataset. It is computed using three things:
- the old fingerprint of the dataset
- the hash of the function
- the hash of the other parameters passed to `map`
You can compute the hash of your function (or any python object) with
```... | ## Describe the bug
I have a large dataset (common_voice, english) where I use several map and filter functions.
Sometimes my cached datasets after specific functions are not loaded.
I always use the same arguments, same functions, no seed…
## Steps to reproduce the bug
```python
def filter_by_duration(batch):
... | 94 | Cached dataset not loaded
## Describe the bug
I have a large dataset (common_voice, english) where I use several map and filter functions.
Sometimes my cached datasets after specific functions are not loaded.
I always use the same arguments, same functions, no seed…
## Steps to reproduce the bug
```python
def... | [
-0.2978541553,
-0.0423547663,
-0.0758302212,
0.4236163795,
-0.0719462335,
-0.0276885871,
0.2326417416,
0.3540042043,
0.2089607716,
-0.0697673634,
0.0410522372,
0.1886657476,
-0.2849697471,
-0.2252907753,
0.1128702685,
0.2975532115,
0.0528037809,
0.1180420443,
-0.1090572774,
-0.... |
https://github.com/huggingface/datasets/issues/2415 | Cached dataset not loaded | > If `prepare_dataset` is always executed, maybe this is because your `processor` has a different hash each time you want to execute it.
Yes I think that was the issue.
For the hash of the function:
* does it consider just the name or the actual code of the function
* does it consider variables that are not pas... | ## Describe the bug
I have a large dataset (common_voice, english) where I use several map and filter functions.
Sometimes my cached datasets after specific functions are not loaded.
I always use the same arguments, same functions, no seed…
## Steps to reproduce the bug
```python
def filter_by_duration(batch):
... | 70 | Cached dataset not loaded
## Describe the bug
I have a large dataset (common_voice, english) where I use several map and filter functions.
Sometimes my cached datasets after specific functions are not loaded.
I always use the same arguments, same functions, no seed…
## Steps to reproduce the bug
```python
def... | [
-0.2978541553,
-0.0423547663,
-0.0758302212,
0.4236163795,
-0.0719462335,
-0.0276885871,
0.2326417416,
0.3540042043,
0.2089607716,
-0.0697673634,
0.0410522372,
0.1886657476,
-0.2849697471,
-0.2252907753,
0.1128702685,
0.2975532115,
0.0528037809,
0.1180420443,
-0.1090572774,
-0.... |
https://github.com/huggingface/datasets/issues/2415 | Cached dataset not loaded | > does it consider just the name or the actual code of the function
It looks at the name and the actual code and all variables such as recursively. It uses `dill` to do so, which is based on `pickle`.
Basically the hash is computed using the pickle bytes of your function (computed using `dill` to support most pytho... | ## Describe the bug
I have a large dataset (common_voice, english) where I use several map and filter functions.
Sometimes my cached datasets after specific functions are not loaded.
I always use the same arguments, same functions, no seed…
## Steps to reproduce the bug
```python
def filter_by_duration(batch):
... | 87 | Cached dataset not loaded
## Describe the bug
I have a large dataset (common_voice, english) where I use several map and filter functions.
Sometimes my cached datasets after specific functions are not loaded.
I always use the same arguments, same functions, no seed…
## Steps to reproduce the bug
```python
def... | [
-0.2978541553,
-0.0423547663,
-0.0758302212,
0.4236163795,
-0.0719462335,
-0.0276885871,
0.2326417416,
0.3540042043,
0.2089607716,
-0.0697673634,
0.0410522372,
0.1886657476,
-0.2849697471,
-0.2252907753,
0.1128702685,
0.2975532115,
0.0528037809,
0.1180420443,
-0.1090572774,
-0.... |
https://github.com/huggingface/datasets/issues/2413 | AttributeError: 'DatasetInfo' object has no attribute 'task_templates' | Hi ! Can you try using a more up-to-date version ? We added the task_templates in `datasets` 1.7.0.
Ideally when you're working on new datasets, you should install and use the local version of your fork of `datasets`. Here I think you tried to run the 1.7.0 tests with the 1.6.2 code | ## Describe the bug
Hello,
I'm trying to add dataset and contribute, but test keep fail with below cli.
` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<my_dataset>`
## Steps to reproduce the bug
It seems like a bug when I see an error with the existing dataset,... | 52 | AttributeError: 'DatasetInfo' object has no attribute 'task_templates'
## Describe the bug
Hello,
I'm trying to add dataset and contribute, but test keep fail with below cli.
` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<my_dataset>`
## Steps to reproduce th... | [
-0.4735526145,
-0.2719743252,
-0.0121507924,
0.059786167,
0.2671816349,
0.2102233469,
0.254593581,
0.3851531744,
0.0319132395,
0.1861087829,
0.0951913968,
0.4420557618,
-0.1509744525,
0.0039890949,
0.0563466437,
-0.0786743239,
-0.0314524136,
0.1539634317,
0.3197635114,
0.017136... |
https://github.com/huggingface/datasets/issues/2407 | .map() function got an unexpected keyword argument 'cache_file_name' | Hi @cindyxinyiwang,
Did you try adding `.arrow` after `cache_file_name` argument? Here I think they're expecting something like that only for a cache file:
https://github.com/huggingface/datasets/blob/e08362256fb157c0b3038437fc0d7a0bbb50de5c/src/datasets/arrow_dataset.py#L1556-L1558 | ## Describe the bug
I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map() function, it throws an error that ".map() function got an unexpected... | 24 | .map() function got an unexpected keyword argument 'cache_file_name'
## Describe the bug
I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map... | [
0.1322996318,
0.0689883605,
0.038236279,
0.0000558724,
0.0520308204,
0.2963717878,
0.1708025038,
0.3389789164,
-0.0635625795,
-0.0203008857,
0.1289218366,
0.610024333,
-0.2410924584,
-0.5483765602,
0.048802793,
0.0902601331,
0.3929361105,
-0.0865352377,
0.2103405297,
0.01606989... |
https://github.com/huggingface/datasets/issues/2407 | .map() function got an unexpected keyword argument 'cache_file_name' | Hi ! `cache_file_name` is an argument of the `Dataset.map` method. Can you check that your `dataset` is indeed a `Dataset` object ?
If you loaded several splits, then it would actually be a `DatasetDict` (one dataset per split, in a dictionary).
In this case, since there are several datasets in the dict, the `Datas... | ## Describe the bug
I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map() function, it throws an error that ".map() function got an unexpected... | 72 | .map() function got an unexpected keyword argument 'cache_file_name'
## Describe the bug
I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map... | [
0.0843965709,
0.0935960039,
0.0456615165,
0.0183025207,
0.0415575802,
0.2793121934,
0.221486792,
0.3731049597,
-0.0277665332,
-0.0000235153,
0.1310780346,
0.6276783347,
-0.2474796772,
-0.4753702581,
0.074065268,
0.1246321276,
0.3925753534,
-0.0903926194,
0.2432288826,
-0.006880... |
https://github.com/huggingface/datasets/issues/2407 | .map() function got an unexpected keyword argument 'cache_file_name' | I think you are right. I used cache_file_names={data1: name1, data2: name2} and it works. Thank you! | ## Describe the bug
I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map() function, it throws an error that ".map() function got an unexpected... | 16 | .map() function got an unexpected keyword argument 'cache_file_name'
## Describe the bug
I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map... | [
0.1506062448,
0.0931878984,
0.0462349318,
-0.0068932394,
0.0496352613,
0.2938352823,
0.2111459523,
0.3586552441,
-0.0644399896,
0.0176358819,
0.1194804981,
0.6066251397,
-0.2475740314,
-0.4879621267,
0.0684003085,
0.1206601039,
0.4148991406,
-0.0868131965,
0.205010578,
0.002126... |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.