| html_url (string, lengths 48–51) | title (string, lengths 5–268) | comments (string, lengths 70–51.8k) | body (string, lengths 0–29.8k) | comment_length (int64, 16–1.52k) | text (string, lengths 164–54.1k) | embeddings (list) |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/2498 | Improve torch formatting performance | Sounds like a plan @lhoestq If you create a PR I'll pick it up and try it out right away! | **Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia an... | 20 | Improve torch formatting performance
**Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ec... | [
-0.35013502836227417, -0.2402968555688858, -0.07366806268692017, 0.19035720825195312, 0.10275429487228394, 0.14770673215389252, 0.2599892020225525, 0.6868165135383606, -0.1211305633187294, -0.06412012130022049, -0.3566420376300812, 0.24314066767692566, -0.11459874361753464, -0.033222105354... |
https://github.com/huggingface/datasets/issues/2498 | Improve torch formatting performance | I’m not exactly sure how to read the graph but it seems that to_categorical take a lot of time here. Could you share more informations on the features/stats of your datasets so we could maybe design a synthetic datasets that looks more similar for debugging testing? | **Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia an... | 46 | Improve torch formatting performance
**Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ec... | [
-0.35013502836227417, -0.2402968555688858, -0.07366806268692017, 0.19035720825195312, 0.10275429487228394, 0.14770673215389252, 0.2599892020225525, 0.6868165135383606, -0.1211305633187294, -0.06412012130022049, -0.3566420376300812, 0.24314066767692566, -0.11459874361753464, -0.033222105354... |
https://github.com/huggingface/datasets/issues/2498 | Improve torch formatting performance | > I’m not exactly sure how to read the graph but it seems that to_categorical take a lot of time here. Could you share more informations on the features/stats of your datasets so we could maybe design a synthetic datasets that looks more similar for debugging testing?
@thomwolf starting from the top, each rectangle ... | **Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia an... | 140 | Improve torch formatting performance
**Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ec... | [
-0.35013502836227417, -0.2402968555688858, -0.07366806268692017, 0.19035720825195312, 0.10275429487228394, 0.14770673215389252, 0.2599892020225525, 0.6868165135383606, -0.1211305633187294, -0.06412012130022049, -0.3566420376300812, 0.24314066767692566, -0.11459874361753464, -0.033222105354... |
https://github.com/huggingface/datasets/issues/2498 | Improve torch formatting performance | @lhoestq the proposed branch is faster, but overall training speedup is a few percentage points. I couldn't figure out how to include the GitHub branch into setup.py, so I couldn't start NVidia optimized Docker-based pre-training run. But on bare metal, there is a slight improvement. I'll do some more performance trac... | **Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia an... | 51 | Improve torch formatting performance
**Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ec... | [
-0.35013502836227417, -0.2402968555688858, -0.07366806268692017, 0.19035720825195312, 0.10275429487228394, 0.14770673215389252, 0.2599892020225525, 0.6868165135383606, -0.1211305633187294, -0.06412012130022049, -0.3566420376300812, 0.24314066767692566, -0.11459874361753464, -0.033222105354... |
https://github.com/huggingface/datasets/issues/2498 | Improve torch formatting performance | Hi @vblagoje, to install Datasets from @lhoestq PR reference #2505, you can use:
```shell
pip install git+ssh://git@github.com/huggingface/datasets.git@refs/pull/2505/head#egg=datasets
``` | **Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia an... | 18 | Improve torch formatting performance
**Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ec... | [
-0.35013502836227417, -0.2402968555688858, -0.07366806268692017, 0.19035720825195312, 0.10275429487228394, 0.14770673215389252, 0.2599892020225525, 0.6868165135383606, -0.1211305633187294, -0.06412012130022049, -0.3566420376300812, 0.24314066767692566, -0.11459874361753464, -0.033222105354... |
https://github.com/huggingface/datasets/issues/2498 | Improve torch formatting performance | Hey @albertvillanova yes thank you, I am aware, I can easily pull it from a terminal command line but then I can't automate docker image builds as dependencies are picked up from setup.py and for some reason setup.py doesn't accept this string format. | **Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia an... | 43 | Improve torch formatting performance
**Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ec... | [
-0.35013502836227417, -0.2402968555688858, -0.07366806268692017, 0.19035720825195312, 0.10275429487228394, 0.14770673215389252, 0.2599892020225525, 0.6868165135383606, -0.1211305633187294, -0.06412012130022049, -0.3566420376300812, 0.24314066767692566, -0.11459874361753464, -0.033222105354... |
https://github.com/huggingface/datasets/issues/2498 | Improve torch formatting performance | @vblagoje in that case, you can add this to your `setup.py`:
```python
install_requires=[
"datasets @ git+ssh://git@github.com/huggingface/datasets.git@refs/pull/2505/head",
``` | **Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia an... | 17 | Improve torch formatting performance
**Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ec... | [
-0.35013502836227417, -0.2402968555688858, -0.07366806268692017, 0.19035720825195312, 0.10275429487228394, 0.14770673215389252, 0.2599892020225525, 0.6868165135383606, -0.1211305633187294, -0.06412012130022049, -0.3566420376300812, 0.24314066767692566, -0.11459874361753464, -0.033222105354... |
https://github.com/huggingface/datasets/issues/2498 | Improve torch formatting performance | @lhoestq @thomwolf @albertvillanova The new approach is definitely faster, dataloader now takes less than 3% cumulative time (pink rectangle two rectangles to the right of tensor.py backward invocation)
 for example. | **Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia an... | 20 | Improve torch formatting performance
**Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ec... | [
-0.35013502836227417, -0.2402968555688858, -0.07366806268692017, 0.19035720825195312, 0.10275429487228394, 0.14770673215389252, 0.2599892020225525, 0.6868165135383606, -0.1211305633187294, -0.06412012130022049, -0.3566420376300812, 0.24314066767692566, -0.11459874361753464, -0.033222105354... |
https://github.com/huggingface/datasets/issues/2481 | Delete extracted files to save disk space | My suggestion for this would be to have this enabled by default.
Plus I don't know if there should be a dedicated issue to that is another functionality. But I propose layered building rather than all at once. That is:
1. uncompress a handful of files via a generator enough to generate one arrow file
2. process ... | As discussed with @stas00 and @lhoestq, allowing the deletion of extracted files would save a great amount of disk space to typical user. | 164 | Delete extracted files to save disk space
As discussed with @stas00 and @lhoestq, allowing the deletion of extracted files would save a great amount of disk space to typical user.
My suggestion for this would be to have this enabled by default.
Plus I don't know if there should be a dedicated issue to that is an... | [
-0.13562092185020447, -0.3015790283679962, -0.1714523434638977, 0.15534156560897827, -0.07602404057979584, -0.04031558707356453, -0.12200211733579636, 0.5078597068786621, -0.07595329731702805, 0.5109490752220154, 0.18390505015850067, 0.39844661951065063, -0.3721123933792114, 0.251961052417... |
https://github.com/huggingface/datasets/issues/2480 | Set download/extracted paths configurable | For example to be able to send uncompressed and temp build files to another volume/partition, so that the user gets the minimal disk usage on their primary setup - and ends up with just the downloaded compressed data + arrow files, but outsourcing the huge files and building to another partition. e.g. on JZ there is a... | As discussed with @stas00 and @lhoestq, setting these paths configurable may allow to overcome disk space limitation on different partitions/drives.
TODO:
- [x] Set configurable extracted datasets path: #2487
- [x] Set configurable downloaded datasets path: #2488
- [ ] Set configurable "incomplete" datasets path? | 85 | Set download/extracted paths configurable
As discussed with @stas00 and @lhoestq, setting these paths configurable may allow to overcome disk space limitation on different partitions/drives.
TODO:
- [x] Set configurable extracted datasets path: #2487
- [x] Set configurable downloaded datasets path: #2488
- [ ] ... | [
-0.2962554693222046, -0.1055004820227623, -0.18218062818050385, 0.18956461548805237, 0.11870545148849487, -0.08783268183469772, -0.04248460754752159, 0.2916615903377533, 0.046233873814344406, 0.34794965386390686, -0.1803240180015564, 0.19076299667358398, 0.014195245690643787, 0.39972886443... |
https://github.com/huggingface/datasets/issues/2474 | cache_dir parameter for load_from_disk ? | Hi ! `load_from_disk` doesn't move the data. If you specify a local path to your mounted drive, then the dataset is going to be loaded directly from the arrow file in this directory. The cache files that result from `map` operations are also stored in the same directory by default.
However note than writing data to ... | **Is your feature request related to a problem? Please describe.**
When using Google Colab big datasets can be an issue, as they won't fit on the VM's disk. Therefore mounting google drive could be a possible solution. Unfortunatly when loading my own dataset by using the _load_from_disk_ function, the data gets cache... | 84 | cache_dir parameter for load_from_disk ?
**Is your feature request related to a problem? Please describe.**
When using Google Colab big datasets can be an issue, as they won't fit on the VM's disk. Therefore mounting google drive could be a possible solution. Unfortunatly when loading my own dataset by using the _lo... | [
-0.27172455191612244, -0.12313587218523026, -0.07020500302314758, 0.12663321197032928, 0.21557456254959106, -0.02143816463649273, 0.28476420044898987, 0.08484715223312378, 0.4296785891056061, 0.3610471785068512, -0.1760806292295456, 0.2474941909313202, -0.2383667230606079, 0.24224032461643... |
https://github.com/huggingface/datasets/issues/2474 | cache_dir parameter for load_from_disk ? | Thanks for your answer! I am a little surprised since I just want to read the dataset.
After debugging a bit, I noticed that the VM’s disk fills up when the tables (generator) are converted to a list:
https://github.com/huggingface/datasets/blob/5ba149773d23369617563d752aca922081277ec2/src/datasets/table.py#L850
... | **Is your feature request related to a problem? Please describe.**
When using Google Colab big datasets can be an issue, as they won't fit on the VM's disk. Therefore mounting google drive could be a possible solution. Unfortunatly when loading my own dataset by using the _load_from_disk_ function, the data gets cache... | 69 | cache_dir parameter for load_from_disk ?
**Is your feature request related to a problem? Please describe.**
When using Google Colab big datasets can be an issue, as they won't fit on the VM's disk. Therefore mounting google drive could be a possible solution. Unfortunatly when loading my own dataset by using the _lo... | [
-0.32925236225128174, -0.13604721426963806, -0.07416883856058121, 0.3395153284072876, 0.30544769763946533, 0.024210982024669647, 0.11280108988285065, 0.11024819314479828, 0.38170763850212097, 0.47350209951400757, -0.11442645639181137, 0.33082470297813416, -0.3873733580112457, 0.26286724209... |
https://github.com/huggingface/datasets/issues/2474 | cache_dir parameter for load_from_disk ? | Indeed reading the data shouldn't increase the VM's disk. Not sure what google colab does under the hood for that to happen | **Is your feature request related to a problem? Please describe.**
When using Google Colab big datasets can be an issue, as they won't fit on the VM's disk. Therefore mounting google drive could be a possible solution. Unfortunatly when loading my own dataset by using the _load_from_disk_ function, the data gets cache... | 22 | cache_dir parameter for load_from_disk ?
**Is your feature request related to a problem? Please describe.**
When using Google Colab big datasets can be an issue, as they won't fit on the VM's disk. Therefore mounting google drive could be a possible solution. Unfortunatly when loading my own dataset by using the _lo... | [
-0.29479485750198364, -0.20461329817771912, -0.12458879500627518, 0.24072867631912231, 0.19853807985782623, 0.0009987636003643274, 0.20941965281963348, 0.0836256816983223, 0.3559872508049011, 0.4815081059932709, -0.15545235574245453, 0.24850888550281525, -0.3452039062976837, 0.282815903425... |
https://github.com/huggingface/datasets/issues/2472 | Fix automatic generation of Zenodo DOI | I have received a reply from Zenodo support:
> We are currently investigating and fixing this issue related to GitHub releases. As soon as we have solved it we will reach back to you. | After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as "Received", instead of in green as "Published".
I have contacted Zenodo support to fix this issue.
TODO:
- [x] Check with Zenodo to fix the issue
- [x] Check BibTeX entry is right | 34 | Fix automatic generation of Zenodo DOI
After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as "Received", instead of in green as "Published".
I have contacted Zenodo support to fix this issue.
TODO:
- [x] Check with Zenodo to fix the issue
- [x] ... | [
-0.09672468900680542, 0.3705008029937744, -0.03761047124862671, 0.08053350448608398, 0.148946613073349, -0.10635074228048325, 0.38638538122177124, 0.4031180143356323, -0.0611240491271019, 0.25448501110076904, -0.03677310422062874, 0.1469547301530838, 0.20907336473464966, -0.082582578063011... |
https://github.com/huggingface/datasets/issues/2472 | Fix automatic generation of Zenodo DOI | Other repo maintainers had the same problem with Zenodo.
There is an open issue on their GitHub repo: zenodo/zenodo#2181 | After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as "Received", instead of in green as "Published".
I have contacted Zenodo support to fix this issue.
TODO:
- [x] Check with Zenodo to fix the issue
- [x] Check BibTeX entry is right | 19 | Fix automatic generation of Zenodo DOI
After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as "Received", instead of in green as "Published".
I have contacted Zenodo support to fix this issue.
TODO:
- [x] Check with Zenodo to fix the issue
- [x] ... | [
-0.09328220784664154, 0.3942429721355438, -0.034740936011075974, 0.09913063794374466, 0.14987331628799438, -0.10668722540140152, 0.38062968850135803, 0.3882213830947876, -0.07586859166622162, 0.25355541706085205, -0.06456300616264343, 0.20564252138137817, 0.20682507753372192, -0.0944910570... |
https://github.com/huggingface/datasets/issues/2472 | Fix automatic generation of Zenodo DOI | I have received the following request from Zenodo support:
> Could you send us the link to the repository as well as the release tag?
My reply:
> Sure, here it is:
> - Link to the repository: https://github.com/huggingface/datasets
> - Link to the repository at the release tag: https://github.com/huggingface/dat... | After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as "Received", instead of in green as "Published".
I have contacted Zenodo support to fix this issue.
TODO:
- [x] Check with Zenodo to fix the issue
- [x] Check BibTeX entry is right | 55 | Fix automatic generation of Zenodo DOI
After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as "Received", instead of in green as "Published".
I have contacted Zenodo support to fix this issue.
TODO:
- [x] Check with Zenodo to fix the issue
- [x] ... | [
0.018832774832844734, 0.26431676745414734, -0.053757574409246445, 0.16542401909828186, 0.1444074660539627, -0.04861997067928314, 0.29565495252609253, 0.30089473724365234, 0.012550690211355686, 0.24218589067459106, -0.01259581744670868, 0.15270039439201355, 0.1531534641981125, -0.0105608673... |
https://github.com/huggingface/datasets/issues/2470 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`. | Hi ! It looks like the issue comes from pyarrow. What version of pyarrow are you using ? How did you install it ? | ## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any ti... | 24 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`.
## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure ... | [
-0.4569718837738037, -0.21739105880260468, 0.022877894341945648, 0.4219459593296051, 0.3113078773021698, -0.22648246586322784, 0.2361345887184143, 0.15250526368618011, 0.04312596097588539, 0.36432957649230957, 0.5303295850753784, 0.29797884821891785, -0.3346506655216217, -0.046570051461458... |
https://github.com/huggingface/datasets/issues/2470 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`. | Thank you for the quick reply! I have `pyarrow==4.0.0`, and I am installing with `pip`. It's not one of my explicit dependencies, so I assume it came along with something else. | ## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any ti... | 31 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`.
## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure ... | [
-0.4569718837738037, -0.21739105880260468, 0.022877894341945648, 0.4219459593296051, 0.3113078773021698, -0.22648246586322784, 0.2361345887184143, 0.15250526368618011, 0.04312596097588539, 0.36432957649230957, 0.5303295850753784, 0.29797884821891785, -0.3346506655216217, -0.046570051461458... |
https://github.com/huggingface/datasets/issues/2470 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`. | Could you trying reinstalling pyarrow with pip ?
I'm not sure why it would check in your multicurtural-sc directory for source files. | ## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any ti... | 22 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`.
## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure ... | [
-0.4569718837738037, -0.21739105880260468, 0.022877894341945648, 0.4219459593296051, 0.3113078773021698, -0.22648246586322784, 0.2361345887184143, 0.15250526368618011, 0.04312596097588539, 0.36432957649230957, 0.5303295850753784, 0.29797884821891785, -0.3346506655216217, -0.046570051461458... |
https://github.com/huggingface/datasets/issues/2470 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`. | Sure! I tried reinstalling to get latest. pip was mad because it looks like Datasets currently wants <4.0.0 (which is interesting, because apparently I ended up with 4.0.0 already?), but I gave it a shot anyway:
```bash
$ pip install --upgrade --force-reinstall pyarrow
Collecting pyarrow
Downloading pyarrow-4.0... | ## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any ti... | 305 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`.
## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure ... | [
-0.4569718837738037, -0.21739105880260468, 0.022877894341945648, 0.4219459593296051, 0.3113078773021698, -0.22648246586322784, 0.2361345887184143, 0.15250526368618011, 0.04312596097588539, 0.36432957649230957, 0.5303295850753784, 0.29797884821891785, -0.3346506655216217, -0.046570051461458... |
https://github.com/huggingface/datasets/issues/2470 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`. | Good catch ! Not sure why it could raise such a weird issue from pyarrow though
We should definitely reduce num_proc to the length of the dataset if needed and log a warning. | ## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any ti... | 33 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`.
## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure ... | [
-0.4569718837738037, -0.21739105880260468, 0.022877894341945648, 0.4219459593296051, 0.3113078773021698, -0.22648246586322784, 0.2361345887184143, 0.15250526368618011, 0.04312596097588539, 0.36432957649230957, 0.5303295850753784, 0.29797884821891785, -0.3346506655216217, -0.046570051461458... |
https://github.com/huggingface/datasets/issues/2470 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`. | This has been fixed in #2566, thanks @connor-mccarthy !
We'll make a new release soon that includes the fix ;) | ## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any ti... | 20 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`.
## Describe the bug
Crash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure ... | [
-0.4569718837738037, -0.21739105880260468, 0.022877894341945648, 0.4219459593296051, 0.3113078773021698, -0.22648246586322784, 0.2361345887184143, 0.15250526368618011, 0.04312596097588539, 0.36432957649230957, 0.5303295850753784, 0.29797884821891785, -0.3346506655216217, -0.046570051461458... |
https://github.com/huggingface/datasets/issues/2450 | BLUE file not found | Hi ! The `blue` metric doesn't exist, but the `bleu` metric does.
You can get the full list of metrics [here](https://github.com/huggingface/datasets/tree/master/metrics) or by running
```python
from datasets import list_metrics
print(list_metrics())
``` | Hi, I'm having the following issue when I try to load the `blue` metric.
```shell
import datasets
metric = datasets.load_metric('blue')
Traceback (most recent call last):
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 320, in prepare_module
local... | 31 | BLUE file not found
Hi, I'm having the following issue when I try to load the `blue` metric.
```shell
import datasets
metric = datasets.load_metric('blue')
Traceback (most recent call last):
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 320, in pre... | [
-0.20386211574077606, -0.28628554940223694, -0.06850098073482513, 0.4027133285999298, 0.3037268817424774, 0.06551139056682587, 0.210244283080101, 0.33866506814956665, 0.08859745413064957, 0.18913932144641876, -0.2280893623828888, -0.15020713210105896, 0.08000043779611588, -0.38681045174598... |
https://github.com/huggingface/datasets/issues/2447 | dataset adversarial_qa has no answers in the "test" set | Hi ! I'm pretty sure that the answers are not made available for the test set on purpose because it is part of the DynaBench benchmark, for which you can submit your predictions on the website.
In any case we should mention this in the dataset card of this dataset. | ## Describe the bug
When loading the adversarial_qa dataset the 'test' portion has no answers. Only the 'train' and 'validation' portions do. This occurs with all four of the configs ('adversarialQA', 'dbidaf', 'dbert', 'droberta')
## Steps to reproduce the bug
```
from datasets import load_dataset
examples ... | 50 | dataset adversarial_qa has no answers in the "test" set
## Describe the bug
When loading the adversarial_qa dataset the 'test' portion has no answers. Only the 'train' and 'validation' portions do. This occurs with all four of the configs ('adversarialQA', 'dbidaf', 'dbert', 'droberta')
## Steps to reproduce th... | [
-0.10978088527917862, 0.17811360955238342, -0.13903804123401642, 0.31031334400177, 0.16347704827785492, -0.08224867284297943, 0.30948492884635925, 0.44992557168006897, -0.05196256935596466, 0.11818758398294449, 0.16112400591373444, 0.3690260946750641, -0.18472330272197723, -0.2479673177003... |
https://github.com/huggingface/datasets/issues/2447 | dataset adversarial_qa has no answers in the "test" set | Makes sense, but not intuitive for someone searching through the datasets. Thanks for adding the note to clarify. | ## Describe the bug
When loading the adversarial_qa dataset the 'test' portion has no answers. Only the 'train' and 'validation' portions do. This occurs with all four of the configs ('adversarialQA', 'dbidaf', 'dbert', 'droberta')
## Steps to reproduce the bug
```
from datasets import load_dataset
examples ... | 18 | dataset adversarial_qa has no answers in the "test" set
## Describe the bug
When loading the adversarial_qa dataset the 'test' portion has no answers. Only the 'train' and 'validation' portions do. This occurs with all four of the configs ('adversarialQA', 'dbidaf', 'dbert', 'droberta')
## Steps to reproduce th... | [
-0.03801890090107918, 0.17751659452915192, -0.10937903076410294, 0.375993937253952, 0.1982213705778122, -0.0062383306212723255, 0.36139416694641113, 0.3695280849933624, -0.06413246691226959, 0.03522840142250061, 0.10371216386556625, 0.379220575094223, -0.1990858018398285, -0.19723418354988... |
https://github.com/huggingface/datasets/issues/2446 | `yelp_polarity` is broken | ```
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/script_runner.py", line 332, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 233, in <module>
configs = get_confs(option)
File "/home/sasha/.local/share/virtualenvs/lib-og... | 
| 118 | `yelp_polarity` is broken

```
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/script_runner.py", line 332, in _run_script
exec(code, module.__dict__)
... | [
-0.1024581715464592, -0.41643571853637695, -0.15971587598323822, 0.11063803732395172, 0.2700802683830261, -0.13424374163150787, 0.06204196810722351, 0.32489603757858276, 0.11502435058355331, 0.020860794931650162, -0.07330338656902313, 0.0981360524892807, -0.07752864062786102, 0.08816050738... |
https://github.com/huggingface/datasets/issues/2444 | Sentence Boundaries missing in Dataset: xtreme / udpos | Hi,
This is a known issue. More info on this issue can be found in #2061. If you are looking for an open-source contribution, there are step-by-step instructions in the linked issue that you can follow to fix it. | I was browsing through annotation guidelines, as suggested by the datasets introduction.
The guidlines saids "There must be exactly one blank line after every sentence, including the last sentence in the file. Empty sentences are not allowed." in the [Sentence Boundaries and Comments section](https://universaldepend... | 39 | Sentence Boundaries missing in Dataset: xtreme / udpos
I was browsing through annotation guidelines, as suggested by the datasets introduction.
The guidlines saids "There must be exactly one blank line after every sentence, including the last sentence in the file. Empty sentences are not allowed." in the [Sentence... | [
0.2943785786628723, -0.3165702521800995, -0.0003397200780455023, -0.04292609915137291, 0.20856918394565582, -0.21818317472934723, 0.2861592769622803, -0.20596399903297424, -0.08608443289995193, 0.03881857544183731, -0.13336564600467682, -0.03214385733008385, 0.0991285964846611, 0.012327589... |
https://github.com/huggingface/datasets/issues/2443 | Some tests hang on Windows | Hi ! That would be nice indeed to at least have a warning, since we don't handle the max path length limit.
Also if we could have an error instead of an infinite loop I'm sure windows users will appreciate that | Currently, several tests hang on Windows if the max path limit of 260 characters is not disabled. This happens due to the changes introduced by #2223 that cause an infinite loop in `WindowsFileLock` described in #2220. This can be very tricky to debug, so I think now is a good time to address these issues/PRs. IMO thr... | 41 | Some tests hang on Windows
Currently, several tests hang on Windows if the max path limit of 260 characters is not disabled. This happens due to the changes introduced by #2223 that cause an infinite loop in `WindowsFileLock` described in #2220. This can be very tricky to debug, so I think now is a good time to addr... | [
-0.23916436731815338, -0.021761411800980568, -0.11798710376024246, 0.0013239303370937705, 0.31438523530960083, -0.0502430684864521, 0.18320338428020477, 0.15665391087532043, 0.16434034705162048, 0.3078768253326416, 0.7059504985809326, -0.0965394526720047, -0.4796690046787262, 0.04251420125... |
https://github.com/huggingface/datasets/issues/2443 | Some tests hang on Windows | Unfortunately, I know this problem very well... 😅
I remember having proposed to throw an error instead of hanging in an infinite loop #2220: 60c7d1b6b71469599a27147a08100f594e7a3f84, 8c8ab60018b00463edf1eca500e434ff061546fc
but @lhoestq told me:
> Note that the filelock module comes from this project that hasn'... | Currently, several tests hang on Windows if the max path limit of 260 characters is not disabled. This happens due to the changes introduced by #2223 that cause an infinite loop in `WindowsFileLock` described in #2220. This can be very tricky to debug, so I think now is a good time to address these issues/PRs. IMO thr... | 85 | Some tests hang on Windows
Currently, several tests hang on Windows if the max path limit of 260 characters is not disabled. This happens due to the changes introduced by #2223 that cause an infinite loop in `WindowsFileLock` described in #2220. This can be very tricky to debug, so I think now is a good time to addr... | [
-0.13163557648658752, -0.014070513658225536, -0.1183958575129509, 0.009342987090349197, 0.2163330316543579, 0.009574444964528084, 0.2583823800086975, 0.13995540142059326, 0.21939101815223694, 0.19323091208934784, 0.6185330748558044, -0.2044835090637207, -0.4562079608440399, -0.072764597833... |
https://github.com/huggingface/datasets/issues/2443 | Some tests hang on Windows | @albertvillanova Thanks for additional info on this issue.
Yes, I think the best option is to throw an error instead of suppressing it in a loop. I've considered 2 more options, but I don't really like them:
1. create a temporary file with a filename longer than 255 characters on import; if this fails, long paths a... | Currently, several tests hang on Windows if the max path limit of 260 characters is not disabled. This happens due to the changes introduced by #2223 that cause an infinite loop in `WindowsFileLock` described in #2220. This can be very tricky to debug, so I think now is a good time to address these issues/PRs. IMO thr... | 109 | Some tests hang on Windows
Currently, several tests hang on Windows if the max path limit of 260 characters is not disabled. This happens due to the changes introduced by #2223 that cause an infinite loop in `WindowsFileLock` described in #2220. This can be very tricky to debug, so I think now is a good time to addr... | [
-0.2172691822052002, 0.01150926947593689, -0.15492035448551178, 0.0968996211886406, 0.2777683138847351, -0.1149824783205986, 0.29523104429244995, 0.15919233858585358, 0.20752476155757904, 0.3304065763950348, 0.5141473412513733, -0.1415562629699707, -0.3705521523952484, 0.001272943452931940... |
https://github.com/huggingface/datasets/issues/2441 | DuplicatedKeysError on personal dataset | Hi ! In your dataset script you must be yielding examples like
```python
for line in file:
...
yield key, {...}
```
Since `datasets` 1.7.0 we enforce the keys to be unique.
However it looks like your examples generator creates duplicate keys: at least two examples have key 0.
You can fix that by mak... | ## Describe the bug
Ever since today, I have been getting a DuplicatedKeysError while trying to load my dataset from my own script.
Error returned when running this line: `dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_trimmed_and_nered/goodreadsnered.py')`
Note ... | 104 | DuplicatedKeysError on personal dataset
## Describe the bug
Ever since today, I have been getting a DuplicatedKeysError while trying to load my dataset from my own script.
Error returned when running this line: `dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_tri... | [
-0.18643417954444885, -0.05652330443263054, 0.07348430156707764, 0.2193852812051773, 0.14254805445671082, 0.029286116361618042, 0.5267680287361145, 0.22071005403995514, 0.05882250517606735, 0.08725596219301224, -0.17907491326332092, 0.29249367117881775, -0.13538841903209686, -0.12589095532... |
https://github.com/huggingface/datasets/issues/2440 | Remove `extended` field from dataset tagger | The tagger also doesn't insert the value for the `size_categories` field automatically, so this should be fixed too | ## Describe the bug
While working on #2435 I used the [dataset tagger](https://huggingface.co/datasets/tagging/) to generate the missing tags for the YAML metadata of each README.md file. However, it seems that our CI raises an error when the `extended` field is included:
```
dataset_name = 'arcd'
@pytest.m... | 18 | Remove `extended` field from dataset tagger
## Describe the bug
While working on #2435 I used the [dataset tagger](https://huggingface.co/datasets/tagging/) to generate the missing tags for the YAML metadata of each README.md file. However, it seems that our CI raises an error when the `extended` field is included:
... | [
-0.04705759882926941, -0.027882670983672142, -0.006330742500722408, 0.22309046983718872, 0.398068904876709, 0.38461631536483765, 0.3813196122646332, 0.20380330085754395, 0.13319285213947296, 0.3070668578147888, 0.34983891248703003, 0.346497118473053, -0.3376169800758362, 0.0101225217804312... |
https://github.com/huggingface/datasets/issues/2440 | Remove `extended` field from dataset tagger | Thanks for reporting. Indeed the `extended` tag doesn't exist. Not sure why we had that in the tagger.
The repo of the tagger is here if someone wants to give this a try: https://github.com/huggingface/datasets-tagging
Otherwise I can probably fix it next week | ## Describe the bug
While working on #2435 I used the [dataset tagger](https://huggingface.co/datasets/tagging/) to generate the missing tags for the YAML metadata of each README.md file. However, it seems that our CI raises an error when the `extended` field is included:
```
dataset_name = 'arcd'
@pytest.m... | 42 | Remove `extended` field from dataset tagger
## Describe the bug
While working on #2435 I used the [dataset tagger](https://huggingface.co/datasets/tagging/) to generate the missing tags for the YAML metadata of each README.md file. However, it seems that our CI raises an error when the `extended` field is included:
... | [
0.00031396670965477824, -0.09879472851753235, -0.0033114305697381496, 0.19121214747428894, 0.42418891191482544, 0.40578022599220276, 0.377306193113327, 0.2321251928806305, 0.11827567964792252, 0.2808994650840759, 0.31300556659698486, 0.4294629991054535, -0.2918013036251068, 0.0003100160392... |
https://github.com/huggingface/datasets/issues/2434 | Extend QuestionAnsweringExtractive template to handle nested columns | this is also the case for the following datasets and configurations:
* `mlqa` with config `mlqa-translate-train.ar`
| Currently the `QuestionAnsweringExtractive` task template and `preprare_for_task` only support "flat" features. We should extend the functionality to cover QA datasets like:
* `iapp_wiki_qa_squad`
* `parsinlu_reading_comprehension`
where the nested features differ with those from `squad` and trigger an `ArrowNot... | 16 | Extend QuestionAnsweringExtractive template to handle nested columns
Currently the `QuestionAnsweringExtractive` task template and `preprare_for_task` only support "flat" features. We should extend the functionality to cover QA datasets like:
* `iapp_wiki_qa_squad`
* `parsinlu_reading_comprehension`
where the ... | [
-0.40049824118614197, -0.33743390440940857, -0.07841547578573227, 0.21249310672283173, 0.062300749123096466, -0.060539670288562775, 0.3427179753780365, 0.6537572145462036, 0.2787894010543823, 0.1644311547279358, -0.38500794768333435, 0.8013217449188232, 0.09380558878183365, 0.1116523370146... |
https://github.com/huggingface/datasets/issues/2431 | DuplicatedKeysError when trying to load adversarial_qa | Thanks for reporting !
#2433 fixed the issue, thanks @mariosasko :)
We'll do a patch release soon of the library.
In the meantime, you can use the fixed version of adversarial_qa by adding `script_version="master"` in `load_dataset` | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
dataset = load_dataset('adversarial_qa', 'adversarialQA')
```
## Expected results
The dataset should be loaded into memory
## Actual results
>DuplicatedKeysError: FAILURE TO GENERATE DATASET ... | 36 | DuplicatedKeysError when trying to load adversarial_qa
## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
dataset = load_dataset('adversarial_qa', 'adversarialQA')
```
## Expected results
The dataset should be loaded into memory
## Actual resu... | [
-0.1690552979707718, 0.024538246914744377, 0.017312390729784966, 0.23625333607196808, 0.24999822676181793, -0.22990372776985168, 0.36077386140823364, 0.22385700047016144, 0.05966595932841301, 0.10771896690130234, 0.13624915480613708, 0.5518653988838196, -0.10716013610363007, -0.16837033629... |
https://github.com/huggingface/datasets/issues/2426 | Saving Graph/Structured Data in Datasets | It should probably work out of the box to save structured data. If you want to show an example we can help you. | Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python value type when inferring an Arrow data ty... | 23 | Saving Graph/Structured Data in Datasets
Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python... | [
-0.2739242613315582, 0.038915086537599564, -0.03910231962800026, 0.2492293417453766, 0.1765468418598175, -0.11061874032020569, -0.04225819185376167, 0.05888725072145462, 0.038546446710824966, 0.15754833817481995, -0.18959973752498627, 0.41915029287338257, -0.4267214834690094, 0.64380335807... |
https://github.com/huggingface/datasets/issues/2426 | Saving Graph/Structured Data in Datasets | An example of a toy dataset is like:
```json
[
{
"name": "mike",
"friends": [
"tom",
"lily"
],
"articles": [
{
"title": "aaaaa",
"reader": [
"tom",
"lucy"
... | Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python value type when inferring an Arrow data ty... | 131 | Saving Graph/Structured Data in Datasets
Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python... | [
-0.2233351171016693, 0.10357489436864853, -0.05503031238913536, 0.25530001521110535, 0.16647519171237946, -0.08834895491600037, -0.04204151779413223, 0.04892462491989136, 0.1316879838705063, 0.017299801111221313, -0.1817377805709839, 0.341371089220047, -0.4227153956890106, 0.58827555179595... |
https://github.com/huggingface/datasets/issues/2426 | Saving Graph/Structured Data in Datasets | Hi,
you can do the following to load this data into a `Dataset`:
```python
from datasets import Dataset
examples = [
{
"name": "mike",
"friends": [
"tom",
"lily"
],
"articles": [
{
"title": "aaaaa",
"... | Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python value type when inferring an Arrow data ty... | 93 | Saving Graph/Structured Data in Datasets
Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python... | [
-0.2856689989566803, 0.04891207441687584, -0.0369233712553978, 0.2954736351966858, 0.17613089084625244, -0.09251818060874939, 0.0076429895125329494, 0.09410903602838516, 0.17522695660591125, 0.03599987551569939, -0.22664880752563477, 0.4146476984024048, -0.43150725960731506, 0.632694184780... |
https://github.com/huggingface/datasets/issues/2426 | Saving Graph/Structured Data in Datasets | Thank you so much, and that works! I also have a question that if the dataset is very large, that cannot be loaded into the memory. How to create the Dataset? | Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python value type when inferring an Arrow data ty... | 31 | Saving Graph/Structured Data in Datasets
Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python... | [
-0.27232348918914795, 0.007755912374705076, -0.033584535121917725, 0.33133554458618164, 0.14741764962673187, -0.07705627381801605, -0.06608230620622635, 0.012047367170453072, 0.07527127861976624, 0.1229880079627037, -0.173162579536438, 0.31338047981262207, -0.43195465207099915, 0.577983617... |
https://github.com/huggingface/datasets/issues/2426 | Saving Graph/Structured Data in Datasets | If your dataset doesn't fit in memory, store it in a local file and load it from there. Check out [this chapter](https://huggingface.co/docs/datasets/master/loading_datasets.html#from-local-files) in the docs for more info. | Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python value type when inferring an Arrow data ty... | 28 | Saving Graph/Structured Data in Datasets
Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python... | [
-0.23781022429466248, -0.1662292629480362, 0.03302155062556267, 0.37800613045692444, 0.2134670913219452, -0.09449587762355804, -0.08194179832935333, 0.005801288411021233, 0.23067988455295563, 0.1115746945142746, -0.3286619484424591, 0.2727149724960327, -0.41431406140327454, 0.7541984915733... |
https://github.com/huggingface/datasets/issues/2424 | load_from_disk and save_to_disk are not compatible with each other | Hi,
`load_dataset` returns an instance of `DatasetDict` if `split` is not specified, so instead of `Dataset.load_from_disk`, use `DatasetDict.load_from_disk` to load the dataset from disk. | ## Describe the bug
load_from_disk and save_to_disk are not compatible. When I use save_to_disk to save a dataset to disk it works perfectly but given the same directory load_from_disk throws an error that it can't find state.json. looks like the load_from_disk only works on one split
## Steps to reproduce the bug
... | 24 | load_from_disk and save_to_disk are not compatible with each other
## Describe the bug
load_from_disk and save_to_disk are not compatible. When I use save_to_disk to save a dataset to disk it works perfectly but given the same directory load_from_disk throws an error that it can't find state.json. looks like the loa... | [
-0.17922310531139374, -0.22525106370449066, -0.10319455713033676, 0.443765789270401, 0.1700461208820343, -0.005664934869855642, 0.28807199001312256, 0.3563304543495178, 0.3407508134841919, 0.07646181434392929, 0.07303103059530258, 0.37539035081863403, 0.08483617752790451, 0.155404001474380... |
https://github.com/huggingface/datasets/issues/2424 | load_from_disk and save_to_disk are not compatible with each other | Though I see a stream of issues open by people lost between datasets and datasets dicts so maybe there is here something that could be better in terms of UX. Could be better error handling or something else smarter to even avoid said errors but maybe we should think about this. Reopening to use this issue as a discussi... | ## Describe the bug
load_from_disk and save_to_disk are not compatible. When I use save_to_disk to save a dataset to disk it works perfectly but given the same directory load_from_disk throws an error that it can't find state.json. looks like the load_from_disk only works on one split
## Steps to reproduce the bug
... | 73 | load_from_disk and save_to_disk are not compatible with each other
## Describe the bug
load_from_disk and save_to_disk are not compatible. When I use save_to_disk to save a dataset to disk it works perfectly but given the same directory load_from_disk throws an error that it can't find state.json. looks like the loa... | [
-0.1793288290500641, -0.14159834384918213, -0.10552337765693665, 0.5006906390190125, 0.2325790971517563, -0.038246769458055496, 0.25782355666160583, 0.29034456610679626, 0.33867648243904114, 0.09910715371370316, 0.10473886877298355, 0.340480238199234, 0.07601690292358398, 0.185123562812805... |
https://github.com/huggingface/datasets/issues/2424 | load_from_disk and save_to_disk are not compatible with each other | We should probably improve the error message indeed.
Also note that there exists a function `load_from_disk` that can load a Dataset or a DatasetDict. Under the hood it calls either `Dataset.load_from_disk` or `DatasetDict.load_from_disk`:
```python
from datasets import load_from_disk
dataset_dict = load_fr... | ## Describe the bug
load_from_disk and save_to_disk are not compatible. When I use save_to_disk to save a dataset to disk it works perfectly but given the same directory load_from_disk throws an error that it can't find state.json. looks like the load_from_disk only works on one split
## Steps to reproduce the bug
... | 45 | load_from_disk and save_to_disk are not compatible with each other
## Describe the bug
load_from_disk and save_to_disk are not compatible. When I use save_to_disk to save a dataset to disk it works perfectly but given the same directory load_from_disk throws an error that it can't find state.json. looks like the loa... | [
-0.21142855286598206, -0.19184502959251404, -0.10800468921661377, 0.4175550937652588, 0.17884120345115662, 0.009406113997101784, 0.25178879499435425, 0.35985785722732544, 0.3484733998775482, 0.1281708925962448, 0.09062927216291428, 0.3500765264034271, 0.06756177544593811, 0.123415157198905... |
https://github.com/huggingface/datasets/issues/2415 | Cached dataset not loaded | It actually seems to happen all the time in above configuration:
* the function `filter_by_duration` correctly loads cached processed dataset
* the function `prepare_dataset` is always reexecuted
I end up solving the issue by saving to disk my dataset at the end but I'm still wondering if it's a bug or limitation ... | ## Describe the bug
I have a large dataset (common_voice, english) where I use several map and filter functions.
Sometimes my cached datasets after specific functions are not loaded.
I always use the same arguments, same functions, no seed…
## Steps to reproduce the bug
```python
def filter_by_duration(batch):
... | 53 | Cached dataset not loaded
## Describe the bug
I have a large dataset (common_voice, english) where I use several map and filter functions.
Sometimes my cached datasets after specific functions are not loaded.
I always use the same arguments, same functions, no seed…
## Steps to reproduce the bug
```python
def... | [
-0.2978542745113373, -0.04235473647713661, -0.07583027333021164, 0.4236162602901459, -0.07194618135690689, -0.027688685804605484, 0.23264199495315552, 0.35400402545928955, 0.20896068215370178, -0.06976734101772308, 0.04105209559202194, 0.1886657178401947, -0.28496986627578735, -0.225290641... |
https://github.com/huggingface/datasets/issues/2415 | Cached dataset not loaded | Hi ! The hash used for caching `map` results is the fingerprint of the resulting dataset. It is computed using three things:
- the old fingerprint of the dataset
- the hash of the function
- the hash of the other parameters passed to `map`
You can compute the hash of your function (or any python object) with
```... | ## Describe the bug
I have a large dataset (common_voice, english) where I use several map and filter functions.
Sometimes my cached datasets after specific functions are not loaded.
I always use the same arguments, same functions, no seed…
## Steps to reproduce the bug
```python
def filter_by_duration(batch):
... | 94 | Cached dataset not loaded
## Describe the bug
I have a large dataset (common_voice, english) where I use several map and filter functions.
Sometimes my cached datasets after specific functions are not loaded.
I always use the same arguments, same functions, no seed…
## Steps to reproduce the bug
```python
def... | [
-0.2978542745113373, -0.04235473647713661, -0.07583027333021164, 0.4236162602901459, -0.07194618135690689, -0.027688685804605484, 0.23264199495315552, 0.35400402545928955, 0.20896068215370178, -0.06976734101772308, 0.04105209559202194, 0.1886657178401947, -0.28496986627578735, -0.225290641... |
https://github.com/huggingface/datasets/issues/2415 | Cached dataset not loaded | > If `prepare_dataset` is always executed, maybe this is because your `processor` has a different hash each time you want to execute it.
Yes I think that was the issue.
For the hash of the function:
* does it consider just the name or the actual code of the function
* does it consider variables that are not pas... | ## Describe the bug
I have a large dataset (common_voice, english) where I use several map and filter functions.
Sometimes my cached datasets after specific functions are not loaded.
I always use the same arguments, same functions, no seed…
## Steps to reproduce the bug
```python
def filter_by_duration(batch):
... | 70 | Cached dataset not loaded
## Describe the bug
I have a large dataset (common_voice, english) where I use several map and filter functions.
Sometimes my cached datasets after specific functions are not loaded.
I always use the same arguments, same functions, no seed…
## Steps to reproduce the bug
```python
def... | [
-0.2978542745113373,
-0.04235473647713661,
-0.07583027333021164,
0.4236162602901459,
-0.07194618135690689,
-0.027688685804605484,
0.23264199495315552,
0.35400402545928955,
0.20896068215370178,
-0.06976734101772308,
0.04105209559202194,
0.1886657178401947,
-0.28496986627578735,
-0.225290641... |
https://github.com/huggingface/datasets/issues/2415 | Cached dataset not loaded | > does it consider just the name or the actual code of the function
It looks at the name and the actual code and all variables such as recursively. It uses `dill` to do so, which is based on `pickle`.
Basically the hash is computed using the pickle bytes of your function (computed using `dill` to support most pytho... | ## Describe the bug
I have a large dataset (common_voice, english) where I use several map and filter functions.
Sometimes my cached datasets after specific functions are not loaded.
I always use the same arguments, same functions, no seed…
## Steps to reproduce the bug
```python
def filter_by_duration(batch):
... | 87 | Cached dataset not loaded
## Describe the bug
I have a large dataset (common_voice, english) where I use several map and filter functions.
Sometimes my cached datasets after specific functions are not loaded.
I always use the same arguments, same functions, no seed…
## Steps to reproduce the bug
```python
def... | [
-0.2978542745113373,
-0.04235473647713661,
-0.07583027333021164,
0.4236162602901459,
-0.07194618135690689,
-0.027688685804605484,
0.23264199495315552,
0.35400402545928955,
0.20896068215370178,
-0.06976734101772308,
0.04105209559202194,
0.1886657178401947,
-0.28496986627578735,
-0.225290641... |
https://github.com/huggingface/datasets/issues/2413 | AttributeError: 'DatasetInfo' object has no attribute 'task_templates' | Hi ! Can you try using a more up-to-date version ? We added the task_templates in `datasets` 1.7.0.
Ideally when you're working on new datasets, you should install and use the local version of your fork of `datasets`. Here I think you tried to run the 1.7.0 tests with the 1.6.2 code | ## Describe the bug
Hello,
I'm trying to add dataset and contribute, but test keep fail with below cli.
` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<my_dataset>`
## Steps to reproduce the bug
It seems like a bug when I see an error with the existing dataset,... | 52 | AttributeError: 'DatasetInfo' object has no attribute 'task_templates'
## Describe the bug
Hello,
I'm trying to add dataset and contribute, but test keep fail with below cli.
` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<my_dataset>`
## Steps to reproduce th... | [
-0.4735526442527771,
-0.27197447419166565,
-0.01215079240500927,
0.059786371886730194,
0.26718202233314514,
0.21022343635559082,
0.25459393858909607,
0.3851533532142639,
0.03191307932138443,
0.18610885739326477,
0.09519129246473312,
0.4420560300350189,
-0.15097428858280182,
0.0039888969622... |
https://github.com/huggingface/datasets/issues/2407 | .map() function got an unexpected keyword argument 'cache_file_name' | Hi @cindyxinyiwang,
Did you try adding `.arrow` after `cache_file_name` argument? Here I think they're expecting something like that only for a cache file:
https://github.com/huggingface/datasets/blob/e08362256fb157c0b3038437fc0d7a0bbb50de5c/src/datasets/arrow_dataset.py#L1556-L1558 | ## Describe the bug
I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map() function, it throws an error that ".map() function got an unexpected... | 24 | .map() function got an unexpected keyword argument 'cache_file_name'
## Describe the bug
I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map... | [
0.13229957222938538,
0.06898843497037888,
0.03823640197515488,
0.000056010645494097844,
0.05203091725707054,
0.29637160897254944,
0.17080248892307281,
0.3389790654182434,
-0.06356235593557358,
-0.020300714299082756,
0.12892167270183563,
0.6100243330001831,
-0.24109235405921936,
-0.54837673... |
https://github.com/huggingface/datasets/issues/2407 | .map() function got an unexpected keyword argument 'cache_file_name' | Hi ! `cache_file_name` is an argument of the `Dataset.map` method. Can you check that your `dataset` is indeed a `Dataset` object ?
If you loaded several splits, then it would actually be a `DatasetDict` (one dataset per split, in a dictionary).
In this case, since there are several datasets in the dict, the `Datas... | ## Describe the bug
I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map() function, it throws an error that ".map() function got an unexpected... | 72 | .map() function got an unexpected keyword argument 'cache_file_name'
## Describe the bug
I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map... | [
0.0843966007232666,
0.09359589219093323,
0.04566144198179245,
0.018302464857697487,
0.041557516902685165,
0.2793118953704834,
0.2214866280555725,
0.3731047809123993,
-0.027766546234488487,
-0.00002371343543927651,
0.13107818365097046,
0.6276782751083374,
-0.24747967720031738,
-0.4753704369... |
https://github.com/huggingface/datasets/issues/2407 | .map() function got an unexpected keyword argument 'cache_file_name' | I think you are right. I used cache_file_names={data1: name1, data2: name2} and it works. Thank you! | ## Describe the bug
I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map() function, it throws an error that ".map() function got an unexpected... | 16 | .map() function got an unexpected keyword argument 'cache_file_name'
## Describe the bug
I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map... | [
0.15060639381408691,
0.09318822622299194,
0.04623483121395111,
-0.006893530022352934,
0.04963534325361252,
0.29383549094200134,
0.21114608645439148,
0.35865533351898193,
-0.064439557492733,
0.017635909840464592,
0.11948040872812271,
0.6066254377365112,
-0.24757397174835205,
-0.487962305545... |
https://github.com/huggingface/datasets/issues/2400 | Concatenate several datasets with removed columns is not working. | Hi,
did you fill out the env info section manually or by copy-pasting the output of the `datasets-cli env` command?
This code should work without issues on 1.6.2 version (I'm working on master (1.6.2.dev0 version) and can't reproduce this error). | ## Describe the bug
You can't concatenate datasets when you removed columns before.
## Steps to reproduce the bug
```python
from datasets import load_dataset, concatenate_datasets
wikiann= load_dataset("wikiann","en")
wikiann["train"] = wikiann["train"].remove_columns(["langs","spans"])
wikiann["test"] =... | 40 | Concatenate several datasets with removed columns is not working.
## Describe the bug
You can't concatenate datasets when you removed columns before.
## Steps to reproduce the bug
```python
from datasets import load_dataset, concatenate_datasets
wikiann= load_dataset("wikiann","en")
wikiann["train"] = w... | [
-0.050395987927913666,
-0.16309896111488342,
-0.03724927455186844,
0.2921581268310547,
0.07128666341304779,
0.16081508994102478,
0.4275573790073395,
0.39729902148246765,
-0.1280238777399063,
0.09719809144735336,
-0.21168510615825653,
0.28301116824150085,
0.11641492694616318,
0.240146875381... |
https://github.com/huggingface/datasets/issues/2396 | strange datasets from OSCAR corpus | Hi ! Thanks for reporting
cc @pjox is this an issue from the data ?
Anyway we should at least mention that OSCAR could contain such contents in the dataset card, you're totally right @jerryIsHere | 

From the [official site ](https://oscar-corpus.com/), the Yue Chinese dataset should have 2.2K... | 35 | strange datasets from OSCAR corpus


From the [official site ](https://oscar-corpus.com/), the... | [
0.26769936084747314,
0.05709485709667206,
-0.002301169792190194,
0.5770171880722046,
0.15650059282779694,
0.06097016483545303,
-0.012423336505889893,
0.24926795065402985,
-0.26459333300590515,
0.048612937331199646,
-0.5796422958374023,
-0.10799799114465714,
0.04310476779937744,
-0.05117551... |
https://github.com/huggingface/datasets/issues/2396 | strange datasets from OSCAR corpus | Hi @jerryIsHere , sorry for the late response! Sadly this is normal, the problem comes form fasttext's classifier which we used to create the original corpus. In general the classifier is not really capable of properly recognizing Yue Chineese so the file ends un being just noise from Common Crawl. Some of these proble... | 

From the [official site ](https://oscar-corpus.com/), the Yue Chinese dataset should have 2.2K... | 93 | strange datasets from OSCAR corpus


From the [official site ](https://oscar-corpus.com/), the... | [
0.26769936084747314,
0.05709485709667206,
-0.002301169792190194,
0.5770171880722046,
0.15650059282779694,
0.06097016483545303,
-0.012423336505889893,
0.24926795065402985,
-0.26459333300590515,
0.048612937331199646,
-0.5796422958374023,
-0.10799799114465714,
0.04310476779937744,
-0.05117551... |
https://github.com/huggingface/datasets/issues/2391 | Missing original answers in kilt-TriviaQA | That could be useful indeed! Feel free to open a PR on the dataset card if you already have some code that runs, otherwise we'll take care of it soon :) | I previously opened an issue at https://github.com/facebookresearch/KILT/issues/42 but from the answer of @fabiopetroni it seems that the problem comes from HF-datasets
## Describe the bug
The `answer` field in kilt-TriviaQA, e.g. `kilt_tasks['train_triviaqa'][0]['output']['answer']` contains a list of alternative ... | 31 | Missing original answers in kilt-TriviaQA
I previously opened an issue at https://github.com/facebookresearch/KILT/issues/42 but from the answer of @fabiopetroni it seems that the problem comes from HF-datasets
## Describe the bug
The `answer` field in kilt-TriviaQA, e.g. `kilt_tasks['train_triviaqa'][0]['output'... | [
0.5064161419868469,
-0.3483676612377167,
-0.035132791846990585,
0.06285117566585541,
-0.12134303152561188,
-0.10113469511270523,
0.03444229066371918,
0.31334078311920166,
0.1268099695444107,
0.1838567554950714,
0.16401542723178864,
0.49276742339134216,
0.11229772865772247,
0.40526881814002... |
https://github.com/huggingface/datasets/issues/2391 | Missing original answers in kilt-TriviaQA | I can open a PR but there is 2 details to fix:
- the name for the corresponding key (e.g. `original_answer`)
- how to implement it: I’m not sure what happens when you map `lambda x: {'input': ...}` as it keeps the other keys (e.g. `output`) intact but here since we want to set a nested value (e.g. `x['output']['origi... | I previously opened an issue at https://github.com/facebookresearch/KILT/issues/42 but from the answer of @fabiopetroni it seems that the problem comes from HF-datasets
## Describe the bug
The `answer` field in kilt-TriviaQA, e.g. `kilt_tasks['train_triviaqa'][0]['output']['answer']` contains a list of alternative ... | 84 | Missing original answers in kilt-TriviaQA
I previously opened an issue at https://github.com/facebookresearch/KILT/issues/42 but from the answer of @fabiopetroni it seems that the problem comes from HF-datasets
## Describe the bug
The `answer` field in kilt-TriviaQA, e.g. `kilt_tasks['train_triviaqa'][0]['output'... | [
0.51557856798172,
-0.3537389636039734,
0.03276112675666809,
0.03958847001194954,
-0.10538562387228012,
-0.07997734844684601,
-0.02959076128900051,
0.2565182149410248,
0.08557357639074326,
0.132102832198143,
0.04429170861840248,
0.543709933757782,
0.19369861483573914,
0.4238484501838684,
... |
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | Looks like there are multiple issues regarding this (#2386, #2322) and it's a WIP #2329. Currently these datasets are being loaded in-memory which is causing this issue. Quoting @mariosasko here for a quick fix:
> set `keep_in_memory` to `False` when loading a dataset (`sst = load_dataset("sst", keep_in_memory=False... | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 69 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.21584858000278473,
0.020310932770371437,
0.042242590337991714,
0.14041508734226227,
0.1821746975183487,
0.0462312325835228,
0.14098100364208221,
0.3139573037624359,
0.05515850707888603,
-0.06445320695638657,
-0.23593808710575104,
-0.05794750526547432,
-0.024700535461306572,
-0.331577807... |
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | Hi ! Since `datasets` 1.6.0 we no longer keep small datasets (<250MB) on disk and load them in RAM instead by default. This makes data processing and iterating on data faster. However datasets in RAM currently have no way to reload previous results from the cache (since nothing is written on disk). We are working on ma... | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 106 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.21584858000278473,
0.020310932770371437,
0.042242590337991714,
0.14041508734226227,
0.1821746975183487,
0.0462312325835228,
0.14098100364208221,
0.3139573037624359,
0.05515850707888603,
-0.06445320695638657,
-0.23593808710575104,
-0.05794750526547432,
-0.024700535461306572,
-0.331577807... |
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | OK, It doesn't look like we can use the proposed workaround - see https://github.com/huggingface/transformers/issues/11801
Could you please add an env var for us to be able to turn off this unwanted in our situation behavior? It is really problematic for dev work, when one needs to restart the training very often an... | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 104 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.21584858000278473,
0.020310932770371437,
0.042242590337991714,
0.14041508734226227,
0.1821746975183487,
0.0462312325835228,
0.14098100364208221,
0.3139573037624359,
0.05515850707888603,
-0.06445320695638657,
-0.23593808710575104,
-0.05794750526547432,
-0.024700535461306572,
-0.331577807... |
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | Hi @stas00,
You are right: an env variable is needed to turn off this behavior. I am adding it.
For the moment there is a config parameter to turn off this behavior: `datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES = None`
You can find this info in the docs:
- in the docstring of the parameter `keep_in_m... | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 115 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.21584858000278473,
0.020310932770371437,
0.042242590337991714,
0.14041508734226227,
0.1821746975183487,
0.0462312325835228,
0.14098100364208221,
0.3139573037624359,
0.05515850707888603,
-0.06445320695638657,
-0.23593808710575104,
-0.05794750526547432,
-0.024700535461306572,
-0.331577807... |
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | Yes, but this still requires one to edit the standard example scripts, so if I'm doing that already I just as well can add `keep_in_memory=False`.
May be the low hanging fruit is to add `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES` env var to match the config, and if the user sets it to 0, then it'll be the same as `keep_in... | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 58 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.21584858000278473,
0.020310932770371437,
0.042242590337991714,
0.14041508734226227,
0.1821746975183487,
0.0462312325835228,
0.14098100364208221,
0.3139573037624359,
0.05515850707888603,
-0.06445320695638657,
-0.23593808710575104,
-0.05794750526547432,
-0.024700535461306572,
-0.331577807... |
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | @stas00, however, for the moment, setting the value to `0` is equivalent to the opposite, i.e. `keep_in_memory=True`. This means the max size until which I load in memory is 0 bytes.
Tell me if this is logical/convenient, or I should change it. | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 42 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.21584858000278473,
0.020310932770371437,
0.042242590337991714,
0.14041508734226227,
0.1821746975183487,
0.0462312325835228,
0.14098100364208221,
0.3139573037624359,
0.05515850707888603,
-0.06445320695638657,
-0.23593808710575104,
-0.05794750526547432,
-0.024700535461306572,
-0.331577807... |
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | In my PR, to turn off current default bahavior, you should set env variable to one of: `{"", "OFF", "NO", "FALSE"}`.
For example:
```
MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=
``` | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 26 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.21584858000278473,
0.020310932770371437,
0.042242590337991714,
0.14041508734226227,
0.1821746975183487,
0.0462312325835228,
0.14098100364208221,
0.3139573037624359,
0.05515850707888603,
-0.06445320695638657,
-0.23593808710575104,
-0.05794750526547432,
-0.024700535461306572,
-0.331577807... |
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | IMHO, this behaviour is not very intuitive, as 0 is a normal quantity of bytes. So `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=0` to me reads as don't cache ever.
Also "SIZE_IN_BYTES" that can take one of `{"", "OFF", "NO", "FALSE"}` is also quite odd.
I think supporting a very simple `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTE... | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 89 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.21584858000278473,
0.020310932770371437,
0.042242590337991714,
0.14041508734226227,
0.1821746975183487,
0.0462312325835228,
0.14098100364208221,
0.3139573037624359,
0.05515850707888603,
-0.06445320695638657,
-0.23593808710575104,
-0.05794750526547432,
-0.024700535461306572,
-0.331577807... |
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | I understand your point @stas00, as I am not very convinced with current implementation.
My concern is: which numerical value should then pass a user who wants `keep_in_memory=True` by default, independently of dataset size? Currently it is `0` for this case. | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 41 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.21584858000278473,
0.020310932770371437,
0.042242590337991714,
0.14041508734226227,
0.1821746975183487,
0.0462312325835228,
0.14098100364208221,
0.3139573037624359,
0.05515850707888603,
-0.06445320695638657,
-0.23593808710575104,
-0.05794750526547432,
-0.024700535461306572,
-0.331577807... |
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | That's a good question, and again the normal bytes can be used for that:
```
MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=1e12 # (~2**40)
```
Since it's unlikely that anybody will have more than 1TB RAM.
It's also silly that it uses BYTES and not MBYTES - that level of refinement doesn't seem to be of a practical use in ... | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 127 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.21584858000278473,
0.020310932770371437,
0.042242590337991714,
0.14041508734226227,
0.1821746975183487,
0.0462312325835228,
0.14098100364208221,
0.3139573037624359,
0.05515850707888603,
-0.06445320695638657,
-0.23593808710575104,
-0.05794750526547432,
-0.024700535461306572,
-0.331577807... |
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | Great! Thanks, @stas00.
I am implementing your suggestion to turn off default value when set to `0`.
For the other suggestion (allowing different metric prefixes), I will discuss with @lhoestq to agree on its implementation. | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 35 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.21584858000278473,
0.020310932770371437,
0.042242590337991714,
0.14041508734226227,
0.1821746975183487,
0.0462312325835228,
0.14098100364208221,
0.3139573037624359,
0.05515850707888603,
-0.06445320695638657,
-0.23593808710575104,
-0.05794750526547432,
-0.024700535461306572,
-0.331577807... |
https://github.com/huggingface/datasets/issues/2377 | ArrowDataset.save_to_disk produces files that cannot be read using pyarrow.feather | Hi ! This is because we are actually using the arrow streaming format. We plan to switch to the arrow IPC format.
More info at #1933 | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from pyarrow import feather
dataset = load_dataset('imdb', split='train')
dataset.save_to_disk('dataset_dir')
table = feather.read_table('dataset_dir/dataset.arro... | 26 | ArrowDataset.save_to_disk produces files that cannot be read using pyarrow.feather
## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from pyarrow import feather
dataset = load_dataset('imdb', split='train')
data... | [
-0.20403321087360382,
0.16021889448165894,
-0.02409166842699051,
0.37150099873542786,
0.14897434413433075,
0.25441041588783264,
-0.006909106392413378,
0.5975210070610046,
-0.516123354434967,
-0.11885382235050201,
-0.36670395731925964,
0.6461247205734253,
-0.0722651407122612,
-0.74788349866... |
https://github.com/huggingface/datasets/issues/2373 | Loading dataset from local path | Version below works, checked again in the docs, and data_files should be a path.
```
ds = datasets.load_dataset('my_script.py',
data_files='/data/dir/corpus.txt',
cache_dir='.')
``` | I'm trying to load a local dataset with the code below
```
ds = datasets.load_dataset('my_script.py',
data_files='corpus.txt',
data_dir='/data/dir',
cache_dir='.')
```
But internally a BuilderConfig is created, which tries to u... | 21 | Loading dataset from local path
I'm trying to load a local dataset with the code below
```
ds = datasets.load_dataset('my_script.py',
data_files='corpus.txt',
data_dir='/data/dir',
cache_dir='.')
```
But internally a BuilderC... | [
-0.36348938941955566,
0.22408446669578552,
0.05344430357217789,
0.4514977037906647,
0.1415300965309143,
-0.12070620059967041,
0.4987969994544983,
0.06493589282035828,
0.11394523829221725,
0.12790358066558838,
0.16706225275993347,
0.1164790540933609,
-0.00002638513979036361,
0.1618971675634... |
https://github.com/huggingface/datasets/issues/2363 | Trying to use metric.compute but get OSError | also, I test the function on some little data , get the same message:
```
Python 3.8.5 (default, Jan 27 2021, 15:41:15)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_metric
>>> metric = load_metric('accuracy')
>>> metric.add_batch(pre... | I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 model.train()
197 for step, batch... | 113 | Trying to use metric.compute but get OSError
I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 ... | [
-0.39441871643066406,
-0.2529711425304413,
-0.03301170468330383,
0.36319419741630554,
0.2531311511993408,
-0.1011047437787056,
0.2328529953956604,
0.139267235994339,
0.21068891882896423,
0.6745795011520386,
-0.027120595797896385,
0.06741409003734589,
-0.056196894496679306,
0.00079509109491... |
https://github.com/huggingface/datasets/issues/2363 | Trying to use metric.compute but get OSError | Hi @hyusterr,
If you look at the example provided in `metrics/accuracy.py`, it only does `metric.compute()` to calculate the accuracy. Here's an example:
```
from datasets import load_metric
metric = load_metric('accuracy')
output = metric.compute(predictions=[1, 1, 1, 1], references=[1, 1, 0, 0])
print(output['a... | I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 model.train()
197 for step, batch... | 44 | Trying to use metric.compute but get OSError
I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 ... | [
-0.39441871643066406,
-0.2529711425304413,
-0.03301170468330383,
0.36319419741630554,
0.2531311511993408,
-0.1011047437787056,
0.2328529953956604,
0.139267235994339,
0.21068891882896423,
0.6745795011520386,
-0.027120595797896385,
0.06741409003734589,
-0.056196894496679306,
0.00079509109491... |
https://github.com/huggingface/datasets/issues/2363 | Trying to use metric.compute but get OSError | I thought I can use Metric to collect predictions and references, this follows the step from huggingface's sample colab.
BTW, I fix the problem by setting other cache_dir in load_metric, but I'm still wondering about the mechanism. | I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 model.train()
197 for step, batch... | 37 | Trying to use metric.compute but get OSError
I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 ... | [
-0.39441871643066406,
-0.2529711425304413,
-0.03301170468330383,
0.36319419741630554,
0.2531311511993408,
-0.1011047437787056,
0.2328529953956604,
0.139267235994339,
0.21068891882896423,
0.6745795011520386,
-0.027120595797896385,
0.06741409003734589,
-0.056196894496679306,
0.00079509109491... |
https://github.com/huggingface/datasets/issues/2363 | Trying to use metric.compute but get OSError | I tried this code on a colab notebook and it worked fine (with gpu enabled):
```
from datasets import load_metric
metric = load_metric('accuracy')
output = metric.add_batch(predictions=[1, 1, 1, 1], references=[1, 1, 0, 0])
final_score = metric.compute()
print(final_score) # 0.5
```
Also, in `load_metric`, I s... | I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 model.train()
197 for step, batch... | 53 | Trying to use metric.compute but get OSError
I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 ... | [
-0.39441871643066406,
-0.2529711425304413,
-0.03301170468330383,
0.36319419741630554,
0.2531311511993408,
-0.1011047437787056,
0.2328529953956604,
0.139267235994339,
0.21068891882896423,
0.6745795011520386,
-0.027120595797896385,
0.06741409003734589,
-0.056196894496679306,
0.00079509109491... |
https://github.com/huggingface/datasets/issues/2363 | Trying to use metric.compute but get OSError | Hi ! By default it caches the predictions and references used to compute the metric in `~/.cache/huggingface/datasets/metrics` (not `~/.datasets/`). Let me update the documentation @bhavitvyamalik .
The cache is used to store all the predictions and references passed to `add_batch` for example in order to compute th... | I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 model.train()
197 for step, batch... | 87 | Trying to use metric.compute but get OSError
I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 ... | [
-0.39441871643066406,
-0.2529711425304413,
-0.03301170468330383,
0.36319419741630554,
0.2531311511993408,
-0.1011047437787056,
0.2328529953956604,
0.139267235994339,
0.21068891882896423,
0.6745795011520386,
-0.027120595797896385,
0.06741409003734589,
-0.056196894496679306,
0.00079509109491... |
https://github.com/huggingface/datasets/issues/2356 | How to Add New Metrics Guide | Hi ! sorry for the late response
It would be fantastic to have a guide for adding metrics as well ! Currently we only have this template here:
https://github.com/huggingface/datasets/blob/master/templates/new_metric_script.py
We can also include test utilities for metrics in the guide.
We have a pytest suite... | **Is your feature request related to a problem? Please describe.**
Currently there is an absolutely fantastic guide for how to contribute a new dataset to the library. However, there isn't one for adding new metrics.
**Describe the solution you'd like**
I'd like for a guide in a similar style to the dataset guide ... | 176 | How to Add New Metrics Guide
**Is your feature request related to a problem? Please describe.**
Currently there is an absolutely fantastic guide for how to contribute a new dataset to the library. However, there isn't one for adding new metrics.
**Describe the solution you'd like**
I'd like for a guide in a simi... | [
0.01842932030558586,
0.009165908209979534,
-0.0002698737080208957,
-0.15583206713199615,
0.03944471850991249,
0.08829467743635178,
-0.06632766872644424,
0.03053980879485607,
0.02642138861119747,
0.08176086097955704,
0.01602325774729252,
0.26922428607940674,
-0.13361269235610962,
0.27912622... |
https://github.com/huggingface/datasets/issues/2350 | `FaissIndex.save` throws error on GPU | Just in case, this is a workaround that I use in my code and it seems to do the job.
```python
if use_gpu_index:
data["train"]._indexes["text_emb"].faiss_index = faiss.index_gpu_to_cpu(data["train"]._indexes["text_emb"].faiss_index)
``` | ## Describe the bug
After training an index with a factory string `OPQ16_128,IVF512,PQ32` on GPU, `.save_faiss_index` throws this error.
```
File "index_wikipedia.py", line 119, in <module>
data["train"].save_faiss_index("text_emb", index_save_path)
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8... | 27 | `FaissIndex.save` throws error on GPU
## Describe the bug
After training an index with a factory string `OPQ16_128,IVF512,PQ32` on GPU, `.save_faiss_index` throws this error.
```
File "index_wikipedia.py", line 119, in <module>
data["train"].save_faiss_index("text_emb", index_save_path)
File "/home/v... | [
-0.11386918276548386,
0.23111629486083984,
0.04932146519422531,
0.2159426361322403,
0.3592373728752136,
0.16243861615657806,
0.5158488750457764,
0.5090636610984802,
0.25088971853256226,
0.22490347921848297,
0.06692984700202942,
0.23696017265319824,
0.05093231052160263,
-0.06800377368927002... |
https://github.com/huggingface/datasets/issues/2347 | Add an API to access the language and pretty name of a dataset | Hi ! With @bhavitvyamalik we discussed about having something like
```python
from datasets import load_dataset_card
dataset_card = load_dataset_card("squad")
print(dataset_card.metadata.pretty_name)
# Stanford Question Answering Dataset (SQuAD)
print(dataset_card.metadata.languages)
# ["en"]
```
What do yo... | It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Transformers examples scripts. | 95 | Add an API to access the language and pretty name of a dataset
It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Trans... | [
-0.18838678300380707,
0.11107898503541946,
-0.11896520107984543,
0.3318738639354706,
0.3078961670398712,
0.12758108973503113,
0.2714891731739044,
0.2705742418766022,
-0.16304756700992584,
0.23702988028526306,
0.1823674887418747,
0.530703604221344,
-0.2200145572423935,
0.3426443636417389,
... |
https://github.com/huggingface/datasets/issues/2347 | Add an API to access the language and pretty name of a dataset | What dataset_info method are you talking about @julien-c ? In `huggingface_hub` I can only see `model_info`. | It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Transformers examples scripts. | 16 | Add an API to access the language and pretty name of a dataset
It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Trans... | [
-0.2759428322315216,
-0.14917504787445068,
-0.0873623713850975,
0.4150778651237488,
0.3421383798122406,
0.11897239089012146,
0.38973894715309143,
0.3709771931171417,
-0.022360844537615776,
0.3746368885040283,
0.0779033973813057,
0.5090551376342773,
-0.060151174664497375,
0.3841820657253265... |
https://github.com/huggingface/datasets/issues/2345 | [Question] How to move and reuse preprocessed dataset? | <s>Hi :) Can you share with us the code you used ?</s>
EDIT: from https://github.com/huggingface/transformers/issues/11665#issuecomment-838348291 I understand you're using the run_clm.py script. Can you share your logs ?
| Hi, I am training a gpt-2 from scratch using run_clm.py.
I want to move and reuse the preprocessed dataset (It take 2 hour to preprocess),
I tried to :
copy path_to_cache_dir/datasets to new_cache_dir/datasets
set export HF_DATASETS_CACHE="new_cache_dir/"
but the program still re-preprocess the whole dataset... | 28 | [Question] How to move and reuse preprocessed dataset?
Hi, I am training a gpt-2 from scratch using run_clm.py.
I want to move and reuse the preprocessed dataset (It take 2 hour to preprocess),
I tried to :
copy path_to_cache_dir/datasets to new_cache_dir/datasets
set export HF_DATASETS_CACHE="new_cache_di... | [
-0.16012407839298248,
-0.3261338472366333,
0.06538311392068863,
0.33965083956718445,
0.28646010160446167,
0.17157304286956787,
0.13545756042003632,
0.18122412264347076,
-0.08091142028570175,
-0.1558372527360916,
-0.13778996467590332,
0.03987719491124153,
-0.16895928978919983,
0.05871983245... |
https://github.com/huggingface/datasets/issues/2345 | [Question] How to move and reuse preprocessed dataset? | Also note that for the caching to work, you must reuse the exact same parameters as in the first run. Did you change any parameter ? The `preprocessing_num_workers` should also stay the same | Hi, I am training a gpt-2 from scratch using run_clm.py.
I want to move and reuse the preprocessed dataset (It take 2 hour to preprocess),
I tried to :
copy path_to_cache_dir/datasets to new_cache_dir/datasets
set export HF_DATASETS_CACHE="new_cache_dir/"
but the program still re-preprocess the whole dataset... | 33 | [Question] How to move and reuse preprocessed dataset?
Hi, I am training a gpt-2 from scratch using run_clm.py.
I want to move and reuse the preprocessed dataset (It take 2 hour to preprocess),
I tried to :
copy path_to_cache_dir/datasets to new_cache_dir/datasets
set export HF_DATASETS_CACHE="new_cache_di... | [
-0.2378593236207962,
-0.15300802886486053,
0.0416560024023056,
0.30848467350006104,
0.1833171248435974,
0.21348848938941956,
-0.05760880932211876,
0.2500772774219513,
-0.10958166420459747,
-0.22529207170009613,
-0.04646367207169533,
0.15800416469573975,
-0.1260225772857666,
-0.035717170685... |
https://github.com/huggingface/datasets/issues/2345 | [Question] How to move and reuse preprocessed dataset? | > Also note that for the caching to work, you must reuse the exact same parameters as in the first run. Did you change any parameter ? The `preprocessing_num_workers` should also stay the same
I only changed the `preprocessing_num_workers` maybe it is the problem~ I will try again~ | Hi, I am training a gpt-2 from scratch using run_clm.py.
I want to move and reuse the preprocessed dataset (It take 2 hour to preprocess),
I tried to :
copy path_to_cache_dir/datasets to new_cache_dir/datasets
set export HF_DATASETS_CACHE="new_cache_dir/"
but the program still re-preprocess the whole dataset... | 48 | [Question] How to move and reuse preprocessed dataset?
Hi, I am training a gpt-2 from scratch using run_clm.py.
I want to move and reuse the preprocessed dataset (It take 2 hour to preprocess),
I tried to :
copy path_to_cache_dir/datasets to new_cache_dir/datasets
set export HF_DATASETS_CACHE="new_cache_di... | [
-0.23027929663658142,
-0.18237082660198212,
0.041217099875211716,
0.3404657542705536,
0.19807103276252747,
0.20862628519535065,
-0.03750691935420036,
0.251460999250412,
-0.10370364040136337,
-0.2087678760290146,
-0.031436290591955185,
0.16302575170993805,
-0.13057352602481842,
-0.044030692... |
https://github.com/huggingface/datasets/issues/2344 | Is there a way to join multiple datasets in one? | Hi ! We don't have `join`/`merge` on a certain column as in pandas.
Maybe you can just use the [concatenate_datasets](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate#datasets.concatenate_datasets) function.
| **Is your feature request related to a problem? Please describe.**
I need to join 2 datasets, one that is in the hub and another I've created from my files. Is there an easy way to join these 2?
**Describe the solution you'd like**
Id like to join them with a merge or join method, just like pandas dataframes.
**Add... | 21 | Is there a way to join multiple datasets in one?
**Is your feature request related to a problem? Please describe.**
I need to join 2 datasets, one that is in the hub and another I've created from my files. Is there an easy way to join these 2?
**Describe the solution you'd like**
Id like to join them with a merge o... | [
-0.4773367643356323,
-0.6935086846351624,
-0.09794393181800842,
0.17958742380142212,
0.13994434475898743,
0.3247908651828766,
-0.2019500881433487,
0.09135663509368896,
0.023280765861272812,
0.03147125244140625,
-0.5830705165863037,
-0.01475087832659483,
0.20583777129650116,
0.4802640378475... |
https://github.com/huggingface/datasets/issues/2337 | NonMatchingChecksumError for web_of_science dataset | I've raised a PR for this. Should work with `dataset = load_dataset("web_of_science", "WOS11967", ignore_verifications=True)`once it gets merged into the main branch. Thanks for reporting this! | NonMatchingChecksumError when trying to download the web_of_science dataset.
>NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://data.mendeley.com/datasets/9rw3vkcfy4/6/files/c9ea673d-5542-44c0-ab7b-f1311f7d61df/WebOfScience.zip?dl=1']
Setting `ignore_verfications=True` results... | 25 | NonMatchingChecksumError for web_of_science dataset
NonMatchingChecksumError when trying to download the web_of_science dataset.
>NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://data.mendeley.com/datasets/9rw3vkcfy4/6/files/c9ea673d-5542-44c0-ab7b-f1311f7d61df/WebOfScience.zi... | [
-0.08515828847885132,
-0.07917497307062149,
-0.06877348572015762,
0.21224834024906158,
0.14462944865226746,
0.16705013811588287,
-0.06744653731584549,
0.356159508228302,
0.3357108533382416,
0.1703767329454422,
-0.184128537774086,
-0.0882943794131279,
0.03202442824840546,
-0.049100816249847... |
https://github.com/huggingface/datasets/issues/2330 | Allow passing `desc` to `tqdm` in `Dataset.map()` | I think the user could pass the `desc` parameter to `map` so that it can be displayed in the tqdm progress bar, as suggested by @cccntu.
When there's no multiprocessing, the `desc` of the progress bar could be the `desc` passed by the user.
In multiprocessing, we were already using a `desc` equal to `"#" + str(rank... | It's normal to have many `map()` calls, and some of them can take a few minutes,
it would be nice to have a description on the progress bar.
Alternative solution:
Print the description before/after the `map()` call. | 145 | Allow passing `desc` to `tqdm` in `Dataset.map()`
It's normal to have many `map()` calls, and some of them can take a few minutes,
it would be nice to have a description on the progress bar.
Alternative solution:
Print the description before/after the `map()` call.
I think the user could pass the `desc` parame... | [
-0.35430338978767395,
0.0359199233353138,
-0.06053311005234718,
-0.1690315306186676,
0.37445056438446045,
-0.16675710678100586,
0.2543702721595764,
0.21774381399154663,
-0.3104320168495178,
0.4162532091140747,
0.2674727737903595,
0.6729352474212646,
-0.0182513277977705,
0.08399834483861923... |
https://github.com/huggingface/datasets/issues/2327 | A syntax error in example | cc @beurkinger but I think this has been fixed internally and will soon be updated right ? | 
Sorry to report with an image, I can't find the template source code of this snippet. | 17 | A syntax error in example

Sorry to report with an image, I can't find the template source code of this snippet.
cc @beurkinger but I think this has been fixed internally and will soon be updated right ? | [
0.07156345993280411,
-0.4951070249080658,
-0.17972807586193085,
-0.1540905088186264,
0.07747307419776917,
-0.21504434943199158,
0.19750089943408966,
0.2825199365615845,
-0.3641221225261688,
0.16879171133041382,
0.2558271586894989,
0.39970484375953674,
0.052938684821128845,
0.00390873709693... |
https://github.com/huggingface/datasets/issues/2323 | load_dataset("timit_asr") gives back duplicates of just one sample text | Thanks @ekeleshian for having reported.
I am closing this issue once that you updated `datasets`. Feel free to reopen it if the problem persists. | ## Describe the bug
When you look up on key ["train"] and then ['text'], you get back a list with just one sentence duplicated 4620 times. Namely, the sentence "Would such an act of refusal be useful?". Similarly when you look up ['test'] and then ['text'], the list is one sentence repeated "The bungalow was pleasant... | 24 | load_dataset("timit_asr") gives back duplicates of just one sample text
## Describe the bug
When you look up on key ["train"] and then ['text'], you get back a list with just one sentence duplicated 4620 times. Namely, the sentence "Would such an act of refusal be useful?". Similarly when you look up ['test'] and t... | [
0.2821880877017975,
-0.48902955651283264,
0.06362927705049515,
0.38354843854904175,
0.15507684648036957,
0.009322430938482285,
0.2025512009859085,
0.19717755913734436,
-0.19604292511940002,
0.022564003244042397,
-0.053785864263772964,
0.25881636142730713,
-0.06076079607009888,
-0.001916359... |
https://github.com/huggingface/datasets/issues/2322 | Calls to map are not cached. | I tried upgrading to `datasets==1.6.2` and downgrading to `1.6.0`. Both versions produce the same output.
Downgrading to `1.5.0` works and produces the following output for me:
```bash
Downloading: 9.20kB [00:00, 3.94MB/s]
Downloading: 5.99kB [00:00, 3.29MB/s]
No config sp... | ## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):
print("executed", i[:10])... | 387 | Calls to map are not cached.
## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):... | [
-0.1652984917163849,
-0.39597171545028687,
-0.008847803808748722,
0.14566098153591156,
0.20624323189258575,
-0.10191751271486282,
0.30026817321777344,
0.12512896955013275,
0.3953081965446472,
0.020620401948690414,
-0.03697800636291504,
0.242855042219162,
0.1699715256690979,
-0.284935653209... |
https://github.com/huggingface/datasets/issues/2322 | Calls to map are not cached. | Hi,
set `keep_in_memory` to False when loading a dataset (`sst = load_dataset("sst", keep_in_memory=False)`) to prevent it from loading in-memory. Currently, in-memory datasets fail to find cached files due to this check (always False for them):
https://github.com/huggingface/datasets/blob/241a0b4a3a868778ee91e76... | ## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):
print("executed", i[:10])... | 46 | Calls to map are not cached.
## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):... | [
-0.1652984917163849,
-0.39597171545028687,
-0.008847803808748722,
0.14566098153591156,
0.20624323189258575,
-0.10191751271486282,
0.30026817321777344,
0.12512896955013275,
0.3953081965446472,
0.020620401948690414,
-0.03697800636291504,
0.242855042219162,
0.1699715256690979,
-0.284935653209... |
https://github.com/huggingface/datasets/issues/2322 | Calls to map are not cached. | Hi @villmow, thanks for reporting.
As @mariosasko has pointed out, we did not consider this case when introducing the feature of automatic in-memory for small datasets. This needs to be fixed. | ## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):
print("executed", i[:10])... | 31 | Calls to map are not cached.
## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):... | [
-0.1652984917163849,
-0.39597171545028687,
-0.008847803808748722,
0.14566098153591156,
0.20624323189258575,
-0.10191751271486282,
0.30026817321777344,
0.12512896955013275,
0.3953081965446472,
0.020620401948690414,
-0.03697800636291504,
0.242855042219162,
0.1699715256690979,
-0.284935653209... |
https://github.com/huggingface/datasets/issues/2322 | Calls to map are not cached. | Hi ! Currently a dataset that is in memory doesn't know doesn't know in which directory it has to read/write cache files.
On the other hand, a dataset that loaded from the disk (via memory mapping) uses the directory from which the dataset is located to read/write cache files.
Because of that, currently in-memory d... | ## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):
print("executed", i[:10])... | 82 | Calls to map are not cached.
## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):... | [
-0.1652984917163849,
-0.39597171545028687,
-0.008847803808748722,
0.14566098153591156,
0.20624323189258575,
-0.10191751271486282,
0.30026817321777344,
0.12512896955013275,
0.3953081965446472,
0.020620401948690414,
-0.03697800636291504,
0.242855042219162,
0.1699715256690979,
-0.284935653209... |
https://github.com/huggingface/datasets/issues/2319 | UnicodeDecodeError for OSCAR (Afrikaans) | Thanks for reporting, @sgraaf.
I am going to have a look at it.
I guess the expected codec is "UTF-8". Normally, when no explicitly codec is passed, Python uses one which is platform-dependent. For Linux machines, the default codec is `utf_8`, which is OK. However for Windows machine, the default codec is `cp125... | ## Describe the bug
When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("oscar", "unshuffled_deduplicated_af")
```... | 57 | UnicodeDecodeError for OSCAR (Afrikaans)
## Describe the bug
When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset(... | [
-0.0694422721862793,
-0.06897531449794769,
-0.06865914911031723,
0.5462199449539185,
0.49986955523490906,
0.02358071878552437,
0.16044268012046814,
0.1600639969110489,
-0.25320765376091003,
0.26161664724349976,
0.10814129561185837,
0.07456579804420471,
-0.13298742473125458,
-0.162126675248... |
https://github.com/huggingface/datasets/issues/2319 | UnicodeDecodeError for OSCAR (Afrikaans) | @sgraaf, I have just merged the fix in the master branch.
You can either:
- install `datasets` from source code
- wait until we make the next release of `datasets`
- set the `utf-8` codec as your default instead of `cp1252`. This can be done by activating the Python [UTF-8 mode](https://www.python.org/dev/peps/pe... | ## Describe the bug
When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("oscar", "unshuffled_deduplicated_af")
```... | 66 | UnicodeDecodeError for OSCAR (Afrikaans)
## Describe the bug
When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset(... | [
-0.0694422721862793,
-0.06897531449794769,
-0.06865914911031723,
0.5462199449539185,
0.49986955523490906,
0.02358071878552437,
0.16044268012046814,
0.1600639969110489,
-0.25320765376091003,
0.26161664724349976,
0.10814129561185837,
0.07456579804420471,
-0.13298742473125458,
-0.162126675248... |
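The last option in the comment above refers to Python's UTF-8 mode (PEP 540), which overrides the platform-dependent default codec such as `cp1252` on Windows. Below is a minimal sketch of how it can be activated; the dataset call is the same one from the issue.
```python
# Enable UTF-8 mode for a single run:
#   python -X utf8 load_oscar.py
# Or enable it via the environment before starting Python:
#   set PYTHONUTF8=1        (Windows cmd)
#   export PYTHONUTF8=1     (bash)

from datasets import load_dataset

# With UTF-8 mode active, text files are decoded as utf-8 by default,
# so the Afrikaans OSCAR dump no longer trips over cp1252.
dataset = load_dataset("oscar", "unshuffled_deduplicated_af")
```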
https://github.com/huggingface/datasets/issues/2318 | [api request] API to obtain "dataset_module" dynamic path? | Hi @richardliaw,
First, thanks for the compliments.
In relation to your request, the dynamic modules path is currently obtained this way:
```python
from datasets.load import init_dynamic_modules, MODULE_NAME_FOR_DYNAMIC_MODULES
dynamic_modules_path = init_dynamic_modules(MODULE_NAME_FOR_DYNAMIC_MODULES)
... | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
This is an awesome library.
It seems like the dynamic module path in this library has broken some of hyperparameter tuning functionality: https://discuss.huggingface.co/t/using-hyperparamet... | 63 | [api request] API to obtain "dataset_module" dynamic path?
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
This is an awesome library.
It seems like the dynamic module path in this library has broken some of hyperparameter tuning funct... | [
-0.10380298644304276,
-0.2795987129211426,
-0.13102872669696808,
0.05274278670549393,
0.3254472613334656,
-0.26348721981048584,
-0.06139547377824783,
0.1563362032175064,
-0.2127552479505539,
0.3393891453742981,
-0.07494526356458664,
0.7778300642967224,
-0.472523033618927,
0.316654294729232... |
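For context, here is a short usage sketch built only from the two names quoted in the maintainer's comment above; the idea of re-adding the path to `sys.path` in a tuning worker is an illustrative assumption, not a documented API of the library.
```python
import sys

from datasets.load import init_dynamic_modules, MODULE_NAME_FOR_DYNAMIC_MODULES

# Resolve (and create if needed) the directory where dataset/metric scripts
# are installed as dynamic modules.
dynamic_modules_path = init_dynamic_modules(MODULE_NAME_FOR_DYNAMIC_MODULES)
print("dynamic modules live in:", dynamic_modules_path)

# Hypothetical: a worker process spawned by a hyperparameter-tuning framework
# could append this directory to sys.path so the dynamic modules stay importable.
if dynamic_modules_path not in sys.path:
    sys.path.append(dynamic_modules_path)
```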
https://github.com/huggingface/datasets/issues/2318 | [api request] API to obtain "dataset_module" dynamic path? | Hi @richardliaw, the feature is on the master branch and will be included in the next release in a couple of weeks. | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
This is an awesome library.
It seems like the dynamic module path in this library has broken some of hyperparameter tuning functionality: https://discuss.huggingface.co/t/using-hyperparamet... | 22 | [api request] API to obtain "dataset_module" dynamic path?
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
This is an awesome library.
It seems like the dynamic module path in this library has broken some of hyperparameter tuning funct... | [
-0.12703987956047058,
-0.22530506551265717,
-0.1368042230606079,
0.12058556079864502,
0.294010192155838,
-0.30054864287376404,
-0.0833774283528328,
0.17475028336048126,
-0.21367326378822327,
0.3675940930843353,
-0.06890172511339188,
0.7768820524215698,
-0.46218428015708923,
0.3776903450489... |
https://github.com/huggingface/datasets/issues/2308 | Add COCO evaluation metrics | Hi @NielsRogge,
I'd like to contribute these metrics to datasets. Shall we start with `CocoEvaluator` first? Currently, how are you sending the ground truths and predictions in coco_evaluator?
| I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects which are defined in the original repository ([here... | 28 | Add COCO evaluation metrics
I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects which are defined in ... | [
-0.3099650740623474,
-0.21098625659942627,
-0.05908068269491196,
0.03650001436471939,
0.1357957124710083,
-0.12801209092140198,
0.03278497979044914,
-0.14469532668590546,
-0.2019300013780594,
0.12698112428188324,
-0.6827289462089539,
0.12396987527608871,
-0.13958074152469635,
0.12380193173... |
https://github.com/huggingface/datasets/issues/2308 | Add COCO evaluation metrics | Great!
Here's a notebook that illustrates how I'm using `CocoEvaluator`: https://drive.google.com/file/d/1VV92IlaUiuPOORXULIuAdtNbBWCTCnaj/view?usp=sharing
The evaluation is near the end of the notebook.
| I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects which are defined in the original repository ([here... | 20 | Add COCO evaluation metrics
I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects which are defined in ... | [
-0.2997460961341858,
-0.21730875968933105,
-0.06629110127687454,
0.02016844041645527,
0.12738299369812012,
-0.14389529824256897,
0.029904337599873543,
-0.14437271654605865,
-0.20418782532215118,
0.1345137655735016,
-0.6726329326629639,
0.12211496382951736,
-0.13538891077041626,
0.129140213... |
https://github.com/huggingface/datasets/issues/2308 | Add COCO evaluation metrics | I went through the code you've [mentioned](https://github.com/facebookresearch/detr/blob/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5/datasets/coco_eval.py) and I think there are 2 options on how we can go ahead:
1) Implement how DETR people have done this (they're relying very heavily on the official implementation and... | I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects which are defined in the original repository ([here... | 133 | Add COCO evaluation metrics
I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects which are defined in ... | [
-0.27624547481536865,
-0.20859147608280182,
-0.06540317088365555,
0.046630315482616425,
0.1744096428155899,
-0.1518915295600891,
0.02464929036796093,
-0.13047006726264954,
-0.19319698214530945,
0.12735682725906372,
-0.6784846782684326,
0.12130969762802124,
-0.1210908517241478,
0.1138262897... |
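The issue-2308 thread above discusses wrapping DETR's `CocoEvaluator`; underneath, that class drives the standard `pycocotools` evaluation loop. A minimal sketch of that flow is shown below, with the annotation path and the single prediction dict being made-up example values.
```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground-truth annotations in COCO json format (example path, not from the issue).
coco_gt = COCO("annotations/instances_val2017.json")

# Predictions in COCO "results" format: image_id, category_id, bbox (x, y, w, h), score.
predictions = [
    {"image_id": 42, "category_id": 18, "bbox": [258.0, 41.0, 348.0, 243.0], "score": 0.92},
]
coco_dt = coco_gt.loadRes(predictions)

# Match detections to ground truth, accumulate per-IoU/per-area stats,
# and print the usual AP/AR summary table.
coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()
```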