| html_url (string, lengths 48-51) | title (string, lengths 5-268) | comments (string, lengths 63-51.8k) | body (string, lengths 0-36.2k, nullable) | comment_length (int64, 16-1.52k) | text (string, lengths 164-54.1k) | embeddings (list of floats) |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/2243 | Map is slow and processes batches one after another | Hi @villmow, thanks for reporting.
Could you please try with the Datasets version 1.6? We released it yesterday and it fixes some issues about the processing speed. You can see the fix implemented by @lhoestq here: #2122.
Once you update Datasets, please confirm if the problem persists. | ## Describe the bug
I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 million samples) this problem occurs. That's why I can't giv... | 47 | Map is slow and processes batches one after another
## Describe the bug
I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 milli... | [
-0.3536399007,
-0.4107637703,
-0.0976410657,
0.2570492029,
-0.0076194503,
0.1119760126,
0.1884652227,
0.3910496831,
0.4355854094,
-0.01323881,
0.2567104399,
0.3540892601,
0.0090388618,
-0.151752457,
-0.1746493876,
0.0698187426,
0.1166414246,
-0.143186748,
-0.0087069068,
-0.1623... |
https://github.com/huggingface/datasets/issues/2243 | Map is slow and processes batches one after another | Hi @albertvillanova, thanks for the reply. I just tried the new version and the problem still persists.
Do I need to rebuild the saved dataset (which I load from disk) with the 1.6.0 version of datasets? My script loads this dataset and creates new datasets from it. I tried it without rebuilding.
See this short ... | ## Describe the bug
I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 million samples) this problem occurs. That's why I can't giv... | 70 | Map is slow and processes batches one after another
## Describe the bug
I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 milli... | [
-0.3630150259,
-0.3355690241,
-0.0869322494,
0.291497916,
0.0048047183,
0.0850248784,
0.2076529264,
0.3980060518,
0.4348948002,
-0.0089639975,
0.2125944048,
0.2832152843,
-0.0250466373,
-0.2042584568,
-0.1974658221,
0.157870248,
0.1223638058,
-0.089183718,
-0.0129012587,
-0.194... |
https://github.com/huggingface/datasets/issues/2243 | Map is slow and processes batches one after another | There can be a bit of delay between the creations of the processes but this delay should be the same for both your `map` calls. We should look into this.
Also if you have some code that reproduces this issue on google colab that'd be really useful !
Regarding the speed differences:
This looks like a similar issue a... | ## Describe the bug
I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 million samples) this problem occurs. That's why I can't giv... | 103 | Map is slow and processes batches one after another
## Describe the bug
I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 milli... | [
-0.319485575,
-0.4774054289,
-0.074757643,
0.3302207291,
-0.0672012717,
0.0412486494,
0.2097328156,
0.3263497949,
0.4367240667,
0.0293655656,
0.2416933179,
0.3490332365,
0.0190937854,
-0.0388513245,
-0.1365030706,
0.005576733,
0.1733697504,
-0.1191432849,
0.016522456,
-0.137628... |
https://github.com/huggingface/datasets/issues/2243 | Map is slow and processes batches one after another | Upgrade to 1.6.1 solved my problem somehow. I did not change any of my code, but now it starts all processes around the same time. | ## Describe the bug
I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 million samples) this problem occurs. That's why I can't giv... | 25 | Map is slow and processes batches one after another
## Describe the bug
I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 milli... | [
-0.3285203874,
-0.4251143932,
-0.0902181864,
0.3159683645,
0.017064875,
0.0901565328,
0.2094432116,
0.4447202086,
0.4825790823,
0.0649624839,
0.2510420978,
0.3160923123,
-0.0560899004,
-0.3143903911,
-0.2324121743,
0.1299516857,
0.1601344347,
-0.1192272305,
-0.0551297031,
-0.20... |
https://github.com/huggingface/datasets/issues/2243 | Map is slow and processes batches one after another | Nice ! I'm glad this works now.
Closing for now, but feel free to re-open if you experience this issue again. | ## Describe the bug
I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 million samples) this problem occurs. That's why I can't giv... | 21 | Map is slow and processes batches one after another
## Describe the bug
I have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 milli... | [
-0.3070742786,
-0.469504267,
-0.0922935605,
0.2605443597,
0.0237791855,
0.0816691369,
0.2060970068,
0.3872797191,
0.4719320238,
0.020734895,
0.2926185131,
0.3568670154,
0.0152329588,
-0.1723435968,
-0.1459612548,
0.0627288669,
0.1288415492,
-0.1695840806,
0.0138114076,
-0.17984... |
https://github.com/huggingface/datasets/issues/2239 | Error loading wikihow dataset | Hi @odellus, thanks for reporting.
The `wikihow` dataset has 2 versions:
- `all`: Consisting of the concatenation of all paragraphs as the articles and the bold lines as the reference summaries.
- `sep`: Consisting of each paragraph and its summary.
Therefore, in order to load it, you have to specify which vers... | ## Describe the bug
When attempting to load wikihow into a dataset with
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
I get the message:
```
AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
at the end of a [full stack trace](htt... | 71 | Error loading wikihow dataset
## Describe the bug
When attempting to load wikihow into a dataset with
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
I get the message:
```
AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
at the... | [
-0.2274996191,
0.3702228665,
0.0238096323,
0.3987949193,
0.2537097931,
0.2747787237,
0.4276535809,
0.4391777813,
0.2458432764,
0.0935080573,
0.2162852138,
0.3853775859,
-0.0128945336,
0.1863868237,
0.120502308,
-0.2686141729,
0.0110667096,
0.104524672,
0.223709926,
0.1890233308... |
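The reply in the row above is cut off where it explains how to pick a `wikihow` configuration; a minimal sketch of what that call could look like follows (the local `./wikihow` directory for the manually downloaded files is an assumption taken from later in the thread):
```python
# Hedged sketch: wikihow has two configs ("all" and "sep"), so one must be named
# explicitly; the data_dir path below is an assumed location for the manual download.
from datasets import load_dataset

dataset = load_dataset("wikihow", "all", data_dir="./wikihow")
print(dataset)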
https://github.com/huggingface/datasets/issues/2239 | Error loading wikihow dataset | Good call out. I did try that and that's when it told me to download the
dataset. Don't believe I have tried it with local files. Will try first
thing in the morning and get back to you. | ## Describe the bug
When attempting to load wikihow into a dataset with
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
I get the message:
```
AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
at the end of a [full stack trace](htt... | 168 | Error loading wikihow dataset
## Describe the bug
When attempting to load wikihow into a dataset with
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
I get the message:
```
AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
at the... | [
-0.2274996191,
0.3702228665,
0.0238096323,
0.3987949193,
0.2537097931,
0.2747787237,
0.4276535809,
0.4391777813,
0.2458432764,
0.0935080573,
0.2162852138,
0.3853775859,
-0.0128945336,
0.1863868237,
0.120502308,
-0.2686141729,
0.0110667096,
0.104524672,
0.223709926,
0.1890233308... |
https://github.com/huggingface/datasets/issues/2239 | Error loading wikihow dataset | Hi @odellus, yes you are right.
Due to the server where the `wikihow` dataset is hosted, the dataset can't be downloaded automatically by `huggingface` and you have to download it manually as you did.
Nevertheless, you have to specify which dataset version you would like to load anyway:
```python
dataset = load... | ## Describe the bug
When attempting to load wikihow into a dataset with
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
I get the message:
```
AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
at the end of a [full stack trace](htt... | 90 | Error loading wikihow dataset
## Describe the bug
When attempting to load wikihow into a dataset with
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
I get the message:
```
AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
at the... | [
-0.2274996191,
0.3702228665,
0.0238096323,
0.3987949193,
0.2537097931,
0.2747787237,
0.4276535809,
0.4391777813,
0.2458432764,
0.0935080573,
0.2162852138,
0.3853775859,
-0.0128945336,
0.1863868237,
0.120502308,
-0.2686141729,
0.0110667096,
0.104524672,
0.223709926,
0.1890233308... |
https://github.com/huggingface/datasets/issues/2237 | Update Dataset.dataset_size after transformed with map | @albertvillanova I would like to take this up. It would be great if you could point me to how the dataset size is calculated in HF. Thanks! | After loading a dataset, if we transform it by using `.map`, its `dataset_size` attribute is not updated. | 28 | Update Dataset.dataset_size after transformed with map
After loading a dataset, if we transform it by using `.map`, its `dataset_size` attribute is not updated.
@albertvillanova I would like to take this up. It would be great if you could point me to how the dataset size is calculated in HF. Thanks! | [
-0.2140058428,
-0.3152700961,
-0.1228159517,
0.1519186348,
0.0606086366,
0.0196819361,
0.2813274562,
-0.1209013537,
0.1866591573,
0.1171899438,
-0.1872753203,
0.0038693664,
0.3808468878,
0.184631452,
0.2711485922,
0.0805767551,
0.2649388313,
0.1283429116,
-0.554933846,
-0.08209... |
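A small illustrative sketch of the behaviour this issue reports, not taken from the thread itself (the glue/sst2 dataset and the added column are assumptions chosen only for the example):
```python
# dataset_size is computed when the dataset is prepared and, per this issue,
# is not recomputed after a .map() transform.
from datasets import load_dataset

ds = load_dataset("glue", "sst2", split="train")
print(ds.info.dataset_size)

mapped = ds.map(lambda example: {"sentence_length": len(example["sentence"])})
print(mapped.info.dataset_size)  # unchanged even though a column was added
```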
https://github.com/huggingface/datasets/issues/2230 | Keys yielded while generating dataset are not being checked | Hi ! Indeed there's no verification on the uniqueness nor the types of the keys.
Do you already have some ideas of what you would like to implement and how ? | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https... | 31 | Keys yielded while generating dataset are not being checked
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for a... | [
0.0275366325,
-0.2159311026,
0.0291703828,
0.4762760699,
0.0684193149,
-0.2363622934,
0.4168975353,
0.0804131702,
0.4312431812,
0.1339113861,
0.1672189683,
0.3502566218,
-0.0206805002,
0.2065162957,
-0.0032002621,
0.4027806222,
0.0770777389,
0.0507333316,
-0.3874768615,
-0.1746... |
https://github.com/huggingface/datasets/issues/2230 | Keys yielded while generating dataset are not being checked | Hey @lhoestq, thank you so much for the opportunity.
Although I haven't had much experience with the HF Datasets code, after a careful look at how the `ArrowWriter` functions, I think we can implement this as follows:
1. First, we would have to update the `ArrowWriter.write()` function here:
https://github.com/hu... | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https... | 235 | Keys yielded while generating dataset are not being checked
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for a... | [
0.0275366325,
-0.2159311026,
0.0291703828,
0.4762760699,
0.0684193149,
-0.2363622934,
0.4168975353,
0.0804131702,
0.4312431812,
0.1339113861,
0.1672189683,
0.3502566218,
-0.0206805002,
0.2065162957,
-0.0032002621,
0.4027806222,
0.0770777389,
0.0507333316,
-0.3874768615,
-0.1746... |
https://github.com/huggingface/datasets/issues/2230 | Keys yielded while generating dataset are not being checked | Interesting !
We keep the dataset sorted in the order examples are generated by the builder (we expect the dataset builders to generate examples in deterministic order). Therefore I don't think we should shuffle the examples with the hashing. Let me know what you think.
Other than that, I really like the idea of chec... | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https... | 86 | Keys yielded while generating dataset are not being checked
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for a... | [
0.0275366325,
-0.2159311026,
0.0291703828,
0.4762760699,
0.0684193149,
-0.2363622934,
0.4168975353,
0.0804131702,
0.4312431812,
0.1339113861,
0.1672189683,
0.3502566218,
-0.0206805002,
0.2065162957,
-0.0032002621,
0.4027806222,
0.0770777389,
0.0507333316,
-0.3874768615,
-0.1746... |
https://github.com/huggingface/datasets/issues/2230 | Keys yielded while generating dataset are not being checked | @lhoestq I'm glad you liked the idea!
I think that since the keys are themselves unique and deterministic in nature, even if we shuffle the examples according to the hash, a deterministic order would still be maintained (as the keys will always have the same hash, whenever the dataset is generated).
And s... | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https... | 171 | Keys yielded while generating dataset are not being checked
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for a... | [
0.0275366325,
-0.2159311026,
0.0291703828,
0.4762760699,
0.0684193149,
-0.2363622934,
0.4168975353,
0.0804131702,
0.4312431812,
0.1339113861,
0.1672189683,
0.3502566218,
-0.0206805002,
0.2065162957,
-0.0032002621,
0.4027806222,
0.0770777389,
0.0507333316,
-0.3874768615,
-0.1746... |
https://github.com/huggingface/datasets/issues/2230 | Keys yielded while generating dataset are not being checked | When users load their own data, they expect the order to stay the same. I think that shuffling the data can make things inconvenient.
> I think that this is also what was originally envisioned as mentioned in the documentation here:
This part was originally developed by tensorflow datasets, and tensorflow dataset... | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https... | 224 | Keys yielded while generating dataset are not being checked
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for a... | [
0.0275366325,
-0.2159311026,
0.0291703828,
0.4762760699,
0.0684193149,
-0.2363622934,
0.4168975353,
0.0804131702,
0.4312431812,
0.1339113861,
0.1672189683,
0.3502566218,
-0.0206805002,
0.2065162957,
-0.0032002621,
0.4027806222,
0.0770777389,
0.0507333316,
-0.3874768615,
-0.1746... |
https://github.com/huggingface/datasets/issues/2230 | Keys yielded while generating dataset are not being checked | Yes I think we want to keep the original order by default and only shuffle when the user asks for it (for instance by calling `dataset.shuffle()`). That’s how I had it in mind originally. | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https... | 34 | Keys yielded while generating dataset are not being checked
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for a... | [
0.0275366325,
-0.2159311026,
0.0291703828,
0.4762760699,
0.0684193149,
-0.2363622934,
0.4168975353,
0.0804131702,
0.4312431812,
0.1339113861,
0.1672189683,
0.3502566218,
-0.0206805002,
0.2065162957,
-0.0032002621,
0.4027806222,
0.0770777389,
0.0507333316,
-0.3874768615,
-0.1746... |
https://github.com/huggingface/datasets/issues/2230 | Keys yielded while generating dataset are not being checked | Hey @lhoestq, I just had a more in-depth look at the original TFDS code about why the keys and hash were used in the first place.
In my opinion, the only use that the `hash(key)` serves is that it allows us to shuffle the examples in a deterministic order (as each example will always yield the same key and thus, the... | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https... | 160 | Keys yielded while generating dataset are not being checked
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for a... | [
0.0275366325,
-0.2159311026,
0.0291703828,
0.4762760699,
0.0684193149,
-0.2363622934,
0.4168975353,
0.0804131702,
0.4312431812,
0.1339113861,
0.1672189683,
0.3502566218,
-0.0206805002,
0.2065162957,
-0.0032002621,
0.4027806222,
0.0770777389,
0.0507333316,
-0.3874768615,
-0.1746... |
https://github.com/huggingface/datasets/issues/2230 | Keys yielded while generating dataset are not being checked | In `datasets` the keys are currently ignored.
For shuffling we don't use the keys. Instead we shuffle an array of indices. Since both the original order of the dataset and the indices shuffling are deterministic, then `dataset.shuffle` is deterministic as well.
We can use it to:
1. detect duplicates
2. verify that ... | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https... | 62 | Keys yielded while generating dataset are not being checked
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for a... | [
0.0275366325,
-0.2159311026,
0.0291703828,
0.4762760699,
0.0684193149,
-0.2363622934,
0.4168975353,
0.0804131702,
0.4312431812,
0.1339113861,
0.1672189683,
0.3502566218,
-0.0206805002,
0.2065162957,
-0.0032002621,
0.4027806222,
0.0770777389,
0.0507333316,
-0.3874768615,
-0.1746... |
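As a side note to the index-based shuffling described in the row above, here is a tiny sketch of that determinism (the seed value and toy column are arbitrary choices, not code from the thread):
```python
# Shuffling permutes an indices mapping; with a fixed seed the order is reproducible,
# independently of the keys yielded during generation.
from datasets import Dataset

ds = Dataset.from_dict({"idx": list(range(10))})
print(ds.shuffle(seed=42)["idx"])
print(ds.shuffle(seed=42)["idx"])  # identical order on every run
```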
https://github.com/huggingface/datasets/issues/2230 | Keys yielded while generating dataset are not being checked | Thanks a lot @lhoestq. I think I understand what we need to do now. The keys can indeed be used for detecting duplicates in generated examples as well as ensuring the order.
> Maybe we can simply keep track of the hashes of each batch being written ? The size of the batch when the data are saved in arrow is 10 000...
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https... | 119 | Keys yielded while generating dataset are not being checked
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for a... | [
0.0275366325,
-0.2159311026,
0.0291703828,
0.4762760699,
0.0684193149,
-0.2363622934,
0.4168975353,
0.0804131702,
0.4312431812,
0.1339113861,
0.1672189683,
0.3502566218,
-0.0206805002,
0.2065162957,
-0.0032002621,
0.4027806222,
0.0770777389,
0.0507333316,
-0.3874768615,
-0.1746... |
https://github.com/huggingface/datasets/issues/2229 | `xnli` dataset creating a tuple key while yielding instead of `str` or `int` | Hi ! Sure sounds good. Also if you find other datasets that use tuples instead of str/int, you can also fix them !
thanks :) | When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code in the egging, which yields a tuple key instead of the specified `str` or `int` key:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196
... | 25 | `xnli` dataset creating a tuple key while yielding instead of `str` or `int`
When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code in the egging, which yields a tuple key instead of the specified `str` or `int` key:
https://github.com/huggingface/dat... | [
-0.0539386682,
0.0509748757,
0.045168139,
0.1262667626,
0.2513766885,
0.0173005834,
0.4449459314,
0.2987627983,
0.6008676887,
0.2404711396,
0.1092354208,
0.4075710177,
0.0195067357,
0.2159905881,
-0.0843308941,
-0.0561309941,
-0.0785401091,
0.235207215,
-0.224451229,
-0.1137271... |
https://github.com/huggingface/datasets/issues/2229 | `xnli` dataset creating a tuple key while yielding instead of `str` or `int` | @lhoestq I have sent a PR for fixing the issue. Would be great if you could have a look! Thanks! | When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code in the egging, which yields a tuple key instead of the specified `str` or `int` key:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196
... | 20 | `xnli` dataset creating a tuple key while yielding instead of `str` or `int`
When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code in the egging, which yields a tuple key instead of the specified `str` or `int` key:
https://github.com/huggingface/dat... | [
-0.0702302381,
0.0961905047,
0.0778554976,
0.1273570955,
0.1900972724,
0.0126105035,
0.4497656524,
0.2994562387,
0.6488564014,
0.1779719293,
0.087863028,
0.4414196908,
0.0609020069,
0.2075349092,
-0.0426088125,
-0.0038824501,
-0.0590647832,
0.2475440949,
-0.2700885236,
-0.14406... |
https://github.com/huggingface/datasets/issues/2226 | Batched map fails when removing all columns | I found the problem. I called `set_format` on some columns before. This makes it crash. Here is a complete example to reproduce:
```python
from datasets import load_dataset
sst = load_dataset("sst")
sst.set_format("torch", columns=["label"], output_all_columns=True)
ds = sst["train"]
# crashes
ds.map(
l... | Hi @lhoestq ,
I'm hijacking this issue, because I'm currently trying to do the approach you recommend:
> Currently the optimal setup for single-column computations is probably to do something like
>
> ```python
> result = dataset.map(f, input_columns="my_col", remove_columns=dataset.column_names)
> ```
He... | 49 | Batched map fails when removing all columns
Hi @lhoestq ,
I'm hijacking this issue, because I'm currently trying to do the approach you recommend:
> Currently the optimal setup for single-column computations is probably to do something like
>
> ```python
> result = dataset.map(f, input_columns="my_col", rem... | [
-0.1897990257,
0.1186694652,
0.0169570111,
0.0345565602,
0.2957543731,
0.1997557729,
0.7953290939,
0.3469544053,
0.2681218684,
0.5083371997,
0.1278548837,
0.3986316323,
-0.2221596092,
-0.1497608423,
-0.2561759949,
-0.2258547693,
0.18031542,
0.1939366311,
-0.1543454677,
0.113828... |
https://github.com/huggingface/datasets/issues/2226 | Batched map fails when removing all columns | Thanks for reporting and for providing this code to reproduce the issue, this is really helpful ! | Hi @lhoestq ,
I'm hijacking this issue, because I'm currently trying to do the approach you recommend:
> Currently the optimal setup for single-column computations is probably to do something like
>
> ```python
> result = dataset.map(f, input_columns="my_col", remove_columns=dataset.column_names)
> ```
He... | 17 | Batched map fails when removing all columns
Hi @lhoestq ,
I'm hijacking this issue, because I'm currently trying to do the approach you recommend:
> Currently the optimal setup for single-column computations is probably to do something like
>
> ```python
> result = dataset.map(f, input_columns="my_col", rem... | [
-0.1897990257,
0.1186694652,
0.0169570111,
0.0345565602,
0.2957543731,
0.1997557729,
0.7953290939,
0.3469544053,
0.2681218684,
0.5083371997,
0.1278548837,
0.3986316323,
-0.2221596092,
-0.1497608423,
-0.2561759949,
-0.2258547693,
0.18031542,
0.1939366311,
-0.1543454677,
0.113828... |
https://github.com/huggingface/datasets/issues/2226 | Batched map fails when removing all columns | I merged a fix, it should work on `master` now :)
We'll do a new release soon ! | Hi @lhoestq ,
I'm hijacking this issue, because I'm currently trying to do the approach you recommend:
> Currently the optimal setup for single-column computations is probably to do something like
>
> ```python
> result = dataset.map(f, input_columns="my_col", remove_columns=dataset.column_names)
> ```
He... | 18 | Batched map fails when removing all columns
Hi @lhoestq ,
I'm hijacking this issue, because I'm currently trying to do the approach you recommend:
> Currently the optimal setup for single-column computations is probably to do something like
>
> ```python
> result = dataset.map(f, input_columns="my_col", rem... | [
-0.1897990257,
0.1186694652,
0.0169570111,
0.0345565602,
0.2957543731,
0.1997557729,
0.7953290939,
0.3469544053,
0.2681218684,
0.5083371997,
0.1278548837,
0.3986316323,
-0.2221596092,
-0.1497608423,
-0.2561759949,
-0.2258547693,
0.18031542,
0.1939366311,
-0.1543454677,
0.113828... |
https://github.com/huggingface/datasets/issues/2218 | Duplicates in the LAMA dataset | Hi,
currently the datasets API doesn't have a dedicated function to remove duplicate rows, but since the LAMA dataset is not too big (it fits in RAM), we can leverage pandas to help us remove duplicates:
```python
>>> from datasets import load_dataset, Dataset
>>> dataset = load_dataset('lama', split='train')
>>... | I observed duplicates in the LAMA probing dataset, see a minimal code below.
```
>>> import datasets
>>> dataset = datasets.load_dataset('lama')
No config specified, defaulting to: lama/trex
Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13eca0a18e77dfb3960bb31741e973586f5c... | 94 | Duplicates in the LAMA dataset
I observed duplicates in the LAMA probing dataset, see a minimal code below.
```
>>> import datasets
>>> dataset = datasets.load_dataset('lama')
No config specified, defaulting to: lama/trex
Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13e... | [
0.2615723908,
-0.3211717606,
-0.0303782318,
0.6542944312,
0.3174535632,
-0.1388740093,
0.3127516508,
0.3269730806,
-0.5472853184,
0.3365436494,
-0.3454262912,
0.3600647449,
0.0819622129,
-0.2813016772,
0.1604103893,
-0.1338146031,
0.0254945178,
-0.1661694348,
-0.2045629025,
-0.... |
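The pandas snippet in the reply above is truncated; a rough, independent sketch of the round-trip it describes might look like the following (the `trex` config choice and the column handling are assumptions, not the comment's original code):
```python
# Drop duplicate rows by going through pandas and back; LAMA fits in memory,
# as noted in the reply above.
from datasets import load_dataset, Dataset

dataset = load_dataset("lama", "trex", split="train")
df = dataset.to_pandas()
deduplicated = Dataset.from_pandas(df.drop_duplicates(), preserve_index=False)
print(len(dataset), len(deduplicated))
```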
https://github.com/huggingface/datasets/issues/2218 | Duplicates in the LAMA dataset | Oh, seems like my question wasn't specified well. I'm _not_ asking how to remove duplicates, but whether duplicates should be removed if I want to do the evaluation on the LAMA dataset as it was proposed in the original paper/repository? In other words, will I get the same result if I evaluate on the de-duplicated datase... | I observed duplicates in the LAMA probing dataset, see a minimal code below.
```
>>> import datasets
>>> dataset = datasets.load_dataset('lama')
No config specified, defaulting to: lama/trex
Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13eca0a18e77dfb3960bb31741e973586f5c... | 77 | Duplicates in the LAMA dataset
I observed duplicates in the LAMA probing dataset, see a minimal code below.
```
>>> import datasets
>>> dataset = datasets.load_dataset('lama')
No config specified, defaulting to: lama/trex
Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13e... | [
0.2615723908,
-0.3211717606,
-0.0303782318,
0.6542944312,
0.3174535632,
-0.1388740093,
0.3127516508,
0.3269730806,
-0.5472853184,
0.3365436494,
-0.3454262912,
0.3600647449,
0.0819622129,
-0.2813016772,
0.1604103893,
-0.1338146031,
0.0254945178,
-0.1661694348,
-0.2045629025,
-0.... |
https://github.com/huggingface/datasets/issues/2218 | Duplicates in the LAMA dataset | So it looks like the person who added LAMA to the library chose to have one item per piece of evidence rather than one per relation - and in this case, there are duplicate pieces of evidence for the target relation
If I understand correctly, to reproduce reported results, you would have to aggregate predictions for ... | I observed duplicates in the LAMA probing dataset, see a minimal code below.
```
>>> import datasets
>>> dataset = datasets.load_dataset('lama')
No config specified, defaulting to: lama/trex
Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13eca0a18e77dfb3960bb31741e973586f5c... | 77 | Duplicates in the LAMA dataset
I observed duplicates in the LAMA probing dataset, see a minimal code below.
```
>>> import datasets
>>> dataset = datasets.load_dataset('lama')
No config specified, defaulting to: lama/trex
Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13e... | [
0.2615723908,
-0.3211717606,
-0.0303782318,
0.6542944312,
0.3174535632,
-0.1388740093,
0.3127516508,
0.3269730806,
-0.5472853184,
0.3365436494,
-0.3454262912,
0.3600647449,
0.0819622129,
-0.2813016772,
0.1604103893,
-0.1338146031,
0.0254945178,
-0.1661694348,
-0.2045629025,
-0.... |
https://github.com/huggingface/datasets/issues/2214 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' | Hi @nsaphra, thanks for reporting.
This issue was fixed in `datasets` version 1.3.0. Could you please update `datasets` and tell me if the problem persists?
```shell
pip install -U datasets
``` | I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_metric
>>> metric = load_metric("glue", "sst2")
Traceback (most recent call last):
File "<std... | 31 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'
I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_me... | [
-0.2723239064,
-0.2135580778,
0.0193224531,
0.1852924973,
0.4195098579,
0.0630552694,
0.2729540169,
0.1801060736,
0.0756962672,
-0.0524456128,
-0.1940381229,
0.154596135,
-0.0744485036,
0.2974148691,
0.0652140975,
-0.0471374057,
-0.0315006934,
0.0150831072,
-0.262602061,
0.0296... |
https://github.com/huggingface/datasets/issues/2214 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' | There might be a bug in the conda version of `datasets` 1.2.1 where the datasets/metric scripts are downloaded from `master` instead of the `1.2.1` repo.
You can try setting the env var `HF_SCRIPTS_VERSION="1.2.1"` as a workaround. Let me know if that helps. | I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_metric
>>> metric = load_metric("glue", "sst2")
Traceback (most recent call last):
File "<std... | 42 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'
I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_me... | [
-0.2723239064,
-0.2135580778,
0.0193224531,
0.1852924973,
0.4195098579,
0.0630552694,
0.2729540169,
0.1801060736,
0.0756962672,
-0.0524456128,
-0.1940381229,
0.154596135,
-0.0744485036,
0.2974148691,
0.0652140975,
-0.0471374057,
-0.0315006934,
0.0150831072,
-0.262602061,
0.0296... |
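For completeness, a hedged sketch of the `HF_SCRIPTS_VERSION` workaround suggested in the row above, setting the variable from Python before the metric is loaded (whether this is early enough for every code path is an assumption):
```python
# Pin the script version so the conda-installed 1.2.1 release fetches matching
# metric scripts instead of ones from master.
import os

os.environ["HF_SCRIPTS_VERSION"] = "1.2.1"

from datasets import load_metric

metric = load_metric("glue", "sst2")
print(metric)
```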
https://github.com/huggingface/datasets/issues/2214 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' | I just faced the same issue. I was using 1.2.1 from conda and received the same AttributeError complaining about 'add_start_docstrings'. Uninstalling the conda installed datasets and then installing the latest datasets (version 1.5.0) using pip install solved the issue for me. I don't like mixing up conda and pip insta... | I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_metric
>>> metric = load_metric("glue", "sst2")
Traceback (most recent call last):
File "<std... | 69 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'
I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_me... | [
-0.2723239064,
-0.2135580778,
0.0193224531,
0.1852924973,
0.4195098579,
0.0630552694,
0.2729540169,
0.1801060736,
0.0756962672,
-0.0524456128,
-0.1940381229,
0.154596135,
-0.0744485036,
0.2974148691,
0.0652140975,
-0.0471374057,
-0.0315006934,
0.0150831072,
-0.262602061,
0.0296... |
https://github.com/huggingface/datasets/issues/2214 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' | Yep, seems to have fixed things! The conda package could really do with an update. Thanks! | I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_metric
>>> metric = load_metric("glue", "sst2")
Traceback (most recent call last):
File "<std... | 16 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'
I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_me... | [
-0.2723239064,
-0.2135580778,
0.0193224531,
0.1852924973,
0.4195098579,
0.0630552694,
0.2729540169,
0.1801060736,
0.0756962672,
-0.0524456128,
-0.1940381229,
0.154596135,
-0.0744485036,
0.2974148691,
0.0652140975,
-0.0471374057,
-0.0315006934,
0.0150831072,
-0.262602061,
0.0296... |
https://github.com/huggingface/datasets/issues/2212 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset | Hi ! Apparently the data are not available from this url anymore. We'll replace it with the new url when it's available | I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data configuration default
Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, ... | 22 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset
I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data config... | [
-0.3520764112,
0.1850520074,
-0.10664545,
0.2455794066,
0.399815619,
0.0336072519,
0.3876357973,
0.2061041892,
0.3057492077,
0.145392701,
-0.2683596909,
-0.1908083558,
0.2827514112,
0.0175513979,
0.0276068095,
0.0714483485,
-0.0717818663,
-0.0440939181,
-0.1596582085,
0.0708821... |
https://github.com/huggingface/datasets/issues/2212 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset | I saw this on their website when we request to download the dataset:

Can we still request them link for the dataset and make a PR? @lhoestq @yjernite | I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data configuration default
Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, ... | 29 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset
I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data config... | [
-0.3520764112,
0.1850520074,
-0.10664545,
0.2455794066,
0.399815619,
0.0336072519,
0.3876357973,
0.2061041892,
0.3057492077,
0.145392701,
-0.2683596909,
-0.1908083558,
0.2827514112,
0.0175513979,
0.0276068095,
0.0714483485,
-0.0717818663,
-0.0440939181,
-0.1596582085,
0.0708821... |
https://github.com/huggingface/datasets/issues/2212 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset | I've contacted Martin (first author of the fquad paper) regarding a possible new url. Hopefully we can get one soon ! | I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data configuration default
Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, ... | 21 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset
I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data config... | [
-0.3520764112,
0.1850520074,
-0.10664545,
0.2455794066,
0.399815619,
0.0336072519,
0.3876357973,
0.2061041892,
0.3057492077,
0.145392701,
-0.2683596909,
-0.1908083558,
0.2827514112,
0.0175513979,
0.0276068095,
0.0714483485,
-0.0717818663,
-0.0440939181,
-0.1596582085,
0.0708821... |
https://github.com/huggingface/datasets/issues/2212 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset | They now made a website to force people who want to use the dataset for commercial purposes to seek a commercial license from them ... | I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data configuration default
Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, ... | 25 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset
I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data config... | [
-0.3520764112,
0.1850520074,
-0.10664545,
0.2455794066,
0.399815619,
0.0336072519,
0.3876357973,
0.2061041892,
0.3057492077,
0.145392701,
-0.2683596909,
-0.1908083558,
0.2827514112,
0.0175513979,
0.0276068095,
0.0714483485,
-0.0717818663,
-0.0440939181,
-0.1596582085,
0.0708821... |
https://github.com/huggingface/datasets/issues/2211 | Getting checksum error when trying to load lc_quad dataset | Hi,
I've already opened a PR with the fix. If you are in a hurry, just build the project from source and run:
```bash
datasets-cli test datasets/lc_quad --save_infos --all_configs --ignore_verifications
```
| I'm having issues loading the [lc_quad](https://huggingface.co/datasets/fquad) dataset by running:
```Python
lc_quad = load_dataset("lc_quad")
```
which is giving me the following error:
```
Using custom data configuration default
Downloading and preparing dataset lc_quad/default (download: 3.69 MiB, ge... | 31 | Getting checksum error when trying to load lc_quad dataset
I'm having issues loading the [lc_quad](https://huggingface.co/datasets/fquad) dataset by running:
```Python
lc_quad = load_dataset("lc_quad")
```
which is giving me the following error:
```
Using custom data configuration default
Downloading ... | [
-0.1586478055,
0.0430788584,
-0.0340239443,
0.3563590944,
0.2677671015,
0.0147290677,
0.0714034289,
0.2469019145,
0.3710917532,
-0.0422731563,
-0.1049164683,
0.0677453727,
-0.0377759896,
0.075315088,
-0.2280280441,
0.2225682586,
0.0536351725,
0.0427621417,
-0.0752973333,
0.0635... |
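Besides rebuilding the dataset script with the CLI command shown above, a possible stop-gap from Python (my own suggestion, not part of the thread) is to skip the checksum verification when loading:
```python
# Checksum mismatches raise by default; ignore_verifications skips that check
# so the dataset can still be loaded while waiting for the fixed script.
from datasets import load_dataset

lc_quad = load_dataset("lc_quad", ignore_verifications=True)
print(lc_quad)
```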
https://github.com/huggingface/datasets/issues/2210 | dataloading slow when using HUGE dataset | Hi ! Yes this is an issue with `datasets<=1.5.0`
This issue has been fixed by #2122 , we'll do a new release soon :)
For now you can test it on the `master` branch. | Hi,
When I use datasets with 600GB data, the dataloading speed decreases significantly.
I am experimenting with two datasets, and one is about 60GB and the other 600GB.
Simply speaking, my code uses the `datasets.set_format("torch")` function and lets pytorch-lightning handle ddp training.
When looking at the pytorch... | 34 | dataloading slow when using HUGE dataset
Hi,
When I use datasets with 600GB data, the dataloading speed decreases significantly.
I am experimenting with two datasets, and one is about 60GB and the other 600GB.
Simply speaking, my code uses the `datasets.set_format("torch")` function and lets pytorch-lightning handle... | [
-0.5256723166,
-0.2135469168,
-0.0631118864,
0.2832503915,
0.1389206797,
-0.027784178,
0.1501028389,
0.2327977866,
-0.157692492,
-0.0921892822,
-0.1191496626,
0.1439053267,
-0.1486593634,
-0.1823078692,
-0.0239587072,
-0.1902339756,
-0.0351157486,
0.0441079065,
-0.39371714,
-0.... |
https://github.com/huggingface/datasets/issues/2210 | dataloading slow when using HUGE dataset | Hi, thank you for your answer. I did not realize that my issue stems from the same problem. | Hi,
When I use datasets with 600GB data, the dataloading speed decreases significantly.
I am experimenting with two datasets, and one is about 60GB and the other 600GB.
Simply speaking, my code uses the `datasets.set_format("torch")` function and lets pytorch-lightning handle ddp training.
When looking at the pytorch... | 18 | dataloading slow when using HUGE dataset
Hi,
When I use datasets with 600GB data, the dataloading speed decreases significantly.
I am experimenting with two datasets, and one is about 60GB and the other 600GB.
Simply speaking, my code uses the `datasets.set_format("torch")` function and lets pytorch-lightning handle... | [
-0.5256723166,
-0.2135469168,
-0.0631118864,
0.2832503915,
0.1389206797,
-0.027784178,
0.1501028389,
0.2327977866,
-0.157692492,
-0.0921892822,
-0.1191496626,
0.1439053267,
-0.1486593634,
-0.1823078692,
-0.0239587072,
-0.1902339756,
-0.0351157486,
0.0441079065,
-0.39371714,
-0.... |
https://github.com/huggingface/datasets/issues/2207 | making labels consistent across the datasets | Hi ! The ClassLabel feature type encodes the labels as integers.
The integer corresponds to the index of the label name in the `names` list of the ClassLabel.
Here that means that the labels are 'entailment' (0), 'neutral' (1), 'contradiction' (2).
You can get the label names back by using `a.features['label'].int... | Hi
For accessing the labels one can type
```
>>> a.features['label']
ClassLabel(num_classes=3, names=['entailment', 'neutral', 'contradiction'], names_file=None, id=None)
```
The labels however are not consistent with the actual labels sometimes, for instance in case of XNLI, the actual labels are 0,1,2, but if ... | 51 | making labels consistent across the datasets
Hi
For accessing the labels one can type
```
>>> a.features['label']
ClassLabel(num_classes=3, names=['entailment', 'neutral', 'contradiction'], names_file=None, id=None)
```
The labels however are not consistent with the actual labels sometimes, for instance in cas... | [
0.0162606575,
-0.1287366152,
-0.0699901357,
0.4029692709,
0.3827133477,
-0.1300281137,
0.4260283113,
0.0234169308,
0.0852747485,
0.2710853219,
-0.2305016369,
0.5326741338,
-0.0153418686,
0.4047868252,
-0.3078927994,
0.0154725788,
-0.1154663414,
0.1015718207,
0.1249587163,
-0.32... |
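The method name in the reply above is cut off mid-call; a small sketch of mapping between integer labels and names follows, assuming the intended methods are `int2str`/`str2int` on the `ClassLabel` feature (the `xnli` config and split are illustrative choices):
```python
# int2str/str2int assumed; the original reply is truncated mid-name.
from datasets import load_dataset

dataset = load_dataset("xnli", "en", split="validation")
label_feature = dataset.features["label"]

print(label_feature.names)               # ['entailment', 'neutral', 'contradiction']
print(label_feature.int2str(0))          # 'entailment'
print(label_feature.str2int("neutral"))  # 1
```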
https://github.com/huggingface/datasets/issues/2206 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer | Hi,
the output of the tokenizers is treated specially in the lib to optimize the dataset size (see the code [here](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_writer.py#L138-L141)). It looks like one of the values in a dictionary returned by the tokenizer is out of the assumed range.
... | I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_sin... | 53 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer
I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/... | [
-0.2338281274,
0.2921876013,
-0.0588287339,
0.0690965131,
0.2734425366,
-0.0860713273,
0.1836448163,
0.3055636883,
-0.5368504524,
-0.1932195723,
-0.0461711101,
0.4354700148,
-0.0010597007,
-0.2381744683,
0.2536686361,
-0.0995297208,
0.0430261455,
0.2507620752,
0.2605472207,
0.1... |
https://github.com/huggingface/datasets/issues/2206 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer | Hi @yana-xuyan, thanks for reporting.
As clearly @mariosasko explained, `datasets` performs some optimizations in order to reduce the size of the dataset cache files. And one of them is storing the field `special_tokens_mask` as `int8`, which means that this field can only contain integers between `-128` to `127`. A... | I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_sin... | 98 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer
I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/... | [
-0.2338281274,
0.2921876013,
-0.0588287339,
0.0690965131,
0.2734425366,
-0.0860713273,
0.1836448163,
0.3055636883,
-0.5368504524,
-0.1932195723,
-0.0461711101,
0.4354700148,
-0.0010597007,
-0.2381744683,
0.2536686361,
-0.0995297208,
0.0430261455,
0.2507620752,
0.2605472207,
0.1... |
https://github.com/huggingface/datasets/issues/2206 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer | I'm facing same issue @mariosasko @albertvillanova
```
ArrowInvalid: Integer value 50260 not in range: -128 to 127
```
To reproduce:
```python
SPECIAL_TOKENS = ['<bos>','<eos>','<speaker1>','<speaker2>','<pad>']
ATTR_TO_SPECIAL_TOKEN = {
'bos_token': '<bos>',
'eos_token': '<eos>',
'pad_toke... | I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_sin... | 70 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer
I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/... | [
-0.2338281274,
0.2921876013,
-0.0588287339,
0.0690965131,
0.2734425366,
-0.0860713273,
0.1836448163,
0.3055636883,
-0.5368504524,
-0.1932195723,
-0.0461711101,
0.4354700148,
-0.0010597007,
-0.2381744683,
0.2536686361,
-0.0995297208,
0.0430261455,
0.2507620752,
0.2605472207,
0.1... |
https://github.com/huggingface/datasets/issues/2206 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer | @mariosasko
I am hitting this bug in the Bert tokenizer too. I see that @albertvillanova labeled this as a bug back in April. Has there been a fix released yet?
What I did for now is to just disable the optimization in the HF library. @yana-xuyan and @thomas-happify, is that what you did and did that work for you... | I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_sin... | 59 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer
I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/... | [
-0.2338281274,
0.2921876013,
-0.0588287339,
0.0690965131,
0.2734425366,
-0.0860713273,
0.1836448163,
0.3055636883,
-0.5368504524,
-0.1932195723,
-0.0461711101,
0.4354700148,
-0.0010597007,
-0.2381744683,
0.2536686361,
-0.0995297208,
0.0430261455,
0.2507620752,
0.2605472207,
0.1... |
https://github.com/huggingface/datasets/issues/2206 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer | Hi @gregg-ADP,
This is still a bug.
As @albertvillanova has suggested, maybe it's indeed worth adding a variable to `config.py` to have a way to disable this behavior.
In the meantime, this forced optimization can be disabled by specifying `features` (of the returned examples) in the `map` call:
```python
f... | I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_sin... | 76 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer
I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/... | [
-0.2338281274,
0.2921876013,
-0.0588287339,
0.0690965131,
0.2734425366,
-0.0860713273,
0.1836448163,
0.3055636883,
-0.5368504524,
-0.1932195723,
-0.0461711101,
0.4354700148,
-0.0010597007,
-0.2381744683,
0.2536686361,
-0.0995297208,
0.0430261455,
0.2507620752,
0.2605472207,
0.1... |
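The workaround snippet in the row above is truncated; below is a hedged, self-contained sketch of passing explicit `features` to `map` so the writer does not down-cast to int8. The tokenizer setup, toy texts and column types here are assumptions for illustration, not the thread's original code:
```python
# Passing Features to map() overrides the int8 down-casting, so token ids above
# 127 (e.g. newly added special tokens) no longer overflow when written to Arrow.
from datasets import Dataset, Features, Sequence, Value
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens({"pad_token": "<pad>"})  # id > 127, like in the report

dataset = Dataset.from_dict({"text": ["hello world", "another <pad> example"]})

features = Features({
    "input_ids": Sequence(Value("int32")),
    "attention_mask": Sequence(Value("int32")),
    "special_tokens_mask": Sequence(Value("int32")),
})

tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], return_special_tokens_mask=True),
    batched=True,
    remove_columns=["text"],
    features=features,
)
print(tokenized.features)
```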
https://github.com/huggingface/datasets/issues/2206 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer | Thanks for the quick reply @mariosasko. What I did was to change the optimizer to use int32 instead of int8.
What you're suggesting specifies the type for each feature explicitly without changing the HF code. This is definitely a better option. However, we are hitting a new error later:
```
File "/Users/ccccc/... | I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_sin... | 111 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer
I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/... | [
-0.2338281274,
0.2921876013,
-0.0588287339,
0.0690965131,
0.2734425366,
-0.0860713273,
0.1836448163,
0.3055636883,
-0.5368504524,
-0.1932195723,
-0.0461711101,
0.4354700148,
-0.0010597007,
-0.2381744683,
0.2536686361,
-0.0995297208,
0.0430261455,
0.2507620752,
0.2605472207,
0.1... |
https://github.com/huggingface/datasets/issues/2206 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer | Hi @gwc4github,
the fix was merged a few minutes ago, and it doesn't require any changes on the user side (e.g. no need for specifying `features`). If you find time, feel free to install `datasets` from master with:
```
pip install git+https://github.com/huggingface/datasets.git
```
and let us know if it works f... | I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_sin... | 54 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer
I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/... | [
-0.2338281274,
0.2921876013,
-0.0588287339,
0.0690965131,
0.2734425366,
-0.0860713273,
0.1836448163,
0.3055636883,
-0.5368504524,
-0.1932195723,
-0.0461711101,
0.4354700148,
-0.0010597007,
-0.2381744683,
0.2536686361,
-0.0995297208,
0.0430261455,
0.2507620752,
0.2605472207,
0.1... |
https://github.com/huggingface/datasets/issues/2200 | _prepare_split will overwrite DatasetBuilder.info.features | Hi ! This might be related to #2153
You're right the ArrowWriter should be initialized with `features=self.info.features` ! Good catch
I'm opening a PR to fix this and also to figure out how it was not caught in the tests
EDIT: opened #2201 | Hi, here is my issue:
I initialized a Csv datasetbuilder with specific features:
```
def get_dataset_features(data_args):
features = {}
if data_args.text_features:
features.update({text_feature: hf_features.Value("string") for text_feature in data_args.text_features.strip().split(",")})
if da... | 43 | _prepare_split will overwrite DatasetBuilder.info.features
Hi, here is my issue:
I initialized a Csv datasetbuilder with specific features:
```
def get_dataset_features(data_args):
features = {}
if data_args.text_features:
features.update({text_feature: hf_features.Value("string") for text_featu... | [
-0.2406018823,
-0.0382910632,
-0.107767351,
0.1783377528,
0.2982979715,
0.2197922915,
0.4546081424,
0.2158692032,
-0.3327545524,
0.1149653271,
0.1291756779,
0.1715127528,
0.0863348916,
0.4873532951,
-0.1215012074,
0.0767409652,
0.0831203759,
0.2866103649,
0.1698092073,
-0.07321... |
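The fix above concerns the builder internals, but the user-side setup from the issue body (loading CSV files with an explicit schema) can be sketched like this. The file name and column names are placeholders; the point is that `load_dataset("csv", ...)` accepts a `features` argument, which the fix makes the builder respect when writing the split.

```python
from datasets import Features, Value, load_dataset

features = Features({
    "review": Value("string"),   # hypothetical text column
    "rating": Value("float32"),  # hypothetical numeric column
    "label": Value("int32"),
})

dataset = load_dataset(
    "csv",
    data_files={"train": "train.csv"},  # placeholder file
    features=features,
)
print(dataset["train"].features)
```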
https://github.com/huggingface/datasets/issues/2200 | _prepare_split will overwrite DatasetBuilder.info.features | > Hi ! This might be related to #2153
>
> You're right the ArrowWriter should be initialized with `features=self.info.features` ! Good catch
> I'm opening a PR to fix this and also to figure out how it was not caught in the tests
>
> EDIT: opened #2201
Glad to hear that! Thank you for your fix, I'm new to hug... | Hi, here is my issue:
I initialized a Csv datasetbuilder with specific features:
```
def get_dataset_features(data_args):
features = {}
if data_args.text_features:
features.update({text_feature: hf_features.Value("string") for text_feature in data_args.text_features.strip().split(",")})
if da... | 67 | _prepare_split will overwrite DatasetBuilder.info.features
Hi, here is my issue:
I initialized a Csv datasetbuilder with specific features:
```
def get_dataset_features(data_args):
features = {}
if data_args.text_features:
features.update({text_feature: hf_features.Value("string") for text_featu... | [
-0.2406018823,
-0.0382910632,
-0.107767351,
0.1783377528,
0.2982979715,
0.2197922915,
0.4546081424,
0.2158692032,
-0.3327545524,
0.1149653271,
0.1291756779,
0.1715127528,
0.0863348916,
0.4873532951,
-0.1215012074,
0.0767409652,
0.0831203759,
0.2866103649,
0.1698092073,
-0.07321... |
https://github.com/huggingface/datasets/issues/2196 | `load_dataset` caches two arrow files? | Hi ! Files that start with `cache-*` are cached computation files, i.e. they are the cached results of map/filter/cast/etc. operations. For example, if you used `map` on your dataset to transform it, then the resulting dataset is going to be stored and cached in a `cache-*` file. These files are used to avoid having to...
I am using datasets to load large json file of 587G.
I checked the cached folder and found that there are two arrow files created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
Why is the first file created?
If I delete it, would I still be able to `load_from_disk`? | 64 | `load_dataset` caches two arrow files?
Hi,
I am using datasets to load large json file of 587G.
I checked the cached folder and found that there are two arrow files created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
Why is the first file created?
If I delete it, would I still be a... | [
-0.0206239801,
-0.1851760298,
-0.132871449,
0.6645821929,
-0.0811096653,
0.3052114546,
0.1752956957,
0.2564813495,
0.2874501646,
-0.157864511,
-0.0090865688,
0.1946640015,
0.0801826194,
-0.5129998326,
0.1924161911,
0.1686628312,
0.1584640443,
0.0357148908,
-0.1609624326,
-0.138... |
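A small sketch of the distinction described above: the prepared `*-train.arrow` file comes from `load_dataset`, while each `cache-*.arrow` file is the materialized result of a transform and can be listed or removed through the dataset itself. The data file path is a placeholder.

```python
from datasets import load_dataset

ds = load_dataset("json", data_files="data.jsonl", split="train")  # writes the prepared json-train.arrow
print(ds.cache_files)              # the arrow file(s) currently backing the dataset

ds2 = ds.map(lambda example: example)  # writes an additional cache-*.arrow file
print(ds2.cache_files)

removed = ds.cleanup_cache_files()  # deletes cache-* files, keeps the prepared dataset file
print(f"removed {removed} cache file(s)")
```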
https://github.com/huggingface/datasets/issues/2196 | `load_dataset` caches two arrow files? | Thanks @lhoestq! Hmm.. that's strange because I specifically turned off auto caching, and saved the mapped result, using `save_to_disk`, to another location. At this location, the following file is created: `355G cache-ed205e500a7dc44c.arrow`
From my observation, both `load_dataset` and `map` create `cache-*` files, and I...
I am using datasets to load large json file of 587G.
I checked the cached folder and found that there are two arrow files created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
Why is the first file created?
If I delete it, would I still be able to `load_from_disk`? | 61 | `load_dataset` caches two arrow files?
Hi,
I am using datasets to load large json file of 587G.
I checked the cached folder and found that there are two arrow files created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
Why is the first file created?
If I delete it, would I still be a... | [
0.0172762107,
-0.1464013457,
-0.114908129,
0.6492549777,
-0.0665471256,
0.3106841743,
0.2409405112,
0.2578833401,
0.3021349609,
-0.2563413978,
-0.0356174335,
0.2787823379,
0.1161760017,
-0.497151047,
0.2449564636,
0.1934698224,
0.1645394266,
0.0410902835,
-0.1407984197,
-0.1022... |
https://github.com/huggingface/datasets/issues/2196 | `load_dataset` caches two arrow files? | This is a wrong report -- `cache-*` files are created only by `map`, not by `load_dataset`. | Hi,
I am using datasets to load large json file of 587G.
I checked the cached folder and found that there are two arrow files created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
Why is the first file created?
If I delete it, would I still be able to `load_from_disk`? | 16 | `load_dataset` caches two arrow files?
Hi,
I am using datasets to load large json file of 587G.
I checked the cached folder and found that there are two arrow files created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
Why is the first file created?
If I delete it, would I still be a... | [
-0.0013235962,
-0.2079310417,
-0.1252864301,
0.741124928,
-0.1488289237,
0.2700349092,
0.2812333703,
0.2348198444,
0.360519141,
-0.2358909249,
-0.0116217863,
0.2067556977,
0.1341191977,
-0.4797017872,
0.1827090532,
0.2512857914,
0.2364552915,
0.0539577007,
-0.1515451223,
-0.184... |
https://github.com/huggingface/datasets/issues/2195 | KeyError: '_indices_files' in `arrow_dataset.py` | Thanks @samsontmr, this should be fixed on master now
Feel free to reopen if you're still having issues | After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset.
Trace:
```
Traceback (most recent call last):
File "load_data.py", line 11, in <module>
dataset = load_from_disk(SRC)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/load.py", line ... | 18 | KeyError: '_indices_files' in `arrow_dataset.py`
After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset.
Trace:
```
Traceback (most recent call last):
File "load_data.py", line 11, in <module>
dataset = load_from_disk(SRC)
File "/opt/conda/envs/py38/l... | [
-0.3437004685,
0.0625096187,
-0.0619885437,
0.6979850531,
-0.0762990639,
0.1504234821,
0.1940689087,
0.4931738377,
0.5131547451,
0.1480898857,
0.0192215368,
0.1839222312,
-0.3918606937,
0.0680307224,
-0.1045262516,
0.0790592656,
0.0853404254,
0.2412600815,
-0.1544323564,
-0.052... |
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | Hi ! Yes we are working on making `filter` significantly faster. You can look at related PRs here: #2060 #2178
I think you can expect to have the fast version of `filter` available next week.
We'll make it only select one column, and we'll also make the overall filtering operation way faster by avoiding many arr... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 68 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.1237304211,
0.2983856499,
-0.0208493024,
-0.2533704042,
0.1016573533,
-0.1387868077,
0.3027843237,
0.6129099131,
0.2476259023,
-0.032855276,
-0.0985997245,
0.4860818088,
0.0381357484,
-0.1835790873,
-0.0157281235,
0.18545717,
0.0900914669,
0.3238480389,
0.3035544157,
0.10208... |
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | @lhoestq Thanks for the response— it's great to hear that we'll be getting a much faster `filter` method soon. However, my use case does also involve using `map` over a single column in order to pre-compute roughly uniformly sized batches, and right now that is also very slow. Is there any plan to make `map` faster for... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 96 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.1885618567,
0.3007326126,
-0.0062209,
-0.2915566862,
0.1139059812,
-0.125190869,
0.3774540722,
0.6256684065,
0.2875320911,
0.0172809679,
-0.0364470333,
0.5041142106,
0.0290439874,
-0.1641895175,
-0.0369144417,
0.1709661186,
0.104823783,
0.2638179064,
0.2762957215,
0.12859776... |
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | Currently the optimal setup for single-column computations is probably to do something like
```python
result = dataset.map(f, input_columns="my_col", remove_columns=dataset.column_names)
```
This has two advantages:
- input_columns="my_col" allows reading only the column "my_col"
- remove_columns=dataset.column_n... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 82 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.1294834912,
0.3391115367,
0.0107455822,
-0.2183183283,
0.0991215855,
-0.0936414599,
0.4080075026,
0.5750509501,
0.2324594855,
0.082152307,
-0.0695115179,
0.463419199,
0.0280996002,
-0.1797306389,
-0.0288195945,
0.2203612477,
0.0991644338,
0.3066150844,
0.3233720958,
0.061782... |
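Applied to the use case from the issue body, the single-column pattern quoted above might look like the sketch below. The toy dataset, column names and whitespace-based token count are assumptions used only to keep the example runnable on its own.

```python
from datasets import Dataset

wiki = Dataset.from_dict({"title": ["a", "b"], "text": ["one two three", "four five"]})

# Only the "text" column is read, and all original columns are dropped from the output,
# so the resulting dataset contains just the computed "num_tokens" column.
num_tokens = wiki.map(
    lambda texts: {"num_tokens": [len(t.split()) for t in texts]},
    input_columns="text",
    remove_columns=wiki.column_names,
    batched=True,
)
print(num_tokens["num_tokens"])  # [3, 2]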
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | Hi @lhoestq ,
I'm hijacking this issue, because I'm currently trying to do the approach you recommend:
> Currently the optimal setup for single-column computations is probably to do something like
>
> ```python
> result = dataset.map(f, input_columns="my_col", remove_columns=dataset.column_names)
> ```
He... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 285 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.1297884285,
0.3438467681,
0.0056277025,
-0.2346833646,
0.1094863266,
-0.1253670752,
0.3955200613,
0.5808494091,
0.2243223041,
0.0691063926,
-0.0648778602,
0.4875033498,
0.0244312063,
-0.1736371368,
-0.0233188476,
0.2062174529,
0.1099479571,
0.3038490117,
0.3059217632,
0.0706... |
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | Hi ! Can you open a separate issue for that ?
Also if you could provide a google colab or a sample code to reproduce this issue that would be helpful.
On my side I was not able to reproduce this error. | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 42 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.0739278048,
0.2622618079,
0.010453321,
-0.1460136622,
0.1726153046,
-0.092114605,
0.3422683477,
0.585539937,
0.2409633696,
0.0138788391,
-0.0288185254,
0.4858113825,
0.0747994632,
-0.2420255691,
-0.0771974921,
0.2121567428,
0.0645875856,
0.3524307311,
0.3226830959,
0.0753293... |
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | @lhoestq Sorry I'm just responding now. I'm currently using your recommendation for the map on a single column, and I've gotten it to be fast enough to sort of work for my use case by just setting `num_proc=10`, although it's still quite slow. It's clear that it is still loading the entirety of each row into memory and... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 195 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.1100342795,
0.2694857717,
-0.003548058,
-0.2267881334,
0.1348386705,
-0.1574558616,
0.3228950202,
0.5779767632,
0.2516378462,
0.0171539932,
-0.0084353592,
0.5262536407,
-0.0003767382,
-0.2199470252,
-0.0524407998,
0.2328758985,
0.0863023102,
0.2980626523,
0.3231540024,
0.105... |
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | Hi @norabelrose ! I'm glad you managed to make this work on your side.
Regarding memory usage, you can try to drop the columns that you don't want to use for your `map` for now.
In the future we'll try to find a way to not load unnecessary columns in memory in `map`. Currently the way it works is that it gets the b... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 126 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.1090223491,
0.2856532931,
-0.0173125081,
-0.1360488385,
0.1743158102,
-0.0846631303,
0.2768235505,
0.550802052,
0.2953396142,
-0.0280730557,
-0.0430033766,
0.4847536683,
-0.0104954764,
-0.2168287933,
-0.0766489729,
0.1780529767,
0.0879165456,
0.2981184423,
0.2514612079,
0.12... |
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | @lhoestq After looking through the source code, it looks like the following solution has at least some chance of working:
- refactor `Dataset.map()` so that the `input_columns` parameter is implemented by using the `self.formatted_as()` context manager with `columns=input_columns`
- change `Dataset._getitem()` so tha... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 62 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.1273223311,
0.3481044173,
-0.0064756856,
-0.153528899,
0.143802613,
-0.1467356682,
0.363443017,
0.6069024801,
0.1660747528,
-0.0057872017,
-0.1035192236,
0.5548499823,
0.0025604968,
-0.1955351979,
-0.0578186698,
0.1826300323,
0.0585308671,
0.2830990553,
0.2737962604,
0.10410... |
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | Looks like a great direction :)
Note that `query_table` doesn't bring data into memory. Only `format_table` does.
Also the dataset may already have a format with `columns=` already defined so we would need to define the formatted `input_dataset` like:
```python
# before the `map` main for loop
input_columns = inpu... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 148 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.1205226108,
0.2916716039,
-0.0068579447,
-0.1955766082,
0.1112997681,
-0.1071504876,
0.2545418739,
0.5994859934,
0.204348132,
-0.0285106413,
-0.0340007618,
0.4793542027,
0.0217374396,
-0.2131701559,
0.0204916336,
0.2257365435,
0.1094411239,
0.3549480736,
0.2541977167,
0.0966... |
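At the user level, the `with_format` call being discussed restricts which columns are returned when indexing, which is one way to see what a single-column view looks like. This is only a minimal sketch with toy data, not the internal `map` refactor itself.

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a b c", "d e"], "title": ["x", "y"]})

text_only = ds.with_format(columns=["text"])  # formatting is set on the returned dataset, not on ds
print(text_only[0])  # only the "text" column is returned
print(ds[0])         # the original dataset still returns both columns
```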
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | Good to know about `query_table` not bringing anything into memory. I was under the impression that it did because a while back I looked at my `map` operation in pdb and it looked like it was spending forever in line 93 of formatting.py, `return pa.concat_tables(....)`, although that was before the `fast_slice` interpo... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 181 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.1046055853,
0.322444737,
-0.0158338472,
-0.1899830997,
0.0996765941,
-0.0926471502,
0.2255837023,
0.573900938,
0.1345912963,
-0.0051861489,
-0.0915138572,
0.4579325318,
0.0769861639,
-0.2375073135,
0.0071080327,
0.2567908764,
0.0978267491,
0.357086122,
0.3204877973,
0.119537... |
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | `query_table` simply slices/concatenates parts of the table. The actual data inside the table is not brought in memory.
Also I'm more in favor of declaring `input_dataset = self.with_format(...)` since `formatted_as` may update the dataset fingerprint of `self`, which is not expected when someone runs `map`.
> It's... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 145 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.0905099213,
0.2783810198,
-0.0073054265,
-0.1781733483,
0.1408670843,
-0.0961291343,
0.3184328675,
0.5706530809,
0.1868227869,
-0.0280764103,
-0.0586420856,
0.4846482575,
0.0940665454,
-0.2658149302,
-0.0045914883,
0.248125881,
0.1001638845,
0.3244250119,
0.2597136199,
0.060... |
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | That sounds good to me! I just submitted a PR (#2246) implementing your approach. I also changed how `_query_table` handles Iterable keys since it still seemed like `pa.concat_tables` was taking a long time to create the table for each batch. Now my whole `map()` operation takes 1 min 46 seconds where it used to take s... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 62 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.1737177968,
0.3303043842,
-0.0151501531,
-0.2352600396,
0.0967266187,
-0.1373567134,
0.2451968193,
0.6020460725,
0.1215745509,
-0.0561255887,
-0.0701216012,
0.5131317973,
0.0704806075,
-0.204801023,
-0.0179260839,
0.2067434192,
0.0910433754,
0.3320173919,
0.3077169061,
0.141... |
https://github.com/huggingface/datasets/issues/2190 | News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs | Hi @anassalamah,
Could you please try with this:
```python
train_ds = load_dataset("news_commentary", lang1="ar", lang2="en", split='train[:98%]')
val_ds = load_dataset("news_commentary", lang1="ar", lang2="en", split='train[98%:]')
``` | I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.
```
train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]')
# filtering out examples that a... | 22 | News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs
I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.
```
train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_datas... | [
-0.1603862792,
0.0044460921,
-0.2170422822,
0.2492843419,
-0.053791251,
0.0885382518,
0.3200058043,
0.2294000089,
-0.0253907125,
-0.2357994318,
-0.2461352795,
0.1912741065,
0.1839796007,
0.2749135792,
-0.0599651933,
-0.3076826036,
0.0750908777,
-0.1572852582,
-0.0141738486,
-0.... |
https://github.com/huggingface/datasets/issues/2190 | News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs | Hello @albertvillanova,
Thanks for the suggestion. I didn't know you could do that. However, it didn't resolve the issue.

| I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.
```
train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]')
# filtering out examples that a... | 20 | News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs
I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.
```
train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_datas... | [
-0.2070919722,
0.1082228124,
-0.1750501692,
0.325787425,
-0.117371656,
0.1063094139,
0.2994768322,
0.2640417516,
-0.0422109328,
-0.2637792826,
-0.2876567245,
0.2302990705,
0.2286694348,
0.1873273551,
-0.0158621278,
-0.1873507798,
0.0602837428,
-0.1162407622,
-0.0766713172,
-0.2... |
https://github.com/huggingface/datasets/issues/2189 | save_to_disk doesn't work when we use concatenate_datasets function before creating the final dataset_object. | Hi ! We refactored save_to_disk in #2025 so this doesn't happen.
Feel free to try it on master for now
We'll do a new release soon | As you can see, it saves the entire dataset.
@lhoestq
You can check by going through the following example,
```
from datasets import load_from_disk,concatenate_datasets
loaded_data=load_from_disk('/home/gsir059/HNSW-ori/my_knowledge_dataset')
n=20
kb_list=[loaded_data.shard(n, i, contiguous=True) for i... | 26 | save_to_disk doesn't work when we use concatenate_datasets function before creating the final dataset_object.
As you can see, it saves the entire dataset.
@lhoestq
You can check by going through the following example,
```
from datasets import load_from_disk,concatenate_datasets
loaded_data=load_from_di... | [
-0.2544760108,
0.0369032398,
-0.0747915879,
0.0235319789,
0.2022773921,
0.2077015191,
0.3147448301,
0.3745853901,
-0.0703063458,
0.290863663,
-0.1010365114,
0.1949493885,
0.0023287691,
0.2046325803,
-0.0519777611,
0.1872818172,
0.3261981905,
0.2545648515,
-0.2024897635,
-0.1318... |
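A condensed version of the example from the issue body, runnable end to end on a toy dataset; with the refactored `save_to_disk`, the concatenated subset should be written rather than the original full dataset. The output directory is a placeholder.

```python
from datasets import Dataset, concatenate_datasets, load_from_disk

ds = Dataset.from_dict({"idx": list(range(100))})

n = 4
shards = [ds.shard(num_shards=n, index=i, contiguous=True) for i in range(n)]
merged = concatenate_datasets(shards[:2])  # keep only the first two shards

merged.save_to_disk("merged_dataset")      # placeholder output directory
reloaded = load_from_disk("merged_dataset")
print(len(reloaded))                       # 50, not 100
```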
https://github.com/huggingface/datasets/issues/2188 | Duplicate data in Timit dataset | Hi ! Thanks for reporting
If I recall correctly this has been recently fixed #1995
Can you try to upgrade your local version of `datasets` ?
```
pip install --upgrade datasets
``` | I ran a simple code to list all texts in Timit dataset and the texts were all the same.
Is this dataset corrupted?
**Code:**
timit = load_dataset("timit_asr")
print(*timit['train']['text'], sep='\n')
**Result:**
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of... | 32 | Duplicate data in Timit dataset
I ran a simple code to list all texts in Timit dataset and the texts were all the same.
Is this dataset corrupted?
**Code:**
timit = load_dataset("timit_asr")
print(*timit['train']['text'], sep='\n')
**Result:**
Would such an act of refusal be useful?
Would such an act of refusa... | [
-0.0085599953,
-0.3116270006,
-0.0845907331,
0.6408432722,
0.2972834706,
0.1798604429,
0.2127223015,
0.3483657837,
-0.3550943732,
0.0640648603,
-0.1848577857,
0.3607743084,
-0.0370688848,
0.1377498657,
0.1514658183,
0.1737727821,
-0.0231210217,
-0.0077552232,
-0.2896779776,
-0.... |
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | An educated guess: does this refer to the fact that depending on the custom column names in the dataset files (csv in this case), there is a dataset loader being created? And this dataset loader - using the "custom data configuration" - is used among all jobs running with these particular csv files? (thinking out loud he...
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 95 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
-0.0747399032,
0.0329242162,
-0.1112522706,
0.184038505,
0.2081030309,
0.199586153,
0.341237247,
0.0353645459,
0.3006587327,
-0.1421153843,
0.1708120704,
0.091015473,
-0.2320551574,
0.0387957878,
-0.0910931081,
0.4722391665,
0.0865921974,
0.071161814,
-0.1339131296,
0.055463273... |
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | Hi ! Currently disabling the caching means that all the dataset transforms like `map`, `filter` etc. ignore the cache: they neither write nor read processed cache files.
However `load_dataset` reuses datasets that have already been prepared: it does reload prepared dataset files.
Indeed from the documentation:
> data... | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 202 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
-0.0997082889,
0.0613426678,
-0.0944523662,
0.1375740618,
0.2868357301,
0.1569393277,
0.2946326733,
0.0576290414,
0.2189562172,
-0.2274290472,
0.0563381277,
0.1891585737,
-0.1538449675,
-0.134917751,
-0.0155598912,
0.4577814639,
0.1715122908,
0.0599947087,
-0.115578711,
0.00473... |
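A short sketch of the behaviour explained above, with a throwaway CSV so it runs on its own: `set_caching_enabled(False)` only affects the transform caches, while the dataset prepared by `load_dataset` is still reused between runs unless regeneration is forced through the `download_mode` argument mentioned in the quoted documentation. The file name is a placeholder.

```python
import csv
from datasets import load_dataset, set_caching_enabled

set_caching_enabled(False)  # map/filter results go to temporary files instead of reusable cache files

with open("toy.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["text", "label"])
    writer.writerow(["hello", 0])

ds = load_dataset("csv", data_files="toy.csv", split="train")  # the prepared arrow file is still cached and reused
ds2 = ds.map(lambda ex: {"length": len(ex["text"])})           # not written to a reusable cache file
print(ds2[0])
```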
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | Thank you for the clarification.
This is a bit confusing. On one hand, it says that cache files are always recreated and written to a temporary directory that is removed; on the other hand the last bullet point makes me think that since the default according to the docs for `download_mode (Optional datasets.Generat... | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 112 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
0.0234996416,
-0.0067310841,
-0.1006587446,
0.2295337915,
0.2193898857,
0.1646068692,
0.1426829398,
-0.1221596003,
0.2681150138,
-0.2213971019,
0.1463904828,
0.0476381481,
-0.0811797529,
-0.0989196002,
0.0419842564,
0.4397714436,
0.141672954,
0.12207032,
-0.1241384521,
-0.02424... |
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | It would be nice if the documentation elaborated on all the possible values for `download_mode` and/or a link to `datasets.GenerateMode`.
This info here:
```
"""`Enum` for how to treat pre-existing downloads and data.
The default mode is `REUSE_DATASET_IF_EXISTS`, which will reuse both
raw downloads a... | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 84 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
-0.1325752884,
0.0042609279,
-0.1003255844,
0.1450196356,
0.2611146867,
0.0885637403,
0.1497801542,
-0.0054400498,
0.0523647778,
-0.2209995985,
-0.0105337566,
-0.0093213469,
-0.0788028166,
0.0148023423,
-0.0911449045,
0.4515503049,
0.1717671007,
0.0544780195,
-0.2703156173,
-0.... |
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | I have another question. Assuming that I understood correctly and there is reuse of datasets files when caching is disabled (!), I'm guessing there is a directory that is created based on some information on the dataset file. I'm interested in the situation where I'm loading a (custom) dataset from local disk. What inf... | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 139 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
0.0396140777,
0.1744790822,
-0.1220533848,
0.3688905537,
0.1228983998,
0.0972478166,
0.3478938043,
-0.0435186848,
0.1747947633,
-0.1183370203,
0.0815540552,
-0.0739900246,
-0.096710369,
-0.1997420788,
0.1834367961,
0.4033866823,
0.2352863699,
0.0705636144,
-0.1633930802,
-0.211... |
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | Thanks for the feedback, we'll work on improving this aspect of the documentation.
> Where are these files stored? I guess not in the temporary directory that is removed...
We're using the Arrow file format to load datasets. Therefore each time you load a dataset, it is prepared as an arrow file on your disk. By ... | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 231 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
0.0426528901,
0.1128460541,
-0.0747743845,
0.3651134968,
0.1875134557,
0.1657912284,
0.3618483841,
-0.0763977468,
0.1920939237,
-0.1457172036,
0.1244347915,
0.0901360065,
-0.2694363892,
-0.1203827709,
0.0926407725,
0.3132811785,
0.1218125224,
0.0018569647,
-0.2674064636,
-0.061... |
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | Thank you for all your clarifications, really helpful!
If you have the bandwidth, please do revisit the api wrt cache disabling. Anywhere in the computer stack (hardware included) where you disable the cache, one assumes there is no caching that happens. | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 41 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
-0.0806552917,
-0.0190927014,
-0.0784588158,
0.1829434633,
0.2073648125,
0.2329126149,
0.1717558801,
0.0126284901,
0.2039585859,
-0.1684734076,
0.1043681726,
0.0651793033,
-0.1479161978,
-0.0486045927,
-0.0459727049,
0.4469535053,
0.1602658033,
0.1373852044,
-0.1307974607,
-0.0... |
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | I have another question about caching, this time in the case where FORCE_REDOWNLOAD is used to load the dataset, the datasets cache is one directory as defined by HF_HOME and there are multiple concurrent jobs running in a cluster using the same local dataset (i.e., same local files in the cluster). Does anything in th... | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 115 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
-0.1976589113,
0.0997743681,
-0.1261471063,
0.239942506,
0.0289439987,
0.0434310324,
0.4564794898,
0.0984458253,
0.4220702648,
-0.2127027065,
-0.0249834973,
-0.0400108546,
-0.0365920104,
0.0134087363,
-0.1342043281,
0.4030804336,
0.2076877952,
-0.0396348052,
-0.0242150445,
-0.1... |
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | I got another error that convinces me there is a race condition (one of the test files had zero samples at prediction time). I think it comes down to the fact that the `config_id` above (used in the naming for the cache) has no information on who's touching the data. If I have 2 concurrent jobs, both loading the same d... | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 74 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
-0.2517685592,
0.191368565,
-0.1285431534,
0.2252949625,
-0.0270279273,
0.0778241754,
0.3286904991,
0.1005266234,
0.3222957253,
-0.165066272,
0.1239639223,
-0.1279573739,
-0.1924361438,
-0.0144358939,
-0.0895962641,
0.5366290212,
0.107899867,
-0.0252204388,
-0.0901682004,
0.036... |
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | We're using a locking mechanism to prevent two processes from writing at the same time. The locking is based on the `filelock` module.
Also directories that are being written use a suffix ".incomplete" so that reading is not possible on a dataset being written.
Do you think you could provide a simple code to reprod... | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 61 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
-0.1967528313,
0.1176108196,
-0.0897248983,
0.3552556932,
0.1577597857,
0.1552674174,
0.1879117787,
0.1608639657,
0.3102246225,
-0.188149184,
0.1910192668,
0.0179731883,
-0.2734490335,
0.0268450547,
-0.247635752,
0.4049520195,
0.1267282367,
-0.0034803161,
-0.1405778378,
0.01794... |
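For reference, the `filelock` pattern mentioned above looks roughly like the sketch below; the lock path and the protected write are placeholders, not the library's exact internal code.

```python
from filelock import FileLock

lock_path = "my_dataset_dir.lock"  # placeholder path

with FileLock(lock_path):
    # Only one process at a time enters this block, so a shared file or directory
    # can be written safely (datasets additionally writes to an ".incomplete"
    # location and renames it once the write is complete).
    with open("shared_output.txt", "w") as f:
        f.write("written under the lock\n")
```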
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | I can provide details about the code I'm running (it's really-really close to some official samples from the huggingface transformers examples, I can point to the exact sample file, I kept a record of that). I can also describe in which conditions this race occurs (I'm convinced it has to do with forcing the redownload... | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 291 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
-0.2184485346,
-0.0956564546,
-0.0137407826,
0.3255731761,
0.1443527192,
0.0401227325,
0.2102060169,
-0.0006145384,
0.2891820371,
-0.2389939129,
-0.0172554757,
-0.1222879738,
-0.2192768157,
0.0667362288,
-0.0892674103,
0.3100434542,
0.1153511554,
-0.0467661731,
-0.2045852393,
0... |
https://github.com/huggingface/datasets/issues/2185 | .map() and distributed training | Hi, one workaround would be to save the mapped (tokenized, in your case) file using `save_to_disk`, and have each process load this file using `load_from_disk`. This is what I am doing, and in this case, I turn off the ability to automatically load from the cache.
Also, multiprocessing the map function seems to be s... | Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokeni... | 62 | .map() and distributed training
Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_pa... | [
-0.2537620366,
-0.2207172215,
0.1177766919,
0.0745746344,
0.2116165161,
-0.1097596064,
0.5170727372,
0.0102053108,
0.2801569402,
0.02376296,
0.1822473109,
0.3434491754,
-0.2290641665,
-0.5855451822,
0.0941431075,
0.0615339763,
0.1861157864,
0.0366594382,
-0.0377358906,
-0.20080... |
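A sketch of the workaround described in the record above for a multi-process (e.g. torch.distributed) setup; the paths, the rank handling and the stand-in map function are assumptions. The point is that only one process runs the map and saves the result, and the others load the saved dataset afterwards.

```python
import os
import torch.distributed as dist
from datasets import load_from_disk

data_path = "/path/to/my_custom_dataset"       # placeholder: dataset previously saved with save_to_disk
processed_path = "/path/to/tokenized_dataset"  # placeholder: where the mapped result is written
rank = int(os.environ.get("RANK", "0"))

if rank == 0:
    raw = load_from_disk(data_path)
    processed = raw.map(lambda ex: {"n_chars": len(ex["text"])})  # stand-in for the real tokenize_function
    processed.save_to_disk(processed_path)

if dist.is_initialized():
    dist.barrier()  # make the other processes wait until rank 0 has finished writing

processed = load_from_disk(processed_path)
```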
https://github.com/huggingface/datasets/issues/2185 | .map() and distributed training | Thanks @hwijeen for the workaround, feels a bit prototypical but it works! (it seems files are written twice then though)
(I haven't observed slowness when using the multiprocessed map function, but I could be wrong)
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokeni... | 33 | .map() and distributed training
Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_pa... | [
-0.2537620366,
-0.2207172215,
0.1177766919,
0.0745746344,
0.2116165161,
-0.1097596064,
0.5170727372,
0.0102053108,
0.2801569402,
0.02376296,
0.1822473109,
0.3434491754,
-0.2290641665,
-0.5855451822,
0.0941431075,
0.0615339763,
0.1861157864,
0.0366594382,
-0.0377358906,
-0.20080... |
https://github.com/huggingface/datasets/issues/2185 | .map() and distributed training | To my understanding, files are written twice anyhow (one after load_dataset, another after map). It's just that you now have it at a location where you can see it, whereas it was secretly saved in the caching folder (.cache/huggingface/datasets by default)! Correct me if I'm wrong! | Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokeni... | 42 | .map() and distributed training
Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_pa... | [
-0.2537620366,
-0.2207172215,
0.1177766919,
0.0745746344,
0.2116165161,
-0.1097596064,
0.5170727372,
0.0102053108,
0.2801569402,
0.02376296,
0.1822473109,
0.3434491754,
-0.2290641665,
-0.5855451822,
0.0941431075,
0.0615339763,
0.1861157864,
0.0366594382,
-0.0377358906,
-0.20080... |
https://github.com/huggingface/datasets/issues/2185 | .map() and distributed training | So to answer my initial question, I was just doing something stupid as I was not passing the `preprocessing_num_workers` argument again when launching the distributed training (and it was then set to `None`). I initially thought the hash was computed only from the `tokenize_function`, but it uses all the arguments. Thanks @lhoest... | Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokeni... | 51 | .map() and distributed training
Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_pa... | [
-0.2537620366,
-0.2207172215,
0.1177766919,
0.0745746344,
0.2116165161,
-0.1097596064,
0.5170727372,
0.0102053108,
0.2801569402,
0.02376296,
0.1822473109,
0.3434491754,
-0.2290641665,
-0.5855451822,
0.0941431075,
0.0615339763,
0.1861157864,
0.0366594382,
-0.0377358906,
-0.20080... |
https://github.com/huggingface/datasets/issues/2185 | .map() and distributed training | This cache process isn't really consistent. I just changed `per_device_train_batch_size` of the training script and now it is rebuilding the dataset cache!!!! Why? | Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokeni... | 21 | .map() and distributed training
Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_pa... | [
-0.2537620366,
-0.2207172215,
0.1177766919,
0.0745746344,
0.2116165161,
-0.1097596064,
0.5170727372,
0.0102053108,
0.2801569402,
0.02376296,
0.1822473109,
0.3434491754,
-0.2290641665,
-0.5855451822,
0.0941431075,
0.0615339763,
0.1861157864,
0.0366594382,
-0.0377358906,
-0.20080... |
https://github.com/huggingface/datasets/issues/2185 | .map() and distributed training | Hi ! A `map` function is recomputed if the code changes or if any of the variables it uses changes. Can you check that your function doesn't use `per_device_train_batch_size` or any variable that contains `per_device_train_batch_size` ? | Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokeni... | 36 | .map() and distributed training
Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_pa... | [
-0.2537620366,
-0.2207172215,
0.1177766919,
0.0745746344,
0.2116165161,
-0.1097596064,
0.5170727372,
0.0102053108,
0.2801569402,
0.02376296,
0.1822473109,
0.3434491754,
-0.2290641665,
-0.5855451822,
0.0941431075,
0.0615339763,
0.1861157864,
0.0366594382,
-0.0377358906,
-0.20080... |
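One way to make the cache location explicit, and thus independent of incidental changes in the surrounding script, is to pass `cache_file_name` to `map`, as in the sketch below; the toy data and file name are placeholders, and the mapped function itself must still stay the same for the cached result to remain valid.

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a b", "c d e"]})

def add_len(example):
    return {"n_words": len(example["text"].split())}

# The result is written to this exact file (and can be reused on later runs when
# caching allows), instead of a cache-<fingerprint>.arrow whose name depends on
# the hash of the function and its captured variables.
ds2 = ds.map(add_len, cache_file_name="n_words_cache.arrow")
print(ds2[0])
```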
https://github.com/huggingface/datasets/issues/2185 | .map() and distributed training | My code is actually a Transformers example for training T5, which I modified a bit:
https://github.com/puraminy/transformers/blob/4b40877132eedb566043f83de8f1d29a84d71430/examples/flax/language-modeling/run_t5_mlm_flax.py#L614
No, it doesn't use `per_device_train_batch_size`. I remember it worked several times and... | Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokeni... | 94 | .map() and distributed training
Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_pa... | [
-0.2537620366,
-0.2207172215,
0.1177766919,
0.0745746344,
0.2116165161,
-0.1097596064,
0.5170727372,
0.0102053108,
0.2801569402,
0.02376296,
0.1822473109,
0.3434491754,
-0.2290641665,
-0.5855451822,
0.0941431075,
0.0615339763,
0.1861157864,
0.0366594382,
-0.0377358906,
-0.20080... |
https://github.com/huggingface/datasets/issues/2181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | Hi ! Can you try to increase the block size ? For example
```python
block_size_10MB = 10<<20
load_dataset("json", ..., block_size=block_size_10MB)
```
The block size corresponds to how many bytes to process at a time from the input stream.
This will determine multi-threading granularity as well as the size of ind... | Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-pack... | 64 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as ... | [
0.0165568814,
0.0126866689,
-0.007510921,
0.3409381509,
0.1604534239,
-0.264713943,
0.1936327368,
0.5859468579,
-0.1621609926,
-0.0815068111,
0.1654728353,
0.1441627443,
-0.0150832878,
0.0145620955,
-0.129521057,
-0.1911919117,
0.0333680809,
0.0782744884,
-0.0815533027,
0.13550... |
https://github.com/huggingface/datasets/issues/2181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | Hi @lhoestq! Thank you for your prompt reply.
I have experimented with (10<<20, 10<<28, 10<<30, 10<<33, 10<<34), since my machine has 192G of memory, but it's either the above-mentioned error or processed killed because of OOM.
Could you give me a bit of background on why block size needs to be exactly calibrated?
... | Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-pack... | 66 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as ... | [
0.0165568814,
0.0126866689,
-0.007510921,
0.3409381509,
0.1604534239,
-0.264713943,
0.1936327368,
0.5859468579,
-0.1621609926,
-0.0815068111,
0.1654728353,
0.1441627443,
-0.0150832878,
0.0145620955,
-0.129521057,
-0.1911919117,
0.0333680809,
0.0782744884,
-0.0815533027,
0.13550... |
https://github.com/huggingface/datasets/issues/2181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | We're using the JSON loader of pyarrow. It parses the file chunk by chunk to load the dataset.
This issue happens when there's no delimiter in one chunk of data. For json lines, the delimiter is the end of a line.
So with a big value for chunk_size this should have worked unless you have one extremely long line in your ... | Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-pack... | 95 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as ... | [
0.0165568814, 0.0126866689, -0.007510921, 0.3409381509, 0.1604534239, -0.264713943, 0.1936327368, 0.5859468579, -0.1621609926, -0.0815068111, 0.1654728353, 0.1441627443, -0.0150832878, 0.0145620955, -0.129521057, -0.1911919117, 0.0333680809, 0.0782744884, -0.0815533027, 0.13550... |
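As a rough sketch of where the block size enters in pyarrow itself (bypassing `datasets`), and of how to check for an extremely long line; the file name is a placeholder.
```python
from pyarrow import json as paj

data_file = "data/huge_file.jsonl"  # placeholder JSON Lines file

# Each block of `block_size` bytes is parsed on its own (optionally in parallel),
# so every JSON line has to fit inside a single block.
read_options = paj.ReadOptions(block_size=1 << 30, use_threads=True)
table = paj.read_json(data_file, read_options=read_options)
print(table.num_rows)

# Quick check for an extremely long line that no reasonable block size could hold.
with open(data_file, "rb") as f:
    print("longest line:", max(len(line) for line in f), "bytes")
```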
https://github.com/huggingface/datasets/issues/2181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | I'm using `pyarrow==3.0.0` with `datasets==1.5.0`.
Your point totally makes sense. I will check if my jsonl file contains an extremely long line and let you know.
Here are some different error messages that I got when tweaking `block_size`. I also suspect that this is related to the pyarrow... but I guess it wou... | Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-pack... | 137 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as ... | [
0.0165568814, 0.0126866689, -0.007510921, 0.3409381509, 0.1604534239, -0.264713943, 0.1936327368, 0.5859468579, -0.1621609926, -0.0815068111, 0.1654728353, 0.1441627443, -0.0150832878, 0.0145620955, -0.129521057, -0.1911919117, 0.0333680809, 0.0782744884, -0.0815533027, 0.13550... |
https://github.com/huggingface/datasets/issues/2181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | I am getting the same error. When I tweak the block_size, I also find:
`OverflowError: value too large to convert to int32_t`
and
`pyarrow.lib.ArrowInvalid: Exceeded maximum rows`
| Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-pack... | 27 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as ... | [
0.0165568814, 0.0126866689, -0.007510921, 0.3409381509, 0.1604534239, -0.264713943, 0.1936327368, 0.5859468579, -0.1621609926, -0.0815068111, 0.1654728353, 0.1441627443, -0.0150832878, 0.0145620955, -0.129521057, -0.1911919117, 0.0333680809, 0.0782744884, -0.0815533027, 0.13550... |
https://github.com/huggingface/datasets/issues/2181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | I made more tests. I used a smaller dataset and I was getting the same error, which means that it was not necessarily linked to the dataset size. To make both my smaller and larger datasets work, I got rid of lists with the json file. I had the following data format:
```python
[
{'key': "a", 'value': ['one', 'two'... | Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-pack... | 120 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as ... | [
0.0165568814, 0.0126866689, -0.007510921, 0.3409381509, 0.1604534239, -0.264713943, 0.1936327368, 0.5859468579, -0.1621609926, -0.0815068111, 0.1654728353, 0.1441627443, -0.0150832878, 0.0145620955, -0.129521057, -0.1911919117, 0.0333680809, 0.0782744884, -0.0815533027, 0.13550... |
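A sketch of the kind of rewrite described above, turning a top-level JSON array into JSON Lines so that each record sits on its own line; the file names are invented, and for very large files a streaming JSON parser would be needed instead of `json.load`.
```python
import json

# Read a file whose top level is a single JSON array of objects...
with open("data_as_array.json", encoding="utf-8") as f:
    records = json.load(f)

# ...and rewrite it as JSON Lines: one JSON object per line, no enclosing array.
with open("data_as_lines.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```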
https://github.com/huggingface/datasets/issues/2181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | Thanks @hwijeen for reporting and thanks @jpilaul for pointing this out.
Indeed, those are different JSON-like formats:
- the first one is the **standard JSON** format: all the file content is JSON-valid, thus all content is either a JSON object (between curly brackets `{...}`) or a JSON array (between square brack... | Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-pack... | 104 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as ... | [
0.0165568814, 0.0126866689, -0.007510921, 0.3409381509, 0.1604534239, -0.264713943, 0.1936327368, 0.5859468579, -0.1621609926, -0.0815068111, 0.1654728353, 0.1441627443, -0.0150832878, 0.0145620955, -0.129521057, -0.1911919117, 0.0333680809, 0.0782744884, -0.0815533027, 0.13550... |
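To make the two layouts concrete, a tiny illustration with invented records:
```python
import json

records = [{"key": "a", "value": ["one", "two"]}, {"key": "b", "value": ["three"]}]

# Standard JSON: the whole file is one JSON value (here, an array).
standard_json = json.dumps(records, indent=2)

# JSON Lines: one complete JSON object per line, with no enclosing array.
json_lines = "\n".join(json.dumps(r) for r in records)

print(standard_json)
print(json_lines)
```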
https://github.com/huggingface/datasets/issues/2181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | Thanks @albertvillanova for your explanation, it is helpful to know (maybe add to docs?)!
However, the problem I described above happened when I was dealing with jsonl files 😿
Although I did not thoroughly inspect, I suspect the cause was the one extremely long document in my case. | Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-pack... | 48 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as ... | [
0.0165568814, 0.0126866689, -0.007510921, 0.3409381509, 0.1604534239, -0.264713943, 0.1936327368, 0.5859468579, -0.1621609926, -0.0815068111, 0.1654728353, 0.1441627443, -0.0150832878, 0.0145620955, -0.129521057, -0.1911919117, 0.0333680809, 0.0782744884, -0.0815533027, 0.13550... |
https://github.com/huggingface/datasets/issues/2176 | Converting a Value to a ClassLabel | Hi @nelson-liu!
Here is what I do to convert a string to class label:
```python
from datasets import load_dataset, features
dset = load_dataset(...)
col_name = "the string column name"
class_names = dset.unique(col_name)
class_feature = features.ClassLabel(names=sorted(class_names))
dset = dset.map(lam... | Hi!
In the docs for `cast`, it's noted that `For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the Dataset.`
Would it be possible to have an example that demonstrates such a string <-> ClassLabel conversion using `map`? Thanks! | 50 | Converting a Value to a ClassLabel
Hi!
In the docs for `cast`, it's noted that `For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the Dataset.`
Would it be possible to have an example that demonstrates such a string <-> ClassLabel conversion using `map`? Thanks!
Hi @nelson... | [
-0.0255649779, -0.1936041564, 0.0500335172, 0.0593826622, 0.6273642778, 0.2115247101, 0.2829761803, 0.0898771212, 0.0759950876, -0.0568678007, 0.1029049903, 0.681489408, -0.0339196958, 0.1158336401, -0.0638303161, -0.2042808831, 0.1750883013, 0.1827312559, -0.1002871767, -0.056... |
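The snippet in the comment above is cut off by the dump; a complete sketch of the same string-to-ClassLabel conversion could look like the following, using a toy in-memory dataset and placeholder column names.
```python
from datasets import ClassLabel, Dataset, Features, Value

# Toy dataset with a string column to convert (invented example).
dset = Dataset.from_dict({"text": ["good movie", "bad movie"], "label_text": ["pos", "neg"]})
col_name = "label_text"

class_names = sorted(dset.unique(col_name))
class_feature = ClassLabel(names=class_names)

# Map each string to its integer id and declare the new ClassLabel feature.
new_features = Features({"text": Value("string"), col_name: class_feature})
dset = dset.map(
    lambda batch: {col_name: class_feature.str2int(batch[col_name])},
    batched=True,
    features=new_features,
)
print(dset.features[col_name])
```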
https://github.com/huggingface/datasets/issues/2175 | dataset.search_batch() function outputs all -1 indices sometime. | Actually, I found the answer [here](https://github.com/facebookresearch/faiss/wiki/FAQ#what-does-it-mean-when-a-search-returns--1-ids).
So we have to do some modifications to the code for instances where the index doesn't retrieve any IDs. | I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_... | 25 | dataset.search_batch() function outputs all -1 indices sometime.
I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingf... | [
0.0112972949, -0.3721056879, -0.1019986719, 0.0192688573, 0.2021874934, -0.0966880023, 0.2803097367, 0.2968435585, 0.1391416341, 0.42091313, -0.2324289083, -0.2434537709, 0.1058328822, -0.0338982828, -0.1621746272, 0.1081454456, 0.2741161883, 0.3812990785, -0.0525575019, -0.519... |
https://github.com/huggingface/datasets/issues/2175 | dataset.search_batch() function outputs all -1 indices sometime. | @lhoestq @patrickvonplaten
I also found another small bug in the retrieval part, specifically when retrieving documents. If Faiss returns -1 as the index, the retriever will always use the last element in the dataset.
please check [def get_doc_dicts function](https://github.com/huggingface/transformers/blo... | I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_... | 52 | dataset.search_batch() function outputs all -1 indices sometime.
I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingf... | [
0.0158523172, -0.2806919217, -0.0632928312, 0.0849648342, 0.0853545889, -0.103846103, 0.2369082123, 0.302310586, 0.1052338704, 0.3437609076, -0.2517733872, -0.2153438777, 0.1649908423, -0.2617836893, -0.1169081628, 0.057884898, 0.2875570059, 0.3696743548, -0.0135726519, -0.5771... |
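A tiny reproduction of why a raw -1 from Faiss silently selects the last element when used for positional lookup; the document list is invented.
```python
import numpy as np

docs = ["doc_0", "doc_1", "doc_2", "doc_3"]
retrieved_ids = np.array([2, -1, 0])  # -1 means Faiss found nothing for that query

# Indexing with the raw ids treats -1 as "last element", i.e. doc_3 here.
print([docs[i] for i in retrieved_ids])  # ['doc_2', 'doc_3', 'doc_0']
```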
https://github.com/huggingface/datasets/issues/2175 | dataset.search_batch() function outputs all -1 indices sometime. | Hi !
No, it does sometimes return -1, especially if your dataset is small.
If your dataset is big enough it shouldn't happen in my experience.
Ideally we should ignore all the -1 that are returned. It should be possible to change that in RAG's code | I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_... | 47 | dataset.search_batch() function outputs all -1 indices sometime.
I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingf... | [
-0.0197229125, -0.4322830439, -0.0812415555, 0.0478303842, 0.2197976559, -0.0908180848, 0.3115732074, 0.2803555131, 0.1310122013, 0.4431152642, -0.2505156994, -0.1531092525, 0.0910181701, -0.0791349411, -0.135779798, 0.0532656647, 0.2860961556, 0.3947463632, -0.1084835306, -0.5... |
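One possible way to ignore the -1 ids before any lookup, as a rough sketch (another option is to repeat a valid id from the same query, as suggested below):
```python
import numpy as np

ids = np.array([[12, -1, 7], [-1, -1, 3]])  # search_batch-style output, -1 = no hit

# Option 1: simply drop the -1 entries per query.
valid_per_query = [row[row != -1] for row in ids]

# Option 2: replace each -1 with a valid id from the same query (if there is one).
filled = ids.copy()
for row in filled:
    valid = row[row != -1]
    if len(valid):
        row[row == -1] = valid[0]

print(valid_per_query)
print(filled)
```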
https://github.com/huggingface/datasets/issues/2175 | dataset.search_batch() function outputs all -1 indices sometime. | I also checked with some indexes it returns more -1s. Specially with IVF
when nprobr is very low. It doesn't happen when using HNSW though. But at
the moment if it happens, dataset will always return the last element.
Maybe we should change it to repeat the most last valid retrieved doc id.
What do you think? | I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_... | 150 | dataset.search_batch() function outputs all -1 indices sometime.
I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingf... | [
-0.035219565, -0.3799370527, -0.0478164703, 0.042572245, 0.1534608752, -0.082297273, 0.2974610627, 0.3594942391, 0.1545496285, 0.4000560045, -0.2719640136, -0.1355304718, 0.180184558, -0.1162477657, -0.1042529717, 0.0176871736, 0.2510519922, 0.411700964, -0.0238935202, -0.56550... |
https://github.com/huggingface/datasets/issues/2175 | dataset.search_batch() function outputs all -1 indices sometime. | That would be an easy way to workaround this issue. Feel free to open a PR on `transformers` and ping me ! :) | I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_... | 23 | dataset.search_batch() function outputs all -1 indices sometime.
I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingf... | [
-0.0255078021, -0.4342887104, -0.067788817, 0.0737092644, 0.2557209432, -0.0868848711, 0.3684978485, 0.2831261754, 0.0772827566, 0.4528711438, -0.2223180979, -0.1984307915, 0.0448378399, -0.063660413, -0.1257003546, -0.0068469942, 0.2816343904, 0.4138690531, -0.1538361609, -0.5... |
https://github.com/huggingface/datasets/issues/2175 | dataset.search_batch() function outputs all -1 indices sometime. | Sure. Will push everything together with RAG end to end. :) thanks a lot.
| I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_... | 82 | dataset.search_batch() function outputs all -1 indices sometime.
I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingf... | [
-0.0283131972, -0.4459781945, -0.0621184483, 0.1084140837, 0.2921060622, -0.0717789605, 0.3549712598, 0.3072515428, 0.0994757265, 0.4373638332, -0.2883693874, -0.1778712422, 0.0475872755, -0.0405464843, -0.1186914295, 0.0098080011, 0.2470598072, 0.4063918293, -0.1565123498, -0.... |