html_url string (length 48-51) | title string (length 5-268) | comments string (length 70-51.8k) | body string (length 0-29.8k) | comment_length int64 (16-1.52k) | text string (length 164-54.1k) | embeddings list |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/2046 | add_faisis_index gets very slow when doing it interatively | Ok, I will report the details too soon. I am the first one on the list and currently add_index being computed for the 3rd time in the loop. Actually seems like the time is taken to complete each interaction is the same, but around 1 hour more compared to running it without the training loop. A the moment this takes 5hr... | As the below code suggests, I want to run add_faisis_index in every nth interaction from the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run an as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this takes usually 5hrs. Is this normal? Any ... | 91 | add_faisis_index gets very slow when doing it interatively
As the below code suggests, I want to run add_faisis_index in every nth interaction from the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run an as a separate process similar to the script given in rag/use_own_knowleldge_d... | [
-0.49882426857948303, -0.26827821135520935, -0.024500802159309387, 0.12822844088077545, 0.059439767152071, 0.20778393745422363, 0.12146293371915817, 0.424129843711853, 0.2917422652244568, 0.2924969494342804, -0.11882511526346207, 0.18831509351730347, 0.13721297681331635, 0.0858209207653999... |
https://github.com/huggingface/datasets/issues/2046 | add_faisis_index gets very slow when doing it interatively | @lhoestq on a different note, I read about using Faiss-GPU, but the documentation says we should use it when the dataset has the ability to fit into the GPU memory. Although this might work, in the long-term this is not that practical for me.
https://github.com/matsui528/faiss_tips | As the below code suggests, I want to run add_faisis_index in every nth interaction from the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run an as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this takes usually 5hrs. Is this normal? Any ... | 45 | add_faisis_index gets very slow when doing it interatively
As the below code suggests, I want to run add_faisis_index in every nth interaction from the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run an as a separate process similar to the script given in rag/use_own_knowleldge_d... | [
-0.49882426857948303, -0.26827821135520935, -0.024500802159309387, 0.12822844088077545, 0.059439767152071, 0.20778393745422363, 0.12146293371915817, 0.424129843711853, 0.2917422652244568, 0.2924969494342804, -0.11882511526346207, 0.18831509351730347, 0.13721297681331635, 0.0858209207653999... |
https://github.com/huggingface/datasets/issues/2046 | add_faisis_index gets very slow when doing it interatively | @lhoestq
Hi, I executed the **use_own_dataset.py** script independently and ask a few of my friends to run their programs in the HPC machine at the same time.
Once there are so many other processes are running the add_index function gets slows down naturally. So basically the speed of the add_index depends ... | As the below code suggests, I want to run add_faisis_index in every nth interaction from the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run an as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this takes usually 5hrs. Is this normal? Any ... | 121 | add_faisis_index gets very slow when doing it interatively
As the below code suggests, I want to run add_faisis_index in every nth interaction from the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run an as a separate process similar to the script given in rag/use_own_knowleldge_d... | [
-0.49882426857948303, -0.26827821135520935, -0.024500802159309387, 0.12822844088077545, 0.059439767152071, 0.20778393745422363, 0.12146293371915817, 0.424129843711853, 0.2917422652244568, 0.2924969494342804, -0.11882511526346207, 0.18831509351730347, 0.13721297681331635, 0.0858209207653999... |
https://github.com/huggingface/datasets/issues/2046 | add_faisis_index gets very slow when doing it interatively | It's a matter of tradeoffs.
HNSW is fast at query time but takes some time to build.
A flat index is fast to build but is "slow" at query time.
An IVF index is probably a good choice for you: fast building and fast queries (but still slower queries than HNSW).
Note that for an IVF index you would need to have an ... | As the below code suggests, I want to run add_faisis_index in every nth interaction from the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run an as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this takes usually 5hrs. Is this normal? Any ... | 181 | add_faisis_index gets very slow when doing it interatively
As the below code suggests, I want to run add_faisis_index in every nth interaction from the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run an as a separate process similar to the script given in rag/use_own_knowleldge_d... | [
-0.49882426857948303, -0.26827821135520935, -0.024500802159309387, 0.12822844088077545, 0.059439767152071, 0.20778393745422363, 0.12146293371915817, 0.424129843711853, 0.2917422652244568, 0.2924969494342804, -0.11882511526346207, 0.18831509351730347, 0.13721297681331635, 0.0858209207653999... |
https://github.com/huggingface/datasets/issues/2046 | add_faisis_index gets very slow when doing it interatively | @lhoestq
Thanks a lot for sharing all this prior knowledge.
Just asking what would be a good nlist of parameters for 30 million embeddings? | As the below code suggests, I want to run add_faisis_index in every nth interaction from the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run an as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this takes usually 5hrs. Is this normal? Any ... | 24 | add_faisis_index gets very slow when doing it interatively
As the below code suggests, I want to run add_faisis_index in every nth interaction from the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run an as a separate process similar to the script given in rag/use_own_knowleldge_d... | [
-0.49882426857948303, -0.26827821135520935, -0.024500802159309387, 0.12822844088077545, 0.059439767152071, 0.20778393745422363, 0.12146293371915817, 0.424129843711853, 0.2917422652244568, 0.2924969494342804, -0.11882511526346207, 0.18831509351730347, 0.13721297681331635, 0.0858209207653999... |
https://github.com/huggingface/datasets/issues/2046 | add_faisis_index gets very slow when doing it interatively | When IVF is used alone, nlist should be between `4*sqrt(n)` and `16*sqrt(n)`.
For more details take a look at [this section of the Faiss wiki](https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index#how-big-is-the-dataset) | As the below code suggests, I want to run add_faisis_index in every nth interaction from the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run an as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this takes usually 5hrs. Is this normal? Any ... | 25 | add_faisis_index gets very slow when doing it interatively
As the below code suggests, I want to run add_faisis_index in every nth interaction from the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run an as a separate process similar to the script given in rag/use_own_knowleldge_d... | [
-0.49882426857948303, -0.26827821135520935, -0.024500802159309387, 0.12822844088077545, 0.059439767152071, 0.20778393745422363, 0.12146293371915817, 0.424129843711853, 0.2917422652244568, 0.2924969494342804, -0.11882511526346207, 0.18831509351730347, 0.13721297681331635, 0.0858209207653999... |
https://github.com/huggingface/datasets/issues/2046 | add_faisis_index gets very slow when doing it interatively | @lhoestq Thanks a lot for the help you have given to solve this issue. As per my experiments, IVF index suits well for my case and it is a lot faster. The use of this can make the entire RAG end-to-end trainable lot faster. So I will close this issue. Will do the final PR soon. | As the below code suggests, I want to run add_faisis_index in every nth interaction from the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run an as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this takes usually 5hrs. Is this normal? Any ... | 56 | add_faisis_index gets very slow when doing it interatively
As the below code suggests, I want to run add_faisis_index in every nth interaction from the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run an as a separate process similar to the script given in rag/use_own_knowleldge_d... | [
-0.49882426857948303, -0.26827821135520935, -0.024500802159309387, 0.12822844088077545, 0.059439767152071, 0.20778393745422363, 0.12146293371915817, 0.424129843711853, 0.2917422652244568, 0.2924969494342804, -0.11882511526346207, 0.18831509351730347, 0.13721297681331635, 0.0858209207653999... |
https://github.com/huggingface/datasets/issues/2040 | ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk | Hi ! To help me understand the situation, can you print the values of `load_from_disk(PATH_DATA_CLS_A)['train']._indices_data_files` and `load_from_disk(PATH_DATA_CLS_B)['train']._indices_data_files` ?
They should both have a path to an arrow file
Also note that from #2025 concatenating datasets will no longer hav... | Hi there,
I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects):
```python
concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']])
```
Yie... | 41 | ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk
Hi there,
I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects):
```python
concatenate_datasets([... | [
-0.011297954246401787, -0.025898095220327377, -0.04495936259627342, 0.5074424743652344, 0.1715032309293747, 0.18271680176258087, 0.057955753058195114, 0.1286478340625763, -0.06289076805114746, 0.13362279534339905, 0.07037336379289627, 0.2955009937286377, -0.08438046276569366, -0.1025010794... |
https://github.com/huggingface/datasets/issues/2040 | ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk | Sure, thanks for the fast reply!
For dataset A: `[{'filename': 'drive/MyDrive/data_target_task/dataset_a/train/cache-4797266bf4db1eb7.arrow'}]`
For dataset B: `[]`
No clue why for B it returns nothing. `PATH_DATA_CLS_B` is exactly the same in `save_to_disk` and `load_from_disk`... Also I can verify that the fold... | Hi there,
I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects):
```python
concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']])
```
Yie... | 43 | ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk
Hi there,
I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects):
```python
concatenate_datasets([... | [
-0.007757666055113077, -0.007315397262573242, -0.018826721236109734, 0.5390485525131226, 0.22807568311691284, 0.18429206311702728, 0.05310789868235588, 0.13096661865711212, -0.05644036829471588, 0.14849811792373657, 0.06288619339466095, 0.23425522446632385, -0.032504647970199585, -0.166605... |
https://github.com/huggingface/datasets/issues/2040 | ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk | In the next release you'll be able to concatenate any kinds of dataset (either from memory or from disk).
For now I'd suggest you to flatten the indices of the A and B datasets. This will remove the indices mapping and you will be able to concatenate them. You can flatten the indices with
```python
dataset = datas... | Hi there,
I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects):
```python
concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']])
```
Yie... | 59 | ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk
Hi there,
I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects):
```python
concatenate_datasets([... | [
-0.0903853103518486, -0.0803101435303688, -0.021714722737669945, 0.42966046929359436, 0.21229062974452972, 0.26673153042793274, 0.04071652144193649, 0.17951634526252747, -0.07189302146434784, 0.1746957004070282, -0.0009913422400131822, 0.1995583027601242, -0.0881771445274353, -0.0326920747... |
https://github.com/huggingface/datasets/issues/2040 | ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk | Indeed this works. Not the most elegant solution, but it does the trick. Thanks a lot! | Hi there,
I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects):
```python
concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']])
```
Yie... | 16 | ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk
Hi there,
I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects):
```python
concatenate_datasets([... | [
-0.04838201031088829, -0.07965635508298874, -0.021617192775011063, 0.4603257179260254, 0.1872410774230957, 0.26541784405708313, 0.04070878401398659, 0.11934351176023483, -0.04796808958053589, 0.14856724441051483, 0.05258757993578911, 0.1789597123861313, -0.06306624412536621, -0.07126651704... |
https://github.com/huggingface/datasets/issues/2038 | outdated dataset_infos.json might fail verifications | Hi ! Thanks for reporting.
To update the dataset_infos.json you can run:
```
datasets-cli test ./datasets/doc2dial --all_configs --save_infos --ignore_verifications
``` | The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It would fail data_loader when verifying download checksum etc..
Could you please update this file or point me how to update this file?
Thank you. | 20 | outdated dataset_infos.json might fail verifications
The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It would fail data_loader when verifying download checksum etc..
Could you please update this file or point me how to update ... | [
-0.12009257823228836, 0.1984141319990158, -0.11187494546175003, 0.1850014179944992, 0.11309680342674255, 0.216303750872612, 0.10356401652097702, 0.4946839511394501, 0.20697243511676788, -0.0703190341591835, 0.07293292135000229, 0.0472201332449913, 0.19815464317798615, 0.2417430281639099, ... |
https://github.com/huggingface/datasets/issues/2035 | wiki40b/wikipedia for almost all languages cannot be downloaded | Dear @lhoestq for wikipedia dataset I also get the same error, I greatly appreciate if you could have a look into this dataset as well. Below please find the command to reproduce the error:
```
dataset = load_dataset("wikipedia", "20200501.bg")
print(dataset)
```
Your library is my only chance to be able train... | Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error.
I rea... | 62 | wiki40b/wikipedia for almost all languages cannot be downloaded
Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For alm... | [
-0.2544982135295868, -0.07792056351900101, -0.15375836193561554, 0.43086668848991394, 0.39970290660858154, 0.35046160221099854, 0.1368504762649536, 0.5331581830978394, 0.1893271654844284, 0.02358449064195156, -0.17730358242988586, -0.09421747922897339, 0.0944037139415741, 0.011184183880686... |
https://github.com/huggingface/datasets/issues/2035 | wiki40b/wikipedia for almost all languages cannot be downloaded | Hi @dorost1234,
Try installing this library first, `pip install 'apache-beam[gcp]' --use-feature=2020-resolver` followed by loading dataset like this using beam runner.
`dataset = load_dataset("wiki40b", "cs", beam_runner='DirectRunner')`
I also read in error stack trace that:
> Trying to generate a dataset ... | Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error.
I rea... | 83 | wiki40b/wikipedia for almost all languages cannot be downloaded
Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For alm... | [
-0.2544982135295868, -0.07792056351900101, -0.15375836193561554, 0.43086668848991394, 0.39970290660858154, 0.35046160221099854, 0.1368504762649536, 0.5331581830978394, 0.1893271654844284, 0.02358449064195156, -0.17730358242988586, -0.09421747922897339, 0.0944037139415741, 0.011184183880686... |
https://github.com/huggingface/datasets/issues/2035 | wiki40b/wikipedia for almost all languages cannot be downloaded | For wikipedia dataset, looks like the files it's looking for are no longer available. For `bg`, I checked [here](https://dumps.wikimedia.org/bgwiki/). For this I think `dataset_infos.json` for this dataset has to made again? You'll have to load this dataset also using beam runner.
| Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error.
I rea... | 41 | wiki40b/wikipedia for almost all languages cannot be downloaded
Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For alm... | [
-0.2544982135295868, -0.07792056351900101, -0.15375836193561554, 0.43086668848991394, 0.39970290660858154, 0.35046160221099854, 0.1368504762649536, 0.5331581830978394, 0.1893271654844284, 0.02358449064195156, -0.17730358242988586, -0.09421747922897339, 0.0944037139415741, 0.011184183880686... |
https://github.com/huggingface/datasets/issues/2035 | wiki40b/wikipedia for almost all languages cannot be downloaded | Hello @dorost1234,
Indeed, Wikipedia datasets need a lot of preprocessing and this is done using Apache Beam. That is the reason why it is required that you install Apache Beam in order to perform this preprocessing.
For some specific default parameters (English Wikipedia), Hugging Face has already preprocessed t... | Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error.
I rea... | 94 | wiki40b/wikipedia for almost all languages cannot be downloaded
Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For alm... | [
-0.2544982135295868, -0.07792056351900101, -0.15375836193561554, 0.43086668848991394, 0.39970290660858154, 0.35046160221099854, 0.1368504762649536, 0.5331581830978394, 0.1893271654844284, 0.02358449064195156, -0.17730358242988586, -0.09421747922897339, 0.0944037139415741, 0.011184183880686... |
https://github.com/huggingface/datasets/issues/2035 | wiki40b/wikipedia for almost all languages cannot be downloaded | Hi
I really appreciate if huggingface can kindly provide preprocessed
datasets, processing these datasets require sufficiently large resources
and I do not have unfortunately access to, and perhaps many others too.
thanks
On Fri, Mar 12, 2021 at 9:04 AM Albert Villanova del Moral <
***@***.***> wrote:
> Hello @dorost... | Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error.
I rea... | 185 | wiki40b/wikipedia for almost all languages cannot be downloaded
Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For alm... | [
-0.2544982135295868, -0.07792056351900101, -0.15375836193561554, 0.43086668848991394, 0.39970290660858154, 0.35046160221099854, 0.1368504762649536, 0.5331581830978394, 0.1893271654844284, 0.02358449064195156, -0.17730358242988586, -0.09421747922897339, 0.0944037139415741, 0.011184183880686... |
https://github.com/huggingface/datasets/issues/2035 | wiki40b/wikipedia for almost all languages cannot be downloaded | Hi everyone
thanks for the helpful pointers, I did it as @bhavitvyamalik suggested, for me this freezes on this command for several hours,
`Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /users/dara/cache/datasets... | Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error.
I rea... | 65 | wiki40b/wikipedia for almost all languages cannot be downloaded
Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For alm... | [
-0.2544982135295868, -0.07792056351900101, -0.15375836193561554, 0.43086668848991394, 0.39970290660858154, 0.35046160221099854, 0.1368504762649536, 0.5331581830978394, 0.1893271654844284, 0.02358449064195156, -0.17730358242988586, -0.09421747922897339, 0.0944037139415741, 0.011184183880686... |
https://github.com/huggingface/datasets/issues/2035 | wiki40b/wikipedia for almost all languages cannot be downloaded | HI @dorost1234,
The dataset size is 631.84 MiB so depending on your internet speed it'll take some time. You can monitor your internet speed meanwhile to see if it's downloading the dataset or not (use `nload` if you're using linux/mac to monitor the same). In my case it took around 3-4 mins. Since they haven't used ... | Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error.
I rea... | 65 | wiki40b/wikipedia for almost all languages cannot be downloaded
Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For alm... | [
-0.2544982135295868, -0.07792056351900101, -0.15375836193561554, 0.43086668848991394, 0.39970290660858154, 0.35046160221099854, 0.1368504762649536, 0.5331581830978394, 0.1893271654844284, 0.02358449064195156, -0.17730358242988586, -0.09421747922897339, 0.0944037139415741, 0.011184183880686... |
https://github.com/huggingface/datasets/issues/2035 | wiki40b/wikipedia for almost all languages cannot be downloaded | Hi
thanks, my internet speed should be good, but this really freezes for me, this is how I try to get this dataset:
`from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs", beam_runner='DirectRunner')`
the output I see if different also from what you see after writing this command:
`Downlo... | Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error.
I rea... | 212 | wiki40b/wikipedia for almost all languages cannot be downloaded
Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For alm... | [
-0.2544982135295868, -0.07792056351900101, -0.15375836193561554, 0.43086668848991394, 0.39970290660858154, 0.35046160221099854, 0.1368504762649536, 0.5331581830978394, 0.1893271654844284, 0.02358449064195156, -0.17730358242988586, -0.09421747922897339, 0.0944037139415741, 0.011184183880686... |
https://github.com/huggingface/datasets/issues/2035 | wiki40b/wikipedia for almost all languages cannot be downloaded | I tried this on another machine (followed the same procedure I've mentioned above). This is what it shows (during the freeze period) for me:
```
>>> dataset = load_dataset("wiki40b", "cs", beam_runner='DirectRunner')
Downloading: 5.26kB [00:00, 1.23MB/s] ... | Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error.
I rea... | 156 | wiki40b/wikipedia for almost all languages cannot be downloaded
Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For alm... | [
-0.2544982135295868, -0.07792056351900101, -0.15375836193561554, 0.43086668848991394, 0.39970290660858154, 0.35046160221099854, 0.1368504762649536, 0.5331581830978394, 0.1893271654844284, 0.02358449064195156, -0.17730358242988586, -0.09421747922897339, 0.0944037139415741, 0.011184183880686... |
https://github.com/huggingface/datasets/issues/2035 | wiki40b/wikipedia for almost all languages cannot be downloaded | Hi
I honestly also now tried on another machine and nothing shows up after
hours of waiting. Are you sure you have not set any specific setting? maybe
google cloud which seems it is used here, needs some credential setting?
thanks for any suggestions on this
On Tue, Mar 16, 2021 at 10:02 AM Bhavitvya Malik ***@*... | Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error.
I rea... | 259 | wiki40b/wikipedia for almost all languages cannot be downloaded
Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For alm... | [
-0.2544982135295868, -0.07792056351900101, -0.15375836193561554, 0.43086668848991394, 0.39970290660858154, 0.35046160221099854, 0.1368504762649536, 0.5331581830978394, 0.1893271654844284, 0.02358449064195156, -0.17730358242988586, -0.09421747922897339, 0.0944037139415741, 0.011184183880686... |
https://github.com/huggingface/datasets/issues/2031 | wikipedia.py generator that extracts XML doesn't release memory | Hi @miyamonz
Thanks for investigating this issue, good job !
It would be awesome to integrate your fix in the library, could you open a pull request ? | I tried downloading Japanese wikipedia, but it always failed because of out of memory maybe.
I found that the generator function that extracts XML data in wikipedia.py doesn't release memory in the loop.
https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikip... | 28 | wikipedia.py generator that extracts XML doesn't release memory
I tried downloading Japanese wikipedia, but it always failed because of out of memory maybe.
I found that the generator function that extracts XML data in wikipedia.py doesn't release memory in the loop.
https://github.com/huggingface/datasets/blob... | [
0.21865254640579224, -0.11096194386482239, -0.052367083728313446, 0.624684751033783, 0.3160347640514374, 0.09426531940698624, -0.21173663437366486, 0.33323344588279724, 0.20938129723072052, 0.25795042514801025, 0.006053997669368982, 0.1517626792192459, 0.2327897995710373, -0.12655332684516... |
https://github.com/huggingface/datasets/issues/2029 | Loading a faiss index KeyError | In your code `dataset2` doesn't contain the "embeddings" column, since it is created from the pandas DataFrame with columns "text" and "label".
Therefore when you call `dataset2[embeddings_name]`, you get a `KeyError`.
If you want the "embeddings" column back, you can create `dataset2` with
```python
dataset2 =... | I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation.
The basic steps are:
1. Create a dataset (dataset1)
2. Create an embeddings column using DPR
3. Add a faiss index to the dataset
4. Save faiss index to a file
5. Create a new dataset (d... | 65 | Loading a faiss index KeyError
I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation.
The basic steps are:
1. Create a dataset (dataset1)
2. Create an embeddings column using DPR
3. Add a faiss index to the dataset
4. Save faiss index to a... | [
0.0550745390355587, -0.6190828084945679, 0.07025013118982315, 0.36131319403648376, 0.15540002286434174, 0.271070659160614, 0.345333993434906, 0.05105719342827797, 0.5391319990158081, 0.2844027876853943, -0.0778135433793068, 0.15644237399101257, 0.4091763198375702, -0.06366683542728424, -... |
https://github.com/huggingface/datasets/issues/2029 | Loading a faiss index KeyError | Ok in that case HF should fix their misleading example at https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index
I copy-pasted it here.
> When you are done with your queries you can save your index on disk:
>
> ```python
> ds_with_embeddings.save_faiss_index('embeddings', 'my_index.faiss... | I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation.
The basic steps are:
1. Create a dataset (dataset1)
2. Create an embeddings column using DPR
3. Add a faiss index to the dataset
4. Save faiss index to a file
5. Create a new dataset (d... | 57 | Loading a faiss index KeyError
I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation.
The basic steps are:
1. Create a dataset (dataset1)
2. Create an embeddings column using DPR
3. Add a faiss index to the dataset
4. Save faiss index to a... | [
0.017279598861932755, -0.5772307515144348, 0.06312285363674164, 0.2795126736164093, 0.08014950156211853, 0.27220675349235535, 0.3179304301738739, 0.1032484620809555, 0.5430013537406921, 0.27028775215148926, -0.11547315865755081, 0.11195041239261627, 0.4107092320919037, -0.10571878403425217... |
https://github.com/huggingface/datasets/issues/2029 | Loading a faiss index KeyError | Hi !
The code of the example is valid.
An index is a search engine, it's not considered a column of a dataset.
When you do `ds.load_faiss_index("embeddings", 'my_index.faiss')`, it attaches an index named "embeddings" to the dataset but it doesn't re-add the "embeddings" column. You can list the indexes of a datas... | I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation.
The basic steps are:
1. Create a dataset (dataset1)
2. Create an embeddings column using DPR
3. Add a faiss index to the dataset
4. Save faiss index to a file
5. Create a new dataset (d... | 119 | Loading a faiss index KeyError
I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation.
The basic steps are:
1. Create a dataset (dataset1)
2. Create an embeddings column using DPR
3. Add a faiss index to the dataset
4. Save faiss index to a... | [
0.13970813155174255, -0.533545970916748, 0.05147164687514305, 0.3196974992752075, 0.1264747977256775, 0.27505674958229065, 0.41199612617492676, -0.040478285402059555, 0.6586287021636963, 0.2073548585176468, -0.062413282692432404, 0.1538582593202591, 0.3866048753261566, -0.04890155792236328... |
https://github.com/huggingface/datasets/issues/2029 | Loading a faiss index KeyError | > If I understand correctly by reading this example you thought that it was re-adding the "embeddings" column.
Yes. I was trying to use the dataset in RAG and it complained that the dataset didn't have the right columns. No problems when loading the dataset with `load_from_disk` and then doing `load_faiss_index`
Wh... | I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation.
The basic steps are:
1. Create a dataset (dataset1)
2. Create an embeddings column using DPR
3. Add a faiss index to the dataset
4. Save faiss index to a file
5. Create a new dataset (d... | 115 | Loading a faiss index KeyError
I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation.
The basic steps are:
1. Create a dataset (dataset1)
2. Create an embeddings column using DPR
3. Add a faiss index to the dataset
4. Save faiss index to a... | [
0.1141662672162056, -0.6093639731407166, 0.06223635748028755, 0.37251102924346924, 0.13730216026306152, 0.28628361225128174, 0.40662822127342224, 0.02796567976474762, 0.5614453554153442, 0.20742477476596832, -0.05763896927237511, 0.15142212808132172, 0.473791241645813, -0.0607357881963253,... |
https://github.com/huggingface/datasets/issues/2026 | KeyError on using map after renaming a column | Hi,
Actually, the error occurs due to these two lines:
```python
raw_dataset.set_format('torch',columns=['img','label'])
raw_dataset = raw_dataset.rename_column('img','image')
```
`Dataset.rename_column` doesn't update the `_format_columns` attribute, previously defined by `Dataset.set_format`, with a new colum... | Hi,
I'm trying to use `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying `prepare_train_features` function.
Here is what I try:
```python
transform = Compose([ToPILImage(),... | 42 | KeyError on using map after renaming a column
Hi,
I'm trying to use `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying `prepare_train_features` function.
Here is what I try:
... | [
0.03439050167798996, 0.006200192496180534, -0.06358009576797485, -0.33422812819480896, 0.4808439612388611, 0.26163917779922485, 0.6042658090591431, 0.23733209073543549, 0.13709159195423126, 0.08659088611602783, 0.05403544753789902, 0.5270302891731262, -0.15589521825313568, 0.29946288466453... |
https://github.com/huggingface/datasets/issues/2026 | KeyError on using map after renaming a column | Hi @mariosasko,
Thanks for opening a PR on this :)
Why does the old name also disappear? | Hi,
I'm trying to use `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying `prepare_train_features` function.
Here is what I try:
```python
transform = Compose([ToPILImage(),... | 17 | KeyError on using map after renaming a column
Hi,
I'm trying to use `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying `prepare_train_features` function.
Here is what I try:
... | [
0.03439050167798996, 0.006200192496180534, -0.06358009576797485, -0.33422812819480896, 0.4808439612388611, 0.26163917779922485, 0.6042658090591431, 0.23733209073543549, 0.13709159195423126, 0.08659088611602783, 0.05403544753789902, 0.5270302891731262, -0.15589521825313568, 0.29946288466453... |
https://github.com/huggingface/datasets/issues/2026 | KeyError on using map after renaming a column | I just merged a @mariosasko 's PR that fixes this issue.
If it happens again, feel free to re-open :) | Hi,
I'm trying to use `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying `prepare_train_features` function.
Here is what I try:
```python
transform = Compose([ToPILImage(),... | 20 | KeyError on using map after renaming a column
Hi,
I'm trying to use `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying `prepare_train_features` function.
Here is what I try:
... | [
0.03439050167798996, 0.006200192496180534, -0.06358009576797485, -0.33422812819480896, 0.4808439612388611, 0.26163917779922485, 0.6042658090591431, 0.23733209073543549, 0.13709159195423126, 0.08659088611602783, 0.05403544753789902, 0.5270302891731262, -0.15589521825313568, 0.29946288466453... |
https://github.com/huggingface/datasets/issues/2022 | ValueError when rename_column on splitted dataset | Hi,
This is a bug so thanks for reporting it. `Dataset.__setstate__` is the problem, which is called when `Dataset.rename_column` tries to copy the dataset with `copy.deepcopy(self)`. This only happens if the `split` arg in `load_dataset` was defined as `ReadInstruction`.
To overcome this issue, use the named sp... | Hi there,
I am loading `.tsv` file via `load_dataset` and subsequently split the rows into training and test set via the `ReadInstruction` API like so:
```python
split = {
'train': ReadInstruction('train', to=90, unit='%'),
'test': ReadInstruction('train', from_=-10, unit='%')
}
dataset = load_datase... | 66 | ValueError when rename_column on splitted dataset
Hi there,
I am loading `.tsv` file via `load_dataset` and subsequently split the rows into training and test set via the `ReadInstruction` API like so:
```python
split = {
'train': ReadInstruction('train', to=90, unit='%'),
'test': ReadInstruction('trai... | [
-0.0835752859711647, 0.21979939937591553, -0.03482932224869728, -0.04238487407565117, 0.42087700963020325, 0.07306195795536041, 0.6438039541244507, 0.41798415780067444, -0.022552840411663055, 0.339653342962265, -0.08668669313192368, 0.39780375361442566, -0.052184779196977615, 0.36443546414... |
https://github.com/huggingface/datasets/issues/2022 | ValueError when rename_column on splitted dataset | This has been fixed in #2043 , thanks @mariosasko
The fix is available on master and we'll do a new release soon :)
feel free to re-open if you still have issues | Hi there,
I am loading `.tsv` file via `load_dataset` and subsequently split the rows into training and test set via the `ReadInstruction` API like so:
```python
split = {
'train': ReadInstruction('train', to=90, unit='%'),
'test': ReadInstruction('train', from_=-10, unit='%')
}
dataset = load_datase... | 32 | ValueError when rename_column on splitted dataset
Hi there,
I am loading `.tsv` file via `load_dataset` and subsequently split the rows into training and test set via the `ReadInstruction` API like so:
```python
split = {
'train': ReadInstruction('train', to=90, unit='%'),
'test': ReadInstruction('trai... | [
-0.0835752859711647, 0.21979939937591553, -0.03482932224869728, -0.04238487407565117, 0.42087700963020325, 0.07306195795536041, 0.6438039541244507, 0.41798415780067444, -0.022552840411663055, 0.339653342962265, -0.08668669313192368, 0.39780375361442566, -0.052184779196977615, 0.36443546414... |
https://github.com/huggingface/datasets/issues/2021 | Interactively doing save_to_disk and load_from_disk corrupts the datasets object? | Hi,
Can you give us a minimal reproducible example? This [part](https://huggingface.co/docs/datasets/master/processing.html#controling-the-cache-behavior) of the docs explains how to control caching. | dataset_info.json file saved after using save_to_disk gets corrupted as follows.

Is there a way to disable the cache that will save to /tmp/huggiface/datastes ?
I have a feeling there is a seri... | 19 | Interactively doing save_to_disk and load_from_disk corrupts the datasets object?
dataset_info.json file saved after using save_to_disk gets corrupted as follows.

Is there a way to disable the... | [
-0.07485474646091461, -0.14697182178497314, 0.05617265775799751, 0.7758012413978577, 0.2905219793319702, 0.32138413190841675, -0.23171839118003845, 0.1311902552843094, 0.1942947804927826, 0.1316707581281662, -0.15026943385601044, 0.06683548539876938, 0.23899391293525696, 0.2033743858337402... |
https://github.com/huggingface/datasets/issues/2012 | No upstream branch | What's the issue exactly ?
Given an `upstream` remote repository with url `https://github.com/huggingface/datasets.git`, you can totally rebase from `upstream/master`.
It's mentioned at the beginning how to add the `upstream` remote repository
https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9... | Feels like the documentation on adding a new dataset is outdated?
https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L49-L54
There is no upstream branch on remote. | 32 | No upstream branch
Feels like the documentation on adding a new dataset is outdated?
https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L49-L54
There is no upstream branch on remote.
What's the issue exactly ?
Given an `upstream` remote repository with... | [
-0.05419668182730675, -0.3429025113582611, -0.07095760107040405, -0.2234124094247818, 0.1322251558303833, 0.0014155005337670445, 0.12123461067676544, 0.014974541030824184, -0.4329249858856201, 0.1663283258676529, 0.010384000837802887, -0.050456199795007706, 0.14463484287261963, 0.202738478... |
https://github.com/huggingface/datasets/issues/2012 | No upstream branch | ~~What difference is there with the default `origin` remote that is set when you clone the repo?~~ I've just understood that this applies to **forks** of the repo 🤡 | Feels like the documentation on adding a new dataset is outdated?
https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L49-L54
There is no upstream branch on remote. | 29 | No upstream branch
Feels like the documentation on adding a new dataset is outdated?
https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L49-L54
There is no upstream branch on remote.
~~What difference is there with the default `origin` remote that is set ... | [
-0.15264350175857544, -0.39081788063049316, -0.05686843767762184, -0.38039663434028625, -0.053528472781181335, -0.12343958765268326, 0.3151809573173523, -0.0022334945388138294, -0.3539060652256012, 0.26942917704582214, -0.009293383918702602, -0.04662807658314705, 0.3955133259296417, 0.2351... |
https://github.com/huggingface/datasets/issues/2010 | Local testing fails | I'm not able to reproduce on my side.
Can you provide the full stacktrace please ?
What version of `python` and `dill` do you have ? Which OS are you using ? | I'm following the CI setup as described in
https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19
in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4
and getting
```
FAILED... | 32 | Local testing fails
I'm following the CI setup as described in
https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19
in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4
and ... | [
-0.1565997302532196, 0.09454041719436646, 0.00295068952254951, 0.0519757978618145, -0.13645029067993164, -0.2591089904308319, 0.4086023271083832, 0.22406497597694397, -0.10595893114805222, 0.26768866181373596, -0.0058868201449513435, 0.07883337885141373, -0.15376630425453186, 0.50321614742... |
https://github.com/huggingface/datasets/issues/2010 | Local testing fails | ```
co_filename = '<ipython-input-2-e0383a102aae>', returned_obj = [0]
def create_ipython_func(co_filename, returned_obj):
def func():
... | I'm following the CI setup as described in
https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19
in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4
and getting
```
FAILED... | 47 | Local testing fails
I'm following the CI setup as described in
https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19
in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4
and ... | [
-0.1595422327518463, 0.08368664979934692, 0.008074183017015457, 0.05419432371854782, -0.06059148162603378, -0.25328269600868225, 0.43223145604133606, 0.32108592987060547, 0.06659077852964401, 0.2115703672170639, -0.024156246334314346, 0.10391579568386078, -0.19237197935581207, 0.4951996505... |
https://github.com/huggingface/datasets/issues/2010 | Local testing fails | I managed to reproduce. This comes from the CodeType init signature that is different in python 3.8.8
I opened a PR to fix this test
Thanks ! | I'm following the CI setup as described in
https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19
in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4
and getting
```
FAILED... | 27 | Local testing fails
I'm following the CI setup as described in
https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19
in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4
and ... | [
-0.21295084059238434, 0.08236498385667801, 0.02384120412170887, 0.09466218203306198, 0.049615420401096344, -0.19082441926002502, 0.38125523924827576, 0.31044313311576843, 0.05058375373482704, 0.22537113726139069, 0.13752444088459015, 0.11011480540037155, -0.18604980409145355, 0.65759468078... |
https://github.com/huggingface/datasets/issues/2009 | Ambiguous documentation | Hi @theo-m !
A few lines above this line, you'll find that the `_split_generators` method returns a list of `SplitGenerator`s objects:
```python
datasets.SplitGenerator(
name=datasets.Split.VALIDATION,
# These kwargs will be passed to _generate_examples
gen_kwargs={
"filepath": os.path.jo... | https://github.com/huggingface/datasets/blob/2ac9a0d24a091989f869af55f9f6411b37ff5188/templates/new_dataset_script.py#L156-L158
Looking at the template, I find this documentation line to be confusing, the method parameters don't include the `gen_kwargs` so I'm unclear where they're coming from.
Happy to push a PR... | 79 | Ambiguous documentation
https://github.com/huggingface/datasets/blob/2ac9a0d24a091989f869af55f9f6411b37ff5188/templates/new_dataset_script.py#L156-L158
Looking at the template, I find this documentation line to be confusing, the method parameters don't include the `gen_kwargs` so I'm unclear where they're coming f... | [
0.06302601844072342, -0.05598887428641319, -0.05370459705591202, 0.10702980309724808, 0.0235678069293499, 0.16932448744773865, 0.3407929241657257, 0.10250890254974365, -0.1445973813533783, -0.16507552564144135, 0.08257517218589783, 0.3592205047607422, 0.047547597438097, 0.02296568267047405... |
https://github.com/huggingface/datasets/issues/2009 | Ambiguous documentation | Oh ok I hadn't made the connection between those two, will offer a tweak to the comment and the template then - thanks! | https://github.com/huggingface/datasets/blob/2ac9a0d24a091989f869af55f9f6411b37ff5188/templates/new_dataset_script.py#L156-L158
Looking at the template, I find this documentation line to be confusing, the method parameters don't include the `gen_kwargs` so I'm unclear where they're coming from.
Happy to push a PR... | 23 | Ambiguous documentation
https://github.com/huggingface/datasets/blob/2ac9a0d24a091989f869af55f9f6411b37ff5188/templates/new_dataset_script.py#L156-L158
Looking at the template, I find this documentation line to be confusing, the method parameters don't include the `gen_kwargs` so I'm unclear where they're coming f... | [
0.09075407683849335, -0.17848221957683563, -0.08218145370483398, -0.13477660715579987, 0.2026105672121048, 0.10430475324392319, 0.43210211396217346, 0.07421441376209259, -0.09254395961761475, -0.12975098192691803, 0.12960994243621826, 0.2071549892425537, 0.014975609257817268, 0.10097161680... |
https://github.com/huggingface/datasets/issues/2007 | How to not load huggingface datasets into memory | So maybe a summary here:
If I could fit a large model with batch_size = X into memory, is there a way I could train this model on huge datasets while keeping the same setting? thanks
I am running this example from transformers library version 4.3.3:
(Here is the full documentation https://github.com/huggingface/transformers/issues/8771 but the running command should work out of the box)
USE_TF=0 deepspeed run_seq2seq.py --model_name_or_path google/mt5-base --dataset_name wmt16 --dataset_... | 36 | How to not load huggingface datasets into memory
Hi
I am running this example from transformers library version 4.3.3:
(Here is the full documentation https://github.com/huggingface/transformers/issues/8771 but the running command should work out of the box)
USE_TF=0 deepspeed run_seq2seq.py --model_name_or_... | [
-0.15727351605892181, -0.5049940943717957, 0.006041008047759533, 0.530024528503418, 0.5585092306137085, 0.04716859757900238, 0.0790681391954422, 0.2601296305656433, 0.39253175258636475, 0.11795011162757874, -0.025012431666254997, -0.2572222948074341, -0.33554312586784363, 0.350295215845108... |
https://github.com/huggingface/datasets/issues/2007 | How to not load huggingface datasets into memory | The `datastets` library doesn't load datasets into memory. Therefore you can load a dataset that is terabytes big without filling up your RAM.
The only thing that's loaded into memory during training is the batch used in the training step.
So as long as your model works with batch_size = X, then you can load an eve... | Hi
I am running this example from transformers library version 4.3.3:
(Here is the full documentation https://github.com/huggingface/transformers/issues/8771 but the running command should work out of the box)
USE_TF=0 deepspeed run_seq2seq.py --model_name_or_path google/mt5-base --dataset_name wmt16 --dataset_... | 208 | How to not load huggingface datasets into memory
Hi
I am running this example from transformers library version 4.3.3:
(Here is the full documentation https://github.com/huggingface/transformers/issues/8771 but the running command should work out of the box)
USE_TF=0 deepspeed run_seq2seq.py --model_name_or_... | [
-0.15187662839889526, -0.5147151947021484, 0.0332164391875267, 0.4974523186683655, 0.5210767388343811, 0.013915249146521091, 0.1180272251367569, 0.23573319613933563, 0.4079457223415375, 0.18799634277820587, 0.007619801908731461, -0.20288428664207458, -0.2850470542907715, 0.3407027721405029... |
https://github.com/huggingface/datasets/issues/2005 | Setting to torch format not working with torchvision and MNIST | Adding to the previous information, I think `torch.utils.data.DataLoader` is doing some conversion.
What I tried:
```python
train_dataset = load_dataset('mnist')
```
I don't use any `map` or `set_format` or any `transform`. I use this directly, and try to load batches using the `DataLoader` with batch size 2, I g... | Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```python
def prepare_features(examples):
images = []
labe... | 202 | Setting to torch format not working with torchvision and MNIST
Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```p... | [
-0.12737514078617096, -0.35116562247276306, -0.017932644113898277, 0.35360872745513916, 0.47600245475769043, 0.08871346712112427, 0.7330296635627747, 0.38043493032455444, 0.06552495807409286, -0.0380423478782177, -0.111153244972229, 0.3819507658481598, -0.23296910524368286, -0.333934336900... |
https://github.com/huggingface/datasets/issues/2005 | Setting to torch format not working with torchvision and MNIST | What's the feature types of your new dataset after `.map` ?
Can you try with adding `features=` in the `.map` call in order to set the "image" feature type to `Array2D` ?
The default feature type is lists of lists, we've not implemented shape verification to use ArrayXD instead of nested lists yet | Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```python
def prepare_features(examples):
images = []
labe... | 53 | Setting to torch format not working with torchvision and MNIST
Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```p... | [
-0.12737514078617096, -0.35116562247276306, -0.017932644113898277, 0.35360872745513916, 0.47600245475769043, 0.08871346712112427, 0.7330296635627747, 0.38043493032455444, 0.06552495807409286, -0.0380423478782177, -0.111153244972229, 0.3819507658481598, -0.23296910524368286, -0.333934336900... |
https://github.com/huggingface/datasets/issues/2005 | Setting to torch format not working with torchvision and MNIST | Hi @lhoestq
Raw feature types are like this:
```
Image:
<class 'list'> 60000 #(type, len)
<class 'list'> 28
<class 'list'> 28
<class 'int'>
Label:
<class 'list'> 60000
<class 'int'>
```
Inside the `prepare_feature` method with batch size 100000 , after processing, they are like this:
Inside Prepare Tr... | Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```python
def prepare_features(examples):
images = []
labe... | 213 | Setting to torch format not working with torchvision and MNIST
Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```p... | [
-0.12737514078617096,
-0.35116562247276306,
-0.017932644113898277,
0.35360872745513916,
0.47600245475769043,
0.08871346712112427,
0.7330296635627747,
0.38043493032455444,
0.06552495807409286,
-0.0380423478782177,
-0.111153244972229,
0.3819507658481598,
-0.23296910524368286,
-0.333934336900... |
https://github.com/huggingface/datasets/issues/2005 | Setting to torch format not working with torchvision and MNIST | Hi @lhoestq
# Using Array3D
I tried this:
```python
features = datasets.Features({
"image": datasets.Array3D(shape=(1,28,28),dtype="float32"),
"label": datasets.features.ClassLabel(names=["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]),
})
train_dataset = raw_dataset.map(pre... | Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```python
def prepare_features(examples):
images = []
labe... | 447 | Setting to torch format not working with torchvision and MNIST
Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```p... | [
-0.12737514078617096,
-0.35116562247276306,
-0.017932644113898277,
0.35360872745513916,
0.47600245475769043,
0.08871346712112427,
0.7330296635627747,
0.38043493032455444,
0.06552495807409286,
-0.0380423478782177,
-0.111153244972229,
0.3819507658481598,
-0.23296910524368286,
-0.333934336900... |
https://github.com/huggingface/datasets/issues/2005 | Setting to torch format not working with torchvision and MNIST | # Convert raw tensors to torch format
Strangely, converting to torch tensors works perfectly on `raw_dataset`:
```python
raw_dataset.set_format('torch',columns=['image','label'])
```
Types:
```
Image:
<class 'torch.Tensor'> 60000
<class 'torch.Tensor'> 28
<class 'torch.Tensor'> 28
<class 'torch.Tensor'>
Lab... | Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```python
def prepare_features(examples):
images = []
labe... | 299 | Setting to torch format not working with torchvision and MNIST
Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```p... | [
-0.12737514078617096,
-0.35116562247276306,
-0.017932644113898277,
0.35360872745513916,
0.47600245475769043,
0.08871346712112427,
0.7330296635627747,
0.38043493032455444,
0.06552495807409286,
-0.0380423478782177,
-0.111153244972229,
0.3819507658481598,
-0.23296910524368286,
-0.333934336900... |
https://github.com/huggingface/datasets/issues/2005 | Setting to torch format not working with torchvision and MNIST | Concluding, the way it works right now is:
1. Convert the raw dataset to `torch` format.
2. Apply the transform using `map`, ensuring the returned values are tensors (a minimal sketch follows below).
3. When mapping, use `features` with `image` being `Array3D` type. | Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```python
def prepare_features(examples):
images = []
labe... | 39 | Setting to torch format not working with torchvision and MNIST
Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```p... | [
-0.12737514078617096,
-0.35116562247276306,
-0.017932644113898277,
0.35360872745513916,
0.47600245475769043,
0.08871346712112427,
0.7330296635627747,
0.38043493032455444,
0.06552495807409286,
-0.0380423478782177,
-0.111153244972229,
0.3819507658481598,
-0.23296910524368286,
-0.333934336900... |
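A minimal sketch of the three steps listed in the comment above (torch-format the raw data, transform inside `map`, declare the `image` column as `Array3D`); the normalization constants and the division by 255 are illustrative assumptions, not values from the thread:
```python
import numpy as np
import torch
import datasets
from torchvision import transforms

transform = transforms.Compose([transforms.Normalize((0.1307,), (0.3081,))])

raw_dataset = datasets.load_dataset("mnist", split="train")
raw_dataset.set_format("torch", columns=["image", "label"])           # step 1

features = datasets.Features({
    "image": datasets.Array3D(shape=(1, 28, 28), dtype="float32"),
    "label": datasets.features.ClassLabel(names=[str(i) for i in range(10)]),
})

def prepare_features(examples):                                        # step 2
    images = []
    for image in examples["image"]:
        # Convert whatever nesting the formatted column gives us into a (1, 28, 28) float tensor.
        img = torch.from_numpy(np.asarray(image, dtype=np.float32)).reshape(1, 28, 28) / 255.0
        images.append(transform(img))
    return {"image": images, "label": examples["label"]}

train_dataset = raw_dataset.map(prepare_features, batched=True, features=features)  # step 3
train_dataset.set_format("torch", columns=["image", "label"])
```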
https://github.com/huggingface/datasets/issues/2005 | Setting to torch format not working with torchvision and MNIST | What the dataset returns depends on the feature type.
For a feature type that is Sequence(Sequence(Sequence(Value("uint8")))), a dataset formatted as "torch" returns lists of lists of tensors. This is because the list lengths may vary.
For a feature type that is Array3D on the other hand it returns one tensor. This i... | Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```python
def prepare_features(examples):
images = []
labe... | 66 | Setting to torch format not working with torchvision and MNIST
Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```p... | [
-0.12737514078617096,
-0.35116562247276306,
-0.017932644113898277,
0.35360872745513916,
0.47600245475769043,
0.08871346712112427,
0.7330296635627747,
0.38043493032455444,
0.06552495807409286,
-0.0380423478782177,
-0.111153244972229,
0.3819507658481598,
-0.23296910524368286,
-0.333934336900... |
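A compact illustration of the distinction drawn in the comment above, using tiny made-up columns instead of MNIST; the comments on the prints simply restate the behaviour described there:
```python
import datasets

nested = datasets.Dataset.from_dict(
    {"image": [[[1, 2], [3, 4]]]},
    features=datasets.Features(
        {"image": datasets.Sequence(datasets.Sequence(datasets.Value("uint8")))}
    ),
)
nested.set_format("torch")
print(type(nested[0]["image"]))   # nested lists of tensors, since list lengths may vary

fixed = datasets.Dataset.from_dict(
    {"image": [[[[1.0, 2.0], [3.0, 4.0]]]]},
    features=datasets.Features(
        {"image": datasets.Array3D(shape=(1, 2, 2), dtype="float32")}
    ),
)
fixed.set_format("torch")
print(type(fixed[0]["image"]))    # a single torch.Tensor, since the shape is fixed
```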
https://github.com/huggingface/datasets/issues/2005 | Setting to torch format not working with torchvision and MNIST | Okay, that makes sense.
Raw images are a list of Array2D, hence we get a single tensor when `set_format` is used. But why should I need to convert the raw images to `torch` format when `map` does this internally?
Using `Array3D` did not work with `map` when raw images weren't `set_format`ted to torch type. | Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```python
def prepare_features(examples):
images = []
labe... | 53 | Setting to torch format not working with torchvision and MNIST
Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```p... | [
-0.12737514078617096,
-0.35116562247276306,
-0.017932644113898277,
0.35360872745513916,
0.47600245475769043,
0.08871346712112427,
0.7330296635627747,
0.38043493032455444,
0.06552495807409286,
-0.0380423478782177,
-0.111153244972229,
0.3819507658481598,
-0.23296910524368286,
-0.333934336900... |
https://github.com/huggingface/datasets/issues/2005 | Setting to torch format not working with torchvision and MNIST | I understand that `map` needs to know what kind of output tensors are expected, and thus converting the raw dataset to `torch` format is necessary. Closing the issue since it is resolved. | Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```python
def prepare_features(examples):
images = []
labe... | 32 | Setting to torch format not working with torchvision and MNIST
Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```p... | [
-0.12737514078617096,
-0.35116562247276306,
-0.017932644113898277,
0.35360872745513916,
0.47600245475769043,
0.08871346712112427,
0.7330296635627747,
0.38043493032455444,
0.06552495807409286,
-0.0380423478782177,
-0.111153244972229,
0.3819507658481598,
-0.23296910524368286,
-0.333934336900... |
https://github.com/huggingface/datasets/issues/2003 | Messages are being printed to the `stdout` | This is expected to show this message to the user via stdout.
This way the users see it directly and can cancel the downloading if they want to.
Could you elaborate why it would be better to have it in stderr instead of stdout ? | In this code segment, we can see some messages are being printed to the `stdout`.
https://github.com/huggingface/datasets/blob/7e60bb509b595e8edc60a87f32b2bacfc065d607/src/datasets/builder.py#L545-L554
According to the comment, it is done intentionally, but I don't really understand why don't we log it with a higher ... | 45 | Messages are being printed to the `stdout`
In this code segment, we can see some messages are being printed to the `stdout`.
https://github.com/huggingface/datasets/blob/7e60bb509b595e8edc60a87f32b2bacfc065d607/src/datasets/builder.py#L545-L554
According to the comment, it is done intentionally, but I don't really ... | [
-0.047096025198698044,
-0.37092748284339905,
-0.03963596001267433,
0.27061203122138977,
0.23185200989246368,
-0.05729660764336586,
0.24660848081111908,
0.11494968086481094,
-0.04178229719400406,
0.19841821491718292,
0.17326201498508453,
0.1642705500125885,
-0.11991792172193527,
0.368881851... |
https://github.com/huggingface/datasets/issues/2003 | Messages are being printed to the `stdout` | @lhoestq, sorry for the late reply
I completely understand why you decided to output a message that is always shown. The only problem is that the message is printed to the `stdout`. For example, if the user runs `python run_glue.py > log_file`, it will redirect `stdout` to the file named `log_file`, and the message... | In this code segment, we can see some messages are being printed to the `stdout`.
https://github.com/huggingface/datasets/blob/7e60bb509b595e8edc60a87f32b2bacfc065d607/src/datasets/builder.py#L545-L554
According to the comment, it is done intentionally, but I don't really understand why don't we log it with a higher ... | 90 | Messages are being printed to the `stdout`
In this code segment, we can see some messages are being printed to the `stdout`.
https://github.com/huggingface/datasets/blob/7e60bb509b595e8edc60a87f32b2bacfc065d607/src/datasets/builder.py#L545-L554
According to the comment, it is done intentionally, but I don't really ... | [
0.05412515252828598,
-0.4283032715320587,
-0.017841104418039322,
0.18631984293460846,
0.1776287704706192,
-0.1547812521457672,
0.3838963210582733,
0.16160008311271667,
0.06724230945110321,
0.2444252222776413,
0.19078323245048523,
0.30530649423599243,
-0.14255928993225098,
0.284975469112396... |
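The request above boils down to routing user-facing notices through a handler bound to `stderr`, so that `python run_glue.py > log_file` still shows them on the terminal. A minimal sketch of that pattern (not the library's actual implementation):
```python
import logging
import sys

logger = logging.getLogger("datasets_example")   # hypothetical logger name
handler = logging.StreamHandler(sys.stderr)      # stderr survives `> log_file` redirection
handler.setFormatter(logging.Formatter("%(levelname)s - %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.WARNING)

logger.warning("Downloading and preparing dataset ... (this may take a while)")
```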
https://github.com/huggingface/datasets/issues/2000 | Windows Permission Error (most recent version of datasets) | Hi @itsLuisa !
Could you give us more information about the error you're getting, please?
A copy-paste of the Traceback would be nice to get a better understanding of what is wrong :) | Hi everyone,
Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py , only I want to load the data from three local three-column tsv-files (id\ttokens\tpos_tags\n). I am... | 33 | Windows Permission Error (most recent version of datasets)
Hi everyone,
Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py , only I want to load the data from thre... | [
-0.1825570911169052,
0.1823851466178894,
-0.04003103822469711,
0.2590484619140625,
0.0813780203461647,
0.1336376667022705,
0.4580341875553131,
0.012254521250724792,
0.1864633411169052,
0.07412854582071304,
-0.06345562636852264,
-0.011226741597056389,
-0.1087961420416832,
0.1572927385568618... |
https://github.com/huggingface/datasets/issues/2000 | Windows Permission Error (most recent version of datasets) | Hello @SBrandeis , this is it:
```
Traceback (most recent call last):
File "C:\Users\Luisa\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\builder.py", line 537, in incomplete_dir
yield tmp_dir
File "C:\Users\Luisa\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\builder.... | Hi everyone,
Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py , only I want to load the data from three local three-column tsv-files (id\ttokens\tpos_tags\n). I am... | 230 | Windows Permission Error (most recent version of datasets)
Hi everyone,
Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py , only I want to load the data from thre... | [
-0.1825570911169052,
0.1823851466178894,
-0.04003103822469711,
0.2590484619140625,
0.0813780203461647,
0.1336376667022705,
0.4580341875553131,
0.012254521250724792,
0.1864633411169052,
0.07412854582071304,
-0.06345562636852264,
-0.011226741597056389,
-0.1087961420416832,
0.1572927385568618... |
https://github.com/huggingface/datasets/issues/2000 | Windows Permission Error (most recent version of datasets) | Hi @itsLuisa, thanks for sharing the Traceback.
You are defining the "id" field as a `string` feature:
```python
class Sample(datasets.GeneratorBasedBuilder):
...
def _info(self):
return datasets.DatasetInfo(
features=datasets.Features(
{
"id"... | Hi everyone,
Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py , only I want to load the data from three local three-column tsv-files (id\ttokens\tpos_tags\n). I am... | 73 | Windows Permission Error (most recent version of datasets)
Hi everyone,
Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py , only I want to load the data from thre... | [
-0.1825570911169052,
0.1823851466178894,
-0.04003103822469711,
0.2590484619140625,
0.0813780203461647,
0.1336376667022705,
0.4580341875553131,
0.012254521250724792,
0.1864633411169052,
0.07412854582071304,
-0.06345562636852264,
-0.011226741597056389,
-0.1087961420416832,
0.1572927385568618... |
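The reply above points at a mismatch between the declared `id` feature and the values the loading script actually yields. A minimal sketch of keeping the two consistent; the column layout and file name are illustrative, not the user's exact script:
```python
import datasets

class Sample(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({
                "id": datasets.Value("string"),
                "tokens": datasets.Sequence(datasets.Value("string")),
                "pos_tags": datasets.Sequence(datasets.Value("string")),
            })
        )

    def _split_generators(self, dl_manager):
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"filepath": "train.tsv"}
            )
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                sent_id, tokens, tags = line.rstrip("\n").split("\t")
                # Yield the id as a str so it matches the declared Value("string") feature.
                yield idx, {"id": str(sent_id), "tokens": tokens.split(), "pos_tags": tags.split()}
```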
https://github.com/huggingface/datasets/issues/1996 | Error when exploring `arabic_speech_corpus` | Actually soundfile is not a dependency of this dataset.
The error comes from a bug that was fixed in this commit: https://github.com/huggingface/datasets/pull/1767/commits/c304e63629f4453367de2fd42883a78768055532
Basically the library used to consider the `import soundfile` in the docstring as a dependency, while it'... | Navigate to https://huggingface.co/datasets/viewer/?dataset=arabic_speech_corpus
Error:
```
ImportError: To be able to use this dataset, you need to install the following dependencies['soundfile'] using 'pip install soundfile' for instance'
Traceback:
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/p... | 58 | Error when exploring `arabic_speech_corpus`
Navigate to https://huggingface.co/datasets/viewer/?dataset=arabic_speech_corpus
Error:
```
ImportError: To be able to use this dataset, you need to install the following dependencies['soundfile'] using 'pip install soundfile' for instance'
Traceback:
File "/home/sas... | [
-0.25912442803382874,
-0.11358871310949326,
-0.036389466375112534,
0.2228231281042099,
0.041789501905441284,
0.041484441608190536,
0.05358162894845009,
0.31405580043792725,
-0.17314375936985016,
0.05045941099524498,
-0.29817402362823486,
0.09690873324871063,
-0.11128225922584534,
-0.040644... |
https://github.com/huggingface/datasets/issues/1994 | not being able to get wikipedia es language | @lhoestq I would really appreciate it if you could help me by providing processed datasets; I do not really have access to enough resources to run apache-beam and need to run the code on these datasets. Only en/de/fr currently works, but I need all the languages more or less. Thanks. | Hi
I am trying to run a code with wikipedia of config 20200501.es, getting:
Traceback (most recent call last):
File "run_mlm_t5.py", line 608, in <module>
main()
File "run_mlm_t5.py", line 359, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/dara/libs... | 48 | not being able to get wikipedia es language
Hi
I am trying to run a code with wikipedia of config 20200501.es, getting:
Traceback (most recent call last):
File "run_mlm_t5.py", line 608, in <module>
main()
File "run_mlm_t5.py", line 359, in main
datasets = load_dataset(data_args.dataset_name, data... | [
-0.3421066403388977,
0.04719279333949089,
-0.10626477003097534,
0.06201721727848053,
0.20045500993728638,
0.17774423956871033,
0.17687447369098663,
0.3401964008808136,
0.13733333349227905,
0.09211837500333786,
0.41052380204200745,
0.3346934914588928,
0.01600063592195511,
0.3076461255550384... |
https://github.com/huggingface/datasets/issues/1994 | not being able to get wikipedia es language | Hi @dorost1234, I think I can help you a little. I’ve processed some Wikipedia datasets (Spanish inclusive) using the HF/datasets library during recent research.
@lhoestq Could you help me to upload these preprocessed datasets to Huggingface's repositories? To be more precise, I've built datasets from the following ... | Hi
I am trying to run a code with wikipedia of config 20200501.es, getting:
Traceback (most recent call last):
File "run_mlm_t5.py", line 608, in <module>
main()
File "run_mlm_t5.py", line 359, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/dara/libs... | 96 | not being able to get wikipedia es language
Hi
I am trying to run a code with wikipedia of config 20200501.es, getting:
Traceback (most recent call last):
File "run_mlm_t5.py", line 608, in <module>
main()
File "run_mlm_t5.py", line 359, in main
datasets = load_dataset(data_args.dataset_name, data... | [
-0.3421066403388977,
0.04719279333949089,
-0.10626477003097534,
0.06201721727848053,
0.20045500993728638,
0.17774423956871033,
0.17687447369098663,
0.3401964008808136,
0.13733333349227905,
0.09211837500333786,
0.41052380204200745,
0.3346934914588928,
0.01600063592195511,
0.3076461255550384... |
https://github.com/huggingface/datasets/issues/1994 | not being able to get wikipedia es language | Thank you so much @jonatasgrosman , I greatly appreciate your help with them.
Yes, unfortunately I do not have access to a good resource and need it for my
research. I greatly appreciate @lhoestq your help with uploading the processed datasets in huggingface datasets. This would be really helpful for some users l... | Hi
I am trying to run a code with wikipedia of config 20200501.es, getting:
Traceback (most recent call last):
File "run_mlm_t5.py", line 608, in <module>
main()
File "run_mlm_t5.py", line 359, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/dara/libs... | 222 | not being able to get wikipedia es language
Hi
I am trying to run a code with wikipedia of config 20200501.es, getting:
Traceback (most recent call last):
File "run_mlm_t5.py", line 608, in <module>
main()
File "run_mlm_t5.py", line 359, in main
datasets = load_dataset(data_args.dataset_name, data... | [
-0.3421066403388977,
0.04719279333949089,
-0.10626477003097534,
0.06201721727848053,
0.20045500993728638,
0.17774423956871033,
0.17687447369098663,
0.3401964008808136,
0.13733333349227905,
0.09211837500333786,
0.41052380204200745,
0.3346934914588928,
0.01600063592195511,
0.3076461255550384... |
https://github.com/huggingface/datasets/issues/1994 | not being able to get wikipedia es language | Hi @dorost1234, so sorry, but looking at my files here, I figure out that I've preprocessed files using the HF/datasets for all the languages previously listed by me (Portuguese, Russian, French, Japanese, Chinese, and Turkish) except the Spanish (on my tests I've used the [wikicorpus](https://www.cs.upc.edu/~nlp/wikic... | Hi
I am trying to run a code with wikipedia of config 20200501.es, getting:
Traceback (most recent call last):
File "run_mlm_t5.py", line 608, in <module>
main()
File "run_mlm_t5.py", line 359, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/dara/libs... | 86 | not being able to get wikipedia es language
Hi
I am trying to run a code with wikipedia of config 20200501.es, getting:
Traceback (most recent call last):
File "run_mlm_t5.py", line 608, in <module>
main()
File "run_mlm_t5.py", line 359, in main
datasets = load_dataset(data_args.dataset_name, data... | [
-0.3421066403388977,
0.04719279333949089,
-0.10626477003097534,
0.06201721727848053,
0.20045500993728638,
0.17774423956871033,
0.17687447369098663,
0.3401964008808136,
0.13733333349227905,
0.09211837500333786,
0.41052380204200745,
0.3346934914588928,
0.01600063592195511,
0.3076461255550384... |
https://github.com/huggingface/datasets/issues/1994 | not being able to get wikipedia es language | Thanks a lot for the information and help. This would be great to have
these datasets.
@lhoestq <https://github.com/lhoestq> Do you know a way I could get
smaller amount of these data, like 1 GB of each language, to deal with
computational requirements? thanks
| Hi
I am trying to run a code with wikipedia of config 20200501.es, getting:
Traceback (most recent call last):
File "run_mlm_t5.py", line 608, in <module>
main()
File "run_mlm_t5.py", line 359, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/dara/libs... | 189 | not being able to get wikipedia es language
Hi
I am trying to run a code with wikipedia of config 20200501.es, getting:
Traceback (most recent call last):
File "run_mlm_t5.py", line 608, in <module>
main()
File "run_mlm_t5.py", line 359, in main
datasets = load_dataset(data_args.dataset_name, data... | [
-0.3421066403388977,
0.04719279333949089,
-0.10626477003097534,
0.06201721727848053,
0.20045500993728638,
0.17774423956871033,
0.17687447369098663,
0.3401964008808136,
0.13733333349227905,
0.09211837500333786,
0.41052380204200745,
0.3346934914588928,
0.01600063592195511,
0.3076461255550384... |
https://github.com/huggingface/datasets/issues/1994 | not being able to get wikipedia es language | Hi ! As mentioned above, the Spanish configuration has parsing issues from `mwparserfromhell`. I haven't tested with the latest `mwparserfromhell` >=0.6 though. Which version of `mwparserfromhell` are you using?
> @lhoestq Could you help me to upload these preprocessed datasets to Huggingface's repositories? To be ... | Hi
I am trying to run a code with wikipedia of config 20200501.es, getting:
Traceback (most recent call last):
File "run_mlm_t5.py", line 608, in <module>
main()
File "run_mlm_t5.py", line 359, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/dara/libs... | 231 | not being able to get wikipedia es language
Hi
I am trying to run a code with wikipedia of config 20200501.es, getting:
Traceback (most recent call last):
File "run_mlm_t5.py", line 608, in <module>
main()
File "run_mlm_t5.py", line 359, in main
datasets = load_dataset(data_args.dataset_name, data... | [
-0.3421066403388977,
0.04719279333949089,
-0.10626477003097534,
0.06201721727848053,
0.20045500993728638,
0.17774423956871033,
0.17687447369098663,
0.3401964008808136,
0.13733333349227905,
0.09211837500333786,
0.41052380204200745,
0.3346934914588928,
0.01600063592195511,
0.3076461255550384... |
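For reference, a heavily hedged sketch of processing the Spanish dump locally once the parser issue is sorted out; the `DirectRunner` choice is an assumption, and building a full Wikipedia config this way needs a lot of disk, RAM, and time:
```python
import mwparserfromhell
import datasets

# The parsing failures discussed above are suspected to depend on this version.
print(mwparserfromhell.__version__)

# Non-preprocessed Wikipedia configs are built with Apache Beam;
# DirectRunner executes the pipeline on the local machine.
wiki_es = datasets.load_dataset("wikipedia", "20200501.es", beam_runner="DirectRunner")
```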
https://github.com/huggingface/datasets/issues/1994 | not being able to get wikipedia es language | Hi @lhoestq!
> Hi ! As mentioned above the Spanish configuration have parsing issues from mwparserfromhell. I haven't tested with the latest mwparserfromhell >=0.6 though. Which version of mwparserfromhell are you using ?
I'm using the latest mwparserfromhell version (0.6)
> That would be awesome ! Feel free t... | Hi
I am trying to run a code with wikipedia of config 20200501.es, getting:
Traceback (most recent call last):
File "run_mlm_t5.py", line 608, in <module>
main()
File "run_mlm_t5.py", line 359, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/dara/libs... | 76 | not being able to get wikipedia es language
Hi
I am trying to run a code with wikipedia of config 20200501.es, getting:
Traceback (most recent call last):
File "run_mlm_t5.py", line 608, in <module>
main()
File "run_mlm_t5.py", line 359, in main
datasets = load_dataset(data_args.dataset_name, data... | [
-0.3421066403388977,
0.04719279333949089,
-0.10626477003097534,
0.06201721727848053,
0.20045500993728638,
0.17774423956871033,
0.17687447369098663,
0.3401964008808136,
0.13733333349227905,
0.09211837500333786,
0.41052380204200745,
0.3346934914588928,
0.01600063592195511,
0.3076461255550384... |
https://github.com/huggingface/datasets/issues/1994 | not being able to get wikipedia es language | Thank you so much @jonatasgrosman and @lhoestq this would be a great help. I am really thankful to you both and to wonderful Huggingface dataset library allowing us to train models at scale. | Hi
I am trying to run a code with wikipedia of config 20200501.es, getting:
Traceback (most recent call last):
File "run_mlm_t5.py", line 608, in <module>
main()
File "run_mlm_t5.py", line 359, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/dara/libs... | 33 | not being able to get wikipedia es language
Hi
I am trying to run a code with wikipedia of config 20200501.es, getting:
Traceback (most recent call last):
File "run_mlm_t5.py", line 608, in <module>
main()
File "run_mlm_t5.py", line 359, in main
datasets = load_dataset(data_args.dataset_name, data... | [
-0.3421066403388977,
0.04719279333949089,
-0.10626477003097534,
0.06201721727848053,
0.20045500993728638,
0.17774423956871033,
0.17687447369098663,
0.3401964008808136,
0.13733333349227905,
0.09211837500333786,
0.41052380204200745,
0.3346934914588928,
0.01600063592195511,
0.3076461255550384... |
https://github.com/huggingface/datasets/issues/1993 | How to load a dataset with load_from disk and save it again after doing transformations without changing the original? | Hi ! That looks like a bug, can you provide some code so that we can reproduce ?
It's not supposed to update the original dataset | I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object and add new elements and save it in a different place.
When I save the dataset with **save_to_disk**, the original da... | 26 | How to load a dataset with load_from disk and save it again after doing transformations without changing the original?
I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object ... | [
-0.27735987305641174,
-0.14240624010562897,
-0.009712575934827328,
0.2224387377500534,
0.2347681224346161,
0.12864670157432556,
-0.03142669051885605,
-0.08363279700279236,
-0.04452681913971901,
0.04259513318538666,
0.07567697763442993,
0.35082700848579407,
0.06566647440195084,
0.2150925248... |
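A minimal sketch of the flow being discussed: load the saved dataset, transform it, and write the result to a different directory so the original copy is left untouched (the paths and the transformation are placeholders):
```python
from datasets import load_from_disk

original = load_from_disk("/data/my_knowledge_dataset")        # ~3.8 GB, memory-mapped

def add_flag(example):
    # Hypothetical update standing in for the new elements added during training.
    return {"updated": True}

updated = original.map(add_flag)                               # writes new cache files
updated.save_to_disk("/data/my_knowledge_dataset_updated")     # saved to a different place
```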
https://github.com/huggingface/datasets/issues/1993 | How to load a dataset with load_from disk and save it again after doing transformations without changing the original? | Hi, I experimented with RAG.
Actually, you can run the [use_own_knowledge_dataset.py](https://github.com/shamanez/transformers/blob/rag-end-to-end-retrieval/examples/research_projects/rag/use_own_knowledge_dataset.py#L80). In line 80 you can save the dataset object to the disk with save_to_disk. Then in order to comp... | I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object and add new elements and save it in a different place.
When I save the dataset with **save_to_disk**, the original da... | 91 | How to load a dataset with load_from disk and save it again after doing transformations without changing the original?
I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object ... | [
-0.2981737554073334,
-0.08286520093679428,
0.026727501302957535,
0.14939935505390167,
0.3067897856235504,
0.12280038744211197,
-0.05620676651597023,
-0.06147686392068863,
0.04452739283442497,
0.05419393628835678,
-0.06526149064302444,
0.3738250732421875,
0.02156338281929493,
0.101198032498... |
https://github.com/huggingface/datasets/issues/1993 | How to load a dataset with load_from disk and save it again after doing transformations without changing the original? | @lhoestq I also found that cache in tmp directory gets updated after transformations. This is really problematic when using datasets interactively. Let's say we use the shards function to a dataset loaded with csv, atm when we do transformations to shards and combine them it updates the original csv cache. | I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object and add new elements and save it in a different place.
When I save the dataset with **save_to_disk**, the original da... | 50 | How to load a dataset with load_from disk and save it again after doing transformations without changing the original?
I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object ... | [
-0.3775414526462555,
-0.10952422022819519,
0.0013313631061464548,
0.2857869267463684,
0.25839200615882874,
0.13189703226089478,
-0.07236940413713455,
-0.09192659705877304,
-0.04141371324658394,
-0.07030738890171051,
-0.05451168492436409,
0.26995858550071716,
0.023235876113176346,
0.0732331... |
https://github.com/huggingface/datasets/issues/1993 | How to load a dataset with load_from disk and save it again after doing transformations without changing the original? | I plan to update the save_to_disk method in #2025 so I can make sure the new save_to_disk doesn't corrupt your cache files.
But from your last message it looks like save_to_disk isn't the root cause right ? | I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object and add new elements and save it in a different place.
When I save the dataset with **save_to_disk**, the original da... | 37 | How to load a dataset with load_from disk and save it again after doing transformations without changing the original?
I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object ... | [
-0.310508131980896,
-0.06806229054927826,
0.004688459448516369,
0.21374262869358063,
0.30936986207962036,
0.09057606011629105,
-0.03542681783437729,
0.052789073437452316,
-0.07327675074338913,
0.006163087673485279,
0.022279484197497368,
0.3242504596710205,
0.08983757346868515,
0.1039878502... |
https://github.com/huggingface/datasets/issues/1993 | How to load a dataset with load_from disk and save it again after doing transformations without changing the original? | OK, one more thing. When we use save_to_disk there are two files other than .arrow: dataset_info.json and state.json. Sometimes most of the fields in dataset_info.json are null, especially when saving dataset objects. Anyway, I think load_from_disk uses the arrow files mentioned in state.json, right? | I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object and add new elements and save it in a different place.
When I save the dataset with **save_to_disk**, the original da... | 45 | How to load a dataset with load_from disk and save it again after doing transformations without changing the original?
I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object ... | [
-0.3007220923900604,
-0.06920696794986725,
-0.005488774739205837,
0.2683241069316864,
0.18687587976455688,
0.08930063247680664,
-0.09074196964502335,
-0.050216589123010635,
-0.09361164271831512,
-0.029689237475395203,
0.08453046530485153,
0.39229440689086914,
0.09210673719644547,
0.2010051... |
https://github.com/huggingface/datasets/issues/1993 | How to load a dataset with load_from disk and save it again after doing transformations without changing the original? | Perfect. For now, I am loading the dataset from CSV in my interactive process and will wait until you make the PR! | I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object and add new elements and save it in a different place.
When I save the dataset with **save_to_disk**, the original da... | 22 | How to load a dataset with load_from disk and save it again after doing transformations without changing the original?
I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object ... | [
-0.3435496389865875,
-0.18610084056854248,
-0.023776641115546227,
0.19790960848331451,
0.23519615828990936,
0.09037342667579651,
-0.0758548453450203,
-0.08912719041109085,
-0.0024479334242641926,
0.06135242432355881,
0.08186816424131393,
0.28882044553756714,
0.008917439728975296,
0.2742181... |
https://github.com/huggingface/datasets/issues/1992 | `datasets.map` multi processing much slower than single processing | Hi @hwijeen, you might want to look at issues #1796 and #1949. I think it could be something related to the I/O operations being performed. | Hi, thank you for the great library.
I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G.
My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentences, and `map` converts a sentence into a list of integers, using a tok... | 25 | `datasets.map` multi processing much slower than single processing
Hi, thank you for the great library.
I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G.
My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentenc... | [
-0.41315704584121704,
-0.31920647621154785,
-0.08791661262512207,
0.357421338558197,
-0.10274987667798996,
0.020394116640090942,
0.34240543842315674,
0.11579525470733643,
0.057924553751945496,
-0.0040251328609883785,
0.06569883972406387,
0.4142298400402069,
0.18681801855564117,
0.211821734... |
https://github.com/huggingface/datasets/issues/1992 | `datasets.map` multi processing much slower than single processing | I see that many people are experiencing the same issue. Is this problem considered an "official" bug that is worth a closer look? @lhoestq | Hi, thank you for the great library.
I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G.
My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentences, and `map` converts a sentence into a list of integers, using a tok... | 24 | `datasets.map` multi processing much slower than single processing
Hi, thank you for the great library.
I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G.
My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentenc... | [
-0.3866337835788727,
-0.2725811004638672,
-0.07879332453012466,
0.3493689298629761,
-0.10609238594770432,
-0.00516586285084486,
0.37053728103637695,
0.12684975564479828,
0.054780419915914536,
-0.016425825655460358,
0.07629195600748062,
0.4323449432849884,
0.16459403932094574,
0.17756359279... |
https://github.com/huggingface/datasets/issues/1992 | `datasets.map` multi processing much slower than single processing | Yes this is an official bug. On my side I haven't managed to reproduce it but @theo-m has. We'll investigate this ! | Hi, thank you for the great library.
I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G.
My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentences, and `map` converts a sentence into a list of integers, using a tok... | 22 | `datasets.map` multi processing much slower than single processing
Hi, thank you for the great library.
I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G.
My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentenc... | [
-0.4096490144729614,
-0.3040941059589386,
-0.08470116555690765,
0.3398917019367218,
-0.08970021456480026,
0.01273383293300867,
0.35276123881340027,
0.11814624816179276,
0.05821249261498451,
0.009482868015766144,
0.07270276546478271,
0.4454631805419922,
0.16307352483272552,
0.18932944536209... |
https://github.com/huggingface/datasets/issues/1992 | `datasets.map` multi processing much slower than single processing | Thank you for the reply! I would be happy to follow the discussions related to the issue.
If you do not mind, could you also give a little more explanation on my p.s.2? I am having a hard time figuring out why the single processing `map` uses all of my cores.
@lhoestq @theo-m | Hi, thank you for the great library.
I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G.
My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentences, and `map` converts a sentence into a list of integers, using a tok... | 53 | `datasets.map` multi processing much slower than single processing
Hi, thank you for the great library.
I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G.
My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentenc... | [
-0.3987977206707001,
-0.33985647559165955,
-0.09195335954427719,
0.3478132486343384,
-0.10693040490150452,
0.030263762921094894,
0.370674192905426,
0.1026252955198288,
0.06230878829956055,
0.014974081888794899,
0.0955682322382927,
0.4472281336784363,
0.17406834661960602,
0.1985434144735336... |
https://github.com/huggingface/datasets/issues/1992 | `datasets.map` multi processing much slower than single processing | Regarding your ps2: It depends what function you pass to `map`.
For example, fast tokenizers from `transformers` in Rust tokenize texts and parallelize the tokenization over all the cores. | Hi, thank you for the great library.
I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G.
My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentences, and `map` converts a sentence into a list of integers, using a tok... | 29 | `datasets.map` multi processing much slower than single processing
Hi, thank you for the great library.
I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G.
My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentenc... | [
-0.4439503848552704,
-0.276717871427536,
-0.07882984727621078,
0.3751106262207031,
-0.0908033475279808,
-0.005079233553260565,
0.3211756944656372,
0.06434030830860138,
-0.045802049338817596,
-0.009251722134649754,
0.04776545986533165,
0.40863656997680664,
0.21794910728931427,
0.15244498848... |
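A small sketch of the two knobs discussed above: `num_proc` controls how many Python processes `map` spawns, while a fast tokenizer parallelizes internally in Rust, which is why a single-process `map` can still occupy every core. Pinning `TOKENIZERS_PARALLELISM` is an assumption about how to make the comparison fair, not something prescribed in the thread:
```python
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"   # keep the Rust tokenizer single-threaded

from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

single = dataset.map(tokenize, batched=True)                 # one Python process
multi = dataset.map(tokenize, batched=True, num_proc=4)      # four Python processes
```
Timing the two calls on the same machine is the quickest way to tell whether the slowdown reported here reproduces locally.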
https://github.com/huggingface/datasets/issues/1992 | `datasets.map` multi processing much slower than single processing | I am still experiencing this issue with datasets 1.9.0..
Has there been a further investigation?
<img width="442" alt="image" src="https://user-images.githubusercontent.com/29157715/126143387-8b5ddca2-a896-4e18-abf7-4fbf62a48b41.png">
| Hi, thank you for the great library.
I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G.
My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentences, and `map` converts a sentence into a list of integers, using a tok... | 19 | `datasets.map` multi processing much slower than single processing
Hi, thank you for the great library.
I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G.
My data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentenc... | [
-0.4408693015575409,
-0.25723347067832947,
-0.10014680027961731,
0.3424951434135437,
-0.11361966282129288,
0.025486208498477936,
0.3359663486480713,
0.13727445900440216,
0.02380768023431301,
-0.026604047045111656,
0.07284291088581085,
0.4267615079879761,
0.16495084762573242,
0.184482857584... |
https://github.com/huggingface/datasets/issues/1990 | OSError: Memory mapping file failed: Cannot allocate memory | Do you think this is trying to bring the dataset into memory and if I can avoid it to save on memory so it only brings a batch into memory? @lhoestq thank you | Hi,
I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py
```
python run_mlm.py --model_name_or_path bert-base-multi... | 33 | OSError: Memory mapping file failed: Cannot allocate memory
Hi,
I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py... | [
-0.26154208183288574,
-0.03726404160261154,
0.05204009637236595,
0.6045717597007751,
0.45383769273757935,
0.2834008038043976,
0.14564386010169983,
0.27246683835983276,
0.17855094373226166,
0.10566100478172302,
-0.058184292167425156,
0.22738583385944366,
-0.1771182119846344,
-0.145917072892... |
https://github.com/huggingface/datasets/issues/1990 | OSError: Memory mapping file failed: Cannot allocate memory | It's not trying to bring the dataset into memory.
Actually, it's trying to memory-map the dataset file, which is different. Memory mapping allows loading large dataset files without filling up memory.
What dataset did you use to get this error ?
On what OS are you running ? What's your python and pyarrow version ? | Hi,
I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py
```
python run_mlm.py --model_name_or_path bert-base-multi... | 56 | OSError: Memory mapping file failed: Cannot allocate memory
Hi,
I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py... | [
-0.26154208183288574,
-0.03726404160261154,
0.05204009637236595,
0.6045717597007751,
0.45383769273757935,
0.2834008038043976,
0.14564386010169983,
0.27246683835983276,
0.17855094373226166,
0.10566100478172302,
-0.058184292167425156,
0.22738583385944366,
-0.1771182119846344,
-0.145917072892... |
https://github.com/huggingface/datasets/issues/1990 | OSError: Memory mapping file failed: Cannot allocate memory | Dear @lhoestq
thank you so much for coming back to me. Please find info below:
1) Dataset name: I used wikipedia with config 20200501.en
2) I got these pyarrow in my environment:
pyarrow 2.0.0 <pip>
pyarrow 3.0.0 <pip>
3) python versi... | Hi,
I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py
```
python run_mlm.py --model_name_or_path bert-base-multi... | 88 | OSError: Memory mapping file failed: Cannot allocate memory
Hi,
I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py... | [
-0.26154208183288574,
-0.03726404160261154,
0.05204009637236595,
0.6045717597007751,
0.45383769273757935,
0.2834008038043976,
0.14564386010169983,
0.27246683835983276,
0.17855094373226166,
0.10566100478172302,
-0.058184292167425156,
0.22738583385944366,
-0.1771182119846344,
-0.145917072892... |
https://github.com/huggingface/datasets/issues/1990 | OSError: Memory mapping file failed: Cannot allocate memory | I noticed that the error happens when loading the validation dataset.
What value of `data_args.validation_split_percentage` did you use ? | Hi,
I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py
```
python run_mlm.py --model_name_or_path bert-base-multi... | 19 | OSError: Memory mapping file failed: Cannot allocate memory
Hi,
I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py... | [
-0.26154208183288574,
-0.03726404160261154,
0.05204009637236595,
0.6045717597007751,
0.45383769273757935,
0.2834008038043976,
0.14564386010169983,
0.27246683835983276,
0.17855094373226166,
0.10566100478172302,
-0.058184292167425156,
0.22738583385944366,
-0.1771182119846344,
-0.145917072892... |
https://github.com/huggingface/datasets/issues/1990 | OSError: Memory mapping file failed: Cannot allocate memory | Dear @lhoestq
thank you very much for the very sharp observation, indeed, this happens there, I use the default value of 5, I basically plan to subsample a part of the large dataset and choose it as validation set. Do you think this is bringing the data into memory during subsampling? Is there a way I could avoid ... | Hi,
I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py
```
python run_mlm.py --model_name_or_path bert-base-multi... | 133 | OSError: Memory mapping file failed: Cannot allocate memory
Hi,
I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py... | [
-0.26154208183288574,
-0.03726404160261154,
0.05204009637236595,
0.6045717597007751,
0.45383769273757935,
0.2834008038043976,
0.14564386010169983,
0.27246683835983276,
0.17855094373226166,
0.10566100478172302,
-0.058184292167425156,
0.22738583385944366,
-0.1771182119846344,
-0.145917072892... |
https://github.com/huggingface/datasets/issues/1990 | OSError: Memory mapping file failed: Cannot allocate memory | Methods like `dataset.shard`, `dataset.train_test_split`, `dataset.select` etc. don't bring the dataset in memory.
The only time when samples are brought to memory is when you access elements via `dataset[0]`, `dataset[:10]`, `dataset["my_column_names"]`.
But it's possible that trying to use those methods to build... | Hi,
I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py
```
python run_mlm.py --model_name_or_path bert-base-multi... | 121 | OSError: Memory mapping file failed: Cannot allocate memory
Hi,
I am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py... | [
-0.26154208183288574,
-0.03726404160261154,
0.05204009637236595,
0.6045717597007751,
0.45383769273757935,
0.2834008038043976,
0.14564386010169983,
0.27246683835983276,
0.17855094373226166,
0.10566100478172302,
-0.058184292167425156,
0.22738583385944366,
-0.1771182119846344,
-0.145917072892... |
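A short sketch of the distinction made in the comment above, using a 5% validation split as in the `validation_split_percentage` scenario from this thread; only the final indexing call materializes rows in RAM:
```python
from datasets import load_dataset

dataset = load_dataset("wikipedia", "20200501.en", split="train")

# These only record which rows to read; they do not load the rows themselves.
val_size = len(dataset) * 5 // 100
validation = dataset.select(range(val_size))
train = dataset.select(range(val_size, len(dataset)))
piece = dataset.shard(num_shards=100, index=0)

first_examples = validation[:2]   # only now are these two rows brought into memory
```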
https://github.com/huggingface/datasets/issues/1989 | Question/problem with dataset labels | It seems that I get parsing errors for various fields in my data. For example now I get this:
```
File "../../../models/tr-4.3.2/run_puppets.py", line 523, in <module>
main()
File "../../../models/tr-4.3.2/run_puppets.py", line 249, in main
datasets = load_dataset("csv", data_files=data_files)
File ... | Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
```
File "../../../models/tr-4.3.2/run_... | 128 | Question/problem with dataset labels
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
`... | [
0.17891450226306915,
-0.05732319876551628,
0.005179460160434246,
0.1343672275543213,
0.4113537073135376,
0.3502123951911926,
0.6419944763183594,
0.16370762884616852,
-0.16131749749183655,
0.07234954833984375,
0.17885442078113556,
0.06387543678283691,
-0.060773108154535294,
0.03052571415901... |
https://github.com/huggingface/datasets/issues/1989 | Question/problem with dataset labels | Not sure if this helps, this is how I load my files (as in the sample scripts on transformers):
```
if data_args.train_file.endswith(".csv"):
# Loading a dataset from local csv files
datasets = load_dataset("csv", data_files=data_files)
``` | Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
```
File "../../../models/tr-4.3.2/run_... | 35 | Question/problem with dataset labels
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
`... | [
0.17891450226306915,
-0.05732319876551628,
0.005179460160434246,
0.1343672275543213,
0.4113537073135376,
0.3502123951911926,
0.6419944763183594,
0.16370762884616852,
-0.16131749749183655,
0.07234954833984375,
0.17885442078113556,
0.06387543678283691,
-0.060773108154535294,
0.03052571415901... |
https://github.com/huggingface/datasets/issues/1989 | Question/problem with dataset labels | Since this worked out of the box in a few examples before, I wonder if it's some quoting issue or something else. | Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
```
File "../../../models/tr-4.3.2/run_... | 22 | Question/problem with dataset labels
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
`... | [
0.17891450226306915,
-0.05732319876551628,
0.005179460160434246,
0.1343672275543213,
0.4113537073135376,
0.3502123951911926,
0.6419944763183594,
0.16370762884616852,
-0.16131749749183655,
0.07234954833984375,
0.17885442078113556,
0.06387543678283691,
-0.060773108154535294,
0.03052571415901... |
https://github.com/huggingface/datasets/issues/1989 | Question/problem with dataset labels | Hi @ioana-blue,
Can you share a sample from your .csv? A dummy where you get this error will also help.
I tried this csv:
```csv
feature,label
1.2,not nurse
1.3,nurse
1.5,surgeon
```
and the following snippet:
```python
from datasets import load_dataset
d = load_dataset("csv",data_files=['test.csv'])
... | Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
```
File "../../../models/tr-4.3.2/run_... | 95 | Question/problem with dataset labels
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
`... | [
0.17891450226306915,
-0.05732319876551628,
0.005179460160434246,
0.1343672275543213,
0.4113537073135376,
0.3502123951911926,
0.6419944763183594,
0.16370762884616852,
-0.16131749749183655,
0.07234954833984375,
0.17885442078113556,
0.06387543678283691,
-0.060773108154535294,
0.03052571415901... |
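When a plain `load_dataset("csv", ...)` call trips over fields that contain commas or quotes, the usual suspects are the delimiter and quoting settings. A hedged sketch, assuming the csv builder forwards these pandas-style parsing options (file names are placeholders):
```python
import csv
from datasets import load_dataset

data_files = {"train": "train.csv", "validation": "dev.csv"}

loaded = load_dataset(
    "csv",
    data_files=data_files,
    delimiter=",",
    quotechar='"',
    quoting=csv.QUOTE_MINIMAL,   # fields containing commas must be wrapped in quotes
)
print(loaded["train"].features)
```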
https://github.com/huggingface/datasets/issues/1989 | Question/problem with dataset labels | I've had versions where it worked fine. For this dataset, I had all kinds of parsing issues that I couldn't understand. What I ended up doing was stripping all the columns that I didn't need and also making the label 0/1.
I think one line that may have caused a problem was the csv version of this:
```crawl-data/CC-MAIN-... | Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
```
File "../../../models/tr-4.3.2/run_... | 197 | Question/problem with dataset labels
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
`... | [
0.17891450226306915, -0.05732319876551628, 0.005179460160434246, 0.1343672275543213, 0.4113537073135376, 0.3502123951911926, 0.6419944763183594, 0.16370762884616852, -0.16131749749183655, 0.07234954833984375, 0.17885442078113556, 0.06387543678283691, -0.060773108154535294, 0.03052571415901... |
https://github.com/huggingface/datasets/issues/1989 | Question/problem with dataset labels | Hi @ioana-blue,
What is the separator you're using for the csv? I see there are only two commas in the given line, but they don't seem like appropriate points. Also, is this a string part of one line, or an entire line? There should also be a label, right? | Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
```
File "../../../models/tr-4.3.2/run_... | 49 | Question/problem with dataset labels
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
`... | [
0.17891450226306915, -0.05732319876551628, 0.005179460160434246, 0.1343672275543213, 0.4113537073135376, 0.3502123951911926, 0.6419944763183594, 0.16370762884616852, -0.16131749749183655, 0.07234954833984375, 0.17885442078113556, 0.06387543678283691, -0.060773108154535294, 0.03052571415901... |
https://github.com/huggingface/datasets/issues/1989 | Question/problem with dataset labels | Sorry for the confusion, the sample above was from a tsv that was used to derive the csv. Let me construct the csv again (I had removed it).
This is the line in the csv - this is the whole line:
```crawl-data/CC-MAIN-2017-47/segments/1510934806225.78/wet/CC-MAIN-20171120203833-20171120223833-00571.warc.wet.gz,Rose ... | Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
```
File "../../../models/tr-4.3.2/run_... | 139 | Question/problem with dataset labels
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
`... | [
0.17891450226306915, -0.05732319876551628, 0.005179460160434246, 0.1343672275543213, 0.4113537073135376, 0.3502123951911926, 0.6419944763183594, 0.16370762884616852, -0.16131749749183655, 0.07234954833984375, 0.17885442078113556, 0.06387543678283691, -0.060773108154535294, 0.03052571415901... |
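One way to rule out separator trouble in a line like the one quoted above is to regenerate the csv with explicit quoting, so commas inside the text field can never be read as delimiters. This is only an illustrative sketch: the file name, column names, and sample rows below are made up, not the ones from the issue.

```python
import csv
from datasets import load_dataset

# Hypothetical rows in the same shape as the example line: (source link, free text, label).
rows = [
    ("crawl-data/example.warc.wet.gz", "Rose worked as a nurse, then changed careers.", "nurse"),
    ("crawl-data/example2.warc.wet.gz", "An unrelated sentence with, several, commas.", "not nurse"),
]

with open("test_quoted.csv", "w", newline="") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)  # quote every field, including those containing commas
    writer.writerow(["link", "text", "label"])
    writer.writerows(rows)

# The csv loader (pandas under the hood) honours standard quoting,
# so the embedded commas no longer split the text field into extra columns.
d = load_dataset("csv", data_files=["test_quoted.csv"])
print(d["train"][0])
```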
https://github.com/huggingface/datasets/issues/1989 | Question/problem with dataset labels | Hi,
Just in case you want to use tsv directly, you can use the separator argument while loading the dataset.
```python
d = load_dataset("csv",data_files=['test.csv'],sep="\t")
```
Additionally, I don't face the issues with the following csv (same as the one you provided):
```sh
link1,text1,info1,info2,info3,... | Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
```
File "../../../models/tr-4.3.2/run_... | 292 | Question/problem with dataset labels
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
`... | [
0.17891450226306915, -0.05732319876551628, 0.005179460160434246, 0.1343672275543213, 0.4113537073135376, 0.3502123951911926, 0.6419944763183594, 0.16370762884616852, -0.16131749749183655, 0.07234954833984375, 0.17885442078113556, 0.06387543678283691, -0.060773108154535294, 0.03052571415901... |
https://github.com/huggingface/datasets/issues/1989 | Question/problem with dataset labels | thanks for the tip. very strange :/ I'll check my datasets version as well.
I will have more similar experiments soon so I'll let you know if I manage to get rid of this. | Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
```
File "../../../models/tr-4.3.2/run_... | 34 | Question/problem with dataset labels
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
`... | [
0.17891450226306915, -0.05732319876551628, 0.005179460160434246, 0.1343672275543213, 0.4113537073135376, 0.3502123951911926, 0.6419944763183594, 0.16370762884616852, -0.16131749749183655, 0.07234954833984375, 0.17885442078113556, 0.06387543678283691, -0.060773108154535294, 0.03052571415901... |
https://github.com/huggingface/datasets/issues/1989 | Question/problem with dataset labels | No problem at all. I thought I'd be able to solve this but I'm unable to replicate the issue :/ | Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
```
File "../../../models/tr-4.3.2/run_... | 20 | Question/problem with dataset labels
Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
`... | [
0.17891450226306915, -0.05732319876551628, 0.005179460160434246, 0.1343672275543213, 0.4113537073135376, 0.3502123951911926, 0.6419944763183594, 0.16370762884616852, -0.16131749749183655, 0.07234954833984375, 0.17885442078113556, 0.06387543678283691, -0.060773108154535294, 0.03052571415901... |
https://github.com/huggingface/datasets/issues/1988 | Readme.md is misleading about kinds of datasets? | Hi ! Yes it's possible to use image data. There are already a few of them available (MNIST, CIFAR..) | Hi!
At the README.MD, you say: "efficient data pre-processing: simple, fast and reproducible data pre-processing for the above public datasets as well as your own local datasets in CSV/JSON/text. "
But here:
https://github.com/huggingface/datasets/blob/master/templates/new_dataset_script.py#L82-L117
You menti... | 19 | Readme.md is misleading about kinds of datasets?
Hi!
At the README.MD, you say: "efficient data pre-processing: simple, fast and reproducible data pre-processing for the above public datasets as well as your own local datasets in CSV/JSON/text. "
But here:
https://github.com/huggingface/datasets/blob/master/te... | [
-0.10489434748888016, -0.4077847898006439, -0.12521077692508698, 0.35286909341812134, 0.20643307268619537, 0.07730388641357422, 0.2821688652038574, -0.01321975514292717, 0.10144545882940292, -0.09215116500854492, -0.29715466499328613, -0.07670380920171738, -0.05919022485613823, 0.541465640... |
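To make the comment above concrete, image datasets are loaded with the same `load_dataset` call as text ones. A minimal sketch using MNIST; note that how the image column is typed (raw pixel arrays vs. a dedicated image feature) depends on the `datasets` version:

```python
from datasets import load_dataset

# MNIST is one of the image datasets mentioned in the comment.
mnist = load_dataset("mnist", split="train")

print(mnist)                # number of rows and column names
print(mnist.features)       # how the image and label columns are typed
print(mnist[0]["label"])    # label of a single example
```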
https://github.com/huggingface/datasets/issues/1983 | The size of CoNLL-2003 is not consistant with the official release. | Hi,
if you inspect the raw data, you can find there are 946 occurrences of `-DOCSTART- -X- -X- O` in the train split and `14041 + 946 = 14987`, which is exactly the number of sentences the authors report. `-DOCSTART-` is a special line that acts as a boundary between two different documents and is filtered out in ou... | Thanks for the dataset sharing! But when I use conll-2003, I meet some questions.
The statistics of conll-2003 in this repo is :
\#train 14041 \#dev 3250 \#test 3453
While the official statistics is:
\#train 14987 \#dev 3466 \#test 3684
Wish for your reply~ | 78 | The size of CoNLL-2003 is not consistant with the official release.
Thanks for the dataset sharing! But when I use conll-2003, I meet some questions.
The statistics of conll-2003 in this repo is :
\#train 14041 \#dev 3250 \#test 3453
While the official statistics is:
\#train 14987 \#dev 3466 \#test 3684
Wish ... | [
0.1660705804824829, -0.34415170550346375, -0.0265198964625597, 0.3718196749687195, -0.3747006058692932, -0.002087885281071067, 0.061371371150016785, -0.07089857012033463, -0.9394964575767517, -0.006283103488385677, 0.1252354234457016, 0.15680809319019318, 0.08135801553726196, 0.00088947935... |
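The arithmetic in the comment above (14041 + 946 = 14987) can be checked against a raw CoNLL-2003 file. A sketch under the assumption that you have a local copy of the official training file (the path below is hypothetical); it counts `-DOCSTART-` markers and blank-line-separated units:

```python
# Hypothetical path to a local copy of the official CoNLL-2003 training file.
path = "eng.train"

docstarts, units, in_unit = 0, 0, False
with open(path, encoding="utf-8") as f:
    for line in f:
        stripped = line.strip()
        if stripped.startswith("-DOCSTART-"):
            docstarts += 1
        if stripped:
            in_unit = True
        elif in_unit:          # a blank line closes a sentence or a -DOCSTART- block
            units += 1
            in_unit = False
if in_unit:                    # handle a file that does not end with a blank line
    units += 1

# According to the thread, this should give 946 docstarts and 14987 units,
# i.e. 14987 - 946 = 14041 sentences once the -DOCSTART- lines are excluded.
print(docstarts, units, units - docstarts)
```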
https://github.com/huggingface/datasets/issues/1983 | The size of CoNLL-2003 is not consistant with the official release. | We should mention in the Conll2003 dataset card that these lines have been removed indeed.
If some users are interested in using these lines (maybe to recombine documents ?) then we can add a parameter to the conll2003 dataset to include them.
But IMO the default config should stay the current one (without the `-... | Thanks for the dataset sharing! But when I use conll-2003, I meet some questions.
The statistics of conll-2003 in this repo is :
\#train 14041 \#dev 3250 \#test 3453
While the official statistics is:
\#train 14987 \#dev 3466 \#test 3684
Wish for your reply~ | 73 | The size of CoNLL-2003 is not consistant with the official release.
Thanks for the dataset sharing! But when I use conll-2003, I meet some questions.
The statistics of conll-2003 in this repo is :
\#train 14041 \#dev 3250 \#test 3453
While the official statistics is:
\#train 14987 \#dev 3466 \#test 3684
Wish ... | [
-0.07905146479606628, 0.005305801052600145, 0.022709064185619354, 0.17673128843307495, -0.17504696547985077, 0.009194070473313332, 0.1926340013742447, 0.15998324751853943, -1.0088422298431396, 0.0919717326760292, 0.11612185090780258, 0.03976002335548401, -0.01693623699247837, 0.11876183003... |
https://github.com/huggingface/datasets/issues/1983 | The size of CoNLL-2003 is not consistant with the official release. | @lhoestq Yes, I agree adding a small note should be sufficient.
Currently, NLTK's `ConllCorpusReader` ignores the `-DOCSTART-` lines so I think it's ok if we do the same. If there is an interest in the future to use these lines, then we can include them. | Thanks for the dataset sharing! But when I use conll-2003, I meet some questions.
The statistics of conll-2003 in this repo is :
\#train 14041 \#dev 3250 \#test 3453
While the official statistics is:
\#train 14987 \#dev 3466 \#test 3684
Wish for your reply~ | 45 | The size of CoNLL-2003 is not consistant with the official release.
Thanks for the dataset sharing! But when I use conll-2003, I meet some questions.
The statistics of conll-2003 in this repo is :
\#train 14041 \#dev 3250 \#test 3453
While the official statistics is:
\#train 14987 \#dev 3466 \#test 3684
Wish ... | [
0.13010796904563904, 0.09629001468420029, 0.03945086896419525, 0.053366899490356445, -0.2169310450553894, 0.007520818617194891, 0.15427373349666595, -0.00016209053865168244, -0.9731166362762451, 0.06726887822151184, 0.2056126743555069, 0.07014179229736328, -0.05132923647761345, -0.08427174... |
https://github.com/huggingface/datasets/issues/1983 | The size of CoNLL-2003 is not consistant with the official release. | I added a mention of this in conll2003's dataset card:
https://github.com/huggingface/datasets/blob/fc9796920da88486c3b97690969aabf03d6b4088/datasets/conll2003/README.md#conll2003
Edit: just saw your PR @mariosasko (noticed it too late ^^)
Let me take a look at it :) | Thanks for the dataset sharing! But when I use conll-2003, I meet some questions.
The statistics of conll-2003 in this repo is :
\#train 14041 \#dev 3250 \#test 3453
While the official statistics is:
\#train 14987 \#dev 3466 \#test 3684
Wish for your reply~ | 30 | The size of CoNLL-2003 is not consistant with the official release.
Thanks for the dataset sharing! But when I use conll-2003, I meet some questions.
The statistics of conll-2003 in this repo is :
\#train 14041 \#dev 3250 \#test 3453
While the official statistics is:
\#train 14987 \#dev 3466 \#test 3684
Wish ... | [
-0.11777999252080917, -0.149089977145195, -0.12398058921098709, 0.406358927488327, -0.14216803014278412, -0.18602581322193146, 0.2695666551589966, 0.010811260901391506, -0.9765855073928833, 0.12976138293743134, 0.031708974391222, -0.04653014615178108, 0.029542744159698486, 0.17403796315193... |
https://github.com/huggingface/datasets/issues/1981 | wmt datasets fail to load | yes, of course, I reverted to the version before that and it works ;)
but since a new release was just made you will probably need to make a hotfix.
and add the wmt to the tests? | on master:
```
python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")'
Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/de-en/1.0.0/43e717d978d226150... | 37 | wmt datasets fail to load
on master:
```
python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")'
Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/d... | [
-0.3290725350379944, -0.0584937259554863, -0.010408233851194382, 0.5395736694335938, 0.3104315400123596, 0.004104514606297016, 0.2097795009613037, 0.08727050572633743, 0.3129350244998932, 0.10788669437170029, -0.02249966561794281, -0.12566150724887848, -0.2969319224357605, 0.18879434466362... |
https://github.com/huggingface/datasets/issues/1981 | wmt datasets fail to load | @stas00 it is fixed. @lhoestq are you releasing the hot fix or would you prefer me to do it? | on master:
```
python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")'
Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/de-en/1.0.0/43e717d978d226150... | 19 | wmt datasets fail to load
on master:
```
python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")'
Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/d... | [
-0.3290725350379944, -0.0584937259554863, -0.010408233851194382, 0.5395736694335938, 0.3104315400123596, 0.004104514606297016, 0.2097795009613037, 0.08727050572633743, 0.3129350244998932, 0.10788669437170029, -0.02249966561794281, -0.12566150724887848, -0.2969319224357605, 0.18879434466362... |