| html_url (string, 48–51 chars) | title (string, 5–268 chars) | comments (string, 70–51.8k chars) | body (string, 0–29.8k chars) | comment_length (int64, 16–1.52k) | text (string, 164–54.1k chars) | embeddings (list of float) |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/1981 | wmt datasets fail to load | I'll do a patch release for this issue early tomorrow.
And yes, we absolutely need tests for the wmt datasets: the missing tests for wmt are an artifact from the early development of the lib, but now we have tools to automatically generate the dummy data used for tests :) | on master:
```
python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")'
Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/de-en/1.0.0/43e717d978d226150... | 50 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1981 | wmt datasets fail to load | still facing the same issue or similar:
from datasets import load_dataset
wtm14_test = load_dataset('wmt14',"de-en",cache_dir='./datasets')
~.cache\huggingface\modules\datasets_modules\datasets\wmt14\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\wmt_utils.py in _split_generators(self, dl_manager... | on master:
```
python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")'
Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/de-en/1.0.0/43e717d978d226150... | 52 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1977 | ModuleNotFoundError: No module named 'apache_beam' for wikipedia datasets | I sometimes also get this error with other languages of the same dataset:
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 322, in read_table
stream = stream_from(filename)
File "pyarrow/io.pxi", line 782, in pyarrow.lib.memory_map
... | Hi
I am trying to run the run_mlm.py code [1] of huggingface with the following "wikipedia" / "20200501.aa" dataset:
`python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.aa --do_train --do_eval --output_dir /tmp/test-mlm --max_seq_l... | 55 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1977 | ModuleNotFoundError: No module named 'apache_beam' for wikipedia datasets | Hi ! Thanks for reporting
Some wikipedia configurations do require the user to have `apache_beam` in order to parse the wikimedia data.
On the other hand regarding your second issue
```
OSError: Memory mapping file failed: Cannot allocate memory
```
I've never experienced this, can you open a new issue for this... | Hi
I am trying to run the run_mlm.py code [1] of huggingface with the following "wikipedia" / "20200501.aa" dataset:
`python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.aa --do_train --do_eval --output_dir /tmp/test-mlm --max_seq_l... | 84 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
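To make the fix above concrete, here is a minimal sketch of loading a Beam-processed wikipedia config once the missing dependency is installed. The `DirectRunner` choice and the extra `mwparserfromhell` dependency are assumptions based on the wikipedia script's usual requirements, not taken verbatim from this thread:
```python
# pip install apache_beam mwparserfromhell   (assumed prerequisites)
from datasets import load_dataset

# Beam-based configs need a runner; DirectRunner processes the dump
# locally on the current machine.
wiki = load_dataset("wikipedia", "20200501.aa", beam_runner="DirectRunner")
```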
https://github.com/huggingface/datasets/issues/1973 | Question: what gets stored in the datasets cache and why is it so huge? | Echo'ing this observation: I have a few datasets in the neighborhood of 2GB CSVs uncompressed, and when I use something like `Dataset.save_to_disk()` it's ~18GB on disk.
If this is unexpected behavior, I'd be happy to help run debugging as needed. | I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is stored in there and why is it so large? I don't think I noticed this problem before and it seems to be related to the new version of the datasets library. Any in... | 40 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1973 | Question: what gets stored in the datasets cache and why is it so huge? | Thanks @ioana-blue for pointing out this problem (and thanks also @justin-yan). You are right that the current implementation of the datasets caching files takes too much memory. We are definitely changing this and optimizing the defaults, so that the file sizes are considerably reduced. I will come back to you as soon as t... | I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is stored in there and why is it so large? I don't think I noticed this problem before and it seems to be related to the new version of the datasets library. Any in... | 55 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1973 | Question: what gets stored in the datasets cache and why is it so huge? | Thank you! Also I noticed that the files don't seem to be cleaned after the jobs finish. Last night I had only 3 jobs running, but the cache was still at 180GB. | I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is stored in there and why is it so large? I don't think I noticed this problem before and it seems to be related to the new version of the datasets library. Any in... | 32 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1973 | Question: what gets stored in the datasets cache and why is it so huge? | Hi ! As Albert said they can sometimes take more space than expected, but we'll fix that soon.
Also, to give more details about caching: computations on a dataset are cached by default so that you don't have to recompute them the next time you run them.
So by default the cache files stay on your disk when your job ... | I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is stored in there and why is it so large? I don't think I noticed this problem before and it seems to be related to the new version of the datasets library. Any in... | 95 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
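Since cache files persist after a job ends, they have to be removed explicitly. A minimal sketch, assuming the dataset object is still in scope at the end of the job (`snli` is just an illustrative dataset):
```python
from datasets import load_dataset

dataset = load_dataset("snli", split="train")
dataset = dataset.map(lambda x: x)  # any map call writes a cache file

# Delete the cache files created for this dataset; returns the number
# of files that were removed.
num_removed = dataset.cleanup_cache_files()
```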
https://github.com/huggingface/datasets/issues/1973 | Question: what gets stored in the datasets cache and why is it so huge? | Hi @ioana-blue, we have optimized Datasets' disk usage in the latest release v1.5.
Feel free to update your Datasets version
```shell
pip install -U datasets
```
and see if it better suits your needs. | I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is stored in there and why is it so large? I don't think I noticed this problem before and it seems to be related to the new version of the datasets library. Any in... | 34 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1965 | Can we parallelized the add_faiss_index process over dataset shards ? | Hi !
As far as I know not all faiss indexes can be computed in parallel and then merged.
For example [here](https://github.com/facebookresearch/faiss/wiki/Special-operations-on-indexes#splitting-and-merging-indexes) it is mentioned that only IndexIVF indexes can be merged.
Moreover faiss already works using multith... | I am thinking of making the **add_faiss_index** process faster. What if we run the add_faiss_index process on separate dataset shards and then combine them before (dataset.concatenate) saving the faiss.index file ?
I feel theoretically this will reduce the accuracy of retrieval since it affects the indexing process... | 79 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
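To illustrate the merging constraint mentioned in the comment, here is a rough sketch of merging two IndexIVF shards that share one trained quantizer; the sizes and data are made up, and `clone_index` is used so both shards start from identical centroids:
```python
import numpy as np
import faiss

d, nlist = 64, 100                              # illustrative sizes
xt = np.random.rand(5000, d).astype("float32")  # training vectors

shard_a = faiss.IndexIVFFlat(faiss.IndexFlatL2(d), d, nlist)
shard_a.train(xt)
shard_b = faiss.clone_index(shard_a)            # same trained centroids

shard_a.add(np.random.rand(1000, d).astype("float32"))
shard_b.add(np.random.rand(1000, d).astype("float32"))

# Only IndexIVF-style indexes support this; ids from shard_b are shifted
# by the second argument to avoid collisions.
shard_a.merge_from(shard_b, shard_a.ntotal)
```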
https://github.com/huggingface/datasets/issues/1965 | Can we parallelized the add_faiss_index process over dataset shards ? | Actually, you are right. I also had the same idea. I am trying this in the context of end-to-end retrieval training in RAG. So far I have parallelized the embedding re-computation within the training loop by using dataset shards.
Then I was thinking: can I calculate the indexes for each shard and combine them... | I am thinking of making the **add_faiss_index** process faster. What if we run the add_faiss_index process on separate dataset shards and then combine them before (dataset.concatenate) saving the faiss.index file ?
I feel theoretically this will reduce the accuracy of retrieval since it affects the indexing process... | 60 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1965 | Can we parallelized the add_faiss_index process over dataset shards ? | @lhoestq As you mentioned faiss is already using multiprocessing. I tried to do the add_index with faiss for a dataset object inside a RAY actor and the process became very slow... in fact it takes so much time. It is because a ray actor comes with a single CPU core unless we assign it more. I also tried assigning mor... | I am thinking of making the **add_faiss_index** process faster. What if we run the add_faiss_index process on separate dataset shards and then combine them before (dataset.concatenate) saving the faiss.index file ?
I feel theoretically this will reduce the accuracy of retrieval since it affects the indexing process... | 72 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1964 | Datasets.py function load_dataset does not match squad dataset | Hi !
To fix 1, can you try to run this code?
```python
from datasets import load_dataset
load_dataset("squad", download_mode="force_redownload")
```
Maybe the file you downloaded was corrupted; in this case, redownloading it this way should fix your issue 1.
Regarding your 2nd point, you're right that loading... | ### 1 When I try to train lxmert and follow the code in the README with --dataset_name:
```shell
python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_len... | 170 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1964 | Datasets.py function load_dataset does not match squad dataset | Thanks for quickly answering!
### 1 I tried the first way, but it seems not to work
```
Traceback (most recent call last):
File "examples/question-answering/run_qa.py", line 503, in <module>
main()
File "examples/question-answering/run_qa.py", line 218, in main
datasets = load_dataset(data_args.dataset_name, d... | ### 1 When I try to train lxmert and follow the code in the README with --dataset_name:
```shell
python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_len... | 434 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1964 | Datasets.py function load_dataset does not match squad dataset | ## I have fixed it, @lhoestq
### the first section changed as you said, and I added ["id"]
```python
def process_squad(examples):
"""
Process a dataset in the squad format with columns "title" and "paragraphs"
to return the dataset with columns "context", "question" and "answers".
"""
# print(exa... | ### 1 When I try to train lxmert and follow the code in the README with --dataset_name:
```shell
python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_len... | 569 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1964 | Datasets.py function load_dataset does not match squad dataset | I'm glad you managed to fix run_qa.py for your case :)
Regarding the checksum error, I'm not able to reproduce on my side.
This errors says that the downloaded file doesn't match the expected file.
Could you try running this and let me know if you get the same output as me ?
```python
from datasets.utils.info_... | ### 1 When I try to train lxmert and follow the code in the README with --dataset_name:
```shell
python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_len... | 69 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1964 | Datasets.py function load_dataset does not match squad dataset | I ran the code, and it shows the following:
```
>>> from datasets.utils.info_utils import get_size_checksum_dict
>>> from datasets import cached_path
>>> get_size_checksum_dict(cached_path("https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json"))
Downloading: 30.3MB [04:13, 120kB/s]
{'num_bytes': 30288272, 'ch... | ### 1 When I try to train lxmert and follow the code in the README with --dataset_name:
```shell
python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_len... | 29 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1964 | Datasets.py function load_dataset does not match squad dataset | Alright ! So in this case redownloading the file with `download_mode="force_redownload"` should fix it. Can you try using `download_mode="force_redownload"` again ?
Not sure why it didn't work for you the first time though :/ | ### 1 When I try to train lxmert and follow the code in the README with --dataset_name:
```shell
python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_len... | 34 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1963 | bug in SNLI dataset | Hi ! The labels -1 correspond to the examples without gold labels in the original snli dataset.
Feel free to remove these examples if you don't need them by using
```python
data = data.filter(lambda x: x["label"] != -1)
``` | Hi
There is a label of -1 in the train set of the SNLI dataset; please find the code below:
```
import numpy as np
import datasets
data = datasets.load_dataset("snli")["train"]
labels = []
for d in data:
labels.append(d["label"])
print(np.unique(labels))
```
and results:
`[-1 0 1 2]`
version of datas... | 39 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1959 | Bug in skip_rows argument of load_dataset function ? | Hi,
try `skiprows` instead. This part is not properly documented in the docs it seems.
@lhoestq I'll fix this as part of a bigger PR that fixes typos in the docs. | Hello everyone,
I'm quite new to Git so sorry in advance if I'm breaking some ground rules of issues posting... :/
I tried to use the load_dataset function, from Huggingface datasets library, on a csv file using the skip_rows argument described on Huggingface page to skip the first row containing column names
`t... | 31 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
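A sketch of the corrected call, assuming a local `data.csv` whose first row should be skipped; the point is only that the keyword is `skiprows`, not `skip_rows`:
```python
from datasets import load_dataset

# "data.csv" is a hypothetical file; skiprows is forwarded to the
# underlying CSV reader.
dataset = load_dataset("csv", data_files="data.csv", skiprows=1)
```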
https://github.com/huggingface/datasets/issues/1956 | [distributed env] potentially unsafe parallel execution | You can pass the same `experiment_id` for all the metrics of the same group, and use another `experiment_id` for the other groups.
Maybe we can add an environment variable that sets the default value for `experiment_id` ? What do you think ? | ```
metric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank)
```
presumes that there is only one set of parallel processes running - and will intermittently fail if you have multiple sets running as they will surely overwrite each other. Similar to https://github.com/huggingface/datasets/issu... | 42 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
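A sketch of the suggested workaround: each group of processes belonging to the same evaluation passes its own `experiment_id` (the `"group_a"`/`"group_b"` labels are arbitrary), so concurrent evaluations write to separate cache files:
```python
from datasets import load_metric

rank, num_process = 0, 1  # placeholders: a launcher would set these per process

metric_a = load_metric("glue", "mrpc", num_process=num_process,
                       process_id=rank, experiment_id="group_a")
metric_b = load_metric("glue", "mrpc", num_process=num_process,
                       process_id=rank, experiment_id="group_b")
```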
https://github.com/huggingface/datasets/issues/1956 | [distributed env] potentially unsafe parallel execution | Ah, you're absolutely correct, @lhoestq - it's exactly the equivalent of the shared secret. Thank you! | ```
metric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank)
```
presumes that there is only one set of parallel processes running - and will intermittently fail if you have multiple sets running as they will surely overwrite each other. Similar to https://github.com/huggingface/datasets/issu... | 16 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1954 | add a new column | Hi
not sure how to change the label after creation, but this is an issue, not a dataset request. thanks | Hi
I'd need to add a new column to the dataset, I was wondering how this can be done? thanks
@lhoestq | 18 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1954 | add a new column | Hi ! Currently you have to use `map` . You can see an example of how to do it in this comment: https://github.com/huggingface/datasets/issues/853#issuecomment-727872188
In the future we'll add support for a more native way of adding a new column ;) | Hi
I'd need to add a new column to the dataset, I was wondering how this can be done? thanks
@lhoestq | 40 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
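The linked comment boils down to routing the new values through `map`; a minimal sketch, assuming the new values already sit in a Python list with one entry per example:
```python
from datasets import load_dataset

dataset = load_dataset("snli", split="train")
new_column = ["foo"] * len(dataset)  # illustrative values

# with_indices=True lets each example look up its value in the list
dataset = dataset.map(
    lambda example, idx: {"new_column": new_column[idx]},
    with_indices=True,
)
```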
https://github.com/huggingface/datasets/issues/1949 | Enable Fast Filtering using Arrow Dataset | Hi @gchhablani :)
Thanks for proposing your help !
I'll be doing a refactor of some parts related to filtering in the scope of https://github.com/huggingface/datasets/issues/1877
So I would first wait for this refactor to be done before working on the filtering. In particular because I plan to make things simpler ... | Hi @lhoestq,
As mentioned in Issue #1796, I would love to work on enabling fast filtering/mapping. Can you please share the expectations? It would be great if you could point me to the relevant methods/files involved. Or the docs or maybe an overview of `arrow_dataset.py`. I only ask this because I am having trouble... | 113 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1949 | Enable Fast Filtering using Arrow Dataset | Sure! I don't mind waiting. I'll check the refactor and try to understand what you're trying to do :) | Hi @lhoestq,
As mentioned in Issue #1796, I would love to work on enabling fast filtering/mapping. Can you please share the expectations? It would be great if you could point me to the relevant methods/files involved. Or the docs or maybe an overview of `arrow_dataset.py`. I only ask this because I am having trouble... | 19 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1948 | dataset loading logger level | These warnings are shown when there's a call to `.map`, to tell the user that a dataset is reloaded from the cache instead of being recomputed.
They are warnings since we want to make sure the users know that it's not recomputed. | on master I get this with `--dataset_name wmt16 --dataset_config ro-en`:
```
WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-2e01bead8cf42e26.arrow
WARNING:datasets.arr... | 43 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
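For users who would rather not see these cache messages at all, the library's verbosity can be lowered; a minimal sketch:
```python
import datasets

# Hide the "Loading cached processed dataset ..." warnings emitted by map
datasets.logging.set_verbosity_error()
```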
https://github.com/huggingface/datasets/issues/1948 | dataset loading logger level | Thank you for explaining the intention, @lhoestq
1. Could it be then made more human-friendly? Currently the hex gibberish tells me nothing of what's really going on. e.g. the following is instructive, IMHO:
```
WARNING: wmt16/ro-en/train dataset was loaded from cache instead of being recomputed
WARNING: wmt16... | on master I get this with `--dataset_name wmt16 --dataset_config ro-en`:
```
WARNING:datasets.arrow_dataset:Loading cached processed dataset at /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f/cache-2e01bead8cf42e26.arrow
WARNING:datasets.arr... | 351 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1942 | [experiment] missing default_experiment-1-0.arrow | Hi !
The cache at `~/.cache/huggingface/metrics` stores the users data for metrics computations (hence the arrow files).
However python modules (i.e. dataset scripts, metric scripts) are stored in `~/.cache/huggingface/modules/datasets_modules`.
In particular the metrics are cached in `~/.cache/huggingface/mod... | the original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/... | 84 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1942 | [experiment] missing default_experiment-1-0.arrow | Thank you for clarifying that the metrics files are to be found elsewhere, @lhoestq
> The cache at ~/.cache/huggingface/metrics stores the users data for metrics computations (hence the arrow files).
could it be renamed to reflect that? otherwise it misleadingly suggests that it's the metrics. Perhaps `~/.cache/... | the original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/... | 93 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1942 | [experiment] missing default_experiment-1-0.arrow | The lock files come from an issue with filelock (see comment in the code [here](https://github.com/benediktschmitt/py-filelock/blob/master/filelock.py#L394-L398)). Basically on unix there are always .lock files left behind. I haven't dug into this issue | the original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/... | 30 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1942 | [experiment] missing default_experiment-1-0.arrow | are you sure you need an external lock file? if it's single-purpose locking in the same scope, you can lock the caller's `__file__` instead, e.g. here is how one can `flock` the script file itself to ensure atomic printing:
```
import fcntl
def printflock(*msgs):
""" print in multiprocess env so that the outpu... | the original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/... | 75 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
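The `printflock` snippet quoted in this row is cut off; for reference, here is a self-contained sketch of the flock-on-`__file__` pattern it describes (unix-only, and an illustration of the idea rather than the exact original code):
```python
import fcntl

def printflock(*msgs):
    """Print in a multi-process environment without interleaved output,
    by taking an exclusive lock on this very script file."""
    with open(__file__, "r") as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)
        try:
            print(*msgs)
        finally:
            fcntl.flock(fh, fcntl.LOCK_UN)
```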
https://github.com/huggingface/datasets/issues/1942 | [experiment] missing default_experiment-1-0.arrow | OK, this issue is not about caching but some internal conflict/race condition it seems, I have just run into it on my normal env:
```
Traceback (most recent call last):
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/metric.py", line 356, in _finalize
self.data = Dataset(**reader.read_files([... | the original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/... | 409 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1942 | [experiment] missing default_experiment-1-0.arrow | When you're using metrics in a distributed setup, there are two cases:
1. you're doing two completely different experiments (two evaluations) and the 2 metrics jobs have nothing to do with each other
2. you're doing one experiment (one evaluation) but use multiple processes to feed the data to the metric.
In case ... | the original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/... | 173 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
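A sketch of case 2 as described above; `rank`, `world_size`, `preds`, and `refs` are placeholders for values a real distributed script would already have:
```python
from datasets import load_metric

rank, world_size = 0, 1       # placeholders: set per process by the launcher
preds, refs = [0, 1], [0, 1]  # placeholder predictions / references

# One evaluation fed by several processes: every process declares the
# same num_process and its own process_id.
metric = load_metric("glue", "mrpc", num_process=world_size, process_id=rank)
metric.add_batch(predictions=preds, references=refs)
score = metric.compute()  # only the main process receives the final score
```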
https://github.com/huggingface/datasets/issues/1942 | [experiment] missing default_experiment-1-0.arrow | Thank you for explaining that in a great way, @lhoestq
So the bottom line is that the `transformers` examples are broken since they don't do any of that. At least `run_seq2seq.py` just does `metric = load_metric(metric_name)`
What test would you recommend to reliably reproduce this bug in `examples/seq2seq/run_s... | the original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/... | 48 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1942 | [experiment] missing default_experiment-1-0.arrow | To give more context, we are just using the metrics for the `compute_metrics` function and nothing else. Is there something else we can use that just applies the function to the full arrays of predictions and labels? Because that's all we need, all the gathering has already been done because the datasets Metric multiproc... | the original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/... | 85 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1942 | [experiment] missing default_experiment-1-0.arrow | OK, it definitely leads to a race condition in how it's used right now. Here is how you can reproduce it - by injecting a random sleep time different for each process before the locks are acquired.
```
--- a/src/datasets/metric.py
+++ b/src/datasets/metric.py
@@ -348,6 +348,16 @@ class Metric(MetricInfoMixin):
... | the original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/... | 452 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1942 | [experiment] missing default_experiment-1-0.arrow | I tried to adjust `run_seq2seq.py` and trainer to use the suggested dist env:
```
import torch.distributed as dist
metric = load_metric(metric_name, num_process=dist.get_world_size(), process_id=dist.get_rank())
```
and in `trainer.py` added the call just for rank 0:
```
if self.is_world_process_z... | the original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/... | 302 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1942 | [experiment] missing default_experiment-1-0.arrow | But no, since
`metric = load_metric(metric_name)`
is called for each process, the race condition is still there. So still getting:
```
ValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric in... | the original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/... | 76 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1942 | [experiment] missing default_experiment-1-0.arrow | OK, here is a workaround that works. The onus here is absolutely on the user:
```
diff --git a/examples/seq2seq/run_seq2seq.py b/examples/seq2seq/run_seq2seq.py
index 2a060dac5..c82fd83ea 100755
--- a/examples/seq2seq/run_seq2seq.py
+++ b/examples/seq2seq/run_seq2seq.py
@@ -520,7 +520,11 @@ def main():
... | the original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/... | 233 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1942 | [experiment] missing default_experiment-1-0.arrow | I don't see how this could be the responsibility of `Trainer`, which hasn't the faintest idea of what a `datasets.Metric` is. The trainer takes a function `compute_metrics` that goes from predictions + labels to metric results; there is nothing there. That computation is done on all processes
The fact a `datasets.Me... | the original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/... | 144 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1942 | [experiment] missing default_experiment-1-0.arrow | Right, to clarify, I meant it'd be good to have it sorted out on the library side, not requiring the user to figure it out. This is too complex and error-prone, and if not coded correctly the bug will be intermittent, which is even worse.
Oh I guess I wasn't clear in my message - in no way am I proposing that we use thi... | the original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/... | 139 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1942 | [experiment] missing default_experiment-1-0.arrow | > The fact a datasets.Metric object cannot be used as a simple compute function in a multi-process environment is, in my opinion, a bug in datasets
Yes totally, this use case is supposed to be supported by `datasets`. And in this case there shouldn't be any collision between the metrics. I'm looking into it :)
My g... | the original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/... | 85 | [text: duplicate of preceding columns] | [embeddings: truncated float vector] |
https://github.com/huggingface/datasets/issues/1942 | [experiment] missing default_experiment-1-0.arrow | I just opened #1966 to fix this :)
@stas00 if have a chance feel free to try it ! | the original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/... | 19 | [experiment] missing default_experiment-1-0.arrow
the original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned here https://github.com/huggingface/datasets/issues/19... | [
0.0502675324678421,
-0.03078579530119896,
0.05852575972676277,
0.16687782108783722,
0.03445854038000107,
0.030314818024635315,
0.1830069124698639,
0.2939567267894745,
0.2358812838792801,
0.10173702985048294,
0.10235785692930222,
0.18877007067203522,
-0.3214573264122009,
0.00119256158359348... |
https://github.com/huggingface/datasets/issues/1941 | Loading of FAISS index fails for index_name = 'exact' | Works great 👍 I just put a minor comment on the commit: I think you meant to pass the `train_size` obtained from the config.
Thanks for a quick response! | Hi,
It looks like loading of FAISS index now fails when using index_name = 'exact'.
For example, from the RAG [model card](https://huggingface.co/facebook/rag-token-nq?fbclid=IwAR3bTfhls5U_t9DqsX2Vzb7NhtRHxJxfQ-uwFT7VuCPMZUM2AdAlKF_qkI8#usage).
Running `transformers==4.3.2` and datasets installed from source o... | 32 | Loading of FAISS index fails for index_name = 'exact'
Hi,
It looks like loading of FAISS index now fails when using index_name = 'exact'.
For example, from the RAG [model card](https://huggingface.co/facebook/rag-token-nq?fbclid=IwAR3bTfhls5U_t9DqsX2Vzb7NhtRHxJxfQ-uwFT7VuCPMZUM2AdAlKF_qkI8#usage).
Running `t... | [
-0.08892612904310226,
-0.11038383096456528,
0.010998444631695747,
0.054001420736312866,
0.41016677021980286,
-0.056917641311883926,
0.29681259393692017,
0.25089746713638306,
0.26961129903793335,
0.19975514709949493,
-0.22171953320503235,
0.13984419405460358,
0.13631655275821686,
-0.1376478... |
https://github.com/huggingface/datasets/issues/1940 | Side effect when filtering data due to `does_function_return_dict` call in `Dataset.map()` | Thanks for the report !
Currently we don't have a way to let the user easily disable this behavior.
However I agree that we should support stateful processing functions, ideally by removing `does_function_return_dict`.
We needed this function in order to know whether the `map` function needs to write data or no...
In my codebase I have a function to filter rows in a dataset, selecting only a certain number of examples per class. The function passes an extra argument to maintain a counter of the number of dataset rows/examples already selected for each class, which are the ones I want to keep in the end:
```python
... | 123 | Side effect when filtering data due to `does_function_return_dict` call in `Dataset.map()`
Hi there!
In my codebase I have a function to filter rows in a dataset, selecting only a certain number of examples per class. The function passes an extra argument to maintain a counter of the number of dataset rows/examples...
-0.4214114248752594,
0.02895795740187168,
-0.15750360488891602,
0.018457606434822083,
-0.08442585915327072,
-0.32018518447875977,
0.205118328332901,
0.19465796649456024,
0.26603299379348755,
0.12066266685724258,
0.2073841094970703,
0.5199094414710999,
-0.07889799028635025,
0.09672635048627... |
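One side-effect-free way to get the per-class subsampling described in this thread is to compute the indices in plain Python and pass them to `select`, so the extra dry-run call that `map`/`filter` performs cannot touch the counter. A minimal sketch; the `label` column and the cap of two examples per class are assumptions.
```python
from collections import defaultdict
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d", "e"], "label": [0, 0, 0, 1, 1]})  # toy data

max_per_class = 2            # assumed cap
counts = defaultdict(int)
keep_indices = []

# The counter lives outside the dataset machinery, so nothing resets or
# double-increments it behind our back.
for idx, label in enumerate(ds["label"]):
    if counts[label] < max_per_class:
        counts[label] += 1
        keep_indices.append(idx)

subset = ds.select(keep_indices)
print(subset["label"])  # [0, 0, 1, 1]
```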
https://github.com/huggingface/datasets/issues/1939 | [firewalled env] OFFLINE mode | Thanks for reporting and for all the details and suggestions.
I'm totally in favor of having an HF_DATASETS_OFFLINE env variable to manually disable all the connection checks, remove retries, etc.
Moreover you may know that the use case that you are mentioning is already supported from `datasets` 1.3.0, i.e. you al... | This issue comes from a need to be able to run `datasets` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls.
I propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a sample program. There are 2 pos... | 156 | [firewalled env] OFFLINE mode
This issue comes from a need to be able to run `datasets` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls.
I propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a... | [
-0.4748802185058594,
0.16853299736976624,
-0.06255347281694412,
0.10505624115467072,
0.04431141912937164,
-0.22849194705486298,
0.4647970199584961,
0.0716799646615982,
0.25459134578704834,
0.04853048920631409,
0.018439369276165962,
0.06361494958400726,
0.08450692892074585,
0.19930404424667... |
https://github.com/huggingface/datasets/issues/1939 | [firewalled env] OFFLINE mode | I'm on master, so using all the available bells and whistles already.
If you look at the common issues - for example, it tries to look up files even when they appear in `_PACKAGED_DATASETS_MODULES`, which it shouldn't do.
--------------
Yes, there is a nuance to it. As I mentioned it's firewalled - that is it has a net... | This issue comes from a need to be able to run `datasets` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls.
I propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a sample program. There are 2 pos... | 257 | [firewalled env] OFFLINE mode
This issue comes from a need to be able to run `datasets` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls.
I propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a... | [
-0.4748802185058594,
0.16853299736976624,
-0.06255347281694412,
0.10505624115467072,
0.04431141912937164,
-0.22849194705486298,
0.4647970199584961,
0.0716799646615982,
0.25459134578704834,
0.04853048920631409,
0.018439369276165962,
0.06361494958400726,
0.08450692892074585,
0.19930404424667... |
https://github.com/huggingface/datasets/issues/1939 | [firewalled env] OFFLINE mode | OK, now I understand the hanging issue better.
I guess catching connection errors is not enough, we should also avoid all the hangings.
Currently the offline mode tests are only done by simulating an instant connection fail that returns an error, let's have another connection mock that hangs instead.
I'll also take a loo... | This issue comes from a need to be able to run `datasets` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls.
I propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a sample program. There are 2 pos... | 61 | [firewalled env] OFFLINE mode
This issue comes from a need to be able to run `datasets` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls.
I propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a... | [
-0.4748802185058594,
0.16853299736976624,
-0.06255347281694412,
0.10505624115467072,
0.04431141912937164,
-0.22849194705486298,
0.4647970199584961,
0.0716799646615982,
0.25459134578704834,
0.04853048920631409,
0.018439369276165962,
0.06361494958400726,
0.08450692892074585,
0.19930404424667... |
https://github.com/huggingface/datasets/issues/1939 | [firewalled env] OFFLINE mode | FWIW, I think instant failure on behalf of a network call is the simplest way to correctly represent the environment, and having the caller sort it out is the next thing to do, since here it is the case of having no functional network; it's just that the software doesn't know this is the case, because there ...
I propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a sample program. There are 2 pos... | 88 | [firewalled env] OFFLINE mode
This issue comes from a need to be able to run `datasets` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls.
I propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a... | [
-0.4748802185058594,
0.16853299736976624,
-0.06255347281694412,
0.10505624115467072,
0.04431141912937164,
-0.22849194705486298,
0.4647970199584961,
0.0716799646615982,
0.25459134578704834,
0.04853048920631409,
0.018439369276165962,
0.06361494958400726,
0.08450692892074585,
0.19930404424667... |
https://github.com/huggingface/datasets/issues/1939 | [firewalled env] OFFLINE mode | Update on this:
I managed to create a mock environment for tests that makes the connections hang until timeout.
I managed to reproduce the issue you're having in this environment.
I'll update the offline test cases to also test the robustness to connection hangings, and make sure we set proper timeouts where it... | This issue comes from a need to be able to run `datasets` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls.
I propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a sample program. There are 2 pos... | 65 | [firewalled env] OFFLINE mode
This issue comes from a need to be able to run `datasets` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls.
I propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a... | [
-0.4748802185058594,
0.16853299736976624,
-0.06255347281694412,
0.10505624115467072,
0.04431141912937164,
-0.22849194705486298,
0.4647970199584961,
0.0716799646615982,
0.25459134578704834,
0.04853048920631409,
0.018439369276165962,
0.06361494958400726,
0.08450692892074585,
0.19930404424667... |
https://github.com/huggingface/datasets/issues/1939 | [firewalled env] OFFLINE mode | I lost access to the firewalled setup, but I emulated it with:
```
sudo ufw enable
sudo ufw default deny outgoing
```
(thanks @mfuntowicz)
I was able to test `HF_DATASETS_OFFLINE=1` and it worked great - i.e. didn't try to reach out with it and used the cached files instead.
Thank you! | This issue comes from a need to be able to run `datasets` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls.
I propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a sample program. There are 2 pos... | 51 | [firewalled env] OFFLINE mode
This issue comes from a need to be able to run `datasets` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls.
I propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a... | [
-0.4748802185058594,
0.16853299736976624,
-0.06255347281694412,
0.10505624115467072,
0.04431141912937164,
-0.22849194705486298,
0.4647970199584961,
0.0716799646615982,
0.25459134578704834,
0.04853048920631409,
0.018439369276165962,
0.06361494958400726,
0.08450692892074585,
0.19930404424667... |
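The flag confirmed above can also be set from inside a script; a minimal sketch, with the dataset name purely illustrative — it only works if that dataset is already in the local cache.
```python
import os

# HF_DATASETS_OFFLINE is read when `datasets` is imported, so set it first.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

# No network calls are attempted; this succeeds only if the dataset
# (illustrative name below) was downloaded and cached beforehand.
ds = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")
print(len(ds))
```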
https://github.com/huggingface/datasets/issues/1924 | Anonymous Dataset Addition (i.e Anonymous PR?) | Hi !
I guess you can add a dataset without the fields that must be kept anonymous, and then update those when the anonymity period is over.
You can also make the PR from an anonymous org.
Pinging @yjernite just to make sure it's ok | Hello,
Thanks a lot for your library.
We plan to submit a paper on OpenReview using the Anonymous setting. Is it possible to add a new dataset without breaking the anonymity, with a link to the paper?
Cheers
@eusip | 45 | Anonymous Dataset Addition (i.e Anonymous PR?)
Hello,
Thanks a lot for your library.
We plan to submit a paper on OpenReview using the Anonymous setting. Is it possible to add a new dataset without breaking the anonymity, with a link to the paper?
Cheers
@eusip
Hi !
I guess you can add a dataset without th... | [
-0.2978556752204895,
0.5536885857582092,
-0.02381746843457222,
-0.19032126665115356,
-0.07589460909366608,
-0.10795895010232925,
0.5019997358322144,
0.008096972480416298,
0.06429582834243774,
0.05346941202878952,
0.07121046632528305,
0.10383307188749313,
0.00415549473837018,
0.163485959172... |
https://github.com/huggingface/datasets/issues/1924 | Anonymous Dataset Addition (i.e Anonymous PR?) | Hello,
I would prefer to do the reverse: adding a link to an anonymous paper without people's names/institutions in the PR. Would that be conceivable?
Cheers
| Hello,
Thanks a lot for your library.
We plan to submit a paper on OpenReview using the Anonymous setting. Is it possible to add a new dataset without breaking the anonymity, with a link to the paper?
Cheers
@eusip | 28 | Anonymous Dataset Addition (i.e Anonymous PR?)
Hello,
Thanks a lot for your library.
We plan to submit a paper on OpenReview using the Anonymous setting. Is it possible to add a new dataset without breaking the anonymity, with a link to the paper?
Cheers
@eusip
Hello,
I would prefer to do the reverse: addi... | [
-0.20116695761680603,
0.4423331916332245,
0.020196983590722084,
-0.07657133787870407,
-0.09471506625413895,
-0.094412662088871,
0.4387369453907013,
-0.0027231096755713224,
0.036278076469898224,
0.09344000369310379,
-0.05074584111571312,
0.06698263436555862,
0.003765957662835717,
0.08743447... |
https://github.com/huggingface/datasets/issues/1922 | How to update the "wino_bias" dataset | Hi @JieyuZhao !
You can edit the dataset card of wino_bias to update the URL via a Pull Request. This would be really appreciated :)
The dataset card is the README.md file you can find at https://github.com/huggingface/datasets/tree/master/datasets/wino_bias
The homepage url is also mentioned in the wino_bi...
Thanks for the efforts to collect all the datasets! But I think there is a problem with the wino_bias dataset. The current link is not correct. How can I update that?
Thanks! | 89 | How to update the "wino_bias" dataset
Hi all,
Thanks for the efforts to collect all the datasets! But I think there is a problem with the wino_bias dataset. The current link is not correct. How can I update that?
Thanks!
Hi @JieyuZhao !
You can edit the dataset card of wino_bias to update the URL via a Pul... | [
-0.3595491051673889,
0.16637474298477173,
-0.09109664708375931,
0.10533387959003448,
0.0010985498083755374,
0.1791953146457672,
-0.03932571783661842,
0.06371160596609116,
0.06226535886526108,
-0.12878231704235077,
-0.26217544078826904,
-0.0001706450857454911,
0.2578837275505066,
0.15930899... |
https://github.com/huggingface/datasets/issues/1919 | Failure to save with save_to_disk | Hi thanks for reporting and for proposing a fix :)
I just merged a fix, feel free to try it from the master branch ! | When I try to save a dataset locally using the `save_to_disk` method I get the error:
```bash
FileNotFoundError: [Errno 2] No such file or directory: '/content/squad/train/squad-train.arrow'
```
To replicate:
1. Install `datasets` from master
2. Run this code:
```python
from datasets import load... | 25 | Failure to save with save_to_disk
When I try to save a dataset locally using the `save_to_disk` method I get the error:
```bash
FileNotFoundError: [Errno 2] No such file or directory: '/content/squad/train/squad-train.arrow'
```
To replicate:
1. Install `datasets` from master
2. Run this code:
```p... | [
-0.043686650693416595,
0.14294907450675964,
0.057011622935533524,
0.18956156075000763,
0.519417405128479,
0.2885245084762573,
0.19221335649490356,
0.23881301283836365,
-0.13136450946331024,
0.18590502440929413,
0.14157813787460327,
0.36805951595306396,
-0.3196106255054474,
-0.2508212625980... |
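For reference, the round trip that this issue exercises — a minimal sketch, with the output directory purely illustrative:
```python
from datasets import load_dataset, load_from_disk

ds = load_dataset("squad", split="train")

# save_to_disk takes a directory; the Arrow files and dataset info are written inside it.
ds.save_to_disk("./squad_train")  # illustrative path

reloaded = load_from_disk("./squad_train")
print(reloaded)
```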
https://github.com/huggingface/datasets/issues/1915 | Unable to download `wiki_dpr` | Thanks for reporting ! This is a bug. For now feel free to set `ignore_verifications=False` in `load_dataset`.
I'm working on a fix | I am trying to download the `wiki_dpr` dataset. Specifically, I want to download `psgs_w100.multiset.no_index` with no embeddings/no index. In order to do so, I ran:
`curr_dataset = load_dataset("wiki_dpr", embeddings_name="multiset", index_name="no_index")`
However, I got the following error:
`datasets.utils.i... | 22 | Unable to download `wiki_dpr`
I am trying to download the `wiki_dpr` dataset. Specifically, I want to download `psgs_w100.multiset.no_index` with no embeddings/no index. In order to do so, I ran:
`curr_dataset = load_dataset("wiki_dpr", embeddings_name="multiset", index_name="no_index")`
However, I got the fol... | [
-0.3591079115867615,
-0.3630981743335724,
-0.05044252425432205,
0.31482598185539246,
0.3717033565044403,
0.4270488917827606,
0.3198949694633484,
0.06253399699926376,
0.238521546125412,
0.1804991364479065,
0.16470183432102203,
-0.02060895413160324,
0.09541145712137222,
-0.021091077476739883... |
https://github.com/huggingface/datasets/issues/1915 | Unable to download `wiki_dpr` | I just merged a fix :)
We'll do a patch release soon. In the meantime feel free to try it from the master branch
Thanks again for reporting ! | I am trying to download the `wiki_dpr` dataset. Specifically, I want to download `psgs_w100.multiset.no_index` with no embeddings/no index. In order to do so, I ran:
`curr_dataset = load_dataset("wiki_dpr", embeddings_name="multiset", index_name="no_index")`
However, I got the following error:
`datasets.utils.i... | 29 | Unable to download `wiki_dpr`
I am trying to download the `wiki_dpr` dataset. Specifically, I want to download `psgs_w100.multiset.no_index` with no embeddings/no index. In order to do so, I ran:
`curr_dataset = load_dataset("wiki_dpr", embeddings_name="multiset", index_name="no_index")`
However, I got the fol... | [
-0.3591079115867615,
-0.3630981743335724,
-0.05044252425432205,
0.31482598185539246,
0.3717033565044403,
0.4270488917827606,
0.3198949694633484,
0.06253399699926376,
0.238521546125412,
0.1804991364479065,
0.16470183432102203,
-0.02060895413160324,
0.09541145712137222,
-0.021091077476739883... |
https://github.com/huggingface/datasets/issues/1911 | Saving processed dataset running infinitely | I am suspicious of this thing: what's the purpose of the pickling and unpickling here?
`self = pickle.loads(pickle.dumps(self))`
```
def save_to_disk(self, dataset_path: str, fs=None):
"""
Saves a dataset to a dataset directory, or in a filesystem using either :class:`datasets.filesystem.S3FileSys... | I have a text dataset of size 220M.
For pre-processing, I need to tokenize this and filter rows with the large sequence.
My tokenization took roughly 3hrs. I used map() with batch size 1024 and multi-process with 96 processes.
filter() function was way too slow, so I used a hack to use pyarrow filter table func... | 103 | Saving processed dataset running infinitely
I have a text dataset of size 220M.
For pre-processing, I need to tokenize this and filter rows with the large sequence.
My tokenization took roughly 3hrs. I used map() with batch size 1024 and multi-process with 96 processes.
filter() function was way too slow, so ... | [
-0.21761900186538696,
0.19676508009433746,
-0.11508186906576157,
0.2636953592300415,
0.14856977760791779,
-0.07311965525150299,
0.1913732886314392,
0.2333744913339615,
-0.19323325157165527,
-0.08382964879274368,
0.06877279281616211,
0.35656502842903137,
-0.10871390998363495,
0.263534247875... |
https://github.com/huggingface/datasets/issues/1911 | Saving processed dataset running infinitely | Tried finding the root cause but was unsuccessful.
I am using lazy tokenization with `dataset.set_transform()`; it works like a charm, with almost the same performance as pre-computing. | I have a text dataset of size 220M.
For pre-processing, I need to tokenize this and filter rows with the large sequence.
My tokenization took roughly 3hrs. I used map() with batch size 1024 and multi-process with 96 processes.
filter() function was way too slow, so I used a hack to use pyarrow filter table func... | 26 | Saving processed dataset running infinitely
I have a text dataset of size 220M.
For pre-processing, I need to tokenize this and filter rows with the large sequence.
My tokenization took roughly 3hrs. I used map() with batch size 1024 and multi-process with 96 processes.
filter() function was way too slow, so ... | [
-0.35619667172431946,
0.16838891804218292,
-0.0956779420375824,
0.27057671546936035,
0.2809065878391266,
-0.055566541850566864,
0.22745616734027863,
0.2229020744562149,
-0.31085819005966187,
-0.13959826529026031,
0.1082136407494545,
0.17598652839660645,
-0.1438532918691635,
0.2344979494810... |
https://github.com/huggingface/datasets/issues/1911 | Saving processed dataset running infinitely | Hi ! This very probably comes from the hack you used.
The pickling line was added as a sanity check because save_to_disk uses the same assumptions as pickling for a dataset object. The main assumption is that memory mapped pyarrow tables must be reloadable from the disk. In your case it's not possible since you alte... | I have a text dataset of size 220M.
For pre-processing, I need to tokenize this and filter rows with the large sequence.
My tokenization took roughly 3hrs. I used map() with batch size 1024 and multi-process with 96 processes.
filter() function was way too slow, so I used a hack to use pyarrow filter table func... | 191 | Saving processed dataset running infinitely
I have a text dataset of size 220M.
For pre-processing, I need to tokenize this and filter rows with the large sequence.
My tokenization took roughly 3hrs. I used map() with batch size 1024 and multi-process with 96 processes.
filter() function was way too slow, so ... | [
-0.38356658816337585,
0.22079163789749146,
-0.07534833997488022,
0.2524212598800659,
0.1676332652568817,
-0.08597276359796524,
0.09959086030721664,
0.24253235757350922,
-0.16498245298862457,
-0.058491144329309464,
0.033023472875356674,
0.4020827114582062,
-0.08487129211425781,
0.2319575399... |
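Staying on the supported API avoids the reload problem described above. A minimal sketch; the column name and length threshold are assumptions.
```python
from datasets import Dataset

ds = Dataset.from_dict({"input_ids": [[1, 2, 3], list(range(600)), [4, 5]]})  # toy data

max_len = 512  # assumed threshold

# Unlike a pyarrow-level hack, this goes through datasets' own writer, so the
# result keeps the assumptions that pickling and save_to_disk rely on.
filtered = ds.filter(lambda example: len(example["input_ids"]) <= max_len)
print(len(filtered))  # 2
```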
https://github.com/huggingface/datasets/issues/1907 | DBPedia14 Dataset Checksum bug? | Hi ! :)
This looks like the same issue as https://github.com/huggingface/datasets/issues/1856
Basically Google Drive has quota issues that make it inconvenient for downloading files.
If the quota of a file is exceeded, you have to wait 24h for the quota to reset (which is painful).
The error says that the c... | Hi there!!!
I've been successfully using the DBPedia dataset (https://huggingface.co/datasets/dbpedia_14) with my codebase for the last couple of weeks, but in the last couple of days I've been getting this error:
```
Traceback (most recent call last):
File "./conditional_classification/basic_pipeline.py", line 178, i... | 75 | DBPedia14 Dataset Checksum bug?
Hi there!!!
I've been successfully using the DBPedia dataset (https://huggingface.co/datasets/dbpedia_14) with my codebase for the last couple of weeks, but in the last couple of days I've been getting this error:
```
Traceback (most recent call last):
File "./conditional_classificati... | [
-0.24835599958896637,
0.350805401802063,
-0.12276438623666763,
0.262452632188797,
0.1046653687953949,
-0.004844595678150654,
0.3205702304840088,
0.5194302797317505,
-0.08774170279502869,
-0.041358914226293564,
0.07762551307678223,
-0.19822663068771362,
0.0031298913527280092,
0.347411334514... |
https://github.com/huggingface/datasets/issues/1906 | Feature Request: Support for Pandas `Categorical` | We already have a ClassLabel type that does this kind of mapping between the label ids (integers) and actual label values (strings).
I wonder if actually we should use the DictionaryType from Arrow and the Categorical type from pandas for the `datasets` ClassLabel feature type.
Currently ClassLabel corresponds to `... | ```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws NotImplementedError
# TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_... | 69 | Feature Request: Support for Pandas `Categorical`
```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws NotImplementedError
# TODO(thom) this will need access to ... | [
0.023009994998574257,
-0.01027372945100069,
-0.14260606467723846,
0.20766469836235046,
0.2370131015777588,
0.16727855801582336,
0.12674053013324738,
0.24871832132339478,
-0.07384240627288818,
-0.21550747752189636,
0.16739965975284576,
0.28240716457366943,
-0.20351159572601318,
0.4252628982... |
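The ClassLabel mapping mentioned in this thread looks like this in practice — a minimal sketch with made-up label names:
```python
from datasets import ClassLabel, Dataset, Features, Value

features = Features({
    "text": Value("string"),
    "label": ClassLabel(names=["a", "b", "c"]),  # string labels backed by integer ids
})

ds = Dataset.from_dict(
    {"text": ["w", "x", "y", "z"], "label": [0, 1, 2, 0]},
    features=features,
)

# The column stores integers; the feature keeps the id <-> string mapping.
print(ds[0]["label"])                     # 0
print(ds.features["label"].int2str(0))    # "a"
print(ds.features["label"].str2int("c"))  # 2
```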
https://github.com/huggingface/datasets/issues/1906 | Feature Request: Support for Pandas `Categorical` | Now that I've heard you explain ClassLabel, that makes a lot of sense! While DictionaryType for Arrow (I think) can have arbitrarily typed keys, so it won't cover all potential cases, pandas' Category is *probably* the most common use for that pyarrow type, and ClassLabel should match that perfectly?
Other thoughts... | ```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws NotImplementedError
# TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_... | 319 | Feature Request: Support for Pandas `Categorical`
```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws NotImplementedError
# TODO(thom) this will need access to ... | [
0.023009994998574257,
-0.01027372945100069,
-0.14260606467723846,
0.20766469836235046,
0.2370131015777588,
0.16727855801582336,
0.12674053013324738,
0.24871832132339478,
-0.07384240627288818,
-0.21550747752189636,
0.16739965975284576,
0.28240716457366943,
-0.20351159572601318,
0.4252628982... |
https://github.com/huggingface/datasets/issues/1906 | Feature Request: Support for Pandas `Categorical` | I think we can first keep the int64 precision but with an arrow Dictionary for ClassLabel, and focus on the connection with arrow and pandas.
In this scope, I really like the idea of checking for the dictionary type:
> Another idea that just occurred to me: add a branch in here to generate a ClassLabel if the dic... | ```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws NotImplementedError
# TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_... | 260 | Feature Request: Support for Pandas `Categorical`
```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws NotImplementedError
# TODO(thom) this will need access to ... | [
0.023009994998574257,
-0.01027372945100069,
-0.14260606467723846,
0.20766469836235046,
0.2370131015777588,
0.16727855801582336,
0.12674053013324738,
0.24871832132339478,
-0.07384240627288818,
-0.21550747752189636,
0.16739965975284576,
0.28240716457366943,
-0.20351159572601318,
0.4252628982... |
https://github.com/huggingface/datasets/issues/1898 | ALT dataset has repeating instances in all splits | I just merged a fix, we'll do a patch release soon. Thanks again for reporting, and sorry for the inconvenience.
In the meantime you can load `ALT` using `datasets` from the master branch | The [ALT](https://huggingface.co/datasets/alt) dataset has all the same instances within each split :/
Seemed like a great dataset for some experiments I wanted to carry out, especially since it's medium-sized and has all splits.
Would be great if this could be fixed :)
Added a snapshot of the contents from `exp... | 33 | ALT dataset has repeating instances in all splits
The [ALT](https://huggingface.co/datasets/alt) dataset has all the same instances within each split :/
Seemed like a great dataset for some experiments I wanted to carry out, especially since it's medium-sized and has all splits.
Would be great if this could be fi... | [
-0.2530449628829956,
-0.7312533855438232,
-0.05559558793902397,
0.43207788467407227,
0.3174538016319275,
-0.17367693781852722,
0.3799055218696594,
0.1424696296453476,
0.2794247269630432,
0.33198994398117065,
-0.16143018007278442,
0.09805223345756531,
0.0777064710855484,
-0.0084823248907923... |
https://github.com/huggingface/datasets/issues/1895 | Bug Report: timestamp[ns] not recognized | Thanks for reporting !
You're right, `string_to_arrow` should be able to take `"timestamp[ns]"` as input and return the right pyarrow timestamp type.
Feel free to suggest a fix for `string_to_arrow` and open a PR if you want to contribute ! This would be very appreciated :)
To give you more context:
As you ma... | Repro:
```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems to be a pyarrow data type.
```
The fact... | 159 | Bug Report: timestamp[ns] not recognized
Repro:
```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems ... | [
-0.15279346704483032,
0.3287229835987091,
0.004078532103449106,
0.08716775476932526,
0.09477335214614868,
-0.07733089476823807,
0.4513309597969055,
0.36234843730926514,
-0.49170050024986267,
-0.2842440605163574,
0.26754093170166016,
0.5988480448722839,
-0.1803756207227707,
0.01818786188960... |
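To make the discussion concrete, the kind of string-to-pyarrow parsing being talked about could look like the sketch below — a rough illustration only, not the actual `string_to_arrow` patch:
```python
import re
import pyarrow as pa

def parse_timestamp_type(type_str: str) -> pa.DataType:
    """Map a string like 'timestamp[ns]' or 'timestamp[us, tz=UTC]' to a pyarrow type."""
    match = re.fullmatch(r"timestamp\[(\w+)(?:,\s*tz=(.+))?\]", type_str)
    if match is None:
        raise ValueError(f"{type_str} is not a timestamp type")
    unit, tz = match.groups()
    return pa.timestamp(unit, tz=tz)

print(parse_timestamp_type("timestamp[ns]"))          # timestamp[ns]
print(parse_timestamp_type("timestamp[us, tz=UTC]"))  # timestamp[us, tz=UTC]
```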
https://github.com/huggingface/datasets/issues/1895 | Bug Report: timestamp[ns] not recognized | Thanks for the clarification @lhoestq !
This may be a little bit of a stupid question, but I wanted to clarify one more thing before I took a stab at this:
When the features get inferred, I believe they already have a pyarrow schema (https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.p... | Repro:
```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems to be a pyarrow data type.
```
The fact... | 100 | Bug Report: timestamp[ns] not recognized
Repro:
```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems ... | [
-0.07395324110984802,
0.2633906304836273,
0.023429321125149727,
0.11723325401544571,
0.06972935795783997,
-0.10280070453882217,
0.3482518196105957,
0.26487571001052856,
-0.5368141531944275,
-0.2700381875038147,
0.23996183276176453,
0.4913218319416046,
-0.14977119863033295,
-0.0816281586885... |
https://github.com/huggingface/datasets/issues/1895 | Bug Report: timestamp[ns] not recognized | The objective in terms of design is to make it easy to create Features in a pythonic way. So for example we use a string to define a Value type.
That's why when inferring the Features from an arrow schema we have to find the right string definitions for Value types. I guess we could also have a constructor `Value.from... | Repro:
```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems to be a pyarrow data type.
```
The fact... | 116 | Bug Report: timestamp[ns] not recognized
Repro:
```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems ... | [
-0.11467518657445908,
0.19220757484436035,
0.044961441308259964,
0.14448198676109314,
-0.02857341803610325,
-0.12259768694639206,
0.44447094202041626,
0.28227758407592773,
-0.5322681665420532,
-0.32548388838768005,
0.32812511920928955,
0.5652285218238831,
-0.2068844586610794,
0.08838324248... |
https://github.com/huggingface/datasets/issues/1895 | Bug Report: timestamp[ns] not recognized | OK I think I understand now:
Features are datasets' internal representation of a schema type, distinct from pyarrow's schema.
Value() corresponds to pyarrow's "primitive" types (e.g. `int` or `string`, but not things like `list` or `dict`).
`get_nested_type()` (https://github.com/huggingface/datasets/blob/master/s... | Repro:
```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems to be a pyarrow data type.
```
The fact... | 96 | Bug Report: timestamp[ns] not recognized
Repro:
```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems ... | [
-0.14257386326789856,
0.32469606399536133,
-0.00011862847895827144,
0.11998914182186127,
0.06743549555540085,
-0.07652744650840759,
0.39472195506095886,
0.27701982855796814,
-0.4754163920879364,
-0.2832738161087036,
0.20345333218574524,
0.5407944321632385,
-0.16183963418006897,
0.000655502... |
https://github.com/huggingface/datasets/issues/1894 | benchmarking against MMapIndexedDataset | Hi sam !
Indeed we can expect the performance to be very close since both MMapIndexedDataset and the `datasets` implem use memory mapping. With memory mapping, what determines the I/O performance is the speed of your hard drive/SSD.
In terms of performance we're pretty close to the optimal speed for reading text, e... | I am trying to benchmark my datasets based implementation against fairseq's [`MMapIndexedDataset`](https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L365) and finding that, according to psrecord, my `datasets` implem uses about 3% more CPU memory and runs 1% slower for `wikitext103` (~1GB o... | 141 | benchmarking against MMapIndexedDataset
I am trying to benchmark my datasets based implementation against fairseq's [`MMapIndexedDataset`](https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L365) and finding that, according to psrecord, my `datasets` implem uses about 3% more CPU memory an... | [
-0.38104191422462463,
-0.030588919296860695,
-0.18135753273963928,
0.25040578842163086,
-0.29359933733940125,
-0.04670786112546921,
0.03982248529791832,
0.19099220633506775,
-0.1505497246980667,
-0.09039321541786194,
-0.16343756020069122,
0.3356112837791443,
0.03113112971186638,
-0.4226370... |
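Since the numbers above depend mostly on raw read throughput, a crude way to measure it on any `datasets` dataset is sketched below; the corpus name and batch size are illustrative, and character count is only a rough proxy for bytes.
```python
import time
from datasets import load_dataset

ds = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")  # illustrative corpus

start = time.time()
n_chars = 0
# Read the text column in slices, the same memory-mapped access pattern as training.
for i in range(0, len(ds), 1000):
    batch = ds[i : i + 1000]["text"]
    n_chars += sum(len(t) for t in batch)
elapsed = time.time() - start

print(f"read {n_chars / 1e6:.0f} M chars in {elapsed:.1f}s "
      f"(~{n_chars / 1e6 / elapsed:.0f} MB/s if one char is roughly one byte)")
```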
https://github.com/huggingface/datasets/issues/1894 | benchmarking against MMapIndexedDataset | Also I would be interested to know what data types `MMapIndexedDataset` supports. Is there some documentation somewhere ? | I am trying to benchmark my datasets based implementation against fairseq's [`MMapIndexedDataset`](https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L365) and finding that, according to psrecord, my `datasets` implem uses about 3% more CPU memory and runs 1% slower for `wikitext103` (~1GB o... | 18 | benchmarking against MMapIndexedDataset
I am trying to benchmark my datasets based implementation against fairseq's [`MMapIndexedDataset`](https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L365) and finding that, according to psrecord, my `datasets` implem uses about 3% more CPU memory an... | [
-0.262320339679718,
-0.058780863881111145,
-0.18960964679718018,
0.2777881920337677,
-0.22772569954395294,
-0.07376202195882797,
0.01116351317614317,
0.13576088845729828,
-0.29337409138679504,
-0.15680256485939026,
-0.15320034325122833,
0.37742194533348083,
0.002740209922194481,
-0.3708998... |
https://github.com/huggingface/datasets/issues/1894 | benchmarking against MMapIndexedDataset | no docs haha, it's written to support integer numpy arrays.
You can build one in fairseq with, roughly:
```bash
wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip
unzip wikitext-103-raw-v1.zip
export dd=$HOME/fairseq-py/wikitext-103-raw
export mm_dir=$HOME/mmap_wikitext2
mk... | I am trying to benchmark my datasets based implementation against fairseq's [`MMapIndexedDataset`](https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L365) and finding that, according to psrecord, my `datasets` implem uses about 3% more CPU memory and runs 1% slower for `wikitext103` (~1GB o... | 249 | benchmarking against MMapIndexedDataset
I am trying to benchmark my datasets based implementation against fairseq's [`MMapIndexedDataset`](https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L365) and finding that, according to psrecord, my `datasets` implem uses about 3% more CPU memory an... | [
-0.24320228397846222,
0.02335263602435589,
-0.1691766232252121,
0.182050883769989,
-0.10221871733665466,
-0.022615807130932808,
0.1294502466917038,
0.2604651153087616,
-0.20882712304592133,
-0.06991731375455856,
-0.15569910407066345,
0.5033259391784668,
-0.07024982571601868,
-0.41885629296... |
https://github.com/huggingface/datasets/issues/1893 | wmt19 is broken | This was also mentioned in https://github.com/huggingface/datasets/issues/488
The bucket where the data was stored seems to be unavailable now. Maybe we can change the URL to the ones in https://conferences.unite.un.org/uncorpus/en/downloadoverview?
Please pick one among the available configs: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de']
2. OK, let's pick `ru-en`:
`--dataset_name wmt19 --dataset_config "ru-en"`
no cookies:
```
Traceback (most recent c... | 30 | wmt19 is broken
1. Check which lang pairs we have: `--dataset_name wmt19`:
Please pick one among the available configs: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de']
2. OK, let's pick `ru-en`:
`--dataset_name wmt19 --dataset_config "ru-en"`
no cookies:
```
Traceb... | [
-0.353167861700058,
-0.45960497856140137,
-0.04126093164086342,
0.3225635290145874,
0.17832499742507935,
-0.03620482608675957,
0.19373199343681335,
0.23352697491645813,
0.06609708815813065,
0.060678631067276,
0.014796342700719833,
0.10903196781873703,
-0.12556126713752747,
0.57074767351150... |
https://github.com/huggingface/datasets/issues/1892 | request to mirror wmt datasets, as they are really slow to download | Yes that would be awesome. Not only are the download speeds awful, but some files are also missing.
We list all the URLs in datasets/wmt19/wmt_utils.py, so we can make a script to download them all and host them on S3.
Also I think most of the materials are under the CC BY-NC-SA 3.0 license (must double check) so it sh... | Would it be possible to mirror the wmt data files under hf? Some of them take hours to download and not because of the local speed. They are all quite small datasets, just extremely slow to download.
Thank you! | 78 | request to mirror wmt datasets, as they are really slow to download
Would it be possible to mirror the wmt data files under hf? Some of them take hours to download and not because of the local speed. They are all quite small datasets, just extremely slow to download.
Thank you!
Yes that would be awesome. Not onl... | [
-0.19423808157444,
-0.3241637647151947,
0.07049348950386047,
0.21240487694740295,
0.010060499422252178,
0.06529325246810913,
0.16630885004997253,
0.45449477434158325,
0.0562022365629673,
-0.11427649855613708,
-0.2518415153026581,
-0.07385186851024628,
-0.06027562543749809,
0.31078296899795... |
https://github.com/huggingface/datasets/issues/1892 | request to mirror wmt datasets, as they are really slow to download | Yeah, the scripts are pretty ugly! A big refactor would make sense here...and I also remember that the datasets were veeery slow to download | Would it be possible to mirror the wmt data files under hf? Some of them take hours to download and not because of the local speed. They are all quite small datasets, just extremely slow to download.
Thank you! | 24 | request to mirror wmt datasets, as they are really slow to download
Would it be possible to mirror the wmt data files under hf? Some of them take hours to download and not because of the local speed. They are all quite small datasets, just extremely slow to download.
Thank you!
Yeah, the scripts are pretty ugly!... | [
-0.3805083930492401,
-0.17535410821437836,
0.007587605621665716,
0.10376977920532227,
-0.01046908088028431,
0.05997798219323158,
0.1670316755771637,
0.5077072978019714,
0.27166005969047546,
-0.03072589635848999,
-0.2506195604801178,
-0.22163145244121552,
0.033997125923633575,
0.24326860904... |
https://github.com/huggingface/datasets/issues/1892 | request to mirror wmt datasets, as they are really slow to download | I'm downloading them.
I'm starting with the ones hosted on http://data.statmt.org which are the slowest ones | Would it be possible to mirror the wmt data files under hf? Some of them take hours to download and not because of the local speed. They are all quite small datasets, just extremely slow to download.
Thank you! | 16 | request to mirror wmt datasets, as they are really slow to download
Would it be possible to mirror the wmt data files under hf? Some of them take hours to download and not because of the local speed. They are all quite small datasets, just extremely slow to download.
Thank you!
I'm downloading them.
I'm startin... | [
-0.4048575758934021,
-0.31400904059410095,
0.03699662908911705,
0.1695624589920044,
-0.013983980752527714,
0.14836400747299194,
0.09767137467861176,
0.4406717121601105,
0.20124968886375427,
-0.02605224959552288,
-0.3467167317867279,
-0.2557106018066406,
0.08942706137895584,
0.1072440221905... |
https://github.com/huggingface/datasets/issues/1892 | request to mirror wmt datasets, as they are really slow to download | @lhoestq better to use our new git-based system than just raw S3, no? (that way we have built-in CDN etc.) | Would it be possible to mirror the wmt data files under hf? Some of them take hours to download and not because of the local speed. They are all quite small datasets, just extremely slow to download.
Thank you! | 20 | request to mirror wmt datasets, as they are really slow to download
Would it be possible to mirror the wmt data files under hf? Some of them take hours to download and not because of the local speed. They are all quite small datasets, just extremely slow to download.
Thank you!
@lhoestq better to use our new git... | [
-0.28890982270240784,
-0.3306781053543091,
0.012012041173875332,
0.10932310670614243,
0.05256754532456398,
-0.0814259722828865,
0.16152285039424896,
0.4271323084831238,
0.09387054294347763,
-0.01903367042541504,
-0.3289588689804077,
-0.08687074482440948,
0.009227766655385494,
0.31452369689... |
https://github.com/huggingface/datasets/issues/1877 | Allow concatenation of both in-memory and on-disk datasets | I started working on this. My idea is to first add the pyarrow Table wrappers InMemoryTable and MemoryMappedTable that both implement what's necessary regarding copy/pickle. Then have another wrapper that takes the concatenation of InMemoryTable/MemoryMappedTable objects.
What's important here is that concatenating ... | This is a prerequisite for the addition of the `add_item` feature (see #1870).
Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using the dataset._data_files).
This assumption is used for pickl... | 55 | Allow concatenation of both in-memory and on-disk datasets
This is a prerequisite for the addition of the `add_item` feature (see #1870).
Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using... | [
-0.3632805049419403,
0.17124883830547333,
-0.016625775024294853,
0.19440218806266785,
0.09919456392526627,
0.03104373998939991,
-0.224034383893013,
0.28792864084243774,
-0.27799278497695923,
0.16979949176311493,
-0.014143416658043861,
0.6244733333587646,
-0.007651618216186762,
0.3612303435... |
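For reference, the user-facing API whose behaviour this work has to preserve — a minimal sketch of concatenating two small in-memory datasets:
```python
from datasets import Dataset, concatenate_datasets

ds1 = Dataset.from_dict({"text": ["a", "b"]})
ds2 = Dataset.from_dict({"text": ["c", "d"]})

# Both datasets must share the same features; the work discussed above is about
# letting in-memory and memory-mapped (on-disk) datasets be mixed here safely.
combined = concatenate_datasets([ds1, ds2])
print(combined["text"])  # ['a', 'b', 'c', 'd']
```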
https://github.com/huggingface/datasets/issues/1877 | Allow concatenation of both in-memory and on-disk datasets | Hi @lhoestq @albertvillanova,
I checked the linked issues and PR, this seems like a great idea. Would you mind elaborating on the in-memory and memory-mapped datasets?
Based on my understanding, it is something like this, please correct me if I am wrong:
1. For in-memory datasets, we don't have any dataset files ... | This is a prerequisite for the addition of the `add_item` feature (see #1870).
Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using the dataset._data_files).
This assumption is used for pickl... | 129 | Allow concatenation of both in-memory and on-disk datasets
This is a prerequisite for the addition of the `add_item` feature (see #1870).
Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using... | [
-0.3151932954788208,
0.023981774225831032,
-0.0091472202911973,
0.3324128985404968,
-0.09299745410680771,
-0.016001543030142784,
-0.1415652483701706,
0.051388416439294815,
-0.12341365218162537,
-0.034325432032346725,
0.012675134465098381,
0.5089061856269836,
0.015508892014622688,
0.4306789... |
https://github.com/huggingface/datasets/issues/1877 | Allow concatenation of both in-memory and on-disk datasets | Hi ! Yes you're totally right about your two points :)
And in the case of a concatenated dataset, we should reload each sub-table depending on whether it's in-memory or memory mapped. That means the dataset will be made of several blocks in order to keep track of what's from memory and what's memory mapped. Thi...
Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using the dataset._data_files).
This assumption is used for pickl... | 62 | Allow concatenation of both in-memory and on-disk datasets
This is a prerequisite for the addition of the `add_item` feature (see #1870).
Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using... | [
-0.3308895230293274,
0.0809665098786354,
-0.08752961456775665,
0.12888069450855255,
0.10378753393888474,
0.02138335257768631,
-0.09534361213445663,
0.26424795389175415,
-0.09941396117210388,
0.19631242752075195,
-0.006662738509476185,
0.574398934841156,
-0.027389418333768845,
0.47451540827... |
https://github.com/huggingface/datasets/issues/1877 | Allow concatenation of both in-memory and on-disk datasets | Hi @lhoestq
Thanks, that sounds nice. Can you explain where the issue of the double memory may arise? Also, why is the existing `concatenate_datasets` not sufficient for this purpose? | This is a prerequisite for the addition of the `add_item` feature (see #1870).
Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using the dataset._data_files).
This assumption is used for pickl... | 29 | Allow concatenation of both in-memory and on-disk datasets
This is a prerequisite for the addition of the `add_item` feature (see #1870).
Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using... | [
-0.3921348452568054,
0.10951627790927887,
-0.010248332284390926,
0.3699241578578949,
0.025551289319992065,
0.16483336687088013,
-0.24442575871944427,
0.19942837953567505,
-0.18210329115390778,
0.13663946092128754,
0.04956821724772453,
0.4666678011417389,
0.02009211853146553,
0.361729770898... |
https://github.com/huggingface/datasets/issues/1877 | Allow concatenation of both in-memory and on-disk datasets | Hi @lhoestq,
Will the `add_item` feature also help with lazy writing (or no caching) during `map`/`filter`? | This is a prerequisite for the addition of the `add_item` feature (see #1870).
Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using the dataset._data_files).
This assumption is used for pickl... | 16 | Allow concatenation of both in-memory and on-disk datasets
This is a prerequisite for the addition of the `add_item` feature (see #1870).
Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using... | [
-0.4765586853027344,
0.060836728662252426,
-0.07072306424379349,
0.052182264626026154,
0.07561097294092178,
0.015053396113216877,
-0.2226109504699707,
0.3435713052749634,
0.07333192229270935,
0.08777458220720291,
0.029772810637950897,
0.6137645840644836,
-0.013316240161657333,
0.4729112982... |
https://github.com/huggingface/datasets/issues/1877 | Allow concatenation of both in-memory and on-disk datasets | > Can you explain where the issue of the double memory may arise?
We have to keep each block (in-memory vs memory mapped) separated in order to be able to reload them with pickle.
On the other hand we also need to have the full table from mixed in-memory and memory mapped data in order to iterate or extract data co... | This is a prerequisite for the addition of the `add_item` feature (see #1870).
Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using the dataset._data_files).
This assumption is used for pickl... | 188 | Allow concatenation of both in-memory and on-disk datasets
This is a prerequisite for the addition of the `add_item` feature (see #1870).
Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using... | [
-0.42054805159568787,
0.1177792176604271,
-0.013511070981621742,
0.2829379141330719,
0.07213038951158524,
0.09397990256547928,
-0.14943662285804749,
0.23201605677604675,
-0.16203926503658295,
0.15226586163043976,
0.020633097738027573,
0.5102049708366394,
-0.0248013474047184,
0.357752263545... |
https://github.com/huggingface/datasets/issues/1876 | load_dataset("multi_woz_v22") NonMatchingChecksumError | Thanks for reporting !
This is due to the changes made in the data files in the multiwoz repo: https://github.com/budzianowski/multiwoz/pull/59
I'm opening a PR to update the checksums of the data files. | Hi, it seems that loading the multi_woz_v22 dataset gives a NonMatchingChecksumError.
To reproduce:
`dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')`
This will give the following error:
```
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.N... | 32 | load_dataset("multi_woz_v22") NonMatchingChecksumError
Hi, it seems that loading the multi_woz_v22 dataset gives a NonMatchingChecksumError.
To reproduce:
`dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')`
This will give the following error:
```
raise NonMatchingChecksumErr... | [
-0.22443696856498718,
0.1764705777168274,
-0.028719525784254074,
0.14957229793071747,
0.1895727664232254,
0.0018983627669513226,
0.365058034658432,
0.4820297062397003,
0.25148966908454895,
0.16660656034946442,
-0.0970347672700882,
0.17815938591957092,
-0.08224428445100784,
0.09920161217451... |
https://github.com/huggingface/datasets/issues/1876 | load_dataset("multi_woz_v22") NonMatchingChecksumError | I just merged the fix. It will be available in the new release of `datasets` later today.
You'll be able to get the new version with
```
pip install --upgrade datasets
``` | Hi, it seems that loading the multi_woz_v22 dataset gives a NonMatchingChecksumError.
To reproduce:
`dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')`
This will give the following error:
```
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.N... | 32 | load_dataset("multi_woz_v22") NonMatchingChecksumError
Hi, it seems that loading the multi_woz_v22 dataset gives a NonMatchingChecksumError.
To reproduce:
`dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')`
This will give the following error:
```
raise NonMatchingChecksumErr... | [
-0.22443696856498718,
0.1764705777168274,
-0.028719525784254074,
0.14957229793071747,
0.1895727664232254,
0.0018983627669513226,
0.365058034658432,
0.4820297062397003,
0.25148966908454895,
0.16660656034946442,
-0.0970347672700882,
0.17815938591957092,
-0.08224428445100784,
0.09920161217451... |
https://github.com/huggingface/datasets/issues/1876 | load_dataset("multi_woz_v22") NonMatchingChecksumError | Hi, I still get the error when loading the dataset after upgrading datasets.
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dial... | Hi, it seems that loading the multi_woz_v22 dataset gives a NonMatchingChecksumError.
To reproduce:
`dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')`
This will give the following error:
```
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.N... | 27 | load_dataset("multi_woz_v22") NonMatchingChecksumError
Hi, it seems that loading the multi_woz_v22 dataset gives a NonMatchingChecksumError.
To reproduce:
`dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')`
This will give the following error:
```
raise NonMatchingChecksumErr... | [
-0.22443696856498718,
0.1764705777168274,
-0.028719525784254074,
0.14957229793071747,
0.1895727664232254,
0.0018983627669513226,
0.365058034658432,
0.4820297062397003,
0.25148966908454895,
0.16660656034946442,
-0.0970347672700882,
0.17815938591957092,
-0.08224428445100784,
0.09920161217451... |
https://github.com/huggingface/datasets/issues/1876 | load_dataset("multi_woz_v22") NonMatchingChecksumError | This must be related to https://github.com/budzianowski/multiwoz/pull/72
Those files have changed, let me update the checksums for this dataset.
For now you can use `ignore_verifications=True` in `load_dataset` to skip the checksum verification. | Hi, it seems that loading the multi_woz_v22 dataset gives a NonMatchingChecksumError.
To reproduce:
`dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')`
This will give the following error:
```
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.N... | 31 | load_dataset("multi_woz_v22") NonMatchingChecksumError
Hi, it seems that loading the multi_woz_v22 dataset gives a NonMatchingChecksumError.
To reproduce:
`dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')`
This will give the following error:
```
raise NonMatchingChecksumErr... | [
-0.22443696856498718, 0.1764705777168274, -0.028719525784254074, 0.14957229793071747, 0.1895727664232254, 0.0018983627669513226, 0.365058034658432, 0.4820297062397003, 0.25148966908454895, 0.16660656034946442, -0.0970347672700882, 0.17815938591957092, -0.08224428445100784, 0.09920161217451...
https://github.com/huggingface/datasets/issues/1872 | Adding a new column to the dataset after set_format was called | Hi ! Indeed if you add a column to a formatted dataset, then the new dataset gets a new formatting in which:
```
new formatted columns = (all columns - previously unformatted columns)
```
Therefore the new column is going to be formatted using the `torch` formatting.
If you want your new column to be unformatted... | Hi,
thanks for the nice library. I'm in the process of creating a custom dataset, which has a mix of tensors and lists of strings. I stumbled upon an error and want to know if it's a problem on my side.
I load some lists of strings and integers, then call `data.set_format("torch", columns=["some_integer_column1"... | 67 | Adding a new column to the dataset after set_format was called
Hi,
thanks for the nice library. I'm in the process of creating a custom dataset, which has a mix of tensors and lists of strings. I stumbled upon an error and want to know if it's a problem on my side.
I load some lists of strings and integers, th... | [
-0.16383755207061768, -0.17487557232379913, -0.040173232555389404, -0.049233995378017426, 0.4729720950126648, 0.26253432035446167, 0.6921060085296631, 0.4177585244178772, 0.16875047981739044, -0.2805865705013275, 0.142106831073761, 0.3377911448478699, -0.23403969407081604, -0.0259004738181...
https://github.com/huggingface/datasets/issues/1872 | Adding a new column to the dataset after set_format was called | Ok cool :)
Also I just did a PR to mention this behavior in the documentation | Hi,
thanks for the nice library. I'm in the process of creating a custom dataset, which has a mix of tensors and lists of strings. I stumbled upon an error and want to know if it's a problem on my side.
I load some lists of strings and integers, then call `data.set_format("torch", columns=["some_integer_column1"... | 16 | Adding a new column to the dataset after set_format was called
Hi,
thanks for the nice library. I'm in the process of creating a custom dataset, which has a mix of tensors and lists of strings. I stumbled upon an error and want to know if it's a problem on my side.
I load some lists of strings and integers, th... | [
-0.16383755207061768, -0.17487557232379913, -0.040173232555389404, -0.049233995378017426, 0.4729720950126648, 0.26253432035446167, 0.6921060085296631, 0.4177585244178772, 0.16875047981739044, -0.2805865705013275, 0.142106831073761, 0.3377911448478699, -0.23403969407081604, -0.0259004738181...
https://github.com/huggingface/datasets/issues/1867 | ERROR WHEN USING SET_TRANSFORM() | Hi @alejandrocros it looks like an incompatibility with the current Trainer @sgugger
Indeed currently the Trainer of `transformers` doesn't support a dataset with a transform
It looks like it comes from this line: https://github.com/huggingface/transformers/blob/f51188cbe74195c14c5b3e2e8f10c2f435f9751a/src/transfo... | Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797
However, when I try to use Trainer from transformers with such dataset, it throws an error:
```
TypeError: __init__() missing 1 required positional arg... | 139 | ERROR WHEN USING SET_TRANSFORM()
Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797
However, when I try to use Trainer from transformers with such dataset, it throws an error:
```
TypeError: __init__... | [
-0.1826184242963791, -0.27325186133384705, 0.13930317759513855, 0.18473999202251434, 0.7292691469192505, 0.11970538645982742, 0.6009588241577148, 0.22877617180347443, -0.322784960269928, 0.054477497935295105, 0.23551174998283386, 0.03332686424255371, -0.08834081888198853, -0.02340479567646...
https://github.com/huggingface/datasets/issues/1867 | ERROR WHEN USING SET_TRANSFORM() | FYI that option can be removed with `remove_unused_columns = False` in your `TrainingArguments`, so there is a workaround @alexvaca0 while the fix in `Trainer` is underway.
@lhoestq I think I will just use the line you suggested and if someone is using the columns that are removed in their transform they will need t... | Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797
However, when I try to use Trainer from transformers with such dataset, it throws an error:
```
TypeError: __init__() missing 1 required positional arg... | 75 | ERROR WHEN USING SET_TRANSFORM()
Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797
However, when I try to use Trainer from transformers with such dataset, it throws an error:
```
TypeError: __init__... | [
-0.1826184242963791, -0.27325186133384705, 0.13930317759513855, 0.18473999202251434, 0.7292691469192505, 0.11970538645982742, 0.6009588241577148, 0.22877617180347443, -0.322784960269928, 0.054477497935295105, 0.23551174998283386, 0.03332686424255371, -0.08834081888198853, -0.02340479567646...
https://github.com/huggingface/datasets/issues/1867 | ERROR WHEN USING SET_TRANSFORM() | I've tried your solutions @sgugger @lhoestq and the good news is that it throws no error. However, TPU training is taking forever: in 1 hour it has only trained 1 batch of 8192 elements, which doesn't make much sense... Is it possible that "on the fly" tokenization of batches is slowing down TPU training to that extent... | Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797
However, when I try to use Trainer from transformers with such dataset, it throws an error:
```
TypeError: __init__() missing 1 required positional arg... | 57 | ERROR WHEN USING SET_TRANSFORM()
Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797
However, when I try to use Trainer from transformers with such dataset, it throws an error:
```
TypeError: __init__... | [
-0.1826184242963791, -0.27325186133384705, 0.13930317759513855, 0.18473999202251434, 0.7292691469192505, 0.11970538645982742, 0.6009588241577148, 0.22877617180347443, -0.322784960269928, 0.054477497935295105, 0.23551174998283386, 0.03332686424255371, -0.08834081888198853, -0.02340479567646...
https://github.com/huggingface/datasets/issues/1867 | ERROR WHEN USING SET_TRANSFORM() | I don't know what the value of `padding` is in your lines of code pasted above so I can't say for sure. The first batch will be very slow on TPU since it compiles everything, so that's normal (1 hour is long but 8192 elements is also large). Then if your batches are not of the same lengths, it will recompile everything... | Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797
However, when I try to use Trainer from transformers with such dataset, it throws an error:
```
TypeError: __init__() missing 1 required positional arg... | 92 | ERROR WHEN USING SET_TRANSFORM()
Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797
However, when I try to use Trainer from transformers with such dataset, it throws an error:
```
TypeError: __init__... | [
-0.1826184242963791, -0.27325186133384705, 0.13930317759513855, 0.18473999202251434, 0.7292691469192505, 0.11970538645982742, 0.6009588241577148, 0.22877617180347443, -0.322784960269928, 0.054477497935295105, 0.23551174998283386, 0.03332686424255371, -0.08834081888198853, -0.02340479567646...
https://github.com/huggingface/datasets/issues/1867 | ERROR WHEN USING SET_TRANSFORM() | I have now tried on a GPU and it runs smoothly! Amazing feature, .set_transform() instead of .map()! Now I can pre-train my model without the hard disk limitation. Thanks for your work, HuggingFace team!! :clap:
However, when I try to use Trainer from transformers with such dataset, it throws an error:
```
TypeError: __init__() missing 1 required positional arg... | 36 | ERROR WHEN USING SET_TRANSFORM()
Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797
However, when I try to use Trainer from transformers with such dataset, it throws an error:
```
TypeError: __init__... | [
-0.1826184242963791, -0.27325186133384705, 0.13930317759513855, 0.18473999202251434, 0.7292691469192505, 0.11970538645982742, 0.6009588241577148, 0.22877617180347443, -0.322784960269928, 0.054477497935295105, 0.23551174998283386, 0.03332686424255371, -0.08834081888198853, -0.02340479567646...
https://github.com/huggingface/datasets/issues/1867 | ERROR WHEN USING SET_TRANSFORM() | In the end, to make it work I turned to A100 GPUs instead of TPUs, among other changes. set_transform doesn't work as expected and slows down training considerably even on GPUs, and applying map destroys the disk, as it multiplies the size of the data passed to it by 100 (due to inefficient implementation converting stri... | Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797
However, when I try to use Trainer from transformers with such dataset, it throws an error:
```
TypeError: __init__() missing 1 required positional arg... | 179 | ERROR WHEN USING SET_TRANSFORM()
Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797
However, when I try to use Trainer from transformers with such dataset, it throws an error:
```
TypeError: __init__... | [
-0.1826184242963791, -0.27325186133384705, 0.13930317759513855, 0.18473999202251434, 0.7292691469192505, 0.11970538645982742, 0.6009588241577148, 0.22877617180347443, -0.322784960269928, 0.054477497935295105, 0.23551174998283386, 0.03332686424255371, -0.08834081888198853, -0.02340479567646...
https://github.com/huggingface/datasets/issues/1867 | ERROR WHEN USING SET_TRANSFORM() | Great comment @alexvaca0. I think we could re-open the issue as a reformulation of why it takes so much space to save the Arrow data. Saving 1% of the OSCAR corpus takes more than 600 GB (it breaks when it passes 600 GB because that is the free memory I have at the moment) when the full dataset is 1.3 TB. I have a 1TB... | Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797
However, when I try to use Trainer from transformers with such dataset, it throws an error:
```
TypeError: __init__() missing 1 required positional arg... | 93 | ERROR WHEN USING SET_TRANSFORM()
Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797
However, when I try to use Trainer from transformers with such dataset, it throws an error:
```
TypeError: __init__... | [
-0.1826184242963791, -0.27325186133384705, 0.13930317759513855, 0.18473999202251434, 0.7292691469192505, 0.11970538645982742, 0.6009588241577148, 0.22877617180347443, -0.322784960269928, 0.054477497935295105, 0.23551174998283386, 0.03332686424255371, -0.08834081888198853, -0.02340479567646...
https://github.com/huggingface/datasets/issues/1859 | Error "in void don't know how to serialize this type of index" when saving index to disk when device=0 (GPU) | Hi @corticalstack ! Thanks for reporting. Indeed in the recent versions of Faiss we must use `getDevice` to check if the index is on the GPU.
I'm opening a PR | Error serializing faiss index. Error as follows:
`Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /home/conda/feedstock_root/build_artifacts/faiss-split_1612472484670/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index`
Note:
`torch.cuda.is_availabl... | 29 | Error "in void don't know how to serialize this type of index" when saving index to disk when device=0 (GPU)
Error serializing faiss index. Error as follows:
`Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /home/conda/feedstock_root/build_artifacts/faiss-split_1612472484670/work/faiss/... | [
-0.0712382048368454, -0.3793421685695648, -0.01263939868658781, 0.1242898628115654, 0.3250301480293274, 0.1440669149160385, 0.3014209568500519, 0.5358150005340576, 0.47758591175079346, 0.3618174195289612, 0.06825330853462219, -0.0320640504360199, 0.17395079135894775, -0.08485033363103867, ...
https://github.com/huggingface/datasets/issues/1859 | Error "in void don't know how to serialize this type of index" when saving index to disk when device=0 (GPU) | I fixed this issue. It should work fine now.
Feel free to try it out by installing `datasets` from source.
Otherwise you can wait for the next release of `datasets` (in a few days) | Error serializing faiss index. Error as follows:
`Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /home/conda/feedstock_root/build_artifacts/faiss-split_1612472484670/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index`
Note:
`torch.cuda.is_availabl... | 34 | Error "in void don't know how to serialize this type of index" when saving index to disk when device=0 (GPU)
Error serializing faiss index. Error as follows:
`Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /home/conda/feedstock_root/build_artifacts/faiss-split_1612472484670/work/faiss/... | [
-0.238478884100914, -0.24059061706066132, -0.038640618324279785, 0.11958716809749603, 0.3427581489086151, 0.15249216556549072, 0.34448927640914917, 0.5221390128135681, 0.460819810628891, 0.2513040006160736, -0.07914189249277115, 0.14128589630126953, 0.12161273509263992, -0.0570590533316135...
https://github.com/huggingface/datasets/issues/1859 | Error "in void don't know how to serialize this type of index" when saving index to disk when device=0 (GPU) | Thanks for such a quick fix and merge to master; pip installed from git master and tested, all OK | Error serializing faiss index. Error as follows:
`Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /home/conda/feedstock_root/build_artifacts/faiss-split_1612472484670/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index`
Note:
`torch.cuda.is_availabl... | 17 | Error "in void don't know how to serialize this type of index" when saving index to disk when device=0 (GPU)
Error serializing faiss index. Error as follows:
`Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /home/conda/feedstock_root/build_artifacts/faiss-split_1612472484670/work/faiss/... | [
-0.11467575281858444, -0.33902204036712646, -0.021190937608480453, 0.11056108772754669, 0.3648000657558441, 0.1579401046037674, 0.2933632433414459, 0.5287507772445679, 0.5046115517616272, 0.3331741988658905, 0.019372927024960518, 0.07134953141212463, 0.11176536977291107, -0.063002079725265...