Dataset columns:
- html_url: string (length 48 to 51)
- title: string (length 5 to 268)
- comments: string (length 70 to 51.8k)
- body: string (length 0 to 29.8k)
- comment_length: int64 (values 16 to 1.52k)
- text: string (length 164 to 54.1k)
- embeddings: list
https://github.com/huggingface/datasets/issues/2128
Dialogue action slot name and value are reversed in MultiWoZ 2.2
Hi ! Good catch ! Thanks for reporting. If you are interested in contributing, feel free to open a PR to fix this :)
Hi @yjernite, thank you for adding MultiWoZ 2.2 in the huggingface datasets platform. It is beneficial! I spot an error that the order of Dialogue action slot names and values are reversed. https://github.com/huggingface/datasets/blob/649b2c469779bc4221e1b6969aa2496d63eb5953/datasets/multi_woz_v22/multi_woz_v22.p...
23
Dialogue action slot name and value are reversed in MultiWoZ 2.2 Hi @yjernite, thank you for adding MultiWoZ 2.2 in the huggingface datasets platform. It is beneficial! I spot an error that the order of Dialogue action slot names and values are reversed. https://github.com/huggingface/datasets/blob/649b2c469779...
[ 0.4248684346675873, -0.34569329023361206, 0.017481282353401184, 0.4803558886051178, -0.21027450263500214, -0.0071411458775401115, 0.1827976405620575, 0.11194036900997162, -0.10078837722539902, 0.238650381565094, -0.35799118876457214, 0.13056425750255585, 0.023053772747516632, 0.39279165863...
https://github.com/huggingface/datasets/issues/2123
Problem downloading GEM wiki_auto_asset_turk dataset
Hi, sadly I can't replicate the problem on my Windows machine. Try to update the library to the newest version with:
```bash
pip install git+https://github.com/huggingface/datasets
```
@yjernite ### Summary I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code. ### Steps to reproduce Code snippet: from datasets import load_dataset #dataset = load_dataset('gem', 'web_nlg_en') d...
26
Problem downloading GEM wiki_auto_asset_turk dataset @yjernite ### Summary I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code. ### Steps to reproduce Code snippet: from datasets import load_d...
[ -0.0816880390048027, -0.028079651296138763, -0.04647589847445488, 0.44956374168395996, 0.2806960940361023, 0.11595890671014786, 0.31619468331336975, 0.17786066234111786, 0.24703973531723022, -0.11711520701646805, 0.1766643077135086, 0.17229755222797394, 0.11423476785421371, -0.003353023901...
https://github.com/huggingface/datasets/issues/2123
Problem downloading GEM wiki_auto_asset_turk dataset
Is there an error message ? What stacktrace do you get if you interrupt the execution of the program while downloading ?
@yjernite ### Summary I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code. ### Steps to reproduce Code snippet: from datasets import load_dataset #dataset = load_dataset('gem', 'web_nlg_en') d...
22
Problem downloading GEM wiki_auto_asset_turk dataset @yjernite ### Summary I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code. ### Steps to reproduce Code snippet: from datasets import load_d...
[ -0.13216234743595123, 0.014239229261875153, -0.08156013488769531, 0.43999409675598145, 0.29924476146698, 0.14021821320056915, 0.37947338819503784, 0.14778801798820496, 0.2320191115140915, -0.04262927174568176, 0.23447082936763763, 0.14406724274158478, 0.10741288214921951, -0.02187653072178...
https://github.com/huggingface/datasets/issues/2123
Problem downloading GEM wiki_auto_asset_turk dataset
Sorry for the long time since my last comment, I tried again and don't seem to have the problem anymore, thanks for your support!
@yjernite ### Summary I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code. ### Steps to reproduce Code snippet: from datasets import load_dataset #dataset = load_dataset('gem', 'web_nlg_en') d...
24
Problem downloading GEM wiki_auto_asset_turk dataset @yjernite ### Summary I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code. ### Steps to reproduce Code snippet: from datasets import load_d...
[ -0.06903548538684845, -0.009998085908591747, -0.07155101746320724, 0.4162677824497223, 0.2822791337966919, 0.12228887528181076, 0.3851534426212311, 0.21341688930988312, 0.23347042500972748, -0.11339298635721207, 0.24285034835338593, 0.16283468902111053, 0.13820387423038483, -0.010503296740...
https://github.com/huggingface/datasets/issues/2123
Problem downloading GEM wiki_auto_asset_turk dataset
Great ! I'm closing the issue then. Feel free to re-open if you experience this issue again
@yjernite ### Summary I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code. ### Steps to reproduce Code snippet: from datasets import load_dataset #dataset = load_dataset('gem', 'web_nlg_en') d...
17
Problem downloading GEM wiki_auto_asset_turk dataset @yjernite ### Summary I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code. ### Steps to reproduce Code snippet: from datasets import load_d...
[ -0.06359101831912994, 0.005495988763868809, -0.0699881836771965, 0.39752691984176636, 0.27048158645629883, 0.11177162826061249, 0.40511974692344666, 0.2125454843044281, 0.22199462354183197, -0.09807099401950836, 0.24230410158634186, 0.16528944671154022, 0.1437681019306183, -0.0094376187771...
https://github.com/huggingface/datasets/issues/2116
Creating custom dataset results in error while calling the map() function
Hi, the `_data` attribute is missing due to `MyDataset.__init__` not calling the parent `__init__`. However, I don't think it's a good idea to subclass the `datasets.Dataset` class (e.g. it's kind of dangerous to override `datasets.Dataset.__getitem__`). Instead, it's better to follow the "association over inheritan...
calling `map()` of `datasets` library results into an error while defining a Custom dataset. Reproducible example: ``` import datasets class MyDataset(datasets.Dataset): def __init__(self, sentences): "Initialization" self.samples = sentences def __len__(self): "Denotes the ...
75
Creating custom dataset results in error while calling the map() function calling `map()` of `datasets` library results into an error while defining a Custom dataset. Reproducible example: ``` import datasets class MyDataset(datasets.Dataset): def __init__(self, sentences): "Initialization" ...
[ -0.30952686071395874, 0.19198347628116608, -0.028081491589546204, 0.08091811835765839, 0.2322637438774109, 0.02445319853723049, 0.4063165485858917, 0.37322136759757996, 0.22326715290546417, 0.05796719342470169, 0.10709232836961746, 0.47602227330207825, -0.37421244382858276, -0.051104824990...
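To make the suggestion in the comment above concrete, here is a minimal sketch of the composition approach: wrap the data in a `datasets.Dataset` instead of subclassing it. The sentences, column name and `add_length` helper are made-up placeholders, not code from the issue.

```python
from datasets import Dataset

# hypothetical sentences standing in for the user's data
sentences = ["the first example sentence", "the second example sentence"]

# composition instead of inheritance: wrap the data in a Dataset
my_dataset = Dataset.from_dict({"text": sentences})

def add_length(example):
    example["length"] = len(example["text"].split())
    return example

# map() works here because the Arrow-backed internals (e.g. `_data`) are initialized
my_dataset = my_dataset.map(add_length)
print(my_dataset[0])
```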
https://github.com/huggingface/datasets/issues/2106
WMT19 Dataset for Kazakh-English is not formatted correctly
Hi ! Thanks for reporting. By looking at the raw `news-commentary-v14.en-kk.tsv` file, it looks like there are at least 17 lines with this issue. Moreover, these issues are not always the same: - L97 is only `kk` text and must be appended at the end of the `kk` text of the **next** line - L2897 is only `kk` text an...
In addition to the bug of languages being switched from Issue @415, there are incorrect translations in the dataset because the English-Kazakh translations have a one off formatting error. The News Commentary v14 parallel data set for kk-en from http://www.statmt.org/wmt19/translation-task.html has a bug here: > ...
144
WMT19 Dataset for Kazakh-English is not formatted correctly In addition to the bug of languages being switched from Issue @415, there are incorrect translations in the dataset because the English-Kazakh translations have a one off formatting error. The News Commentary v14 parallel data set for kk-en from http://ww...
[ -0.09533054381608963, -0.5491962432861328, -0.04321814328432083, 0.316206157207489, -0.0860268771648407, 0.010772272013127804, 0.18379110097885132, 0.17220936715602875, -0.1140662431716919, 0.20220860838890076, 0.12895117700099945, 0.09879262000322342, 0.11172693967819214, 0.52284169197082...
https://github.com/huggingface/datasets/issues/2105
Request to remove S2ORC dataset
Hello @kyleclo! Currently, we are getting the data from your bucket, so if you remove it the HF script won't work anymore :) Until you solve things on your end, @lhoestq suggested we just return a warning message when people try to load that dataset from HF. What would you like it to say?
Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work out those hurdles! Thanks!
54
Request to remove S2ORC dataset Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work ou...
[ 0.2497459053993225, -0.3549622595310211, -0.015487821772694588, 0.21698683500289917, 0.05097898095846176, -0.05997942388057709, 0.015363162383437157, 0.17014668881893158, 0.3005090653896332, 0.06499636918306351, -0.31514492630958557, -0.24191607534885406, -0.397151917219162, 0.367079079151...
https://github.com/huggingface/datasets/issues/2105
Request to remove S2ORC dataset
Hi @kyleclo, as of today, you have not removed your bucket data yet, and therefore HuggingFace can download it from there. Is it OK? Are you planning to eventually delete it? Thank you.
Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work out those hurdles! Thanks!
33
Request to remove S2ORC dataset Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work ou...
[ 0.49576011300086975, -0.3337319493293762, -0.08406239002943039, 0.5156826972961426, 0.043857142329216, -0.101533442735672, -0.03269374370574951, 0.032905157655477524, -0.06692080944776535, 0.03527074307203293, -0.44595035910606384, -0.15734633803367615, -0.40666013956069946, 0.307204723358...
https://github.com/huggingface/datasets/issues/2105
Request to remove S2ORC dataset
Hi! Sorry I missed @yjernite 's previous message, thanks for responding! Is there an option where we can keep our data in our bucket, but the HF script no longer pulls data from it?
Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work out those hurdles! Thanks!
34
Request to remove S2ORC dataset Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work ou...
[ 0.3147091567516327, -0.2077653557062149, -0.02390393801033497, 0.32826200127601624, -0.01409437321126461, -0.18290287256240845, 0.017518896609544754, 0.17257513105869293, 0.17710070312023163, 0.162148579955101, -0.40387988090515137, -0.27868321537971497, -0.3127875328063965, 0.457226723432...
https://github.com/huggingface/datasets/issues/2104
Trouble loading wiki_movies
Hi ! `wiki_movies` was added in `datasets==1.2.0`. However it looks like you have `datasets==1.1.2`. To use `wiki_movies`, please update `datasets` with
```
pip install --upgrade datasets
```
Hello, I am trying to load_dataset("wiki_movies") and it gives me this error - `FileNotFoundError: Couldn't find file locally at wiki_movies/wiki_movies.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/wiki_movies/wiki_movies.py or https://s3.amazonaws.com/datasets.huggingfa...
27
Trouble loading wiki_movies Hello, I am trying to load_dataset("wiki_movies") and it gives me this error - `FileNotFoundError: Couldn't find file locally at wiki_movies/wiki_movies.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/wiki_movies/wiki_movies.py or https://s3.am...
[ -0.2073981761932373, -0.013514507561922073, -0.044220417737960815, 0.3873904049396515, 0.28917643427848816, 0.19632720947265625, 0.16524076461791992, 0.29042527079582214, 0.1623350828886032, -0.05911308899521828, -0.06512915343046188, 0.039870455861091614, 0.03908183425664902, 0.0095692258...
https://github.com/huggingface/datasets/issues/2104
Trouble loading wiki_movies
Thanks a lot! That solved it and I was able to upload a model trained on it as well :)
Hello, I am trying to load_dataset("wiki_movies") and it gives me this error - `FileNotFoundError: Couldn't find file locally at wiki_movies/wiki_movies.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/wiki_movies/wiki_movies.py or https://s3.amazonaws.com/datasets.huggingfa...
20
Trouble loading wiki_movies Hello, I am trying to load_dataset("wiki_movies") and it gives me this error - `FileNotFoundError: Couldn't find file locally at wiki_movies/wiki_movies.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/wiki_movies/wiki_movies.py or https://s3.am...
[ -0.11215667426586151, -0.013994269073009491, 0.007302516605705023, 0.4026292562484741, 0.3210337162017822, 0.20782646536827087, 0.24287958443164825, 0.2546977996826172, 0.15914645791053772, -0.1293400675058365, -0.08096617460250854, -0.06365736573934555, -0.017483273521065712, 0.0371454879...
https://github.com/huggingface/datasets/issues/2103
citation, homepage, and license fields of `dataset_info.json` are duplicated many times
Thanks for reporting :) Maybe we can concatenate fields only if they are different. Currently this is done here: https://github.com/huggingface/nlp/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/src/datasets/info.py#L180-L196 This can be a good first contribution to the library. Please comment if you'd like t...
This happens after a `map` operation when `num_proc` is set to `>1`. I tested this by cleaning up the json before running the `map` op on the dataset so it's unlikely it's coming from an earlier concatenation. Example result: ``` "citation": "@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {...
43
citation, homepage, and license fields of `dataset_info.json` are duplicated many times This happens after a `map` operation when `num_proc` is set to `>1`. I tested this by cleaning up the json before running the `map` op on the dataset so it's unlikely it's coming from an earlier concatenation. Example result: ...
[ 0.13812345266342163, 0.013645620085299015, -0.06308352202177048, 0.39427801966667175, -0.005480618681758642, 0.07270900905132294, 0.2289257049560547, 0.3846520185470581, -0.06935484707355499, 0.04314146935939789, 0.07733896374702454, 0.6607301831245422, 0.4834606945514679, -0.1113682314753...
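The fix idea in the comment above (only concatenate metadata fields when they differ) could look roughly like the sketch below; the `merge_unique` helper and its merge rule are illustrative assumptions, not the library's actual implementation.

```python
def merge_unique(field_values, sep="\n\n"):
    """Concatenate per-shard metadata strings, keeping each distinct value only once."""
    seen = []
    for value in field_values:
        if value and value not in seen:
            seen.append(value)
    return sep.join(seen)

# e.g. the citation field collected from 8 shards after map(..., num_proc=8)
citations = ["@ONLINE {wikidump, author = {Wikimedia Foundation}, ...}"] * 8
print(merge_unique(citations))  # one copy instead of eight concatenated copies
```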
https://github.com/huggingface/datasets/issues/2099
load_from_disk takes a long time to load local dataset
Hi ! Can you share more information about the features of your dataset ? You can get them by printing `my_dataset.features`. Can you also share the code of your `map` function ?
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helpin...
32
load_from_disk takes a long time to load local dataset I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.u...
[ -0.3500203490257263, -0.19821491837501526, -0.05445132404565811, 0.2700723111629486, 0.3246716856956482, 0.050147440284490585, 0.42782527208328247, 0.1963977813720703, 0.77520352602005, -0.012891951017081738, 0.1478901356458664, 0.5318046808242798, 0.08684618026018143, -0.08264327794313431...
https://github.com/huggingface/datasets/issues/2099
load_from_disk takes a long time to load local dataset
It is actually just the tokenized `wikipedia` dataset with `input_ids`, `attention_mask`, etc, with one extra column which is a list of integers. The `text` column is removed during tokenization. ``` def add_len_and_seq(example): end_idx = example['input_ids'].index(SEP) example['actual_len'] = end_idx-1 ...
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helpin...
51
load_from_disk takes a long time to load local dataset I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.u...
[ -0.2723698616027832, -0.14686626195907593, -0.03886256366968155, 0.1874406337738037, 0.4000377655029297, 0.0453447625041008, 0.4641449451446533, 0.22945669293403625, 0.6530975699424744, -0.03941304236650467, 0.20311318337917328, 0.5122694373130798, -0.024184035137295723, -0.165299117565155...
https://github.com/huggingface/datasets/issues/2099
load_from_disk takes a long time to load local dataset
Is `PAD_ID` a python integer ? You need all the integers in `example['seq']` to have the same type. Does this work if you remove the `np.uint8` and use python integers instead ?
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helpin...
32
load_from_disk takes a long time to load local dataset I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.u...
[ -0.29434019327163696, -0.22488000988960266, -0.05711609497666359, 0.30335381627082825, 0.31564861536026, 0.04970107600092888, 0.4435778856277466, 0.24287548661231995, 0.6799730062484741, -0.01870105043053627, 0.15296535193920135, 0.5113558769226074, 0.08853223919868469, -0.1085036918520927...
https://github.com/huggingface/datasets/issues/2099
load_from_disk takes a long time to load local dataset
yup I casted it to `np.uint8` outside the function where it was defined. It was originally using python integers.
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helpin...
19
load_from_disk takes a long time to load local dataset I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.u...
[ -0.29974672198295593, -0.2523110508918762, -0.047044314444065094, 0.2852841019630432, 0.32804396748542786, 0.02109954133629799, 0.42626118659973145, 0.2177659422159195, 0.7503628134727478, -0.03946685418486595, 0.15559275448322296, 0.5000487565994263, 0.08297251164913177, -0.08634104579687...
https://github.com/huggingface/datasets/issues/2099
load_from_disk takes a long time to load local dataset
Strangely, even when I manually created `np.arrays` of specific `dtypes`, the types in the final `dataset_info.json` that gets written are still `int64`. Update: I tried creating lists of `int8`s and got the same result.
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helpin...
34
load_from_disk takes a long time to load local dataset I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.u...
[ -0.2173987478017807, -0.1876722127199173, -0.05202405899763107, 0.33667105436325073, 0.32998862862586975, 0.034900959581136703, 0.4153965711593628, 0.2823086082935333, 0.678605854511261, -0.10994819551706314, 0.1326243132352829, 0.5752292275428772, 0.19425342977046967, -0.04714962840080261...
https://github.com/huggingface/datasets/issues/2099
load_from_disk takes a long time to load local dataset
Yes, this is a known issue: #625. We're working on keeping the numpy precision :) To specify the precision of the integers, currently one needs to specify the output features with `.map(..., features=output_features)`.
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helpin...
35
load_from_disk takes a long time to load local dataset I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.u...
[ -0.2853316068649292, -0.2187875360250473, -0.028098158538341522, 0.23458172380924225, 0.32534337043762207, 0.06384453922510147, 0.4536391794681549, 0.20494331419467926, 0.6538361310958862, -0.03381219133734703, 0.10111376643180847, 0.4731716513633728, 0.1274629384279251, -0.169902518391609...
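A small sketch of the workaround named in the comment above, passing explicit output `features` to `map()` so integer columns keep a small dtype; the column names and the `uint8` choice are illustrative.

```python
from datasets import Dataset, Features, Sequence, Value

ds = Dataset.from_dict({"input_ids": [[1, 2, 3], [4, 5, 6]]})

# hypothetical schema: keep the integer columns as uint8 instead of int64
output_features = Features({
    "input_ids": Sequence(Value("uint8")),
    "seq": Sequence(Value("uint8")),
})

def add_seq(example):
    example["seq"] = [x % 2 for x in example["input_ids"]]
    return example

ds = ds.map(add_seq, features=output_features)
print(ds.features)  # both columns are now stored with the requested precision
```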
https://github.com/huggingface/datasets/issues/2099
load_from_disk takes a long time to load local dataset
Do you know what step is taking forever in the code ? What happens if you interrupt the execution of the dataset loading ?
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helpin...
24
load_from_disk takes a long time to load local dataset I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.u...
[ -0.35770922899246216, -0.1911458820104599, -0.054289232939481735, 0.25220462679862976, 0.317802757024765, 0.07410068064928055, 0.41246363520622253, 0.201944962143898, 0.6933247447013855, 0.032103683799505234, 0.19329173862934113, 0.49876102805137634, 0.11994986981153488, -0.085437893867492...
https://github.com/huggingface/datasets/issues/2099
load_from_disk takes a long time to load local dataset
After a synchronous discussion, we found that the cache file sizes have an enormous effect on the loading speed: smaller cache files result in faster load times. `num_proc` controls the number of cache files that are being written and is inversely proportional to the individual file size. In other words, increase `num_...
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helpin...
66
load_from_disk takes a long time to load local dataset I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.u...
[ -0.3226241171360016, -0.1608172059059143, -0.07459092885255814, 0.24950340390205383, 0.29855337738990784, 0.038758523762226105, 0.4221411347389221, 0.28228941559791565, 0.7425704598426819, -0.014208321459591389, 0.12869837880134583, 0.5218667387962341, 0.1710798442363739, -0.06049477681517...
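As a concrete illustration of the observation above, a sketch of writing the cache in more, smaller shards via `num_proc`; the toy data and the `add_len` function are placeholders.

```python
from datasets import Dataset

ds = Dataset.from_dict({"input_ids": [[101, 2023, 102]] * 1_000})

def add_len(example):
    example["actual_len"] = len(example["input_ids"])
    return example

# num_proc=1 would write one large cache file; num_proc=8 writes eight smaller
# ones, which the discussion above found to load back noticeably faster
ds = ds.map(add_len, num_proc=8)
```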
https://github.com/huggingface/datasets/issues/2092
How to disable making arrow tables in load_dataset ?
Hi ! We plan to add streaming features in the future. This should allow loading a dataset instantaneously without generating the arrow table. The trade-off is that accessing examples from a streaming dataset must be done in an iterative way, and with an additional (but hopefully minor) overhead. What do you think a...
Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ?
84
How to disable making arrow tables in load_dataset ? Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ? Hi ! We plan to add streaming features in the future. This should allow to load a dataset instantaneously without generating the arrow table. ...
[ -0.31087878346443176, -0.30462753772735596, -0.10252197086811066, -0.047980062663555145, 0.06876859813928604, -0.0700552687048912, 0.19108650088310242, -0.05083465203642845, 0.1361461877822876, 0.15695080161094666, 0.30370640754699707, 0.19406825304031372, -0.18489858508110046, 0.373087674...
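For reference, the iterative access described in the comment above is what later shipped as streaming mode in `datasets`; a minimal sketch, assuming a recent library version and using OSCAR purely as an example dataset.

```python
from datasets import load_dataset

# no Arrow table is built; examples are fetched on the fly
streamed = load_dataset("oscar", "unshuffled_deduplicated_en",
                        split="train", streaming=True)

# access is iterative, as described above, with a small per-example overhead
for example in streamed.take(3):
    print(example["text"][:80])
```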
https://github.com/huggingface/datasets/issues/2092
How to disable making arrow tables in load_dataset ?
People mainly want this feature either because it takes too much time to make arrow tables, or because they occupy too much space on disk. I think both problems can be solved if we provide the arrow tables themselves on the datasets hub. Can we do this currently @lhoestq ?
Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ?
49
How to disable making arrow tables in load_dataset ? Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ? People mainly want this feature either because it takes too much time too make arrow tables, or they occupy too much memory on the disk. I think ...
[ -0.2959379255771637, -0.24903154373168945, -0.10362107306718826, 0.2489059865474701, -0.04503399506211281, 0.16052572429180145, 0.2928254306316376, -0.021119285374879837, 0.33967775106430054, 0.29514724016189575, 0.03263529762625694, 0.48352712392807007, 0.026250911876559258, 0.01749466545...
https://github.com/huggingface/datasets/issues/2092
How to disable making arrow tables in load_dataset ?
@lhoestq I think the ```try_from_hf_gcs``` provides the same functionality. Which datasets are available on HF GCS? Are all the datasets on the HuggingFace datasets hub made available on GCS automatically?
Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ?
31
How to disable making arrow tables in load_dataset ? Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ? @lhoestq I think the ```try_from_hf_gcs``` provide the same functionality. What all datasets are available on HF GCS? Are all the datasets on hug...
[ -0.06998123228549957, -0.537631094455719, -0.0606943741440773, 0.26995453238487244, 0.03118840791285038, 0.18576914072036743, 0.20472145080566406, -0.07385995984077454, 0.5251612663269043, 0.19155393540859222, -0.20835766196250916, 0.21426133811473846, 0.14290203154087067, 0.06532853096723...
https://github.com/huggingface/datasets/issues/2092
How to disable making arrow tables in load_dataset ?
Only datasets like wikipedia, wiki40b, wiki_dpr and natural questions are available already processed on the HF google storage. This is used to download directly the arrow file instead of building it from the original data files.
Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ?
36
How to disable making arrow tables in load_dataset ? Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ? Only datasets like wikipedia, wiki40b, wiki_dpr and natural questions are available already processed on the HF google storage. This is used to d...
[ -0.1666768193244934, -0.09972251951694489, -0.06023559719324112, 0.25159594416618347, -0.04688329994678497, 0.11431436985731125, 0.27080780267715454, 0.0048356251791119576, 0.4697786569595337, 0.16326767206192017, -0.037074726074934006, 0.1923092007637024, 0.1889045685529709, 0.01693006604...
https://github.com/huggingface/datasets/issues/2092
How to disable making arrow tables in load_dataset ?
@lhoestq How can we make sure that the data we upload on the HuggingFace hub is available in the form of preprocessed arrow files ?
Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ?
23
How to disable making arrow tables in load_dataset ? Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ? @lhoestq How can we make sure that the data we upload on HuggingFace hub is available in form of preprocessed arrow files ?
[ -0.11009698361158371, -0.45456942915916443, -0.12453409284353256, 0.3275224566459656, 0.13850520551204681, 0.06665004789829254, 0.10800863802433014, 0.0051870690658688545, 0.2907288372516632, 0.2163161039352417, -0.07625764608383179, 0.3311558663845062, 0.07582549750804901, 0.2674564719200...
https://github.com/huggingface/datasets/issues/2092
How to disable making arrow tables in load_dataset ?
We're still working on this :) This will be available soon. Users will be able to put their processed arrow files on the Hub.
Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ?
24
How to disable making arrow tables in load_dataset ? Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ? We're still working on this :) This will be available soon Users will be able to put their processed arrow files on the Hub
[ -0.31036797165870667, -0.2093835026025772, -0.14643238484859467, 0.26523539423942566, 0.03456997498869896, 0.06669431924819946, 0.27370980381965637, 0.11328869313001633, 0.3428301513195038, 0.22532975673675537, 0.02257843129336834, 0.5258626937866211, 0.020450782030820847, 0.13482171297073...
https://github.com/huggingface/datasets/issues/2089
Add documentaton for dataset README.md files
Hi ! We are using the [datasets-tagging app](https://github.com/huggingface/datasets-tagging) to select the tags to add. We are also adding the full list of tags in #2107 This covers multilinguality, language_creators, licenses, size_categories and task_categories. In general if you want to add a tag that doesn...
Hi, the dataset README files have special headers. Somehow a documenation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passted to multilinguality? - what should be passed to language_creators? - which valu...
94
Add documentaton for dataset README.md files Hi, the dataset README files have special headers. Somehow a documenation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passted to multilinguality? - what shoul...
[ -0.20096662640571594, 0.252023845911026, -0.13049320876598358, 0.07989351451396942, 0.18093274533748627, 0.35714271664619446, 0.31196412444114685, 0.18498314917087555, 0.1644609123468399, -0.016346648335456848, -0.021375665441155434, 0.3338293135166168, -0.09630246460437775, 0.198384642601...
https://github.com/huggingface/datasets/issues/2089
Add documentaton for dataset README.md files
@lhoestq hmm - ok thanks for the answer. To be honest I am not sure if this issue can be closed now. I just wanted to point out that this should either be documented or linked in the documentation. If you feel like it is (will be) please just close this.
Hi, the dataset README files have special headers. Somehow a documenation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passted to multilinguality? - what should be passed to language_creators? - which valu...
51
Add documentaton for dataset README.md files Hi, the dataset README files have special headers. Somehow a documenation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passted to multilinguality? - what shoul...
[ -0.18345044553279877, 0.25136512517929077, -0.1089651957154274, 0.1974363774061203, 0.11071699857711792, 0.3312235474586487, 0.37459006905555725, 0.17199599742889404, 0.0320846363902092, 0.047318991273641586, 0.02834390103816986, 0.238266721367836, -0.06312797218561172, 0.09270689636468887...
https://github.com/huggingface/datasets/issues/2089
Add documentaton for dataset README.md files
We're still working on the validation+documentation in this. Feel free to keep this issue open till we've added them
Hi, the dataset README files have special headers. Somehow a documenation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passted to multilinguality? - what should be passed to language_creators? - which valu...
19
Add documentaton for dataset README.md files Hi, the dataset README files have special headers. Somehow a documenation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passted to multilinguality? - what shoul...
[ -0.21523351967334747, 0.2649824917316437, -0.12585458159446716, 0.16833429038524628, 0.15319758653640747, 0.293340802192688, 0.29241397976875305, 0.19512218236923218, 0.03196549043059349, 0.020962774753570557, 0.05009423568844795, 0.20491017401218414, -0.07927605509757996, 0.10062263906002...
https://github.com/huggingface/datasets/issues/2089
Add documentaton for dataset README.md files
Hi ! There's the tagging app at https://huggingface.co/datasets/tagging/ that you can use. It shows the list of all the tags you can use. It is based on all the tag sets defined in this folder: https://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources
Hi, the dataset README files have special headers. Somehow a documenation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passted to multilinguality? - what should be passed to language_creators? - which valu...
36
Add documentaton for dataset README.md files Hi, the dataset README files have special headers. Somehow a documenation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passted to multilinguality? - what shoul...
[ -0.28661680221557617, 0.0945327877998352, -0.15884868800640106, 0.19306418299674988, 0.27685070037841797, 0.315613716840744, 0.2661203444004059, 0.19319134950637817, 0.17165808379650116, 0.023346638306975365, -0.12728945910930634, 0.24129396677017212, -0.12643830478191376, 0.35970631241798...
https://github.com/huggingface/datasets/issues/2089
Add documentaton for dataset README.md files
I don't think so. Feel free to take a look at the tags of other models (example [here](https://huggingface.co/bert-base-uncased/blob/main/README.md)). But we should definitely have some docs or an app to write the tags. Feel free to open an issue in the `transformers` repo or in the `huggingface_hub` repo so we can dis...
Hi, the dataset README files have special headers. Somehow a documenation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passted to multilinguality? - what should be passed to language_creators? - which valu...
52
Add documentaton for dataset README.md files Hi, the dataset README files have special headers. Somehow a documenation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passted to multilinguality? - what shoul...
[ -0.13014467060565948, -0.03524341434240341, -0.05321858078241348, 0.24710460007190704, 0.2942429780960083, 0.25594133138656616, 0.41520068049430847, 0.1293422430753708, 0.09287907928228378, -0.04842407628893852, -0.17295514047145844, 0.07008861750364304, -0.17187148332595825, 0.33226606249...
https://github.com/huggingface/datasets/issues/2083
`concatenate_datasets` throws error when changing the order of datasets to concatenate
Hi, this bug is related to `Dataset.{remove_columns, rename_column, flatten}` not propagating the change to the schema metadata when the info features are updated, so this line is the culprit: ```python common_voice_train = common_voice_train.remove_columns(['client_id', 'up_votes', 'down_votes', 'age', 'gender', ...
Hey, I played around with the `concatenate_datasets(...)` function: https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate_datasets#datasets.concatenate_datasets and noticed that when the order in which the datasets are concatenated changes an error is thrown where it shou...
70
`concatenate_datasets` throws error when changing the order of datasets to concatenate Hey, I played around with the `concatenate_datasets(...)` function: https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate_datasets#datasets.concatenate_datasets and noticed that when ...
[ -0.07968545705080032, -0.003557472722604871, 0.04932437837123871, 0.09624506533145905, 0.40014341473579407, 0.2031107246875763, 0.14764630794525146, 0.16858866810798645, -0.4686336815357208, 0.03903469443321228, -0.11334352195262909, 0.22358547151088715, 0.1865086704492569, 0.2279957532882...
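A small sketch reproducing the diagnosis above: after `remove_columns`, the `features` are updated while the Arrow schema metadata may still mention the dropped columns, which is what makes the concatenation order-sensitive. The column names mimic the Common Voice example; the attribute access shown is illustrative and version-dependent.

```python
from datasets import Dataset

ds = Dataset.from_dict({"client_id": ["abc"], "sentence": ["hello"], "up_votes": [1]})
ds = ds.remove_columns(["client_id", "up_votes"])

print(ds.features)  # only "sentence" remains in the features
# per the diagnosis above, the underlying Arrow schema metadata may still
# reference the removed columns in affected versions of the library
print(ds.data.schema.metadata)
```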
https://github.com/huggingface/datasets/issues/2080
Multidimensional arrays in a Dataset
Hi ! This is actually supported ! but not yet in `from_pandas`. You can use `from_dict` for now instead: ```python from datasets import Dataset, Array2D, Features, Value import pandas as pd import numpy as np dataset = { 'bbox': [ np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]), np.array([[1...
Hi, I'm trying to put together a `datasets.Dataset` to be used with LayoutLM which is available in `transformers`. This model requires as input the bounding boxes of each of the token of a sequence. This is when I realized that `Dataset` does not support multi-dimensional arrays as a value for a column in a row. ...
165
Multidimensional arrays in a Dataset Hi, I'm trying to put together a `datasets.Dataset` to be used with LayoutLM which is available in `transformers`. This model requires as input the bounding boxes of each of the token of a sequence. This is when I realized that `Dataset` does not support multi-dimensional array...
[ 0.014902155846357346, -0.25679439306259155, -0.0799354612827301, 0.19153161346912384, 0.44957083463668823, 0.05492754280567169, 0.8991118669509888, 0.10109470039606094, 0.18042439222335815, 0.06783009320497513, -0.11097673326730728, 0.3204250633716583, -0.2819293737411499, 0.05221123993396...
https://github.com/huggingface/datasets/issues/2080
Multidimensional arrays in a Dataset
Thanks for the explanation. With my original DataFrame, I did ``` dataset = dataset.to_dict("list") ``` and then the rest of the transformation from dictionary works just fine.
Hi, I'm trying to put together a `datasets.Dataset` to be used with LayoutLM which is available in `transformers`. This model requires as input the bounding boxes of each of the token of a sequence. This is when I realized that `Dataset` does not support multi-dimensional arrays as a value for a column in a row. ...
27
Multidimensional arrays in a Dataset Hi, I'm trying to put together a `datasets.Dataset` to be used with LayoutLM which is available in `transformers`. This model requires as input the bounding boxes of each of the token of a sequence. This is when I realized that `Dataset` does not support multi-dimensional array...
[ 0.014902155846357346, -0.25679439306259155, -0.0799354612827301, 0.19153161346912384, 0.44957083463668823, 0.05492754280567169, 0.8991118669509888, 0.10109470039606094, 0.18042439222335815, 0.06783009320497513, -0.11097673326730728, 0.3204250633716583, -0.2819293737411499, 0.05221123993396...
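Putting the two comments above together, a minimal sketch of going from a pandas DataFrame with array-valued cells to a `Dataset` with an `Array2D` feature via `to_dict("list")` and `from_dict`; the shapes and column names are illustrative.

```python
import numpy as np
import pandas as pd
from datasets import Dataset, Features, Array2D, Value

df = pd.DataFrame({
    "bbox": [np.array([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]),
             np.array([[5, 6, 7, 8], [5, 6, 7, 8], [5, 6, 7, 8]])],
    "label": [0, 1],
})

features = Features({
    "bbox": Array2D(shape=(3, 4), dtype="int64"),
    "label": Value("int64"),
})

# from_pandas does not support multi-dimensional arrays (per the comment above),
# so convert to a plain dict of lists first and use from_dict
dataset = Dataset.from_dict(df.to_dict("list"), features=features)
print(dataset.features)
```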
https://github.com/huggingface/datasets/issues/2078
MemoryError when computing WER metric
Hi ! Thanks for reporting. We're indeed using `jiwer` to compute the WER. Maybe instead of calling `jiwer.wer` once for all the predictions/references we can compute the WER iteratively to avoid memory issues ? I'm not too familiar with `jiwer` but this must be possible. Currently the code to compute the WER is d...
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File ...
56
MemoryError when computing WER metric Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Tra...
[ 0.12497904151678085, -0.20072919130325317, 0.05871821194887161, 0.32565081119537354, 0.4333941638469696, 0.07378947734832764, -0.2180432677268982, 0.29508787393569946, 0.028830701485276222, 0.4309869706630707, 0.09653076529502869, -0.09463461488485336, -0.3058878779411316, -0.5752208828926...
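A rough sketch of the iterative idea floated above, calling `jiwer` once per prediction/reference pair and aggregating the counts instead of computing the WER over the whole corpus at once; this mirrors the per-pair loop linked later in the thread, but it is an illustration rather than the metric's actual code.

```python
import jiwer

def iterative_wer(predictions, references):
    incorrect = 0
    total = 0
    for prediction, reference in zip(predictions, references):
        # one small edit-distance computation per sentence pair
        measures = jiwer.compute_measures(reference, prediction)
        incorrect += measures["substitutions"] + measures["deletions"] + measures["insertions"]
        total += measures["substitutions"] + measures["deletions"] + measures["hits"]
    return incorrect / total

predictions = ["hello world", "good night moon"]
references = ["hello duck", "good night moon"]
print(iterative_wer(predictions, references))
```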
https://github.com/huggingface/datasets/issues/2078
MemoryError when computing WER metric
Hi, I've just pushed a pull request that is related to this issue https://github.com/huggingface/datasets/pull/2169. It's not iterative, but it should avoid memory errors. It's based on the editdistance python library. An iterative implementation should be as easy as storing scores and words stepwise and dividing at...
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File ...
48
MemoryError when computing WER metric Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Tra...
[ 0.12497904151678085, -0.20072919130325317, 0.05871821194887161, 0.32565081119537354, 0.4333941638469696, 0.07378947734832764, -0.2180432677268982, 0.29508787393569946, 0.028830701485276222, 0.4309869706630707, 0.09653076529502869, -0.09463461488485336, -0.3058878779411316, -0.5752208828926...
https://github.com/huggingface/datasets/issues/2078
MemoryError when computing WER metric
I see, this was solved in the other thread. Ok, let me know if you want to switch the implementation for any reason :)
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File ...
23
MemoryError when computing WER metric Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Tra...
[ 0.12497904151678085, -0.20072919130325317, 0.05871821194887161, 0.32565081119537354, 0.4333941638469696, 0.07378947734832764, -0.2180432677268982, 0.29508787393569946, 0.028830701485276222, 0.4309869706630707, 0.09653076529502869, -0.09463461488485336, -0.3058878779411316, -0.5752208828926...
https://github.com/huggingface/datasets/issues/2078
MemoryError when computing WER metric
Thanks for diving into this anyway ^^' As you said this actually got solved a few days ago
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File ...
18
MemoryError when computing WER metric Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Tra...
[ 0.12497904151678085, -0.20072919130325317, 0.05871821194887161, 0.32565081119537354, 0.4333941638469696, 0.07378947734832764, -0.2180432677268982, 0.29508787393569946, 0.028830701485276222, 0.4309869706630707, 0.09653076529502869, -0.09463461488485336, -0.3058878779411316, -0.5752208828926...
https://github.com/huggingface/datasets/issues/2078
MemoryError when computing WER metric
Someone created an issue https://github.com/jitsi/jiwer/issues/40 at jiwer which shows that this is still a problem in the current version. Would be curious to figure out how this can be fixed by jiwer... :) I assume that it runs out of memory because it's trying to compute the WER over (too many) test samples?
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File ...
53
MemoryError when computing WER metric Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Tra...
[ 0.12497904151678085, -0.20072919130325317, 0.05871821194887161, 0.32565081119537354, 0.4333941638469696, 0.07378947734832764, -0.2180432677268982, 0.29508787393569946, 0.028830701485276222, 0.4309869706630707, 0.09653076529502869, -0.09463461488485336, -0.3058878779411316, -0.5752208828926...
https://github.com/huggingface/datasets/issues/2078
MemoryError when computing WER metric
Hi ! It's computed iteratively so not sure what could go wrong https://github.com/huggingface/datasets/blob/8afd0ba8c27800a55ea69d9fcd702dc97d9c16d8/metrics/wer/wer.py#L100-L106 @NiklasHoltmeyer what version of `datasets` are you running ?
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File ...
22
MemoryError when computing WER metric Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Tra...
[ 0.12497904151678085, -0.20072919130325317, 0.05871821194887161, 0.32565081119537354, 0.4333941638469696, 0.07378947734832764, -0.2180432677268982, 0.29508787393569946, 0.028830701485276222, 0.4309869706630707, 0.09653076529502869, -0.09463461488485336, -0.3058878779411316, -0.5752208828926...
https://github.com/huggingface/datasets/issues/2078
MemoryError when computing WER metric
One possible explanation might be that it is the user who is passing all the sentences in a single element to `wer.compute`? As current implementation iterates over the elements of `predictions` and `references`, this can be problematic if `predictions` and `references` contain a single huge element each. This c...
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File ...
103
MemoryError when computing WER metric Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Tra...
[ 0.12497904151678085, -0.20072919130325317, 0.05871821194887161, 0.32565081119537354, 0.4333941638469696, 0.07378947734832764, -0.2180432677268982, 0.29508787393569946, 0.028830701485276222, 0.4309869706630707, 0.09653076529502869, -0.09463461488485336, -0.3058878779411316, -0.5752208828926...
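To make the explanation above concrete, the metric expects one utterance per list element rather than one giant concatenated string; a minimal sketch of that usage with placeholder sentences.

```python
from datasets import load_metric

wer = load_metric("wer")

# one utterance per element: memory use stays proportional to the longest sentence
predictions = ["hello world", "nice to meet you"]
references = ["hello world", "nice to meet you all"]
print(wer.compute(predictions=predictions, references=references))

# by contrast, passing a single huge element forces one enormous edit-distance
# matrix to be built, which is the failure mode described in the comment above
```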
https://github.com/huggingface/datasets/issues/2078
MemoryError when computing WER metric
Hi all, in my case I was using and older version of datasets and, as @albertvillanova points out, passing the full list of sentences for the metric calculation. The problem was in the way jiwer implements WER, as it tries to compute WER for the full list at once instead of doing it element-wise. I think that with th...
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File ...
82
MemoryError when computing WER metric Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Tra...
[ 0.12497904151678085, -0.20072919130325317, 0.05871821194887161, 0.32565081119537354, 0.4333941638469696, 0.07378947734832764, -0.2180432677268982, 0.29508787393569946, 0.028830701485276222, 0.4309869706630707, 0.09653076529502869, -0.09463461488485336, -0.3058878779411316, -0.5752208828926...
https://github.com/huggingface/datasets/issues/2078
MemoryError when computing WER metric
@lhoestq I was using Datasets==1.5.0; with 1.6.1 it worked (at least the first run), but 1.5.0 is not compatible with my preprocessing. I can't save my dataset to a parquet file while using the latest datasets version -> ``` File "../preprocess_dataset.py", line 132, in <module> pq.write_table(train_dataset.da...
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File ...
96
MemoryError when computing WER metric Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Tra...
[ 0.12497904151678085, -0.20072919130325317, 0.05871821194887161, 0.32565081119537354, 0.4333941638469696, 0.07378947734832764, -0.2180432677268982, 0.29508787393569946, 0.028830701485276222, 0.4309869706630707, 0.09653076529502869, -0.09463461488485336, -0.3058878779411316, -0.5752208828926...
https://github.com/huggingface/datasets/issues/2076
Issue: Dataset download error
Hi @XuhuiZhou, thanks for reporting this issue. Indeed, the old links are no longer valid (404 Not Found error), and the script must be updated with the new links to Google Drive.
The download link in `iwslt2017.py` file does not seem to work anymore. For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` Would be nice if we could modify it script and use the new downloadable link?
32
Issue: Dataset download error The download link in `iwslt2017.py` file does not seem to work anymore. For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` Would be nice if we could modify it script and use the new downloadable link? Hi @Xuhui...
[ -0.23376265168190002, 0.14580847322940826, 0.0028068546671420336, -0.03095102868974209, -0.027438335120677948, 0.08244875073432922, 0.37594541907310486, 0.20595739781856537, 0.15375971794128418, 0.014403488487005234, 0.1848478466272354, -0.17239689826965332, 0.2339598834514618, 0.296917468...
https://github.com/huggingface/datasets/issues/2076
Issue: Dataset download error
It would be nice to update the urls indeed ! To do this, you just need to replace the urls in `iwslt2017.py` and then update the dataset_infos.json file with
```
datasets-cli test ./datasets/iwslt2017 --all_configs --save_infos --ignore_verifications
```
The download link in `iwslt2017.py` file does not seem to work anymore. For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` Would be nice if we could modify it script and use the new downloadable link?
37
Issue: Dataset download error The download link in `iwslt2017.py` file does not seem to work anymore. For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` Would be nice if we could modify it script and use the new downloadable link? It would ...
[ -0.2658805549144745, 0.12000533193349838, -0.08369525521993637, -0.10603183507919312, -0.05615135282278061, 0.013199212029576302, 0.11352710425853729, 0.3138657808303833, 0.1883038729429245, -0.06657436490058899, 0.09627290070056915, 0.04737386107444763, 0.2630380690097809, 0.2957362234592...
https://github.com/huggingface/datasets/issues/2076
Issue: Dataset download error
Is this a command to update my local files or to fix the file in the GitHub repo in general? (I am not so familiar with the datasets-cli command here.) I also took a brief look at the **Sharing your dataset** section; it looks like I could fix that locally and push it to the repo? I guess we are in the "canonical" category?
The download link in `iwslt2017.py` file does not seem to work anymore. For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` Would be nice if we could modify it script and use the new downloadable link?
58
Issue: Dataset download error The download link in `iwslt2017.py` file does not seem to work anymore. For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` Would be nice if we could modify it script and use the new downloadable link? Is this a...
[ -0.3287453353404999, 0.22784507274627686, 0.017168324440717697, -0.1485101282596588, -0.027821384370326996, 0.12953564524650574, 0.2159445583820343, 0.4029396176338196, 0.1252269148826599, -0.053337108343839645, 0.1292184740304947, -0.14838309586048126, 0.36618125438690186, 0.1626038402318...
https://github.com/huggingface/datasets/issues/2076
Issue: Dataset download error
This command will update your local file. Then you can open a Pull Request to push your fix to the github repo :) And yes you are right, it is a "canonical" dataset, i.e. a dataset script defined in this github repo (as opposed to dataset repositories of users on the huggingface hub)
The download link in `iwslt2017.py` file does not seem to work anymore. For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` Would be nice if we could modify it script and use the new downloadable link?
53
Issue: Dataset download error The download link in `iwslt2017.py` file does not seem to work anymore. For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` Would be nice if we could modify it script and use the new downloadable link? This comm...
[ -0.2335670292377472, -0.11886042356491089, 0.01896366849541664, -0.13955460488796234, 0.01611192338168621, -0.0059838020242750645, 0.17576399445533752, 0.3258334994316101, 0.20678576827049255, -0.049148183315992355, 0.12043099850416183, -0.044896241277456284, 0.2096991240978241, 0.45809403...
https://github.com/huggingface/datasets/issues/2076
Issue: Dataset download error
Hi, thanks for the answer. I gave a try to the problem today. But I encountered an upload error: ``` git push -u origin fix_link_iwslt Enter passphrase for key '/home2/xuhuizh/.ssh/id_rsa': ERROR: Permission to huggingface/datasets.git denied to XuhuiZhou. fatal: Could not read from remote repository. P...
The download link in `iwslt2017.py` file does not seem to work anymore. For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` Would be nice if we could modify it script and use the new downloadable link?
148
Issue: Dataset download error The download link in `iwslt2017.py` file does not seem to work anymore. For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` Would be nice if we could modify it script and use the new downloadable link? Hi, thank...
[ -0.1731773167848587, -0.010390166193246841, 0.05730966478586197, -0.039356060326099396, 0.05381635203957558, -0.012416395358741283, 0.25629982352256775, 0.31276261806488037, 0.11338663101196289, 0.020173516124486923, 0.03134820982813835, 0.018982509151101112, 0.3240015208721161, 0.24096590...
https://github.com/huggingface/datasets/issues/2076
Issue: Dataset download error
Hi ! To create a PR on this repo you must fork it and create a branch on your fork. See how to fork the repo [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#start-by-preparing-your-environment). And to make the command work without the `ExpectedMoreDownloadedFiles` error, you just need t...
The download link in `iwslt2017.py` file does not seem to work anymore. For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` Would be nice if we could modify it script and use the new downloadable link?
45
Issue: Dataset download error The download link in `iwslt2017.py` file does not seem to work anymore. For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` Would be nice if we could modify it script and use the new downloadable link? Hi ! To c...
[ -0.2560880780220032, -0.09676952660083771, 0.017853714525699615, -0.06596142053604126, 0.09448862075805664, 0.0495946891605854, 0.19209611415863037, 0.2611154317855835, 0.16361717879772186, 0.04193791374564171, 0.005067329853773117, -0.013362784869968891, 0.252541720867157, 0.3941254019737...
https://github.com/huggingface/datasets/issues/2076
Issue: Dataset download error
Hi @XuhuiZhou, As @lhoestq has well explained, you need to fork HF's repository, create a feature branch in your fork, push your changes to it and then open a Pull Request to HF's upstream repository. This is so because at HuggingFace Datasets we follow a development model called "Fork and Pull Model". You can find ...
The download link in `iwslt2017.py` file does not seem to work anymore. For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` Would be nice if we could modify it script and use the new downloadable link?
126
Issue: Dataset download error The download link in `iwslt2017.py` file does not seem to work anymore. For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` Would be nice if we could modify it script and use the new downloadable link? Hi @Xuhui...
[ -0.2110830694437027, -0.23542657494544983, 0.0539524219930172, -0.12035335600376129, -0.01603786274790764, 0.05731751769781113, 0.07369111478328705, 0.3318699896335602, 0.2753100097179413, 0.036061037331819534, -0.1463584452867508, -0.14004148542881012, 0.32786673307418823, 0.3584643304347...
https://github.com/huggingface/datasets/issues/2075
ConnectionError: Couldn't reach common_voice.py
Hi @LifaSun, thanks for reporting this issue. Sometimes, GitHub has some connectivity problems. Could you confirm that the problem persists?
When I run: from datasets import load_dataset, load_metric common_voice_train = load_dataset("common_voice", "zh-CN", split="train+validation") common_voice_test = load_dataset("common_voice", "zh-CN", split="test") Got: ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/ma...
20
ConnectionError: Couldn't reach common_voice.py When I run: from datasets import load_dataset, load_metric common_voice_train = load_dataset("common_voice", "zh-CN", split="train+validation") common_voice_test = load_dataset("common_voice", "zh-CN", split="test") Got: ConnectionError: Couldn't reach https:/...
[ -0.4876461625099182, -0.08469899743795395, -0.07917772978544235, -0.0048032524064183235, 0.39035606384277344, 0.012986864894628525, 0.2704301178455353, 0.3016327917575836, -0.06318770349025726, 0.30897533893585205, -0.21267545223236084, -0.13470558822155, 0.1226181760430336, 0.017010109499...
https://github.com/huggingface/datasets/issues/2070
ArrowInvalid issue for squad v2 dataset
Hi ! This error happens when you use `map` in batched mode and then your function doesn't return the same number of values per column. Indeed since you're using `map` in batched mode, `prepare_validation_features` must take a batch as input (i.e. a dictionary of multiple rows of the dataset), and return a batch. ...
Hello, I am using the huggingface official question answering example notebook (https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb). In the prepare_validation_features function, I made some modifications to tokenize a new set of quesions with the original co...
80
ArrowInvalid issue for squad v2 dataset Hello, I am using the huggingface official question answering example notebook (https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb). In the prepare_validation_features function, I made some modifications to tokenize ...
[ 0.19508683681488037, -0.5012048482894897, 0.0041139270178973675, 0.2735457718372345, 0.28309527039527893, -0.06308548897504807, 0.29188692569732666, 0.12445568293333054, -0.3849656879901886, 0.1932285875082016, 0.06840679794549942, 0.2712973952293396, 0.11750511825084686, -0.27016255259513...
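To make the rule above concrete, here is a minimal, simplified sketch (not the notebook's actual `prepare_validation_features`) of a batched `map` function whose output columns all share the same length; the number of returned rows may differ from the input batch size, as long as every returned column has that same length:
```
from datasets import load_dataset

squad = load_dataset("squad_v2", split="validation[:100]")

# In batched mode the function receives a dict of lists and must return lists
# that are all the same length for every output column.
def prepare_validation_features(batch):
    return {
        "example_id": list(batch["id"]),
        "question_length": [len(q) for q in batch["question"]],
    }

features = squad.map(
    prepare_validation_features,
    batched=True,
    remove_columns=squad.column_names,
)
print(features[0])
```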
https://github.com/huggingface/datasets/issues/2068
PyTorch not available error on SageMaker GPU docker though it is installed
Hey @sivakhno, what does your `requirements.txt` look like to install the `datasets` library, and which version of it are you running? Can you try to install `datasets>=1.4.0`?
I get en error when running data loading using SageMaker SDK ``` File "main.py", line 34, in <module> run_training() File "main.py", line 25, in run_training dm.setup('fit') File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn return fn(*a...
27
PyTorch not available error on SageMaker GPU docker though it is installed I get en error when running data loading using SageMaker SDK ``` File "main.py", line 34, in <module> run_training() File "main.py", line 25, in run_training dm.setup('fit') File "/opt/conda/lib/python3.6/site-packages/p...
[ -0.35786423087120056, 0.062430910766124725, -0.00355949392542243, 0.09239524602890015, 0.2011190950870514, 0.034648358821868896, 0.5519383549690247, 0.32340729236602783, 0.3856208622455597, 0.09295836091041565, 0.0008825983386486769, 0.39658182859420776, -0.03543062508106232, 0.19524647295...
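A rough check one could run inside the SageMaker container, to see whether the Python environment that executes `main.py` can actually see `torch` and which `datasets` version is installed (this mirrors, only roughly, how `datasets` detects torch at import time; it is a diagnostic sketch, not library code):
```
import importlib.util

# If this prints False, the training image does not expose torch to this environment.
print("torch importable:", importlib.util.find_spec("torch") is not None)

import datasets
print("datasets version:", datasets.__version__)
```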
https://github.com/huggingface/datasets/issues/2068
PyTorch not available error on SageMaker GPU docker though it is installed
Hi @philschmid - thanks for the suggestion. I am using `datasets==1.4.1`. I have also tried using `torch=1.6.0` (docker `763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.6.0-gpu-py3`), but the error is the same.
I get en error when running data loading using SageMaker SDK ``` File "main.py", line 34, in <module> run_training() File "main.py", line 25, in run_training dm.setup('fit') File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn return fn(*a...
25
PyTorch not available error on SageMaker GPU docker though it is installed I get en error when running data loading using SageMaker SDK ``` File "main.py", line 34, in <module> run_training() File "main.py", line 25, in run_training dm.setup('fit') File "/opt/conda/lib/python3.6/site-packages/p...
[ -0.35786423087120056, 0.062430910766124725, -0.00355949392542243, 0.09239524602890015, 0.2011190950870514, 0.034648358821868896, 0.5519383549690247, 0.32340729236602783, 0.3856208622455597, 0.09295836091041565, 0.0008825983386486769, 0.39658182859420776, -0.03543062508106232, 0.19524647295...
https://github.com/huggingface/datasets/issues/2068
PyTorch not available error on SageMaker GPU docker though it is installed
Could you paste the code you use to start your training job and the fine-tuning script you run?
I get en error when running data loading using SageMaker SDK ``` File "main.py", line 34, in <module> run_training() File "main.py", line 25, in run_training dm.setup('fit') File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn return fn(*a...
17
PyTorch not available error on SageMaker GPU docker though it is installed I get en error when running data loading using SageMaker SDK ``` File "main.py", line 34, in <module> run_training() File "main.py", line 25, in run_training dm.setup('fit') File "/opt/conda/lib/python3.6/site-packages/p...
[ -0.35786423087120056, 0.062430910766124725, -0.00355949392542243, 0.09239524602890015, 0.2011190950870514, 0.034648358821868896, 0.5519383549690247, 0.32340729236602783, 0.3856208622455597, 0.09295836091041565, 0.0008825983386486769, 0.39658182859420776, -0.03543062508106232, 0.19524647295...
https://github.com/huggingface/datasets/issues/2067
Multiprocessing windows error
Hi ! Thanks for reporting. This looks like a bug, could you try to provide a minimal code example that reproduces the issue ? This would be very helpful ! Otherwise I can try to run the wav2vec2 code above on my side but probably not this week..
As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on windows the whole Python environment crashes and hanging in loop. For example at the map_to_array part. An error occures because the cache file already exists and windows throws and error. After this the log c...
48
Multiprocessing windows error As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on windows the whole Python environment crashes and hanging in loop. For example at the map_to_array part. An error occures because the cache file already exists and windows throws...
[ -0.058142274618148804, -0.453678697347641, -0.03556206449866295, 0.2569918632507324, 0.08609466999769211, -0.003773934906348586, 0.11553127318620682, -0.06296014040708542, 0.04080399125814438, 0.31918707489967346, 0.25381267070770264, 0.0701289102435112, -0.3072563707828522, -0.04254566133...
https://github.com/huggingface/datasets/issues/2067
Multiprocessing windows error
``` from datasets import load_dataset dataset = load_dataset('glue', 'mrpc', split='train') updated_dataset = dataset.map(lambda example: {'sentence1': 'My sentence: ' + example['sentence1']}, num_proc=4) ```
As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on windows the whole Python environment crashes and hanging in loop. For example at the map_to_array part. An error occures because the cache file already exists and windows throws and error. After this the log c...
22
Multiprocessing windows error As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on windows the whole Python environment crashes and hanging in loop. For example at the map_to_array part. An error occures because the cache file already exists and windows throws...
[ -0.2164212316274643, -0.26953190565109253, -0.030776390805840492, 0.18502292037010193, 0.17840217053890228, 0.13764163851737976, 0.05756871774792671, -0.002502218121662736, 0.008717830292880535, 0.2766422927379608, 0.1489744782447815, 0.16106007993221283, -0.2854521572589874, -0.1023182272...
https://github.com/huggingface/datasets/issues/2067
Multiprocessing windows error
I was able to copy some of the shell output. This is repeating every half second. Win 10, Anaconda with Python 3.8, datasets installed from the main branch ``` File "C:\Users\flozi\anaconda3\envs\wav2vec\lib\site-packages\multiprocess\spawn.py", line 287, in _fixup_main_from_path _check_not_importing_main()...
As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on windows the whole Python environment crashes and hanging in loop. For example at the map_to_array part. An error occures because the cache file already exists and windows throws and error. After this the log c...
433
Multiprocessing windows error As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on windows the whole Python environment crashes and hanging in loop. For example at the map_to_array part. An error occures because the cache file already exists and windows throws...
[ -0.18814674019813538, -0.3144049346446991, -0.03913525119423866, 0.19852514564990997, 0.11119993031024933, 0.0044225119054317474, 0.14737991988658905, 0.05224272236227989, 0.0072059049271047115, 0.1542070508003235, 0.03806169331073761, -0.03699848800897598, -0.23013317584991455, 0.00494796...
https://github.com/huggingface/datasets/issues/2067
Multiprocessing windows error
Thanks this is really helpful ! I'll try to reproduce on my side and come back to you
As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on windows the whole Python environment crashes and hanging in loop. For example at the map_to_array part. An error occures because the cache file already exists and windows throws and error. After this the log c...
18
Multiprocessing windows error As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on windows the whole Python environment crashes and hanging in loop. For example at the map_to_array part. An error occures because the cache file already exists and windows throws...
[ -0.15062828361988068, -0.46654391288757324, -0.08509168028831482, 0.23130783438682556, 0.0865277647972107, 0.010160233825445175, 0.03361395001411438, -0.030732620507478714, 0.023088261485099792, 0.28429025411605835, 0.12158586829900742, 0.06673750281333923, -0.30628910660743713, 0.01604949...
https://github.com/huggingface/datasets/issues/2067
Multiprocessing windows error
if __name__ == '__main__': Adding this line before calling the map function stops the error, but the script still repeats endlessly
As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on windows the whole Python environment crashes and hanging in loop. For example at the map_to_array part. An error occures because the cache file already exists and windows throws and error. After this the log c...
20
Multiprocessing windows error As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on windows the whole Python environment crashes and hanging in loop. For example at the map_to_array part. An error occures because the cache file already exists and windows throws...
[ -0.11089110374450684, -0.43928685784339905, -0.08853832632303238, 0.20902127027511597, 0.06226322054862976, 0.03885630518198013, 0.005569825414568186, 0.03431252762675285, 0.13055506348609924, 0.3507120609283447, 0.18459157645702362, 0.11303388327360153, -0.27034905552864075, -0.0625601187...
https://github.com/huggingface/datasets/issues/2067
Multiprocessing windows error
Indeed you needed `if __name__ == '__main__'` since according to [this stackoverflow post](https://stackoverflow.com/a/18205006): > On Windows the subprocesses will import (i.e. execute) the main module at start. You need to insert an if __name__ == '__main__': guard in the main module to avoid creating subprocesses ...
As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on windows the whole Python environment crashes and hanging in loop. For example at the map_to_array part. An error occures because the cache file already exists and windows throws and error. After this the log c...
59
Multiprocessing windows error As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on windows the whole Python environment crashes and hanging in loop. For example at the map_to_array part. An error occures because the cache file already exists and windows throws...
[ -0.13518191874027252, -0.4226137101650238, -0.1194845661520958, -0.017614144831895828, 0.10806332528591156, -0.096012644469738, 0.1320638656616211, 0.030847087502479553, -0.10139179229736328, 0.36022043228149414, -0.05518867075443268, 0.0946902334690094, -0.3736594617366791, -0.14211730659...
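Putting the two pieces together, a guarded version of the earlier repro snippet (a sketch of the fix on the user side, not a change to the library) looks like this:
```
from datasets import load_dataset

def add_prefix(example):
    return {"sentence1": "My sentence: " + example["sentence1"]}

if __name__ == "__main__":
    # On Windows, multiprocessing spawns fresh interpreters that re-import this
    # module, so anything that starts worker processes must sit behind this guard.
    dataset = load_dataset("glue", "mrpc", split="train")
    updated_dataset = dataset.map(add_prefix, num_proc=4)
    print(updated_dataset[0]["sentence1"])
```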
https://github.com/huggingface/datasets/issues/2067
Multiprocessing windows error
``` Traceback (most recent call last): File "C:\Users\flozi\anaconda3\envs\wav2vec\lib\shutil.py", line 791, in move os.rename(src, real_dst) FileExistsError: [WinError 183] Eine Datei kann nicht erstellt werden, wenn sie bereits vorhanden ist: 'D:\\huggingfacecache\\common_voice\\de\\6.1.0\\0041e06ab061b91d0...
As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on windows the whole Python environment crashes and hanging in loop. For example at the map_to_array part. An error occures because the cache file already exists and windows throws and error. After this the log c...
224
Multiprocessing windows error As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on windows the whole Python environment crashes and hanging in loop. For example at the map_to_array part. An error occures because the cache file already exists and windows throws...
[ -0.012636869214475155, -0.3874340057373047, -0.05402319133281708, 0.20228013396263123, 0.13534730672836304, 0.018755413591861725, 0.09084805846214294, 0.0778498724102974, 0.03930342569947243, 0.26125743985176086, 0.011653919704258442, -0.07170090824365616, -0.3001479208469391, -0.077501513...
https://github.com/huggingface/datasets/issues/2067
Multiprocessing windows error
Usually an OSError on an arrow file on Windows means that the file is currently open as a dataset object, so you can't overwrite it until the dataset object falls out of scope. Can you make sure that there's no dataset object that loaded the `cache-9b4f203a63742dfc.arrow` file ?
As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on windows the whole Python environment crashes and hanging in loop. For example at the map_to_array part. An error occures because the cache file already exists and windows throws and error. After this the log c...
47
Multiprocessing windows error As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on windows the whole Python environment crashes and hanging in loop. For example at the map_to_array part. An error occures because the cache file already exists and windows throws...
[ -0.30129924416542053, -0.2888678014278412, -0.08387667685747147, 0.19855473935604095, 0.024326423183083534, -0.05253223329782486, 0.06525416672229767, -0.014697358012199402, 0.08489970862865448, 0.2829810678958893, 0.17037898302078247, 0.26384225487709045, -0.31195977330207825, -0.10786546...
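As a small illustration of this point (the dataset and transform are arbitrary), a `Dataset` object keeps its arrow cache files referenced, and `cache_files` shows which ones; on Windows such a file cannot be replaced until the reference is dropped:
```
from datasets import load_dataset

dataset = load_dataset("glue", "mrpc", split="train")
mapped = dataset.map(lambda ex: {"sentence1": ex["sentence1"].lower()})

# The cache-<fingerprint>.arrow file(s) currently backing `mapped`:
print(mapped.cache_files)

# Drop the reference before re-running a map that would rewrite that file.
del mapped
```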
https://github.com/huggingface/datasets/issues/2067
Multiprocessing windows error
Now I understand. The error occurs because the script got restarted in another thread, so the object is already loaded. I still don't know why a new thread starts the whole script again
As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on windows the whole Python environment crashes and hanging in loop. For example at the map_to_array part. An error occures because the cache file already exists and windows throws and error. After this the log c...
34
Multiprocessing windows error As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on windows the whole Python environment crashes and hanging in loop. For example at the map_to_array part. An error occures because the cache file already exists and windows throws...
[ -0.17128004133701324, -0.6625726222991943, -0.01566753163933754, 0.3150125741958618, -0.01036785077303648, -0.025893472135066986, 0.13373900949954987, -0.13210538029670715, 0.08812709152698517, 0.26714739203453064, 0.23279546201229095, 0.06817088276147842, -0.1253812611103058, -0.067375756...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Hi ! Thanks for reporting. Currently there's no way to specify this. When loading/processing a dataset, the arrow file is written using a temporary file. Then once writing is finished, it's moved to the cache directory (using `shutil.move` [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356...
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
67
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.1139015406370163, 0.28156858682632446, -0.07893498986959457, 0.11773616820573807, -0.02596750296652317, 0.09056612104177475, 0.40601417422294617, 0.11466384679079056, -0.0934799313545227, -0.0789993479847908, -0.17398102581501007, 0.12208788841962814, -0.059026699513196945, -0.209067061...
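A quick illustration of the root cause described above: files created through `tempfile` (as `NamedTemporaryFile` does) come up with owner-only permissions, so the arrow file keeps them after `shutil.move`, regardless of the user's umask. This is only a demonstration, not library code:
```
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    path = tmp.name

# tempfile creates the file with mode 0o600, i.e. no group/other access.
print(oct(os.stat(path).st_mode & 0o777))  # typically '0o600'
os.remove(path)
```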
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Hi @lhoestq, I looked into this and yes you're right. The `NamedTemporaryFile` is always created with mode 0600, which prevents group from reading the file. Should we change the permissions of `tmp_file.name` [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arro...
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
47
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ 0.06293822079896927, 0.03598101809620857, -0.11017099767923355, 0.2581419050693512, 0.060199372470378876, 0.13293161988258362, 0.3463631570339203, 0.09343539923429489, -0.2962671220302582, -0.03030315600335598, -0.28031808137893677, -0.0526035837829113, 0.08457192778587341, -0.313435792922...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Good idea :) we could even update the permissions after the file has been moved by shutil.move [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1899) and [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb...
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
45
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.14866787195205688, 0.14043845236301422, -0.07720549404621124, 0.0294159147888422, 0.12957949936389923, 0.04635808989405632, 0.3478870689868927, 0.03462385758757591, -0.12994466722011566, 0.02007589489221573, -0.13891661167144775, -0.0676591619849205, 0.018578823655843735, -0.27129250764...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Would it be possible to actually set the umask based on a user-provided argument? For example, a popular use case my team has is using a shared file-system for processing datasets. This may involve writing/deleting other files, or changing filenames, which a `-rw-r--r--` permission wouldn't fix.
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
45
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.1471501886844635, 0.09206648916006088, -0.0505225732922554, -0.05057569965720177, -0.10244075208902359, 0.0650751143693924, 0.2770627439022064, 0.03331337869167328, -0.12057279795408249, 0.07548259943723679, -0.08884113281965256, -0.12017891556024551, 0.09109780192375183, -0.42197000980...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Note that you can get the cache files of a dataset with the `cache_files` attribute. Then you can `chmod` those files and all the other cache files in the same directory. Moreover, we can probably keep the same permissions after each transform. This way you just need to set the permissions once after doing `load_da...
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
75
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.11946775019168854, 0.25942355394363403, -0.09670412540435791, 0.030385196208953857, 0.023178813979029655, 0.10834978520870209, 0.3391302525997162, 0.21172547340393066, -0.33623436093330383, -0.03682338446378708, -0.1443874090909958, 0.022060297429561615, 0.010938477702438831, -0.2525074...
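In the meantime, a rough sketch of that workaround (assuming each `cache_files` entry exposes a `filename` key, and using `0o664` purely as an example mode):
```
import os
from datasets import load_dataset

dataset = load_dataset("glue", "mrpc", split="train")

# Make the cached arrow files group-readable/writable so teammates on a
# shared filesystem can reuse them without resetting permissions by hand.
for cache_file in dataset.cache_files:
    os.chmod(cache_file["filename"], 0o664)
```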
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
This means we'll check the permissions of other `cache_files` already created for a dataset before setting permissions for new `cache_files`?
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
20
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.1428571343421936, 0.2471923977136612, -0.13657605648040771, 0.02183777280151844, -0.06113337725400925, 0.1519797146320343, 0.2972185015678406, 0.17494075000286102, -0.21720294654369354, -0.0215460192412138, -0.08393676578998566, -0.05868104472756386, 0.03178391605615616, -0.285692691802...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
> This way you just need to set the permissions once after doing load_dataset for example, and then all the new transformed cached files will have the same permissions. I was referring to this. Ensuring that newly generated `cache_files` have the same permissions
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
43
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.164994478225708, 0.2504473030567169, -0.1311803162097931, 0.0018535733688622713, 0.01640293002128601, 0.06968562304973602, 0.3859350383281708, 0.20129233598709106, -0.24317564070224762, -0.026852549985051155, -0.08324296027421951, 0.03175567463040352, 0.00883231870830059, -0.20328457653...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Yes, exactly. I imagine users can first do `load_dataset`, then chmod the arrow files. After that all the new cache files could have the same permissions as the first arrow files. Opinions on this ?
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
36
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.09823188930749893, 0.26637938618659973, -0.12864334881305695, 0.08532853424549103, -0.13201995193958282, 0.04467393457889557, 0.3324170708656311, 0.11853108555078506, -0.23568899929523468, -0.06214803084731102, -0.09892106056213379, 0.037031352519989014, 0.01958274468779564, -0.24613913...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Sounds nice, but I feel this is a sub-part of the approach mentioned by @siddk. Instead of letting users set new permissions themselves first and then making sure newly generated files have the same permissions, why don't we just ask the user up front what they want? What are your thoughts?
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
51
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.02150663733482361, 0.30346786975860596, -0.1613316833972931, -0.06145039200782776, -0.09429995715618134, 0.011614518240094185, 0.3593587279319763, 0.2102663516998291, -0.28817468881607056, 0.06739791482686996, 0.021829359233379364, -0.12377908080816269, 0.010428152047097683, -0.22285751...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Yes sounds good. Should this be a parameter in `load_dataset` ? Or an env variable ? Or use the value of `os.umask` ?
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
23
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.11572428792715073, 0.19156627357006073, -0.12044429033994675, 0.010218828916549683, -0.004656806122511625, 0.07383771240711212, 0.35489389300346375, 0.08304746448993683, -0.09774130582809448, 0.02342354506254196, -0.10515771061182022, 0.11370620876550674, 0.07436949759721756, -0.2714214...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Ideally it should be a parameter in `load_dataset` but I'm not sure how important it is for the users (considering only important things should go into `load_dataset` parameters)
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
28
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.13602757453918457, 0.2357683777809143, -0.13112716376781464, 0.06138218566775322, -0.05536578223109245, 0.09406492859125137, 0.31851911544799805, 0.1379300355911255, -0.17317351698875427, 0.021259324625134468, -0.026156671345233917, 0.026860207319259644, 0.06032564491033554, -0.25843170...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
I think it's fairly important; for context, our team uses a shared file-system where many folks run experiments based on datasets that are cached by other users. For example, I might start a training run, downloading a dataset. Then, a couple of days later, a collaborator using the same repository might want to use ...
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
123
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.13351938128471375, 0.2638702094554901, -0.08877964317798615, -0.013650734908878803, -0.048319876194000244, 0.1047767922282219, 0.35522183775901794, 0.11916112154722214, -0.1658603698015213, 0.03788447007536888, -0.10652373731136322, -0.024867258965969086, -0.01919192261993885, -0.313713...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Maybe let's start by defaulting to the user's umask ! Do you want to give it a try @bhavitvyamalik ?
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
20
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.18029522895812988, 0.1989278644323349, -0.1055656149983406, -0.08582399785518646, 0.010979189537465572, 0.09721384942531586, 0.30967992544174194, 0.07195950299501419, -0.16494083404541016, 0.08733594417572021, -0.08067452907562256, 0.0011490200413390994, 0.06524607539176941, -0.26501914...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Yeah sure! Instead of using the default `0o644`, should I first extract the current user's umask and then use `os.umask` on it? We can do it inside the `Dataset` class so that all folders/files created during the call use the running user's umask
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
40
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.2242727130651474, 0.22611309587955475, -0.08175299316644669, 0.009475518949329853, 0.04713591933250427, 0.014975937083363533, 0.2634159326553345, 0.07192394137382507, -0.1914960741996765, 0.04073034226894379, -0.1341531127691269, 0.08633588254451752, 0.049393441528081894, -0.31421878933...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
You can get the umask using `os.umask` and then I guess you can just use `os.chmod` as in your previous PR, but with the right permissions depending on the umask.
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
30
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.18572615087032318, 0.17379218339920044, -0.1075277328491211, -0.18510441482067108, -0.008170196786522865, 0.0839049220085144, 0.35653790831565857, 0.07572765648365021, -0.1108321100473404, 0.10525533556938171, -0.22693879902362823, 0.1543242484331131, 0.10393127799034119, -0.29344946146...
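A small self-contained sketch of that idea, using a temporary file as a stand-in for a real `cache-<fingerprint>.arrow` file (the umask can only be read by setting it, so it is restored immediately):
```
import os
import tempfile

# Read the process umask and restore it right away.
current_umask = os.umask(0o022)
os.umask(current_umask)

# Derive the target mode from the umask, e.g. 0o664 when the umask is 0o002.
mode = 0o666 & ~current_umask

with tempfile.NamedTemporaryFile(suffix=".arrow", delete=False) as tmp:
    cache_file = tmp.name  # stand-in for a real cache file

os.chmod(cache_file, mode)
print(oct(os.stat(cache_file).st_mode & 0o777))
os.remove(cache_file)
```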
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
FWIW, we have this issue with other caches - e.g. `transformers` model files. So we will probably need to backport this into `transformers` as well. Thanks @thomwolf for the pointer.
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
29
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.12074410915374756, 0.21106210350990295, -0.08410493284463882, 0.046818505972623825, -0.002863587811589241, 0.14593739807605743, 0.38540515303611755, 0.13916026055812836, -0.2011680006980896, -0.05856741964817047, -0.1354779601097107, -0.017622631043195724, 0.11561992019414902, -0.290522...
https://github.com/huggingface/datasets/issues/2065
Only user permission of saved cache files, not group
Hi @stas00, For this should we use the same umask code in the respective model directory inside `TRANSFORMERS_CACHE`?
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno...
18
Only user permission of saved cache files, not group Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to co...
[ -0.20815670490264893, 0.11179564148187637, -0.061236076056957245, 0.040736690163612366, -0.006120169535279274, 0.12794162333011627, 0.3894425928592682, 0.0571594201028347, -0.21278126537799835, -0.041988786309957504, -0.19218532741069794, -0.12315960228443146, 0.10957247763872147, -0.33432...
https://github.com/huggingface/datasets/issues/2061
Cannot load udpos subsets from xtreme dataset using load_dataset()
@lhoestq Adding "_" to the class labels in the dataset script will fix the issue. The bigger issue IMO is that the data files are in conll format, but the examples are tokens, not sentences.
Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset altogether (such as XNLI) has no issue. I have also tried on Colab and ...
35
Cannot load udpos subsets from xtreme dataset using load_dataset() Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset al...
[ -0.34640318155288696, -0.07528641819953918, 0.002613973570987582, 0.4443909525871277, 0.3713300824165344, 0.06911943852901459, 0.24056868255138397, 0.0394754521548748, 0.5101771354675293, 0.20612460374832153, -0.3589056432247162, 0.05270497500896454, 0.0860961377620697, 0.08086590468883514...
https://github.com/huggingface/datasets/issues/2061
Cannot load udpos subsets from xtreme dataset using load_dataset()
Hi ! Thanks for reporting @adzcodez > @lhoestq Adding "_" to the class labels in the dataset script will fix the issue. > > The bigger issue IMO is that the data files are in conll format, but the examples are tokens, not sentences. You're right: "_" should be added to the list of labels, and the examples m...
Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset altogether (such as XNLI) has no issue. I have also tried on Colab and ...
66
Cannot load udpos subsets from xtreme dataset using load_dataset() Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset al...
[ -0.34640318155288696, -0.07528641819953918, 0.002613973570987582, 0.4443909525871277, 0.3713300824165344, 0.06911943852901459, 0.24056868255138397, 0.0394754521548748, 0.5101771354675293, 0.20612460374832153, -0.3589056432247162, 0.05270497500896454, 0.0860961377620697, 0.08086590468883514...
https://github.com/huggingface/datasets/issues/2061
Cannot load udpos subsets from xtreme dataset using load_dataset()
@lhoestq Can you please label this issue with the "good first issue" label? I'm not sure I'll find time to fix this. To resolve it, the user should: 1. add `"_"` to the list of labels 2. transform the udpos subset to the conll format (I think the preprocessing logic can be borrowed from [the original repo](https:/...
Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset altogether (such as XNLI) has no issue. I have also tried on Colab and ...
84
Cannot load udpos subsets from xtreme dataset using load_dataset() Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset al...
[ -0.34640318155288696, -0.07528641819953918, 0.002613973570987582, 0.4443909525871277, 0.3713300824165344, 0.06911943852901459, 0.24056868255138397, 0.0394754521548748, 0.5101771354675293, 0.20612460374832153, -0.3589056432247162, 0.05270497500896454, 0.0860961377620697, 0.08086590468883514...
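For whoever picks this up, a minimal sketch (not the actual xtreme script) of step 2 above, i.e. yielding whole sentences rather than single tokens; the two-column `token<TAB>tag` layout of the udpos files is an assumption:
```
# Read a CoNLL-style udpos file and yield one example per sentence,
# with parallel lists of tokens and POS tags.
def read_conll(path):
    tokens, pos_tags = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line.strip():
                # Blank line marks the end of a sentence.
                if tokens:
                    yield {"tokens": tokens, "pos_tags": pos_tags}
                    tokens, pos_tags = [], []
                continue
            cols = line.split("\t")
            if len(cols) < 2:
                continue
            tokens.append(cols[0])
            pos_tags.append(cols[1])
    if tokens:
        yield {"tokens": tokens, "pos_tags": pos_tags}
```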
https://github.com/huggingface/datasets/issues/2061
Cannot load udpos subsets from xtreme dataset using load_dataset()
I tried fixing this issue, but it's working fine in the dev version: "1.6.2.dev0". I think somebody already fixed it.
Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset altogether (such as XNLI) has no issue. I have also tried on Colab and ...
21
Cannot load udpos subsets from xtreme dataset using load_dataset() Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset al...
[ -0.34640318155288696, -0.07528641819953918, 0.002613973570987582, 0.4443909525871277, 0.3713300824165344, 0.06911943852901459, 0.24056868255138397, 0.0394754521548748, 0.5101771354675293, 0.20612460374832153, -0.3589056432247162, 0.05270497500896454, 0.0860961377620697, 0.08086590468883514...
https://github.com/huggingface/datasets/issues/2061
Cannot load udpos subsets from xtreme dataset using load_dataset()
Hi, after #2326, the lines with pos tags equal to `"_"` are filtered out when generating the dataset, so this fixes the KeyError described above. However, the udpos subset should be in the conll format i.e. it should yield sequences of tokens and not single tokens, so it would be great to see this fixed (feel free t...
Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset altogether (such as XNLI) has no issue. I have also tried on Colab and ...
70
Cannot load udpos subsets from xtreme dataset using load_dataset() Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset al...
[ -0.34640318155288696, -0.07528641819953918, 0.002613973570987582, 0.4443909525871277, 0.3713300824165344, 0.06911943852901459, 0.24056868255138397, 0.0394754521548748, 0.5101771354675293, 0.20612460374832153, -0.3589056432247162, 0.05270497500896454, 0.0860961377620697, 0.08086590468883514...
https://github.com/huggingface/datasets/issues/2059
Error while following docs to load the `ted_talks_iwslt` dataset
This has been fixed in #2064 by @mariosasko (thanks again !) The fix is available on the master branch and we'll do a new release very soon :)
I am currently trying to load the `ted_talks_iwslt` dataset into google colab. The [docs](https://huggingface.co/datasets/ted_talks_iwslt) mention the following way of doing so. ```python dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014") ``` Executing it results in the error ...
28
Error while following docs to load the `ted_talks_iwslt` dataset I am currently trying to load the `ted_talks_iwslt` dataset into google colab. The [docs](https://huggingface.co/datasets/ted_talks_iwslt) mention the following way of doing so. ```python dataset = load_dataset("ted_talks_iwslt", language_pair=("...
[ -0.21400970220565796, 0.128444105386734, 0.04851163551211357, 0.2001315802335739, 0.03129125013947487, 0.09337275475263596, 0.6712162494659424, 0.19440847635269165, 0.12410248816013336, -0.06191829591989517, -0.04957537725567818, 0.3381049335002899, -0.29105472564697266, 0.4190407991409302...
https://github.com/huggingface/datasets/issues/2056
issue with opus100/en-fr dataset
@lhoestq I also deleted the cache and re-downloaded the file, and still get the same issue. I would appreciate any help on this, thanks.
Hi I am running run_mlm.py code of huggingface repo with opus100/fr-en pair, I am getting this error, note that this error occurs for only this pairs and not the other pairs. Any idea why this is occurring? and how I can solve this? Thanks a lot @lhoestq for your help in advance. ` thread '<unnamed>' panicked...
22
issue with opus100/en-fr dataset Hi I am running run_mlm.py code of huggingface repo with opus100/fr-en pair, I am getting this error, note that this error occurs for only this pairs and not the other pairs. Any idea why this is occurring? and how I can solve this? Thanks a lot @lhoestq for your help in advanc...
[ -0.41173654794692993, -0.16735151410102844, -0.015645001083612442, 0.39724859595298767, 0.11825747787952423, 0.08237624168395996, 0.19225232303142548, 0.28199782967567444, -0.14314988255500793, 0.2579927444458008, -0.2424348145723343, -0.024896180257201195, 0.13222351670265198, 0.383420884...
https://github.com/huggingface/datasets/issues/2056
issue with opus100/en-fr dataset
Please find here the minimal code to reproduce the issue @lhoestq; note that this only happens with MT5TokenizerFast ``` from datasets import load_dataset from transformers import MT5TokenizerFast def get_tokenized_dataset(dataset_name, dataset_config_name, tokenizer): datasets = load_dataset(dataset_name, data...
Hi I am running run_mlm.py code of huggingface repo with opus100/fr-en pair, I am getting this error, note that this error occurs for only this pairs and not the other pairs. Any idea why this is occurring? and how I can solve this? Thanks a lot @lhoestq for your help in advance. ` thread '<unnamed>' panicked...
114
issue with opus100/en-fr dataset Hi I am running run_mlm.py code of huggingface repo with opus100/fr-en pair, I am getting this error, note that this error occurs for only this pairs and not the other pairs. Any idea why this is occurring? and how I can solve this? Thanks a lot @lhoestq for your help in advanc...
[ -0.41173654794692993, -0.16735151410102844, -0.015645001083612442, 0.39724859595298767, 0.11825747787952423, 0.08237624168395996, 0.19225232303142548, 0.28199782967567444, -0.14314988255500793, 0.2579927444458008, -0.2424348145723343, -0.024896180257201195, 0.13222351670265198, 0.383420884...
https://github.com/huggingface/datasets/issues/2056
issue with opus100/en-fr dataset
As per https://github.com/huggingface/tokenizers/issues/626 this looks like a tokenizer bug; I have therefore reported it there https://github.com/huggingface/tokenizers/issues/626 and I am closing this one.
Hi I am running run_mlm.py code of huggingface repo with opus100/fr-en pair, I am getting this error, note that this error occurs for only this pairs and not the other pairs. Any idea why this is occurring? and how I can solve this? Thanks a lot @lhoestq for your help in advance. ` thread '<unnamed>' panicked...
23
issue with opus100/en-fr dataset Hi I am running run_mlm.py code of huggingface repo with opus100/fr-en pair, I am getting this error, note that this error occurs for only this pairs and not the other pairs. Any idea why this is occurring? and how I can solve this? Thanks a lot @lhoestq for your help in advanc...
[ -0.41173654794692993, -0.16735151410102844, -0.015645001083612442, 0.39724859595298767, 0.11825747787952423, 0.08237624168395996, 0.19225232303142548, 0.28199782967567444, -0.14314988255500793, 0.2579927444458008, -0.2424348145723343, -0.024896180257201195, 0.13222351670265198, 0.383420884...
https://github.com/huggingface/datasets/issues/2055
is there a way to override a dataset object saved with save_to_disk?
I tried this way, but when a mapping process is applied to the dataset, it again uses a random cache name. At the moment, I am trying to use the following method by setting an exact cache file: ``` dataset_with_embedding =csv_dataset.map( partial(self.embed, ctx_encoder=ctx_encoder, ctx_tokenizer=s...
At the moment when I use save_to_disk, it uses the arbitrary name for the arrow file. Is there a way to override such an object?
69
is there a way to override a dataset object saved with save_to_disk? At the moment when I use save_to_disk, it uses the arbitrary name for the arrow file. Is there a way to override such an object? I tried this way, but when there is a mapping process to the dataset, it again uses a random cache name. atm, I am t...
[ 0.018776722252368927, -0.15488997101783752, -0.014641659334301949, -0.039853282272815704, 0.24491731822490692, 0.2703031003475189, 0.1437663733959198, 0.1278596669435501, 0.11003562808036804, 0.17715536057949066, 0.3504517674446106, 0.4532105326652527, -0.20381221175193787, -0.036236792802...
https://github.com/huggingface/datasets/issues/2055
is there a way to override a dataset object saved with save_to_disk?
I'm not sure I understand your issue, can you elaborate ? `cache_file_name` is indeed an argument you can set to specify the cache file that will be used for the processed dataset. By default the file is named with something like `cache-<fingerprint>.arrow` where the fingerprint is a hash.
At the moment when I use save_to_disk, it uses the arbitrary name for the arrow file. Is there a way to override such an object?
48
is there a way to override a dataset object saved with save_to_disk? At the moment when I use save_to_disk, it uses the arbitrary name for the arrow file. Is there a way to override such an object? I'm not sure I understand your issue, can you elaborate ? `cache_file_name` is indeed an argument you can set to ...
[ 0.005880632903426886, -0.07092950493097305, -0.04539540782570839, 0.00918580126017332, 0.2773497998714447, 0.252723753452301, 0.2612766921520233, 0.2426065057516098, -0.053569696843624115, 0.10696277767419815, 0.30203577876091003, 0.4447041153907776, -0.16169282793998718, -0.12795747816562...
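To make the `cache_file_name` behaviour described in the comment above concrete, here is a minimal sketch of passing it to `Dataset.map` so the processed arrow file gets a fixed name instead of the default `cache-<fingerprint>.arrow`; the CSV file, the `text` column, and the cache path are hypothetical placeholders, not taken from the thread.

```python
# Minimal sketch: pin the cache file produced by .map() to a fixed path.
# "my_data.csv", the "text" column and the cache path are hypothetical placeholders.
import os
from datasets import load_dataset

os.makedirs("cache", exist_ok=True)
dataset = load_dataset("csv", data_files={"train": "my_data.csv"}, split="train")

def add_length(example):
    example["length"] = len(example["text"])
    return example

processed = dataset.map(
    add_length,
    cache_file_name="cache/my_processed_dataset.arrow",  # fixed name instead of cache-<fingerprint>.arrow
)
```

Re-running the same `.map()` call with the same `cache_file_name` should then reuse that file rather than generating a new fingerprinted one.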
https://github.com/huggingface/datasets/issues/2055
is there a way to override a dataset object saved with save_to_disk?
Let's say I am updating a set of embeddings in a dataset that is around 40GB inside a training loop every 500 steps (e.g. calculating the embeddings with the updated ctx_encoder in RAG and saving them to the passage path). So when we use **dataset_object.save_to_disk('passage_path_directory')** it will save the new dataset obje...
At the moment when I use save_to_disk, it uses the arbitrary name for the arrow file. Is there a way to override such an object?
134
is there a way to override a dataset object saved with save_to_disk? At the moment when I use save_to_disk, it uses the arbitrary name for the arrow file. Is there a way to override such an object? Let's say I am updating a set of embedding in a dataset that is around 40GB inside a training loop every 500 steps (...
[ 0.045734703540802, -0.036520712077617645, -0.0065599363297224045, -0.06631405651569366, 0.2652110159397125, 0.14675535261631012, 0.05342177301645279, 0.1336880475282669, -0.13232606649398804, 0.33304348587989807, 0.3283967673778534, 0.48019132018089294, -0.3888701796531677, 0.0933780223131...
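As a hedged illustration of the pattern discussed in the record above (periodically recomputing embeddings during training and re-saving the dataset), the sketch below maps a stand-in `embed` function and writes each snapshot to a fresh directory with `save_to_disk` before reloading it with `load_from_disk`; the toy data, the stub encoder, and the step-numbered paths are all placeholders, and writing to a new directory rather than overwriting in place is just one cautious choice, since the old arrow files may still be memory-mapped by the previous dataset object.

```python
# Minimal sketch of a recompute-and-resave loop; data, embed() stub and paths are placeholders.
import os
import shutil
from datasets import Dataset, load_from_disk

dataset = Dataset.from_dict({"text": ["doc one", "doc two"]})

def embed(example):
    # stand-in for a real ctx_encoder forward pass
    example["embeddings"] = [float(len(example["text"]))]
    return example

previous_path = None
for step in range(2):  # e.g. every 500 training steps in the real loop
    updated = dataset.map(embed)
    path = f"passage_path_directory_step{step}"  # fresh directory per snapshot
    updated.save_to_disk(path)
    del updated  # release the mapped data before any cleanup
    dataset = load_from_disk(path)
    if previous_path is not None and os.path.isdir(previous_path):
        shutil.rmtree(previous_path)  # drop the older snapshot once the new one is loaded
    previous_path = path
```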
https://github.com/huggingface/datasets/issues/2054
Could not find file for ZEST dataset
This has been fixed in #2057 by @matt-peters (thanks again !) The fix is available on the master branch and we'll do a new release very soon :)
I am trying to use zest dataset from Allen AI using below code in colab, ``` !pip install -q datasets from datasets import load_dataset dataset = load_dataset("zest") ``` I am getting the following error, ``` Using custom data configuration default Downloading and preparing dataset zest/default (download: ...
28
Could not find file for ZEST dataset I am trying to use zest dataset from Allen AI using below code in colab, ``` !pip install -q datasets from datasets import load_dataset dataset = load_dataset("zest") ``` I am getting the following error, ``` Using custom data configuration default Downloading and pre...
[ -0.5492250323295593, -0.1562175154685974, -0.09730509668588638, 0.364107221364975, 0.3012056350708008, 0.09310784935951233, 0.03187276050448418, 0.22301320731639862, 0.23836351931095123, 0.43103528022766113, -0.20342139899730682, -0.009750347584486008, -0.11628196388483047, 0.1218976229429...
https://github.com/huggingface/datasets/issues/2052
Timit_asr dataset repeats examples
Hi, this was fixed by #1995, so you can wait for the next release or install the package directly from the master branch with the following command: ```bash pip install git+https://github.com/huggingface/datasets ```
Summary When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same Steps to reproduce As an example, on this code there is the text from the training part: Code snippet: ``` from datasets import load_dataset, load_metric timit = load_dataset("timit_asr") timit['train']['text']...
32
Timit_asr dataset repeats examples Summary When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same Steps to reproduce As an example, on this code there is the text from the training part: Code snippet: ``` from datasets import load_dataset, load_metric timit = load_dataset...
[ 0.06932374835014343, -0.23608078062534332, 0.014321497641503811, 0.38754159212112427, 0.2564872205257416, -0.05277526378631592, 0.38944631814956665, 0.15092350542545319, -0.3530940115451813, 0.21451415121555328, 0.036256104707717896, 0.38388678431510925, -0.19141094386577606, 0.31492096185...
https://github.com/huggingface/datasets/issues/2050
Build custom dataset to fine-tune Wav2Vec2
Sure you can use the json loader ```python data_files = {"train": "path/to/your/train_data.json", "test": "path/to/your/test_data.json"} train_dataset = load_dataset("json", data_files=data_files, split="train") test_dataset = load_dataset("json", data_files=data_files, split="test") ``` You just need to make s...
Thank you for your recent tutorial on how to finetune Wav2Vec2 on a custom dataset. The example you gave here (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) was on the CommonVoice dataset. However, what if I want to load my own dataset? I have a manifest (transcript and their audio files) in a JSON file.
51
Build custom dataset to fine-tune Wav2Vec2 Thank you for your recent tutorial on how to finetune Wav2Vec2 on a custom dataset. The example you gave here (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) was on the CommonVoice dataset. However, what if I want to load my own dataset? I have a manifest (transcript ...
[ -0.08610294759273529, 0.025272907689213753, 0.013138333335518837, 0.05659550428390503, -0.015416079200804234, -0.00169777509290725, 0.04018595069646835, 0.16728077828884125, 0.11139751970767975, -0.03239339962601662, -0.2139551192522049, 0.4168871343135834, -0.21583929657936096, 0.11425540...
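Building on the json-loader suggestion in the record above, here is a hedged sketch of what the manifest and the loading call could look like; the field names (`file`, `text`), the JSON Lines layout, and the file paths are assumptions about the manifest, not details given in the thread.

```python
# Minimal sketch: a JSON Lines manifest (one example per line) loaded with the "json" builder.
# Field names and paths are hypothetical.
import json
from datasets import load_dataset

examples = [
    {"file": "clips/utt1.wav", "text": "hello world"},
    {"file": "clips/utt2.wav", "text": "good morning"},
]
with open("train_manifest.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

train_dataset = load_dataset("json", data_files={"train": "train_manifest.jsonl"}, split="train")
print(train_dataset[0])  # {'file': 'clips/utt1.wav', 'text': 'hello world'}
```

From there the audio can be read in a `.map()` step (for example with soundfile or torchaudio), the same way the CommonVoice tutorial does it.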
https://github.com/huggingface/datasets/issues/2046
add_faiss_index gets very slow when doing it iteratively
I think faiss automatically sets the number of threads to use to build the index. Can you check how many CPU cores are being used when you build the index in `use_own_knowleldge_dataset` as compared to this script ? Are there other programs running (maybe for rank>0) ?
As the code below suggests, I want to run add_faiss_index every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process, similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this usually takes 5hrs. Is this normal? Any ...
47
add_faiss_index gets very slow when doing it iteratively As the code below suggests, I want to run add_faiss_index every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process, similar to the script given in rag/use_own_knowleldge_d...
[ -0.49882426857948303, -0.26827821135520935, -0.024500802159309387, 0.12822844088077545, 0.059439767152071, 0.20778393745422363, 0.12146293371915817, 0.424129843711853, 0.2917422652244568, 0.2924969494342804, -0.11882511526346207, 0.18831509351730347, 0.13721297681331635, 0.0858209207653999...
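To act on the suggestion above about checking how many cores the index build actually uses, below is a hedged sketch that samples per-core utilisation from a background thread while `add_faiss_index` runs; `psutil` is an extra dependency not mentioned in the thread, and the toy embedding matrix is a stand-in for the real 7.2M-document dataset.

```python
# Minimal sketch: watch per-core CPU usage in a background thread while the index is built.
# psutil is an assumed extra dependency; the embeddings here are a toy stand-in.
import threading
import numpy as np
import psutil
from datasets import Dataset

dataset = Dataset.from_dict({"embeddings": np.random.rand(10_000, 64).tolist()})

def monitor(stop_event):
    while not stop_event.is_set():
        usage = psutil.cpu_percent(interval=1.0, percpu=True)
        busy = sum(u > 50 for u in usage)
        print(f"cores above 50% utilisation: {busy}/{len(usage)}")

stop = threading.Event()
watcher = threading.Thread(target=monitor, args=(stop,), daemon=True)
watcher.start()
dataset.add_faiss_index(column="embeddings")  # the call whose CPU usage we want to observe
stop.set()
watcher.join()
```

Running the same watcher around a standalone `use_own_knowledge_dataset.py`-style build makes the two cases directly comparable.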
https://github.com/huggingface/datasets/issues/2046
add_faisis_index gets very slow when doing it interatively
Hi, I am running add_faiss_index during the training process of RAG from the master process (rank 0). But at that exact moment, I do not run any other process, since I only do it every 5000 training steps. I think what you say is correct: it depends on the number of CPU cores. I did an experiment to compare...
As the code below suggests, I want to run add_faiss_index every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process, similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this usually takes 5hrs. Is this normal? Any ...
108
add_faiss_index gets very slow when doing it iteratively As the code below suggests, I want to run add_faiss_index every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process, similar to the script given in rag/use_own_knowleldge_d...
[ -0.49882426857948303, -0.26827821135520935, -0.024500802159309387, 0.12822844088077545, 0.059439767152071, 0.20778393745422363, 0.12146293371915817, 0.424129843711853, 0.2917422652244568, 0.2924969494342804, -0.11882511526346207, 0.18831509351730347, 0.13721297681331635, 0.0858209207653999...
https://github.com/huggingface/datasets/issues/2046
add_faisis_index gets very slow when doing it interatively
Can you try to set the number of threads manually ? If you set the same number of threads for both the `use_own_knowledge_dataset.py` and RAG training, it should take the same amount of time. You can see how to set the number of thread in the faiss wiki: https://github.com/facebookresearch/faiss/wiki/Threads-and-asyn...
As the code below suggests, I want to run add_faiss_index every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process, similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this usually takes 5hrs. Is this normal? Any ...
49
add_faiss_index gets very slow when doing it iteratively As the code below suggests, I want to run add_faiss_index every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process, similar to the script given in rag/use_own_knowleldge_d...
[ -0.49882426857948303, -0.26827821135520935, -0.024500802159309387, 0.12822844088077545, 0.059439767152071, 0.20778393745422363, 0.12146293371915817, 0.424129843711853, 0.2917422652244568, 0.2924969494342804, -0.11882511526346207, 0.18831509351730347, 0.13721297681331635, 0.0858209207653999...
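Following the advice in the comment above, here is a hedged sketch of pinning the faiss thread count before building the index, so the in-training build and a standalone `use_own_knowledge_dataset.py`-style build can be compared with the same setting; the thread count (8) and the toy embedding matrix are illustrative placeholders.

```python
# Minimal sketch: fix the number of OpenMP threads faiss may use before building the index.
# The thread count and the toy embeddings are illustrative placeholders.
import faiss
import numpy as np
from datasets import Dataset

faiss.omp_set_num_threads(8)  # use the same value in both scripts you are comparing

dataset = Dataset.from_dict({"embeddings": np.random.rand(10_000, 64).tolist()})
dataset.add_faiss_index(column="embeddings")

query = np.random.rand(64).astype("float32")
scores, retrieved = dataset.get_nearest_examples("embeddings", query, k=5)
print(scores)
```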