Dataset columns (name, type, value range):

    html_url        string   lengths 48–51
    title           string   lengths 5–268
    comments        string   lengths 70–51.8k
    body            string   lengths 0–29.8k
    comment_length  int64    16–1.52k
    text            string   lengths 164–54.1k
    embeddings      list
https://github.com/huggingface/datasets/issues/888
Nested lists are zipped unexpectedly
Yes, following the Tensorflow Datasets convention, objects with type `Sequence of a Dict` are actually stored as a `dictionary of lists`. See the [documentation](https://huggingface.co/docs/datasets/features.html?highlight=features) for more details.
I might misunderstand something, but I expect that if I define: ```python "top": datasets.features.Sequence({ "middle": datasets.features.Sequence({ "bottom": datasets.Value("int32") }) }) ``` And I then create an example: ```python yield 1, { "top": [{ "middle": [ {"bottom": 1}, ...
27
Nested lists are zipped unexpectedly I might misunderstand something, but I expect that if I define: ```python "top": datasets.features.Sequence({ "middle": datasets.features.Sequence({ "bottom": datasets.Value("int32") }) }) ``` And I then create an example: ```python yield 1, { "top": [{ ...
[ 0.22983874380588531, -0.4676937758922577, -0.10918619483709335, 0.39305782318115234, 0.01122317649424076, -0.007393186911940575, 0.24145425856113434, 0.043625976890325546, 0.170482337474823, 0.12772074341773987, -0.14806903898715973, 0.435934454202652, 0.23617568612098694, 0.29456776380538...
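A minimal sketch of the dict-of-lists convention described in the reply above; the nested values are taken from the issue's own example, and the single-item column is an assumption for illustration:

```python
from datasets import Dataset, Features, Sequence, Value

# A column declared as a Sequence of dicts...
features = Features(
    {"top": Sequence({"middle": Sequence({"bottom": Value("int32")})})}
)

# ...is stored following the Tensorflow Datasets convention: the list of
# dicts is "zipped" into a single dict of lists.
ds = Dataset.from_dict(
    {"top": [[{"middle": [{"bottom": 1}, {"bottom": 2}]}]]},
    features=features,
)
print(ds[0]["top"])
# {'middle': [{'bottom': [1, 2]}]}  (a dict of lists, not a list of dicts)
```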
https://github.com/huggingface/datasets/issues/888
Nested lists are zipped unexpectedly
Thanks. This is a bit (very) confusing, but I guess if it's intended, I'll just work with it as if that's how my data was originally structured :)
I might misunderstand something, but I expect that if I define: ```python "top": datasets.features.Sequence({ "middle": datasets.features.Sequence({ "bottom": datasets.Value("int32") }) }) ``` And I then create an example: ```python yield 1, { "top": [{ "middle": [ {"bottom": 1}, ...
28
Nested lists are zipped unexpectedly I might misunderstand something, but I expect that if I define: ```python "top": datasets.features.Sequence({ "middle": datasets.features.Sequence({ "bottom": datasets.Value("int32") }) }) ``` And I then create an example: ```python yield 1, { "top": [{ ...
[ 0.21846766769886017, -0.31721821427345276, -0.15455208718776703, 0.40963977575302124, 0.019808528944849968, 0.0022648668382316828, 0.1813221424818039, 0.12848219275474548, 0.12042085826396942, 0.11076198518276215, -0.08942649513483047, 0.5233137011528015, 0.33097413182258606, 0.20519095659...
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Yes, right now `ArrayXD` can only be used as a column feature type, not a subtype. With the current Arrow limitations I don't think we'll be able to make it work as a subtype; however, it should be possible to allow dimensions of dynamic sizes (`Array3D(shape=(None, 137, 2), dtype="float32")` for example, since the [unde...
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and ...
85
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets...
[ -0.1616060733795166, 0.01084157545119524, -0.06661894172430038, 0.17723128199577332, 0.3133356273174286, 0.00577168446034193, 0.5568304657936096, 0.11533649265766144, -0.13974183797836304, 0.07902061939239502, 0.16181223094463348, 0.061189040541648865, -0.24828724563121796, 0.2882793843746...
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
> Yes right now ArrayXD can only be used as a column feature type, not a subtype. Meaning it can't be nested under `Sequence`? If so, for now I'll just make it a Python list and build it with the nested `Sequence` type you suggested.
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and ...
45
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets...
[ -0.14969058334827423, 0.046683695167303085, -0.08322657644748688, 0.11906835436820984, 0.27300816774368286, 0.12014748901128769, 0.6236190795898438, 0.13252943754196167, 0.00043084556818939745, 0.11699926853179932, 0.27267971634864807, 0.21372032165527344, -0.14821559190750122, 0.087337337...
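A short sketch of the workaround mentioned above, assuming the data is a (None, 137, 2) sequence of frames; the column name `pose` is made up for illustration:

```python
from datasets import Dataset, Features, Sequence, Value

# Since ArrayXD cannot be nested under Sequence, declare the
# (None, 137, 2) data as plain nested Sequences instead.
features = Features(
    {"pose": Sequence(Sequence(Sequence(Value("float32"), length=2), length=137))}
)

frames = [[[0.0, 1.0]] * 137] * 5  # 5 frames, each of shape (137, 2)
ds = Dataset.from_dict({"pose": [frames]}, features=features)
print(len(ds[0]["pose"]), len(ds[0]["pose"][0]))  # 5 137
```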
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Yeah, unfortunately. That's a current limitation with Arrow: ExtensionTypes can't be used in the default Arrow Array objects. We already have an ExtensionArray that allows us to use them as column types, but not as subtypes. Maybe we can extend it; I haven't experimented with that yet.
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and ...
48
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets...
[ -0.021221181377768517, 0.1621367186307907, 0.01912551000714302, 0.19337473809719086, 0.35349616408348083, 0.1465558558702469, 0.6769048571586609, 0.10014152526855469, -0.13536329567432404, 0.16975010931491852, 0.17544172704219818, 0.24838680028915405, -0.21873244643211365, -0.0489065013825...
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Cool. So please consider this issue a feature request for: ``` Array3D(shape=(None, 137, 2), dtype="float32") ``` It's a way to represent videos, poses, and other cool sequences.
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and ...
28
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets...
[ -0.11950726807117462, 0.2424519807100296, -0.08252330869436264, 0.0961938202381134, 0.3023025095462799, -0.08079171925783157, 0.6678082346916199, 0.12745097279548645, -0.45234525203704834, 0.26682183146476746, 0.3651505708694458, 0.15331602096557617, -0.25517579913139343, 0.148840829730033...
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
@lhoestq well, so sequence of sequences doesn't work either... ``` pyarrow.lib.ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648 ```
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and ...
23
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets...
[ -0.1600916087627411, 0.1551731675863266, -0.13658002018928528, 0.20118463039398193, 0.23780497908592224, 0.002427169354632497, 0.5615336894989014, 0.08139834553003311, -0.15611106157302856, 0.18688717484474182, 0.3679438531398773, 0.14772449433803558, -0.19520968198776245, -0.0751514360308...
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Working with Arrow can be quite fun sometimes. You can fix this issue by trying to reduce the writer batch size (same trick as the one used to reduce the RAM usage in https://github.com/huggingface/datasets/issues/741). Let me know if it works. I haven't investigated yet on https://github.com/huggingface/dataset...
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and ...
67
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets...
[ -0.2213132381439209, 0.19797593355178833, -0.015701554715633392, 0.3644231855869293, 0.35363149642944336, 0.012184892781078815, 0.5489981174468994, 0.1765436977148056, -0.22645048797130585, 0.20632675290107727, 0.21528109908103943, 0.2304825782775879, -0.1810271292924881, 0.003713978687301...
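A hedged sketch of the writer-batch-size trick suggested above, as it could look inside a dataset script; the builder and its feature are placeholders, and `DEFAULT_WRITER_BATCH_SIZE` is the class attribute that `GeneratorBasedBuilder` exposes for this:

```python
import datasets


class PoseDataset(datasets.GeneratorBasedBuilder):
    """Placeholder builder showing where the writer batch size is reduced."""

    # Fewer examples per Arrow write keeps each record batch small: this is
    # the trick suggested above for the RAM usage of #741 and the capacity
    # error here. The library default is much larger.
    DEFAULT_WRITER_BATCH_SIZE = 10

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"value": datasets.Value("int32")})
        )

    def _split_generators(self, dl_manager):
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN)]

    def _generate_examples(self):
        for i in range(100):
            yield i, {"value": i}
```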
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
The batch size fix doesn't work... not for #741 and not for this dataset I'm trying (DGS corpus). Loading the DGS corpus takes 400GB of RAM, which is fine with me, as my machine is large enough.
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and ...
37
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets...
[ -0.374576210975647, 0.12805821001529694, -0.047619856894016266, 0.208238422870636, 0.2405511885881424, -0.0033122380264103413, 0.49839818477630615, 0.14746320247650146, -0.2932838797569275, 0.1866018921136856, 0.2814418375492096, -0.05396429821848869, -0.19091719388961792, -0.1175013184547...
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Not yet, I've been pretty busy with the dataset sprint lately but this is something that's been asked several times already. So I'll definitely work on this as soon as I'm done with the sprint and with the RAM issue you reported.
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and ...
42
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets...
[ -0.1500292867422104, 0.20671392977237701, -0.04670111835002899, 0.2796335816383362, 0.44586944580078125, -0.014902414754033089, 0.6093958020210266, 0.1298004388809204, -0.32201865315437317, 0.19370655715465546, 0.2713230848312378, 0.31502142548561096, -0.16626328229904175, -0.1340914964675...
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Hi @lhoestq, Any chance you have some updates on the supporting `ArrayXD` as a subtype or support of dynamic sized arrays? e.g.: `datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))` `Array3D(shape=(None, 137, 2), dtype="float32")`
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and ...
29
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets...
[ -0.2084440290927887, 0.003985189832746983, -0.09013105183839798, 0.04117843881249428, 0.3527285158634186, -0.03630052134394646, 0.618253231048584, 0.09212662279605865, -0.19569189846515656, 0.19286686182022095, 0.24202658236026764, 0.04752204939723015, -0.27965620160102844, 0.0491817668080...
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Hi! We haven't worked on this lately and it's not on our very short-term roadmap, since it requires a bit of work to make it work with Arrow. Though this will definitely be added at some point.
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and ...
38
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets...
[ -0.09873039275407791, 0.21576887369155884, -0.036189738661050797, 0.1980811357498169, 0.3335244655609131, -0.01481380220502615, 0.6729154586791992, 0.09411292523145676, -0.2653336226940155, 0.19205231964588165, 0.3300935924053192, 0.28478512167930603, -0.18515832722187042, -0.0952649712562...
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
@lhoestq, thanks for the update. I actually tried to modify some pieces of code to make it work. Can you please tell me if I'm missing anything here? I think that for the vast majority of cases it's enough to make the first dimension of the array dynamic, i.e. `shape=(None, 100, 100)`. For that, it's enough to modify class [Array...
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and ...
224
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets...
[ 0.008176663890480995, 0.11728734523057938, -0.10610853880643845, 0.2801862061023712, 0.40251344442367554, -0.06431570649147034, 0.5933443903923035, 0.10925915837287903, -0.2332412302494049, 0.22073672711849213, 0.1275910586118698, 0.3022356331348419, -0.24728399515151978, -0.02119618467986...
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Thanks for diving into this! Indeed, focusing on making the first dimensions dynamic makes total sense (and users could still re-order their dimensions to match this constraint). Your code looks great :) I think it can even be extended to support several dynamic dimensions if we want to. Feel free to open a PR to...
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and ...
164
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets...
[ 0.08111881464719772, 0.06608691811561584, -0.07496575266122818, 0.19978894293308258, 0.5059948563575745, -0.06205607205629349, 0.666225254535675, 0.22281469404697418, -0.2351689338684082, 0.21039441227912903, 0.20339086651802063, 0.3141407370567322, -0.16791634261608124, 0.1519098132848739...
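For reference, a sketch of what the requested declaration would look like; this assumes dynamic first dimensions are supported as discussed above (which is how later releases of `datasets` expose them):

```python
import numpy as np
from datasets import Array3D, Dataset, Features

# The requested feature: a dynamic first dimension with fixed trailing ones.
features = Features({"pose": Array3D(shape=(None, 137, 2), dtype="float32")})

ds = Dataset.from_dict(
    {
        "pose": [
            np.zeros((5, 137, 2), dtype="float32"),  # 5 frames
            np.zeros((8, 137, 2), dtype="float32"),  # 8 frames
        ]
    },
    features=features,
)
print(np.asarray(ds[0]["pose"]).shape)  # (5, 137, 2)
```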
https://github.com/huggingface/datasets/issues/883
Downloading/caching only a part of a datasets' dataset.
I think it would be a very helpful feature, because sometimes one only wants to evaluate models on the dev set, and the whole training data may be many times bigger. This makes the task impossible with limited memory resources.
Hi, I want to use the validation data *only* (of Natural Questions). I don't want to have the whole dataset cached on my machine, just the dev set. Is this possible? I can't find a way to do it in the docs. Thank you, Sapir
40
Downloading/caching only a part of a datasets' dataset. Hi, I want to use the validation data *only* (of natural question). I don't want to have the whole dataset cached in my machine, just the dev set. Is this possible? I can't find a way to do it in the docs. Thank you, Sapir I think it would be a very hel...
[ -0.25964710116386414, 0.017662450671195984, -0.14441365003585815, 0.05602747201919556, 0.005209655035287142, 0.1932338923215866, 0.07143615931272507, 0.4244528114795685, -0.20966292917728424, 0.12258386611938477, -0.16739612817764282, -0.31124889850616455, 0.03828083723783493, 0.3431014120...
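For context, a sketch of the split selection being discussed; note that, per the thread, selecting a split only affects what is returned, not what is downloaded and cached:

```python
from datasets import load_dataset

# This returns only the dev set, but the full dataset is still downloaded
# and cached first.
dev = load_dataset("natural_questions", split="validation")

# In later versions of the library, streaming avoids the full download:
# dev = load_dataset("natural_questions", split="validation", streaming=True)
```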
https://github.com/huggingface/datasets/issues/880
Add SQA
I’ll take this one to test the workflow for the sprint next week cc @yjernite @lhoestq
## Adding a Dataset - **Name:** SQA (Sequential Question Answering) by Microsoft. - **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total. - **Paper:** https://www.microsoft.com/en-us/r...
16
Add SQA ## Adding a Dataset - **Name:** SQA (Sequential Question Answering) by Microsoft. - **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total. - **Paper:** https://www.microsoft.c...
[ -0.06400924175977707, -0.2312077432870865, -0.20991787314414978, -0.12106953561306, 0.09991472959518433, -0.19758279621601105, 0.03379734233021736, 0.3069656491279602, 0.14835992455482483, 0.05626997724175453, -0.16755640506744385, -0.0026325518265366554, -0.07126069813966751, 0.5443828105...
https://github.com/huggingface/datasets/issues/880
Add SQA
@thomwolf here's a slightly adapted version of the code from the [official Tapas repository](https://github.com/google-research/tapas/blob/master/tapas/utils/interaction_utils.py) that is used to turn the `answer_coordinates` and `answer_texts` columns into true Python lists of tuples/strings: ``` import pandas as ...
## Adding a Dataset - **Name:** SQA (Sequential Question Answering) by Microsoft. - **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total. - **Paper:** https://www.microsoft.com/en-us/r...
185
Add SQA ## Adding a Dataset - **Name:** SQA (Sequential Question Answering) by Microsoft. - **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total. - **Paper:** https://www.microsoft.c...
[ -0.06400924175977707, -0.2312077432870865, -0.20991787314414978, -0.12106953561306, 0.09991472959518433, -0.19758279621601105, 0.03379734233021736, 0.3069656491279602, 0.14835992455482483, 0.05626997724175453, -0.16755640506744385, -0.0026325518265366554, -0.07126069813966751, 0.5443828105...
https://github.com/huggingface/datasets/issues/879
boolq does not load
Hi! It runs on my side without issues. I tried ```python from datasets import load_dataset load_dataset("boolq") ``` What versions of datasets and tensorflow are you running? Also, if you manage to get a minimal reproducible script (on Google Colab for example), that would be useful.
Hi I am getting these errors trying to load boolq thanks Traceback (most recent call last): File "test.py", line 5, in <module> data = AutoTask().get("boolq").get_dataset("train", n_obs=10) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset d...
47
boolq does not load Hi I am getting these errors trying to load boolq thanks Traceback (most recent call last): File "test.py", line 5, in <module> data = AutoTask().get("boolq").get_dataset("train", n_obs=10) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42...
[ -0.3142426013946533, -0.1296665072441101, -0.13981470465660095, 0.22436578571796417, 0.08980449289083481, 0.0106669245287776, 0.4253872036933899, 0.3031974732875824, 0.44576847553253174, -0.12459570914506912, -0.0860084667801857, 0.3309846520423889, -0.18556809425354004, 0.4824473559856415...
https://github.com/huggingface/datasets/issues/879
boolq does not load
Hey, I do the exact same commands, and for me it fails. I guess it might be an issue with caching, maybe? Thanks, best, Rabeeh
Hi I am getting these errors trying to load boolq thanks Traceback (most recent call last): File "test.py", line 5, in <module> data = AutoTask().get("boolq").get_dataset("train", n_obs=10) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset d...
117
boolq does not load Hi I am getting these errors trying to load boolq thanks Traceback (most recent call last): File "test.py", line 5, in <module> data = AutoTask().get("boolq").get_dataset("train", n_obs=10) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42...
[ -0.3142426013946533, -0.1296665072441101, -0.13981470465660095, 0.22436578571796417, 0.08980449289083481, 0.0106669245287776, 0.4253872036933899, 0.3031974732875824, 0.44576847553253174, -0.12459570914506912, -0.0860084667801857, 0.3309846520423889, -0.18556809425354004, 0.4824473559856415...
https://github.com/huggingface/datasets/issues/879
boolq does not load
Could you check if it works on the master branch? You can use `load_dataset("boolq", script_version="master")` to do so. We made some changes recently in boolq to remove the TF dependency, and we changed the way the data files are downloaded in https://github.com/huggingface/datasets/pull/881
Hi I am getting these errors trying to load boolq thanks Traceback (most recent call last): File "test.py", line 5, in <module> data = AutoTask().get("boolq").get_dataset("train", n_obs=10) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset d...
43
boolq does not load Hi I am getting these errors trying to load boolq thanks Traceback (most recent call last): File "test.py", line 5, in <module> data = AutoTask().get("boolq").get_dataset("train", n_obs=10) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42...
[ -0.3142426013946533, -0.1296665072441101, -0.13981470465660095, 0.22436578571796417, 0.08980449289083481, 0.0106669245287776, 0.4253872036933899, 0.3031974732875824, 0.44576847553253174, -0.12459570914506912, -0.0860084667801857, 0.3309846520423889, -0.18556809425354004, 0.4824473559856415...
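Putting the suggestions from this thread together, a minimal check one could run (`script_version` was the parameter name at the time; it was later renamed):

```python
from datasets import load_dataset

# Load boolq from the master version of the script and force a fresh
# download to rule out a stale cache.
data = load_dataset(
    "boolq", script_version="master", download_mode="force_redownload"
)
print(data["train"][0])
```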
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
> neat feature I didn't get this clearly; can you please elaborate on how to work with these?
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files...
18
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_fi...
[ -0.2508121132850647, -0.24077488481998444, -0.053151946514844894, 0.5038032531738281, 0.2426777184009552, 0.03797951713204384, 0.42340996861457825, 0.23022297024726868, 0.05345635861158371, 0.012271949090063572, -0.02131815254688263, 0.3855956196784973, -0.23641864955425262, 0.408721238374...
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
It could maybe work almost out of the box just by using `cached_path` in the text/csv/json scripts, no?
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files...
18
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_fi...
[ -0.2508121132850647, -0.24077488481998444, -0.053151946514844894, 0.5038032531738281, 0.2426777184009552, 0.03797951713204384, 0.42340996861457825, 0.23022297024726868, 0.05345635861158371, 0.012271949090063572, -0.02131815254688263, 0.3855956196784973, -0.23641864955425262, 0.408721238374...
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
Thanks thomwolf and julien-c. I'm still confused about what you guys said, but I have solved the problem as follows: 1. Read the csv file from S3 using pandas 2. Convert it to a dictionary with column names as keys and lists of column data as values 3. Convert it to a Dataset using `from datasets import Dataset` `train_dataset ...
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files...
55
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_fi...
[ -0.2508121132850647, -0.24077488481998444, -0.053151946514844894, 0.5038032531738281, 0.2426777184009552, 0.03797951713204384, 0.42340996861457825, 0.23022297024726868, 0.05345635861158371, 0.012271949090063572, -0.02131815254688263, 0.3855956196784973, -0.23641864955425262, 0.408721238374...
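A sketch of the pandas workaround described above; the S3 path is a placeholder, and reading `s3://` URLs with pandas assumes `s3fs` is installed:

```python
import pandas as pd
from datasets import Dataset

train_path = "s3://my-bucket/my-prefix/train.csv"  # placeholder path

# 1. Read the csv file from S3 using pandas (requires s3fs).
train_df = pd.read_csv(train_path)

# 2./3. from_pandas covers the dict conversion and Dataset creation in one
# step, instead of building the column dict by hand.
train_dataset = Dataset.from_pandas(train_df)
```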
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
We were brainstorming around your use-case. Let's keep the issue open for now, I think this is an interesting question to think about.
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files...
23
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_fi...
[ -0.2508121132850647, -0.24077488481998444, -0.053151946514844894, 0.5038032531738281, 0.2426777184009552, 0.03797951713204384, 0.42340996861457825, 0.23022297024726868, 0.05345635861158371, 0.012271949090063572, -0.02131815254688263, 0.3855956196784973, -0.23641864955425262, 0.408721238374...
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
> We were brainstorming around your use-case. > > Let's keep the issue open for now, I think this is an interesting question to think about. Sure thomwolf, Thanks for your concern
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files...
32
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_fi...
[ -0.2508121132850647, -0.24077488481998444, -0.053151946514844894, 0.5038032531738281, 0.2426777184009552, 0.03797951713204384, 0.42340996861457825, 0.23022297024726868, 0.05345635861158371, 0.012271949090063572, -0.02131815254688263, 0.3855956196784973, -0.23641864955425262, 0.408721238374...
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
I agree it would be cool to have that feature. Also, it's good to know that pandas supports this. For the moment I'd suggest first downloading the files locally as thom suggested, and then loading the dataset by providing paths to the local files.
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files...
45
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_fi...
[ -0.2508121132850647, -0.24077488481998444, -0.053151946514844894, 0.5038032531738281, 0.2426777184009552, 0.03797951713204384, 0.42340996861457825, 0.23022297024726868, 0.05345635861158371, 0.012271949090063572, -0.02131815254688263, 0.3855956196784973, -0.23641864955425262, 0.408721238374...
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
Any updates on this issue? I face a similar issue. I have many parquet files in S3 and I would like to train on them. To be honest I even face issues with only getting the last layer embedding out of them.
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files...
42
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_fi...
[ -0.2508121132850647, -0.24077488481998444, -0.053151946514844894, 0.5038032531738281, 0.2426777184009552, 0.03797951713204384, 0.42340996861457825, 0.23022297024726868, 0.05345635861158371, 0.012271949090063572, -0.02131815254688263, 0.3855956196784973, -0.23641864955425262, 0.408721238374...
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
Hi dorlavie, you can find one solution that I have mentioned above that may help you. And there is one more solution as well, which is downloading the files locally.
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files...
28
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_fi...
[ -0.2508121132850647, -0.24077488481998444, -0.053151946514844894, 0.5038032531738281, 0.2426777184009552, 0.03797951713204384, 0.42340996861457825, 0.23022297024726868, 0.05345635861158371, 0.012271949090063572, -0.02131815254688263, 0.3855956196784973, -0.23641864955425262, 0.408721238374...
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
> Hi dorlavie, you can find one solution that I have mentioned above that may help you. And there is one more solution as well, which is downloading the files locally. mahesh1amour, thanks for the fast reply. Unfortunately, in my case I cannot read with pandas. The dataset is too big (50GB). In addition, due to s...
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files...
68
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_fi...
[ -0.2508121132850647, -0.24077488481998444, -0.053151946514844894, 0.5038032531738281, 0.2426777184009552, 0.03797951713204384, 0.42340996861457825, 0.23022297024726868, 0.05345635861158371, 0.012271949090063572, -0.02131815254688263, 0.3855956196784973, -0.23641864955425262, 0.408721238374...
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
@dorlavie could use `boto3` to download the data to your local machine and then load it with `dataset` boto3 example [documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-example-download-file.html) ```python import boto3 s3 = boto3.client('s3') s3.download_file('BUCKET_NAME', 'OBJEC...
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files...
46
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_fi...
[ -0.2508121132850647, -0.24077488481998444, -0.053151946514844894, 0.5038032531738281, 0.2426777184009552, 0.03797951713204384, 0.42340996861457825, 0.23022297024726868, 0.05345635861158371, 0.012271949090063572, -0.02131815254688263, 0.3855956196784973, -0.23641864955425262, 0.408721238374...
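A completed version of the boto3 suggestion above, as a sketch; bucket and key names are placeholders:

```python
import boto3
from datasets import load_dataset

s3 = boto3.client("s3")

# Download the files from S3 to the local disk first...
s3.download_file("my-bucket", "data/train.csv", "train.csv")
s3.download_file("my-bucket", "data/validation.csv", "validation.csv")

# ...then load them with the csv loading script.
dataset = load_dataset(
    "csv", data_files={"train": "train.csv", "validation": "validation.csv"}
)
```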
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
Thanks @philschmid for the suggestion. As I mentioned in the previous comment, due to security issues I cannot save the data locally. I need to read it from S3 and process it directly. I guess that many other people try to train / fit those models on huge datasets (e.g. the entire Wiki); what is the best practice in t...
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files...
61
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_fi...
[ -0.2508121132850647, -0.24077488481998444, -0.053151946514844894, 0.5038032531738281, 0.2426777184009552, 0.03797951713204384, 0.42340996861457825, 0.23022297024726868, 0.05345635861158371, 0.012271949090063572, -0.02131815254688263, 0.3855956196784973, -0.23641864955425262, 0.408721238374...
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
If I understand correctly, you're not allowed to write data to disk that you downloaded from S3, for example? Or is it the use of the `boto3` library that is not allowed in your case?
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files...
37
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_fi...
[ -0.2508121132850647, -0.24077488481998444, -0.053151946514844894, 0.5038032531738281, 0.2426777184009552, 0.03797951713204384, 0.42340996861457825, 0.23022297024726868, 0.05345635861158371, 0.012271949090063572, -0.02131815254688263, 0.3855956196784973, -0.23641864955425262, 0.408721238374...
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
@lhoestq yes, you are correct. I am not allowed to save the "raw text" locally; the "raw text" must be saved only on S3. I am allowed to save the output of any model locally. It doesn't matter how I do it (boto3/pandas/pyarrow), it is forbidden.
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files...
47
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_fi...
[ -0.2508121132850647, -0.24077488481998444, -0.053151946514844894, 0.5038032531738281, 0.2426777184009552, 0.03797951713204384, 0.42340996861457825, 0.23022297024726868, 0.05345635861158371, 0.012271949090063572, -0.02131815254688263, 0.3855956196784973, -0.23641864955425262, 0.408721238374...
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
@dorlavie are you using sagemaker for training too? Then you could use S3 URI, for example `s3://my-bucket/my-training-data` and pass it within the `.fit()` function when you start the sagemaker training job. Sagemaker would then download the data from s3 into the training runtime and you could load it from disk **s...
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files...
127
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_fi...
[ -0.2508121132850647, -0.24077488481998444, -0.053151946514844894, 0.5038032531738281, 0.2426777184009552, 0.03797951713204384, 0.42340996861457825, 0.23022297024726868, 0.05345635861158371, 0.012271949090063572, -0.02131815254688263, 0.3855956196784973, -0.23641864955425262, 0.408721238374...
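A hedged sketch of the SageMaker flow described above; the image, role, and bucket are placeholders. SageMaker copies each S3 channel into the training container (under `/opt/ml/input/data/<channel>`), so the training script can read it from local disk:

```python
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="my-training-image",  # placeholder container image
    role="my-sagemaker-role",       # placeholder IAM role
    instance_count=1,
    instance_type="ml.p3.2xlarge",
)

# The S3 URI is passed as an input channel; the data is downloaded into the
# training runtime before the job starts.
estimator.fit({"train": "s3://my-bucket/my-training-data"})
```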
https://github.com/huggingface/datasets/issues/877
DataLoader(datasets) become more and more slowly within iterations
Hi! Thanks for reporting. Do you have the same slowdown when you iterate through the raw dataset object as well (no dataloader)? It would be nice to know whether it comes from the dataloader or not.
Hello, when I for loop my dataloader, the loading speed is becoming more and more slowly! ``` dataset = load_from_disk(dataset_path) # around 21,000,000 lines lineloader = tqdm(DataLoader(dataset, batch_size=1)) for idx, line in enumerate(lineloader): # do some thing for each line ``` In the begining, th...
38
DataLoader(datasets) become more and more slowly within iterations Hello, when I for loop my dataloader, the loading speed is becoming more and more slowly! ``` dataset = load_from_disk(dataset_path) # around 21,000,000 lines lineloader = tqdm(DataLoader(dataset, batch_size=1)) for idx, line in enumerate(linel...
[ -0.3241669535636902, -0.0680927112698555, -0.12075738608837128, 0.23754309117794037, -0.0479867085814476, -0.055125631392002106, 0.4634113311767578, -0.010447191074490547, 0.27750056982040405, 0.1358923614025116, -0.0574941411614418, 0.44069704413414, -0.03794948756694794, -0.0338125154376...
https://github.com/huggingface/datasets/issues/877
DataLoader(datasets) become more and more slowly within iterations
> Hi! Thanks for reporting. Do you have the same slowdown when you iterate through the raw dataset object as well (no dataloader)? It would be nice to know whether it comes from the dataloader or not. I did not iterate over the raw dataset; maybe I will test that later. For now I iterate over all files directly from `open(file)...
Hello, when I for loop my dataloader, the loading speed is becoming more and more slowly! ``` dataset = load_from_disk(dataset_path) # around 21,000,000 lines lineloader = tqdm(DataLoader(dataset, batch_size=1)) for idx, line in enumerate(lineloader): # do some thing for each line ``` In the begining, th...
64
DataLoader(datasets) become more and more slowly within iterations Hello, when I for loop my dataloader, the loading speed is becoming more and more slowly! ``` dataset = load_from_disk(dataset_path) # around 21,000,000 lines lineloader = tqdm(DataLoader(dataset, batch_size=1)) for idx, line in enumerate(linel...
[ -0.31665992736816406, -0.10293681919574738, -0.09790528565645218, 0.24407020211219788, -0.0499429814517498, -0.06016191467642784, 0.4438633322715759, 0.05886587128043175, 0.2712092101573944, 0.12997455894947052, -0.1063351109623909, 0.41852936148643494, -0.0331583246588707, -0.027514500543...
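A small sketch of the comparison being asked for above: timing the raw dataset against the DataLoader to see which one degrades. `dataset_path` stands in for the path from the report:

```python
import time

from datasets import load_from_disk
from torch.utils.data import DataLoader

dataset_path = "path/to/dataset"  # placeholder for the path in the report
dataset = load_from_disk(dataset_path)


def time_first_n(iterable, n=100_000):
    """Time iterating over the first n items."""
    start = time.time()
    for i, _ in enumerate(iterable):
        if i >= n:
            break
    return time.time() - start


print("raw dataset:", time_first_n(dataset))
print("dataloader: ", time_first_n(DataLoader(dataset, batch_size=1)))
```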
https://github.com/huggingface/datasets/issues/876
imdb dataset cannot be loaded
It looks like there was an issue while building the imdb dataset. Could you provide more information about your OS and the version of python and `datasets` ? Also could you try again with ```python dataset = datasets.load_dataset("imdb", split="train", download_mode="force_redownload") ``` to make sure it's no...
Hi I am trying to load the imdb train dataset `dataset = datasets.load_dataset("imdb", split="train")` getting following errors, thanks for your help ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/...
51
imdb dataset cannot be loaded Hi I am trying to load the imdb train dataset `dataset = datasets.load_dataset("imdb", split="train")` getting following errors, thanks for your help ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anacond...
[ -0.5215907692909241, -0.03979794308543205, -0.12092772871255875, 0.41382864117622375, 0.30875933170318604, 0.3442186117172241, 0.4162842035293579, 0.41161730885505676, 0.18930791318416595, -0.12590980529785156, -0.24593763053417206, -0.03638038411736488, -0.23879972100257874, 0.11891843378...
https://github.com/huggingface/datasets/issues/873
load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error
I see the issue happening again today - [nltk_data] Downloading package stopwords to /root/nltk_data... [nltk_data] Package stopwords is already up-to-date! Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/....
``` from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` Stack trace: ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-6-2e06a8332652> in <module>() ...
108
load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error ``` from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` Stack trace: ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (mo...
[ -0.22429290413856506, 0.1419459879398346, -0.022575952112674713, 0.23843230307102203, 0.38199472427368164, 0.12616503238677979, 0.6168980002403259, 0.21475985646247864, -0.023177430033683777, 0.13341295719146729, -0.16375663876533508, 0.05823656544089317, -0.375956654548645, -0.12307801842...
https://github.com/huggingface/datasets/issues/871
terminate called after throwing an instance of 'google::protobuf::FatalException'
Loading the iwslt2017-en-nl config of iwslt2017 works fine on my side. Maybe you can open an issue on transformers as well? And also add more details about your environment (OS, Python version, versions of transformers and datasets, etc.).
Hi I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo could you assist me please? thanks 100%|█████████████████████████████████████████████████████████████████████████████████████████████...
39
terminate called after throwing an instance of 'google::protobuf::FatalException' Hi I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo could you assist me please? thanks 100%|█████████...
[ -0.27569520473480225, -0.6896211504936218, 0.11620399355888367, 0.3686620891094208, 0.3344203233718872, -0.061229903250932693, 0.33480292558670044, 0.15361082553863525, -0.20834007859230042, 0.11798208951950073, -0.06540077924728394, -0.33246126770973206, -0.029532156884670258, 0.405820816...
https://github.com/huggingface/datasets/issues/871
terminate called after throwing an instance of 'google::protobuf::FatalException'
Closing now; I figured out this is because the max length of the decoder was set smaller than the input dimensions. Thanks!
Hi I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo could you assist me please? thanks 100%|█████████████████████████████████████████████████████████████████████████████████████████████...
19
terminate called after throwing an instance of 'google::protobuf::FatalException' Hi I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo could you assist me please? thanks 100%|█████████...
[ -0.1825484335422516, -0.7557111382484436, 0.08364059031009674, 0.5567312240600586, 0.33121538162231445, -0.04410403221845627, 0.2286124974489212, 0.2201019525527954, -0.37961632013320923, 0.312804639339447, -0.01004094909876585, -0.4250833988189697, 0.002881661057472229, 0.5060401558876038...
https://github.com/huggingface/datasets/issues/870
[Feature Request] Add optional parameter in text loading script to preserve linebreaks
Hi ! Thanks for your message. Indeed it's a free feature we can add and that can be useful. If you want to contribute, feel free to open a PR to add it to the text dataset script :)
I'm working on a project about rhyming verse using phonetic poetry and song lyrics, and line breaks are a vital part of the data. I recently switched over to use the datasets library when my various corpora grew larger than my computer's memory. And so far, it is SO great. But the first time I processed all of ...
39
[Feature Request] Add optional parameter in text loading script to preserve linebreaks I'm working on a project about rhyming verse using phonetic poetry and song lyrics, and line breaks are a vital part of the data. I recently switched over to use the datasets library when my various corpora grew larger than my ...
[ -0.4354715943336487, 0.2950851321220398, 0.05672154948115349, -0.24413076043128967, -0.09724639356136322, -0.20723769068717957, 0.3331259787082672, 0.17588423192501068, 0.29804590344429016, 0.32773977518081665, 0.503433883190155, 0.23918817937374115, -0.017637604847550392, 0.13068459928035...
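A hypothetical sketch of what the requested option could look like once added to the text loading script; the parameter name `keep_linebreaks` is an assumption, not an existing flag at the time of the thread:

```python
from datasets import load_dataset

# Hypothetical flag: keep the trailing newline on each line instead of
# stripping it, so verse structure survives preprocessing.
lyrics = load_dataset(
    "text",
    data_files={"train": "lyrics.txt"},  # placeholder file
    keep_linebreaks=True,
)
```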
https://github.com/huggingface/datasets/issues/866
OSCAR from Inria group
PR is already open here: #348. The only thing remaining is to compute the metadata of each subdataset (one per language + shuffled/unshuffled). As soon as #863 is merged we can start computing them. This will take a bit of time, though.
## Adding a Dataset - **Name:** *OSCAR* (Open Super-large Crawled ALMAnaCH coRpus), multilingual parsing of Common Crawl (separate crawls for many different languages), [here](https://oscar-corpus.com/). - **Description:** *OSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by la...
43
OSCAR from Inria group ## Adding a Dataset - **Name:** *OSCAR* (Open Super-large Crawled ALMAnaCH coRpus), multilingual parsing of Common Crawl (separate crawls for many different languages), [here](https://oscar-corpus.com/). - **Description:** *OSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multiling...
[ -0.07899967581033707, -0.06898017972707748, -0.08298517763614655, 0.061385780572891235, -0.08882305771112442, 0.09406177699565887, -0.012248681858181953, 0.2985755503177643, 0.0025456047151237726, 0.08631610125303268, -0.5969402194023132, -0.05120494216680527, -0.24507272243499756, 0.07547...
https://github.com/huggingface/datasets/issues/865
Have Trouble importing `datasets`
I'm sorry, this was a problem with my environment. Now that I have identified the cause as an environment dependency, I will fix it and try again. Excuse me for the noise.
I'm failing to import transformers (v4.0.0-dev), and tracing the cause seems to be failing to import datasets. I cloned the newest version of datasets (master branch), and do `pip install -e .`. Then, `import datasets` causes the error below. ``` ~/workspace/Clone/datasets/src/datasets/utils/file_utils.py in ...
34
Have Trouble importing `datasets` I'm failing to import transformers (v4.0.0-dev), and tracing the cause seems to be failing to import datasets. I cloned the newest version of datasets (master branch), and do `pip install -e .`. Then, `import datasets` causes the error below. ``` ~/workspace/Clone/datasets/...
[ -0.26811960339546204, 0.1308588683605194, 0.009198013693094254, 0.17224803566932678, 0.2753850519657135, 0.07383527606725693, 0.2072007954120636, 0.18338418006896973, 0.15290021896362305, -0.10359552502632141, -0.28493720293045044, -0.14241555333137512, -0.1180942952632904, -0.318189561367...
https://github.com/huggingface/datasets/issues/864
Unable to download cnn_dailymail dataset
Same here! My Kaggle notebook stopped working yesterday. It's strange because I have a fixed version of datasets==1.1.2
### Script to reproduce the error ``` from datasets import load_dataset train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") ``` ### Error ``` -------------------------------------------------------------...
18
Unable to download cnn_dailymail dataset ### Script to reproduce the error ``` from datasets import load_dataset train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") ``` ### Error ``` ------------------...
[ -0.30534854531288147, 0.234373539686203, -0.07922708243131638, 0.17068012058734894, 0.360565721988678, 0.17380857467651367, 0.5607905983924866, 0.24365611374378204, -0.17192447185516357, 0.07515959441661835, -0.11793938279151917, 0.07677548378705978, -0.28970828652381897, -0.03775293752551...
https://github.com/huggingface/datasets/issues/864
Unable to download cnn_dailymail dataset
I couldn't reproduce unfortunately. I tried ```python from datasets import load_dataset load_dataset("cnn_dailymail", "3.0.0", download_mode="force_redownload") ``` and it worked fine on both my env (python 3.7.2) and colab (python 3.6.9) Maybe there was an issue with the google drive download link of the dat...
### Script to reproduce the error ``` from datasets import load_dataset train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") ``` ### Error ``` -------------------------------------------------------------...
66
Unable to download cnn_dailymail dataset ### Script to reproduce the error ``` from datasets import load_dataset train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") ``` ### Error ``` ------------------...
[ -0.30534854531288147, 0.234373539686203, -0.07922708243131638, 0.17068012058734894, 0.360565721988678, 0.17380857467651367, 0.5607905983924866, 0.24365611374378204, -0.17192447185516357, 0.07515959441661835, -0.11793938279151917, 0.07677548378705978, -0.28970828652381897, -0.03775293752551...
https://github.com/huggingface/datasets/issues/864
Unable to download cnn_dailymail dataset
No, it's working fine now. Very strange. Here are my Python and requests versions: requests 2.24.0, Python 3.8.2
### Script to reproduce the error ``` from datasets import load_dataset train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") ``` ### Error ``` -------------------------------------------------------------...
18
Unable to download cnn_dailymail dataset ### Script to reproduce the error ``` from datasets import load_dataset train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") ``` ### Error ``` ------------------...
[ -0.30534854531288147, 0.234373539686203, -0.07922708243131638, 0.17068012058734894, 0.360565721988678, 0.17380857467651367, 0.5607905983924866, 0.24365611374378204, -0.17192447185516357, 0.07515959441661835, -0.11793938279151917, 0.07677548378705978, -0.28970828652381897, -0.03775293752551...
https://github.com/huggingface/datasets/issues/861
Possible Bug: Small training/dataset file creates gigantic output
The preprocessing tokenizes the input text. Tokenization outputs `input_ids`, `attention_mask`, `token_type_ids` and `special_tokens_mask`. All of those are of length `max_seq_length` because of padding. Therefore for each sample it generates 4 * `max_seq_length` integers. Currently they're all saved as int64. This is why th...
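To make the size blow-up concrete, here is a rough back-of-the-envelope estimate; the sample count and padding length below are hypothetical, not values from the issue:

```python
# 4 columns (input_ids, attention_mask, token_type_ids, special_tokens_mask),
# each padded to max_seq_length and stored as int64 (8 bytes per value).
max_seq_length = 512        # hypothetical padding length
num_columns = 4
bytes_per_value = 8         # int64
num_samples = 1_000_000     # hypothetical number of samples

total_bytes = num_samples * num_columns * max_seq_length * bytes_per_value
print(f"{total_bytes / 1024 ** 3:.1f} GiB")  # ~15.3 GiB for 1M samples
```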
Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB r...
58
Possible Bug: Small training/dataset file creates gigantic output Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling...
[ -0.1432340443134308, -0.38764140009880066, 0.11331380903720856, 0.32277432084083557, 0.5927135944366455, -0.19819122552871704, 0.28742316365242004, 0.39092981815338135, -0.3091716170310974, 0.17205798625946045, -0.03392792120575905, -0.17978839576244354, -0.3273913562297821, 0.189794749021...
https://github.com/huggingface/datasets/issues/861
Possible Bug: Small training/dataset file creates gigantic output
First I think we should disable padding in the dataset processing and let the data collator do it. Then I'm wondering if you need attention_mask and token_type_ids at this point? Finally, we can also specify the output feature types at this line https://github.com/huggingface/transformers/blob/master/examples/lan...
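A minimal sketch of that last suggestion, assuming a `dataset` with a "text" column and a `tokenize_function` defined elsewhere; passing explicit `features` to `.map()` stores the tokenized columns with smaller integer types:

```python
from datasets import Features, Sequence, Value

features = Features({
    "input_ids": Sequence(Value("int32")),      # vocabulary ids fit in int32
    "attention_mask": Sequence(Value("int8")),  # 0/1 flags
    "token_type_ids": Sequence(Value("int8")),
    "special_tokens_mask": Sequence(Value("int8")),
})
tokenized = dataset.map(tokenize_function, batched=True,
                        remove_columns=["text"], features=features)
```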
Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB r...
90
Possible Bug: Small training/dataset file creates gigantic output Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling...
[ -0.22990477085113525, -0.36673402786254883, 0.12432359158992767, 0.2771851420402527, 0.6030980944633484, -0.16952762007713318, 0.3121454417705536, 0.37343552708625793, -0.25991758704185486, 0.15220652520656586, -0.05892394855618477, -0.13221894204616547, -0.28685134649276733, 0.24878726899...
https://github.com/huggingface/datasets/issues/861
Possible Bug: Small training/dataset file creates gigantic output
> First I think we should disable padding in the dataset processing and let the data collator do it. No, you can't do that on TPUs, as dynamic shapes will result in very slow training. The script can however be tweaked to use the `PaddingDataCollator` with a fixed max length instead of dynamic batching. For the ...
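A sketch of that tweak, assuming transformers' `DataCollatorWithPadding` (the `PaddingDataCollator` name above may be approximate) and a hypothetical fixed max length:

```python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
data_collator = DataCollatorWithPadding(
    tokenizer,
    padding="max_length",  # always pad to a fixed length, TPU-friendly
    max_length=512,        # hypothetical fixed max length
)
```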
Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB r...
91
Possible Bug: Small training/dataset file creates gigantic output Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling...
[ -0.24492309987545013, -0.4394877254962921, 0.11735696345567703, 0.3189444839954376, 0.5710245966911316, -0.17657695710659027, 0.2565918266773224, 0.42917540669441223, -0.3390790522098541, 0.15733583271503448, -0.06757251173257828, -0.20117205381393433, -0.3087226152420044, 0.19150939583778...
https://github.com/huggingface/datasets/issues/861
Possible Bug: Small training/dataset file creates gigantic output
Oh yes, right... Do you think that a lazy map feature on the `datasets` side could help to avoid storing padded tokenized texts then?
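For reference, later `datasets` releases added `Dataset.set_transform`, which applies a function at access time instead of writing its output to disk. A minimal sketch, assuming a `tokenizer` and a `dataset` with a "text" column:

```python
def tokenize_on_the_fly(batch):
    # called lazily on each accessed batch; nothing padded is written to disk
    return tokenizer(batch["text"], truncation=True, max_length=512,
                     padding="max_length")

dataset.set_transform(tokenize_on_the_fly)
print(dataset[0])  # tokenized at access time
```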
Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB r...
25
Possible Bug: Small training/dataset file creates gigantic output Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling...
[ -0.1584593802690506, -0.32274940609931946, 0.11426511406898499, 0.2783169150352478, 0.6050810813903809, -0.08939453214406967, 0.36650463938713074, 0.37571367621421814, -0.25044044852256775, 0.16443227231502533, -0.06106586381793022, -0.08094025403261185, -0.3737187683582306, 0.147270590066...
https://github.com/huggingface/datasets/issues/861
Possible Bug: Small training/dataset file creates gigantic output
I think I can do the tweak mentioned above with the data collator as a short fix (but I'm fully focused on v4 right now so that will be for later this week, or the beginning of next week :-) ). If it doesn't hurt performance to tokenize on the fly, that would clearly be the long-term solution however!
Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB r...
55
Possible Bug: Small training/dataset file creates gigantic output Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling...
[ -0.20229443907737732, -0.2920217514038086, 0.11891685426235199, 0.2300965040922165, 0.6037492752075195, -0.17225311696529388, 0.3156903088092804, 0.3898855149745941, -0.2658644914627075, 0.18251657485961914, -0.0329228937625885, -0.14777863025665283, -0.3427739143371582, 0.2524905204772949...
https://github.com/huggingface/datasets/issues/861
Possible Bug: Small training/dataset file creates gigantic output
> Hey guys, > > I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small ...
Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB r...
273
Possible Bug: Small training/dataset file creates gigantic output Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling...
[ -0.12765148282051086, -0.4218367636203766, 0.10686738789081573, 0.2943577766418457, 0.6427580118179321, -0.12837210297584534, 0.3398818075656891, 0.40121471881866455, -0.24704474210739136, 0.154962956905365, -0.03813442215323448, -0.121829554438591, -0.29877132177352905, 0.2704769372940063...
https://github.com/huggingface/datasets/issues/861
Possible Bug: Small training/dataset file creates gigantic output
Hi @NebelAI, we have optimized Datasets' disk usage in the latest release v1.5. Feel free to update your Datasets version ```shell pip install -U datasets ``` and see if it better suits your needs.
Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB r...
34
Possible Bug: Small training/dataset file creates gigantic output Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling...
[ -0.23383398354053497, -0.3642718195915222, 0.10987385362386703, 0.32716724276542664, 0.580288827419281, -0.12271162867546082, 0.2948492169380188, 0.40527769923210144, -0.18221348524093628, 0.17751678824424744, -0.04829094186425209, -0.12470803409814835, -0.33843642473220825, 0.229903250932...
https://github.com/huggingface/datasets/issues/854
wmt16 does not download
Hi, I also posted it to the forum, but this is a bug, so perhaps it needs to be reported here? Thanks.
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/...
20
wmt16 does not download Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, tot...
[ -0.4449557960033417, -0.42247599363327026, -0.08150412142276764, 0.4179416000843048, 0.5046890377998352, 0.12449770420789719, 0.12814545631408691, 0.21399644017219543, 0.27867045998573303, 0.0013543693348765373, 0.0727132111787796, -0.11046157777309418, -0.23516149818897247, 0.097207307815...
https://github.com/huggingface/datasets/issues/854
wmt16 does not download
It looks like the official OPUS server for WMT16 doesn't provide the data files anymore (503 error). I searched a bit and couldn't find a mirror except maybe http://nlp.ffzg.hr/resources/corpora/setimes/ (the data are a cleaned version of the original ones though). Should we consider replacing the old URLs with these ...
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/...
59
wmt16 does not download Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, tot...
[ -0.4449557960033417, -0.42247599363327026, -0.08150412142276764, 0.4179416000843048, 0.5046890377998352, 0.12449770420789719, 0.12814545631408691, 0.21399644017219543, 0.27867045998573303, 0.0013543693348765373, 0.0727132111787796, -0.11046157777309418, -0.23516149818897247, 0.097207307815...
https://github.com/huggingface/datasets/issues/854
wmt16 does not download
The data storage is down at the moment. Sorry. Hopefully, it will come back soon. Apologies for the inconvenience ...
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/...
20
wmt16 does not download Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, tot...
[ -0.4449557960033417, -0.42247599363327026, -0.08150412142276764, 0.4179416000843048, 0.5046890377998352, 0.12449770420789719, 0.12814545631408691, 0.21399644017219543, 0.27867045998573303, 0.0013543693348765373, 0.0727132111787796, -0.11046157777309418, -0.23516149818897247, 0.097207307815...
https://github.com/huggingface/datasets/issues/854
wmt16 does not download
Dear great huggingface team, this is not working yet. I would really appreciate a temporary fix; I need this for my project, it is time-sensitive, and I would be grateful for your help.
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/...
38
wmt16 does not download Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, tot...
[ -0.4449557960033417, -0.42247599363327026, -0.08150412142276764, 0.4179416000843048, 0.5046890377998352, 0.12449770420789719, 0.12814545631408691, 0.21399644017219543, 0.27867045998573303, 0.0013543693348765373, 0.0727132111787796, -0.11046157777309418, -0.23516149818897247, 0.097207307815...
https://github.com/huggingface/datasets/issues/854
wmt16 does not download
We have reached out to the OPUS team which is currently working on making the data available again. Cc @jorgtied
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/...
20
wmt16 does not download Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, tot...
[ -0.4449557960033417, -0.42247599363327026, -0.08150412142276764, 0.4179416000843048, 0.5046890377998352, 0.12449770420789719, 0.12814545631408691, 0.21399644017219543, 0.27867045998573303, 0.0013543693348765373, 0.0727132111787796, -0.11046157777309418, -0.23516149818897247, 0.097207307815...
https://github.com/huggingface/datasets/issues/854
wmt16 does not download
Hi, this is still down. I would be really grateful if you could ping them one more time. Thank you so much.
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/...
22
wmt16 does not download Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, tot...
[ -0.4449557960033417, -0.42247599363327026, -0.08150412142276764, 0.4179416000843048, 0.5046890377998352, 0.12449770420789719, 0.12814545631408691, 0.21399644017219543, 0.27867045998573303, 0.0013543693348765373, 0.0727132111787796, -0.11046157777309418, -0.23516149818897247, 0.097207307815...
https://github.com/huggingface/datasets/issues/854
wmt16 does not download
Hi, I am trying multiple settings of the wmt datasets and all have failed so far. I need at least one dataset working to test some code, and this is really time-sensitive, so I would greatly appreciate you letting me know of one translation dataset that currently works. Thanks.
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/...
46
wmt16 does not download Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, tot...
[ -0.4449557960033417, -0.42247599363327026, -0.08150412142276764, 0.4179416000843048, 0.5046890377998352, 0.12449770420789719, 0.12814545631408691, 0.21399644017219543, 0.27867045998573303, 0.0013543693348765373, 0.0727132111787796, -0.11046157777309418, -0.23516149818897247, 0.097207307815...
https://github.com/huggingface/datasets/issues/854
wmt16 does not download
It is still down, unfortunately. I'm sorry for that. It should come up again later today, or tomorrow at the latest, if no additional complications arise.
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/...
27
wmt16 does not download Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, tot...
[ -0.4449557960033417, -0.42247599363327026, -0.08150412142276764, 0.4179416000843048, 0.5046890377998352, 0.12449770420789719, 0.12814545631408691, 0.21399644017219543, 0.27867045998573303, 0.0013543693348765373, 0.0727132111787796, -0.11046157777309418, -0.23516149818897247, 0.097207307815...
https://github.com/huggingface/datasets/issues/853
concatenate_datasets support axis=0 or 1?
Unfortunately `concatenate_datasets` only supports concatenating the rows, while what you want to achieve is to concatenate the columns. Currently to add more columns to a dataset, one must use `map`. What you can do is something like this: ```python # suppose you have datasets d1, d2, d3 def add_columns(example, ind...
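A possible completion of the truncated sketch above, assuming `d1`, `d2` and `d3` have the same number of rows; the column names are hypothetical:

```python
def add_columns(example, index):
    # pull the row at the same position from the other datasets
    example["d2_text"] = d2[index]["text"]
    example["d3_text"] = d3[index]["text"]
    return example

combined = d1.map(add_columns, with_indices=True)
```

Note that row-by-row random access into `d2` and `d3` is not free, so this is a workaround rather than a zero-copy column concatenation.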
I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)
58
concatenate_datasets support axis=0 or 1? I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png) Unfortunately `concatenate_datasets` only supports concatenating the rows, while what you want to achieve is to concatenate...
[ -0.42260879278182983, -0.08625979721546173, -0.1545989215373993, 0.07816196233034134, 0.10730957984924316, 0.4110478460788727, 0.31150761246681213, 0.46485084295272827, 0.11830145865678787, 0.17059604823589325, -0.1758231371641159, 0.3464905023574829, -0.07507321983575821, 0.54082113504409...
https://github.com/huggingface/datasets/issues/853
concatenate_datasets support axis=0 or 1?
That's not really difficult to add, though, no? I think it can be done without a copy. Maybe let's add it to the roadmap?
I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)
23
concatenate_datasets support axis=0 or 1? I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png) That's not really difficult to add, though, no? I think it can be done without a copy. Maybe let's add it to the roadmap...
[ -0.5632333159446716, 0.15662437677383423, -0.10527736693620682, -0.08301149308681488, 0.08748175948858261, 0.16249701380729675, 0.3028233051300049, 0.4194338321685791, -0.22494205832481384, 0.3868967294692993, -0.1537492275238037, 0.285817414522171, -0.1336851418018341, 0.42659613490104675...
https://github.com/huggingface/datasets/issues/853
concatenate_datasets support axis=0 or 1?
Actually it's doable, but it requires updating the `Dataset._data_files` schema to support this. I'm re-opening this since we may want to add this in the future.
I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)
26
concatenate_datasets support axis=0 or 1? I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png) Actually it's doable, but it requires updating the `Dataset._data_files` schema to support this. I'm re-opening this since...
[ -0.6200821995735168, 0.11323975026607513, -0.14977407455444336, 0.11423583328723907, 0.125723734498024, 0.20324254035949707, 0.39566484093666077, 0.39590147137641907, -0.003954931627959013, 0.24560575187206268, -0.20389965176582336, 0.38265103101730347, -0.07559116929769516, 0.500622212886...
https://github.com/huggingface/datasets/issues/853
concatenate_datasets support axis=0 or 1?
Hi @lhoestq, I would love to help and add this feature if still needed. My plan is to add an axis variable in the `concatenate_datasets` function in `arrow_dataset.py` and when that is set to 1 concatenate columns instead of rows.
I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)
40
concatenate_datasets support axis=0 or 1 ? I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png) Hi @lhoestq, I would love to help and add this feature if still needed. My plan is to add an axis variable in the `conca...
[ -0.34840458631515503, -0.03517330437898636, -0.12199399620294571, 0.06049662083387375, 0.17259781062602997, 0.09185495227575302, 0.3840519189834595, 0.3609427511692047, 0.0169332567602396, 0.3619815409183502, 0.0030053199734538794, 0.4973301589488983, -0.15670962631702423, 0.51727324724197...
https://github.com/huggingface/datasets/issues/853
concatenate_datasets support axis=0 or 1?
Hi! I would love to see this feature implemented as well :) Thank you for proposing your help! Here are a few things about the current implementation: - A dataset object is a wrapper of one `pyarrow.Table` that contains the data - Pyarrow offers an API that allows transforming Table objects. For example there are...
I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)
230
concatenate_datasets support axis=0 or 1? I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png) Hi! I would love to see this feature implemented as well :) Thank you for proposing your help! Here are a few things...
[ -0.5330206155776978, 0.30334562063217163, -0.030163750052452087, 0.06750238686800003, -0.06257481873035431, -0.05822180584073067, 0.2094469666481018, 0.4255702495574951, -0.07458487898111343, 0.18278071284294128, -0.18690836429595947, 0.7275899052619934, -0.1599382609128952, 0.572449982166...
https://github.com/huggingface/datasets/issues/853
concatenate_datasets support axis=0 or 1?
@lhoestq, we have two Pull Requests to implement: - Dataset.add_item: #1870 - Dataset.add_column: #2145 which add a single row or column, respectively. The request here is to implement the concatenation of *multiple* rows/columns. Am I right? We should agree on the API: - `concatenate_datasets` with `axis`? -...
I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)
51
concatenate_datasets support axis=0 or 1 ? I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png) @lhoestq, we have two Pull Requests to implement: - Dataset.add_item: #1870 - Dataset.add_column: #2145 which add a s...
[ -0.2659381031990051, -0.09517239034175873, -0.08206047117710114, 0.04083068668842316, -0.23825716972351074, 0.10751865804195404, 0.29300248622894287, 0.25326332449913025, 0.07052107155323029, 0.16365554928779602, -0.004881810862571001, 0.3495404124259949, -0.02749420516192913, 0.5060870051...
https://github.com/huggingface/datasets/issues/853
concatenate_datasets support axis=0 or 1?
For the API, I like `concatenate_datasets` with `axis` personally :) From a list of `Dataset` objects, it would concatenate them to a new `Dataset` object backed by a `ConcatenationTable`, that is the concatenation of the tables of each input dataset. The concatenation is either on axis=0 (append rows) or on axis=1 (a...
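Usage of the proposed API, which later landed in `datasets` (recent releases accept `axis`):

```python
from datasets import Dataset, concatenate_datasets

d1 = Dataset.from_dict({"a": [0, 1, 2]})
d2 = Dataset.from_dict({"b": ["x", "y", "z"]})

combined = concatenate_datasets([d1, d2], axis=1)  # append columns
print(combined.column_names)  # ['a', 'b']
```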
I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)
158
concatenate_datasets support axis=0 or 1 ? I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png) For the API, I like `concatenate_datasets` with `axis` personally :) From a list of `Dataset` objects, it would concate...
[ -0.34827178716659546, 0.06457655131816864, -0.08929747343063354, 0.14640723168849945, -0.004705748986452818, 0.06871254742145538, 0.17819473147392273, 0.43016621470451355, -0.05500125139951706, 0.16881951689720154, -0.0407027043402195, 0.5322628617286682, -0.17968806624412537, 0.4457183480...
https://github.com/huggingface/datasets/issues/849
Load amazon dataset
Thanks for reporting! We plan to show information about the different configs of the datasets on the website, with the corresponding `load_dataset` calls. Also, I think the bullet-point formatting has been fixed.
Hi, I was going through amazon_us_reviews dataset and found that example API usage given on website is different from the API usage while loading dataset. Eg. what API usage is on the [website](https://huggingface.co/datasets/amazon_us_reviews) ``` from datasets import load_dataset dataset = load_dataset("amaz...
34
Load amazon dataset Hi, I was going through amazon_us_reviews dataset and found that example API usage given on website is different from the API usage while loading dataset. Eg. what API usage is on the [website](https://huggingface.co/datasets/amazon_us_reviews) ``` from datasets import load_dataset datase...
[ -0.0789320170879364, -0.1892746537923813, -0.21114924550056458, 0.5182408690452576, 0.17753319442272186, 0.28121891617774963, 0.2807135581970215, 0.10958217829465866, 0.10753505676984787, -0.2891574800014496, -0.16567079722881317, 0.13360968232154846, 0.37317824363708496, 0.380805492401123...
https://github.com/huggingface/datasets/issues/848
Error when concatenate_datasets
As you can see in the error, the test checks if `indices_mappings_in_memory` is True or not, which is different from the test you do in your script. In a dataset, both the data and the indices mapping can be either on disk or in memory. The indices mapping corresponds to a mapping on top of the data table that is used...
Hello, when I concatenate two datasets loaded from disk, I encountered a problem: ``` test_dataset = load_from_disk('data/test_dataset') trn_dataset = load_from_disk('data/train_dataset') train_dataset = concatenate_datasets([trn_dataset, test_dataset]) ``` And it reported the ValueError below: ``` --------------...
172
Error when concatenate_datasets Hello, when I concatenate two datasets loaded from disk, I encountered a problem: ``` test_dataset = load_from_disk('data/test_dataset') trn_dataset = load_from_disk('data/train_dataset') train_dataset = concatenate_datasets([trn_dataset, test_dataset]) ``` And it reported the Val...
[ -0.09119009971618652, -0.1483130007982254, -0.04474862664937973, 0.6809359788894653, 0.10687576979398727, 0.2218789905309677, 0.31290537118911743, 0.22910870611667633, -0.12348383665084839, 0.12460477650165558, -0.10832247883081436, 0.27360260486602783, -0.08168939501047134, -0.12340896576...
https://github.com/huggingface/datasets/issues/848
Error when concatenate_datasets
> As you can see in the error, the test checks if `indices_mappings_in_memory` is True or not, which is different from the test you do in your script. In a dataset, both the data and the indices mapping can be either on disk or in memory. > > The indices mapping corresponds to a mapping on top of the data table that i...
Hello, when I concatenate two datasets loaded from disk, I encountered a problem: ``` test_dataset = load_from_disk('data/test_dataset') trn_dataset = load_from_disk('data/train_dataset') train_dataset = concatenate_datasets([trn_dataset, test_dataset]) ``` And it reported the ValueError below: ``` --------------...
184
Error when concatenate_datasets Hello, when I concatenate two datasets loaded from disk, I encountered a problem: ``` test_dataset = load_from_disk('data/test_dataset') trn_dataset = load_from_disk('data/train_dataset') train_dataset = concatenate_datasets([trn_dataset, test_dataset]) ``` And it reported the Val...
[ -0.09119009971618652, -0.1483130007982254, -0.04474862664937973, 0.6809359788894653, 0.10687576979398727, 0.2218789905309677, 0.31290537118911743, 0.22910870611667633, -0.12348383665084839, 0.12460477650165558, -0.10832247883081436, 0.27360260486602783, -0.08168939501047134, -0.12340896576...
https://github.com/huggingface/datasets/issues/848
Error when concatenate_datasets
@lhoestq we can add a mention of `dataset.flatten_indices()` in the error message (no rush, just put it on your TODO list, or I can do it when I get to it).
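A sketch of the workaround behind that suggestion, using the datasets from the script above: flatten each dataset's indices mapping before concatenating.

```python
from datasets import concatenate_datasets, load_from_disk

test_dataset = load_from_disk('data/test_dataset').flatten_indices()
trn_dataset = load_from_disk('data/train_dataset').flatten_indices()

# both datasets are now "flat", so the concatenation no longer raises
train_dataset = concatenate_datasets([trn_dataset, test_dataset])
```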
Hello, when I concatenate two datasets loaded from disk, I encountered a problem: ``` test_dataset = load_from_disk('data/test_dataset') trn_dataset = load_from_disk('data/train_dataset') train_dataset = concatenate_datasets([trn_dataset, test_dataset]) ``` And it reported the ValueError below: ``` --------------...
31
Error when concatenate_datasets Hello, when I concatenate two datasets loaded from disk, I encountered a problem: ``` test_dataset = load_from_disk('data/test_dataset') trn_dataset = load_from_disk('data/train_dataset') train_dataset = concatenate_datasets([trn_dataset, test_dataset]) ``` And it reported the Val...
[ -0.09119009971618652, -0.1483130007982254, -0.04474862664937973, 0.6809359788894653, 0.10687576979398727, 0.2218789905309677, 0.31290537118911743, 0.22910870611667633, -0.12348383665084839, 0.12460477650165558, -0.10832247883081436, 0.27360260486602783, -0.08168939501047134, -0.12340896576...
https://github.com/huggingface/datasets/issues/847
multiprocessing in dataset map "can only test a child process"
It looks like an issue with wandb/tqdm here. We're using the `multiprocess` library instead of the built-in `multiprocessing` Python package to support various types of mapping functions. Maybe there's some sort of incompatibility. Could you make a minimal script to reproduce, or a Google Colab?
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` -------------------------...
46
multiprocessing in dataset map "can only test a child process" Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, ...
[ -0.39569926261901855, 0.01562918722629547, -0.1538686901330948, -0.18502987921237946, 0.14179755747318268, -0.05785640701651573, 0.5068990588188171, 0.3400261700153351, -0.08340385556221008, 0.1681773066520691, -0.018427539616823196, 0.337063193321228, 0.04750126972794533, 0.09183328598737...
https://github.com/huggingface/datasets/issues/847
multiprocessing in dataset map "can only test a child process"
hi facing the same issue here - `AssertionError: Caught AssertionError in DataLoader worker process 0. Original Traceback (most recent call last): File "/usr/lib/python3.6/logging/__init__.py", line 996, in emit stream.write(msg) File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/lib/redirect.py", l...
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` -------------------------...
293
multiprocessing in dataset map "can only test a child process" Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, ...
[ -0.39569926261901855, 0.01562918722629547, -0.1538686901330948, -0.18502987921237946, 0.14179755747318268, -0.05785640701651573, 0.5068990588188171, 0.3400261700153351, -0.08340385556221008, 0.1681773066520691, -0.018427539616823196, 0.337063193321228, 0.04750126972794533, 0.09183328598737...
https://github.com/huggingface/datasets/issues/847
multiprocessing in dataset map "can only test a child process"
It looks like this warning: "Truncation was not explicitly activated but max_length is provided a specific value," is not handled well by wandb. The error occurs when calling the tokenizer. Maybe you can try to specify `truncation=True` when calling the tokenizer to remove the warning? Otherwise I don't know...
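A minimal sketch of that suggestion against the snippet from the issue, assuming a `tokenizer` and a hypothetical max length:

```python
def tokenizer_fn(example):
    # explicit truncation avoids the warning that trips up wandb's logging
    return tokenizer(example["text"], truncation=True, max_length=512)

ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6,
                                remove_columns=["text"])
```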
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` -------------------------...
80
multiprocessing in dataset map "can only test a child process" Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, ...
[ -0.39569926261901855, 0.01562918722629547, -0.1538686901330948, -0.18502987921237946, 0.14179755747318268, -0.05785640701651573, 0.5068990588188171, 0.3400261700153351, -0.08340385556221008, 0.1681773066520691, -0.018427539616823196, 0.337063193321228, 0.04750126972794533, 0.09183328598737...
https://github.com/huggingface/datasets/issues/847
multiprocessing in dataset map "can only test a child process"
I'm having a similar issue, but it happens when I try to do multiprocessing with the `DataLoader`. Code to reproduce: ``` from datasets import load_dataset book_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:1%]') book_corpus = book_corpus.map(encode, batched=True...
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` -------------------------...
383
multiprocessing in dataset map "can only test a child process" Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, ...
[ -0.39569926261901855, 0.01562918722629547, -0.1538686901330948, -0.18502987921237946, 0.14179755747318268, -0.05785640701651573, 0.5068990588188171, 0.3400261700153351, -0.08340385556221008, 0.1681773066520691, -0.018427539616823196, 0.337063193321228, 0.04750126972794533, 0.09183328598737...
https://github.com/huggingface/datasets/issues/847
multiprocessing in dataset map "can only test a child process"
Isn't it more the PyTorch warning on the use of non-writable memory for tensors that triggers this here, @lhoestq? (It seems to be a warning triggered in `torch.tensor()`.)
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` -------------------------...
29
multiprocessing in dataset map "can only test a child process" Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, ...
[ -0.39569926261901855, 0.01562918722629547, -0.1538686901330948, -0.18502987921237946, 0.14179755747318268, -0.05785640701651573, 0.5068990588188171, 0.3400261700153351, -0.08340385556221008, 0.1681773066520691, -0.018427539616823196, 0.337063193321228, 0.04750126972794533, 0.09183328598737...
https://github.com/huggingface/datasets/issues/847
multiprocessing in dataset map "can only test a child process"
Yep, this time it is a warning from PyTorch that causes wandb to not work properly. Could this be a wandb issue?
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` -------------------------...
23
multiprocessing in dataset map "can only test a child process" Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, ...
[ -0.39569926261901855, 0.01562918722629547, -0.1538686901330948, -0.18502987921237946, 0.14179755747318268, -0.05785640701651573, 0.5068990588188171, 0.3400261700153351, -0.08340385556221008, 0.1681773066520691, -0.018427539616823196, 0.337063193321228, 0.04750126972794533, 0.09183328598737...
https://github.com/huggingface/datasets/issues/847
multiprocessing in dataset map "can only test a child process"
Hi @timothyjlaurent @gaceladri If you're running `transformers` from `master`, you can try setting the env var `WANDB_DISABLED=true` (from https://github.com/huggingface/transformers/pull/9896) and trying again. This issue might be related to https://github.com/huggingface/transformers/issues/9623
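A sketch of setting that variable from Python; it has to run before `transformers` sets up its logging integrations:

```python
import os

# disable the wandb integration before importing/creating the Trainer
os.environ["WANDB_DISABLED"] = "true"
```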
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` -------------------------...
30
multiprocessing in dataset map "can only test a child process" Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, ...
[ -0.39569926261901855, 0.01562918722629547, -0.1538686901330948, -0.18502987921237946, 0.14179755747318268, -0.05785640701651573, 0.5068990588188171, 0.3400261700153351, -0.08340385556221008, 0.1681773066520691, -0.018427539616823196, 0.337063193321228, 0.04750126972794533, 0.09183328598737...
https://github.com/huggingface/datasets/issues/847
multiprocessing in dataset map "can only test a child process"
I have commented out the lines that caused my code to break. I'm now seeing my reports on Wandb and my code does not break. I am training now, so I will probably check in 6 hours. I suppose that setting the wandb disable env var will work as well.
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` -------------------------...
45
multiprocessing in dataset map "can only test a child process" Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, ...
[ -0.39569926261901855, 0.01562918722629547, -0.1538686901330948, -0.18502987921237946, 0.14179755747318268, -0.05785640701651573, 0.5068990588188171, 0.3400261700153351, -0.08340385556221008, 0.1681773066520691, -0.018427539616823196, 0.337063193321228, 0.04750126972794533, 0.09183328598737...
https://github.com/huggingface/datasets/issues/846
Add HoVer multi-hop fact verification dataset
Hi @yjernite, I'm new but wanted to contribute. Has anyone already taken this issue, and do you think it is suitable for newbies?
## Adding a Dataset - **Name:** HoVer - **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples - **Paper:** https://arxiv.org/abs/2011.03088 - **Data:** https://hover-nlp.github.io/ - **Motivation:** There are still few multi-hop information extraction...
23
Add HoVer multi-hop fact verification dataset ## Adding a Dataset - **Name:** HoVer - **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples - **Paper:** https://arxiv.org/abs/2011.03088 - **Data:** https://hover-nlp.github.io/ - **Motivation:** Ther...
[ -0.2272990643978119, -0.08233214169740677, 0.00033864332363009453, 0.1499359905719757, -0.40595048666000366, -0.03460044413805008, 0.1919763684272766, 0.10570194572210312, 0.09617801010608673, -0.14476270973682404, -0.047181613743305206, -0.1035466343164444, -0.2635313868522644, 0.23446680...
https://github.com/huggingface/datasets/issues/846
Add HoVer multi-hop fact verification dataset
Hi @tenjjin! This dataset is still up for grabs! Here's the link with the guide to add it. You should play around with the library first (download and look at a few datasets), then follow the steps here: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md
## Adding a Dataset - **Name:** HoVer - **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples - **Paper:** https://arxiv.org/abs/2011.03088 - **Data:** https://hover-nlp.github.io/ - **Motivation:** There are still few multi-hop information extraction...
39
Add HoVer multi-hop fact verification dataset ## Adding a Dataset - **Name:** HoVer - **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples - **Paper:** https://arxiv.org/abs/2011.03088 - **Data:** https://hover-nlp.github.io/ - **Motivation:** Ther...
[ -0.286743700504303, -0.2775804400444031, -0.08202367275953293, 0.022968648001551628, -0.2790862023830414, -0.12419363111257553, 0.1645181030035019, 0.11470865458250046, 0.09816402196884155, 0.14873318374156952, -0.09630386531352997, 0.05982744321227074, -0.09502258896827698, 0.298151969909...
https://github.com/huggingface/datasets/issues/843
use_custom_baseline still produces errors for bertscore
Thanks for reporting! That's a bug indeed. If you want to contribute, feel free to fix this issue and open a PR :)
`metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py"...
24
use_custom_baseline still produces errors for bertscore `metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_c...
[ -0.02370934747159481, 0.12972012162208557, 0.12809036672115326, 0.042228810489177704, 0.18126773834228516, -0.06104787439107895, 0.4962114095687866, 0.05432983487844467, -0.07811608910560608, 0.10937356948852539, 0.09238934516906738, 0.21730169653892517, -0.21500280499458313, -0.0573223158...
https://github.com/huggingface/datasets/issues/843
use_custom_baseline still produces errors for bertscore
This error is because of a mismatch between `datasets` and `bert_score`. With `datasets==1.1.2` and `bert_score>=0.3.6` it works OK. So `pip install -U bert_score` should fix the problem.
`metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py"...
27
use_custom_baseline still produces errors for bertscore `metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_c...
[ -0.047540489584207535, 0.12642605602741241, 0.10102428495883942, 0.07860185950994492, 0.10257850587368011, -0.11232966184616089, 0.37081268429756165, 0.00623772107064724, -0.0859038382768631, 0.09649987518787384, 0.07316985726356506, 0.32716283202171326, -0.16986176371574402, -0.0304800253...
https://github.com/huggingface/datasets/issues/843
use_custom_baseline still produces errors for bertscore
Hello everyone, I think the problem is not solved: ``` from datasets import load_metric metric=load_metric('bertscore') metric.compute( predictions=predictions, references=references, lang='fr', rescale_with_baseline=True ) TypeError: get_hash() missing 2 required positional arguments: ...
`metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py"...
42
use_custom_baseline still produces errors for bertscore `metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_c...
[ 0.01399239245802164, 0.04069599136710167, 0.09277040511369705, 0.07601725310087204, 0.1578616201877594, -0.04843174293637276, 0.45957502722740173, 0.04065268486738205, -0.07028165459632874, 0.09882186353206635, 0.05724063143134117, 0.24822388589382172, -0.24298250675201416, -0.091258734464...
https://github.com/huggingface/datasets/issues/843
use_custom_baseline still produces errors for bertscore
Hi! This has been fixed by https://github.com/huggingface/datasets/pull/2770; we'll do a new release soon to make the fix available :) In the meantime, please use an older version of `bert_score`.
`metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py"...
30
use_custom_baseline still produces errors for bertscore `metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_c...
[ -0.0653991773724556, 0.10557124018669128, 0.12522727251052856, 0.07913249731063843, 0.17582041025161743, -0.10428115725517273, 0.42146965861320496, 0.018661171197891235, -0.06677263975143433, 0.1184605285525322, 0.03066953271627426, 0.23698508739471436, -0.19792155921459198, 0.030478490516...
https://github.com/huggingface/datasets/issues/842
How to enable `.map()` pre-processing pipelines to support multi-node parallelism?
Right now multiprocessing only runs on a single node. However, it's probably possible to extend it to support multiple nodes. Indeed we're using the `multiprocess` library from the `pathos` project to do multiprocessing in `datasets`, and `pathos` is made to support parallelism on several nodes. More info about pathos [on...
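Until that exists, a common manual workaround is to shard the dataset per node and run `.map()` on each shard; a sketch assuming a `preprocess` function and a node rank provided by your launcher:

```python
num_nodes = 4   # hypothetical cluster size
node_rank = 0   # this node's index, e.g. read from your launcher's env vars

# each node processes a disjoint 1/num_nodes slice of the data
shard = dataset.shard(num_shards=num_nodes, index=node_rank)
shard = shard.map(preprocess, num_proc=8)  # multiprocessing within the node
```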
Hi, Currently, multiprocessing can be enabled for the `.map()` stages on a single node. However, in the case of multi-node training, (since more than one node would be available) I'm wondering if it's possible to extend the parallel processing among nodes, instead of only 1 node running the `.map()` while the other ...
76
How to enable `.map()` pre-processing pipelines to support multi-node parallelism? Hi, Currently, multiprocessing can be enabled for the `.map()` stages on a single node. However, in the case of multi-node training, (since more than one node would be available) I'm wondering if it's possible to extend the parallel...
[ -0.3369056284427643, -0.3196839988231659, -0.16863512992858887, -0.07855023443698883, -0.13222838938236237, 0.033465418964624405, 0.0507950633764267, -0.0447823628783226, 0.23892061412334442, 0.23582929372787476, 0.2544298470020294, 0.5422919988632202, -0.15785205364227295, 0.3227919340133...
https://github.com/huggingface/datasets/issues/841
Can not reuse datasets already downloaded
It seems the process needs '/datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py'. Where and how do I point it at this `wikipedia.py` after I manually download it?
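One answer, as a hedged sketch: `load_dataset` also accepts a local path to a loading script, so the manually downloaded file can be passed directly (the path and config below are hypothetical, and the data files must already be in the cache since the GPU node has no internet access):

```python
from datasets import load_dataset

# point load_dataset at the manually downloaded script instead of the hub
wiki = load_dataset("/path/to/local/wikipedia.py", "20200501.fr")
```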
Hello, I need to connect to a frontal node (with an HTTP proxy, no GPU) before connecting to a GPU node (but no HTTP proxy, so I cannot use wget and so on). I successfully downloaded and reused the wikipedia dataset on a frontal node. When I connect to the GPU node, I am supposed to use the downloaded dataset from the cache, but...
19
Can not reuse datasets already downloaded Hello, I need to connect to a frontal node (with an HTTP proxy, no GPU) before connecting to a GPU node (but no HTTP proxy, so I cannot use wget and so on). I successfully downloaded and reused the wikipedia dataset on a frontal node. When I connect to the GPU node, I am supposed to...
[ -0.15487243235111237, -0.23765875399112701, -0.09354013204574585, 0.2473086267709732, 0.3116976022720337, 0.09384498000144958, 0.2028711587190628, 0.054930660873651505, 0.43617942929267883, -0.1159731075167656, 0.05712587758898735, -0.06967675685882568, 0.46847039461135864, -0.013519854284...
https://github.com/huggingface/datasets/issues/836
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
Which version of pyarrow do you have? Could you try to update pyarrow and try again?
Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-...
18
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading a...
[ -0.24118328094482422, -0.3198656737804413, -0.06815572828054428, 0.43160730600357056, 0.3625253736972809, 0.01915484294295311, 0.5210208296775818, 0.43771326541900635, 0.29491308331489563, 0.03277552127838135, 0.016962869092822075, -0.11870472133159637, 0.09059187024831772, 0.1802898049354...
https://github.com/huggingface/datasets/issues/836
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
Thanks for the fast response. I have the latest version, '2.0.0' (I tried to update). I am working with Python 3.8.5.
Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-...
21
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading a...
[ -0.24118328094482422, -0.3198656737804413, -0.06815572828054428, 0.43160730600357056, 0.3625253736972809, 0.01915484294295311, 0.5210208296775818, 0.43771326541900635, 0.29491308331489563, 0.03277552127838135, 0.016962869092822075, -0.11870472133159637, 0.09059187024831772, 0.1802898049354...
https://github.com/huggingface/datasets/issues/836
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
I think that the issue is similar to this one: https://issues.apache.org/jira/browse/ARROW-9612 The problem is in Arrow when the column data contains long strings. Any ideas on how to bypass this?
Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-...
29
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading a...
[ -0.24118328094482422, -0.3198656737804413, -0.06815572828054428, 0.43160730600357056, 0.3625253736972809, 0.01915484294295311, 0.5210208296775818, 0.43771326541900635, 0.29491308331489563, 0.03277552127838135, 0.016962869092822075, -0.11870472133159637, 0.09059187024831772, 0.1802898049354...
https://github.com/huggingface/datasets/issues/836
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
We should expose the [`block_size` argument](https://arrow.apache.org/docs/python/generated/pyarrow.csv.ReadOptions.html#pyarrow.csv.ReadOptions) of Apache Arrow csv `ReadOptions` in the [script](https://github.com/huggingface/datasets/blob/master/datasets/csv/csv.py). In the meantime you can specify yourself the ...
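A sketch of that interim workaround: read the CSV yourself with a larger pyarrow `block_size` and wrap the resulting table in a `Dataset` (the file name and block size are hypothetical):

```python
import pyarrow as pa
import pyarrow.csv
from datasets import Dataset

read_options = pa.csv.ReadOptions(block_size=16 * 1024 * 1024)  # 16 MiB
table = pa.csv.read_csv("my_file.csv", read_options=read_options)
dataset = Dataset(table)
```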
Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-...
56
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading a...
[ -0.24118328094482422, -0.3198656737804413, -0.06815572828054428, 0.43160730600357056, 0.3625253736972809, 0.01915484294295311, 0.5210208296775818, 0.43771326541900635, 0.29491308331489563, 0.03277552127838135, 0.016962869092822075, -0.11870472133159637, 0.09059187024831772, 0.1802898049354...
https://github.com/huggingface/datasets/issues/836
load_dataset with 'csv' is not working, while the same file loads with 'text' mode or with pandas
This did help to load the data, but the problem now is that I get: ArrowInvalid: CSV parse error: Expected 5 columns, got 187 It seems that this changes the parsing, so I changed the table to tab-separated and tried to load it directly with pyarrow, but I got a similar error (a possible fix is sketched below). Again, it loaded fine in pandas, so I am no...
Hi all, I am trying to load a custom dataset, and I am loading a single file first to make sure the file loads correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-...
66
load_dataset with 'csv' is not working, while the same file loads with 'text' mode or with pandas Hi all, I am trying to load a custom dataset, and I am loading a single file first to make sure the file loads correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading a...
[ -0.24118328094482422, -0.3198656737804413, -0.06815572828054428, 0.43160730600357056, 0.3625253736972809, 0.01915484294295311, 0.5210208296775818, 0.43771326541900635, 0.29491308331489563, 0.03277552127838135, 0.016962869092822075, -0.11870472133159637, 0.09059187024831772, 0.1802898049354...
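A hedged sketch of one way around the "Expected 5 columns, got 187" parse error when loading the tab-separated file directly with pyarrow; `newlines_in_values` addresses embedded newlines inside quoted fields, a common cause of mismatched column counts, and the file name is an assumption.

```python
import pyarrow.csv as pv

# Declare the tab delimiter explicitly and allow quoted fields to contain
# newline characters, which otherwise throws off the column count.
parse_options = pv.ParseOptions(delimiter="\t", newlines_in_values=True)
table = pv.read_csv("my_file.tsv", parse_options=parse_options)  # hypothetical file
print(table.schema)
```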
https://github.com/huggingface/datasets/issues/836
load_dataset with 'csv' is not working, while the same file loads with 'text' mode or with pandas
Got almost the same error loading a ~5 GB TSV file: first I got the same error as the OP, then I tried giving it my own ReadOptions and still got the same CSV parse error.
Hi all, I am trying to load a custom dataset, and I am loading a single file first to make sure the file loads correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-...
32
load_dataset with 'csv' is not working, while the same file loads with 'text' mode or with pandas Hi all, I am trying to load a custom dataset, and I am loading a single file first to make sure the file loads correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading a...
[ -0.24118328094482422, -0.3198656737804413, -0.06815572828054428, 0.43160730600357056, 0.3625253736972809, 0.01915484294295311, 0.5210208296775818, 0.43771326541900635, 0.29491308331489563, 0.03277552127838135, 0.016962869092822075, -0.11870472133159637, 0.09059187024831772, 0.1802898049354...
https://github.com/huggingface/datasets/issues/835
Wikipedia postprocessing
Hi @bminixhofer! Parsing MediaWiki markup is notoriously difficult: this processing used [mwparserfromhell](https://github.com/earwig/mwparserfromhell), which is pretty good but not perfect. As an alternative, you can also use the Wiki40B dataset, which was pre-processed using an unreleased internal Google tool (see the sketch below).
Hi, thanks for this library! Running this code: ```py import datasets wikipedia = datasets.load_dataset("wikipedia", "20200501.de") print(wikipedia['train']['text'][0]) ``` I get: ``` mini|Ricardo Flores Magón mini|Mexikanische Revolutionäre, Magón in der Mitte anführend, gegen die Diktatur von Porfir...
38
Wikipedia postprocessing Hi, thanks for this library! Running this code: ```py import datasets wikipedia = datasets.load_dataset("wikipedia", "20200501.de") print(wikipedia['train']['text'][0]) ``` I get: ``` mini|Ricardo Flores Magón mini|Mexikanische Revolutionäre, Magón in der Mitte anführend, ge...
[ -0.037630122154951096, 0.11600078642368317, -0.1909409910440445, 0.4500380754470825, 0.29546478390693665, -0.2197374403476715, 0.3712627589702606, 0.2625179886817932, -0.002885247115045786, 0.0670771449804306, 0.16373232007026672, 0.36848798394203186, 0.04031369462609291, -0.06343391537666...
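A short sketch of the suggested Wiki40B alternative, assuming the German config is exposed as "de" and that the pre-processed files are available for direct download:

```python
from datasets import load_dataset

# Wiki40B was cleaned with Google-internal tooling, so the text should be
# free of the "mini|..." MediaWiki residue quoted in the issue above.
wiki40b = load_dataset("wiki40b", "de")
print(wiki40b["train"][0]["text"])
```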
https://github.com/huggingface/datasets/issues/834
[GEM] add WikiLingua cross-lingual abstractive summarization dataset
Hey @yjernite. This is a very interesting dataset. I would love to work on adding it, but I see that the link to the data is to a Google Drive folder. Can I just confirm whether `dl_manager` can handle Google Drive URLs, or would this have to be a manual download? (A possible workaround is sketched below.)
## Adding a Dataset - **Name:** WikiLingua - **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images that are used to describe each how-to step in an article. - **Paper:** h...
48
[GEM] add WikiLingua cross-lingual abstractive summarization dataset ## Adding a Dataset - **Name:** WikiLingua - **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images th...
[ -0.20528936386108398, 0.31998881697654724, 0.07221487909555435, 0.5395575761795044, 0.13003748655319214, 0.21938076615333557, 0.12780003249645233, -0.18642424046993256, -0.028953509405255318, -0.06992631405591965, 0.2743758261203766, 0.000888182723429054, -0.4298545718193054, 0.20579624176...
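On the Google Drive question, a hedged sketch of the usual workaround: the download manager handles plain HTTP(S) URLs, so a Drive share link is typically rewritten into its direct-download form first. The `FILE_ID` placeholder is an assumption, not the real WikiLingua id.

```python
from datasets import DownloadManager

# Hypothetical direct-download rewrite of a Drive share link; inside a
# dataset script the same URL would go to dl_manager.download_and_extract.
url = "https://drive.google.com/uc?export=download&id=FILE_ID"
local_path = DownloadManager().download(url)
print(local_path)
```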
https://github.com/huggingface/datasets/issues/834
[GEM] add WikiLingua cross-lingual abstractive summarization dataset
Hi @KMFODA! A version of WikiLingua is actually already accessible in the [GEM dataset](https://huggingface.co/datasets/gem). You can use it, for example, to load the French-to-English translation with: ```python from datasets import load_dataset wikilingua = load_dataset("gem", "wiki_lingua_french_fr") ``` Clo...
## Adding a Dataset - **Name:** WikiLingua - **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images that are used to describe each how-to step in an article. - **Paper:** h...
42
[GEM] add WikiLingua cross-lingual abstractive summarization dataset ## Adding a Dataset - **Name:** WikiLingua - **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images th...
[ -0.2764139473438263, 0.002688318956643343, -0.10010240972042084, 0.3107094466686249, -0.11341528594493866, 0.14023438096046448, 0.010671926662325859, 0.2736698389053345, -0.049723438918590546, 0.14254416525363922, -0.06661218404769897, 0.2388453632593155, 0.03321676328778267, 0.32905408740...
https://github.com/huggingface/datasets/issues/827
[GEM] MultiWOZ dialogue dataset
Hi @yjernite, can I help in adding this dataset? I am excited about this because it will be my first contribution to the datasets library as well as to Hugging Face.
## Adding a Dataset - **Name:** MultiWOZ (Multi-Domain Wizard-of-Oz) - **Description:** 10k annotated human-human dialogues. Each dialogue consists of a goal, multiple user and system utterances as well as a belief state. Only system utterances are annotated with dialogue acts – there are no annotations from the user...
30
[GEM] MultiWOZ dialogue dataset ## Adding a Dataset - **Name:** MultiWOZ (Multi-Domain Wizard-of-Oz) - **Description:** 10k annotated human-human dialogues. Each dialogue consists of a goal, multiple user and system utterances as well as a belief state. Only system utterances are annotated with dialogue acts – ther...
[ -0.049559466540813446, -0.12617550790309906, 0.048442542552948, 0.5216317772865295, 0.028047097846865654, 0.2442048043012619, 0.22579002380371094, -0.12951326370239258, -0.07643046975135803, -0.03523027151823044, -0.3937833607196808, 0.02837585099041462, -0.41572850942611694, 0.37123554944...
https://github.com/huggingface/datasets/issues/824
Discussion using datasets in offline mode
I think it would be very cool. I'm currently working on a Compute Canada cluster, and I have internet access only when I'm not on the nodes where I run the scripts. So I was expecting to be able to use the wmt14 dataset, until I realized I needed an internet connection even though I had already downloaded the data (the usual workaround is sketched below). I'm going ...
`datasets.load_dataset("csv", ...)` breaks if you have no connection (there is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I created this ticket to discuss a bit and gather what you have in mind or other proposals. Here are some point...
72
Discussion using datasets in offline mode `datasets.load_dataset("csv", ...)` breaks if you have no connection (there is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I created this ticket to discuss a bit and gather what you have in mind o...
[ -0.6300708651542664, 0.20589101314544678, -0.010249721817672253, 0.13883249461650848, 0.19344668090343475, -0.1737719476222992, 0.5314207673072815, 0.08316382765769958, 0.3500733971595764, 0.23671187460422516, 0.13064807653427124, -0.0646166130900383, -0.1149483397603035, 0.489258110523223...
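A hedged sketch of the workaround mentioned above for clusters like Compute Canada: materialize the dataset while online, then reload it from disk on the offline nodes. The scratch path is an assumption.

```python
from datasets import load_dataset, load_from_disk

# On the login node, while internet access is available:
wmt = load_dataset("wmt14", "de-en")
wmt.save_to_disk("/scratch/wmt14_de_en")  # hypothetical shared filesystem path

# Later, on an offline compute node:
wmt = load_from_disk("/scratch/wmt14_de_en")
```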
https://github.com/huggingface/datasets/issues/824
Discussion using datasets in offline mode
Requiring an online connection is a deal breaker in some cases, unfortunately, so it would be great if an offline mode were added, similar to how `transformers` loads models offline just fine. @mandubian's second bullet point suggests that there's a workaround allowing you to use your offline (custom?) dataset with `datasets` (see the sketch below). Could yo...
`datasets.load_dataset("csv", ...)` breaks if you have no connection (there is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I created this ticket to discuss a bit and gather what you have in mind or other proposals. Here are some point...
57
Discussion using datasets in offline mode `datasets.load_dataset("csv", ...)` breaks if you have no connection (there is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I created this ticket to discuss a bit and gather what you have in mind o...
[ -0.4731806814670563, 0.24578382074832916, -0.01263077650219202, 0.14121387898921967, 0.2833555042743683, -0.14702095091342926, 0.6012279987335205, 0.012776016257703304, 0.2687808573246002, 0.13127794861793518, -0.024036703631281853, -0.02398514188826084, -0.024662259966135025, 0.4021065533...
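For reference, `datasets` did later gain an offline switch along the lines requested here; a minimal sketch, assuming a version recent enough to honor the `HF_DATASETS_OFFLINE` environment variable:

```python
import os

# Must be set before the library is imported, since the flag is read at import time.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

# With the flag set and the csv builder cached, loading a local file no
# longer requires a connection.
dataset = load_dataset("csv", data_files="my_file.csv")  # hypothetical local file
```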