Dataset schema (column name, type, and min/max string length or value):

| Column | Type | Min | Max |
| --- | --- | --- | --- |
| html_url | string (length) | 48 | 51 |
| title | string (length) | 5 | 268 |
| comments | string (length) | 70 | 51.8k |
| body | string (length) | 0 | 29.8k |
| comment_length | int64 (value) | 16 | 1.52k |
| text | string (length) | 164 | 54.1k |
| embeddings | list | | |
https://github.com/huggingface/datasets/issues/1857
Unable to upload "community provided" dataset - 400 Client Error
Hi ! We're in the process of switching the community datasets to git repos, exactly like what we're doing for models. You can find an example here: https://huggingface.co/datasets/lhoestq/custom_squad/tree/main We'll update the CLI in the coming days and do a new release :) Also cc @julien-c maybe we can make i...
Hi, I'm trying to upload a dataset as described [here](https://huggingface.co/docs/datasets/v1.2.0/share_dataset.html#sharing-a-community-provided-dataset). This is what happens: ``` $ datasets-cli login $ datasets-cli upload_dataset my_dataset About to upload file /path/to/my_dataset/dataset_infos.json to S3...
54
Unable to upload "community provided" dataset - 400 Client Error Hi, i'm trying to a upload a dataset as described [here](https://huggingface.co/docs/datasets/v1.2.0/share_dataset.html#sharing-a-community-provided-dataset). This is what happens: ``` $ datasets-cli login $ datasets-cli upload_dataset my_dataset...
[ -0.09157098084688187, -0.03636622428894043, 0.034416839480400085, 0.06724171340465546, 0.36497053503990173, -0.049285463988780975, 0.17340518534183502, 0.004484193399548531, -0.22304560244083405, -0.14681170880794525, -0.1488376408815384, -0.020599959418177605, 0.005085619632154703, 0.3581...
https://github.com/huggingface/datasets/issues/1856
load_dataset("amazon_polarity") NonMatchingChecksumError
Hi ! This issue may be related to #996. This probably comes from the Quota Exceeded error from Google Drive. Can you try again tomorrow and see if you still have the error ? On my side I didn't get any error today with `load_dataset("amazon_polarity")`
Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError. To reproduce: ``` load_dataset("amazon_polarity") ``` This will give the following error: ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback ...
45
load_dataset("amazon_polarity") NonMatchingChecksumError Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError. To reproduce: ``` load_dataset("amazon_polarity") ``` This will give the following error: ``` ----------------------------------------------------------------------...
[ -0.14463524520397186, 0.1181081160902977, -0.12130685895681381, 0.24109329283237457, 0.12806227803230286, -0.003592638298869133, 0.3443527817726135, 0.06529989093542099, 0.2865488827228546, 0.23939190804958344, 0.028122272342443466, -0.06478916108608246, 0.009715796448290348, 0.15067976713...
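If the checksum mismatch really is caused by the Google Drive quota, a common workaround once the quota has reset is to force a fresh download so a stale cached file is not reused. This is only a hedged sketch: `download_mode` and `ignore_verifications` are real `load_dataset` parameters, but their exact names and accepted values vary across `datasets` releases.

```python
from datasets import load_dataset

# Retry once the Google Drive quota has reset, forcing a fresh download
# so a stale or partially downloaded cached file is not reused.
dataset = load_dataset("amazon_polarity", download_mode="force_redownload")

# Last resort (not recommended): skip the checksum verification entirely.
dataset = load_dataset("amazon_polarity", ignore_verifications=True)
```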
https://github.com/huggingface/datasets/issues/1856
load_dataset("amazon_polarity") NonMatchingChecksumError
@lhoestq Hi! I encounter the same error when loading `yelp_review_full`. ``` from datasets import load_dataset dataset_yp = load_dataset("yelp_review_full") ``` When you say the "Quota Exceeded from Google Drive", is this a quota from the dataset owner, or the quota from our (the runner's) Google Drive?
Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError. To reproduce: ``` load_dataset("amazon_polarity") ``` This will give the following error: ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback ...
45
load_dataset("amazon_polarity") NonMatchingChecksumError Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError. To reproduce: ``` load_dataset("amazon_polarity") ``` This will give the following error: ``` ----------------------------------------------------------------------...
[ -0.06117379292845726, 0.14232762157917023, -0.10863210260868073, 0.22138342261314392, 0.12633629143238068, 0.12175075709819794, 0.24527676403522491, 0.010380279272794724, 0.3931344747543335, 0.10966632515192032, -0.13955886662006378, -0.020144829526543617, 0.036381810903549194, 0.025237916...
https://github.com/huggingface/datasets/issues/1856
load_dataset("amazon_polarity") NonMatchingChecksumError
> When you say the "Quota Exceeded from Google drive". Is this a quota from the dataset owner? or the quota from our (the runner) Google Drive? Each file on Google Drive can be downloaded only a certain amount of times per day because of a quota. The quota is reset every day. So if too many people download the datas...
Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError. To reproduce: ``` load_dataset("amazon_polarity") ``` This will give the following error: ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback ...
127
load_dataset("amazon_polarity") NonMatchingChecksumError Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError. To reproduce: ``` load_dataset("amazon_polarity") ``` This will give the following error: ``` ----------------------------------------------------------------------...
[ -0.1349429190158844, 0.047477882355451584, -0.11106205731630325, 0.25881001353263855, 0.062481943517923355, -0.03678252920508385, 0.2863379418849945, 0.01199390273541212, 0.3435027003288269, 0.2581290006637573, 0.07673431932926178, -0.0375790111720562, 0.003082318464294076, 0.0379523970186...
https://github.com/huggingface/datasets/issues/1856
load_dataset("amazon_polarity") NonMatchingChecksumError
@lhoestq Gotcha, that is quite problematic...for what it's worth, I've had no issues with the other datasets I tried, such as `yelp_reviews_full` and `amazon_reviews_multi`.
Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError. To reproduce: ``` load_dataset("amazon_polarity") ``` This will give the following error: ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback ...
24
load_dataset("amazon_polarity") NonMatchingChecksumError Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError. To reproduce: ``` load_dataset("amazon_polarity") ``` This will give the following error: ``` ----------------------------------------------------------------------...
[ -0.19478663802146912, 0.0554472990334034, -0.08366961032152176, 0.15782234072685242, 0.17194846272468567, 0.0804428905248642, 0.23757299780845642, 0.09915191680192947, 0.24162019789218903, -0.013614020310342312, -0.08398184925317764, 0.06070075184106827, -0.003998273983597755, -0.141038477...
https://github.com/huggingface/datasets/issues/1856
load_dataset("amazon_polarity") NonMatchingChecksumError
Same issue today with "big_patent", though the symptoms are slightly different. When running ```py from datasets import load_dataset load_dataset("big_patent", split="validation") ``` I get the following `FileNotFoundError: Local file \huggingface\datasets\downloads\6159313604f4f2c01e7d1cac52139343b6c07f73...
Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError. To reproduce: ``` load_dataset("amazon_polarity") ``` This will give the following error: ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback ...
230
load_dataset("amazon_polarity") NonMatchingChecksumError Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError. To reproduce: ``` load_dataset("amazon_polarity") ``` This will give the following error: ``` ----------------------------------------------------------------------...
[ -0.19774773716926575, 0.16995744407176971, -0.051092568784952164, 0.2398211508989334, 0.14944009482860565, -0.05001041293144226, 0.2648402750492096, 0.22645094990730286, 0.3186415433883667, 0.06029510498046875, -0.062127187848091125, 0.007255561649799347, -0.004848541226238012, -0.13671769...
https://github.com/huggingface/datasets/issues/1854
Feature Request: Dataset.add_item
Hi @sshleifer. I am not sure I understand the need for the `add_item` approach... By just reading your "Desired API" section, I would say you could (nearly) get it with a 1-column Dataset: ```python data = {"input_ids": [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5])]} ds = Dataset.from_dict...
I'm trying to integrate `huggingface/datasets` functionality into `fairseq`, which requires (afaict) being able to build a dataset through an `add_item` method, such as https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L318, as opposed to loading all the text into arrow, and then `dataset.m...
48
Feature Request: Dataset.add_item I'm trying to integrate `huggingface/datasets` functionality into `fairseq`, which requires (afaict) being able to build a dataset through an `add_item` method, such as https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L318, as opposed to loading all the ...
[ -0.264629602432251, 0.16031573712825775, -0.04589061811566353, 0.1386154443025589, -0.025280846282839775, 0.12016993761062622, 0.15693829953670502, 0.1071404293179512, 0.05867019668221474, 0.04408647119998932, 0.10954906791448593, 0.5823285579681396, -0.15739132463932037, 0.138006761670112...
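For reference, a minimal completion of the snippet from that comment, showing the 1-column `Dataset.from_dict` workaround; the token ids are just made-up example values.

```python
import numpy as np
from datasets import Dataset

# Build a 1-column dataset from already-tokenized sequences in a single call,
# instead of appending items one by one.
data = {"input_ids": [np.array([4, 4, 2]), np.array([8, 6, 5, 5, 2]), np.array([3, 3, 31, 5])]}
ds = Dataset.from_dict(data)

print(ds[1]["input_ids"])  # [8, 6, 5, 5, 2]
```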
https://github.com/huggingface/datasets/issues/1854
Feature Request: Dataset.add_item
Hi @sshleifer :) We don't have methods like `Dataset.add_batch` or `Dataset.add_entry/add_item` yet. But that's something we'll add pretty soon. Would an API that looks roughly like this help ? Do you have suggestions ? ```python import numpy as np from datasets import Dataset tokenized = [np.array([4,4,2]),...
I'm trying to integrate `huggingface/datasets` functionality into `fairseq`, which requires (afaict) being able to build a dataset through an `add_item` method, such as https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L318, as opposed to loading all the text into arrow, and then `dataset.m...
92
Feature Request: Dataset.add_item I'm trying to integrate `huggingface/datasets` functionality into `fairseq`, which requires (afaict) being able to build a dataset through an `add_item` method, such as https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L318, as opposed to loading all the ...
[ -0.264629602432251, 0.16031573712825775, -0.04589061811566353, 0.1386154443025589, -0.025280846282839775, 0.12016993761062622, 0.15693829953670502, 0.1071404293179512, 0.05867019668221474, 0.04408647119998932, 0.10954906791448593, 0.5823285579681396, -0.15739132463932037, 0.138006761670112...
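A rough sketch of the incremental API being discussed above. `Dataset.add_item` did eventually land in later `datasets` releases (it returns a new dataset rather than mutating in place), but treat the exact signature here as an assumption.

```python
from datasets import Dataset

ds = Dataset.from_dict({"input_ids": [[4, 4, 2]]})

# Append one example at a time; each call returns a new Dataset object.
ds = ds.add_item({"input_ids": [8, 6, 5, 5, 2]})
ds = ds.add_item({"input_ids": [3, 3, 31, 5]})

print(len(ds), ds[2]["input_ids"])  # 3 [3, 3, 31, 5]
```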
https://github.com/huggingface/datasets/issues/1849
Add TIMIT
@patrickvonplaten Could you please help me with how the output text has to be represented in the data? TIMIT has Words, Phonemes and texts. It also has a lot of info on the speaker and the dialect. An example of how to arrange it would be super helpful!
## Adding a Dataset - **Name:** *TIMIT* - **Description:** *The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems* - **Paper:** *Homepage*: http://groups.inf.ed.ac.uk...
51
Add TIMIT ## Adding a Dataset - **Name:** *TIMIT* - **Description:** *The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems* - **Paper:** *Homepage*: http://groups....
[ 0.07334112375974655, -0.3110240697860718, -0.1762830913066864, 0.18871566653251648, 0.08587668836116791, -0.10724178701639175, 0.11768975853919983, 0.0632871463894844, -0.42619067430496216, -0.13589204847812653, -0.2996528744697571, 0.13682131469249725, -0.07004347443580627, 0.313822418451...
https://github.com/huggingface/datasets/issues/1849
Add TIMIT
Hey @vrindaprabhu - sure I'll help you :-) Could you open a first PR for TIMIT where you copy-paste more or less the `librispeech_asr` script: https://github.com/huggingface/datasets/blob/28be129db862ec89a87ac9349c64df6b6118aff4/datasets/librispeech_asr/librispeech_asr.py#L93 (obviously replacing all the naming and lin...
## Adding a Dataset - **Name:** *TIMIT* - **Description:** *The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems* - **Paper:** *Homepage*: http://groups.inf.ed.ac.uk...
85
Add TIMIT ## Adding a Dataset - **Name:** *TIMIT* - **Description:** *The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems* - **Paper:** *Homepage*: http://groups....
[ -0.24468469619750977, -0.47310420870780945, -0.17479470372200012, 0.05118829011917114, 0.12573257088661194, -0.09047053754329681, 0.07210580259561539, 0.21556350588798523, -0.2834831774234772, 0.12188906967639923, -0.3020021915435791, 0.1912917047739029, -0.3662663400173187, 0.164515390992...
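To make the layout question concrete, here is a hedged sketch of what the `_info()` features for a TIMIT-style loader could look like, modeled loosely on the `librispeech_asr` script; the field names are assumptions, not the final schema.

```python
import datasets

# Hypothetical feature layout: one row per utterance, with the audio path,
# the orthographic transcription, and the word/phoneme segmentations.
features = datasets.Features(
    {
        "file": datasets.Value("string"),   # path to the .wav file
        "text": datasets.Value("string"),   # orthographic transcription
        "phonetic_detail": datasets.Sequence(
            {"start": datasets.Value("int64"), "stop": datasets.Value("int64"), "utterance": datasets.Value("string")}
        ),
        "word_detail": datasets.Sequence(
            {"start": datasets.Value("int64"), "stop": datasets.Value("int64"), "utterance": datasets.Value("string")}
        ),
        "dialect_region": datasets.Value("string"),
        "speaker_id": datasets.Value("string"),
    }
)
```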
https://github.com/huggingface/datasets/issues/1849
Add TIMIT
I am sorry! I created the PR [#1903](https://github.com/huggingface/datasets/pull/1903#). Requesting your comments! CircleCI tests are failing, will address them along with your comments!
## Adding a Dataset - **Name:** *TIMIT* - **Description:** *The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems* - **Paper:** *Homepage*: http://groups.inf.ed.ac.uk...
22
Add TIMIT ## Adding a Dataset - **Name:** *TIMIT* - **Description:** *The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems* - **Paper:** *Homepage*: http://groups....
[ -0.24184942245483398, -0.34229138493537903, -0.14152643084526062, 0.07103144377470016, -0.03408156707882881, -0.09505589306354523, 0.1323646605014801, 0.13229112327098846, -0.318258136510849, 0.21849268674850464, -0.3416486084461212, 0.18533483147621155, -0.22746430337429047, 0.21728275716...
https://github.com/huggingface/datasets/issues/1844
Update Open Subtitles corpus with original sentence IDs
Hi ! You're right, this can be useful. This should be easy to add, so feel free to give it a try if you want to contribute :) I think we just need to add it to the _generate_examples method of the OpenSubtitles dataset builder [here](https://github.com/huggingface/datasets/blob/master/datasets/open_subtitles/open_sub...
Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles). I can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media id, subtitle file id and sentence id), therefore somewhat a...
46
Update Open Subtitles corpus with original sentence IDs Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles). I can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media i...
[ 0.20111314952373505, 0.16045242547988892, -0.04834134504199028, -0.20894064009189606, -0.13016551733016968, 0.2428107112646103, 0.27112454175949097, 0.25712400674819946, -0.2969919443130493, -0.03867390379309654, -0.31456419825553894, 0.24536386132240295, 0.2158559262752533, -0.07630900293...
https://github.com/huggingface/datasets/issues/1844
Update Open Subtitles corpus with original sentence IDs
Hey @lhoestq , absolutely yes! Just one question before I start implementing. The ids found in the zip file have this format: (the following is line `22497315` of the `ids` file of the `de-en` dump) `de/2017/7006210/7063319.xml.gz en/2017/7006210/7050201.xml.gz 335 339 340` (every space is actually a tab, ...
Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles). I can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media id, subtitle file id and sentence id), therefore somewhat a...
217
Update Open Subtitles corpus with original sentence IDs Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles). I can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media i...
[ 0.25303956866264343, 0.2482582926750183, -0.01338670402765274, -0.1263539344072342, -0.2649044096469879, 0.16548404097557068, 0.2578907310962677, 0.2178698182106018, -0.3385523557662964, -0.11331034451723099, -0.31917351484298706, 0.3215388357639313, 0.1591855138540268, -0.1485190242528915...
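A small hedged helper showing how the tab-separated id line quoted above could be split into year, IMDB id, subtitle id and sentence ids; the field meanings follow the comment's description and should be double-checked against the real `ids` file.

```python
def parse_ids_line(line):
    # e.g. "de/2017/7006210/7063319.xml.gz\ten/2017/7006210/7050201.xml.gz\t335\t339 340"
    src_path, tgt_path, *id_fields = line.rstrip("\n").split("\t")
    # "de/2017/7006210/7063319.xml.gz" -> language, year, IMDB id, subtitle file id
    _, year, imdb_id, fname = src_path.split("/")
    subtitle_id = fname.split(".")[0]
    # trailing fields hold the aligned sentence ids (possibly space-separated within a field)
    sentence_ids = [int(i) for field in id_fields for i in field.split()]
    return int(year), int(imdb_id), int(subtitle_id), sentence_ids
```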
https://github.com/huggingface/datasets/issues/1844
Update Open Subtitles corpus with original sentence IDs
I like the idea of having `year`, `imdbId` and `subtitleId` as columns for filtering for example. And for the `sentenceIds` a list of integers is fine.
Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles). I can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media id, subtitle file id and sentence id), therefore somewhat a...
26
Update Open Subtitles corpus with original sentence IDs Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles). I can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media i...
[ 0.20258626341819763, 0.15204806625843048, -0.046398378908634186, -0.215180441737175, -0.20975883305072784, 0.22186361253261566, 0.29271450638771057, 0.3270959258079529, -0.2807449996471405, -0.08636625856161118, -0.34239351749420166, 0.17482228577136993, 0.1811576634645462, -0.133489415049...
https://github.com/huggingface/datasets/issues/1844
Update Open Subtitles corpus with original sentence IDs
Something like this? (adapted from [here](https://github.com/huggingface/datasets/blob/master/datasets/open_subtitles/open_subtitles.py#L114)) ```python result = ( sentence_counter, { "id": str(sentence_counter), "meta": { "year": year, "imdbId": imdb_id, "subtitleId...
Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles). I can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media id, subtitle file id and sentence id), therefore somewhat a...
79
Update Open Subtitles corpus with original sentence IDs Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles). I can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media i...
[ 0.22097864747047424, 0.034774865955114365, -0.017486119642853737, -0.15179544687271118, -0.20496001839637756, 0.3104090094566345, 0.3499244451522827, 0.23899142444133759, -0.4182048439979553, -0.11902163922786713, -0.39921995997428894, 0.2717110514640808, 0.2253200113773346, -0.12070716917...
https://github.com/huggingface/datasets/issues/1843
MustC Speech Translation
That's awesome! Actually, I just noticed that this dataset might become a bit too big! MuST-C is the main dataset used for IWSLT19 and should probably be added as a standalone dataset. Would you also be interested in adding `datasets/MuST-C` instead? Description: _MuST-C is a multilingual speech translation cor...
## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google.com/view/iwslt-evaluation-2...
188
MustC Speech Translation ## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google....
[ -0.28751295804977417, 0.17527498304843903, -0.05233779177069664, -0.016081223264336586, -0.10274030268192291, 0.04134631156921387, 0.03224589303135872, 0.15731316804885864, -0.3544584810733795, 0.23967675864696503, -0.23819857835769653, -0.11117725074291229, -0.19824498891830444, 0.1046883...
https://github.com/huggingface/datasets/issues/1843
MustC Speech Translation
Hi @patrickvonplaten I have tried downloading this dataset, but the connection seems to reset all the time. I have tried it via the browser, wget, and using gdown . But it gives me an error message. _"The server is busy or down, pls try again"_ (rephrasing the message here) I have completed adding 4 datasets in th...
## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google.com/view/iwslt-evaluation-2...
90
MustC Speech Translation ## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google....
[ -0.405750572681427, 0.11951646208763123, -0.003910532221198082, 0.1509024202823639, 0.0220462828874588, -0.0853213369846344, -0.13353091478347778, -0.00625575752928853, -0.24000242352485657, 0.12572908401489258, -0.2750508785247803, -0.31894734501838684, 0.10233280807733536, 0.139847218990...
https://github.com/huggingface/datasets/issues/1843
MustC Speech Translation
@skyprince999, I think I'm getting the same error you're getting :-/ ``` Sorry, you can't view or download this file at this time. Too many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with m...
## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google.com/view/iwslt-evaluation-2...
117
MustC Speech Translation ## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google....
[ -0.3741525113582611, 0.051990289241075516, -0.04828917235136032, 0.0795520469546318, -0.013570498675107956, 0.01749902032315731, 0.08156539499759674, 0.18861348927021027, -0.20640353858470917, 0.38557514548301697, -0.3391422927379608, -0.5373646020889282, -0.053258638828992844, -0.01043659...
https://github.com/huggingface/datasets/issues/1843
MustC Speech Translation
Also, those datasets are huge. I think downloading MuST-C v1.2 amounts to ~ 1000GB... because there are 14 possible configs, each around 60-70GB. I think users will mostly use only one of the 14 configs, so in theory they would only have to download ~60GB, which is ok. But I think this functionality do...
## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google.com/view/iwslt-evaluation-2...
64
MustC Speech Translation ## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google....
[ -0.3945275843143463, 0.2756195068359375, -0.08757786452770233, 0.006215598434209824, -0.06524164229631424, 0.028635524213314056, -0.18247246742248535, 0.2685893774032593, -0.2595134675502777, 0.3226705491542816, -0.29367634654045105, -0.2957717478275299, -0.19456133246421814, 0.18701896071...
https://github.com/huggingface/datasets/issues/1843
MustC Speech Translation
> Also cc @lhoestq - do you think we could mirror the dataset? Yes we can mirror it if the authors are fine with it. You can create a dataset repo on huggingface.co (possibly under the relevant org) and add the mirrored data files. > I think users mostly will only use one of the 14 configs so that they would only...
## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google.com/view/iwslt-evaluation-2...
110
MustC Speech Translation ## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google....
[ -0.3701684772968292, 0.07569365203380585, -0.05002211406826973, 0.1428012251853943, -0.1143721267580986, -0.022535670548677444, 0.024504680186510086, 0.25288209319114685, -0.21712948381900787, 0.2163129448890686, -0.36347687244415283, -0.2713865041732788, -0.09528888016939163, 0.2548370659...
https://github.com/huggingface/datasets/issues/1843
MustC Speech Translation
I have written to the dataset authors, highlighting this issue. Waiting for their response. Update on 25th Feb: The authors have replied back, they are updating the download link and will revert back shortly! ``` first of all thanks a lot for being interested in MuST-C and for building the data-loader. Be...
## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google.com/view/iwslt-evaluation-2...
147
MustC Speech Translation ## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google....
[ -0.19946876168251038, 0.2436532825231552, -0.09747280180454254, -0.07846877723932266, -0.010633702389895916, -0.14516587555408478, 0.052475761622190475, 0.07501109689474106, -0.12100738286972046, 0.30050280690193176, -0.16881239414215088, -0.12146369367837906, -0.09290602058172226, 0.10149...
https://github.com/huggingface/datasets/issues/1843
MustC Speech Translation
Awesome, actually @lhoestq let's just ask the authors if we should host the dataset no? They could just use our links then as well for their website - what do you think? Is it fine to use our AWS dataset storage also as external links?
## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google.com/view/iwslt-evaluation-2...
45
MustC Speech Translation ## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google....
[ -0.17688919603824615, 0.23500777781009674, -0.0860685184597969, 0.2041037529706955, -0.23309792578220367, -0.04346412420272827, 0.3742261528968811, -0.04431677609682083, 0.06548832356929779, 0.1011236384510994, -0.4423069655895233, -0.2948478162288666, -0.053012654185295105, 0.190222233533...
https://github.com/huggingface/datasets/issues/1843
MustC Speech Translation
Yes definitely. Shall we suggest them to create a dataset repository under their org on huggingface.co ? @julien-c The dataset is around 1TB
## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google.com/view/iwslt-evaluation-2...
23
MustC Speech Translation ## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google....
[ -0.28197425603866577, 0.13165096938610077, -0.11428958177566528, 0.1891249567270279, -0.07617292553186417, 0.023895250633358955, -0.015186971053481102, 0.13233540952205658, -0.2927868664264679, 0.15688274800777435, -0.2882779538631439, -0.22735287249088287, -0.13568992912769318, 0.17714568...
https://github.com/huggingface/datasets/issues/1843
MustC Speech Translation
Sounds good! Order of magnitude is storage costs ~$20 per TB per month (not including bandwidth). Happy to provide this to the community as I feel this is an important dataset. Let us know what the authors want to do!
## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google.com/view/iwslt-evaluation-2...
40
MustC Speech Translation ## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google....
[ -0.26745620369911194, 0.3468383252620697, -0.11506323516368866, 0.07435581833124161, -0.10949254035949707, 0.028010863810777664, 0.0628126934170723, 0.20077265799045563, -0.30660879611968994, 0.21898315846920013, -0.2691018879413605, -0.2065333127975464, -0.18998733162879944, 0.13933338224...
https://github.com/huggingface/datasets/issues/1843
MustC Speech Translation
Great! @skyprince999, do you think you could ping the authors here or link to this thread? I think it could be a cool idea to host the dataset on our side then
## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google.com/view/iwslt-evaluation-2...
32
MustC Speech Translation ## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google....
[ -0.26381513476371765, 0.25565075874328613, -0.1020893082022667, 0.06742620468139648, -0.11630119383335114, 0.004987317603081465, 0.2088761031627655, 0.13246676325798035, -0.23804666101932526, 0.19588711857795715, -0.2925174832344055, -0.1947900950908661, -0.12356529384851456, 0.17620922625...
https://github.com/huggingface/datasets/issues/1843
MustC Speech Translation
Done. They replied back, and they want to have a call over Meet/Skype. Is that possible ? Btw @patrickvonplaten you are looped in on that email (_pls check your Gmail account_)
## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google.com/view/iwslt-evaluation-2...
32
MustC Speech Translation ## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google....
[ -0.2442641258239746, 0.07631390541791916, -0.12693443894386292, 0.12922494113445282, -0.060205936431884766, -0.10816596448421478, 0.09540992230176926, 0.029259175062179565, -0.19839420914649963, 0.22876231372356415, -0.21634231507778168, -0.30687740445137024, -0.036102257668972015, 0.20483...
https://github.com/huggingface/datasets/issues/1843
MustC Speech Translation
@gegallego there were some concerns regarding dataset usage & attribution by a for-profit company, so we couldn't take it forward. Also, the download links were unstable. But I guess if you want to test the fairseq benchmarks, you can contact them directly to download the dataset.
## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google.com/view/iwslt-evaluation-2...
46
MustC Speech Translation ## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google....
[ -0.355178564786911, 0.2733173966407776, -0.08489160239696503, 0.04397100210189819, -0.21060748398303986, -0.09972155839204788, 0.014361183159053326, 0.16643017530441284, -0.333497017621994, 0.18587277829647064, -0.25435999035835266, -0.07739317417144775, 0.05462557449936867, -0.11916165053...
https://github.com/huggingface/datasets/issues/1843
MustC Speech Translation
Yes, that dataset is not easy to download... I had to copy it to my Google Drive and use `rsync` to be able to download it. However, we could add the dataset with a manual download, right?
## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google.com/view/iwslt-evaluation-2...
37
MustC Speech Translation ## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google....
[ -0.3261702060699463, 0.2585773169994354, -0.0921742394566536, -0.033368173986673355, -0.05543072521686554, 0.02017766423523426, 0.0042869108729064465, 0.12214869260787964, -0.3325348496437073, 0.2870289981365204, -0.24631133675575256, -0.16635258495807648, -0.08833853155374527, 0.043067049...
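A very small, hypothetical sketch of the manual-download route mentioned above: the builder declares download instructions and reads the data from `dl_manager.manual_dir`, which the user supplies via `load_dataset(..., data_dir=...)`. Everything beyond the `datasets` builder API itself is a placeholder.

```python
import datasets

class MustCLike(datasets.GeneratorBasedBuilder):
    """Placeholder builder for a corpus that cannot be downloaded automatically."""

    @property
    def manual_download_instructions(self):
        return "Download the corpus from the authors' site, then pass its folder via load_dataset(..., data_dir=...)."

    def _info(self):
        return datasets.DatasetInfo(features=datasets.Features({"text": datasets.Value("string")}))

    def _split_generators(self, dl_manager):
        data_dir = dl_manager.manual_dir  # folder the user downloaded by hand
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"data_dir": data_dir})]

    def _generate_examples(self, data_dir):
        # walk data_dir and yield (key, example) pairs here
        yield 0, {"text": ""}
```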
https://github.com/huggingface/datasets/issues/1843
MustC Speech Translation
Yes, that is possible. Unfortunately I couldn't complete this PR. If you would like to add it, please feel free to do so.
## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google.com/view/iwslt-evaluation-2...
23
MustC Speech Translation ## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google....
[ -0.31441691517829895, 0.24071379005908966, -0.09851743280887604, 0.0024263772647827864, -0.04147341474890709, -0.036333050578832626, 0.0861029103398323, 0.15631262958049774, -0.3312878906726837, 0.16874495148658752, -0.3144814372062683, -0.08722835779190063, -0.08008407056331635, 0.1177137...
https://github.com/huggingface/datasets/issues/1840
Add common voice
Hey @BirgerMoell - awesome that you started working on Common Voice. Common Voice is a bit special since there is no direct download link to download the data. In these cases we usually consider two options: 1) Find a hacky solution to extract the download link somehow from the XML tree of the website 2) If this ...
## Adding a Dataset - **Name:** *common voice* - **Description:** *Mozilla Common Voice Dataset* - **Paper:** Homepage: https://voice.mozilla.org/en/datasets - **Data:** https://voice.mozilla.org/en/datasets - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/dat...
100
Add common voice ## Adding a Dataset - **Name:** *common voice* - **Description:** *Mozilla Common Voice Dataset* - **Paper:** Homepage: https://voice.mozilla.org/en/datasets - **Data:** https://voice.mozilla.org/en/datasets - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www...
[ -0.19141416251659393, -0.23302234709262848, -0.057918839156627655, -0.16404199600219727, 0.14640496671199799, -0.07734158635139465, 0.22383488714694977, 0.28073135018348694, -0.20906198024749756, 0.3071739971637726, -0.44875437021255493, 0.041479140520095825, -0.05049744248390198, -0.06181...
https://github.com/huggingface/datasets/issues/1840
Add common voice
I added a Work in Progress pull request (hope that is ok). I've made a card for the dataset and filled out the common_voice.py file with information about the dataset (not completely). I didn't manage to get the tagging tool working locally on my machine but will look into that later. Left to do: - Tag the data...
## Adding a Dataset - **Name:** *common voice* - **Description:** *Mozilla Common Voice Dataset* - **Paper:** Homepage: https://voice.mozilla.org/en/datasets - **Data:** https://voice.mozilla.org/en/datasets - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/dat...
66
Add common voice ## Adding a Dataset - **Name:** *common voice* - **Description:** *Mozilla Common Voice Dataset* - **Paper:** Homepage: https://voice.mozilla.org/en/datasets - **Data:** https://voice.mozilla.org/en/datasets - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www...
[ -0.1344468742609024, -0.2228565216064453, 0.00019300229905638844, -0.15788565576076508, 0.19370679557323456, 0.0049484190531075, 0.2681260406970978, 0.30204564332962036, -0.24950100481510162, 0.17952275276184082, -0.3955017626285553, 0.23003479838371277, -0.04257671907544136, -0.0264863409...
https://github.com/huggingface/datasets/issues/1838
Add tedlium
Hi @patrickvonplaten I can have a look at this dataset later since I am trying to add the OpenSLR dataset https://github.com/huggingface/datasets/pull/2173 Hopefully I have enough space, since the compressed file is 21GB. Release 3 is even bigger: 54GB :-0
## Adding a Dataset - **Name:** *tedlium* - **Description:** *The TED-LIUM 1-3 corpus is English-language TED talks, with transcriptions, sampled at 16kHz. It contains about 118 hours of speech.* - **Paper:** Homepage: http://www.openslr.org/7/, https://lium.univ-lemans.fr/en/ted-lium2/ &, https://www.openslr.org/51...
40
Add tedlium ## Adding a Dataset - **Name:** *tedlium* - **Description:** *The TED-LIUM 1-3 corpus is English-language TED talks, with transcriptions, sampled at 16kHz. It contains about 118 hours of speech.* - **Paper:** Homepage: http://www.openslr.org/7/, https://lium.univ-lemans.fr/en/ted-lium2/ &, https://www....
[ -0.34317752718925476, 0.07050645351409912, -0.06830784678459167, 0.13331177830696106, 0.06836728751659393, -0.020419856533408165, 0.035358116030693054, 0.37950700521469116, -0.42947453260421753, 0.27444109320640564, -0.2579483389854431, 0.3504011631011963, -0.20843695104122162, -0.09178823...
https://github.com/huggingface/datasets/issues/1831
Some question about raw dataset download info in the project .
Hi ! The `dl_manager` is a `DownloadManager` object and is responsible for downloading the raw data files. It is used by dataset builders in their `_split_generators` method to download the raw data files that are necessary to build the dataset splits. The `Conll2003` class is a dataset builder, and so you can dow...
Hi , i review the code in https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py in the _split_generators function is the truly logic of download raw datasets with dl_manager and use Conll2003 cls by use import_main_class in load_dataset function My question is that , with this logic i...
166
Some question about raw dataset download info in the project . Hi , i review the code in https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py in the _split_generators function is the truly logic of download raw datasets with dl_manager and use Conll2003 cls by use import_main_class ...
[ -0.13929522037506104, -0.09699501097202301, -0.057543739676475525, 0.5315549969673157, 0.1806543618440628, -0.07769414782524109, 0.12914760410785675, -0.06849304586648941, 0.13074743747711182, 0.1263672113418579, -0.4010898470878601, 0.19039911031723022, -0.07171807438135147, 0.52833974361...
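To make the role of the download manager concrete, here is a hedged sketch that uses a `DownloadManager` standalone, just to show what it does: `download` fetches the raw file into the local `datasets` cache and returns the cached path. Inside a real builder, the same object is passed to `_split_generators`, as the comment describes. The URL below is a placeholder, not the real CoNLL-2003 source.

```python
from datasets import DownloadManager

dl_manager = DownloadManager()

# Fetches the file into the local datasets cache and returns the cached path,
# which is what a builder's _split_generators receives and passes on.
local_path = dl_manager.download("https://example.com/raw-data.zip")  # placeholder URL
print(local_path)
```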
https://github.com/huggingface/datasets/issues/1831
Some question about raw dataset download info in the project .
I am afraid that there is not a very straightforward way to get that location. Another option, from _split_generators would be to use: - `dl_manager._download_config.cache_dir` to get the directory where all the raw downloaded files are: ```python download_dir = dl_manager._download_config.cache_dir ``` -...
Hi , i review the code in https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py in the _split_generators function is the truly logic of download raw datasets with dl_manager and use Conll2003 cls by use import_main_class in load_dataset function My question is that , with this logic i...
111
Some question about raw dataset download info in the project . Hi , i review the code in https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py in the _split_generators function is the truly logic of download raw datasets with dl_manager and use Conll2003 cls by use import_main_class ...
[ -0.13469825685024261, -0.02914184145629406, -0.056595444679260254, 0.5307530164718628, 0.11883686482906342, -0.11446941643953323, 0.10200833529233932, -0.008243290707468987, 0.04150865972042084, 0.08339418470859528, -0.41979047656059265, 0.20728373527526855, -0.12456389516592026, 0.5449789...
https://github.com/huggingface/datasets/issues/1831
Some question about raw dataset download info in the project .
Sure, it would be nice to have easier access to these paths ! The dataset builder could have a method to return those, what do you think ? Feel free to work on this @albertvillanova , it would be a nice addition :) Your suggestion does work as well @albertvillanova if you complete it by specifying `etag=` to `h...
Hi , i review the code in https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py in the _split_generators function is the truly logic of download raw datasets with dl_manager and use Conll2003 cls by use import_main_class in load_dataset function My question is that , with this logic i...
100
Some question about raw dataset download info in the project . Hi , i review the code in https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py in the _split_generators function is the truly logic of download raw datasets with dl_manager and use Conll2003 cls by use import_main_class ...
[ -0.11998742818832397, -0.10072711855173111, -0.04412248730659485, 0.5240066051483154, 0.04389805346727371, -0.14139989018440247, 0.11834002286195755, 0.01704983599483967, 0.01131491083651781, 0.13442540168762207, -0.3461930453777313, 0.15114368498325348, -0.13548927009105682, 0.59206938743...
https://github.com/huggingface/datasets/issues/1831
Some question about raw dataset download info in the project .
Once #1846 will be merged, the paths to the raw downloaded files will be accessible as: ```python builder_instance.dl_manager.downloaded_paths ```
Hi , i review the code in https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py in the _split_generators function is the truly logic of download raw datasets with dl_manager and use Conll2003 cls by use import_main_class in load_dataset function My question is that , with this logic i...
19
Some question about raw dataset download info in the project . Hi , i review the code in https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py in the _split_generators function is the truly logic of download raw datasets with dl_manager and use Conll2003 cls by use import_main_class ...
[ -0.22275249660015106, -0.07279057055711746, -0.079534612596035, 0.5368556380271912, 0.11309166252613068, -0.07173091918230057, 0.1194329783320427, -0.04083789139986038, 0.0301587525755167, 0.1258057802915573, -0.3729727566242218, 0.1903691440820694, -0.11315010488033295, 0.6030975580215454...
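A hedged sketch of how that attribute might be used once the linked PR is in: build the dataset, then inspect where the raw files were cached. `load_dataset_builder` is the modern way to get a builder instance; in the 1.x releases discussed here you would instead obtain the builder class via `import_main_class`, as the issue body describes.

```python
from datasets import load_dataset_builder

builder = load_dataset_builder("conll2003")
builder.download_and_prepare()

# Paths of the raw files the download manager fetched (per the comment above).
print(builder.dl_manager.downloaded_paths)
```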
https://github.com/huggingface/datasets/issues/1830
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer?
Hi @wumpusman `datasets` has a caching mechanism that allows you to cache the results of `.map`, so that when you want to re-run it later it doesn't recompute them. So when you do `.map`, what actually happens is: 1. compute the hash used to identify your `map` for the cache 2. apply your function on every batch ...
This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: ```` def save_tokenizer(original_tokenizer,text,path="simpledata/tokenizer"): words_u...
116
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: ```` def save_to...
[ -0.4040471911430359, -0.013007371686398983, -0.10247061401605606, 0.103429414331913, 0.14530524611473083, -0.1225539967417717, 0.255054235458374, 0.18908898532390594, 0.13067756593227386, -0.006502095144242048, -0.05422332137823105, 0.4988861382007599, -0.1724810004234314, -0.2727597057819...
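To check whether the slowdown comes from cache hashing rather than the tokenizer itself, the advice above amounts to timing `.map` with caching switched off. `set_caching_enabled` is the toggle mentioned later in this thread (at the time only available when installing `datasets` from source); the tokenize function below is a stand-in, not the original code.

```python
import time
import datasets

datasets.set_caching_enabled(False)  # no cache lookups or writes for .map

ds = datasets.Dataset.from_dict({"text": ["some example sentence"] * 1000})

def fake_tokenize(batch):
    # stand-in for tokenizer(batch["text"]); swap in the real tokenizer call
    return {"n_chars": [len(t) for t in batch["text"]]}

start = time.perf_counter()
ds = ds.map(fake_tokenize, batched=True)
print(f".map took {time.perf_counter() - start:.2f}s with caching disabled")
```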
https://github.com/huggingface/datasets/issues/1830
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer?
Hi @lhoestq , Thanks for the reply. It's entirely possible that is the issue. Since it's a side project I won't be looking at it till later this week, but, I'll verify it by disabling caching and hopefully I'll see the same runtime. Appreciate the reference, Michael
This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: ```` def save_tokenizer(original_tokenizer,text,path="simpledata/tokenizer"): words_u...
47
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: ```` def save_to...
[ -0.4040471911430359, -0.013007371686398983, -0.10247061401605606, 0.103429414331913, 0.14530524611473083, -0.1225539967417717, 0.255054235458374, 0.18908898532390594, 0.13067756593227386, -0.006502095144242048, -0.05422332137823105, 0.4988861382007599, -0.1724810004234314, -0.2727597057819...
https://github.com/huggingface/datasets/issues/1830
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer?
I believe this is an actual issue, tokenizing a ~4GB txt file went from an hour and a half to ~10 minutes when I switched from my pre-trained tokenizer(on the same dataset) to the default gpt2 tokenizer. Both were loaded using: ``` AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) ``` I trained the ...
This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: ```` def save_tokenizer(original_tokenizer,text,path="simpledata/tokenizer"): words_u...
117
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: ```` def save_to...
[ -0.4040471911430359, -0.013007371686398983, -0.10247061401605606, 0.103429414331913, 0.14530524611473083, -0.1225539967417717, 0.255054235458374, 0.18908898532390594, 0.13067756593227386, -0.006502095144242048, -0.05422332137823105, 0.4988861382007599, -0.1724810004234314, -0.2727597057819...
https://github.com/huggingface/datasets/issues/1830
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer?
Hi @johncookds do you think this can come from one tokenizer being faster than the other one ? Can you try to compare their speed without using `datasets` just to make sure ?
This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: ```` def save_tokenizer(original_tokenizer,text,path="simpledata/tokenizer"): words_u...
33
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: ```` def save_to...
[ -0.4040471911430359, -0.013007371686398983, -0.10247061401605606, 0.103429414331913, 0.14530524611473083, -0.1225539967417717, 0.255054235458374, 0.18908898532390594, 0.13067756593227386, -0.006502095144242048, -0.05422332137823105, 0.4988861382007599, -0.1724810004234314, -0.2727597057819...
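A minimal way to run that comparison outside `datasets`, timing both tokenizers on the same texts; the second entry in the list is a placeholder for the locally saved tokenizer path.

```python
import time
from transformers import AutoTokenizer

texts = ["some example sentence to tokenize"] * 1000

for name in ["gpt2", "path/to/your/saved/tokenizer"]:  # second entry is a placeholder
    tok = AutoTokenizer.from_pretrained(name, use_fast=True)
    start = time.perf_counter()
    tok(texts)
    print(f"{name}: {time.perf_counter() - start:.2f}s")
```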
https://github.com/huggingface/datasets/issues/1830
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer?
Hi yes, I'm closing the loop here with some timings below. The issue seems to be at least somewhat/mainly with the tokenizers themselves. Moreover, legacy saves of the trainer tokenizer perform faster but differently than the new tokenizer.json saves (note nothing about the training process/adding of special tokens chan...
This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: ```` def save_tokenizer(original_tokenizer,text,path="simpledata/tokenizer"): words_u...
124
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: ```` def save_to...
[ -0.4040471911430359, -0.013007371686398983, -0.10247061401605606, 0.103429414331913, 0.14530524611473083, -0.1225539967417717, 0.255054235458374, 0.18908898532390594, 0.13067756593227386, -0.006502095144242048, -0.05422332137823105, 0.4988861382007599, -0.1724810004234314, -0.2727597057819...
https://github.com/huggingface/datasets/issues/1830
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer?
@lhoestq , Hi, which version of datasets has datasets.set_caching_enabled(False)? I get module 'datasets' has no attribute 'set_caching_enabled'. To hopefully get around this, I reran my code on a new set of data, and did so only once. @johncookds , thanks for chiming in, it looks like this might be an issue of Toke...
This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: ```` def save_tokenizer(original_tokenizer,text,path="simpledata/tokenizer"): words_u...
182
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: ```` def save_to...
[ -0.4040471911430359, -0.013007371686398983, -0.10247061401605606, 0.103429414331913, 0.14530524611473083, -0.1225539967417717, 0.255054235458374, 0.18908898532390594, 0.13067756593227386, -0.006502095144242048, -0.05422332137823105, 0.4988861382007599, -0.1724810004234314, -0.2727597057819...
https://github.com/huggingface/datasets/issues/1830
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer?
Thanks for the experiments @johncookds and @wumpusman ! > Hi, which version of datasets has datasets.set_caching_enabled(False)? Currently you have to install `datasets` from source to have this feature, but this will be available in the next release in a few days. > I'm trying to figure out why the overhead ...
This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: ```` def save_tokenizer(original_tokenizer,text,path="simpledata/tokenizer"): words_u...
157
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: ```` def save_to...
[ -0.4040471911430359, -0.013007371686398983, -0.10247061401605606, 0.103429414331913, 0.14530524611473083, -0.1225539967417717, 0.255054235458374, 0.18908898532390594, 0.13067756593227386, -0.006502095144242048, -0.05422332137823105, 0.4988861382007599, -0.1724810004234314, -0.2727597057819...
https://github.com/huggingface/datasets/issues/1830
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer?
@lhoestq, I just checked that previous run time was actually 3000 chars. I increased it to 6k chars, again, roughly double. SlowTokenizer **7.4 s** to **15.7 s** Tokenizer: **276 ms** to **616 ms** I'll post this issue on Tokenizer, seems it hasn't quite been raised (albeit I noticed a similar issue that mig...
This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: ```` def save_tokenizer(original_tokenizer,text,path="simpledata/tokenizer"): words_u...
56
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: ```` def save_to...
[ -0.4040471911430359, -0.013007371686398983, -0.10247061401605606, 0.103429414331913, 0.14530524611473083, -0.1225539967417717, 0.255054235458374, 0.18908898532390594, 0.13067756593227386, -0.006502095144242048, -0.05422332137823105, 0.4988861382007599, -0.1724810004234314, -0.2727597057819...
https://github.com/huggingface/datasets/issues/1830
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer?
Hi, I'm following up here as I found my exact issue. It was with saving and re-loading the tokenizer. When I trained then processed the data without saving and reloading it, it was 10x-100x faster than when I saved and re-loaded it. Both resulted in the exact same tokenized datasets as well. There is additionally ...
This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: ```` def save_tokenizer(original_tokenizer,text,path="simpledata/tokenizer"): words_u...
93
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: ```` def save_to...
[ -0.4040471911430359, -0.013007371686398983, -0.10247061401605606, 0.103429414331913, 0.14530524611473083, -0.1225539967417717, 0.255054235458374, 0.18908898532390594, 0.13067756593227386, -0.006502095144242048, -0.05422332137823105, 0.4988861382007599, -0.1724810004234314, -0.2727597057819...
https://github.com/huggingface/datasets/issues/1827
Regarding On-the-fly Data Loading
Hi @acul3 Issue #1776 talks about doing on-the-fly data pre-processing, which I think is solved in the next release as mentioned in the issue #1825. I also look forward to using this feature, though :) I wanted to ask about on-the-fly data loading from the cache (before pre-processing).
Hi, I was wondering if it is possible to load images/texts as a batch during the training process, without loading the entire dataset on the RAM at any given point. Thanks, Gunjan
48
Regarding On-the-fly Data Loading Hi, I was wondering if it is possible to load images/texts as a batch during the training process, without loading the entire dataset on the RAM at any given point. Thanks, Gunjan Hi @acul3 Issue #1776 talks about doing on-the-fly data pre-processing, which I think is s...
[ -0.17268453538417816, -0.2835986912250519, -0.11839695274829865, 0.2190789133310318, 0.3321973383426666, 0.09173063188791275, 0.570720911026001, 0.13171741366386414, 0.1204758808016777, 0.06700479239225388, 0.07138803601264954, -0.0857144221663475, -0.0965600535273552, 0.21161873638629913,...
https://github.com/huggingface/datasets/issues/1827
Regarding On-the-fly Data Loading
Hi ! Currently when you load a dataset via `load_dataset` for example, then the dataset is memory-mapped from an Arrow file on disk. Therefore there's almost no RAM usage even if your dataset contains TB of data. Usually at training time only one batch of data at a time is loaded in memory. Does that answer your qu...
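To make the memory-mapping point concrete, a small sketch (the dataset name is only an example): slicing materializes just the requested rows in RAM.

```python
from datasets import load_dataset

# The Arrow file backing the dataset is memory-mapped: loading it does not
# copy the full data into RAM.
dset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

batch_size = 32
for start in range(0, len(dset), batch_size):
    # Only this slice is brought into memory, one batch at a time.
    batch = dset[start : start + batch_size]
    # ... feed batch["text"] to the training loop ...
```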
Hi, I was wondering if it is possible to load images/texts as a batch during the training process, without loading the entire dataset on the RAM at any given point. Thanks, Gunjan
66
Regarding On-the-fly Data Loading Hi, I was wondering if it is possible to load images/texts as a batch during the training process, without loading the entire dataset on the RAM at any given point. Thanks, Gunjan Hi ! Currently when you load a dataset via `load_dataset` for example, then the dataset is memo...
[ -0.2869426906108856, -0.3630422353744507, -0.1495334804058075, 0.27318263053894043, 0.41436272859573364, 0.17094267904758453, 0.4223022162914276, 0.12925879657268524, 0.1851394921541214, 0.0802384689450264, 0.12164247781038284, -0.0679909735918045, -0.24279652535915375, 0.19639937579631805...
https://github.com/huggingface/datasets/issues/1825
Datasets library not suitable for huge text datasets.
Hi ! Looks related to #861 You are right: tokenizing a dataset using map takes a lot of space since it can store `input_ids` but also `token_type_ids`, `attention_mask` and `special_tokens_mask`. Moreover if your tokenization function returns python integers then by default they'll be stored as int64 which can take...
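As an illustration of those points, a hedged sketch of a tokenization `map` that keeps only the column it needs and stores the ids as int32 instead of the default int64 (the corpus file and model name are placeholders, not from this thread):

```python
from datasets import load_dataset, Features, Sequence, Value
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
dset = load_dataset("text", data_files={"train": "corpus.txt"}, split="train")

def tokenize(batch):
    # Keep only input_ids; skip token_type_ids / attention_mask / special_tokens_mask.
    return {"input_ids": tokenizer(batch["text"])["input_ids"]}

dset = dset.map(
    tokenize,
    batched=True,
    remove_columns=["text"],  # drop the raw text column from the cache file
    features=Features({"input_ids": Sequence(Value("int32"))}),  # int32 instead of int64
)
```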
Hi, I'm trying to use datasets library to load a 187GB dataset of pure text, with the intention of building a Language Model. The problem is that from the 187GB it goes to some TB when processed by Datasets. First of all, I think the pre-tokenizing step (with tokenizer.map()) is not really thought for datasets this ...
197
Datasets library not suitable for huge text datasets. Hi, I'm trying to use datasets library to load a 187GB dataset of pure text, with the intention of building a Language Model. The problem is that from the 187GB it goes to some TB when processed by Datasets. First of all, I think the pre-tokenizing step (with t...
[ -0.2644476592540741, 0.07049369066953659, 0.009130625054240227, 0.12959197163581848, 0.3082515597343445, 0.00805096048861742, 0.29017943143844604, 0.37177774310112, -0.2807386815547943, -0.018172523006796837, -0.04882347583770752, 0.14737820625305176, -0.23607194423675537, 0.01088563166558...
https://github.com/huggingface/datasets/issues/1825
Datasets library not suitable for huge text datasets.
How recently was `set_transform` added? I am actually trying to implement it and getting an error: `AttributeError: 'Dataset' object has no attribute 'set_transform' ` I'm on v.1.2.1. EDIT: Oh, wait I see now it's in the v.2.0. Whoops! This should be really useful.
Hi, I'm trying to use datasets library to load a 187GB dataset of pure text, with the intention of building a Language Model. The problem is that from the 187GB it goes to some TB when processed by Datasets. First of all, I think the pre-tokenizing step (with tokenizer.map()) is not really thought for datasets this ...
43
Datasets library not suitable for huge text datasets. Hi, I'm trying to use datasets library to load a 187GB dataset of pure text, with the intention of building a Language Model. The problem is that from the 187GB it goes to some TB when processed by Datasets. First of all, I think the pre-tokenizing step (with t...
[ -0.4678060710430145, 0.16560781002044678, 0.030423343181610107, 0.13930319249629974, 0.36984381079673767, -0.07995746284723282, 0.13434459269046783, 0.42008814215660095, -0.43385952711105347, -0.10870681703090668, 0.10512027144432068, 0.09787485748529434, -0.25969311594963074, 0.1617349535...
https://github.com/huggingface/datasets/issues/1825
Datasets library not suitable for huge text datasets.
Yes indeed it was added a few days ago. The code is available on master; we'll do a release next week :) Feel free to install `datasets` from source to try it out though, I would love to have some feedback
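For anyone trying it out from source, a minimal sketch of the on-the-fly pattern with `set_transform` (the dataset and model names are just examples):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dset = load_dataset("imdb", split="train")

def encode(batch):
    # Runs lazily on each __getitem__ call instead of pre-tokenizing the whole dataset.
    return tokenizer(batch["text"], padding="max_length", truncation=True, return_tensors="np")

dset.set_transform(encode)
print(dset[0].keys())  # input_ids, attention_mask, ... computed on the fly
```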
Hi, I'm trying to use datasets library to load a 187GB dataset of pure text, with the intention of building a Language Model. The problem is that from the 187GB it goes to some TB when processed by Datasets. First of all, I think the pre-tokenizing step (with tokenizer.map()) is not really thought for datasets this ...
41
Datasets library not suitable for huge text datasets. Hi, I'm trying to use datasets library to load a 187GB dataset of pure text, with the intention of building a Language Model. The problem is that from the 187GB it goes to some TB when processed by Datasets. First of all, I think the pre-tokenizing step (with t...
[ -0.4051036536693573, 0.13909576833248138, 0.015502887777984142, 0.29045388102531433, 0.2991151809692383, 0.007427196949720383, 0.07476841658353806, 0.3566737473011017, -0.3289952278137207, -0.06482837349176407, 0.0511007234454155, 0.060472521930933, -0.24908986687660217, 0.1777264475822448...
https://github.com/huggingface/datasets/issues/1825
Datasets library not suitable for huge text datasets.
For information: it's now available in `datasets` 1.3.0. The 2.0 is reserved for even cooler features ;)
Hi, I'm trying to use datasets library to load a 187GB dataset of pure text, with the intention of building a Language Model. The problem is that from the 187GB it goes to some TB when processed by Datasets. First of all, I think the pre-tokenizing step (with tokenizer.map()) is not really thought for datasets this ...
17
Datasets library not suitable for huge text datasets. Hi, I'm trying to use datasets library to load a 187GB dataset of pure text, with the intention of building a Language Model. The problem is that from the 187GB it goes to some TB when processed by Datasets. First of all, I think the pre-tokenizing step (with t...
[ -0.4345218241214752, 0.13323645293712616, 0.011771571822464466, 0.2648242115974426, 0.267735093832016, 0.020275993272662163, 0.07535015046596527, 0.36074575781822205, -0.31799212098121643, -0.06220643222332001, 0.03596840053796768, 0.05125747621059418, -0.2549734115600586, 0.18891148269176...
https://github.com/huggingface/datasets/issues/1825
Datasets library not suitable for huge text datasets.
Hi @alexvaca0 , we have optimized Datasets' disk usage in the latest release v1.5. Feel free to update your Datasets version ```shell pip install -U datasets ``` and see if it better suits your needs.
Hi, I'm trying to use datasets library to load a 187GB dataset of pure text, with the intention of building a Language Model. The problem is that from the 187GB it goes to some TB when processed by Datasets. First of all, I think the pre-tokenizing step (with tokenizer.map()) is not really thought for datasets this ...
35
Datasets library not suitable for huge text datasets. Hi, I'm trying to use datasets library to load a 187GB dataset of pure text, with the intention of building a Language Model. The problem is that from the 187GB it goes to some TB when processed by Datasets. First of all, I think the pre-tokenizing step (with t...
[ -0.4028153419494629, 0.07713953405618668, 0.008600398898124695, 0.3193156123161316, 0.3035769760608673, 0.02334943227469921, 0.030155831947922707, 0.3559311032295227, -0.2499682754278183, -0.024190565571188927, 0.03889091685414314, 0.10702212154865265, -0.262981116771698, 0.118851147592067...
https://github.com/huggingface/datasets/issues/1821
Provide better exception message when one of many files results in an exception
Hi! Thank you for reporting this issue. I agree that the information about the exception should be more clear and explicit. I could take on this issue. In the meantime, as you can see from the exception stack trace, HF Datasets uses pandas to read the CSV files. You can pass arguments to `pandas.read_csv` by p...
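If I read the CSV builder correctly, the parsing options are forwarded to `pandas.read_csv`, so something along these lines should work (the exact set of supported keyword arguments may vary with the `datasets` version, and the file names are placeholders):

```python
from datasets import load_dataset

dset = load_dataset(
    "csv",
    data_files={"train": ["train_a.csv", "train_b.csv"]},
    sep=",",        # forwarded to pandas.read_csv
    skiprows=0,     # forwarded to pandas.read_csv
)
```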
I find when I process many files, i.e. ``` train_files = glob.glob('train*.csv') validation_files = glob.glob('validation*.csv') datasets = load_dataset("csv", data_files=dict(train=train_files, validation=validation_files)) ``` I sometimes encounter an error due to one of the files being malformed (i.e. no dat...
129
Provide better exception message when one of many files results in an exception I find when I process many files, i.e. ``` train_files = glob.glob('train*.csv') validation_files = glob.glob('validation*.csv') datasets = load_dataset("csv", data_files=dict(train=train_files, validation=validation_files)) ``` I...
[ -0.1087331548333168, -0.41903072595596313, 0.007515584118664265, 0.2815205454826355, 0.18453562259674072, 0.16356299817562103, 0.14092230796813965, 0.39342933893203735, 0.22628797590732574, 0.28828540444374084, 0.2727562189102173, -0.2174748182296753, -0.05892053619027138, -0.2714684903621...
https://github.com/huggingface/datasets/issues/1818
Loading local dataset raise requests.exceptions.ConnectTimeout
Hi ! Thanks for reporting. This was indeed a bug introduced when we moved the `json` dataset loader inside the `datasets` package (before that, the `json` loader was fetched online, as all the other dataset scripts). This should be fixed on master now. Feel free to install `datasets` from source to try it out. The ...
Load local dataset: ``` dataset = load_dataset('json', data_files=["../../data/json.json"]) train = dataset["train"] print(train.features) train1 = train.map(lambda x: {"labels": 1}) print(train1[:2]) ``` but it raised requests.exceptions.ConnectTimeout: ``` /Users/littlely/myvirtual/tf2/bin/python3.7 /Us...
69
Loading local dataset raise requests.exceptions.ConnectTimeout Load local dataset: ``` dataset = load_dataset('json', data_files=["../../data/json.json"]) train = dataset["train"] print(train.features) train1 = train.map(lambda x: {"labels": 1}) print(train1[:2]) ``` but it raised requests.exceptions.Connec...
[ -0.08766548335552216, -0.30965402722358704, -0.07478281110525131, 0.24969221651554108, 0.31518077850341797, 0.08913836628198624, 0.31492313742637634, 0.2885441184043884, -0.1467544138431549, 0.05681467428803444, -0.046975817531347275, 0.3553401231765747, 0.062465257942676544, -0.0724610164...
https://github.com/huggingface/datasets/issues/1817
pyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 599 but got length 1500
Hi ! The error you have is due to the `input_ids` column not having the same number of examples as the other columns. Indeed you're concatenating the `input_ids` at this line: https://github.com/LuCeHe/GenericTools/blob/431835d8e13ec24dceb5ee4dc4ae58f0e873b091/KerasTools/lm_preprocessing.py#L134 However the oth...
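For context, a small self-contained sketch of the usual fix: the batched function returns only new columns, all with the same number of rows, and the original columns are dropped via `remove_columns` (the toy data and block size are placeholders):

```python
from datasets import Dataset

block_size = 8  # small value just for the toy example

# Toy stand-in for a tokenized dataset with an `input_ids` column.
tokenized_dataset = Dataset.from_dict({"input_ids": [list(range(20)), list(range(30))]})

def group_texts(batch):
    # Concatenate every example in the batch, then re-chunk into fixed-size blocks.
    # Every returned column has the same number of rows, which is what Arrow expects.
    concatenated = sum(batch["input_ids"], [])
    total_length = (len(concatenated) // block_size) * block_size
    blocks = [concatenated[i : i + block_size] for i in range(0, total_length, block_size)]
    return {"input_ids": blocks}

lm_dataset = tokenized_dataset.map(
    group_texts,
    batched=True,
    remove_columns=tokenized_dataset.column_names,  # drop old columns to avoid length mismatches
)
print(len(lm_dataset))  # number of fixed-size blocks
```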
I am trying to preprocess any dataset in this package with GPT-2 tokenizer, so I need to structure the datasets as long sequences of text without padding. I've been following a couple of your tutorials and here you can find the script that is failing right at the end https://github.com/LuCeHe/GenericTools/blob/maste...
116
pyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 599 but got length 1500 I am trying to preprocess any dataset in this package with GPT-2 tokenizer, so I need to structure the datasets as long sequences of text without padding. I've been following a couple of your tutorials and here you can find the...
[ -0.15500250458717346, -0.07036436349153519, -0.00863442663103342, 0.17082907259464264, 0.2552531659603119, 0.03644741699099541, 0.5036024451255798, 0.5221661925315857, -0.5246614217758179, 0.010455019772052765, 0.3807773292064667, 0.23190680146217346, 0.0748814195394516, -0.174508094787597...
https://github.com/huggingface/datasets/issues/1811
Unable to add Multi-label Datasets
Thanks for adding this dataset! As far as I know `supervised_keys` is mostly a holdover from TFDS, but isn't really used, so feel free to drop it (@lhoestq or @thomwolf correct me if I'm wrong). It definitely shouldn't be blocking :)
I am trying to add [CIFAR-100](https://www.cs.toronto.edu/~kriz/cifar.html) dataset. The dataset contains two labels per image - `fine label` and `coarse label`. Using just one label in supervised keys as `supervised_keys=("img", "fine_label")` raises no issue. But trying `supervised_keys=("img", "fine_label","coarse...
41
Unable to add Multi-label Datasets I am trying to add [CIFAR-100](https://www.cs.toronto.edu/~kriz/cifar.html) dataset. The dataset contains two labels per image - `fine label` and `coarse label`. Using just one label in supervised keys as `supervised_keys=("img", "fine_label")` raises no issue. But trying `supervi...
[ 0.021659499034285545, -0.2712220549583435, 0.018146134912967682, 0.13295722007751465, 0.3712172508239746, 0.31664982438087463, 0.6164795756340027, -0.4762452244758606, 0.19942544400691986, 0.2660481631755829, -0.21772587299346924, 0.15355278551578522, -0.27598854899406433, 0.34793272614479...
https://github.com/huggingface/datasets/issues/1811
Unable to add Multi-label Datasets
Thanks @yjernite @lhoestq The template for a new dataset makes it slightly confusing. I suppose the comment suggesting its update can be removed.
I am trying to add [CIFAR-100](https://www.cs.toronto.edu/~kriz/cifar.html) dataset. The dataset contains two labels per image - `fine label` and `coarse label`. Using just one label in supervised keys as `supervised_keys=("img", "fine_label")` raises no issue. But trying `supervised_keys=("img", "fine_label","coarse...
22
Unable to add Multi-label Datasets I am trying to add [CIFAR-100](https://www.cs.toronto.edu/~kriz/cifar.html) dataset. The dataset contains two labels per image - `fine label` and `coarse label`. Using just one label in supervised keys as `supervised_keys=("img", "fine_label")` raises no issue. But trying `supervi...
[ 0.021659499034285545, -0.2712220549583435, 0.018146134912967682, 0.13295722007751465, 0.3712172508239746, 0.31664982438087463, 0.6164795756340027, -0.4762452244758606, 0.19942544400691986, 0.2660481631755829, -0.21772587299346924, 0.15355278551578522, -0.27598854899406433, 0.34793272614479...
https://github.com/huggingface/datasets/issues/1810
Add Hateful Memes Dataset
Hi @gchhablani, since Array2D doesn't support images of different sizes, I would suggest storing the paths to the image files in the dataset instead of the image data. This has the advantage of not decompressing the data (images are often compressed using jpeg, png etc.). Users can still apply `.map` to load the images ...
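A rough sketch of that path-based approach (the file names are placeholders and Pillow is used for decoding; decoding happens only when a batch is actually needed, so the Arrow file stays small):

```python
from datasets import Dataset
from PIL import Image

# Store only the (compressed) image file paths plus the labels.
dset = Dataset.from_dict({
    "img_path": ["memes/0001.png", "memes/0002.png"],  # placeholder paths
    "label": [0, 1],
})

def decode_images(batch):
    # Decode on the fly, e.g. when assembling a training batch.
    return {
        "image": [Image.open(p).convert("RGB") for p in batch["img_path"]],
        "label": batch["label"],
    }

batch = decode_images(dset[0:2])  # images are decompressed here, not stored in Arrow
```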
## Add Hateful Memes Dataset - **Name:** Hateful Memes - **Description:** [https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set]( https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set) - **Paper:** [https://arxiv.org/pdf/2005.04790.pdf](https://arxiv.org/pdf/2005.04790.pdf) - **Data:** [Thi...
87
Add Hateful Memes Dataset ## Add Hateful Memes Dataset - **Name:** Hateful Memes - **Description:** [https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set]( https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set) - **Paper:** [https://arxiv.org/pdf/2005.04790.pdf](https://arxiv.org/pdf/2005....
[ -0.2504214346408844, 0.1592070609331131, -0.14461423456668854, 0.04864095523953438, 0.1048058271408081, 0.2664600908756256, 0.4449887275695801, -0.07321225851774216, -0.08323448896408081, -0.22534552216529846, -0.02304983139038086, -0.282184898853302, -0.5021380186080933, 0.247500807046890...
https://github.com/huggingface/datasets/issues/1808
writing Datasets in a human readable format
AFAIK, there is currently no built-in method on the `Dataset` object to do this. However, a workaround is to directly use the Arrow table backing the dataset, **but it implies loading the whole dataset in memory** (correct me if I'm mistaken @lhoestq). You can convert the Arrow table to a pandas dataframe to save t...
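A sketch of that workaround (dataset name is only an example; as noted above, this pulls the whole table into memory):

```python
from datasets import load_dataset

dset = load_dataset("imdb", split="test")

# dset.data is the Arrow table backing the dataset; to_pandas() loads it into RAM.
df = dset.data.to_pandas()

# Human-readable exports via pandas:
df.to_json("imdb_test.jsonl", orient="records", lines=True)
df.to_csv("imdb_test.csv", index=False)
```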
Hi I see there is a save_to_disk function to save data, but this is not human readable format, is there a way I could save a Dataset object in a human readable format to a file like json? thanks @lhoestq
101
writing Datasets in a human readable format Hi I see there is a save_to_disk function to save data, but this is not human readable format, is there a way I could save a Dataset object in a human readable format to a file like json? thanks @lhoestq AFAIK, there is currently no built-in method on the `Dataset` obj...
[ -0.1064516231417656, 0.2575984299182892, -0.07314237207174301, 0.12428116053342819, 0.490632563829422, 0.29741960763931274, -0.16252365708351135, 0.3496722877025604, 0.04431062564253807, 0.025897301733493805, -0.15335604548454285, 0.33091336488723755, -0.4147043824195862, 0.270733892917633...
https://github.com/huggingface/datasets/issues/1808
writing Datasets in a human readable format
Indeed this works as long as you have enough memory. It would be amazing to have export options like csv, json etc. ! It should be doable to implement something that iterates through the dataset batch by batch to write to csv for example. There is already an `export` method but currently the only export type that ...
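Something along the lines of that batched idea, as a rough sketch (only one batch is held in memory at a time; column handling is kept deliberately simple):

```python
import csv
from datasets import load_dataset

dset = load_dataset("imdb", split="test")  # example dataset
batch_size = 1000

with open("imdb_test.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=dset.column_names)
    writer.writeheader()
    for start in range(0, len(dset), batch_size):
        batch = dset[start : start + batch_size]  # dict of column name -> list of values
        rows = [dict(zip(batch, values)) for values in zip(*batch.values())]
        writer.writerows(rows)
```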
Hi I see there is a save_to_disk function to save data, but this is not human readable format, is there a way I could save a Dataset object in a human readable format to a file like json? thanks @lhoestq
61
writing Datasets in a human readable format Hi I see there is a save_to_disk function to save data, but this is not human readable format, is there a way I could save a Dataset object in a human readable format to a file like json? thanks @lhoestq Indeed this works as long as you have enough memory. It would be...
[ -0.20983873307704926, 0.049268949776887894, -0.10766670107841492, 0.06642670184373856, 0.5001394748687744, 0.23276878893375397, -0.1172599196434021, 0.35747233033180237, 0.000751799379941076, 0.14927472174167633, -0.16769756376743317, 0.052724335342645645, -0.3711419701576233, 0.4290179908...
https://github.com/huggingface/datasets/issues/1805
can't pickle SwigPyObject objects when calling dataset.get_nearest_examples from FAISS index
Hi ! Indeed we used to require mapping functions to be picklable with `pickle` or `dill` in order to cache the resulting datasets. And FAISS indexes are not picklable unfortunately. But since #1703 this is no longer required (the caching will simply be disabled). This change will be available in the next release of ...
So, I have the following instances in my dataset ``` {'question': 'An astronomer observes that a planet rotates faster after a meteorite impact. Which is the most likely effect of this increase in rotation?', 'answer': 'C', 'example_id': 'ARCCH_Mercury_7175875', 'options':[{'option_context': 'One effect of ...
63
can't pickle SwigPyObject objects when calling dataset.get_nearest_examples from FAISS index So, I have the following instances in my dataset ``` {'question': 'An astronomer observes that a planet rotates faster after a meteorite impact. Which is the most likely effect of this increase in rotation?', 'answer'...
[ 0.06904204934835434, -0.38878345489501953, -0.06963580846786499, 0.2468254417181015, 0.164226233959198, 0.027764270082116127, 0.17842744290828705, 0.05411217361688614, 0.4089656174182892, 0.5476052165031433, -0.05009916052222252, 0.4959976077079773, 0.11579318344593048, -0.1273351609706878...
https://github.com/huggingface/datasets/issues/1805
can't pickle SwigPyObject objects when calling dataset.get_nearest_examples from FAISS index
I totally forgot to answer this issue, I'm so sorry. I was able to get it working by installing `datasets` from source. Huge thanks!
So, I have the following instances in my dataset ``` {'question': 'An astronomer observes that a planet rotates faster after a meteorite impact. Which is the most likely effect of this increase in rotation?', 'answer': 'C', 'example_id': 'ARCCH_Mercury_7175875', 'options':[{'option_context': 'One effect of ...
24
can't pickle SwigPyObject objects when calling dataset.get_nearest_examples from FAISS index So, I have the following instances in my dataset ``` {'question': 'An astronomer observes that a planet rotates faster after a meteorite impact. Which is the most likely effect of this increase in rotation?', 'answer'...
[ 0.06904204934835434, -0.38878345489501953, -0.06963580846786499, 0.2468254417181015, 0.164226233959198, 0.027764270082116127, 0.17842744290828705, 0.05411217361688614, 0.4089656174182892, 0.5476052165031433, -0.05009916052222252, 0.4959976077079773, 0.11579318344593048, -0.1273351609706878...
https://github.com/huggingface/datasets/issues/1803
Querying examples from big datasets is slower than small datasets
Hello, @lhoestq / @gaceladri : We have been seeing similar behavior with bigger datasets, where querying time increases. Are you folks aware of any solution that fixes this problem yet?
After some experiments with bookcorpus I noticed that querying examples from big datasets is slower than small datasets. For example ```python from datasets import load_dataset b1 = load_dataset("bookcorpus", split="train[:1%]") b50 = load_dataset("bookcorpus", split="train[:50%]") b100 = load_dataset("bookcorp...
30
Querying examples from big datasets is slower than small datasets After some experiments with bookcorpus I noticed that querying examples from big datasets is slower than small datasets. For example ```python from datasets import load_dataset b1 = load_dataset("bookcorpus", split="train[:1%]") b50 = load_datas...
[ -0.40549370646476746, 0.05864446982741356, -0.07997152954339981, 0.1315324306488037, -0.07969389110803604, -0.1776914894580841, 0.2441747635602951, 0.4918093979358673, -0.23139889538288116, 0.1801937073469162, 0.08149363845586777, 0.0550893135368824, 0.12508580088615417, -0.100756667554378...
https://github.com/huggingface/datasets/issues/1803
Querying examples from big datasets is slower than small datasets
Hi ! I'm pretty sure that it can be fixed by using the Arrow IPC file format instead of the raw streaming format but I haven't tested yet. I'll take a look at it soon and let you know
After some experiments with bookcorpus I noticed that querying examples from big datasets is slower than small datasets. For example ```python from datasets import load_dataset b1 = load_dataset("bookcorpus", split="train[:1%]") b50 = load_dataset("bookcorpus", split="train[:50%]") b100 = load_dataset("bookcorp...
39
Querying examples from big datasets is slower than small datasets After some experiments with bookcorpus I noticed that querying examples from big datasets is slower than small datasets. For example ```python from datasets import load_dataset b1 = load_dataset("bookcorpus", split="train[:1%]") b50 = load_datas...
[ -0.40549370646476746, 0.05864446982741356, -0.07997152954339981, 0.1315324306488037, -0.07969389110803604, -0.1776914894580841, 0.2441747635602951, 0.4918093979358673, -0.23139889538288116, 0.1801937073469162, 0.08149363845586777, 0.0550893135368824, 0.12508580088615417, -0.100756667554378...
https://github.com/huggingface/datasets/issues/1803
Querying examples from big datasets is slower than small datasets
My workaround is to shard the dataset into splits on my ssd disk and feed the data in different training sessions. But it is a bit of a pain when we need to reload the last training session with the rest of the split with the Trainer in transformers. I mean, when I split the training and then reload the model and o...
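For reference, `Dataset.shard` can produce those pieces without slicing by hand; a small sketch (the shard count is arbitrary):

```python
from datasets import load_dataset

dset = load_dataset("bookcorpus", split="train")

num_shards = 8
for index in range(num_shards):
    shard = dset.shard(num_shards=num_shards, index=index)
    # Each shard is a regular Dataset and can be saved for a separate training session.
    shard.save_to_disk(f"bookcorpus_shard_{index}")
```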
After some experiments with bookcorpus I noticed that querying examples from big datasets is slower than small datasets. For example ```python from datasets import load_dataset b1 = load_dataset("bookcorpus", split="train[:1%]") b50 = load_dataset("bookcorpus", split="train[:50%]") b100 = load_dataset("bookcorp...
218
Querying examples from big datasets is slower than small datasets After some experiments with bookcorpus I noticed that querying examples from big datasets is slower than small datasets. For example ```python from datasets import load_dataset b1 = load_dataset("bookcorpus", split="train[:1%]") b50 = load_datas...
[ -0.40549370646476746, 0.05864446982741356, -0.07997152954339981, 0.1315324306488037, -0.07969389110803604, -0.1776914894580841, 0.2441747635602951, 0.4918093979358673, -0.23139889538288116, 0.1801937073469162, 0.08149363845586777, 0.0550893135368824, 0.12508580088615417, -0.100756667554378...
https://github.com/huggingface/datasets/issues/1803
Querying examples from big datasets is slower than small datasets
I just tested and using the Arrow File format doesn't improve the speed... This will need further investigation. My guess is that it has to iterate over the record batches or chunks of a ChunkedArray in order to retrieve elements. However if we know in advance in which chunk the element is, and at what index it i...
After some experiments with bookcorpus I noticed that querying examples from big datasets is slower than small datasets. For example ```python from datasets import load_dataset b1 = load_dataset("bookcorpus", split="train[:1%]") b50 = load_dataset("bookcorpus", split="train[:50%]") b100 = load_dataset("bookcorp...
82
Querying examples from big datasets is slower than small datasets After some experiments with bookcorpus I noticed that querying examples from big datasets is slower than small datasets. For example ```python from datasets import load_dataset b1 = load_dataset("bookcorpus", split="train[:1%]") b50 = load_datas...
[ -0.40549370646476746, 0.05864446982741356, -0.07997152954339981, 0.1315324306488037, -0.07969389110803604, -0.1776914894580841, 0.2441747635602951, 0.4918093979358673, -0.23139889538288116, 0.1801937073469162, 0.08149363845586777, 0.0550893135368824, 0.12508580088615417, -0.100756667554378...
https://github.com/huggingface/datasets/issues/1803
Querying examples from big datasets is slower than small datasets
I have a dataset with about 2.7 million rows (which I'm loading via `load_from_disk`), and I need to fetch around 300k (particular) rows of it, by index. Currently this is taking a really long time (~8 hours). I tried sharding the large dataset but overall it doesn't change how long it takes to fetch the desired rows. ...
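In case it helps while this is investigated, the usual way to pull a specific set of rows is `Dataset.select`; a quick sketch (the path and indices are placeholders, and `flatten_indices` can be skipped if your version doesn't have it):

```python
from datasets import load_from_disk

dset = load_from_disk("path/to/my_dataset")   # placeholder path

wanted_indices = [12, 7, 300000, 42]          # the particular rows you need
subset = dset.select(wanted_indices)          # new Dataset with just those rows

# Optionally rewrite the subset contiguously so later reads don't jump around the big file:
subset.flatten_indices().save_to_disk("path/to/my_subset")
```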
After some experiments with bookcorpus I noticed that querying examples from big datasets is slower than small datasets. For example ```python from datasets import load_dataset b1 = load_dataset("bookcorpus", split="train[:1%]") b50 = load_dataset("bookcorpus", split="train[:50%]") b100 = load_dataset("bookcorp...
125
Querying examples from big datasets is slower than small datasets After some experiments with bookcorpus I noticed that querying examples from big datasets is slower than small datasets. For example ```python from datasets import load_dataset b1 = load_dataset("bookcorpus", split="train[:1%]") b50 = load_datas...
[ -0.40549370646476746, 0.05864446982741356, -0.07997152954339981, 0.1315324306488037, -0.07969389110803604, -0.1776914894580841, 0.2441747635602951, 0.4918093979358673, -0.23139889538288116, 0.1801937073469162, 0.08149363845586777, 0.0550893135368824, 0.12508580088615417, -0.100756667554378...
https://github.com/huggingface/datasets/issues/1803
Querying examples from big datasets is slower than small datasets
Hi ! Feel free to post a message on the [forum](https://discuss.huggingface.co/c/datasets/10). I'd be happy to help you with this. In your post on the forum, feel free to add more details about your setup: What are column names and types of your dataset ? How was the dataset constructed ? Is the dataset shuffled ...
After some experiments with bookcorpus I noticed that querying examples from big datasets is slower than small datasets. For example ```python from datasets import load_dataset b1 = load_dataset("bookcorpus", split="train[:1%]") b50 = load_dataset("bookcorpus", split="train[:50%]") b100 = load_dataset("bookcorp...
95
Querying examples from big datasets is slower than small datasets After some experiments with bookcorpus I noticed that querying examples from big datasets is slower than small datasets. For example ```python from datasets import load_dataset b1 = load_dataset("bookcorpus", split="train[:1%]") b50 = load_datas...
[ -0.40549370646476746, 0.05864446982741356, -0.07997152954339981, 0.1315324306488037, -0.07969389110803604, -0.1776914894580841, 0.2441747635602951, 0.4918093979358673, -0.23139889538288116, 0.1801937073469162, 0.08149363845586777, 0.0550893135368824, 0.12508580088615417, -0.100756667554378...
https://github.com/huggingface/datasets/issues/1797
Connection error
Hi ! For future references let me add a link to our discussion here : https://github.com/huggingface/datasets/issues/759#issuecomment-770684693 Let me know if you manage to fix your proxy issue or if we can do something on our end to help you :)
Hi I am hitting to the error, help me and thanks. `train_data = datasets.load_dataset("xsum", split="train")` `ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/xsum/xsum.py`
40
Connection error Hi I am hitting to the error, help me and thanks. `train_data = datasets.load_dataset("xsum", split="train")` `ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/xsum/xsum.py` Hi ! For future references let me add a link to our discussion here...
[ -0.2624680697917938, -0.47331249713897705, -0.1055455431342125, 0.1787155121564865, 0.5621638894081116, -0.10308819264173508, -0.017071938142180443, 0.22683119773864746, 0.1645880490541458, 0.22738096117973328, -0.3096102178096771, -0.004598597530275583, 0.20449618995189667, 0.443228662014...
https://github.com/huggingface/datasets/issues/1796
Filter on dataset too much slowww
When I use the filter on the arrow table directly, it works like butter. But I can't find a way to update the table in the `Dataset` object. ``` ds_table = dataset.data.filter(mask=dataset['flag']) ```
I have a dataset with 50M rows. For pre-processing, I need to tokenize this and filter rows with the large sequence. My tokenization took roughly 12mins. I used `map()` with batch size 1024 and multi-process with 96 processes. When I applied the `filter()` function it is taking too much time. I need to filter se...
32
Filter on dataset too much slowww I have a dataset with 50M rows. For pre-processing, I need to tokenize this and filter rows with the large sequence. My tokenization took roughly 12mins. I used `map()` with batch size 1024 and multi-process with 96 processes. When I applied the `filter()` function it is takin...
[ -0.2712087631225586, -0.0597313791513443, -0.11476314067840576, -0.2779483497142792, -0.17065058648586273, -0.2504286766052246, 0.17076264321804047, 0.2090245485305786, 0.25987720489501953, -0.11639663577079773, -0.08618315309286118, 0.2958265244960785, -0.0557142049074173, 0.4060815274715...
https://github.com/huggingface/datasets/issues/1796
Filter on dataset too much slowww
Hi ! Currently the filter method reads the dataset batch by batch to write a new, filtered, arrow file on disk. Therefore all the reading + writing can take some time. Using a mask directly on the arrow table doesn't do any read or write operation therefore it's way quicker. Replacing the old table by the new one s...
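To make the comparison concrete, a sketch of the arrow-level mask approach from the snippet above (assuming a boolean `flag` column; note the result is a pyarrow table, not a `Dataset`):

```python
import pyarrow as pa
from datasets import Dataset

dset = Dataset.from_dict({"text": ["a", "b", "c"], "flag": [True, False, True]})

mask = pa.array(dset["flag"])            # boolean mask over the rows
filtered_table = dset.data.filter(mask)  # fast: no new Arrow file is written to disk

print(filtered_table.num_rows)  # 2
```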
I have a dataset with 50M rows. For pre-processing, I need to tokenize this and filter rows with the large sequence. My tokenization took roughly 12mins. I used `map()` with batch size 1024 and multi-process with 96 processes. When I applied the `filter()` function it is taking too much time. I need to filter se...
162
Filter on dataset too much slowww I have a dataset with 50M rows. For pre-processing, I need to tokenize this and filter rows with the large sequence. My tokenization took roughly 12mins. I used `map()` with batch size 1024 and multi-process with 96 processes. When I applied the `filter()` function it is takin...
[ -0.25801458954811096, -0.12244263291358948, -0.08642466366291046, -0.260146826505661, -0.1652653068304062, -0.2456996738910675, 0.25937533378601074, 0.26136812567710876, 0.3136884272098541, -0.15321603417396545, 0.0066367811523377895, 0.22178930044174194, -0.11383549124002457, 0.3481482565...
https://github.com/huggingface/datasets/issues/1796
Filter on dataset too much slowww
Hi @lhoestq @ayubSubhaniya, If there's no progress on this one, can I try working on it? Thanks, Gunjan
I have a dataset with 50M rows. For pre-processing, I need to tokenize this and filter rows with the large sequence. My tokenization took roughly 12mins. I used `map()` with batch size 1024 and multi-process with 96 processes. When I applied the `filter()` function it is taking too much time. I need to filter se...
18
Filter on dataset too much slowww I have a dataset with 50M rows. For pre-processing, I need to tokenize this and filter rows with the large sequence. My tokenization took roughly 12mins. I used `map()` with batch size 1024 and multi-process with 96 processes. When I applied the `filter()` function it is takin...
[ -0.22525474429130554, -0.2595489025115967, -0.10918066650629044, -0.21481584012508392, -0.12303641438484192, -0.269241601228714, 0.2215610146522522, 0.2114836722612381, 0.26342976093292236, -0.12331388890743256, 0.027962177991867065, 0.09088090807199478, -0.11606823652982712, 0.40823641419...
https://github.com/huggingface/datasets/issues/1796
Filter on dataset too much slowww
Sure @gchhablani feel free to start working on it, this would be very appreciated :) This feature would be really awesome, especially since arrow allows masking really quickly and without having to rewrite the dataset on disk
I have a dataset with 50M rows. For pre-processing, I need to tokenize this and filter rows with the large sequence. My tokenization took roughly 12mins. I used `map()` with batch size 1024 and multi-process with 96 processes. When I applied the `filter()` function it is taking too much time. I need to filter se...
39
Filter on dataset too much slowww I have a dataset with 50M rows. For pre-processing, I need to tokenize this and filter rows with the large sequence. My tokenization took roughly 12mins. I used `map()` with batch size 1024 and multi-process with 96 processes. When I applied the `filter()` function it is takin...
[ -0.25474002957344055, -0.2351187765598297, -0.10145193338394165, -0.22573474049568176, -0.14286434650421143, -0.28716859221458435, 0.2794649600982666, 0.23597829043865204, 0.3127666413784027, -0.11907652765512466, 0.062109604477882385, 0.13041572272777557, -0.12634707987308502, 0.422411084...
https://github.com/huggingface/datasets/issues/1790
ModuleNotFoundError: No module named 'apache_beam', when specific languages.
Hi ! Apache Beam is a framework used to define data transformation pipelines. These pipelines can then be run in many runtimes: DataFlow, Spark, Flink, etc. There also exists a local runner called the DirectRunner. Wikipedia is a dataset that requires some parsing, so to allow the processing to be run on this kind of...
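Concretely, processing one of the non-preprocessed Wikipedia configs locally looks roughly like this (the DirectRunner needs apache-beam and mwparserfromhell installed, plus a lot of RAM and disk):

```python
# pip install apache-beam mwparserfromhell
from datasets import load_dataset

wiki = load_dataset(
    "wikipedia",
    "20200501.ja",               # a config without preprocessed files
    beam_runner="DirectRunner",  # run the Beam pipeline locally
    cache_dir="./datasets",
)
```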
```py import datasets wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets') ``` then `ModuleNotFoundError: No module named 'apache_beam'` happend. The error doesn't appear when it's '20200501.en'. I don't know Apache Beam, but according to #498 it isn't necessary when it's saved to lo...
167
ModuleNotFoundError: No module named 'apache_beam', when specific languages. ```py import datasets wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets') ``` then `ModuleNotFoundError: No module named 'apache_beam'` happend. The error doesn't appear when it's '20200501.en'. I don't k...
[ 0.19116747379302979, 0.0017938798991963267, -0.004188823979347944, 0.18698330223560333, 0.271381676197052, 0.1396203339099884, 0.42134740948677063, 0.3950364887714386, 0.22948026657104492, 0.1781059354543686, 0.10769926011562347, -0.09141667187213898, 0.05334948003292084, -0.10924547910690...
https://github.com/huggingface/datasets/issues/1790
ModuleNotFoundError: No module named 'apache_beam', when specific languages.
Thanks for your reply! I understood. I tried again after installing apache-beam and adding `beam_runner="DirectRunner"`; another package, `mwparserfromhell`, is also required, so I installed it. But it also failed: it exited 1 without an error message. ```py import datasets # BTW, 20200501.ja doesn't exist at wikipedia, s...
```py import datasets wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets') ``` then `ModuleNotFoundError: No module named 'apache_beam'` happend. The error doesn't appear when it's '20200501.en'. I don't know Apache Beam, but according to #498 it isn't necessary when it's saved to lo...
279
ModuleNotFoundError: No module named 'apache_beam', when specific languages. ```py import datasets wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets') ``` then `ModuleNotFoundError: No module named 'apache_beam'` happend. The error doesn't appear when it's '20200501.en'. I don't k...
[ 0.09179474413394928, 0.0745994970202446, 0.019585996866226196, 0.2518455684185028, 0.25708386301994324, 0.040905892848968506, 0.3200167417526245, 0.3480713963508606, 0.16672630608081818, 0.1959521323442459, 0.21741540729999542, 0.0488915741443634, -0.09746213257312775, -0.2695733308792114,...
https://github.com/huggingface/datasets/issues/1790
ModuleNotFoundError: No module named 'apache_beam', when specific languages.
Hi @miyamonz, I tried replicating this issue using the same snippet used by you. I am able to download the dataset without any issues, although I stopped it in the middle because the dataset is huge. Based on a similar issue [here](https://github.com/google-research/fixmatch/issues/23), it could be related to you...
```py import datasets wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets') ``` then `ModuleNotFoundError: No module named 'apache_beam'` happend. The error doesn't appear when it's '20200501.en'. I don't know Apache Beam, but according to #498 it isn't necessary when it's saved to lo...
61
ModuleNotFoundError: No module named 'apache_beam', when specific languages. ```py import datasets wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets') ``` then `ModuleNotFoundError: No module named 'apache_beam'` happend. The error doesn't appear when it's '20200501.en'. I don't k...
[ -0.08689077943563461, 0.1544124335050583, 0.03492113575339317, 0.2511787414550781, 0.35333332419395447, 0.10837160050868988, 0.32128453254699707, 0.4499053359031677, 0.3734614849090576, 0.008810571394860744, 0.10232548415660858, 0.006331880576908588, -0.009907256811857224, -0.0225601363927...
https://github.com/huggingface/datasets/issues/1790
ModuleNotFoundError: No module named 'apache_beam', when specific languages.
thanks for your reply and sorry for my late response. ## environment my local machine environment info - Ubuntu on WSL2 `lsb_release -a` ``` No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 20.04.2 LTS Release: 20.04 Codename: focal ``` RTX 2070 super Inside WS...
```py import datasets wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets') ``` then `ModuleNotFoundError: No module named 'apache_beam'` happend. The error doesn't appear when it's '20200501.en'. I don't know Apache Beam, but according to #498 it isn't necessary when it's saved to lo...
606
ModuleNotFoundError: No module named 'apache_beam', when specific languages. ```py import datasets wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets') ``` then `ModuleNotFoundError: No module named 'apache_beam'` happend. The error doesn't appear when it's '20200501.en'. I don't k...
[ -0.01646200381219387, -0.12084280699491501, -0.04492584615945816, 0.1289525330066681, 0.3602374196052551, 0.011790109798312187, 0.46351608633995056, 0.39907482266426086, 0.26570284366607666, 0.22124485671520233, 0.3692161440849304, -0.09264913201332092, 0.0788547471165657, -0.0503802374005...
https://github.com/huggingface/datasets/issues/1790
ModuleNotFoundError: No module named 'apache_beam', when specific languages.
I don't know if this is related, but there is this issue on the wikipedia processing that you reported at #2031 (open PR is at #2037). Does the fix you proposed at #2037 help in your case ? And for information, the DirectRunner of Apache Beam is not optimized for memory intensive tasks, so you must be right whe...
```py import datasets wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets') ``` then `ModuleNotFoundError: No module named 'apache_beam'` happend. The error doesn't appear when it's '20200501.en'. I don't know Apache Beam, but according to #498 it isn't necessary when it's saved to lo...
72
ModuleNotFoundError: No module named 'apache_beam', when specific languages. ```py import datasets wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets') ``` then `ModuleNotFoundError: No module named 'apache_beam'` happend. The error doesn't appear when it's '20200501.en'. I don't k...
[ 0.11173438280820847, -0.006441310979425907, 0.008771125227212906, 0.3851286768913269, 0.3766988515853882, 0.09046314656734467, 0.18669424951076508, 0.4343985617160797, 0.23802949488162994, 0.12736876308918, 0.17051461338996887, 0.022492023184895515, -0.012138871476054192, -0.21757300198078...
https://github.com/huggingface/datasets/issues/1790
ModuleNotFoundError: No module named 'apache_beam', when specific languages.
#2037 doesn't solve my problem directly, but I found the point! https://github.com/huggingface/datasets/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/datasets/wikipedia/wikipedia.py#L523 this `beam.transforms.Reshuffle()` causes the memory error. It makes sense if I consider what the shuffle means. Beam's reshuffl...
```py import datasets wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets') ``` then `ModuleNotFoundError: No module named 'apache_beam'` happend. The error doesn't appear when it's '20200501.en'. I don't know Apache Beam, but according to #498 it isn't necessary when it's saved to lo...
111
ModuleNotFoundError: No module named 'apache_beam', when specific languages. ```py import datasets wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets') ``` then `ModuleNotFoundError: No module named 'apache_beam'` happend. The error doesn't appear when it's '20200501.en'. I don't k...
[ 0.11943493783473969, 0.1108863428235054, 0.009472830221056938, 0.429506778717041, 0.5010866522789001, 0.05407458171248436, 0.18194760382175446, 0.4008020758628845, -0.05477729067206383, 0.316261887550354, 0.11126868426799774, 0.02708803303539753, -0.19562506675720215, -0.3097156882286072, ...
https://github.com/huggingface/datasets/issues/1790
ModuleNotFoundError: No module named 'apache_beam', when specific languages.
The reshuffle is needed when you use parallelism. The objective is to redistribute the articles evenly across the workers, since the `_extract_content` step generated many articles per file. By using reshuffle, we can split the processing of the articles of one file across several workers. Without reshuffle, all the article...
```py import datasets wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets') ``` then `ModuleNotFoundError: No module named 'apache_beam'` happend. The error doesn't appear when it's '20200501.en'. I don't know Apache Beam, but according to #498 it isn't necessary when it's saved to lo...
73
ModuleNotFoundError: No module named 'apache_beam', when specific languages. ```py import datasets wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets') ``` then `ModuleNotFoundError: No module named 'apache_beam'` happend. The error doesn't appear when it's '20200501.en'. I don't k...
[ 0.040164764970541, 0.010120834223926067, -0.005872792098671198, 0.1293458193540573, -0.011950431391596794, 0.05967085435986519, 0.3661443293094635, 0.34801042079925537, 0.022974494844675064, 0.27976179122924805, 0.1604570597410202, 0.07360950857400894, 0.19807502627372742, -0.1953660994768...
https://github.com/huggingface/datasets/issues/1790
ModuleNotFoundError: No module named 'apache_beam', when specific languages.
Maybe the reshuffle step can be added only if the runner is not a DirectRunner ?
```py import datasets wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets') ``` then `ModuleNotFoundError: No module named 'apache_beam'` happend. The error doesn't appear when it's '20200501.en'. I don't know Apache Beam, but according to #498 it isn't necessary when it's saved to lo...
16
ModuleNotFoundError: No module named 'apache_beam', when specific languages. ```py import datasets wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets') ``` then `ModuleNotFoundError: No module named 'apache_beam'` happend. The error doesn't appear when it's '20200501.en'. I don't k...
[ -0.037829507142305374, 0.051789291203022, 0.04592253267765045, 0.161894753575325, 0.2217683643102646, 0.049549609422683716, 0.28777045011520386, 0.303241103887558, 0.025933723896741867, 0.1799335777759552, 0.2910350263118744, -0.019249025732278824, 0.1045149564743042, -0.023227764293551445...
https://github.com/huggingface/datasets/issues/1786
How to use split dataset
By default, all 3 splits will be loaded if you run the following: ```python from datasets import load_dataset dataset = load_dataset("lambada") print(dataset["train"]) print(dataset["valid"]) ``` If you wanted to load this manually, you could do this: ```python from datasets import load_dataset dat...
![Capture1](https://user-images.githubusercontent.com/78090287/106057436-cb6a1f00-6111-11eb-8c9c-3658065b1fdf.PNG) Hey, I want to split the lambada dataset into corpus, test, train and valid txt files (like penn treebank) but I am not able to achieve this. What I am doing is, executing the lambada.py file in my pro...
56
How to use split dataset ![Capture1](https://user-images.githubusercontent.com/78090287/106057436-cb6a1f00-6111-11eb-8c9c-3658065b1fdf.PNG) Hey, I want to split the lambada dataset into corpus, test, train and valid txt files (like penn treebank) but I am not able to achieve this. What I am doing is, executing t...
[ -0.3074783384799957, -0.04643095284700394, -0.06252636015415192, 0.4651072323322296, 0.1305699348449707, 0.20379114151000977, 0.2765963077545166, 0.5807617902755737, -0.4295650124549866, 0.015046034008264542, -0.3925749659538269, 0.2540951669216156, 0.18384914100170135, 0.3340368866920471,...
https://github.com/huggingface/datasets/issues/1785
Not enough disk space (Needed: Unknown size) when caching on a cluster
Hi ! What do you mean by "`disk_usage(".").free` can't compute on the cluster's shared disk" exactly ? Does it return 0 ?
I'm running some experiments where I'm caching datasets on a cluster and accessing it through multiple compute nodes. However, I get an error when loading the cached dataset from the shared disk. The exact error thrown: ```bash >>> load_dataset(dataset, cache_dir="/path/to/cluster/shared/path") OSError: Not eno...
22
Not enough disk space (Needed: Unknown size) when caching on a cluster I'm running some experiments where I'm caching datasets on a cluster and accessing it through multiple compute nodes. However, I get an error when loading the cached dataset from the shared disk. The exact error thrown: ```bash >>> load_dat...
[ -0.07774288952350616, -0.41871702671051025, -0.0846218541264534, 0.24316053092479706, -0.09700983017683029, 0.15066014230251312, 0.07053445279598236, 0.34012123942375183, 0.396234929561615, 0.39331603050231934, 0.41839298605918884, -0.13893121480941772, -0.0912499651312828, 0.0038914782926...
https://github.com/huggingface/datasets/issues/1785
Not enough disk space (Needed: Unknown size) when caching on a cluster
Yes, that's right. It shows 0 free space even though there is. I suspect it might have to do with permissions on the shared disk. ```python >>> disk_usage(".") usage(total=999999, used=999999, free=0) ```
I'm running some experiments where I'm caching datasets on a cluster and accessing it through multiple compute nodes. However, I get an error when loading the cached dataset from the shared disk. The exact error thrown: ```bash >>> load_dataset(dataset, cache_dir="/path/to/cluster/shared/path") OSError: Not eno...
32
Not enough disk space (Needed: Unknown size) when caching on a cluster I'm running some experiments where I'm caching datasets on a cluster and accessing it through multiple compute nodes. However, I get an error when loading the cached dataset from the shared disk. The exact error thrown: ```bash >>> load_dat...
[ -0.07774288952350616, -0.41871702671051025, -0.0846218541264534, 0.24316053092479706, -0.09700983017683029, 0.15066014230251312, 0.07053445279598236, 0.34012123942375183, 0.396234929561615, 0.39331603050231934, 0.41839298605918884, -0.13893121480941772, -0.0912499651312828, 0.0038914782926...
https://github.com/huggingface/datasets/issues/1785
Not enough disk space (Needed: Unknown size) when caching on a cluster
That's an interesting behavior... Do you know any other way to get the free space that works in your case ? Also, if it's a permission issue, could you try fixing the permissions and let us know if that helped ?
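For reference, two common ways to check the free space from Python; comparing them on the shared mount might show whether it is a permission or filesystem quirk (the path is a placeholder):

```python
import os
import shutil

shared_path = "/path/to/cluster/shared/path"

# Same style of call as above: named tuple with total, used, free in bytes.
print(shutil.disk_usage(shared_path))

# Lower-level alternative via statvfs; f_bavail is the space available to non-root users.
st = os.statvfs(shared_path)
print(st.f_bavail * st.f_frsize)
```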
I'm running some experiments where I'm caching datasets on a cluster and accessing it through multiple compute nodes. However, I get an error when loading the cached dataset from the shared disk. The exact error thrown: ```bash >>> load_dataset(dataset, cache_dir="/path/to/cluster/shared/path") OSError: Not eno...
41
Not enough disk space (Needed: Unknown size) when caching on a cluster I'm running some experiments where I'm caching datasets on a cluster and accessing it through multiple compute nodes. However, I get an error when loading the cached dataset from the shared disk. The exact error thrown: ```bash >>> load_dat...
[ -0.07774288952350616, -0.41871702671051025, -0.0846218541264534, 0.24316053092479706, -0.09700983017683029, 0.15066014230251312, 0.07053445279598236, 0.34012123942375183, 0.396234929561615, 0.39331603050231934, 0.41839298605918884, -0.13893121480941772, -0.0912499651312828, 0.0038914782926...
https://github.com/huggingface/datasets/issues/1785
Not enough disk space (Needed: Unknown size) when caching on a cluster
I think it's an issue on the cluster's end (unclear exactly why -- maybe something with docker containers?), will close the issue
I'm running some experiments where I'm caching datasets on a cluster and accessing it through multiple compute nodes. However, I get an error when loading the cached dataset from the shared disk. The exact error thrown: ```bash >>> load_dataset(dataset, cache_dir="/path/to/cluster/shared/path") OSError: Not eno...
22
Not enough disk space (Needed: Unknown size) when caching on a cluster I'm running some experiments where I'm caching datasets on a cluster and accessing it through multiple compute nodes. However, I get an error when loading the cached dataset from the shared disk. The exact error thrown: ```bash >>> load_dat...
[ -0.07774288952350616, -0.41871702671051025, -0.0846218541264534, 0.24316053092479706, -0.09700983017683029, 0.15066014230251312, 0.07053445279598236, 0.34012123942375183, 0.396234929561615, 0.39331603050231934, 0.41839298605918884, -0.13893121480941772, -0.0912499651312828, 0.0038914782926...
https://github.com/huggingface/datasets/issues/1784
JSONDecodeError on JSON with multiple lines
Hi ! The `json` dataset script does support this format. For example loading a dataset with this format works on my side: ```json {"key1":11, "key2":12, "key3":13} {"key1":21, "key2":22, "key3":23} ``` Can you show the full stacktrace please ? Also which version of datasets and pyarrow are you using ?
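For completeness, loading a file in that line-delimited format would look like this (the file name is a placeholder):

```python
from datasets import load_dataset

# data.json contains one JSON object per line:
# {"key1":11, "key2":12, "key3":13}
# {"key1":21, "key2":22, "key3":23}
dset = load_dataset("json", data_files={"train": "data.json"}, split="train")
print(dset.features)  # key1, key2, key3 inferred as int64 columns
print(dset[0])        # {'key1': 11, 'key2': 12, 'key3': 13}
```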
Hello :), I have been trying to load data using a JSON file. Based on the [docs](https://huggingface.co/docs/datasets/loading_datasets.html#json-files), the following format is supported: ```json {"key1":11, "key2":12, "key3":13} {"key1":21, "key2":22, "key3":23} ``` But, when I try loading a dataset with th...
49
JSONDecodeError on JSON with multiple lines Hello :), I have been trying to load data using a JSON file. Based on the [docs](https://huggingface.co/docs/datasets/loading_datasets.html#json-files), the following format is supported: ```json {"key1":11, "key2":12, "key3":13} {"key1":21, "key2":22, "key3":23} `...
[ 0.07569852471351624, -0.004834028892219067, -0.029282810166478157, 0.4874003529548645, 0.3547830581665039, 0.14951938390731812, 0.4615655839443207, 0.06345672905445099, 0.1587531715631485, 0.018791066482663155, 0.16414323449134827, 0.28063270449638367, -0.04647229611873627, 0.2131016552448...
https://github.com/huggingface/datasets/issues/1784
JSONDecodeError on JSON with multiple lines
Hi Quentin! I apologize for bothering you. There was some issue with my pyarrow version as far as I understand. I don't remember the exact version I was using as I didn't check it. I repeated it with `datasets 1.2.1` and `pyarrow 2.0.0` and it worked. Closing this issue. Again, sorry for the bother. Thanks...
Hello :), I have been trying to load data using a JSON file. Based on the [docs](https://huggingface.co/docs/datasets/loading_datasets.html#json-files), the following format is supported: ```json {"key1":11, "key2":12, "key3":13} {"key1":21, "key2":22, "key3":23} ``` But, when I try loading a dataset with th...
56
JSONDecodeError on JSON with multiple lines Hello :), I have been trying to load data using a JSON file. Based on the [docs](https://huggingface.co/docs/datasets/loading_datasets.html#json-files), the following format is supported: ```json {"key1":11, "key2":12, "key3":13} {"key1":21, "key2":22, "key3":23} `...
[ 0.11187915503978729, -0.0649547353386879, -0.012783469632267952, 0.5094955563545227, 0.36685115098953247, 0.14520752429962158, 0.4116666316986084, 0.06978582590818405, 0.11648772656917572, 0.07758273929357529, 0.12275238335132599, 0.3139994740486145, -0.013312596827745438, 0.17431649565696...
https://github.com/huggingface/datasets/issues/1783
Dataset Examples Explorer
Hi @ChewKokWah, We're working on it! In the meantime, you can still find the dataset explorer at the following URL: https://huggingface.co/datasets/viewer/
In the older version of Datasets, there was a useful Dataset Explorer that allowed users to visualize the examples (training, test, and validation) of a particular dataset; it is no longer there in the current version. I hope HuggingFace can re-enable the feature to at least allow viewing of the first 20 examples of a ...
21
Dataset Examples Explorer In the older version of Datasets, there was a useful Dataset Explorer that allowed users to visualize the examples (training, test, and validation) of a particular dataset; it is no longer there in the current version. I hope HuggingFace can re-enable the feature to at least allow viewing of ...
[ -0.3861367702484131, -0.387398362159729, -0.061884526163339615, 0.29741257429122925, 0.1567152887582779, 0.2989472448825836, 0.08448292315006256, 0.35528409481048584, -0.06431106477975845, 0.376342236995697, 0.0037852777168154716, 0.29944220185279846, -0.29005733132362366, 0.52883404493331...
https://github.com/huggingface/datasets/issues/1783
Dataset Examples Explorer
Glad to see that it still exists; this existing one is more than good enough for me. It is feature-rich, simple to use, and concise. I hope a similar feature can be retained in future versions.
In the older version of Datasets, there was a useful Dataset Explorer that allowed users to visualize the examples (training, test, and validation) of a particular dataset; it is no longer there in the current version. I hope HuggingFace can re-enable the feature to at least allow viewing of the first 20 examples of a ...
36
Dataset Examples Explorer In the older version of Datasets, there was a useful Dataset Explorer that allowed users to visualize the examples (training, test, and validation) of a particular dataset; it is no longer there in the current version. I hope HuggingFace can re-enable the feature to at least allow viewing of ...
[ -0.3628716766834259, -0.4238748550415039, -0.046912845224142075, 0.3083788752555847, 0.12723509967327118, 0.2346736490726471, 0.1710147112607956, 0.3707433044910431, -0.046005334705114365, 0.37545105814933777, 0.03735959529876709, 0.2407732456922531, -0.3106936514377594, 0.5867727994918823...
https://github.com/huggingface/datasets/issues/1781
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' during import
Hi ! I'm not able to reproduce the issue. Can you try restarting your runtime ? The PyExtensionType is available in pyarrow starting 0.17.1 iirc. If restarting your runtime doesn't fix this, can you try updating pyarrow ? ``` pip install pyarrow --upgrade ```
I'm using Colab. And suddenly this morning, there is this error. Have a look below! ![screenshot-colab research google com-2021 01 26-08-15-36](https://user-images.githubusercontent.com/45964869/105799890-fdaf3b80-5fae-11eb-8f06-11b65cdccc30.png)
44
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' during import I'm using Colab. And suddenly this morning, there is this error. Have a look below! ![screenshot-colab research google com-2021 01 26-08-15-36](https://user-images.githubusercontent.com/45964869/105799890-fdaf3b80-5fae-11eb-8f06-11b...
[ -0.1790660172700882, -0.1585773080587387, -0.044393882155418396, 0.2205853909254074, 0.18019793927669525, -0.06392691284418106, 0.1799791306257248, 0.26824215054512024, -0.3114219903945923, 0.19323432445526123, -0.036104802042245865, 0.43298816680908203, -0.16673225164413452, -0.1118563786...
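A quick sanity check for the environment described above, assuming the runtime was restarted after upgrading (a sketch, not from the original thread):
```python
import pyarrow

# Verify that the interpreter picked up a pyarrow recent enough to expose
# PyExtensionType (reportedly available from 0.17.1 per the comment above).
print(pyarrow.__version__)
print(hasattr(pyarrow, "PyExtensionType"))
```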
https://github.com/huggingface/datasets/issues/1781
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' during import
Yes indeed. Also it looks like Pyarrow 3.0.0 got released on pypi 10 hours ago. This might be related to the bug, I'll investigate EDIT: looks like the 3.0.0 release doesn't have unexpected breaking changes for us, so I don't think the issue comes from that
I'm using Colab. And suddenly this morning, there is this error. Have a look below! ![screenshot-colab research google com-2021 01 26-08-15-36](https://user-images.githubusercontent.com/45964869/105799890-fdaf3b80-5fae-11eb-8f06-11b65cdccc30.png)
46
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' during import I'm using Colab. And suddenly this morning, there is this error. Have a look below! ![screenshot-colab research google com-2021 01 26-08-15-36](https://user-images.githubusercontent.com/45964869/105799890-fdaf3b80-5fae-11eb-8f06-11b...
[ -0.013139103539288044, -0.06683257967233658, -0.03311915695667267, 0.14838816225528717, 0.15526390075683594, -0.136658176779747, 0.29721179604530334, 0.32552459836006165, -0.3937355577945709, 0.18657316267490387, -0.017269041389226913, 0.3017129600048065, -0.09787213802337646, -0.135952413...
https://github.com/huggingface/datasets/issues/1781
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' during import
Installing datasets installs pyarrow>=0.17.1, so in theory it doesn't matter which version of pyarrow colab has by default (which is currently pyarrow 0.14.1). Also, the colab runtime now refreshes the pyarrow version automatically after the update from pip (previously you needed to restart your runtime). I guess wha...
I'm using Colab. And suddenly this morning, there is this error. Have a look below! ![screenshot-colab research google com-2021 01 26-08-15-36](https://user-images.githubusercontent.com/45964869/105799890-fdaf3b80-5fae-11eb-8f06-11b65cdccc30.png)
72
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' during import I'm using Colab. And suddenly this morning, there is this error. Have a look below! ![screenshot-colab research google com-2021 01 26-08-15-36](https://user-images.githubusercontent.com/45964869/105799890-fdaf3b80-5fae-11eb-8f06-11b...
[ -0.4181622862815857, 0.15043915808200836, -0.019076533615589142, 0.1533479392528534, 0.13422733545303345, -0.02369346283376217, 0.25424617528915405, 0.2454892098903656, -0.32417112588882446, 0.11164049059152603, 0.03812059760093689, 0.3774307072162628, -0.081313356757164, -0.04786325991153...
https://github.com/huggingface/datasets/issues/1781
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' during import
Yes, colab doesn't reload a preloaded library unless you restart the instance. Maybe we should move the check to the top of the init.
I'm using Colab. And suddenly this morning, there is this error. Have a look below! ![screenshot-colab research google com-2021 01 26-08-15-36](https://user-images.githubusercontent.com/45964869/105799890-fdaf3b80-5fae-11eb-8f06-11b65cdccc30.png)
22
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' during import I'm using Colab. And suddenly this morning, there is this error. Have a look below! ![screenshot-colab research google com-2021 01 26-08-15-36](https://user-images.githubusercontent.com/45964869/105799890-fdaf3b80-5fae-11eb-8f06-11b...
[ -0.1401677429676056, -0.14400196075439453, -0.03869426250457764, 0.2877446115016937, 0.20868819952011108, -0.15753273665905, 0.1334635466337204, 0.1926027089357376, -0.27476322650909424, 0.12113486975431442, -0.06184003874659538, 0.41798681020736694, -0.1491585075855255, 0.0373290106654167...
https://github.com/huggingface/datasets/issues/1776
[Question & Bug Report] Can we preprocess a dataset on the fly?
We are very actively working on this. What does your dataset look like in practice (number/size/type of files)?
I know we can use `Datasets.map` to preprocess a dataset, but I'm using it with a very large corpus, which generates a huge cache file (several TB of cache from a 400 GB text file). I have no disk large enough to save it. Can we preprocess a dataset on the fly without generating a cache? BTW, I tried raising `writer_batch_si...
18
[Question & Bug Report] Can we preprocess a dataset on the fly? I know we can use `Datasets.map` to preprocess a dataset, but I'm using it with a very large corpus, which generates a huge cache file (several TB of cache from a 400 GB text file). I have no disk large enough to save it. Can we preprocess a dataset on the fly ...
[ -0.3903963565826416, -0.19714827835559845, -0.07739083468914032, -0.0038241061847656965, 0.3977740406990051, 0.33564260601997375, 0.26100385189056396, 0.29689648747444153, -0.0846412256360054, 0.1327672153711319, 0.06934796273708344, 0.12865325808525085, -0.13560107350349426, 0.22886551916...
https://github.com/huggingface/datasets/issues/1776
[Question & Bug Report] Can we preprocess a dataset on the fly?
It's a text file with many lines (about 1B) of Chinese sentences. I use it to train a language model using https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm.py
I know we can use `Datasets.map` to preprocess a dataset, but I'm using it with a very large corpus, which generates a huge cache file (several TB of cache from a 400 GB text file). I have no disk large enough to save it. Can we preprocess a dataset on the fly without generating a cache? BTW, I tried raising `writer_batch_si...
21
[Question & Bug Report] Can we preprocess a dataset on the fly? I know we can use `Datasets.map` to preprocess a dataset, but I'm using it with a very large corpus, which generates a huge cache file (several TB of cache from a 400 GB text file). I have no disk large enough to save it. Can we preprocess a dataset on the fly ...
[ -0.37494978308677673, -0.3105449080467224, -0.025356240570545197, 0.07814650237560272, 0.41907641291618347, 0.3276047110557556, 0.2523878812789917, 0.24938976764678955, -0.11202412098646164, 0.06025981158018112, -0.04217410832643509, 0.08074977248907089, -0.11859795451164246, 0.16778141260...
https://github.com/huggingface/datasets/issues/1776
[Question & Bug Report] Can we preprocess a dataset on the fly?
Indeed, I will submit a PR in a few days to enable processing on the fly :) This can be useful in language modeling for tokenization, padding, etc.
I know we can use `Datasets.map` to preprocess a dataset, but I'm using it with a very large corpus, which generates a huge cache file (several TB of cache from a 400 GB text file). I have no disk large enough to save it. Can we preprocess a dataset on the fly without generating a cache? BTW, I tried raising `writer_batch_si...
26
[Question & Bug Report] Can we preprocess a dataset on the fly? I know we can use `Datasets.map` to preprocess a dataset, but I'm using it with a very large corpus, which generates a huge cache file (several TB of cache from a 400 GB text file). I have no disk large enough to save it. Can we preprocess a dataset on the fly ...
[ -0.38063034415245056, -0.19696876406669617, -0.07462406903505325, -0.006863976828753948, 0.4007774591445923, 0.3399392366409302, 0.27787846326828003, 0.26342713832855225, -0.09953608363866806, 0.14619992673397064, 0.1342775523662567, 0.1849377155303955, -0.1421179622411728, 0.2875472605228...
https://github.com/huggingface/datasets/issues/1776
[Question & Bug Report] Can we preprocess a dataset on the fly?
Hi @acul3, Please look at the discussion on a related Issue #1825. I think using `set_transform` after building from source should do.
I know we can use `Datasets.map` to preprocess a dataset, but I'm using it with a very large corpus, which generates a huge cache file (several TB of cache from a 400 GB text file). I have no disk large enough to save it. Can we preprocess a dataset on the fly without generating a cache? BTW, I tried raising `writer_batch_si...
22
[Question & Bug Report] Can we preprocess a dataset on the fly? I know we can use `Datasets.map` to preprocess a dataset, but I'm using it with a very large corpus, which generates a huge cache file (several TB of cache from a 400 GB text file). I have no disk large enough to save it. Can we preprocess a dataset on the fly ...
[ -0.4329628348350525, -0.14976295828819275, -0.021089879795908928, 0.011729194782674313, 0.48501768708229065, 0.27308791875839233, 0.306505024433136, 0.2762749195098877, -0.17471298575401306, 0.10266122221946716, 0.09820547699928284, 0.19635440409183502, -0.08684181421995163, 0.246044754981...
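A minimal sketch of the `set_transform` approach referenced above, assuming a plain text corpus; the file name and transform are placeholders. The transform runs when rows are accessed, so no processed cache file is written to disk.
```python
from datasets import load_dataset

dataset = load_dataset("text", data_files="corpus.txt")["train"]

def on_the_fly(batch):
    # batch is a dict of lists; return a dict of equal-length lists.
    # A real pipeline would tokenize here instead of computing lengths.
    return {"length": [len(t) for t in batch["text"]]}

dataset.set_transform(on_the_fly)
print(dataset[0])  # the transform is applied lazily, at access time
```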
https://github.com/huggingface/datasets/issues/1775
Efficient ways to iterate the dataset
It seems that selecting a subset of columns directly from the dataset, i.e., dataset["column"], is slow.
For a large dataset that does not fit in memory, how can I select only a subset of features from each example? If I iterate over the dataset and then select the subset of features one by one, the resulting memory usage will be huge. Any ways to solve this? Thanks
16
Efficient ways to iterate the dataset For a large dataset that does not fit in memory, how can I select only a subset of features from each example? If I iterate over the dataset and then select the subset of features one by one, the resulting memory usage will be huge. Any ways to solve this? Thanks It seem...
[ -0.2726210355758667, -0.3842424154281616, -0.09381949156522751, 0.3328416347503662, -0.06488361954689026, 0.06871729344129562, -0.08692619204521179, 0.2765877842903137, 0.15681496262550354, 0.1853521466255188, 0.10415738075971603, 0.09687274694442749, 0.07577352225780487, 0.171301648020744...
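One memory-friendly pattern for this, sketched under the assumption that restricting the returned columns is enough for the use case (dataset and column names are placeholders):
```python
from datasets import load_dataset

dataset = load_dataset("imdb", split="train")

# Restrict the columns returned when indexing or iterating, instead of
# materializing every feature of every example.
dataset.set_format(columns=["label"])

for example in dataset:
    _ = example["label"]  # only the selected column is loaded per row
```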
https://github.com/huggingface/datasets/issues/1774
is it possible to make slice to be more compatible like python list and numpy?
Hi ! Thanks for reporting. I am working on changes in the way data are sliced from arrow. I can probably fix your issue with the changes I'm doing. If you have some code to reproduce the issue it would be nice so I can make sure that this case will be supported :) I'll make a PR in a few days
Hi, see below error: ``` AssertionError: Requested slice [:10000000000000000] incompatible with 20 examples. ```
62
is it possible to make slice to be more compatible like python list and numpy? Hi, see below error: ``` AssertionError: Requested slice [:10000000000000000] incompatible with 20 examples. ``` Hi ! Thanks for reporting. I am working on changes in the way data are sliced from arrow. I can probably fix your issue...
[ 0.004519388545304537, -0.06874530762434006, -0.24153326451778412, 0.05697120353579521, 0.4289461672306061, -0.30124402046203613, 0.12670300900936127, 0.37132689356803894, -0.13171705603599548, 0.43240442872047424, -0.014954859390854836, 0.7657302618026733, -0.07993333041667938, 0.285709887...
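Until oversized slices behave like Python list slices, one small workaround is to clamp the stop index to the dataset length; a toy sketch with invented data for illustration:
```python
from datasets import Dataset

dataset = Dataset.from_dict({"x": list(range(20))})

# Clamp the requested stop index so the slice never exceeds the number of rows.
requested_stop = 10_000_000_000_000_000
subset = dataset[: min(requested_stop, len(dataset))]
print(len(subset["x"]))  # 20
```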