html_url (string, 48-51 chars) | title (string, 5-268 chars) | comments (string, 70-51.8k chars) | body (string, 0-29.8k chars) | comment_length (int64, 16-1.52k) | text (string, 164-54.1k chars) | embeddings (list of floats)
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | Hi ! Yes we are working on making `filter` significantly faster. You can look at related PRs here: #2060 #2178
I think you can expect to have the fast version of `filter` available next week.
We'll make it only select one column, and we'll also make the overall filtering operation way faster by avoiding many arr... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 68 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.12373079359531403,
0.29838573932647705,
-0.020849253982305527,
-0.2533702850341797,
0.10165750980377197,
-0.13878673315048218,
0.30278411507606506,
0.612910270690918,
0.24762602150440216,
-0.032855261117219925,
-0.098599374294281,
0.48608213663101196,
0.03813553974032402,
-0.18357942998... |
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | @lhoestq Thanks for the response— it's great to hear that we'll be getting a much faster `filter` method soon. However, my use case does also involve using `map` over a single column in order to pre-compute roughly uniformly sized batches, and right now that is also very slow. Is there any plan to make `map` faster for... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 96 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.18856193125247955,
0.30073273181915283,
-0.006220961455255747,
-0.29155653715133667,
0.1139058917760849,
-0.12519113719463348,
0.37745389342308044,
0.6256682276725769,
0.28753218054771423,
0.01728115975856781,
-0.03644689545035362,
0.5041139125823975,
0.02904377691447735,
-0.16418948769... |
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | Currently the optimal setup for single-column computations is probably to do something like
```python
result = dataset.map(f, input_columns="my_col", remove_columns=dataset.column_names)
```
This has two advantages:
- input_columns="my_col" allows reading only the column "my_col"
- remove_columns=dataset.column_n... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 82 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.1294832080602646,
0.3391113877296448,
0.01074558962136507,
-0.2183181643486023,
0.09912174195051193,
-0.0936415046453476,
0.40800750255584717,
0.5750511288642883,
0.23245948553085327,
0.08215249329805374,
-0.06951151043176651,
0.4634191393852234,
0.02809939533472061,
-0.1797309666872024... |
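For reference, a minimal sketch of the single-column `map` pattern described in the comment above, applied to the `num_tokens` use case from the issue; the dataset config and the whitespace tokenizer are illustrative stand-ins, not the exact setup from the thread:

```python
from datasets import load_dataset

# Illustrative config; the issue uses the `wikipedia` dataset, which has a "text" column.
dset = load_dataset("wikipedia", "20200501.en", split="train")

def count_tokens(texts):
    # Whitespace splitting stands in for a real `tokenizers` tokenizer.
    return {"num_tokens": [len(t.split()) for t in texts]}

# input_columns only reads the "text" column; remove_columns drops every original
# column from the output, so the result is a small single-column dataset.
num_tokens = dset.map(
    count_tokens,
    input_columns="text",
    remove_columns=dset.column_names,
    batched=True,
)
```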
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | Hi @lhoestq ,
I'm hijacking this issue because I'm currently trying the approach you recommend:
> Currently the optimal setup for single-column computations is probably to do something like
>
> ```python
> result = dataset.map(f, input_columns="my_col", remove_columns=dataset.column_names)
> ```
He... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 285 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.1297883540391922,
0.3438465893268585,
0.005627683363854885,
-0.23468345403671265,
0.10948649048805237,
-0.12536682188510895,
0.3955201506614685,
0.5808494091033936,
0.2243223637342453,
0.06910619884729385,
-0.06487752497196198,
0.48750317096710205,
0.02443121001124382,
-0.17363737523555... |
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | Hi ! Can you open a separate issue for that ?
Also if you could provide a google colab or a sample code to reproduce this issue that would be helpful.
On my side I was not able to reproduce this error. | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 42 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.07392751425504684,
0.26226136088371277,
0.010453229770064354,
-0.14601370692253113,
0.17261536419391632,
-0.09211436659097672,
0.3422681987285614,
0.5855396389961243,
0.24096359312534332,
0.013878952711820602,
-0.028818471357226372,
0.4858112931251526,
0.0747995674610138,
-0.24202534556... |
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | @lhoestq Sorry I'm just responding now. I'm currently using your recommendation for the map on a single column, and I've gotten it to be fast enough to sort of work for my use case by just setting `num_proc=10`, although it's still quite slow. It's clear that it is still loading the entirety of each row into memory and... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 195 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.11003413051366806,
0.26948580145835876,
-0.0035481860395520926,
-0.22678841650485992,
0.1348385512828827,
-0.15745609998703003,
0.3228949308395386,
0.5779765844345093,
0.25163760781288147,
0.01715421862900257,
-0.008435046300292015,
0.5262534618377686,
-0.0003768367459997535,
-0.2199468... |
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | Hi @norabelrose ! I'm glad you managed to make this work on your side.
Regarding memory usage, you can try to drop the columns that you don't want to use for your `map` for now.
In the future we'll try to find a way to not load unnecessary columns in memory in `map`. Currently the way it works is that it gets the b... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 126 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.10902230441570282,
0.28565314412117004,
-0.017312370240688324,
-0.13604886829853058,
0.17431554198265076,
-0.08466331660747528,
0.27682358026504517,
0.5508018136024475,
0.2953396439552307,
-0.02807306870818138,
-0.04300348088145256,
0.4847538471221924,
-0.010495577938854694,
-0.21682904... |
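A small sketch of the memory workaround suggested above (drop the columns you don't need before calling `map`); the toy dataset and column names are assumptions, and it relies on `remove_columns` returning a new dataset:

```python
from datasets import Dataset

# Toy dataset standing in for the real one.
dset = Dataset.from_dict({
    "text": ["first article", "second article"],
    "title": ["a", "b"],
    "url": ["u1", "u2"],
})

# Keep only the column the map function actually needs, so less data is
# loaded per batch during the map.
slim = dset.remove_columns([c for c in dset.column_names if c != "text"])

lengths = slim.map(
    lambda batch: {"num_chars": [len(t) for t in batch["text"]]},
    batched=True,
)
```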
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | @lhoestq After looking through the source code, it looks like the following solution has at least some chance of working:
- refactor `Dataset.map()` so that the `input_columns` parameter is implemented by using the `self.formatted_as()` context manager with `columns=input_columns`
- change `Dataset._getitem()` so tha... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 62 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.12732212245464325,
0.3481042981147766,
-0.006475737318396568,
-0.15352921187877655,
0.14380286633968353,
-0.14673542976379395,
0.36344319581985474,
0.606902539730072,
0.1660744994878769,
-0.0057874987833201885,
-0.10351882874965668,
0.5548496842384338,
0.0025607093703001738,
-0.19553515... |
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | Looks like a great direction :)
Note that `query_table` doesn't bring data into memory. Only `format_table` does.
Also, the dataset may already have a format with `columns=` defined, so we would need to define the formatted `input_dataset` like:
```python
# before the `map` main for loop
input_columns = inpu... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 148 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.12052276730537415,
0.29167163372039795,
-0.006858001928776503,
-0.19557629525661469,
0.11129999905824661,
-0.10715050995349884,
0.2545417845249176,
0.5994861721992493,
0.20434807240962982,
-0.028510797768831253,
-0.03400079905986786,
0.4793541431427002,
0.021737689152359962,
-0.21317027... |
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | Good to know about `query_table` not bringing anything into memory. I was under the impression that it did because a while back I looked at my `map` operation in pdb and it looked like it was spending forever in line 93 of formatting.py, `return pa.concat_tables(....)`, although that was before the `fast_slice` interpo... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 181 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.10460545867681503,
0.32244428992271423,
-0.01583397574722767,
-0.18998320400714874,
0.09967653453350067,
-0.09264678508043289,
0.22558385133743286,
0.5739008784294128,
0.13459138572216034,
-0.005186064168810844,
-0.09151383489370346,
0.45793259143829346,
0.07698604464530945,
-0.23750734... |
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | `query_table` simply slices/concatenates parts of the table. The actual data inside the table is not brought in memory.
Also I'm more in favor of declaring `input_dataset = self.with_format(...)` since `formatted_as` may update the dataset fingerprint of `self`, which is not expected when someone runs `map`.
> It's... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 145 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.090509794652462,
0.2783810496330261,
-0.007305453065782785,
-0.17817318439483643,
0.14086709916591644,
-0.09612910449504852,
0.3184327781200409,
0.5706533789634705,
0.18682286143302917,
-0.0280761681497097,
-0.05864184722304344,
0.4846482276916504,
0.09406675398349762,
-0.26581510901451... |
https://github.com/huggingface/datasets/issues/2193 | Filtering/mapping on one column is very slow | That sounds good to me! I just submitted a PR (#2246) implementing your approach. I also changed how `_query_table` handles Iterable keys since it still seemed like `pa.concat_tables` was taking a long time to create the table for each batch. Now my whole `map()` operation takes 1 min 46 seconds where it used to take s... | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | 62 | Filtering/mapping on one column is very slow
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_toke... | [
-0.17371796071529388,
0.3303045630455017,
-0.015150217339396477,
-0.23526018857955933,
0.09672679007053375,
-0.1373567283153534,
0.24519705772399902,
0.6020464301109314,
0.12157492339611053,
-0.05612529069185257,
-0.07012146711349487,
0.5131319761276245,
0.07048051059246063,
-0.20480145514... |
https://github.com/huggingface/datasets/issues/2190 | News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs | Hi @anassalamah,
Could you please try with this:
```python
train_ds = load_dataset("news_commentary", lang1="ar", lang2="en", split='train[:98%]')
val_ds = load_dataset("news_commentary", lang1="ar", lang2="en", split='train[98%:]')
``` | I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.
```
train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]')
# filtering out examples that a... | 22 | News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs
I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.
```
train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_datas... | [
-0.16038630902767181,
0.004446377512067556,
-0.21704241633415222,
0.24928432703018188,
-0.053791046142578125,
0.0885385200381279,
0.3200056552886963,
0.22940011322498322,
-0.025390462949872017,
-0.23579920828342438,
-0.2461351901292801,
0.19127431511878967,
0.18397976458072662,
0.274913400... |
https://github.com/huggingface/datasets/issues/2190 | News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs | Hello @albertvillanova,
Thanks for the suggestion. I didn't know you could do that. However, it didn't resolve the issue.

| I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.
```
train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]')
# filtering out examples that a... | 20 | News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs
I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.
```
train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_datas... | [
-0.20709189772605896,
0.10822296887636185,
-0.17505010962486267,
0.32578739523887634,
-0.11737152189016342,
0.1063091829419136,
0.2994770109653473,
0.26404184103012085,
-0.04221085086464882,
-0.26377928256988525,
-0.28765690326690674,
0.23029889166355133,
0.22866931557655334,
0.18732734024... |
https://github.com/huggingface/datasets/issues/2189 | save_to_disk doesn't work when we use concatenate_datasets function before creating the final dataset_object. | Hi ! We refactored save_to_disk in #2025 so this doesn't happen.
Feel free to try it on master for now
We'll do a new release soon | As you can see, it saves the entire dataset.
@lhoestq
You can check by going through the following example,
```
from datasets import load_from_disk,concatenate_datasets
loaded_data=load_from_disk('/home/gsir059/HNSW-ori/my_knowledge_dataset')
n=20
kb_list=[loaded_data.shard(n, i, contiguous=True) for i... | 26 | save_to_disk doesn't work when we use concatenate_datasets function before creating the final dataset_object.
As you can see, it saves the entire dataset.
@lhoestq
You can check by going through the following example,
```
from datasets import load_from_disk,concatenate_datasets
loaded_data=load_from_di... | [
-0.25447580218315125,
0.036903031170368195,
-0.07479166239500046,
0.023532187566161156,
0.20227767527103424,
0.2077014297246933,
0.3147447407245636,
0.37458524107933044,
-0.07030641287565231,
0.2908634543418884,
-0.10103646665811539,
0.19494947791099548,
0.00232862145639956,
0.204632475972... |
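For context, a self-contained sketch of the shard / concatenate / save round-trip the issue is about; the paths, shard count, and toy data are placeholders, and the comment above notes this works correctly after the #2025 refactor:

```python
from datasets import Dataset, concatenate_datasets, load_from_disk

# Toy data; the real issue used a large knowledge dataset loaded with load_from_disk.
data = Dataset.from_dict({"text": ["doc %d" % i for i in range(100)]})

# Shard the dataset, then rebuild it with concatenate_datasets before saving.
n = 4
shards = [data.shard(num_shards=n, index=i, contiguous=True) for i in range(n)]
combined = concatenate_datasets(shards)

combined.save_to_disk("/tmp/my_knowledge_dataset")   # placeholder path
reloaded = load_from_disk("/tmp/my_knowledge_dataset")
assert len(reloaded) == len(data)
```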
https://github.com/huggingface/datasets/issues/2188 | Duplicate data in Timit dataset | Hi ! Thanks for reporting
If I recall correctly this has been recently fixed #1995
Can you try to upgrade your local version of `datasets` ?
```
pip install --upgrade datasets
``` | I ran a simple code to list all texts in Timit dataset and the texts were all the same.
Is this dataset corrupted?
**Code:**
timit = load_dataset("timit_asr")
print(*timit['train']['text'], sep='\n')
**Result:**
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of... | 32 | Duplicate data in Timit dataset
I ran a simple code to list all texts in Timit dataset and the texts were all the same.
Is this dataset corrupted?
**Code:**
timit = load_dataset("timit_asr")
print(*timit['train']['text'], sep='\n')
**Result:**
Would such an act of refusal be useful?
Would such an act of refusa... | [
-0.00855990219861269,
-0.31162741780281067,
-0.08459082990884781,
0.6408430337905884,
0.29728323221206665,
0.17986057698726654,
0.21272236108779907,
0.34836575388908386,
-0.3550945222377777,
0.06406489759683609,
-0.1848575323820114,
0.3607739806175232,
-0.03706881031394005,
0.1377493739128... |
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | An educated guess: does this refer to the fact that, depending on the custom column names in the dataset files (csv in this case), a dataset loader is created? And is this dataset loader - using the "custom data configuration" - used among all jobs running with these particular csv files? (thinking out loud he... | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 95 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
-0.0747397169470787,
0.03292403370141983,
-0.11125218123197556,
0.18403862416744232,
0.20810317993164062,
0.199586421251297,
0.34123748540878296,
0.0353645384311676,
0.3006588816642761,
-0.1421155035495758,
0.17081208527088165,
0.09101539105176926,
-0.23205524682998657,
0.0387958399951458,... |
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | Hi ! Currently disabling the caching means that all the dataset transform like `map`, `filter` etc. ignore the cache: it doesn't write nor read processed cache files.
However `load_dataset` reuses datasets that have already been prepared: it does reload prepared dataset files.
Indeed from the documentation:
> data... | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 202 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
-0.09970808774232864,
0.061342593282461166,
-0.09445241838693619,
0.1375739425420761,
0.28683575987815857,
0.1569393277168274,
0.29463276267051697,
0.0576290525496006,
0.21895618736743927,
-0.22742898762226105,
0.05633801594376564,
0.18915870785713196,
-0.15384486317634583,
-0.134917765855... |
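A sketch tying the two mechanisms above together, based on the `datasets.GenerateMode` behaviour documented in the docstring quoted further down; the CSV path is a placeholder:

```python
import datasets
from datasets import load_dataset, set_caching_enabled

# Disables the transform cache only: map/filter will neither read nor write cache files.
set_caching_enabled(False)

# load_dataset still reuses already-prepared Arrow files by default
# (REUSE_DATASET_IF_EXISTS); pass download_mode to force re-preparation.
# Note: in newer versions of `datasets` this enum is called DownloadMode.
ds = load_dataset(
    "csv",
    data_files={"train": "train.csv"},  # placeholder path
    download_mode=datasets.GenerateMode.FORCE_REDOWNLOAD,
)
```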
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | Thank you for the clarification.
This is a bit confusing. On one hand, it says that cache files are always recreated and written to a temporary directory that is removed; on the other hand the last bullet point makes me think that since the default according to the docs for `download_mode (Optional datasets.Generat... | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 112 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
0.023499730974435806,
-0.006731436122208834,
-0.10065873712301254,
0.22953364253044128,
0.21939003467559814,
0.16460666060447693,
0.1426832526922226,
-0.12215955555438995,
0.26811498403549194,
-0.22139713168144226,
0.14639055728912354,
0.04763802886009216,
-0.08117979764938354,
-0.09891978... |
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | It would be nice if the documentation elaborated on all the possible values for `download_mode` and/or a link to `datasets.GenerateMode`.
This info here:
```
"""`Enum` for how to treat pre-existing downloads and data.
The default mode is `REUSE_DATASET_IF_EXISTS`, which will reuse both
raw downloads a... | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 84 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
-0.13257518410682678,
0.004261156544089317,
-0.10032559186220169,
0.14501966536045074,
0.2611149251461029,
0.08856367319822311,
0.14978010952472687,
-0.005439947359263897,
0.052364934235811234,
-0.2209997922182083,
-0.010533762164413929,
-0.009321311488747597,
-0.07880279421806335,
0.01480... |
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | I have another question. Assuming that I understood correctly and there is reuse of datasets files when caching is disabled (!), I'm guessing there is a directory that is created based on some information on the dataset file. I'm interested in the situation where I'm loading a (custom) dataset from local disk. What inf... | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 139 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
0.03961418941617012,
0.17447881400585175,
-0.12205327302217484,
0.36889076232910156,
0.12289860099554062,
0.09724798053503036,
0.34789401292800903,
-0.043518614023923874,
0.17479446530342102,
-0.11833667755126953,
0.08155407756567001,
-0.07398992031812668,
-0.096710205078125,
-0.1997422575... |
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | Thanks for the feedback, we'll work on improving this aspect of the documentation.
> Where are these files stored? I guess not in the temporary directory that is removed...
We're using the Arrow file format to load datasets. Therefore each time you load a dataset, it is prepared as an arrow file on your disk. By ... | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 231 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
0.042652662843465805,
0.11284630000591278,
-0.07477432489395142,
0.3651134669780731,
0.18751360476016998,
0.16579142212867737,
0.3618486225605011,
-0.07639751583337784,
0.19209377467632294,
-0.14571736752986908,
0.12443500757217407,
0.09013627469539642,
-0.26943644881248474,
-0.12038256227... |
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | Thank you for all your clarifications, really helpful!
If you have the bandwidth, please do revisit the api wrt cache disabling. Anywhere in the computer stack (hardware included) where you disable the cache, one assumes there is no caching that happens. | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 41 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
-0.08065541088581085,
-0.01909247227013111,
-0.07845887541770935,
0.18294355273246765,
0.20736485719680786,
0.23291267454624176,
0.17175619304180145,
0.012628389522433281,
0.20395857095718384,
-0.16847355663776398,
0.10436806082725525,
0.06517931818962097,
-0.1479160636663437,
-0.048604663... |
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | I have another question about caching, this time in the case where FORCE_REDOWNLOAD is used to load the dataset, the datasets cache is one directory as defined by HF_HOME and there are multiple concurrent jobs running in a cluster using the same local dataset (i.e., same local files in the cluster). Does anything in th... | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 115 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
-0.19765882194042206,
0.09977450221776962,
-0.12614706158638,
0.2399425506591797,
0.02894391119480133,
0.04343106225132942,
0.4564795196056366,
0.0984458476305008,
0.4220702648162842,
-0.21270230412483215,
-0.02498313970863819,
-0.040010951459407806,
-0.03659181669354439,
0.013408511877059... |
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | I got another error that convinces me there is a race condition (one of the test files had zero samples at prediction time). I think it comes down to the fact that the `config_id` above (used in the naming for the cache) has no information on who's touching the data. If I have 2 concurrent jobs, both loading the same d... | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 74 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
-0.2517685890197754,
0.19136862456798553,
-0.12854312360286713,
0.22529469430446625,
-0.027027824893593788,
0.07782381027936935,
0.32869085669517517,
0.1005266085267067,
0.3222953677177429,
-0.165065735578537,
0.12396406382322311,
-0.12795709073543549,
-0.19243605434894562,
-0.014436149038... |
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | We're using a locking mechanism to prevent two processes from writing at the same time. The locking is based on the `filelock` module.
Also directories that are being written use a suffix ".incomplete" so that reading is not possible on a dataset being written.
Do you think you could provide a simple code to reprod... | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 61 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
-0.19675281643867493,
0.11761070042848587,
-0.08972480148077011,
0.35525575280189514,
0.1577596217393875,
0.1552673876285553,
0.1879119575023651,
0.16086383163928986,
0.31022462248802185,
-0.18814903497695923,
0.1910191923379898,
0.017973188310861588,
-0.2734490633010864,
0.026845009997487... |
https://github.com/huggingface/datasets/issues/2187 | Question (potential issue?) related to datasets caching | I can provide details about the code I'm running (it's really-really close to some official samples from the huggingface transformers examples, I can point to the exact sample file, I kept a record of that). I can also describe in which conditions this race occurs (I'm convinced it has to do with forcing the redownload... | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | 291 | Question (potential issue?) related to datasets caching
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the followi... | [
-0.2184484899044037,
-0.09565647691488266,
-0.01374092511832714,
0.32557323575019836,
0.14435279369354248,
0.040122583508491516,
0.21020619571208954,
-0.0006144361686892807,
0.2891821265220642,
-0.23899391293525696,
-0.017255382612347603,
-0.12228798866271973,
-0.21927659213542938,
0.06673... |
https://github.com/huggingface/datasets/issues/2185 | .map() and distributed training | Hi, one workaround would be to save the mapped (tokenized, in your case) file using `save_to_disk`, and have each process load this file using `load_from_disk`. This is what I am doing, and in this case, I turn off the ability to automatically load from the cache.
Also, multiprocessing the map function seems to be s... | Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokeni... | 62 | .map() and distributed training
Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_pa... | [
-0.253761887550354,
-0.22071681916713715,
0.1177765429019928,
0.07457482814788818,
0.21161654591560364,
-0.10975951701402664,
0.5170725584030151,
0.010205402038991451,
0.28015685081481934,
0.023763200268149376,
0.18224747478961945,
0.34344911575317383,
-0.22906416654586792,
-0.585544884204... |
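A sketch of the `save_to_disk` / `load_from_disk` workaround described above for distributed training; the paths are placeholders, the tokenize function is a stand-in, and it assumes the process group has already been initialized by the training launcher:

```python
import torch.distributed as dist
from datasets import load_from_disk

RAW_PATH = "/path/to/my_custom_dataset"            # placeholder
TOKENIZED_PATH = "/path/to/my_custom_dataset_tok"  # placeholder

def tokenize_fn(batch):
    # Stand-in for a real tokenizer call.
    return {"input_ids": [[ord(c) for c in text[:16]] for text in batch["text"]]}

# Only one process does the (expensive) map and writes the result to disk.
if dist.get_rank() == 0:
    ds = load_from_disk(RAW_PATH)
    ds = ds.map(tokenize_fn, batched=True)
    ds.save_to_disk(TOKENIZED_PATH)

dist.barrier()  # the other ranks wait for rank 0 to finish writing
tokenized = load_from_disk(TOKENIZED_PATH)
```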
https://github.com/huggingface/datasets/issues/2185 | .map() and distributed training | Thanks @hwijeen for the workaround, feels a bit prototypical but it works! (it seems files are written twice then though)
(I haven't observed slowness using multiprocessed map function but I could be wrong) | Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokeni... | 33 | .map() and distributed training
Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_pa... | [
-0.253761887550354,
-0.22071681916713715,
0.1177765429019928,
0.07457482814788818,
0.21161654591560364,
-0.10975951701402664,
0.5170725584030151,
0.010205402038991451,
0.28015685081481934,
0.023763200268149376,
0.18224747478961945,
0.34344911575317383,
-0.22906416654586792,
-0.585544884204... |
https://github.com/huggingface/datasets/issues/2185 | .map() and distributed training | To my understanding, files are written twice anyhow (one after load_dataset, another after map). It's just that you now have it at a location where you can see it, whereas it was secretly saved in the caching folder (.cache/huggingface/datasets by default)! Correct me if I'm wrong! | Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokeni... | 42 | .map() and distributed training
Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_pa... | [
-0.253761887550354,
-0.22071681916713715,
0.1177765429019928,
0.07457482814788818,
0.21161654591560364,
-0.10975951701402664,
0.5170725584030151,
0.010205402038991451,
0.28015685081481934,
0.023763200268149376,
0.18224747478961945,
0.34344911575317383,
-0.22906416654586792,
-0.585544884204... |
https://github.com/huggingface/datasets/issues/2185 | .map() and distributed training | So to answer my initial question, I was just doing something stupid: I was not passing the `preprocessing_num_workers` argument again when launching the distributed training (so it was set to `None`). I initially thought the hash was computed only from the `tokenize_function`, but it uses all the arguments. Thanks @lhoest... | Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokeni... | 51 | .map() and distributed training
Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_pa... | [
-0.253761887550354,
-0.22071681916713715,
0.1177765429019928,
0.07457482814788818,
0.21161654591560364,
-0.10975951701402664,
0.5170725584030151,
0.010205402038991451,
0.28015685081481934,
0.023763200268149376,
0.18224747478961945,
0.34344911575317383,
-0.22906416654586792,
-0.585544884204... |
https://github.com/huggingface/datasets/issues/2181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | Hi ! Can you try to increase the block size ? For example
```python
block_size_10MB = 10<<20
load_dataset("json", ..., block_size=block_size_10MB)
```
The block size corresponds to how many bytes to process at a time from the input stream.
This will determine multi-threading granularity as well as the size of ind... | Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-pack... | 64 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as ... | [
0.016556626185774803,
0.012686754576861858,
-0.007510800380259752,
0.34093838930130005,
0.16045328974723816,
-0.2647140324115753,
0.19363261759281158,
0.5859464406967163,
-0.1621605008840561,
-0.0815066248178482,
0.16547328233718872,
0.14416268467903137,
-0.015083249658346176,
0.0145620461... |
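Filling in the elided arguments with placeholders, the suggestion above could look like this for a large JSON Lines file (the data file path is an assumption):

```python
from datasets import load_dataset

block_size_10MB = 10 << 20  # number of bytes pyarrow reads per chunk

# The data file path is a placeholder for the large JSON Lines file from the issue.
ds = load_dataset(
    "json",
    data_files={"train": "/path/to/huge_file.jsonl"},
    block_size=block_size_10MB,
)
```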
https://github.com/huggingface/datasets/issues/2181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | Hi @lhoestq! Thank you for your prompt reply.
I have experimented with (10<<20, 10<<28, 10<<30, 10<<33, 10<<34), since my machine has 192G of memory, but it's either the above-mentioned error or the process is killed because of OOM.
Could you give me a bit of background on why block size needs to be exactly calibrated?
... | Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-pack... | 66 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as ... | [
0.016556626185774803,
0.012686754576861858,
-0.007510800380259752,
0.34093838930130005,
0.16045328974723816,
-0.2647140324115753,
0.19363261759281158,
0.5859464406967163,
-0.1621605008840561,
-0.0815066248178482,
0.16547328233718872,
0.14416268467903137,
-0.015083249658346176,
0.0145620461... |
https://github.com/huggingface/datasets/issues/2181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | We're using the JSON loader of pyarrow. It parses the file chunk by chunk to load the dataset.
This issue happens when there's no delimiter in one chunk of data. For json line, the delimiter is the end of line.
So with a big value for chunk_size this should have worked unless you have one extremely long line in your ... | Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-pack... | 95 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as ... | [
0.016556626185774803,
0.012686754576861858,
-0.007510800380259752,
0.34093838930130005,
0.16045328974723816,
-0.2647140324115753,
0.19363261759281158,
0.5859464406967163,
-0.1621605008840561,
-0.0815066248178482,
0.16547328233718872,
0.14416268467903137,
-0.015083249658346176,
0.0145620461... |
https://github.com/huggingface/datasets/issues/2181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | I'm using `pyarrow==3.0.0` with `datasets==1.5.0`.
Your point totally makes sense. I will check if my jsonl file contains an extremely long line and let you know.
Here are some different error messages that I got when tweaking `block_size`. I also suspect that this is related to the pyarrow... but I guess it wou... | Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-pack... | 137 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as ... | [
0.016556626185774803,
0.012686754576861858,
-0.007510800380259752,
0.34093838930130005,
0.16045328974723816,
-0.2647140324115753,
0.19363261759281158,
0.5859464406967163,
-0.1621605008840561,
-0.0815066248178482,
0.16547328233718872,
0.14416268467903137,
-0.015083249658346176,
0.0145620461... |
https://github.com/huggingface/datasets/issues/2181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | I am getting the same error. When I tweak the block_size, I also find:
`OverflowError: value too large to convert to int32_t`
and
`pyarrow.lib.ArrowInvalid: Exceeded maximum rows`
| Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-pack... | 27 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as ... | [
0.016556626185774803,
0.012686754576861858,
-0.007510800380259752,
0.34093838930130005,
0.16045328974723816,
-0.2647140324115753,
0.19363261759281158,
0.5859464406967163,
-0.1621605008840561,
-0.0815066248178482,
0.16547328233718872,
0.14416268467903137,
-0.015083249658346176,
0.0145620461... |
https://github.com/huggingface/datasets/issues/2181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | I ran more tests. I used a smaller dataset and I was getting the same error, which means that it was not necessarily linked to the dataset size. To make both my smaller and larger datasets work, I got rid of lists in the json file. I had the following data format:
```python
[
{'key': "a", 'value': ['one', 'two'... | Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-pack... | 120 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as ... | [
0.016556626185774803,
0.012686754576861858,
-0.007510800380259752,
0.34093838930130005,
0.16045328974723816,
-0.2647140324115753,
0.19363261759281158,
0.5859464406967163,
-0.1621605008840561,
-0.0815066248178482,
0.16547328233718872,
0.14416268467903137,
-0.015083249658346176,
0.0145620461... |
https://github.com/huggingface/datasets/issues/2181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | Thanks @hwijeen for reporting and thanks @jpilaul for pointing this out.
Indeed, those are different JSON-like formats:
- the first one is the **standard JSON** format: all the file content is JSON-valid, thus all content is either a JSON object (between curly brackets `{...}`) or a JSON array (between square brack... | Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-pack... | 104 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as ... | [
0.016556626185774803,
0.012686754576861858,
-0.007510800380259752,
0.34093838930130005,
0.16045328974723816,
-0.2647140324115753,
0.19363261759281158,
0.5859464406967163,
-0.1621605008840561,
-0.0815066248178482,
0.16547328233718872,
0.14416268467903137,
-0.015083249658346176,
0.0145620461... |
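As a concrete illustration of the two formats discussed above, a small sketch (not from the thread) that converts a standard-JSON array file into JSON Lines, so every record ends with a newline the reader can use as a delimiter; this is only viable for small files, and a streaming parser would be needed at the 500GB scale:

```python
import json

with open("data.json") as f_in, open("data.jsonl", "w") as f_out:
    # json.load reads the whole array into memory, which is fine for small files only.
    for record in json.load(f_in):
        f_out.write(json.dumps(record) + "\n")
```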
https://github.com/huggingface/datasets/issues/2181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | Thanks @albertvillanova for your explanation, it is helpful to know (maybe add to docs?)!
However, the problem I described above happened when I was dealing with jsonl files 😿
Although I did not thoroughly inspect, I suspect the cause was the one extremely long document in my case. | Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-pack... | 48 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as ... | [
0.016556626185774803,
0.012686754576861858,
-0.007510800380259752,
0.34093838930130005,
0.16045328974723816,
-0.2647140324115753,
0.19363261759281158,
0.5859464406967163,
-0.1621605008840561,
-0.0815066248178482,
0.16547328233718872,
0.14416268467903137,
-0.015083249658346176,
0.0145620461... |
https://github.com/huggingface/datasets/issues/2176 | Converting a Value to a ClassLabel | Hi @nelson-liu!
Here is what I do to convert a string to class label:
```python
from datasets import load_dataset, features
dset = load_dataset(...)
col_name = "the string column name"
class_names = dset.unique(col_name)
class_feature = features.ClassLabel(names=sorted(class_names))
dset = dset.map(lam... | Hi!
In the docs for `cast`, it's noted that `For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the Dataset.`
Would it be possible to have an example that demonstrates such a string <-> ClassLabel conversion using `map`? Thanks! | 50 | Converting a Value to a ClassLabel
Hi!
In the docs for `cast`, it's noted that `For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the Dataset.`
Would it be possible to have an example that demonstrates such a string <-> ClassLabel conversion using `map`? Thanks!
Hi @nelson... | [
-0.025565078482031822,
-0.1936045140028,
0.05003351718187332,
0.059382569044828415,
0.6273640394210815,
0.21152478456497192,
0.2829762101173401,
0.08987747132778168,
0.07599488645792007,
-0.05686754360795021,
0.10290462523698807,
0.6814892292022705,
-0.033919621258974075,
0.115833103656768... |
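A self-contained sketch of the string-to-ClassLabel pattern outlined above (the snippet in the dump is truncated); the toy data and column names are assumptions:

```python
from datasets import ClassLabel, Dataset

dset = Dataset.from_dict({"text": ["a", "b", "c"], "label": ["pos", "neg", "pos"]})

col_name = "label"
class_feature = ClassLabel(names=sorted(dset.unique(col_name)))

# Replace each string with its integer id, then cast the column's feature type.
dset = dset.map(
    lambda batch: {col_name: class_feature.str2int(batch[col_name])},
    batched=True,
)
new_features = dset.features.copy()
new_features[col_name] = class_feature
dset = dset.cast(new_features)
```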
https://github.com/huggingface/datasets/issues/2175 | dataset.search_batch() function outputs all -1 indices sometime. | Actually, I found the answer [here](https://github.com/facebookresearch/faiss/wiki/FAQ#what-does-it-mean-when-a-search-returns--1-ids).
So we have to do some modifications to the code for instances where the index doesn't retrieve any IDs. | I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_... | 25 | dataset.search_batch() function outputs all -1 indices sometime.
I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingf... | [
0.011297260411083698,
-0.37210583686828613,
-0.10199861973524094,
0.019268691539764404,
0.20218749344348907,
-0.09668779373168945,
0.2803097665309906,
0.29684391617774963,
0.13914161920547485,
0.4209131896495819,
-0.2324288785457611,
-0.2434539943933487,
0.1058330088853836,
-0.033897802233... |
https://github.com/huggingface/datasets/issues/2175 | dataset.search_batch() function outputs all -1 indices sometime. | @lhoestq @patrickvonplaten
I also found another short bug in the retrieval part. Especially, when retrieving documents. If Faiss returns the -1 as the index, the retriever will always use the last element in the dataset.
please check [def get_doc_dicts function](https://github.com/huggingface/transformers/blo... | I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_... | 52 | dataset.search_batch() function outputs all -1 indices sometime.
I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingf... | [
0.015852317214012146,
-0.2806917130947113,
-0.06329286843538284,
0.0849648043513298,
0.08535473793745041,
-0.10384609550237656,
0.23690800368785858,
0.30231061577796936,
0.10523372888565063,
0.3437609374523163,
-0.25177329778671265,
-0.21534369885921478,
0.16499073803424835,
-0.26178383827... |
https://github.com/huggingface/datasets/issues/2175 | dataset.search_batch() function outputs all -1 indices sometime. | Hi !
No it happens sometimes to return -1, especially if your dataset is small.
If your dataset is big enough it shouldn't happen in my experience.
Ideally we should ignore all the -1 that are returned. It should be possible to change that in RAG's code | I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_... | 47 | dataset.search_batch() function outputs all -1 indices sometime.
I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingf... | [
-0.019722802564501762,
-0.4322829842567444,
-0.08124140650033951,
0.047830481082201004,
0.21979759633541107,
-0.09081786125898361,
0.3115733563899994,
0.2803554832935333,
0.13101249933242798,
0.4431152939796448,
-0.2505156993865967,
-0.15310947597026825,
0.09101846814155579,
-0.07913532853... |
https://github.com/huggingface/datasets/issues/2175 | dataset.search_batch() function outputs all -1 indices sometime. | I also checked with some indexes it returns more -1s. Specially with IVF
when nprobe is very low. It doesn't happen when using HNSW though. But at
the moment if it happens, the dataset will always return the last element.
Maybe we should change it to repeat the last valid retrieved doc id.
What do you think?
On Wed, ... | I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_... | 150 | dataset.search_batch() function outputs all -1 indices sometime.
I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingf... | [
-0.03521956503391266,
-0.37993696331977844,
-0.047816477715969086,
0.04257233813405037,
0.15346087515354156,
-0.08229730278253555,
0.29746106266975403,
0.3594943881034851,
0.15454955399036407,
0.40005606412887573,
-0.2719639241695404,
-0.13553020358085632,
0.18018461763858795,
-0.116247713... |
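As a rough, self-contained sketch of the workaround discussed in this thread (replacing `-1` ids before they are used to index the dataset), assuming `faiss` is installed; the toy data, the default index, and the "repeat the last valid id" policy are only the heuristic proposed above, not the actual RAG code.

```python
import numpy as np
from datasets import Dataset

rng = np.random.default_rng(0)
ds = Dataset.from_dict({
    "text": [f"doc {i}" for i in range(100)],
    "embeddings": rng.normal(size=(100, 32)).astype("float32").tolist(),
})
ds.add_faiss_index(column="embeddings")

queries = rng.normal(size=(4, 32)).astype("float32")
scores, ids = ds.search_batch("embeddings", queries, k=5)
ids = np.asarray(ids)

# faiss may return -1 when fewer than k neighbours are found (e.g. IVF with a low nprobe);
# instead of silently hitting the last row, repeat the last valid retrieved doc id
for row in ids:
    valid = row[row != -1]
    if valid.size:
        row[row == -1] = valid[-1]

retrieved = [ds[[int(i) for i in row]] for row in ids]
```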
https://github.com/huggingface/datasets/issues/2175 | dataset.search_batch() function outputs all -1 indices sometime. | That would be an easy way to workaround this issue. Feel free to open a PR on `transformers` and ping me ! :) | I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_... | 23 | dataset.search_batch() function outputs all -1 indices sometime.
I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingf... | [
-0.02550831437110901,
-0.4342889189720154,
-0.06778895109891891,
0.07370910048484802,
0.2557210624217987,
-0.08688480406999588,
0.368497759103775,
0.2831262946128845,
0.07728293538093567,
0.4528711140155792,
-0.22231830656528473,
-0.19843102991580963,
0.04483788087964058,
-0.06366024911403... |
https://github.com/huggingface/datasets/issues/2175 | dataset.search_batch() function outputs all -1 indices sometime. | Sure. Will push everything together with RAG end to end. :) thanks a lot.
On Wed, Apr 7, 2021, 21:16 Quentin Lhoest ***@***.***> wrote:
> That would be an easy way to workaround this issue. Feel free to open a PR
> on transformers and ping me ! :)
>
> —
> You are receiving this because you authored the thread.
> Rep... | I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_... | 82 | dataset.search_batch() function outputs all -1 indices sometime.
I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingf... | [
-0.02831314317882061,
-0.44597774744033813,
-0.06211835518479347,
0.1084141954779625,
0.292106032371521,
-0.07177858799695969,
0.35497137904167175,
0.30725163221359253,
0.09947563707828522,
0.4373633563518524,
-0.28836923837661743,
-0.1778712421655655,
0.047587428241968155,
-0.040546830743... |
https://github.com/huggingface/datasets/issues/2170 | Wikipedia historic dumps are deleted but hf/datasets hardcodes dump date | It seems that this can be fixed from user's end by including a `date` argument, like this:
`dataset = datasets.load_dataset('wikipedia', '20200501.en', date='20210420')`
You can get available dates from [here](https://dumps.wikimedia.org/enwiki/).
This is not a proper fix however as all the files will still ha... | Wikimedia does not keep all historical dumps. For example, as of today https://dumps.wikimedia.org/kowiki/ only provides
```
20201220/ 02-Feb-2021 01:36 -
20210101/ 21-Feb-2021 01:26 -
20210120/ ... | 48 | Wikipedia historic dumps are deleted but hf/datasets hardcodes dump date
Wikimedia does not keep all historical dumps. For example, as of today https://dumps.wikimedia.org/kowiki/ only provides
```
20201220/ 02-Feb-2021 01:36 -
20210101/ ... | [
-0.04804723709821701,
0.3399818241596222,
-0.022192353382706642,
0.031580086797475815,
-0.3193787932395935,
0.1565127670764923,
0.30782821774482727,
0.5392642617225647,
0.17642217874526978,
0.10108543932437897,
-0.07535654306411743,
0.07239773869514465,
0.2024517059326172,
-0.2449930608272... |
https://github.com/huggingface/datasets/issues/2166 | Regarding Test Sets for the GEM datasets | Hi @vyraun ! The test references for CommonGen are not publicly available: you can reach out to the original dataset authors if you would like to ask for them, but we will not be releasing them as part of GEM (March 31st was the release date for the test set inputs, references are incidentally released for some of the ... | @yjernite Hi, are the test sets for the GEM datasets scheduled to be [added soon](https://gem-benchmark.com/shared_task)?
e.g.
```
from datasets import load_dataset
DATASET_NAME="common_gen"
data = load_dataset("gem", DATASET_NAME)
```
The test set doesn't have the target or references.
```
data['test... | 71 | Regarding Test Sets for the GEM datasets
@yjernite Hi, are the test sets for the GEM datasets scheduled to be [added soon](https://gem-benchmark.com/shared_task)?
e.g.
```
from datasets import load_dataset
DATASET_NAME="common_gen"
data = load_dataset("gem", DATASET_NAME)
```
The test set doesn't have t... | [
-0.31991317868232727,
-0.09612433612346649,
-0.197315976023674,
0.1844353824853897,
-0.09196000546216965,
0.11846629530191422,
0.28373202681541443,
0.3940262496471405,
-0.10528639703989029,
-0.016125405207276344,
0.1892586052417755,
0.26997146010398865,
-0.3344813287258148,
0.1294025033712... |
https://github.com/huggingface/datasets/issues/2165 | How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset | Hi,
a HF dataset can be converted to a Torch Dataset with a simple wrapper as follows:
```python
from torch.utils.data import Dataset
class HFDataset(Dataset):
def __init__(self, dset):
self.dset = dset
def __getitem__(self, idx):
return self.dset[idx]
def __len__(self):
... | Hi,
I'm trying to pretraine deep-speed model using HF arxiv dataset like:
```
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
engine, _, _, _ = deepspeed.initialize(
... | 124 | How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset
Hi,
I'm trying to pretraine deep-speed model using HF arxiv dataset like:
```
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_a... | [
-0.2578475773334503,
-0.26522037386894226,
0.09229052066802979,
0.33822378516197205,
0.19224417209625244,
0.2227148562669754,
-0.030272992327809334,
0.34570184350013733,
-0.1513582020998001,
-0.18866297602653503,
-0.34864741563796997,
0.32489919662475586,
-0.19426944851875305,
-0.142053470... |
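For reference, a complete version of the thin wrapper quoted above (the `__len__` body, cut off in this dump, is the obvious `len(self.dset)`); it assumes `torch` is installed, and `imdb` is only an illustrative dataset.

```python
from datasets import load_dataset
from torch.utils.data import DataLoader, Dataset


class HFDataset(Dataset):
    """Thin wrapper turning a datasets.Dataset into a torch.utils.data.Dataset."""

    def __init__(self, dset):
        self.dset = dset

    def __getitem__(self, idx):
        return self.dset[idx]

    def __len__(self):
        return len(self.dset)


hf_ds = load_dataset("imdb", split="train")
hf_ds.set_format(type="torch", columns=["label"])  # return tensors for the listed columns
loader = DataLoader(HFDataset(hf_ds), batch_size=8)
batch = next(iter(loader))
```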
https://github.com/huggingface/datasets/issues/2165 | How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset | Interesting ! Thanks for sharing this @mariosasko . I like the idea
This looks like something we should add IMO | Hi,
I'm trying to pretraine deep-speed model using HF arxiv dataset like:
```
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
engine, _, _, _ = deepspeed.initialize(
... | 20 | How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset
Hi,
I'm trying to pretraine deep-speed model using HF arxiv dataset like:
```
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_a... | [
-0.2704311013221741,
-0.3552311956882477,
0.05783390626311302,
0.3274025321006775,
0.1308024376630783,
0.2287476509809494,
-0.07467088848352432,
0.3752877712249756,
-0.15198221802711487,
-0.24214109778404236,
-0.38851723074913025,
0.3905748724937439,
-0.24195609986782074,
0.015906864777207... |
https://github.com/huggingface/datasets/issues/2165 | How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset | @mariosasko
Thx for your code!
It works perfectly with a small modification for the HF NLP dataset:
```
original_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds = HFDataset(original_ds['train']) # needs splitting
``` | Hi,
I'm trying to pretraine deep-speed model using HF arxiv dataset like:
```
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
engine, _, _, _ = deepspeed.initialize(
... | 28 | How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset
Hi,
I'm trying to pretraine deep-speed model using HF arxiv dataset like:
```
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_a... | [
-0.22635585069656372,
-0.3335733115673065,
0.07031462341547012,
0.32715824246406555,
0.1331760138273239,
0.1971442848443985,
-0.06429531425237656,
0.37336546182632446,
-0.13911882042884827,
-0.2662571370601654,
-0.39848268032073975,
0.38498398661613464,
-0.17944997549057007,
-0.00958076305... |
https://github.com/huggingface/datasets/issues/2165 | How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset | @lhoestq Sadly, from Python 3.7 onwards `torch.utils.data.Dataset` doesn't support the virtual subclass mechanism due to `typing.Generic` type no longer having `abc.ABCMeta` as its metaclass.
With that in mind, another option is to remove a direct type check (`isinstance(dataset, torch.utils.data.Dataset)`) in `deep... | Hi,
I'm trying to pretraine deep-speed model using HF arxiv dataset like:
```
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
engine, _, _, _ = deepspeed.initialize(
... | 108 | How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset
Hi,
I'm trying to pretraine deep-speed model using HF arxiv dataset like:
```
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_a... | [
-0.19205740094184875,
-0.270402729511261,
0.09888160973787308,
0.3232179284095764,
0.23827889561653137,
0.1883176565170288,
-0.045660071074962616,
0.36336857080459595,
-0.15260277688503265,
-0.22917580604553223,
-0.2917279899120331,
0.3924196660518646,
-0.25033605098724365,
-0.153774961829... |
https://github.com/huggingface/datasets/issues/2165 | How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset | That makes sense ! Feel free to open an issue on their repo and discuss this idea | Hi,
I'm trying to pretraine deep-speed model using HF arxiv dataset like:
```
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
engine, _, _, _ = deepspeed.initialize(
... | 17 | How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset
Hi,
I'm trying to pretraine deep-speed model using HF arxiv dataset like:
```
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_a... | [
-0.26200178265571594,
-0.30597761273384094,
0.04827328398823738,
0.3386891484260559,
0.11748916655778885,
0.22666697204113007,
-0.09718702733516693,
0.3648381233215332,
-0.1662658154964447,
-0.2185191810131073,
-0.34978699684143066,
0.39448294043540955,
-0.2521584928035736,
-0.001438890350... |
https://github.com/huggingface/datasets/issues/2165 | How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset | @y-rokutan Hi, now if you install `deepspeed` from master (this feature will be available in the next official release), the code should work without subclassing. Let us know if you still have any issues. | Hi,
I'm trying to pretraine deep-speed model using HF arxiv dataset like:
```
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
engine, _, _, _ = deepspeed.initialize(
... | 34 | How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset
Hi,
I'm trying to pretraine deep-speed model using HF arxiv dataset like:
```
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_a... | [
-0.2635490596294403,
-0.37623122334480286,
0.0858726054430008,
0.3691845238208771,
0.18167510628700256,
0.3435540795326233,
-0.05182429775595665,
0.4010277986526489,
-0.16504302620887756,
-0.2584497630596161,
-0.35081416368484497,
0.32045677304267883,
-0.24456410109996796,
0.01427294407039... |
https://github.com/huggingface/datasets/issues/2165 | How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset | Worth mentioning that any function that expects a `torch..Dataset` (like `torch..DataLoader`) will fail a mypy-esque typecheck if a `datasets.Dataset` is passed, even though it implements the interface correctly (I think). The virtual subclass idea was a good one- I wonder if there's another workaround given the Generi... | Hi,
I'm trying to pretraine deep-speed model using HF arxiv dataset like:
```
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
engine, _, _, _ = deepspeed.initialize(
... | 96 | How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset
Hi,
I'm trying to pretraine deep-speed model using HF arxiv dataset like:
```
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_a... | [
-0.20782184600830078,
-0.23314884305000305,
0.13846804201602936,
0.27257949113845825,
0.26682648062705994,
0.15376226603984833,
-0.0031081351917237043,
0.32406821846961975,
-0.07489686459302902,
-0.2203991711139679,
-0.3576209247112274,
0.39065954089164734,
-0.26360416412353516,
0.10520820... |
https://github.com/huggingface/datasets/issues/2162 | visualization for cc100 is broken | This looks like an issue with the cc100 dataset itself but not sure
Did you try loading cc100 on your machine ? | Hi
visualization through dataset viewer for cc100 is broken
https://huggingface.co/datasets/viewer/
thanks a lot
| 22 | visualization for cc100 is broken
Hi
visualization through dataset viewer for cc100 is broken
https://huggingface.co/datasets/viewer/
thanks a lot
This looks like an issue with the cc100 dataset itself but not sure
Did you try loading cc100 on your machine ? | [
-0.5656794905662537,
-0.20044372975826263,
-0.08503558486700058,
0.12931989133358002,
0.2095072865486145,
-0.0013308462221175432,
0.16415360569953918,
0.13681326806545258,
-0.06592384725809097,
0.42319008708000183,
-0.05236514285206795,
0.23769977688789368,
0.12892696261405945,
0.426997274... |
https://github.com/huggingface/datasets/issues/2162 | visualization for cc100 is broken | Hi
loading works fine, but the viewer only is broken
thanks
On Wed, Apr 7, 2021 at 12:17 PM Quentin Lhoest ***@***.***>
wrote:
> This looks like an issue with the cc100 dataset itself but not sure
> Did you try loading cc100 on your machine ?
>
> —
> You are receiving this because you authored the thread.
> Reply to ... | Hi
visualization through dataset viewer for cc100 is broken
https://huggingface.co/datasets/viewer/
thanks a lot
| 80 | visualization for cc100 is broken
Hi
visualization through dataset viewer for cc100 is broken
https://huggingface.co/datasets/viewer/
thanks a lot
Hi
loading works fine, but the viewer only is broken
thanks
On Wed, Apr 7, 2021 at 12:17 PM Quentin Lhoest ***@***.***>
wrote:
> This looks like an issue with ... | [
-0.48923343420028687,
-0.2571876347064972,
-0.015440049581229687,
0.16780713200569153,
0.22792312502861023,
0.03674132376909256,
0.14369532465934753,
0.08045840263366699,
-0.038478169590234756,
0.3180072605609894,
-0.14256501197814941,
0.2456284761428833,
0.14194174110889435,
0.45386964082... |
https://github.com/huggingface/datasets/issues/2161 | any possibility to download part of large datasets only? | oh, great, really awesome feature to have, thank you very much for the great, fabulous work | Hi
Some of the datasets I need like cc100 are very large, and then I wonder if I can download first X samples of the shuffled/unshuffled data without going through first downloading the whole data then sampling? thanks | 16 | any possibility to download part of large datasets only?
Hi
Some of the datasets I need like cc100 are very large, and then I wonder if I can download first X samples of the shuffled/unshuffled data without going through first downloading the whole data then sampling? thanks
oh, great, really awesome feature to ha... | [
-0.45692262053489685,
-0.4417872428894043,
-0.18960921466350555,
0.1484076976776123,
0.1038997620344162,
0.18816092610359192,
-0.2670144736766815,
0.36616963148117065,
0.06326068937778473,
0.4351857900619507,
-0.44700953364372253,
-0.12800273299217224,
-0.10398080945014954,
0.3638651669025... |
https://github.com/huggingface/datasets/issues/2161 | any possibility to download part of large datasets only? | We'll work on dataset streaming soon. This should allow you to only load the examples you need ;) | Hi
Some of the datasets I need like cc100 are very large, and then I wonder if I can download first X samples of the shuffled/unshuffled data without going through first downloading the whole data then sampling? thanks | 18 | any possibility to download part of large datasets only?
Hi
Some of the datasets I need like cc100 are very large, and then I wonder if I can download first X samples of the shuffled/unshuffled data without going through first downloading the whole data then sampling? thanks
We'll work on dataset streaming soon. T... | [
-0.4160081446170807,
-0.4227994978427887,
-0.09121213853359222,
0.22108875215053558,
0.11789201945066452,
0.20861409604549408,
-0.21589204668998718,
0.3746377229690552,
0.10753163695335388,
0.3137334883213043,
-0.2778741419315338,
-0.22648458182811737,
-0.09712027758359909,
0.3623844683170... |
https://github.com/huggingface/datasets/issues/2161 | any possibility to download part of large datasets only? | thanks a lot Quentin, this would be really really a great feature to have
On Wed, Apr 7, 2021 at 12:14 PM Quentin Lhoest ***@***.***>
wrote:
> We'll work on dataset streaming soon. This should allow you to only load
> the examples you need ;)
>
> —
> You are receiving this because you authored the thread.
> Reply to ... | Hi
Some of the datasets I need like cc100 are very large, and then I wonder if I can download first X samples of the shuffled/unshuffled data without going through first downloading the whole data then sampling? thanks | 79 | any possibility to download part of large datasets only?
Hi
Some of the datasets I need like cc100 are very large, and then I wonder if I can download first X samples of the shuffled/unshuffled data without going through first downloading the whole data then sampling? thanks
thanks a lot Quentin, this would be rea... | [
-0.4333568513393402,
-0.45348721742630005,
-0.04873242601752281,
0.26677635312080383,
0.10812929272651672,
0.24318015575408936,
-0.16233552992343903,
0.4673762321472168,
0.07347514480352402,
0.3738211393356323,
-0.36424607038497925,
-0.20177604258060455,
-0.09591159224510193,
0.45779165625... |
https://github.com/huggingface/datasets/issues/2161 | any possibility to download part of large datasets only? | Is streaming completed? On the 1.8.0 docs it is mentioned (https://huggingface.co/docs/datasets/dataset_streaming.html), but when following the example I get the following error:
```
>>> dataset2 = load_dataset("amazon_us_reviews", "Pet_Products_v1_00", split='train', streaming=True)
----------------------------... | Hi
Some of the datasets I need like cc100 are very large, and then I wonder if I can download first X samples of the shuffled/unshuffled data without going through first downloading the whole data then sampling? thanks | 123 | any possibility to download part of large datasets only?
Hi
Some of the datasets I need like cc100 are very large, and then I wonder if I can download first X samples of the shuffled/unshuffled data without going through first downloading the whole data then sampling? thanks
Is streaming completed? On the 1.8.0 do... | [
-0.44760918617248535,
-0.3885071277618408,
0.009524649009108543,
0.37012380361557007,
0.10674423724412918,
0.19297006726264954,
-0.03403511643409729,
0.5148776173591614,
0.08198701590299606,
0.23246873915195465,
-0.4534533619880676,
-0.19906330108642578,
-0.17841672897338867,
0.40002283453... |
https://github.com/huggingface/datasets/issues/2161 | any possibility to download part of large datasets only? | Hi ! Streaming is available on `master` only right now. We'll make a new release 1.9.0 on Monday :) | Hi
Some of the datasets I need like cc100 are very large, and then I wonder if I can download first X samples of the shuffled/unshuffled data without going through first downloading the whole data then sampling? thanks | 19 | any possibility to download part of large datasets only?
Hi
Some of the datasets I need like cc100 are very large, and then I wonder if I can download first X samples of the shuffled/unshuffled data without going through first downloading the whole data then sampling? thanks
Hi ! Streaming is available on `master`... | [
-0.4722626805305481,
-0.425556480884552,
-0.11318700015544891,
0.13441991806030273,
0.08804664015769958,
0.17386476695537567,
-0.35428884625434875,
0.3757151663303375,
-0.05025842785835266,
0.32865607738494873,
-0.412924200296402,
-0.22793155908584595,
-0.14858122169971466,
0.3793705105781... |
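A small sketch of the streaming mode announced above, assuming `datasets>=1.9`; the `oscar` dataset and its config name are only an illustrative example and are not tied to this thread.

```python
from itertools import islice

from datasets import load_dataset

# streaming=True iterates over the remote files instead of downloading the full dataset first
streamed = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)

for example in islice(streamed, 3):
    print(example["text"][:80])
```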
https://github.com/huggingface/datasets/issues/2160 | data_args.preprocessing_num_workers almost freezes | Hi.
I cannot always reproduce this issue, and on later runs I have not seen it so far. Also, sometimes I set 8 processes but fewer are shown; is this normal? Here only 5 are shown for 8 being set, thanks
```
#3: 11%|███████████████▊ ... | Hi @lhoestq
I am running this code from huggingface transformers https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py
to speed up tokenization, since I am running on multiple datasets, I am using data_args.preprocessing_num_workers = 4 with opus100 corpus but this moves ... | 71 | data_args.preprocessing_num_workers almost freezes
Hi @lhoestq
I am running this code from huggingface transformers https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py
to speed up tokenization, since I am running on multiple datasets, I am using data_args.preprocessi... | [
-0.2514106035232544,
-0.24791917204856873,
-0.16687627136707306,
0.06956710666418076,
0.12973304092884064,
-0.1807381957769394,
0.4334705173969269,
0.0990600511431694,
-0.35381627082824707,
0.2440420240163803,
0.06387791782617569,
0.23994523286819458,
0.09411625564098358,
-0.13301853835582... |
https://github.com/huggingface/datasets/issues/2158 | viewer "fake_news_english" error | Thanks for reporting !
The viewer doesn't have all the dependencies of the datasets. We may add openpyxl to be able to show this dataset properly | When I visit the [Huggingface - viewer](https://huggingface.co/datasets/viewer/) web site, under the dataset "fake_news_english" I've got this error:
> ImportError: To be able to use this dataset, you need to install the following dependencies['openpyxl'] using 'pip install # noqa: requires this pandas optional depe... | 26 | viewer "fake_news_english" error
When I visit the [Huggingface - viewer](https://huggingface.co/datasets/viewer/) web site, under the dataset "fake_news_english" I've got this error:
> ImportError: To be able to use this dataset, you need to install the following dependencies['openpyxl'] using 'pip install # noqa:... | [
-0.14934365451335907,
-0.18128103017807007,
0.03364337980747223,
0.34163251519203186,
0.23430880904197693,
0.28634315729141235,
-0.01579497568309307,
0.22440718114376068,
0.1654278039932251,
0.0734669417142868,
-0.22256994247436523,
-0.12595099210739136,
-0.03064192458987236,
0.32860922813... |
https://github.com/huggingface/datasets/issues/2153 | load_dataset ignoring features | Nice question, which helped me a lot! I wasted a lot of time on creating a `DatasetDict` from a csv file. I hope the documentation of this module adds some simple examples. | First of all, I'm sorry if it is a repeated issue or the changes are already in master, I searched and I didn't find anything.
I'm using datasets 1.5.0

As you can see, when I load the dataset, the C... | 32 | load_dataset ignoring features
First of all, I'm sorry if it is a repeated issue or the changes are already in master, I searched and I didn't find anything.
I'm using datasets 1.5.0

As you can se... | [
-0.08561431616544724,
-0.03049202635884285,
0.01294018980115652,
0.2842150628566742,
0.42568227648735046,
0.2531650960445404,
0.6406397819519043,
-0.05394746735692024,
0.2234637588262558,
0.05332408472895622,
0.15431979298591614,
0.33632534742355347,
-0.11442297697067261,
0.459934294223785... |
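Since the thread above is about `features` being ignored when loading CSV files, here is a minimal sketch of the intended usage on a fixed version; the file names are placeholders and the CSV is assumed to store the label as an integer id.

```python
from datasets import ClassLabel, Features, Value, load_dataset

features = Features({
    "text": Value("string"),
    "label": ClassLabel(names=["neg", "pos"]),  # CSV stores 0/1, the feature adds the names
})

# builds a DatasetDict with one split per entry in data_files
dataset = load_dataset(
    "csv",
    data_files={"train": "train.csv", "validation": "valid.csv"},
    features=features,
)
print(dataset["train"].features)
```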
https://github.com/huggingface/datasets/issues/2148 | Add configurable options to `seqeval` metric | Hi @marrodion.
Thanks for pointing this out. It would be great to incorporate this metric-specific enhancement.
Another possibility would be to require the user to input the scheme as a string `mode="strict", scheme="IOB2"` and then dynamically import the corresponding module using Python `importlib`:
```python... | Right now `load_metric("seqeval")` only works in the default mode of evaluation (equivalent to conll evaluation).
However, seqeval library [supports](https://github.com/chakki-works/seqeval#support-features) different evaluation schemes (IOB1, IOB2, etc.), which can be plugged in just by supporting additional kwargs... | 61 | Add configurable options to `seqeval` metric
Right now `load_metric("seqeval")` only works in the default mode of evaluation (equivalent to conll evaluation).
However, seqeval library [supports](https://github.com/chakki-works/seqeval#support-features) different evaluation schemes (IOB1, IOB2, etc.), which can be ... | [
-0.44041454792022705,
0.1902090609073639,
-0.08451559394598007,
-0.1625579297542572,
0.0749473050236702,
-0.1645829677581787,
0.1490214765071869,
0.24983245134353638,
-0.0836084708571434,
0.3530941605567932,
-0.46700891852378845,
0.24865522980690002,
-0.03232021629810333,
0.269384890794754... |
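The `importlib` idea sketched in the comment above could look roughly like this; it is only an illustration of the dynamic scheme lookup, not the actual metric script, and it assumes the `seqeval` package (with its `seqeval.scheme` module) is installed.

```python
import importlib

from seqeval.metrics import classification_report


def seqeval_report(predictions, references, scheme=None, mode=None):
    scheme_cls = None
    if scheme is not None:
        # resolve e.g. "IOB2" -> seqeval.scheme.IOB2 at runtime
        scheme_cls = getattr(importlib.import_module("seqeval.scheme"), scheme)
    return classification_report(
        y_true=references,
        y_pred=predictions,
        scheme=scheme_cls,
        mode=mode,
        output_dict=True,
    )


report = seqeval_report(
    predictions=[["B-PER", "I-PER", "O"]],
    references=[["B-PER", "I-PER", "O"]],
    scheme="IOB2",
    mode="strict",
)
```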
https://github.com/huggingface/datasets/issues/2146 | Dataset file size on disk is very large with 3D Array | Hi ! In the arrow file we store all the integers as uint8.
So your arrow file should weigh around `height x width x n_channels x n_images` bytes.
What feature type does your TFDS dataset have?
If it uses a `tfds.features.Image` type, then what is stored is the encoded data (as png or jpg for example). Since these... | Hi,
I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8.
The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`.
`{
"description": "",
"citation": ""... | 114 | Dataset file size on disk is very large with 3D Array
Hi,
I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8.
The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.j... | [
-0.14527352154254913,
-0.11326315999031067,
-0.1544179469347,
0.42015567421913147,
0.2135636806488037,
0.11391289532184601,
0.5066813230514526,
0.2730786204338074,
0.020523080602288246,
0.035847555845975876,
-0.1937045007944107,
0.08741509914398193,
-0.1694582849740982,
0.31504395604133606... |
https://github.com/huggingface/datasets/issues/2146 | Dataset file size on disk is very large with 3D Array | Thanks for the prompt response. You're right about the encoding, I have the `tfds.features.Image` feature type you mentioned.
However, as described in the `dataset_info.json`, my dataset is made of 1479 (224x224x3) images. 1479 x 224 x 224 x 3 = 222630912 bytes which is far from the actual size 520803408 bytes.
An... | Hi,
I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8.
The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`.
`{
"description": "",
"citation": ""... | 62 | Dataset file size on disk is very large with 3D Array
Hi,
I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8.
The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.j... | [
-0.14527352154254913,
-0.11326315999031067,
-0.1544179469347,
0.42015567421913147,
0.2135636806488037,
0.11391289532184601,
0.5066813230514526,
0.2730786204338074,
0.020523080602288246,
0.035847555845975876,
-0.1937045007944107,
0.08741509914398193,
-0.1694582849740982,
0.31504395604133606... |
https://github.com/huggingface/datasets/issues/2146 | Dataset file size on disk is very large with 3D Array | @lhoestq I changed the data structure so I have a 2D Array feature type instead of a 3D Array by grouping the two last dimensions ( a 224x672 2D Array instead of a 224x224x3 3D Array). The file size is now 223973964 bytes, nearly half the previous size! Which is around of what I would expect.
I found similar behavio... | Hi,
I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8.
The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`.
`{
"description": "",
"citation": ""... | 77 | Dataset file size on disk is very large with 3D Array
Hi,
I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8.
The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.j... | [
-0.14527352154254913,
-0.11326315999031067,
-0.1544179469347,
0.42015567421913147,
0.2135636806488037,
0.11391289532184601,
0.5066813230514526,
0.2730786204338074,
0.020523080602288246,
0.035847555845975876,
-0.1937045007944107,
0.08741509914398193,
-0.1694582849740982,
0.31504395604133606... |
https://github.com/huggingface/datasets/issues/2146 | Dataset file size on disk is very large with 3D Array | Interesting !
This may be because of the offsets that are stored with the array data.
Currently the offsets are stored even if the `shape` of the arrays is fixed. This was needed because of some issues with pyarrow a few months ago. I think these issues have been addressed now, so we can probably try to remove them... | Hi,
I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8.
The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`.
`{
"description": "",
"citation": ""... | 80 | Dataset file size on disk is very large with 3D Array
Hi,
I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8.
The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.j... | [
-0.14527352154254913,
-0.11326315999031067,
-0.1544179469347,
0.42015567421913147,
0.2135636806488037,
0.11391289532184601,
0.5066813230514526,
0.2730786204338074,
0.020523080602288246,
0.035847555845975876,
-0.1937045007944107,
0.08741509914398193,
-0.1694582849740982,
0.31504395604133606... |
https://github.com/huggingface/datasets/issues/2146 | Dataset file size on disk is very large with 3D Array | Yeah for sure, can you be a bit more specific about where the offset is stored in the code base ? And any reference to pyarrow issues if you have some. I would be very interested in contributing to `datasets` by trying to fix this issue. | Hi,
I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8.
The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`.
`{
"description": "",
"citation": ""... | 46 | Dataset file size on disk is very large with 3D Array
Hi,
I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8.
The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.j... | [
-0.14527352154254913,
-0.11326315999031067,
-0.1544179469347,
0.42015567421913147,
0.2135636806488037,
0.11391289532184601,
0.5066813230514526,
0.2730786204338074,
0.020523080602288246,
0.035847555845975876,
-0.1937045007944107,
0.08741509914398193,
-0.1694582849740982,
0.31504395604133606... |
https://github.com/huggingface/datasets/issues/2146 | Dataset file size on disk is very large with 3D Array | Pyarrow has two types of lists: variable length lists and fixed size lists.
Currently we store the ArrayXD data as variable length lists. They take more disk space because they must store both actual data and offsets.
In the `datasets` code this is done here:
https://github.com/huggingface/nlp/blob/dbac87c8a083f80... | Hi,
I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8.
The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`.
`{
"description": "",
"citation": ""... | 75 | Dataset file size on disk is very large with 3D Array
Hi,
I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8.
The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.j... | [
-0.14527352154254913,
-0.11326315999031067,
-0.1544179469347,
0.42015567421913147,
0.2135636806488037,
0.11391289532184601,
0.5066813230514526,
0.2730786204338074,
0.020523080602288246,
0.035847555845975876,
-0.1937045007944107,
0.08741509914398193,
-0.1694582849740982,
0.31504395604133606... |
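To make the offsets point above concrete, a small pyarrow sketch comparing the variable-length list layout (one int32 offset per inner list on top of the values) with the fixed-size list layout; the 224x224x3 shape mirrors the images discussed in this thread.

```python
import numpy as np
import pyarrow as pa

pixels = pa.array(np.zeros(224 * 224 * 3, dtype=np.uint8))

# variable-length layout: an int32 offset for every 3-element "pixel" list
offsets = pa.array(np.arange(0, len(pixels) + 1, 3, dtype=np.int32))
variable = pa.ListArray.from_arrays(offsets, pixels)

# fixed-size layout: no offsets buffer at all
fixed = pa.FixedSizeListArray.from_arrays(pixels, 3)

print(variable.nbytes, fixed.nbytes)  # the offsets more than double the footprint here
```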
https://github.com/huggingface/datasets/issues/2144 | Loading wikipedia 20200501.en throws pyarrow related error | That's how I loaded the dataset
```python
from datasets import load_dataset
ds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache')
``` | **Problem description**
I am getting the following error when trying to load wikipedia/20200501.en dataset.
**Error log**
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikiped... | 17 | Loading wikipedia 20200501.en throws pyarrow related error
**Problem description**
I am getting the following error when trying to load wikipedia/20200501.en dataset.
**Error log**
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, to... | [
-0.10415403544902802,
0.3526463508605957,
0.03405788168311119,
0.363310307264328,
0.2745778262615204,
0.18020787835121155,
0.2762240171432495,
0.46232542395591736,
-0.04348812252283096,
-0.11378400772809982,
0.02473749965429306,
0.23602178692817688,
0.15206794440746307,
-0.1178018674254417... |
https://github.com/huggingface/datasets/issues/2144 | Loading wikipedia 20200501.en throws pyarrow related error | Hi ! It looks like the arrow file in the folder
`/usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931` is corrupted.
Can you take a look and check that it's 18.3GB ?
If not, then maybe you need to redownload it:
```python
from datasets ... | **Problem description**
I am getting the following error when trying to load wikipedia/20200501.en dataset.
**Error log**
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikiped... | 46 | Loading wikipedia 20200501.en throws pyarrow related error
**Problem description**
I am getting the following error when trying to load wikipedia/20200501.en dataset.
**Error log**
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, to... | [
-0.10415403544902802,
0.3526463508605957,
0.03405788168311119,
0.363310307264328,
0.2745778262615204,
0.18020787835121155,
0.2762240171432495,
0.46232542395591736,
-0.04348812252283096,
-0.11378400772809982,
0.02473749965429306,
0.23602178692817688,
0.15206794440746307,
-0.1178018674254417... |
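The redownload snippet in the comment above is cut off in this dump; a sketch of what it presumably looks like is below, assuming a 1.x-era `datasets` that accepts the string form of `download_mode`. The cache directory is the one quoted in the report and is of course machine-specific.

```python
from datasets import load_dataset

ds = load_dataset(
    "wikipedia",
    "20200501.en",
    cache_dir="/usr/local/workspace/NAS_NLP/cache",
    download_mode="force_redownload",  # discard the possibly corrupted cached arrow file
)
```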
https://github.com/huggingface/datasets/issues/2144 | Loading wikipedia 20200501.en throws pyarrow related error | > Hi ! It looks like the arrow file in the folder
> `/usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931` is corrupted.
>
> Can you take a look and check that it's 18.3GB ?
>
> If not, then maybe you need to redownload it:
>
> ```pyth... | **Problem description**
I am getting the following error when trying to load wikipedia/20200501.en dataset.
**Error log**
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikiped... | 113 | Loading wikipedia 20200501.en throws pyarrow related error
**Problem description**
I am getting the following error when trying to load wikipedia/20200501.en dataset.
**Error log**
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, to... | [
-0.10415403544902802,
0.3526463508605957,
0.03405788168311119,
0.363310307264328,
0.2745778262615204,
0.18020787835121155,
0.2762240171432495,
0.46232542395591736,
-0.04348812252283096,
-0.11378400772809982,
0.02473749965429306,
0.23602178692817688,
0.15206794440746307,
-0.1178018674254417... |
https://github.com/huggingface/datasets/issues/2144 | Loading wikipedia 20200501.en throws pyarrow related error | I just tried on my side and got no issues.
When downloading the dataset again, did it crash at 10.7GB as well ? | **Problem description**
I am getting the following error when trying to load wikipedia/20200501.en dataset.
**Error log**
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikiped... | 23 | Loading wikipedia 20200501.en throws pyarrow related error
**Problem description**
I am getting the following error when trying to load wikipedia/20200501.en dataset.
**Error log**
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, to... | [
-0.10415403544902802,
0.3526463508605957,
0.03405788168311119,
0.363310307264328,
0.2745778262615204,
0.18020787835121155,
0.2762240171432495,
0.46232542395591736,
-0.04348812252283096,
-0.11378400772809982,
0.02473749965429306,
0.23602178692817688,
0.15206794440746307,
-0.1178018674254417... |
https://github.com/huggingface/datasets/issues/2144 | Loading wikipedia 20200501.en throws pyarrow related error | > I just tried on my side and got no issues.
> When downloading the dataset again, did it crash at 10.7GB as well ?
Yes i have tried it multiple times on different machines. I am wondering if you could share the screenshot of your dependency versions and i will try to make them the same as yours? | **Problem description**
I am getting the following error when trying to load wikipedia/20200501.en dataset.
**Error log**
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikiped... | 59 | Loading wikipedia 20200501.en throws pyarrow related error
**Problem description**
I am getting the following error when trying to load wikipedia/20200501.en dataset.
**Error log**
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, to... | [
-0.10415403544902802,
0.3526463508605957,
0.03405788168311119,
0.363310307264328,
0.2745778262615204,
0.18020787835121155,
0.2762240171432495,
0.46232542395591736,
-0.04348812252283096,
-0.11378400772809982,
0.02473749965429306,
0.23602178692817688,
0.15206794440746307,
-0.1178018674254417... |
https://github.com/huggingface/datasets/issues/2144 | Loading wikipedia 20200501.en throws pyarrow related error | I tried using `datasets` from `master` on macos with python 3.7.2
I also have `requests==2.23.0` and `tqdm==4.45.0`. | **Problem description**
I am getting the following error when trying to load wikipedia/20200501.en dataset.
**Error log**
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikiped... | 17 | Loading wikipedia 20200501.en throws pyarrow related error
**Problem description**
I am getting the following error when trying to load wikipedia/20200501.en dataset.
**Error log**
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, to... | [
-0.10415403544902802,
0.3526463508605957,
0.03405788168311119,
0.363310307264328,
0.2745778262615204,
0.18020787835121155,
0.2762240171432495,
0.46232542395591736,
-0.04348812252283096,
-0.11378400772809982,
0.02473749965429306,
0.23602178692817688,
0.15206794440746307,
-0.1178018674254417... |
https://github.com/huggingface/datasets/issues/2139 | TypeError when using save_to_disk in a dataset loaded with ReadInstruction split | Hi !
I think this has been fixed recently on `master`.
Can you try again by installing `datasets` from `master` ?
```
pip install git+https://github.com/huggingface/datasets.git
``` | Hi,
Loading a dataset with `load_dataset` using a split defined via `ReadInstruction` and then saving it to disk results in the following error: `TypeError: Object of type ReadInstruction is not JSON serializable`.
Here is the minimal reproducible example:
```python
from datasets import load_dataset
from dat... | 26 | TypeError when using save_to_disk in a dataset loaded with ReadInstruction split
Hi,
Loading a dataset with `load_dataset` using a split defined via `ReadInstruction` and then saving it to disk results in the following error: `TypeError: Object of type ReadInstruction is not JSON serializable`.
Here is the mini... | [
-0.13170675933361053,
0.21055981516838074,
0.04380515217781067,
0.3900924026966095,
0.3082846999168396,
0.2542075216770172,
0.4115677773952484,
0.21830563247203827,
0.15596672892570496,
0.09270302951335907,
-0.18100301921367645,
0.3821820914745331,
-0.18318785727024078,
0.46920621395111084... |
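For context, the kind of usage that triggers the error above looks roughly like this (the issue body is truncated in this dump, so the dataset name is illustrative); with the fix on `master` the `save_to_disk` call no longer raises.

```python
from datasets import ReadInstruction, load_dataset

# load only the first half of the train split via a ReadInstruction
ri = ReadInstruction("train", to=50, unit="%")
ds = load_dataset("imdb", split=ri)

# raised "TypeError: Object of type ReadInstruction is not JSON serializable" on 1.5.0
ds.save_to_disk("imdb_train_first_half")
```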
https://github.com/huggingface/datasets/issues/2135 | en language data from MLQA dataset is missing | Hi ! Indeed only the languages of the `translate-train` data are included...
I can't find a link to download the english train set on https://github.com/facebookresearch/MLQA though, do you know where we can download it ? | Hi
I need mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. could you have a look please? @lhoestq thank you for your help to fix this issue. | 35 | en language data from MLQA dataset is missing
Hi
I need mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. could you have a look please? @lhoestq thank you for your help to fix this issue.
Hi ! Indeed only the languages of the `translate-train` data are included...
I can't find a link to ... | [
-0.03446245566010475,
0.20468363165855408,
-0.22718720138072968,
0.23080536723136902,
0.0578235499560833,
0.2997928559780121,
-0.03953339532017708,
-0.046116262674331665,
-0.14981397986412048,
0.12646563351154327,
0.1958925575017929,
-0.14981429278850555,
0.06739091873168945,
0.48139491677... |
https://github.com/huggingface/datasets/issues/2135 | en language data from MLQA dataset is missing | Hi @lhoestq
thank you very much for coming back to me, now I see, you are right, in the link you sent I see split of {split}-context-{context_language}-question-{question_language}.json with context_language=question_language=en, TFDS most probably has extracted english ones from these files as en language files, bu... | Hi
I need mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. could you have a look please? @lhoestq thank you for your help to fix this issue. | 57 | en language data from MLQA dataset is missing
Hi
I need mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. could you have a look please? @lhoestq thank you for your help to fix this issue.
Hi @lhoestq
thank you very much for coming back to me, now I see, you are right, in the link you se... | [
-0.04081933572888374,
-0.03622107952833176,
-0.17737673223018646,
0.3228135108947754,
0.13230359554290771,
0.2861717939376831,
0.09528352320194244,
0.10732787847518921,
-0.23436090350151062,
0.11953610926866531,
0.09481558948755264,
0.05139292776584625,
0.16803930699825287,
0.5239029526710... |
https://github.com/huggingface/datasets/issues/2135 | en language data from MLQA dataset is missing | I close the ticket, since I do not see any en existing, they have trained on "SQuAD V1.1" instead. Thanks. | Hi
I need mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. could you have a look please? @lhoestq thank you for your help to fix this issue. | 20 | en language data from MLQA dataset is missing
Hi
I need mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. could you have a look please? @lhoestq thank you for your help to fix this issue.
I close the ticket, since I do not see any en existing, they have trained on "SQuAD V1.1" instead. Th... | [
-0.06140502914786339,
-0.05503939092159271,
-0.18548381328582764,
0.08395896852016449,
0.14346171915531158,
0.2676542401313782,
0.18588891625404358,
-0.041590869426727295,
-0.18070118129253387,
0.14829450845718384,
0.22839893400669098,
0.19727569818496704,
0.17762528359889984,
0.3383446633... |
https://github.com/huggingface/datasets/issues/2134 | Saving large in-memory datasets with save_to_disk crashes because of pickling | Hi !
Indeed `save_to_disk` doesn't call pickle anymore. Though the `OverflowError` can still appear for in-memory datasets bigger than 4GB. This happens when doing this for example:
```python
import pyarrow as pa
import pickle
arr = pa.array([0] * ((4 * 8 << 30) // 64))
table = pa.Table.from_arrays([arr], names=[...
Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively faster when done in memory, and I have the ability to requisition a lot of RAM, so... | 134 | Saving large in-memory datasets with save_to_disk crashes because of pickling
Using Datasets 1.5.0 on Python 3.7.
Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively fa... | [
-0.31688857078552246,
0.09959537535905838,
0.11977995187044144,
0.37088823318481445,
0.25212299823760986,
0.0021296427585184574,
-0.2834981381893158,
0.4639686048030853,
0.1905568689107895,
0.09646537899971008,
0.08088338375091553,
0.5146647691726685,
-0.3989093601703644,
0.290598809719085... |
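The reproduction quoted above is truncated; a self-contained sketch of the same idea is below. It materialises a table of roughly 4 GiB, so it needs a machine with enough RAM, and the `protocol=4` note is a plain Python fact: pickle protocols below 4 cannot serialise byte buffers larger than 4 GiB.

```python
import pickle

import pyarrow as pa

# ~4 GiB of int64 zeros, mirroring the comment above
arr = pa.array([0] * ((4 * 8 << 30) // 64))
table = pa.Table.from_arrays([arr], names=["values"])

try:
    pickle.dumps(table)  # OverflowError when the effective pickle protocol is < 4
except OverflowError as err:
    print("small protocol failed:", err)

data = pickle.dumps(table, protocol=4)  # protocol 4 handles objects larger than 4 GiB
```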
https://github.com/huggingface/datasets/issues/2134 | Saving large in-memory datasets with save_to_disk crashes because of pickling | Hi!
So I've managed to create a minimum working (well, technically crashing) example for the multiprocessing case: I create a huge list of zeros, like in your example, and then I try to .map(None, num_proc=2) over it, which then crashes. Here's the code:
```python
from datasets import Dataset
if __name__ == '_... | Using Datasets 1.5.0 on Python 3.7.
Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively faster when done in memory, and I have the ability to requisition a lot of RAM, so... | 832 | Saving large in-memory datasets with save_to_disk crashes because of pickling
Using Datasets 1.5.0 on Python 3.7.
Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively fa... | [
-0.31688857078552246,
0.09959537535905838,
0.11977995187044144,
0.37088823318481445,
0.25212299823760986,
0.0021296427585184574,
-0.2834981381893158,
0.4639686048030853,
0.1905568689107895,
0.09646537899971008,
0.08088338375091553,
0.5146647691726685,
-0.3989093601703644,
0.290598809719085... |
https://github.com/huggingface/datasets/issues/2134 | Saving large in-memory datasets with save_to_disk crashes because of pickling | I just merged a fix #2150 that allows to pickle tables bigger than 4GiB
Feel free to try it on the `master` branch ! | Using Datasets 1.5.0 on Python 3.7.
Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively faster when done in memory, and I have the ability to requisition a lot of RAM, so... | 24 | Saving large in-memory datasets with save_to_disk crashes because of pickling
Using Datasets 1.5.0 on Python 3.7.
Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively fa... | [
-0.31688857078552246,
0.09959537535905838,
0.11977995187044144,
0.37088823318481445,
0.25212299823760986,
0.0021296427585184574,
-0.2834981381893158,
0.4639686048030853,
0.1905568689107895,
0.09646537899971008,
0.08088338375091553,
0.5146647691726685,
-0.3989093601703644,
0.290598809719085... |
https://github.com/huggingface/datasets/issues/2134 | Saving large in-memory datasets with save_to_disk crashes because of pickling | awesome! I started getting this error as well when I tried to tokenize with a longer sequence length | Using Datasets 1.5.0 on Python 3.7.
Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively faster when done in memory, and I have the ability to requisition a lot of RAM, so... | 18 | Saving large in-memory datasets with save_to_disk crashes because of pickling
Using Datasets 1.5.0 on Python 3.7.
Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively fa... | [
-0.31688857078552246,
0.09959537535905838,
0.11977995187044144,
0.37088823318481445,
0.25212299823760986,
0.0021296427585184574,
-0.2834981381893158,
0.4639686048030853,
0.1905568689107895,
0.09646537899971008,
0.08088338375091553,
0.5146647691726685,
-0.3989093601703644,
0.290598809719085... |
https://github.com/huggingface/datasets/issues/2134 | Saving large in-memory datasets with save_to_disk crashes because of pickling | @prokopCerny does this fix work for you? I found that with the latest master, my container with 500GB RAM starts crashing when I try to map a large dataset using `num_proc`.
@lhoestq would it be possible to implement some logic to keep the individual cache files small (say below 100mb)? I find this helps with loadin... | Using Datasets 1.5.0 on Python 3.7.
Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively faster when done in memory, and I have the ability to requisition a lot of RAM, so... | 84 | Saving large in-memory datasets with save_to_disk crashes because of pickling
Using Datasets 1.5.0 on Python 3.7.
Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively fa... | [
-0.31688857078552246,
0.09959537535905838,
0.11977995187044144,
0.37088823318481445,
0.25212299823760986,
0.0021296427585184574,
-0.2834981381893158,
0.4639686048030853,
0.1905568689107895,
0.09646537899971008,
0.08088338375091553,
0.5146647691726685,
-0.3989093601703644,
0.290598809719085... |
https://github.com/huggingface/datasets/issues/2134 | Saving large in-memory datasets with save_to_disk crashes because of pickling | Closing since the original issue was fixed in #2150
Feel free to reopen if you are still experiencing it.
For the other problems, please open separate issues | Using Datasets 1.5.0 on Python 3.7.
Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively faster when done in memory, and I have the ability to requisition a lot of RAM, so... | 27 | Saving large in-memory datasets with save_to_disk crashes because of pickling
Using Datasets 1.5.0 on Python 3.7.
Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively fa... | [
-0.31688857078552246, 0.09959537535905838, 0.11977995187044144, 0.37088823318481445, 0.25212299823760986, 0.0021296427585184574, -0.2834981381893158, 0.4639686048030853, 0.1905568689107895, 0.09646537899971008, 0.08088338375091553, 0.5146647691726685, -0.3989093601703644, 0.290598809719085...
https://github.com/huggingface/datasets/issues/2133 | bug in mlqa dataset | If you print those questions, you get readable texts:
```python
>>> questions = [
... "\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?",
... | Hi
Looking into the MLQA dataset for language "ar":
```
"question": [
"\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?",
"\u0643\u0... | 111 | bug in mlqa dataset
Hi
Looking into the MLQA dataset for language "ar":
```
"question": [
"\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u... | [
-0.11585288494825363, -0.23156796395778656, -0.273404061794281, 0.21121206879615784, 0.332148939371109, 0.07119476795196533, 0.35246872901916504, 0.12133828550577164, -0.32277193665504456, 0.22545383870601654, 0.06188729405403137, 0.28176429867744446, 0.25186237692832947, 0.256121039390563...
https://github.com/huggingface/datasets/issues/2133 | bug in mlqa dataset | Hi @dorost1234.
In Python 3, strings are sequences of Unicode _code points_. Unicode is a specification that maps every character (and emoji symbol) to its own unique representation in terms of code points. That is what you see: Unicode code points (represented by a \u escaped sequence of 16-bit hex values).
Charac... | Hi
Looking into the MLQA dataset for language "ar":
```
"question": [
"\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?",
"\u0643\u0... | 121 | bug in mlqa dataset
Hi
Looking into the MLQA dataset for language "ar":
```
"question": [
"\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u... | [
-0.11585288494825363, -0.23156796395778656, -0.273404061794281, 0.21121206879615784, 0.332148939371109, 0.07119476795196533, 0.35246872901916504, 0.12133828550577164, -0.32277193665504456, 0.22545383870601654, 0.06188729405403137, 0.28176429867744446, 0.25186237692832947, 0.256121039390563...
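As a quick sanity check that the data itself is fine, one can print one of the escaped strings. The snippet below is a small illustration; the question string is copied from the example above, and everything else is just for demonstration.

```python
# The \u escapes shown in the dump are only Python's escaped representation of
# Unicode code points; printing the string shows the readable Arabic question.
question = (
    "\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 "
    "\u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 "
    "\u0628\u0627\u0644\u0646\u0634\u0631?"
)

print(question)                  # متى بدات المجلة المدرسية في نوتردام بالنشر?
print(question.encode("utf-8"))  # the same text as a UTF-8 byte string
```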
https://github.com/huggingface/datasets/issues/2132 | TydiQA dataset is mixed and is not split per language | You can filter the languages this way:
```python
tydiqa_en = tydiqa_dataset.filter(lambda x: x["language"] == "english")
```
Otherwise maybe we can have one configuration per language ?
What do you think of this for example ?
```python
load_dataset("tydiqa", "primary_task.en")
``` | Hi @lhoestq
Currently TydiQA is mixed and users can only access the whole training set of all languages:
https://www.tensorflow.org/datasets/catalog/tydi_qa
for using this dataset, one needs to train/evaluate in each separate language, and having them mixed makes it hard to use this dataset. This is much convenien... | 39 | TydiQA dataset is mixed and is not split per language
Hi @lhoestq
Currently TydiQA is mixed and users can only access the whole training set of all languages:
https://www.tensorflow.org/datasets/catalog/tydi_qa
for using this dataset, one needs to train/evaluate in each separate language, and having them mixed, ... | [
-0.2592063248157501, -0.23528869450092316, -0.204411581158638, 0.2665458023548126, 0.277157187461853, -0.02582903578877449, 0.34125909209251404, 0.3332071304321289, -0.15149815380573273, 0.07768146693706512, -0.3343155086040497, -0.04627176374197006, -0.016753319650888443, 0.42141956090927...
https://github.com/huggingface/datasets/issues/2132 | TydiQA dataset is mixed and is not split per language | Hi
thank you very much for the great response, it would be really wonderful
to have one configuration per language, as one needs the dataset per language
in the majority of cases for cross-lingual evaluations.
This would also then be closer to the TFDS format, which is separated per
language https://www.tensorflow.org/datase... | Hi @lhoestq
Currently TydiQA is mixed and users can only access the whole training set of all languages:
https://www.tensorflow.org/datasets/catalog/tydi_qa
for using this dataset, one needs to train/evaluate in each separate language, and having them mixed makes it hard to use this dataset. This is much convenien... | 145 | TydiQA dataset is mixed and is not split per language
Hi @lhoestq
Currently TydiQA is mixed and users can only access the whole training set of all languages:
https://www.tensorflow.org/datasets/catalog/tydi_qa
for using this dataset, one needs to train/evaluate in each separate language, and having them mixed, ... | [
-0.3552142083644867, -0.23230449855327606, -0.18655994534492493, 0.24706265330314636, 0.305154949426651, -0.08206144720315933, 0.3825843334197998, 0.33620837330818176, -0.1914513260126114, 0.13157756626605988, -0.4605717957019806, -0.06283892691135406, 0.06377006322145462, 0.45318838953971...
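Until a per-language configuration exists, one possible workaround is to build the per-language subsets yourself with `Dataset.filter`. The sketch below follows the `language` column used in the comment above; the config name `primary_task` and everything else are illustrative assumptions.

```python
# Hypothetical sketch: split the mixed TydiQA training set into one subset per language.
from datasets import load_dataset

tydiqa = load_dataset("tydiqa", "primary_task")

languages = sorted(set(tydiqa["train"]["language"]))  # e.g. "arabic", "english", ...
per_language = {
    lang: tydiqa["train"].filter(lambda x, lang=lang: x["language"] == lang)
    for lang in languages
}

print({lang: len(ds) for lang, ds in per_language.items()})
```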
https://github.com/huggingface/datasets/issues/2131 | When training with Multi-Node Multi-GPU the worker 2 has TypeError: 'NoneType' object | Hi ! Thanks for reporting
I was able to reproduce this issue. This was caused by missing split infos if a worker reloads the cache of the other worker.
I just opened https://github.com/huggingface/datasets/pull/2137 to fix this issue | version: 1.5.0
I hit a very strange error: I am training a large-scale language model and need to train on 2 machines (workers).
And sometimes I will get this error `TypeError: 'NoneType' object is not iterable`
This is traceback
```
71 Traceback (most recent call last):
72 File "run_gpt.py"... | 37 | When training with Multi-Node Multi-GPU the worker 2 has TypeError: 'NoneType' object
version: 1.5.0
I hit a very strange error: I am training a large-scale language model and need to train on 2 machines (workers).
And sometimes I will get this error `TypeError: 'NoneType' object is not iterable`
This is traceback
```
... | [
-0.1806100606918335, -0.44680941104888916, 0.012923717498779297, 0.5495145320892334, 0.1124853864312172, -0.009134168736636639, 0.5815150141716003, 0.29670172929763794, 0.11292433738708496, 0.2367112636566162, 0.33802852034568787, 0.017376171424984932, -0.1162155345082283, 0.16757296025753...
https://github.com/huggingface/datasets/issues/2130 | wikiann dataset is missing columns | Here please find TFDS format of this dataset: https://www.tensorflow.org/datasets/catalog/wikiann
where there is a spans column; this is really necessary to be able to use the data, and I appreciate your help @lhoestq | Hi
Wikiann dataset needs to have "spans" columns, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets, could you please have a look? thank you @lhoestq | 32 | wikiann dataset is missing columns
Hi
Wikiann dataset needs to have "spans" columns, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets, could you please have a look? thank you @lhoestq
Here please find TFDS format of this dataset: https://www.tensorflow.org/d... | [
0.0038931076414883137, -0.43879345059394836, -0.09474525600671768, 0.2763383984565735, 0.3197287619113922, 0.1833391934633255, 0.32081690430641174, 0.08076166361570358, 0.05312659963965416, 0.25700920820236206, 0.010993089526891708, -0.23808708786964417, 0.07031048089265823, 0.420582383871...
https://github.com/huggingface/datasets/issues/2130 | wikiann dataset is missing columns | Hi !
Apparently you can get the spans from the NER tags using `tags_to_spans` defined here:
https://github.com/tensorflow/datasets/blob/c7096bd38e86ed240b8b2c11ecab9893715a7d55/tensorflow_datasets/text/wikiann/wikiann.py#L81-L126
It would be nice to include the `spans` field in this dataset as in TFDS. This coul... | Hi
Wikiann dataset needs to have "spans" columns, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets, could you please have a look? thank you @lhoestq | 61 | wikiann dataset is missing columns
Hi
Wikiann dataset needs to have "spans" columns, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets, could you please have a look? thank you @lhoestq
Hi !
Apparently you can get the spans from the NER tags using `tags_to_sp... | [
0.009602583944797516, -0.3400488793849945, -0.04167189449071884, 0.22198966145515442, 0.29570823907852173, 0.15064091980457306, 0.34696105122566223, 0.03894595801830292, 0.09205089509487152, 0.27537107467651367, 0.048568274825811386, -0.04506592079997063, -0.008062528446316719, 0.363244771...
https://github.com/huggingface/datasets/issues/2130 | wikiann dataset is missing columns | Hi @lhoestq
thank you very much for the help, it would be very nice to have it included. Here is the full code; one needs to also convert the tags to strings first:
```
import datasets
from datasets import load_dataset
def tags_to_spans(tags):
"""Convert tags to spans."""
spans = set()
span_start = 0
s... | Hi
Wikiann dataset needs to have "spans" columns, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets, could you please have a look? thank you @lhoestq | 402 | wikiann dataset is missing columns
Hi
Wikiann dataset needs to have "spans" columns, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets, could you please have a look? thank you @lhoestq
Hi @lhoestq
thank you very much for the help, it would be very nice to h... | [
0.054469551891088486, -0.32701337337493896, -0.04039029777050018, 0.13263121247291565, 0.2911989092826843, 0.24517272412776947, 0.4067167341709137, 0.1009385958313942, 0.3828306198120117, 0.15561658143997192, -0.04468030110001564, -0.21783453226089478, 0.022738009691238403, 0.5038778185844...
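Since the full `tags_to_spans` code above is cut off in this dump, here is a rough, self-contained sketch of the same idea: derive a `spans` column from the NER tags with `Dataset.map`. The column names follow the wikiann schema referenced in the comments (`tokens`, `ner_tags`); the exact span format produced by TFDS may differ, so treat the output format here as an assumption.

```python
# Hypothetical sketch: add a "spans" column to wikiann by grouping BIO tags.
from datasets import load_dataset

wikiann_en = load_dataset("wikiann", "en", split="validation")
tag_names = wikiann_en.features["ner_tags"].feature.names  # O, B-PER, I-PER, ...

def add_spans(example):
    tags = [tag_names[t] for t in example["ner_tags"]]
    tokens = example["tokens"]
    spans, label, current = [], None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-") or (tag.startswith("I-") and label != tag[2:]):
            if current:  # close the previous entity
                spans.append(f"{label}: {' '.join(current)}")
            label, current = tag[2:], [token]
        elif tag.startswith("I-"):
            current.append(token)
        else:  # an "O" tag closes any open entity
            if current:
                spans.append(f"{label}: {' '.join(current)}")
            label, current = None, []
    if current:
        spans.append(f"{label}: {' '.join(current)}")
    return {"spans": spans}

wikiann_en = wikiann_en.map(add_spans)
print(wikiann_en[0]["spans"])
```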
https://github.com/huggingface/datasets/issues/2130 | wikiann dataset is missing columns | Cool ! Let me give you some context:
#### Contribution guide
You can find the contribution guide here:
https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md
It explains how to set up your dev environment in a few steps.
#### Dataset loading
Each Dataset is defined by a Table that has ma... | Hi
Wikiann dataset needs to have "spans" columns, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets, could you please have a look? thank you @lhoestq | 208 | wikiann dataset is missing columns
Hi
Wikiann dataset needs to have "spans" columns, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets, could you please have a look? thank you @lhoestq
Cool ! Let me give you some context:
#### Contribution guide
You can... | [
0.005695040337741375, -0.26472556591033936, -0.04006805643439293, 0.09957081079483032, 0.3907712399959564, 0.10898131132125854, 0.28839391469955444, 0.10415703803300858, 0.013261757791042328, 0.1453699916601181, -0.032858956605196, -0.012728188186883926, 0.022151349112391472, 0.42865008115...
https://github.com/huggingface/datasets/issues/2129 | How to train BERT model with next sentence prediction? | Hi !
We're not using `TextDatasetForNextSentencePrediction` in `datasets`.
Although you can probably use the `TextDatasetForNextSentencePrediction.create_examples_from_document` on a dataset to prepare it for next sentence prediction. | Hello.
I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction
like ` TextDatasetForNextSentencePrediction` of `huggingface/transformers` ?
| 25 | How to train BERT model with next sentence prediction?
Hello.
I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction
like ` TextDatasetForNextSentencePrediction` of `huggingface/transformers` ?
Hi !
We're not using `TextDatasetForNex... | [
0.19170738756656647, -0.4076448082923889, -0.014769718050956726, -0.2003287523984909, 0.0005590601940639317, -0.2783758342266083, 0.14220641553401947, -0.012717148289084435, -0.025662310421466827, 0.1647023856639862, 0.1556011438369751, 0.04937278479337692, -0.15480896830558777, 0.06175391...
https://github.com/huggingface/datasets/issues/2129 | How to train BERT model with next sentence prediction? | Thanks.
Do you mean that `TextDatasetForNextSentencePrediction.create_examples_from_document` can be applied to a dataset object other than `TextDatasetForNextSentencePrediction`, e.g. a `Dataset` object which is loaded by `datasets.load_dataset`? | Hello.
I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction
like ` TextDatasetForNextSentencePrediction` of `huggingface/transformers` ?
| 24 | How to train BERT model with next sentence prediction?
Hello.
I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction
like ` TextDatasetForNextSentencePrediction` of `huggingface/transformers` ?
Thanks.
Do you mean that `TextDataset... | [
0.10564293712377548, -0.4607718288898468, 0.0155632384121418, -0.08984268456697464, 0.04743320494890213, -0.28990641236305237, 0.15321138501167297, -0.005172045435756445, 0.06400422006845474, 0.12141895294189453, 0.18533097207546234, 0.12819261848926544, -0.15475152432918549, 0.12822137773...
https://github.com/huggingface/datasets/issues/2129 | How to train BERT model with next sentence prediction? | It would probably require a bit of tweaking, but you can apply it to a dataset, yes.
This should give you a new dataset with sentence pairs you can train a model on.
You can find the documentation about dataset processing here:
https://huggingface.co/docs/datasets/processing.html#processing-data-with-map | Hello.
I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction
like ` TextDatasetForNextSentencePrediction` of `huggingface/transformers` ?
| 43 | How to train BERT model with next sentence prediction?
Hello.
I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction
like ` TextDatasetForNextSentencePrediction` of `huggingface/transformers` ?
It would probably require a bit of tweak... | [
0.22160688042640686, -0.40787073969841003, 0.02459142915904522, -0.12425870448350906, 0.03434876352548599, -0.18445833027362823, 0.09041339159011841, 0.02212015725672245, -0.05100822448730469, 0.0038794202264398336, 0.03292566537857056, 0.0937432199716568, -0.18825004994869232, 0.173675358...
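For anyone who wants to prepare next sentence prediction examples directly with `datasets` rather than `transformers`, here is a deliberately simplified sketch. The sentence splitting on `". "`, the 50/50 sampling and the label convention (0 for the real next sentence) are assumptions for illustration, not the `TextDatasetForNextSentencePrediction` implementation.

```python
# Hypothetical sketch: build (sentence_a, sentence_b, label) pairs with Dataset.map.
import random

from datasets import load_dataset

raw = load_dataset("text", data_files={"train": "corpus.txt"})["train"]

def make_nsp_pairs(batch):
    sentences = [s for doc in batch["text"] for s in doc.split(". ") if s]
    first, second, labels = [], [], []
    for i in range(len(sentences) - 1):
        first.append(sentences[i])
        if random.random() < 0.5:
            second.append(sentences[i + 1])          # the real next sentence
            labels.append(0)
        else:
            second.append(random.choice(sentences))  # a random sentence
            labels.append(1)
    return {"sentence_a": first, "sentence_b": second, "next_sentence_label": labels}

# batched=True lets the function return a different number of rows than it received
nsp_pairs = raw.map(make_nsp_pairs, batched=True, remove_columns=["text"])
print(nsp_pairs[0])
```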