html_url: string (length 48–51)
title: string (length 5–268)
comments: string (length 63–51.8k)
body: string (length 0–36.2k)
comment_length: int64 (16–1.52k)
text: string (length 164–54.1k)
embeddings: list
https://github.com/huggingface/datasets/issues/2170
Wikipedia historic dumps are deleted but hf/datasets hardcodes dump date
It seems that this can be fixed from the user's end by including a `date` argument, like this: `dataset = datasets.load_dataset('wikipedia', '20200501.en', date='20210420')` You can get available dates from [here](https://dumps.wikimedia.org/enwiki/). This is not a proper fix, however, as all the files will still ha...
Wikimedia does not keep all historical dumps. For example, as of today https://dumps.wikimedia.org/kowiki/ only provides ``` 20201220/ 02-Feb-2021 01:36 - 20210101/ 21-Feb-2021 01:26 - 20210120/ ...
48
Wikipedia historic dumps are deleted but hf/datasets hardcodes dump date Wikimedia does not keep all historical dumps. For example, as of today https://dumps.wikimedia.org/kowiki/ only provides ``` 20201220/ 02-Feb-2021 01:36 - 20210101/ ...
[ -0.0480471551, 0.3399815559, -0.0221921876, 0.0315802284, -0.3193788826, 0.1565126628, 0.3078281581, 0.5392643809, 0.1764219552, 0.1010855809, -0.0753563195, 0.0723982453, 0.2024514973, -0.244992435, -0.1113701165, -0.1828337014, 0.0309122205, 0.0772669464, -0.1100763455, -0.29...
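A runnable form of the workaround quoted above. The `date` argument and the two dates come straight from the comment; whether a given dump is actually available depends on what https://dumps.wikimedia.org currently lists:

```python
import datasets

# Override the config's hardcoded dump date with one that is still listed
# at https://dumps.wikimedia.org/enwiki/ (availability is not guaranteed).
dataset = datasets.load_dataset("wikipedia", "20200501.en", date="20210420")
```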
https://github.com/huggingface/datasets/issues/2166
Regarding Test Sets for the GEM datasets
Hi @vyraun ! The test references for CommonGen are not publicly available: you can reach out to the original dataset authors if you would like to ask for them, but we will not be releasing them as part of GEM (March 31st was the release date for the test set inputs; references are incidentally released for some of the ...
@yjernite Hi, are the test sets for the GEM datasets scheduled to be [added soon](https://gem-benchmark.com/shared_task)? e.g. ``` from datasets import load_dataset DATASET_NAME="common_gen" data = load_dataset("gem", DATASET_NAME) ``` The test set doesn't have the target or references. ``` data['test...
71
Regarding Test Sets for the GEM datasets @yjernite Hi, are the test sets for the GEM datasets scheduled to be [added soon](https://gem-benchmark.com/shared_task)? e.g. ``` from datasets import load_dataset DATASET_NAME="common_gen" data = load_dataset("gem", DATASET_NAME) ``` The test set doesn't have t...
[ -0.3199133873, -0.0961248577, -0.1973159611, 0.1844350994, -0.0919599235, 0.1184664592, 0.2837320864, 0.3940262496, -0.105286561, -0.0161251649, 0.1892586499, 0.2699713111, -0.3344812989, 0.1294026822, -0.0042846617, 0.2062072158, 0.0137711419, 0.0532654114, -0.1366806626, -0.1...
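A small sketch of the situation this thread describes: the test split loads, but its reference fields stay empty until the organizers release them. The config name follows the thread; exact field names depend on the GEM task schema:

```python
from datasets import load_dataset

# Per the comment above, inputs are present but the reference fields of the
# test split are expected to be empty until the shared task ends.
data = load_dataset("gem", "common_gen")
print(data["test"][0])
```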
https://github.com/huggingface/datasets/issues/2165
How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset
Hi, an HF dataset can be converted to a Torch Dataset with a simple wrapper as follows: ```python from torch.utils.data import Dataset class HFDataset(Dataset): def __init__(self, dset): self.dset = dset def __getitem__(self, idx): return self.dset[idx] def __len__(self): ...
Hi, I'm trying to pretrain a model with DeepSpeed using the HF arxiv dataset, like this: ``` train_ds = nlp.load_dataset('scientific_papers', 'arxiv') train_ds.set_format( type="torch", columns=["input_ids", "attention_mask", "global_attention_mask", "labels"], ) engine, _, _, _ = deepspeed.initialize( ...
124
How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset Hi, I'm trying to pretrain a model with DeepSpeed using the HF arxiv dataset, like this: ``` train_ds = nlp.load_dataset('scientific_papers', 'arxiv') train_ds.set_format( type="torch", columns=["input_ids", "attention_mask", "global_a...
[ -0.2578474879, -0.2652202249, 0.092290543, 0.3382237554, 0.192244038, 0.2227147222, -0.0302730072, 0.3457019925, -0.1513580084, -0.188662976, -0.3486477137, 0.3248992264, -0.1942693293, -0.1420533508, 0.1868404895, -0.2680390775, 0.1673271507, -0.077385366, -0.2466961443, -0.15...
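The wrapper from the truncated comment above, completed so it runs. Only the `__len__` body is reconstructed; it is the obvious one-liner:

```python
from torch.utils.data import Dataset


class HFDataset(Dataset):
    """Expose an HF `datasets` dataset through the torch map-style interface."""

    def __init__(self, dset):
        self.dset = dset

    def __getitem__(self, idx):
        return self.dset[idx]

    def __len__(self):
        return len(self.dset)
```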
https://github.com/huggingface/datasets/issues/2165
How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset
Interesting! Thanks for sharing this, @mariosasko. I like the idea. This looks like something we should add IMO.
Hi, I'm trying to pretrain a model with DeepSpeed using the HF arxiv dataset, like this: ``` train_ds = nlp.load_dataset('scientific_papers', 'arxiv') train_ds.set_format( type="torch", columns=["input_ids", "attention_mask", "global_attention_mask", "labels"], ) engine, _, _, _ = deepspeed.initialize( ...
20
How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset Hi, I'm trying to pretrain a model with DeepSpeed using the HF arxiv dataset, like this: ``` train_ds = nlp.load_dataset('scientific_papers', 'arxiv') train_ds.set_format( type="torch", columns=["input_ids", "attention_mask", "global_a...
[ -0.2704310417, -0.3552312553, 0.0578338318, 0.3274024725, 0.1308026314, 0.2287475765, -0.0746708885, 0.3752877116, -0.1519821733, -0.2421407998, -0.3885170817, 0.3905746937, -0.2419558614, 0.0159072392, 0.1749467105, -0.3065535724, 0.221538499, 0.0879760385, -0.2604884207, -0.1...
https://github.com/huggingface/datasets/issues/2165
How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset
@mariosasko Thanks for your code! It works perfectly with a small modification for the HF NLP dataset: ``` original_ds = nlp.load_dataset('scientific_papers', 'arxiv') train_ds = HFDataset(original_ds['train']) # needs splitting ```
Hi, I'm trying to pretrain a model with DeepSpeed using the HF arxiv dataset, like this: ``` train_ds = nlp.load_dataset('scientific_papers', 'arxiv') train_ds.set_format( type="torch", columns=["input_ids", "attention_mask", "global_attention_mask", "labels"], ) engine, _, _, _ = deepspeed.initialize( ...
28
How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset Hi, I'm trying to pretrain a model with DeepSpeed using the HF arxiv dataset, like this: ``` train_ds = nlp.load_dataset('scientific_papers', 'arxiv') train_ds.set_format( type="torch", columns=["input_ids", "attention_mask", "global_a...
[ -0.2263560146, -0.33357355, 0.0703147426, 0.3271582723, 0.1331762373, 0.1971443146, -0.0642954111, 0.3733651042, -0.13911888, -0.2662571073, -0.3984828591, 0.3849839568, -0.1794497967, -0.0095808413, 0.1554361135, -0.3008969426, 0.1781212091, 0.0767289549, -0.2033019066, -0.219...
https://github.com/huggingface/datasets/issues/2165
How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset
@lhoestq Sadly, from Python 3.7 onwards `torch.utils.data.Dataset` doesn't support the virtual subclass mechanism due to the `typing.Generic` type no longer having `abc.ABCMeta` as its metaclass. With that in mind, another option is to remove a direct type check (`isinstance(dataset, torch.utils.data.Dataset)`) in `deep...
Hi, I'm trying to pretrain a model with DeepSpeed using the HF arxiv dataset, like this: ``` train_ds = nlp.load_dataset('scientific_papers', 'arxiv') train_ds.set_format( type="torch", columns=["input_ids", "attention_mask", "global_attention_mask", "labels"], ) engine, _, _, _ = deepspeed.initialize( ...
108
How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset Hi, I'm trying to pretrain a model with DeepSpeed using the HF arxiv dataset, like this: ``` train_ds = nlp.load_dataset('scientific_papers', 'arxiv') train_ds.set_format( type="torch", columns=["input_ids", "attention_mask", "global_a...
[ -0.1920575798, -0.2704024911, 0.0988815427, 0.3232179284, 0.2382786423, 0.1883172244, -0.0456599332, 0.36336869, -0.1526026875, -0.229176119, -0.2917282581, 0.392419517, -0.2503359616, -0.1537751704, 0.1516398787, -0.2407063842, 0.1811587662, 0.0032428373, -0.269872725, -0.1991...
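A sketch of the alternative the comment above proposes: replace the strict `isinstance(dataset, torch.utils.data.Dataset)` check with duck typing. The helper below is purely illustrative, not DeepSpeed's actual code:

```python
def looks_like_map_style_dataset(obj) -> bool:
    # Anything with __getitem__ and __len__ can be consumed like a
    # map-style torch Dataset, even if it doesn't subclass it.
    return callable(getattr(obj, "__getitem__", None)) and callable(
        getattr(obj, "__len__", None)
    )
```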
https://github.com/huggingface/datasets/issues/2165
How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset
That makes sense ! Feel free to open an issue on their repo and discuss this idea
Hi, I'm trying to pretrain a model with DeepSpeed using the HF arxiv dataset, like this: ``` train_ds = nlp.load_dataset('scientific_papers', 'arxiv') train_ds.set_format( type="torch", columns=["input_ids", "attention_mask", "global_attention_mask", "labels"], ) engine, _, _, _ = deepspeed.initialize( ...
17
How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset Hi, I'm trying to pretrain a model with DeepSpeed using the HF arxiv dataset, like this: ``` train_ds = nlp.load_dataset('scientific_papers', 'arxiv') train_ds.set_format( type="torch", columns=["input_ids", "attention_mask", "global_a...
[ -0.2620017529, -0.3059776723, 0.0482734777, 0.3386889994, 0.1174888909, 0.2266670763, -0.0971869156, 0.3648380339, -0.1662657112, -0.2185194641, -0.3497871161, 0.3944827914, -0.2521584332, -0.0014387334, 0.1679954827, -0.2934632897, 0.2071099728, 0.0787693486, -0.2426628768, -0...
https://github.com/huggingface/datasets/issues/2165
How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset
@y-rokutan Hi, now if you install `deepspeed` from master (this feature will be available in the next official release), the code should work without subclassing. Let us know if you still have any issues.
Hi, I'm trying to pretrain a model with DeepSpeed using the HF arxiv dataset, like this: ``` train_ds = nlp.load_dataset('scientific_papers', 'arxiv') train_ds.set_format( type="torch", columns=["input_ids", "attention_mask", "global_attention_mask", "labels"], ) engine, _, _, _ = deepspeed.initialize( ...
34
How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset Hi, I'm trying to pretrain a model with DeepSpeed using the HF arxiv dataset, like this: ``` train_ds = nlp.load_dataset('scientific_papers', 'arxiv') train_ds.set_format( type="torch", columns=["input_ids", "attention_mask", "global_a...
[ -0.2635490298, -0.3762310743, 0.085872449, 0.3691845834, 0.1816752553, 0.3435536325, -0.0518242046, 0.4010280073, -0.1650427282, -0.2584496439, -0.3508141637, 0.3204567134, -0.2445641458, 0.0142730158, 0.1966751069, -0.3255870044, 0.2107606679, 0.0282449163, -0.2117950916, -0.2...
https://github.com/huggingface/datasets/issues/2165
How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset
Worth mentioning that any function that expects a `torch.utils.data.Dataset` (like `torch.utils.data.DataLoader`) will fail a mypy-esque typecheck if a `datasets.Dataset` is passed, even though it implements the interface correctly (I think). The virtual subclass idea was a good one; I wonder if there's another workaround given the Generi...
Hi, I'm trying to pretrain a model with DeepSpeed using the HF arxiv dataset, like this: ``` train_ds = nlp.load_dataset('scientific_papers', 'arxiv') train_ds.set_format( type="torch", columns=["input_ids", "attention_mask", "global_attention_mask", "labels"], ) engine, _, _, _ = deepspeed.initialize( ...
96
How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset Hi, I'm trying to pretrain a model with DeepSpeed using the HF arxiv dataset, like this: ``` train_ds = nlp.load_dataset('scientific_papers', 'arxiv') train_ds.set_format( type="torch", columns=["input_ids", "attention_mask", "global_a...
[ -0.2078216225, -0.2331487536, 0.1384680122, 0.2725794911, 0.2668263912, 0.1537621319, -0.0031081163, 0.3240683377, -0.0748967007, -0.2203993797, -0.3576210439, 0.3906596303, -0.2636039555, 0.1052082479, 0.2797997296, -0.2113584876, 0.1772635579, -0.0501310118, -0.165951252, -0....
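For the static-typing failure described above, a common stopgap is `typing.cast`, which satisfies the checker without any runtime conversion. A sketch; the imdb dataset and batch size are illustrative:

```python
from typing import cast

import torch.utils.data
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
loader = torch.utils.data.DataLoader(
    cast(torch.utils.data.Dataset, ds),  # no-op at runtime, appeases mypy
    batch_size=8,
)
```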
https://github.com/huggingface/datasets/issues/2162
visualization for cc100 is broken
This looks like an issue with the cc100 dataset itself, but I'm not sure. Did you try loading cc100 on your machine?
Hi, visualization through the dataset viewer for cc100 is broken: https://huggingface.co/datasets/viewer/ Thanks a lot.
22
visualization for cc100 is broken Hi, visualization through the dataset viewer for cc100 is broken: https://huggingface.co/datasets/viewer/ Thanks a lot. This looks like an issue with the cc100 dataset itself, but I'm not sure. Did you try loading cc100 on your machine?
[ -0.5656794906, -0.2004437, -0.0850356147, 0.1293198764, 0.2095072567, -0.0013309878, 0.1641537398, 0.1368133426, -0.065923512, 0.4231900871, -0.052365154, 0.2377000004, 0.1289269328, 0.4269972742, -0.0176099669, -0.2411685884, 0.053290084, 0.4060421288, -0.3985905647, 0.0212525...
https://github.com/huggingface/datasets/issues/2162
visualization for cc100 is broken
Hi, loading works fine; only the viewer is broken. Thanks.
Hi, visualization through the dataset viewer for cc100 is broken: https://huggingface.co/datasets/viewer/ Thanks a lot.
80
visualization for cc100 is broken Hi, visualization through the dataset viewer for cc100 is broken: https://huggingface.co/datasets/viewer/ Thanks a lot. Hi, loading works fine; only the viewer is broken. Thanks.
[ -0.489233166, -0.2571880221, -0.0154401315, 0.1678070128, 0.2279230058, 0.0367417149, 0.1436954886, 0.0804586262, -0.0384784043, 0.3180071712, -0.1425654143, 0.2456280887, 0.141941756, 0.4538695514, 0.0949074328, -0.32065925, 0.1359130591, 0.3763964772, -0.4416948259, 0.0868925...
https://github.com/huggingface/datasets/issues/2161
any possibility to download part of large datasets only?
oh, great, really awesome feature to have, thank you very much for the great, fabulous work
Hi, some of the datasets I need, like cc100, are very large, and I wonder if I can download the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling? Thanks
16
any possibility to download part of large datasets only? Hi, some of the datasets I need, like cc100, are very large, and I wonder if I can download the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling? Thanks oh, great, really awesome feature to ha...
[ -0.4569225609, -0.4417871237, -0.1896092445, 0.1484075487, 0.1038996279, 0.1881606877, -0.2670144737, 0.3661699295, 0.063260667, 0.4351859391, -0.4470096529, -0.1280027926, -0.1039809063, 0.3638651967, 0.1902366877, -0.0035722197, -0.1473829001, 0.252376914, -0.2892188728, -0.1...
https://github.com/huggingface/datasets/issues/2161
any possibility to download part of large datasets only?
We'll work on dataset streaming soon. This should allow you to only load the examples you need ;)
Hi, some of the datasets I need, like cc100, are very large, and I wonder if I can download the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling? Thanks
18
any possibility to download part of large datasets only? Hi, some of the datasets I need, like cc100, are very large, and I wonder if I can download the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling? Thanks We'll work on dataset streaming soon. T...
[ -0.4160079956, -0.4227997065, -0.0912120342, 0.2210886925, 0.1178916767, 0.2086139768, -0.2158918232, 0.3746379316, 0.107531555, 0.3137332797, -0.2778739631, -0.226484865, -0.0971202403, 0.3623842895, 0.2786153853, -0.1379797906, -0.1310558766, 0.2409259379, -0.1959341913, -0.1...
https://github.com/huggingface/datasets/issues/2161
any possibility to download part of large datasets only?
Thanks a lot Quentin, this would be a really, really great feature to have.
Hi, some of the datasets I need, like cc100, are very large, and I wonder if I can download the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling? Thanks
79
any possibility to download part of large datasets only? Hi, some of the datasets I need, like cc100, are very large, and I wonder if I can download the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling? Thanks Thanks a lot Quentin, this would be a rea...
[ -0.4333570898, -0.453487128, -0.0487323664, 0.2667761445, 0.1081290692, 0.2431802452, -0.1623354107, 0.4673763812, 0.0734753683, 0.3738211095, -0.3642460704, -0.2017759383, -0.0959115103, 0.4577916563, 0.3195568025, -0.147272557, -0.1357190758, 0.2194816917, -0.2238367796, -0.1...
https://github.com/huggingface/datasets/issues/2161
any possibility to download part of large datasets only?
Is streaming complete? In the 1.8.0 docs it is mentioned (https://huggingface.co/docs/datasets/dataset_streaming.html), but when following the example I get the following error: ``` >>> dataset2 = load_dataset("amazon_us_reviews", "Pet_Products_v1_00", split='train', streaming=True) ----------------------------...
Hi, some of the datasets I need, like cc100, are very large, and I wonder if I can download the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling? Thanks
123
any possibility to download part of large datasets only? Hi, some of the datasets I need, like cc100, are very large, and I wonder if I can download the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling? Thanks Is streaming complete? In the 1.8.0 do...
[ -0.4476093948, -0.3885074854, 0.0095246946, 0.3701238632, 0.1067442074, 0.1929702312, -0.0340351164, 0.5148776174, 0.0819870085, 0.2324685901, -0.4534534812, -0.1990630925, -0.1784165055, 0.4000231028, 0.3254312575, -0.1704495102, -0.1151954532, 0.1878745407, -0.0883974954, -0....
https://github.com/huggingface/datasets/issues/2161
any possibility to download part of large datasets only?
Hi ! Streaming is available on `master` only right now. We'll make a new release 1.9.0 on Monday :)
Hi, some of the datasets I need, like cc100, are very large, and I wonder if I can download the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling? Thanks
19
any possibility to download part of large datasets only? Hi, some of the datasets I need, like cc100, are very large, and I wonder if I can download the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling? Thanks Hi ! Streaming is available on `master`...
[ -0.4722627997, -0.4255562127, -0.113186948, 0.1344198585, 0.0880463496, 0.1738645732, -0.3542889357, 0.3757151663, -0.0502583832, 0.3286558986, -0.4129244983, -0.2279314995, -0.1485812217, 0.3793703914, 0.2008759379, -0.0660859942, -0.1095424294, 0.2192228884, -0.1483976692, -0...
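Once streaming landed (datasets >= 1.9 per the comment above), taking "the first X samples" without a full download looks roughly like this. A sketch: the `lang` parameter is an assumption about the cc100 loader's config kwargs:

```python
from itertools import islice

from datasets import load_dataset

# Examples are fetched lazily; only the first 1000 are actually downloaded.
stream = load_dataset("cc100", lang="en", split="train", streaming=True)
first_1000 = list(islice(stream, 1000))
```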
https://github.com/huggingface/datasets/issues/2160
data_args.preprocessing_num_workers almost freezes
Hi. I cannot always reproduce this issue, and on later runs I have not seen it so far. Also, sometimes I set 8 processes but fewer are shown; is this normal? Here only 5 are shown when 8 are set, thanks ``` #3: 11%|███████████████▊ ...
Hi @lhoestq, I am running this code from huggingface transformers: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py To speed up tokenization, since I am running on multiple datasets, I am using data_args.preprocessing_num_workers = 4 with the opus100 corpus, but this moves ...
71
data_args.preprocessing_num_workers almost freezes Hi @lhoestq, I am running this code from huggingface transformers: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py To speed up tokenization, since I am running on multiple datasets, I am using data_args.preprocessi...
[ -0.2514105737, -0.247919023, -0.1668762416, 0.0695673451, 0.1297331899, -0.1807381362, 0.4334704578, 0.0990600809, -0.3538159132, 0.2440420538, 0.0638779625, 0.2399454266, 0.094116047, -0.133018896, -0.0470905676, 0.151622504, 0.1583947241, 0.0112237511, -0.1086991802, 0.151717...
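For context, `preprocessing_num_workers` ends up as the `num_proc` argument of `Dataset.map` in run_mlm.py. A minimal sketch of that pattern; the model, dataset config, and tokenize function are illustrative, not the exact script code:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = load_dataset("opus100", "de-en", split="train")

tokenized = ds.map(
    lambda batch: tokenizer([t["en"] for t in batch["translation"]]),
    batched=True,
    num_proc=4,  # == data_args.preprocessing_num_workers
)
```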
https://github.com/huggingface/datasets/issues/2158
viewer "fake_news_english" error
Thanks for reporting ! The viewer doesn't have all the dependencies of the datasets. We may add openpyxl to be able to show this dataset properly
When I visit the [Huggingface - viewer](https://huggingface.co/datasets/viewer/) web site, under the dataset "fake_news_english" I've got this error: > ImportError: To be able to use this dataset, you need to install the following dependencies['openpyxl'] using 'pip install # noqa: requires this pandas optional depe...
26
viewer "fake_news_english" error When I visit the [Huggingface - viewer](https://huggingface.co/datasets/viewer/) web site, under the dataset "fake_news_english" I've got this error: > ImportError: To be able to use this dataset, you need to install the following dependencies['openpyxl'] using 'pip install # noqa:...
[ -0.1493436396, -0.1812809706, 0.0336433761, 0.3416323662, 0.2343087941, 0.2863432169, -0.0157953631, 0.2244073898, 0.1654276997, 0.0734672248, -0.2225703299, -0.1259511113, -0.0306418873, 0.3286090493, -0.0807030946, -0.2097441852, 0.2249316126, 0.1918755323, -0.0383204296, -0....
https://github.com/huggingface/datasets/issues/2153
load_dataset ignoring features
Nice question, which helped me a lot! I had wasted a lot of time on creating a `DatasetDict` from a csv file. I hope the documentation of this module adds some simple examples.
First of all, I'm sorry if it is a repeated issue or the changes are already in master, I searched and I didn't find anything. I'm using datasets 1.5.0 ![image](https://user-images.githubusercontent.com/37592763/113114369-8f376580-920b-11eb-900d-94365b59f04b.png) As you can see, when I load the dataset, the C...
32
load_dataset ignoring features First of all, I'm sorry if it is a repeated issue or the changes are already in master, I searched and I didn't find anything. I'm using datasets 1.5.0 ![image](https://user-images.githubusercontent.com/37592763/113114369-8f376580-920b-11eb-900d-94365b59f04b.png) As you can se...
[ -0.0856144503, -0.0304921698, 0.0129401591, 0.2842154205, 0.4256820977, 0.2531651855, 0.6406394839, -0.053947553, 0.2234635353, 0.0533239692, 0.1543195993, 0.33632496, -0.1144229174, 0.4599345326, -0.1519142389, -0.0312217567, 0.063283667, 0.1389199048, 0.0868092105, -0.1416219...
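For reference, a hedged sketch of the pattern this thread is about: declare an explicit `Features` schema when loading a csv, with `cast` as a fallback if the loader ignored it (the bug reported here). The column names, label encoding, and `train.csv` path are all assumptions:

```python
from datasets import ClassLabel, Features, Value, load_dataset

# Assumed schema: a "text" column and a "label" column stored as 0/1 integers.
features = Features(
    {"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])}
)
ds = load_dataset("csv", data_files="train.csv", features=features)
# Fallback if the schema was ignored at load time:
ds = ds.cast(features)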
https://github.com/huggingface/datasets/issues/2148
Add configurable options to `seqeval` metric
Hi @marrodion. Thanks for pointing this out. It would be great to incorporate this metric-specific enhancement. Another possibility would be to require the user to input the scheme as a string `mode="strict", scheme="IOB2"` and then dynamically import the corresponding module using Python `importlib`: ```python...
Right now `load_metric("seqeval")` only works in the default mode of evaluation (equivalent to conll evaluation). However, seqeval library [supports](https://github.com/chakki-works/seqeval#support-features) different evaluation schemes (IOB1, IOB2, etc.), which can be plugged in just by supporting additional kwargs...
61
Add configurable options to `seqeval` metric Right now `load_metric("seqeval")` only works in the default mode of evaluation (equivalent to conll evaluation). However, seqeval library [supports](https://github.com/chakki-works/seqeval#support-features) different evaluation schemes (IOB1, IOB2, etc.), which can be ...
[ -0.4404144585, 0.1902092099, -0.0845154375, -0.1625579149, 0.0749472752, -0.1645830125, 0.1490215957, 0.2498327196, -0.083608374, 0.3530941904, -0.4670090377, 0.2486551404, -0.032320369, 0.2693846226, 0.0718923435, 0.3165939748, -0.2818851769, -0.0426250286, 0.0006641197, 0.108...
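For concreteness, the usage proposed in this thread, with the `mode`/`scheme` kwargs quoted from the issue (IOB2 is just an example scheme; the tag sequences are toy data):

```python
from datasets import load_metric

metric = load_metric("seqeval")
predictions = [["B-PER", "I-PER", "O"]]
references = [["B-PER", "I-PER", "O"]]
results = metric.compute(
    predictions=predictions,
    references=references,
    mode="strict",
    scheme="IOB2",  # passed through to seqeval per the proposal above
)
print(results)
```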
https://github.com/huggingface/datasets/issues/2146
Dataset file size on disk is very large with 3D Array
Hi ! In the arrow file we store all the integers as uint8. So your arrow file should weigh around `height x width x n_channels x n_images` bytes. What feature type does your TFDS dataset have ? If it uses a `tfds.features.Image` type, then what is stored is the encoded data (as png or jpg for example). Since these...
Hi, I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8. The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`. `{ "description": "", "citation": ""...
114
Dataset file size on disk is very large with 3D Array Hi, I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8. The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.j...
[ -0.1452736109, -0.1132630259, -0.1544178575, 0.4201556742, 0.2135637701, 0.1139130071, 0.5066813827, 0.2730787396, 0.0205231383, 0.0358475558, -0.1937046945, 0.0874153227, -0.1694585234, 0.315043956, 0.0677503571, 0.1006881222, -0.0745175779, 0.2513982952, -0.133478567, -0.0857...
https://github.com/huggingface/datasets/issues/2146
Dataset file size on disk is very large with 3D Array
Thanks for the prompt response. You're right about the encoding, I have the `tfds.features.Image` feature type you mentioned. However, as described in the `dataset_info.json`, my dataset is made of 1479 (224x224x3) images. 1479 x 224 x 224 x 3 = 222630912 bytes which is far from the actual size 520803408 bytes. An...
Hi, I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8. The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`. `{ "description": "", "citation": ""...
62
Dataset file size on disk is very large with 3D Array Hi, I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8. The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.j...
[ -0.1452736109, -0.1132630259, -0.1544178575, 0.4201556742, 0.2135637701, 0.1139130071, 0.5066813827, 0.2730787396, 0.0205231383, 0.0358475558, -0.1937046945, 0.0874153227, -0.1694585234, 0.315043956, 0.0677503571, 0.1006881222, -0.0745175779, 0.2513982952, -0.133478567, -0.0857...
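The arithmetic from the comment above, spelled out (all numbers come from the thread):

```python
n_images, height, width, channels = 1479, 224, 224, 3
expected = n_images * height * width * channels  # bytes, for uint8 pixels
print(expected)                 # 222_630_912 (~212 MiB)
print(520_803_408 / expected)   # ~2.34x larger on disk than the raw payload
```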
https://github.com/huggingface/datasets/issues/2146
Dataset file size on disk is very large with 3D Array
@lhoestq I changed the data structure so I have a 2D Array feature type instead of a 3D Array by grouping the last two dimensions (a 224x672 2D Array instead of a 224x224x3 3D Array). The file size is now 223973964 bytes, nearly half the previous size! This is around what I would expect. I found similar behavio...
Hi, I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8. The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`. `{ "description": "", "citation": ""...
77
Dataset file size on disk is very large with 3D Array Hi, I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8. The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.j...
[ -0.1452736109, -0.1132630259, -0.1544178575, 0.4201556742, 0.2135637701, 0.1139130071, 0.5066813827, 0.2730787396, 0.0205231383, 0.0358475558, -0.1937046945, 0.0874153227, -0.1694585234, 0.315043956, 0.0677503571, 0.1006881222, -0.0745175779, 0.2513982952, -0.133478567, -0.0857...
https://github.com/huggingface/datasets/issues/2146
Dataset file size on disk is very large with 3D Array
Interesting ! This may be because of the offsets that are stored with the array data. Currently the offsets are stored even if the `shape` of the arrays is fixed. This was needed because of some issues with pyarrow a few months ago. I think these issues have been addressed now, so we can probably try to remove them...
Hi, I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8. The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`. `{ "description": "", "citation": ""...
80
Dataset file size on disk is very large with 3D Array Hi, I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8. The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.j...
[ -0.1452736109, -0.1132630259, -0.1544178575, 0.4201556742, 0.2135637701, 0.1139130071, 0.5066813827, 0.2730787396, 0.0205231383, 0.0358475558, -0.1937046945, 0.0874153227, -0.1694585234, 0.315043956, 0.0677503571, 0.1006881222, -0.0745175779, 0.2513982952, -0.133478567, -0.0857...
https://github.com/huggingface/datasets/issues/2146
Dataset file size on disk is very large with 3D Array
Yeah for sure, can you be a bit more specific about where the offset is stored in the code base ? And any reference to pyarrow issues if you have some. I would be very interested in contributing to `datasets` by trying to fix this issue.
Hi, I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8. The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`. `{ "description": "", "citation": ""...
46
Dataset file size on disk is very large with 3D Array Hi, I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8. The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.j...
[ -0.1452736109, -0.1132630259, -0.1544178575, 0.4201556742, 0.2135637701, 0.1139130071, 0.5066813827, 0.2730787396, 0.0205231383, 0.0358475558, -0.1937046945, 0.0874153227, -0.1694585234, 0.315043956, 0.0677503571, 0.1006881222, -0.0745175779, 0.2513982952, -0.133478567, -0.0857...
https://github.com/huggingface/datasets/issues/2146
Dataset file size on disk is very large with 3D Array
Pyarrow has two types of lists: variable length lists and fixed size lists. Currently we store the ArrayXD data as variable length lists. They take more disk space because they must store both actual data and offsets. In the `datasets` code this is done here: https://github.com/huggingface/nlp/blob/dbac87c8a083f80...
Hi, I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8. The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`. `{ "description": "", "citation": ""...
75
Dataset file size on disk is very large with 3D Array Hi, I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8. The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.j...
[ -0.1452736109, -0.1132630259, -0.1544178575, 0.4201556742, 0.2135637701, 0.1139130071, 0.5066813827, 0.2730787396, 0.0205231383, 0.0358475558, -0.1937046945, 0.0874153227, -0.1694585234, 0.315043956, 0.0677503571, 0.1006881222, -0.0745175779, 0.2513982952, -0.133478567, -0.0857...
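A small demonstration of the distinction drawn above: pyarrow's variable-length lists carry an offsets buffer on top of the values, while fixed-size lists do not. The toy shape here is arbitrary:

```python
import pyarrow as pa

variable = pa.array([[0] * 8] * 4)  # list<int64>: values + offsets buffers
fixed = pa.FixedSizeListArray.from_arrays(
    pa.array([0] * 32), 8           # fixed_size_list<int64, 8>: values only
)
print(variable.type, variable.nbytes)
print(fixed.type, fixed.nbytes)     # smaller: no offsets stored
```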
https://github.com/huggingface/datasets/issues/2144
Loading wikipedia 20200501.en throws pyarrow related error
That's how I loaded the dataset ```python from datasets import load_dataset ds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache') ```
**Problem description** I am getting the following error when trying to load wikipedia/20200501.en dataset. **Error log** Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikiped...
17
Loading wikipedia 20200501.en throws pyarrow related error **Problem description** I am getting the following error when trying to load wikipedia/20200501.en dataset. **Error log** Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, to...
[ -0.1041544527, 0.3526459336, 0.0340579785, 0.3633104265, 0.2745774984, 0.1802080125, 0.2762238383, 0.4623250961, -0.0434875786, -0.113784194, 0.0247378871, 0.2360218465, 0.1520679891, -0.1178019345, 0.1841087043, -0.1683135182, 0.0038772854, 0.0068802559, 0.0847920552, 0.132844...
https://github.com/huggingface/datasets/issues/2144
Loading wikipedia 20200501.en throws pyarrow related error
Hi ! It looks like the arrow file in the folder `/usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931` is corrupted. Can you take a look and check that it's 18.3GB ? If not, then maybe you need to redownload it: ```python from datasets ...
**Problem description** I am getting the following error when trying to load wikipedia/20200501.en dataset. **Error log** Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikiped...
46
Loading wikipedia 20200501.en throws pyarrow related error **Problem description** I am getting the following error when trying to load wikipedia/20200501.en dataset. **Error log** Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, to...
[ -0.1041544527, 0.3526459336, 0.0340579785, 0.3633104265, 0.2745774984, 0.1802080125, 0.2762238383, 0.4623250961, -0.0434875786, -0.113784194, 0.0247378871, 0.2360218465, 0.1520679891, -0.1178019345, 0.1841087043, -0.1683135182, 0.0038772854, 0.0068802559, 0.0847920552, 0.132844...
https://github.com/huggingface/datasets/issues/2144
Loading wikipedia 20200501.en throws pyarrow related error
> Hi ! It looks like the arrow file in the folder > `/usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931` is corrupted. > > Can you take a look and check that it's 18.3GB ? > > If not, then maybe you need to redownload it: > > ```pyth...
**Problem description** I am getting the following error when trying to load wikipedia/20200501.en dataset. **Error log** Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikiped...
113
Loading wikipedia 20200501.en throws pyarrow related error **Problem description** I am getting the following error when trying to load wikipedia/20200501.en dataset. **Error log** Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, to...
[ -0.1041544527, 0.3526459336, 0.0340579785, 0.3633104265, 0.2745774984, 0.1802080125, 0.2762238383, 0.4623250961, -0.0434875786, -0.113784194, 0.0247378871, 0.2360218465, 0.1520679891, -0.1178019345, 0.1841087043, -0.1683135182, 0.0038772854, 0.0068802559, 0.0847920552, 0.132844...
https://github.com/huggingface/datasets/issues/2144
Loading wikipedia 20200501.en throws pyarrow related error
I just tried on my side and got no issues. When downloading the dataset again, did it crash at 10.7GB as well ?
**Problem description** I am getting the following error when trying to load wikipedia/20200501.en dataset. **Error log** Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikiped...
23
Loading wikipedia 20200501.en throws pyarrow related error **Problem description** I am getting the following error when trying to load wikipedia/20200501.en dataset. **Error log** Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, to...
[ -0.1041544527, 0.3526459336, 0.0340579785, 0.3633104265, 0.2745774984, 0.1802080125, 0.2762238383, 0.4623250961, -0.0434875786, -0.113784194, 0.0247378871, 0.2360218465, 0.1520679891, -0.1178019345, 0.1841087043, -0.1683135182, 0.0038772854, 0.0068802559, 0.0847920552, 0.132844...
https://github.com/huggingface/datasets/issues/2144
Loading wikipedia 20200501.en throws pyarrow related error
> I just tried on my side and got no issues. > When downloading the dataset again, did it crash at 10.7GB as well ? Yes, I have tried it multiple times on different machines. I am wondering if you could share a screenshot of your dependency versions and I will try to make them the same as yours?
**Problem description** I am getting the following error when trying to load wikipedia/20200501.en dataset. **Error log** Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikiped...
59
Loading wikipedia 20200501.en throws pyarrow related error **Problem description** I am getting the following error when trying to load wikipedia/20200501.en dataset. **Error log** Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, to...
[ -0.1041544527, 0.3526459336, 0.0340579785, 0.3633104265, 0.2745774984, 0.1802080125, 0.2762238383, 0.4623250961, -0.0434875786, -0.113784194, 0.0247378871, 0.2360218465, 0.1520679891, -0.1178019345, 0.1841087043, -0.1683135182, 0.0038772854, 0.0068802559, 0.0847920552, 0.132844...
https://github.com/huggingface/datasets/issues/2144
Loading wikipedia 20200501.en throws pyarrow related error
I tried using `datasets` from `master` on macOS with Python 3.7.2. I also have `requests==2.23.0` and `tqdm==4.45.0`.
**Problem description** I am getting the following error when trying to load wikipedia/20200501.en dataset. **Error log** Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikiped...
17
Loading wikipedia 20200501.en throws pyarrow related error **Problem description** I am getting the following error when trying to load wikipedia/20200501.en dataset. **Error log** Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, to...
[ -0.1041544527, 0.3526459336, 0.0340579785, 0.3633104265, 0.2745774984, 0.1802080125, 0.2762238383, 0.4623250961, -0.0434875786, -0.113784194, 0.0247378871, 0.2360218465, 0.1520679891, -0.1178019345, 0.1841087043, -0.1683135182, 0.0038772854, 0.0068802559, 0.0847920552, 0.132844...
https://github.com/huggingface/datasets/issues/2139
TypeError when using save_to_disk in a dataset loaded with ReadInstruction split
Hi ! I think this has been fixed recently on `master`. Can you try again by installing `datasets` from `master` ? ``` pip install git+https://github.com/huggingface/datasets.git ```
Hi, Loading a dataset with `load_dataset` using a split defined via `ReadInstruction` and then saving it to disk results in the following error: `TypeError: Object of type ReadInstruction is not JSON serializable`. Here is the minimal reproducible example: ```python from datasets import load_dataset from dat...
26
TypeError when using save_to_disk in a dataset loaded with ReadInstruction split Hi, Loading a dataset with `load_dataset` using a split defined via `ReadInstruction` and then saving it to disk results in the following error: `TypeError: Object of type ReadInstruction is not JSON serializable`. Here is the mini...
[ -0.1317064166, 0.2105593234, 0.0438050441, 0.3900920749, 0.3082846105, 0.2542074621, 0.4115678072, 0.218305409, 0.1559665054, 0.0927032307, -0.1810031384, 0.3821817338, -0.1831880063, 0.4692059755, -0.3343833685, -0.2261848152, 0.1314816028, 0.0324933827, 0.0247838777, 0.151653...
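A completed version of the truncated repro above, runnable before the fix referenced in the comment. The dataset choice, percentage, and output path are illustrative:

```python
from datasets import ReadInstruction, load_dataset

ds = load_dataset("imdb", split=ReadInstruction("train", to=10, unit="%"))
ds.save_to_disk("./imdb_first_10pct")  # raised the TypeError before the fix
```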
https://github.com/huggingface/datasets/issues/2135
en language data from MLQA dataset is missing
Hi ! Indeed only the languages of the `translate-train` data are included... I can't find a link to download the english train set on https://github.com/facebookresearch/MLQA though, do you know where we can download it ?
Hi, I need the mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. Could you have a look please? @lhoestq Thank you for your help fixing this issue.
35
en language data from MLQA dataset is missing Hi, I need the mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. Could you have a look please? @lhoestq Thank you for your help fixing this issue. Hi ! Indeed only the languages of the `translate-train` data are included... I can't find a link to ...
[ -0.0344623998, 0.2046835572, -0.2271873206, 0.2308053672, 0.0578234643, 0.2997930646, -0.039533522, -0.0461164117, -0.149814263, 0.1264655739, 0.1958927363, -0.1498143673, 0.0673908368, 0.4813949168, 0.2251039743, -0.2650110722, -0.0097862715, -0.0794419944, -0.1342738867, -0.4...
https://github.com/huggingface/datasets/issues/2135
en language data from MLQA dataset is missing
Hi @lhoestq, thank you very much for coming back to me. Now I see you are right: in the link you sent I see splits of {split}-context-{context_language}-question-{question_language}.json with context_language=question_language=en. TFDS most probably has extracted the English ones from these files as en language files, bu...
Hi, I need the mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. Could you have a look please? @lhoestq Thank you for your help fixing this issue.
57
en language data from MLQA dataset is missing Hi, I need the mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. Could you have a look please? @lhoestq Thank you for your help fixing this issue. Hi @lhoestq, thank you very much for coming back to me. Now I see you are right: in the link you se...
[ -0.0408194661, -0.0362209193, -0.1773767173, 0.3228134513, 0.1323034465, 0.2861717641, 0.0952834561, 0.1073277593, -0.2343605906, 0.1195359528, 0.0948156267, 0.0513929091, 0.1680395603, 0.5239027739, 0.2062966079, -0.4014205039, -0.0716604739, 0.0019191451, -0.176000163, -0.353...
https://github.com/huggingface/datasets/issues/2135
en language data from MLQA dataset is missing
I am closing the ticket, since I do not see any en data; they have trained on "SQuAD V1.1" instead. Thanks.
Hi, I need the mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. Could you have a look please? @lhoestq Thank you for your help fixing this issue.
20
en language data from MLQA dataset is missing Hi, I need the mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. Could you have a look please? @lhoestq Thank you for your help fixing this issue. I am closing the ticket, since I do not see any en data; they have trained on "SQuAD V1.1" instead. Th...
[ -0.0614050813, -0.0550394766, -0.1854838282, 0.0839589611, 0.1434617788, 0.2676543891, 0.1858890057, -0.0415907279, -0.1807010472, 0.1482945681, 0.2283990234, 0.1972754598, 0.1776254922, 0.3383447528, 0.3166696131, -0.2478694618, -0.0484906435, -0.0179906245, -0.048479598, -0.4...
https://github.com/huggingface/datasets/issues/2134
Saving large in-memory datasets with save_to_disk crashes because of pickling
Hi ! Indeed `save_to_disk` doesn't call pickle anymore. Though the `OverflowError` can still appear for in-memory datasets bigger than 4GB. This happens when doing this for example: ```python import pyarrow as pa import pickle arr = pa.array([0] * ((4 * 8 << 30) // 64)) table = pa.Table.from_arrays([arr], names=[...
Using Datasets 1.5.0 on Python 3.7. Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively faster when done in memory, and I have the ability to requisition a lot of RAM, so...
134
Saving large in-memory datasets with save_to_disk crashes because of pickling Using Datasets 1.5.0 on Python 3.7. Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively fa...
[ -0.3168885708, 0.0995955095, 0.1197799817, 0.3708882928, 0.2521230876, 0.0021296495, -0.283498019, 0.4639685452, 0.190556705, 0.0964655131, 0.0808834657, 0.5146648884, -0.3989092708, 0.2905988395, -0.157480225, -0.083372362, 0.2200171649, -0.1045618355, -0.2615798116, 0.1710372...
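A runnable completion of the truncated snippet above. The column name is an assumption; the note on pickle protocols is the usual explanation for this `OverflowError`:

```python
import pickle

import pyarrow as pa

arr = pa.array([0] * ((4 * 8 << 30) // 64))         # 2**29 int64 zeros ~ 4 GiB
table = pa.Table.from_arrays([arr], names=["col"])  # "col" is an assumed name
pickle.dumps(table)  # OverflowError where the default protocol is < 4 (> 4 GiB object)
```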
https://github.com/huggingface/datasets/issues/2134
Saving large in-memory datasets with save_to_disk crashes because of pickling
Hi! So I've managed to create a minimal working (well, technically crashing) example for the multiprocessing case. I create a huge list of zeros, like in your example, and then I try to .map(None, num_proc=2) over it, which then crashes. Here's the code: ```python from datasets import Dataset if __name__ == '_...
Using Datasets 1.5.0 on Python 3.7. Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively faster when done in memory, and I have the ability to requisition a lot of RAM, so...
832
Saving large in-memory datasets with save_to_disk crashes because of pickling Using Datasets 1.5.0 on Python 3.7. Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively fa...
[ -0.3168885708, 0.0995955095, 0.1197799817, 0.3708882928, 0.2521230876, 0.0021296495, -0.283498019, 0.4639685452, 0.190556705, 0.0964655131, 0.0808834657, 0.5146648884, -0.3989092708, 0.2905988395, -0.157480225, -0.083372362, 0.2200171649, -0.1045618355, -0.2615798116, 0.1710372...
https://github.com/huggingface/datasets/issues/2134
Saving large in-memory datasets with save_to_disk crashes because of pickling
I just merged a fix (#2150) that allows pickling tables bigger than 4GiB. Feel free to try it on the `master` branch!
Using Datasets 1.5.0 on Python 3.7. Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively faster when done in memory, and I have the ability to requisition a lot of RAM, so...
24
Saving large in-memory datasets with save_to_disk crashes because of pickling Using Datasets 1.5.0 on Python 3.7. Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively fa...
[ -0.3168885708, 0.0995955095, 0.1197799817, 0.3708882928, 0.2521230876, 0.0021296495, -0.283498019, 0.4639685452, 0.190556705, 0.0964655131, 0.0808834657, 0.5146648884, -0.3989092708, 0.2905988395, -0.157480225, -0.083372362, 0.2200171649, -0.1045618355, -0.2615798116, 0.1710372...
https://github.com/huggingface/datasets/issues/2134
Saving large in-memory datasets with save_to_disk crashes because of pickling
Awesome! I started getting this error as well when I tried to tokenize with a longer sequence length.
Using Datasets 1.5.0 on Python 3.7. Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively faster when done in memory, and I have the ability to requisition a lot of RAM, so...
18
Saving large in-memory datasets with save_to_disk crashes because of pickling Using Datasets 1.5.0 on Python 3.7. Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively fa...
[ -0.3168885708, 0.0995955095, 0.1197799817, 0.3708882928, 0.2521230876, 0.0021296495, -0.283498019, 0.4639685452, 0.190556705, 0.0964655131, 0.0808834657, 0.5146648884, -0.3989092708, 0.2905988395, -0.157480225, -0.083372362, 0.2200171649, -0.1045618355, -0.2615798116, 0.1710372...
https://github.com/huggingface/datasets/issues/2134
Saving large in-memory datasets with save_to_disk crashes because of pickling
@prokopCerny does this fix work for you? I found that with the latest master, my container with 500GB RAM starts crashing when I try to map a large dataset using `num_proc`. @lhoestq would it be possible to implement some logic to keep the individual cache files small (say below 100mb)? I find this helps with loadin...
Using Datasets 1.5.0 on Python 3.7. Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively faster when done in memory, and I have the ability to requisition a lot of RAM, so...
84
Saving large in-memory datasets with save_to_disk crashes because of pickling Using Datasets 1.5.0 on Python 3.7. Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively fa...
[ -0.3168885708, 0.0995955095, 0.1197799817, 0.3708882928, 0.2521230876, 0.0021296495, -0.283498019, 0.4639685452, 0.190556705, 0.0964655131, 0.0808834657, 0.5146648884, -0.3989092708, 0.2905988395, -0.157480225, -0.083372362, 0.2200171649, -0.1045618355, -0.2615798116, 0.1710372...
https://github.com/huggingface/datasets/issues/2134
Saving large in-memory datasets with save_to_disk crashes because of pickling
Closing since the original issue was fixed in #2150. Feel free to reopen if you are still experiencing it. For the other problems, please open separate issues.
Using Datasets 1.5.0 on Python 3.7. Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively faster when done in memory, and I have the ability to requisition a lot of RAM, so...
27
Saving large in-memory datasets with save_to_disk crashes because of pickling Using Datasets 1.5.0 on Python 3.7. Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively fa...
[ -0.3168885708, 0.0995955095, 0.1197799817, 0.3708882928, 0.2521230876, 0.0021296495, -0.283498019, 0.4639685452, 0.190556705, 0.0964655131, 0.0808834657, 0.5146648884, -0.3989092708, 0.2905988395, -0.157480225, -0.083372362, 0.2200171649, -0.1045618355, -0.2615798116, 0.1710372...
https://github.com/huggingface/datasets/issues/2133
bug in mlqa dataset
If you print those questions, you get readable texts: ```python >>> questions = [ ... "\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?", ...
Hi, looking into the MLQA dataset for language "ar": ``` "question": [ "\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?", "\u0643\u0...
111
bug in mlqa dataset Hi, looking into the MLQA dataset for language "ar": ``` "question": [ "\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u...
[ -0.1158529744, -0.2315681726, -0.2734040916, 0.2112121135, 0.332148701, 0.0711948499, 0.352468878, 0.1213383377, -0.3227719665, 0.2254538387, 0.0618871227, 0.2817642987, 0.2518622279, 0.25612095, 0.1166764647, -0.0477858111, 0.1292088181, 0.1794564724, 0.0600871928, -0.19770464...
https://github.com/huggingface/datasets/issues/2133
bug in mlqa dataset
Hi @dorost1234. In Python 3, strings are sequences of Unicode _code points_. Unicode is a specification that maps all characters (and emoji symbols) to a unique representation in terms of code points. That is what you see: Unicode code points (represented by a \u escaped sequence of 16-bit hex values). Charac...
Hi, looking into the MLQA dataset for language "ar": ``` "question": [ "\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?", "\u0643\u0...
121
bug in mlqa dataset Hi, looking into the MLQA dataset for language "ar": ``` "question": [ "\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u...
[ -0.1158529744, -0.2315681726, -0.2734040916, 0.2112121135, 0.332148701, 0.0711948499, 0.352468878, 0.1213383377, -0.3227719665, 0.2254538387, 0.0618871227, 0.2817642987, 0.2518622279, 0.25612095, 0.1166764647, -0.0477858111, 0.1292088181, 0.1794564724, 0.0600871928, -0.19770464...
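To make the explanation above concrete: the `\u` sequences are just the escaped source form, and printing renders the characters. This uses the first word of the first question quoted in the issue:

```python
q = "\u0645\u062a\u0649"  # escaped form of the Arabic word متى ("when")
print(q)                  # -> متى
print(len(q))             # 3 code points
```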
https://github.com/huggingface/datasets/issues/2132
TydiQA dataset is mixed and is not split per language
You can filter the languages this way: ```python tydiqa_en = tydiqa_dataset.filter(lambda x: x["language"] == "english") ``` Otherwise maybe we can have one configuration per language ? What do you think of this for example ? ```python load_dataset("tydiqa", "primary_task.en") ```
Hi @lhoestq Currently TydiQA is mixed and the user can only access the whole training set of all languages: https://www.tensorflow.org/datasets/catalog/tydi_qa To use this dataset, one needs to train/evaluate on each separate language, and having them mixed makes it hard to use. This is much convenien...
39
TydiQA dataset is mixed and is not split per language Hi @lhoestq Currently TydiQA is mixed and the user can only access the whole training set of all languages: https://www.tensorflow.org/datasets/catalog/tydi_qa To use this dataset, one needs to train/evaluate on each separate language, and having them mixed ...
[ -0.2592063844, -0.2352886647, -0.2044115365, 0.2665457726, 0.2771571577, -0.0258290023, 0.3412592411, 0.3332073092, -0.1514982134, 0.0776814744, -0.334315598, -0.0462718271, -0.0167532805, 0.4214196205, 0.0419183187, -0.3201328218, -0.1105377153, 0.0581464022, -0.1372379214, -0...
https://github.com/huggingface/datasets/issues/2132
TydiQA dataset is mixed and is not split per language
Hi, thank you very much for the great response. It would be really wonderful to have one configuration per language, as in the majority of cases one needs the dataset per language for cross-lingual evaluations. This would also be closer to the TFDS format, which is separated per language https://www.tensorflow.org/datase...
Hi @lhoestq Currently TydiQA is mixed and the user can only access the whole training set of all languages: https://www.tensorflow.org/datasets/catalog/tydi_qa To use this dataset, one needs to train/evaluate on each separate language, and having them mixed makes it hard to use. This is much convenien...
145
TydiQA dataset is mixed and is not split per language Hi @lhoestq Currently TydiQA is mixed and the user can only access the whole training set of all languages: https://www.tensorflow.org/datasets/catalog/tydi_qa To use this dataset, one needs to train/evaluate on each separate language, and having them mixed ...
[ -0.3552143574, -0.2323043644, -0.1865599304, 0.2470621318, 0.305155009, -0.0820613429, 0.382584095, 0.3362082243, -0.1914510876, 0.1315777004, -0.4605716467, -0.0628385991, 0.0637700111, 0.4531884789, 0.0351660959, -0.2893123031, -0.1692790985, 0.098513104, -0.24431023, 0.00380...
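Building on the `filter` suggestion quoted above, a sketch of per-language views pending a per-language configuration. The `language` column follows the thread's example; the language list is illustrative:

```python
from datasets import load_dataset

tydiqa = load_dataset("tydiqa", "primary_task")
languages = ["english", "arabic", "korean"]
per_language = {
    lang: tydiqa.filter(lambda x, lang=lang: x["language"] == lang)
    for lang in languages
}
print({lang: len(ds["train"]) for lang, ds in per_language.items()})
```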
https://github.com/huggingface/datasets/issues/2131
When training with Multi-Node Multi-GPU the worker 2 has TypeError: 'NoneType' object
Hi ! Thanks for reporting. I was able to reproduce this issue. It was caused by missing split infos when a worker reloads the cache of another worker. I just opened https://github.com/huggingface/datasets/pull/2137 to fix this issue.
version: 1.5.0 I met a very strange error: I am training a large-scale language model and need to train on 2 machines (workers). Sometimes I get this error: `TypeError: 'NoneType' object is not iterable` This is the traceback ``` Traceback (most recent call last): File "run_gpt.py"...
37
When training with Multi-Node Multi-GPU the worker 2 has TypeError: 'NoneType' object version: 1.5.0 met a very strange error, I am training large scale language model, and need train on 2 machines(workers). And sometimes I will get this error `TypeError: 'NoneType' object is not iterable` This is traceback ``` ...
[ -0.180610165, -0.4468093514, 0.0129235955, 0.5495147109, 0.11248523, -0.0091341212, 0.5815146565, 0.2967012227, 0.1129245311, 0.2367111444, 0.3380287588, 0.0173763577, -0.1162156761, 0.1675734371, -0.0424106121, -0.2821591496, -0.1764216423, 0.0434676856, -0.2278055698, -0.2145...
https://github.com/huggingface/datasets/issues/2130
wikiann dataset is missing columns
Please find the TFDS format of this dataset here: https://www.tensorflow.org/datasets/catalog/wikiann where there is a spans column; this is really necessary to be able to use the data, and I appreciate your help @lhoestq
Hi, the Wikiann dataset needs to have a "spans" column, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets. Could you please have a look? Thank you @lhoestq
32
wikiann dataset is missing columns Hi Wikiann dataset needs to have "spans" columns, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets, could you please have a look? thank you @lhoestq Here please find TFDS format of this dataset: https://www.tensorflow.org/d...
[ 0.003892977, -0.438793391, -0.0947453976, 0.2763384283, 0.3197287917, 0.1833391488, 0.3208169639, 0.0807618052, 0.0531266369, 0.257009089, 0.0109931706, -0.2380872071, 0.070310466, 0.4205825925, 0.3480724394, -0.82036376, 0.1133003384, 0.3876081705, -0.2301767617, -0.1201592609...
https://github.com/huggingface/datasets/issues/2130
wikiann dataset is missing columns
Hi ! Apparently you can get the spans from the NER tags using `tags_to_spans` defined here: https://github.com/tensorflow/datasets/blob/c7096bd38e86ed240b8b2c11ecab9893715a7d55/tensorflow_datasets/text/wikiann/wikiann.py#L81-L126 It would be nice to include the `spans` field in this dataset as in TFDS. This coul...
Hi, the Wikiann dataset needs to have a "spans" column, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets. Could you please have a look? Thank you @lhoestq
61
wikiann dataset is missing columns Hi Wikiann dataset needs to have "spans" columns, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets, could you please have a look? thank you @lhoestq Hi ! Apparently you can get the spans from the NER tags using `tags_to_sp...
[ 0.009602542, -0.3400488496, -0.0416718125, 0.2219899148, 0.2957080901, 0.1506407708, 0.346960932, 0.0389458872, 0.0920506716, 0.2753711045, 0.0485684313, -0.0450658649, -0.0080624251, 0.3632448018, 0.3476262093, -0.7035682201, 0.0311786141, 0.2775573134, -0.1697653383, 0.012547...
https://github.com/huggingface/datasets/issues/2130
wikiann dataset is missing columns
Hi @lhoestq thank you very much for the help, it would be very nice to have it included, here is the full code, one need to also convert tags to string first: ``` import datasets from datasets import load_dataset def tags_to_spans(tags): """Convert tags to spans.""" spans = set() span_start = 0 s...
Hi, the Wikiann dataset needs to have a "spans" column, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets. Could you please have a look? Thank you @lhoestq
402
wikiann dataset is missing columns Hi Wikiann dataset needs to have "spans" columns, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets, could you please have a look? thank you @lhoestq Hi @lhoestq thank you very much for the help, it would be very nice to h...
[ 0.0544692427, -0.327013433, -0.0403902158, 0.1326313913, 0.2911990583, 0.2451727241, 0.4067166448, 0.1009383053, 0.3828304708, 0.1556163877, -0.0446800925, -0.2178343982, 0.0227379184, 0.503877759, 0.3199463785, -0.5506706238, 0.1787426621, 0.2836462855, 0.0272959322, -0.171730...
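To make the suggestion above concrete, here is a minimal, self-contained sketch that adds a `spans` column with `map`. The `tags_to_spans` helper below is a simplified stand-in for the TFDS function linked in the comments, and the `"LABEL: token ..."` span format is an assumption:

```python
from datasets import load_dataset

def tags_to_spans(tags):
    """Minimal BIO -> (label, (start, end)) converter; a simplified
    stand-in for the TFDS helper linked above."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel closes a trailing span
        starts_new = tag.startswith("B-")
        breaks_span = starts_new or tag == "O" or (
            tag.startswith("I-") and label != tag[2:]
        )
        if breaks_span and label is not None:
            spans.append((label, (start, i - 1)))
            start, label = None, None
        if starts_new:
            start, label = i, tag[2:]
    return spans

wikiann = load_dataset("wikiann", "en", split="train")
tag_names = wikiann.features["ner_tags"].feature.names  # e.g. "B-PER"

def add_spans(example):
    tags = [tag_names[t] for t in example["ner_tags"]]
    example["spans"] = [
        label + ": " + " ".join(example["tokens"][start : end + 1])
        for label, (start, end) in tags_to_spans(tags)
    ]
    return example

wikiann = wikiann.map(add_spans)
print(wikiann[0]["spans"])
```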
https://github.com/huggingface/datasets/issues/2130
wikiann dataset is missing columns
Cool ! Let me give you some context: #### Contribution guide You can find the contribution guide here: https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md It explains how to set up your dev environment in a few steps. #### Dataset loading Each Dataset is defined by a Table that has ma...
Hi, the Wikiann dataset needs to have a "spans" column, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets. Could you please have a look? Thank you @lhoestq
208
wikiann dataset is missing columns Hi Wikiann dataset needs to have "spans" columns, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets, could you please have a look? thank you @lhoestq Cool ! Let me give you some context: #### Contribution guide You can...
[ 0.0056950436, -0.2647255957, -0.0400680825, 0.0995711461, 0.3907710016, 0.108981505, 0.2883934975, 0.1041570157, 0.0132620335, 0.1453700215, -0.0328591205, -0.0127277374, 0.022151161, 0.4286500216, 0.2620582581, -0.7255473733, 0.137414068, 0.2676770687, -0.2057021111, 0.0645835...
https://github.com/huggingface/datasets/issues/2129
How to train BERT model with next sentence prediction?
Hi ! We're not using `TextDatasetForNextSentencePrediction` in `datasets`. Although you can probably use the `TextDatasetForNextSentencePrediction.create_examples_from_document` on a dataset to prepare it for next sentence prediction.
Hello. I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction like `TextDatasetForNextSentencePrediction` of `huggingface/transformers`?
25
How to train BERT model with next sentence prediction? Hello. I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction like ` TextDatasetForNextSentencePrediction` of `huggingface/transformers` ? Hi ! We're not using `TextDatasetForNex...
[ 0.1917073578, -0.4076449871, -0.0147695569, -0.2003289908, 0.0005587882, -0.2783756852, 0.1422065794, -0.0127170496, -0.0256621707, 0.164702341, 0.1556011885, 0.0493725799, -0.1548089236, 0.0617542081, 0.328505367, -0.6952241063, 0.1574009806, 0.2188373357, 0.0097453222, -0.150...
https://github.com/huggingface/datasets/issues/2129
How to train BERT model with next sentence prediction?
Thanks. Do you mean that `TextDatasetForNextSentencePrediction.create_examples_from_document` can be applied to a dataset object other than `TextDatasetForNextSentencePrediction`, e.g. a `Dataset` object which is loaded by `datasets.load_dataset`?
Hello. I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction like `TextDatasetForNextSentencePrediction` of `huggingface/transformers`?
24
How to train BERT model with next sentence prediction? Hello. I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction like ` TextDatasetForNextSentencePrediction` of `huggingface/transformers` ? Thanks. Do you mean that `TextDataset...
[ 0.1056430787, -0.4607717991, 0.015563231, -0.0898428485, 0.0474330075, -0.2899063826, 0.1532113403, -0.0051718513, 0.0640042052, 0.12141902, 0.1853311956, 0.1281925142, -0.1547513604, 0.1282217056, 0.3845977485, -0.6245179176, 0.1314675212, 0.2932180762, -0.0514883585, -0.24333...
https://github.com/huggingface/datasets/issues/2129
How to train BERT model with next sentence prediction?
It would probably require a bit of tweaking, but you can apply it to a dataset, yes. This should give you a new dataset with sentence pairs you can train a model on. You can find the documentation about dataset processing here: https://huggingface.co/docs/datasets/processing.html#processing-data-with-map
Hello. I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction like `TextDatasetForNextSentencePrediction` of `huggingface/transformers`?
43
How to train BERT model with next sentence prediction? Hello. I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction like ` TextDatasetForNextSentencePrediction` of `huggingface/transformers` ? It would probably require a bit of tweak...
[ 0.2216069102, -0.4078708589, 0.0245913938, -0.1242587194, 0.0343488269, -0.1844583005, 0.0904134139, 0.022120377, -0.0510081649, 0.0038792149, 0.0329258516, 0.0937429443, -0.1882499754, 0.173675254, 0.3407391906, -0.6513745785, 0.1297620684, 0.1553863287, -0.0457978584, -0.0744...
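As a rough illustration of the `map`-based preparation discussed above, here is a sketch that builds next-sentence-prediction pairs from a `datasets` corpus. The pairing logic is a simplified stand-in, not the `transformers` implementation, and `wikitext` is only an example corpus:

```python
import random
from datasets import load_dataset

# wikitext is used here only as an example corpus with a "text" column.
corpus = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
corpus = corpus.filter(lambda x: len(x["text"].strip()) > 0)

def make_nsp_pairs(batch):
    sentences = batch["text"]
    pairs = {"sentence_a": [], "sentence_b": [], "next_sentence_label": []}
    for i in range(len(sentences) - 1):
        pairs["sentence_a"].append(sentences[i])
        if random.random() < 0.5:
            # true continuation -> label 0 ("is next"), as in BERT
            pairs["sentence_b"].append(sentences[i + 1])
            pairs["next_sentence_label"].append(0)
        else:
            # random sentence -> label 1 ("not next")
            pairs["sentence_b"].append(random.choice(sentences))
            pairs["next_sentence_label"].append(1)
    return pairs

# batched map may change the number of rows when all input columns are removed
nsp_dataset = corpus.map(
    make_nsp_pairs, batched=True, remove_columns=corpus.column_names
)
print(nsp_dataset[0])
```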
https://github.com/huggingface/datasets/issues/2128
Dialogue action slot name and value are reversed in MultiWoZ 2.2
Hi ! Good catch, thanks for reporting. If you are interested in contributing, feel free to open a PR to fix this :)
Hi @yjernite, thank you for adding MultiWoZ 2.2 to the huggingface datasets platform. It is very helpful! I spotted an error: the order of the dialogue action slot names and values is reversed. https://github.com/huggingface/datasets/blob/649b2c469779bc4221e1b6969aa2496d63eb5953/datasets/multi_woz_v22/multi_woz_v22.p...
23
Dialogue action slot name and value are reversed in MultiWoZ 2.2 Hi @yjernite, thank you for adding MultiWoZ 2.2 in the huggingface datasets platform. It is beneficial! I spot an error that the order of Dialogue action slot names and values are reversed. https://github.com/huggingface/datasets/blob/649b2c469779...
[ 0.4248687625, -0.3456932008, 0.017481396, 0.4803556204, -0.2102745771, -0.0071411864, 0.1827980876, 0.1119404435, -0.100788489, 0.2386504561, -0.35799101, 0.130564332, 0.02305362, 0.3927916884, -0.0238378309, -0.2758992314, 0.0942917094, -0.0138568049, 0.0493687391, 0.131144851...
https://github.com/huggingface/datasets/issues/2123
Problem downloading GEM wiki_auto_asset_turk dataset
Hi, sadly I can't replicate the problem on my Windows machine. Try to update the library to the newest version with: ```bash pip install git+https://github.com/huggingface/datasets ```
@yjernite ### Summary I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code. ### Steps to reproduce Code snippet: from datasets import load_dataset #dataset = load_dataset('gem', 'web_nlg_en') d...
26
Problem downloading GEM wiki_auto_asset_turk dataset @yjernite ### Summary I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code. ### Steps to reproduce Code snippet: from datasets import load_d...
[ -0.0816878304, -0.0280800704, -0.0464758538, 0.4495637715, 0.2806960046, 0.1159588397, 0.3161947727, 0.17786102, 0.2470398396, -0.1171151102, 0.1766639948, 0.1722978055, 0.11423444, -0.0033534069, -0.168628633, -0.1824518144, -0.0154064838, -0.0730276853, 0.0597671308, -0.18034...
https://github.com/huggingface/datasets/issues/2123
Problem downloading GEM wiki_auto_asset_turk dataset
Is there an error message ? What stacktrace do you get if you interrupt the execution of the program while downloading ?
@yjernite ### Summary I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code. ### Steps to reproduce Code snippet: from datasets import load_dataset #dataset = load_dataset('gem', 'web_nlg_en') d...
22
Problem downloading GEM wiki_auto_asset_turk dataset @yjernite ### Summary I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code. ### Steps to reproduce Code snippet: from datasets import load_d...
[ -0.1321627349, 0.0142392963, -0.0815601721, 0.4399940968, 0.2992445529, 0.1402181685, 0.3794735372, 0.1477880925, 0.2320192009, -0.0426291712, 0.2344710231, 0.1440672427, 0.1074129865, -0.0218768213, -0.1961920857, -0.1418477297, -0.0774906278, -0.0699519292, 0.1331875771, -0.1...
https://github.com/huggingface/datasets/issues/2123
Problem downloading GEM wiki_auto_asset_turk dataset
Sorry for the long time since my last comment, I tried again and don't seem to have the problem anymore, thanks for your support!
@yjernite ### Summary I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code. ### Steps to reproduce Code snippet: from datasets import load_dataset #dataset = load_dataset('gem', 'web_nlg_en') d...
24
Problem downloading GEM wiki_auto_asset_turk dataset @yjernite ### Summary I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code. ### Steps to reproduce Code snippet: from datasets import load_d...
[ -0.0690354481, -0.0099979816, -0.0715509728, 0.4162677526, 0.2822788358, 0.1222891435, 0.3851535618, 0.2134168744, 0.2334706783, -0.1133929715, 0.2428504676, 0.1628346592, 0.1382038146, -0.0105032744, -0.1870105863, -0.1910442263, -0.0316888355, -0.0682895333, 0.1373941153, -0....
https://github.com/huggingface/datasets/issues/2123
Problem downloading GEM wiki_auto_asset_turk dataset
Great ! I'm closing the issue then. Feel free to re-open if you experience this issue again
@yjernite ### Summary I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code. ### Steps to reproduce Code snippet: from datasets import load_dataset #dataset = load_dataset('gem', 'web_nlg_en') d...
17
Problem downloading GEM wiki_auto_asset_turk dataset @yjernite ### Summary I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code. ### Steps to reproduce Code snippet: from datasets import load_d...
[ -0.0635911077, 0.0054958081, -0.0699881092, 0.3975269198, 0.2704816461, 0.1117716953, 0.4051201046, 0.2125453651, 0.2219946533, -0.0980709195, 0.2423040271, 0.1652897745, 0.1437680721, -0.0094376672, -0.1849551201, -0.1691983044, -0.02092731, -0.0780063421, 0.1397192776, -0.228...
https://github.com/huggingface/datasets/issues/2116
Creating custom dataset results in error while calling the map() function
Hi, the `_data` attribute is missing due to `MyDataset.__init__` not calling the parent `__init__`. However, I don't think it's a good idea to subclass the `datasets.Dataset` class (e.g. it's kind of dangerous to override `datasets.Dataset.__getitem__`). Instead, it's better to follow the "association over inheritan...
Calling `map()` of the `datasets` library results in an error when defining a custom dataset. Reproducible example: ``` import datasets class MyDataset(datasets.Dataset): def __init__(self, sentences): "Initialization" self.samples = sentences def __len__(self): "Denotes the ...
75
Creating custom dataset results in error while calling the map() function calling `map()` of `datasets` library results into an error while defining a Custom dataset. Reproducible example: ``` import datasets class MyDataset(datasets.Dataset): def __init__(self, sentences): "Initialization" ...
[ -0.3095270395, 0.1919833869, -0.0280815456, 0.0809182301, 0.2322636992, 0.024453098, 0.4063164592, 0.3732212186, 0.2232669592, 0.0579671077, 0.1070921496, 0.4760223329, -0.3742124736, -0.0511050522, 0.0784918964, -0.0089361854, 0.0559965149, 0.1856333464, -0.1835916489, -0.0389...
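A minimal sketch of the composition-based alternative suggested in the comment above: wrap the samples with a constructor such as `Dataset.from_dict` instead of subclassing (the column name `sentence` is an assumption):

```python
from datasets import Dataset

sentences = ["a first sentence", "a second sentence"]

# Wrap the raw samples instead of subclassing datasets.Dataset.
dataset = Dataset.from_dict({"sentence": sentences})

# map() now works as usual.
dataset = dataset.map(lambda x: {"n_chars": len(x["sentence"])})
print(dataset[0])
```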
https://github.com/huggingface/datasets/issues/2106
WMT19 Dataset for Kazakh-English is not formatted correctly
Hi ! Thanks for reporting By looking at the raw `news-commentary-v14.en-kk.tsv` file, it looks like there are at least 17 lines with this issue. Moreover these issues are not always the same: - L97 is only `kk` text and must be appended at the end of the `kk` text of the **next** line - L2897 is only `kk` text an...
In addition to the bug of languages being switched from issue #415, there are incorrect translations in the dataset because the English-Kazakh translations have an off-by-one formatting error. The News Commentary v14 parallel data set for kk-en from http://www.statmt.org/wmt19/translation-task.html has a bug here: > ...
144
WMT19 Dataset for Kazakh-English is not formatted correctly In addition to the bug of languages being switched from Issue @415, there are incorrect translations in the dataset because the English-Kazakh translations have a one off formatting error. The News Commentary v14 parallel data set for kk-en from http://ww...
[ -0.0953303054, -0.5491960645, -0.0432182774, 0.3162058294, -0.086027205, 0.010772177, 0.1837911457, 0.172209233, -0.1140667275, 0.2022086829, 0.1289510578, 0.0987926573, 0.1117266864, 0.5228420496, 0.0502477176, -0.2106285691, 0.1143199056, -0.1941501498, -0.2592694163, -0.1341...
https://github.com/huggingface/datasets/issues/2105
Request to remove S2ORC dataset
Hello @kyleclo! Currently, we are getting the data from your bucket, so if you remove it the HF script won't work anymore :) Until you solve things on your end, @lhoestq suggested we just return a warning message when people try to load that dataset from HF. What would you like it to say?
Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work out those hurdles! Thanks!
54
Request to remove S2ORC dataset Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work ou...
[ 0.249745965, -0.3549623489, -0.0154877398, 0.2169870436, 0.0509788804, -0.0599794872, 0.0153632453, 0.1701466441, 0.3005090058, 0.0649965778, -0.3151451349, -0.2419162393, -0.3971518278, 0.3670790195, -0.2082138062, 0.1481841058, -0.12108998, 0.0473448373, -0.2688441277, -0.021...
https://github.com/huggingface/datasets/issues/2105
Request to remove S2ORC dataset
Hi @kyleclo, as of today, you have not removed your bucket data yet, and therefore HuggingFace can download it from there. Is it OK? Are you planning to eventually delete it? Thank you.
Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work out those hurdles! Thanks!
33
Request to remove S2ORC dataset Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work ou...
[ 0.4957600534, -0.3337315321, -0.0840625316, 0.5156828165, 0.0438570231, -0.1015332788, -0.0326936692, 0.0329051092, -0.0669208243, 0.035270758, -0.4459502697, -0.1573462933, -0.4066599011, 0.3072045743, -0.2146409154, 0.1066133976, -0.0704355091, -0.0257811863, -0.2145420462, 0...
https://github.com/huggingface/datasets/issues/2105
Request to remove S2ORC dataset
Hi! Sorry I missed @yjernite 's previous message, thanks for responding! Is there an option where we can keep our data in our bucket, but the HF script no longer pulls data from it?
Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work out those hurdles! Thanks!
34
Request to remove S2ORC dataset Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work ou...
[ 0.3147093356, -0.2077658474, -0.023903884, 0.3282621503, -0.0140944244, -0.1829029918, 0.017519135, 0.1725749373, 0.1771009713, 0.1621483266, -0.4038799107, -0.2786832452, -0.312787652, 0.4572265446, -0.1433836669, 0.1945002824, -0.0702700987, 0.0524483994, -0.3202082813, 0.032...
https://github.com/huggingface/datasets/issues/2104
Trouble loading wiki_movies
Hi ! `wiki_movies` was added in `datasets==1.2.0`. However it looks like you have `datasets==1.1.2`. To use `wiki_movies`, please update `datasets` with ``` pip install --upgrade datasets ```
Hello, I am trying to load_dataset("wiki_movies") and it gives me this error - `FileNotFoundError: Couldn't find file locally at wiki_movies/wiki_movies.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/wiki_movies/wiki_movies.py or https://s3.amazonaws.com/datasets.huggingfa...
27
Trouble loading wiki_movies Hello, I am trying to load_dataset("wiki_movies") and it gives me this error - `FileNotFoundError: Couldn't find file locally at wiki_movies/wiki_movies.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/wiki_movies/wiki_movies.py or https://s3.am...
[ -0.2073981762, -0.0135146892, -0.0442205295, 0.3873901069, 0.2891762257, 0.196327135, 0.1652408093, 0.2904251218, 0.1623349339, -0.0591127612, -0.0651293248, 0.0398703106, 0.0390819609, 0.0095691886, 0.2180411816, -0.1336186528, 0.1319329292, -0.0677643269, 0.0409928747, -0.151...
https://github.com/huggingface/datasets/issues/2104
Trouble loading wiki_movies
Thanks a lot! That solved it and I was able to upload a model trained on it as well :)
Hello, I am trying to load_dataset("wiki_movies") and it gives me this error - `FileNotFoundError: Couldn't find file locally at wiki_movies/wiki_movies.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/wiki_movies/wiki_movies.py or https://s3.amazonaws.com/datasets.huggingfa...
20
Trouble loading wiki_movies Hello, I am trying to load_dataset("wiki_movies") and it gives me this error - `FileNotFoundError: Couldn't find file locally at wiki_movies/wiki_movies.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/wiki_movies/wiki_movies.py or https://s3.am...
[ -0.1121565253, -0.0139942784, 0.0073025585, 0.4026292264, 0.321033448, 0.2078267038, 0.242879644, 0.2546979189, 0.1591462791, -0.1293400675, -0.0809660554, -0.063657552, -0.0174833629, 0.0371454507, 0.2689794004, -0.1464954168, 0.1440911591, -0.0804252923, 0.0386455692, -0.1759...
https://github.com/huggingface/datasets/issues/2103
citation, homepage, and license fields of `dataset_info.json` are duplicated many times
Thanks for reporting :) Maybe we can concatenate fields only if they are different. Currently this is done here: https://github.com/huggingface/nlp/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/src/datasets/info.py#L180-L196 This can be a good first contribution to the library. Please comment if you'd like t...
This happens after a `map` operation when `num_proc` is set to `>1`. I tested this by cleaning up the json before running the `map` op on the dataset so it's unlikely it's coming from an earlier concatenation. Example result: ``` "citation": "@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {...
43
citation, homepage, and license fields of `dataset_info.json` are duplicated many times This happens after a `map` operation when `num_proc` is set to `>1`. I tested this by cleaning up the json before running the `map` op on the dataset so it's unlikely it's coming from an earlier concatenation. Example result: ...
[ 0.1381230354, 0.0136454394, -0.0630833879, 0.3942778111, -0.0054806597, 0.0727091655, 0.2289254218, 0.3846516609, -0.0693550259, 0.0431419201, 0.0773390532, 0.6607300043, 0.4834605157, -0.1113681644, 0.0986290872, 0.254014641, 0.0726541951, 0.0371470563, 0.1630942225, -0.024760...
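A minimal sketch of the suggested fix, concatenating the string fields only when they differ (this helper is hypothetical, not the actual `datasets.info` code):

```python
def merge_info_field(current: str, new: str) -> str:
    """Merge dataset_info string fields (citation/homepage/license)
    without duplicating content across map() shards."""
    if not current:
        return new
    if not new or new in current:
        return current
    return current + "\n\n" + new

# Example: merging the same citation twice keeps a single copy.
citation = merge_info_field("", "@ONLINE {wikidump, ...}")
citation = merge_info_field(citation, "@ONLINE {wikidump, ...}")
print(citation)
```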
https://github.com/huggingface/datasets/issues/2099
load_from_disk takes a long time to load local dataset
Hi ! Can you share more information about the features of your dataset ? You can get them by printing `my_dataset.features` Can you also share the code of your `map` function ?
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helpin...
32
load_from_disk takes a long time to load local dataset I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.u...
[ -0.3500204086, -0.1982149035, -0.054451324, 0.2700722516, 0.3246715069, 0.0501474664, 0.4278253615, 0.1963978708, 0.775203824, -0.012891979, 0.1478900313, 0.5318042636, 0.0868460536, -0.0826434121, 0.1153062582, 0.094270125, 0.3582593799, 0.2302629352, 0.4533495009, -0.18876345...
https://github.com/huggingface/datasets/issues/2099
load_from_disk takes a long time to load local dataset
It is actually just the tokenized `wikipedia` dataset with `input_ids`, `attention_mask`, etc, with one extra column which is a list of integers. The `text` column is removed during tokenization. ``` def add_len_and_seq(example): end_idx = example['input_ids'].index(SEP) example['actual_len'] = end_idx-1 ...
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helpin...
51
load_from_disk takes a long time to load local dataset I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.u...
[ -0.2723699212, -0.1468665451, -0.0388626717, 0.187440455, 0.4000376463, 0.0453447104, 0.4641445875, 0.2294566035, 0.6530973315, -0.039413128, 0.2031133026, 0.5122695565, -0.0241839066, -0.1652988344, 0.1699288785, 0.1053498685, 0.3435816765, 0.1828207076, 0.5284674168, -0.22277...
https://github.com/huggingface/datasets/issues/2099
load_from_disk takes a long time to load local dataset
Is `PAD_ID` a python integer ? You need all the integers in `example['seq']` to have the same type. Does this work if you remove the `np.uint8` and use python integers instead ?
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helpin...
32
load_from_disk takes a long time to load local dataset I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.u...
[ -0.2943400145, -0.2248800844, -0.0571159944, 0.3033533096, 0.315648526, 0.049701184, 0.4435778856, 0.2428751737, 0.679972887, -0.0187010784, 0.152965501, 0.5113557577, 0.0885320008, -0.108503677, 0.13383618, 0.0986103639, 0.3429254293, 0.206077233, 0.475964576, -0.1658245623, ...
https://github.com/huggingface/datasets/issues/2099
load_from_disk takes a long time to load local dataset
Yup, I cast it to `np.uint8` outside the function where it was defined. It was originally using python integers.
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helpin...
19
load_from_disk takes a long time to load local dataset I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.u...
[ -0.2997468114, -0.2523109019, -0.0470443331, 0.2852841914, 0.3280438781, 0.0210996699, 0.4262611568, 0.2177658826, 0.7503626347, -0.0394667983, 0.1555926949, 0.5000485778, 0.0829722658, -0.0863408893, 0.1403079927, 0.116669856, 0.3541413844, 0.2110988945, 0.4538334608, -0.16082...
https://github.com/huggingface/datasets/issues/2099
load_from_disk takes a long time to load local dataset
Strangely, even when I manually created `np.arrays` of specific `dtypes`, the types in the final `dataset_info.json` that gets written are still `int64`. Update: I tried creating lists of `int8`s and got the same result.
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helpin...
34
load_from_disk takes a long time to load local dataset I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.u...
[ -0.217398718, -0.187672317, -0.0520241372, 0.3366708755, 0.329988569, 0.0349010751, 0.4153966308, 0.2823086083, 0.6786059141, -0.1099483073, 0.1326244473, 0.5752292871, 0.1942533553, -0.0471497513, 0.1443948299, 0.1174119934, 0.3587274551, 0.2722391784, 0.3763957322, -0.1764422...
https://github.com/huggingface/datasets/issues/2099
load_from_disk takes a long time to load local dataset
Yes this is a known issue: #625 We're working on making the precision kept for numpy :) To specify the precision of the integers, currently one needs to specify the output features with `.map(..., features=output_features)`
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helpin...
35
load_from_disk takes a long time to load local dataset I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.u...
[ -0.2853319049, -0.2187871188, -0.0280981362, 0.2345815897, 0.3253433108, 0.0638444796, 0.4536395073, 0.2049432695, 0.6538358927, -0.0338120349, 0.1011138484, 0.4731714427, 0.1274630427, -0.1699026078, 0.0634806976, 0.1007766277, 0.3845755458, 0.1702151299, 0.4007257521, -0.2281...
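A minimal sketch of the workaround from the comment above, pinning the integer precision through `features=` in `map` (the column names and values are illustrative):

```python
from datasets import Dataset, Features, Sequence, Value

ds = Dataset.from_dict({"input_ids": [[101, 7592, 102], [101, 2088, 102]]})

features = Features(
    {
        "input_ids": Sequence(Value("int32")),
        # Without an explicit feature, the new column would default to int64.
        "seq": Sequence(Value("uint8")),
    }
)

ds = ds.map(lambda ex: {"seq": [1] * len(ex["input_ids"])}, features=features)
print(ds.features)
```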
https://github.com/huggingface/datasets/issues/2099
load_from_disk takes a long time to load local dataset
Do you know what step is taking forever in the code ? What happens if you interrupt the execution of the dataset loading ?
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helpin...
24
load_from_disk takes a long time to load local dataset I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.u...
[ -0.357709378, -0.191145733, -0.054289069, 0.2522045374, 0.3178027868, 0.0741005391, 0.4124633372, 0.2019447237, 0.6933250427, 0.0321033821, 0.1932914257, 0.4987611175, 0.1199499443, -0.0854376033, 0.0782647654, 0.1477905661, 0.3398174644, 0.2383553833, 0.4402107, -0.1339953989,...
https://github.com/huggingface/datasets/issues/2099
load_from_disk takes a long time to load local dataset
After a synchronous discussion, we found that the cache file sizes have an enormous effect on the loading speed: smaller cache files result in faster load times. `num_proc` controls the number of cache files that are being written and is inversely proportional to the individual file size. In other words, increase `num_...
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helpin...
66
load_from_disk takes a long time to load local dataset I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.u...
[ -0.3226238191, -0.1608169079, -0.0745908245, 0.2495032996, 0.2985534072, 0.0387582481, 0.4221408963, 0.2822893262, 0.742570281, -0.0142083783, 0.1286984831, 0.5218667984, 0.1710797697, -0.0604943484, 0.1026291102, 0.1128982082, 0.3352092505, 0.2497864068, 0.4394030869, -0.10520...
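A minimal sketch of that takeaway: raising `num_proc` shards the work across processes, which writes more (and therefore smaller) cache files (the dataset and function are placeholders):

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(100_000))})

# With num_proc=8, eight smaller cache files are written instead of
# one large one, which loaded noticeably faster in this issue.
ds = ds.map(lambda ex: {"y": ex["x"] * 2}, num_proc=8)
ds.save_to_disk("processed_ds")
```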
https://github.com/huggingface/datasets/issues/2092
How to disable making arrow tables in load_dataset ?
Hi ! We plan to add streaming features in the future. This should allow loading a dataset instantaneously without generating the arrow table. The trade-off is that accessing examples from a streaming dataset must be done in an iterative way, and with an additional (but hopefully minor) overhead. What do you think a...
Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ?
84
How to disable making arrow tables in load_dataset ? Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ? Hi ! We plan to add streaming features in the future. This should allow to load a dataset instantaneously without generating the arrow table. ...
[ -0.3108789325, -0.3046274483, -0.1025218889, -0.0479800291, 0.0687684789, -0.0700552538, 0.1910865754, -0.0508346967, 0.1361464262, 0.1569509059, 0.3037062287, 0.1940683275, -0.184898451, 0.3730877638, 0.3175566196, -0.0077213403, 0.1201917678, 0.4260941148, -0.2305736989, 0.04...
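For reference, streaming later shipped in `datasets` as the `streaming=True` flag; a minimal sketch (the dataset name is just an example):

```python
from datasets import load_dataset

# No arrow table is built; examples are fetched on the fly.
stream = load_dataset(
    "oscar", "unshuffled_deduplicated_en", split="train", streaming=True
)

for i, example in enumerate(stream):
    print(example["text"][:80])
    if i == 2:
        break
```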
https://github.com/huggingface/datasets/issues/2092
How to disable making arrow tables in load_dataset ?
People mainly want this feature either because it takes too much time to make arrow tables, or because they occupy too much space on disk. I think both problems can be solved if we provide the arrow tables themselves on the datasets hub. Can we do this currently @lhoestq ?
Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ?
49
How to disable making arrow tables in load_dataset ? Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ? People mainly want this feature either because it takes too much time too make arrow tables, or they occupy too much memory on the disk. I think ...
[ -0.2959378958, -0.2490315884, -0.1036209464, 0.2489057332, -0.04503401, 0.1605257541, 0.2928256094, -0.0211193673, 0.3396776915, 0.2951470017, 0.0326352678, 0.4835271239, 0.0262509361, 0.0174948294, 0.2436441034, 0.1563293636, -0.014244833, 0.5176953673, -0.3697736561, -0.02299...
https://github.com/huggingface/datasets/issues/2092
How to disable making arrow tables in load_dataset ?
@lhoestq I think ```try_from_hf_gcs``` provides the same functionality. Which datasets are available on HF GCS? Are all the datasets on the HuggingFace datasets hub made available on GCS automatically?
Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ?
31
How to disable making arrow tables in load_dataset ? Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ? @lhoestq I think the ```try_from_hf_gcs``` provide the same functionality. What all datasets are available on HF GCS? Are all the datasets on hug...
[ -0.0699812546, -0.5376312137, -0.0606943034, 0.2699545026, 0.0311885942, 0.1857694685, 0.2047214657, -0.0738598108, 0.5251615644, 0.191554144, -0.2083573043, 0.2142614275, 0.1429018825, 0.0653284341, 0.2521748245, 0.0714228824, 0.0697016269, 0.31918028, -0.4851082563, -0.268643...
https://github.com/huggingface/datasets/issues/2092
How to disable making arrow tables in load_dataset ?
Only datasets like wikipedia, wiki40b, wiki_dpr and natural questions are available already processed on the HF google storage. This is used to download directly the arrow file instead of building it from the original data files.
Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ?
36
How to disable making arrow tables in load_dataset ? Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ? Only datasets like wikipedia, wiki40b, wiki_dpr and natural questions are available already processed on the HF google storage. This is used to d...
[ -0.1666768789, -0.0997224748, -0.0602355227, 0.2515959144, -0.0468831956, 0.1143142357, 0.2708076835, 0.0048357891, 0.469778657, 0.1632673889, -0.037074741, 0.1923090369, 0.1889045686, 0.0169301033, 0.2863856256, 0.1164423227, -0.0252679437, 0.4686412215, -0.3328587711, -0.1243...
https://github.com/huggingface/datasets/issues/2092
How to disable making arrow tables in load_dataset ?
@lhoestq How can we make sure that the data we upload to the HuggingFace hub is available in the form of preprocessed arrow files ?
Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ?
23
How to disable making arrow tables in load_dataset ? Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ? @lhoestq How can we make sure that the data we upload on HuggingFace hub is available in form of preprocessed arrow files ?
[ -0.1100970656, -0.4545694888, -0.1245341375, 0.3275224566, 0.1385050714, 0.066650182, 0.1080088466, 0.0051871324, 0.2907290161, 0.2163161337, -0.0762574822, 0.3311559558, 0.0758256018, 0.2674564123, 0.3114097416, 0.0753524378, -0.0126064634, 0.337411195, -0.4707742631, -0.07295...
https://github.com/huggingface/datasets/issues/2092
How to disable making arrow tables in load_dataset ?
We're still working on this :) This will be available soon Users will be able to put their processed arrow files on the Hub
Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ?
24
How to disable making arrow tables in load_dataset ? Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ? We're still working on this :) This will be available soon Users will be able to put their processed arrow files on the Hub
[ -0.3103678823, -0.209383592, -0.146432355, 0.2652353942, 0.0345697738, 0.0666943341, 0.2737095654, 0.113288559, 0.3428302109, 0.2253295183, 0.0225783847, 0.5258625746, 0.0204505976, 0.1348217726, 0.2148839533, 0.2104864568, -0.0567544587, 0.3835603893, -0.3702233732, -0.0217933...
https://github.com/huggingface/datasets/issues/2089
Add documentaton for dataset README.md files
Hi ! We are using the [datasets-tagging app](https://github.com/huggingface/datasets-tagging) to select the tags to add. We are also adding the full list of tags in #2107 This covers multilinguality, language_creators, licenses, size_categories and task_categories. In general if you want to add a tag that doesn...
Hi, the dataset README files have special headers. Somehow a documentation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passed to multilinguality? - what should be passed to language_creators? - which valu...
94
Add documentaton for dataset README.md files Hi, the dataset README files have special headers. Somehow a documenation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passted to multilinguality? - what shoul...
[ -0.2009667307, 0.2520237267, -0.1304932833, 0.0798936486, 0.1809326559, 0.3571425378, 0.3119643331, 0.1849831194, 0.1644607782, -0.0163467042, -0.0213758387, 0.3338292539, -0.0963024572, 0.1983851343, 0.1711072326, 0.0399824381, 0.11463397, -0.063079454, 0.3094628453, -0.181299...
https://github.com/huggingface/datasets/issues/2089
Add documentaton for dataset README.md files
@lhoestq hmm - ok thanks for the answer. To be honest I am not sure if this issue can be closed now. I just wanted to point out that this should either be documented or linked in the documentation. If you feel like it is (will be) please just close this.
Hi, the dataset README files have special headers. Somehow a documentation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passed to multilinguality? - what should be passed to language_creators? - which valu...
51
Add documentaton for dataset README.md files Hi, the dataset README files have special headers. Somehow a documenation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passted to multilinguality? - what shoul...
[ -0.1834502369, 0.2513653636, -0.1089650914, 0.197436288, 0.1107169837, 0.3312234581, 0.3745903969, 0.1719959825, 0.0320845582, 0.0473188497, 0.0283440165, 0.2382667959, -0.0631278381, 0.0927071348, 0.0448256135, 0.0325521156, 0.0677399114, -0.0201823022, 0.2282106876, -0.271778...
https://github.com/huggingface/datasets/issues/2089
Add documentaton for dataset README.md files
We're still working on the validation and documentation for this. Feel free to keep this issue open till we've added them.
Hi, the dataset README files have special headers. Somehow a documentation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passed to multilinguality? - what should be passed to language_creators? - which valu...
19
Add documentaton for dataset README.md files Hi, the dataset README files have special headers. Somehow a documenation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passted to multilinguality? - what shoul...
[ -0.2152336538, 0.2649826109, -0.1258544028, 0.1683344841, 0.1531974524, 0.2933406532, 0.29241395, 0.1951220781, 0.03196555, 0.0209627748, 0.0500942022, 0.2049099058, -0.079276301, 0.1006223708, -0.0031887074, 0.0226524621, 0.0487907827, -0.050570786, 0.2097900659, -0.253251195,...
https://github.com/huggingface/datasets/issues/2089
Add documentaton for dataset README.md files
Hi ! There's the tagging app at https://huggingface.co/datasets/tagging/ that you can use. It shows the list of all the tags you can use. It is based on all the tag sets defined in this folder: https://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources
Hi, the dataset README files have special headers. Somehow a documentation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passed to multilinguality? - what should be passed to language_creators? - which valu...
36
Add documentaton for dataset README.md files Hi, the dataset README files have special headers. Somehow a documenation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passted to multilinguality? - what shoul...
[ -0.2866168618, 0.0945326909, -0.1588486135, 0.1930642426, 0.276850462, 0.3156138062, 0.2661201656, 0.1931914836, 0.1716580242, 0.0233467873, -0.1272898912, 0.2412942052, -0.1264386624, 0.3597061932, 0.0914904326, 0.0267628301, 0.0860835165, -0.1028400958, 0.159258619, -0.184579...
https://github.com/huggingface/datasets/issues/2089
Add documentaton for dataset README.md files
I don't think so. Feel free to take a look at the tags of other models (example [here](https://huggingface.co/bert-base-uncased/blob/main/README.md)). But we should definitely have some docs or an app to write the tags. Feel free to open an issue in the `transformers` repo or in the `huggingface_hub` repo so we can dis...
Hi, the dataset README files have special headers. Somehow a documentation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passed to multilinguality? - what should be passed to language_creators? - which valu...
52
Add documentaton for dataset README.md files Hi, the dataset README files have special headers. Somehow a documenation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passted to multilinguality? - what shoul...
[ -0.1301450878, -0.0352430791, -0.0532185026, 0.2471045703, 0.2942427099, 0.2559413016, 0.4152005017, 0.1293423623, 0.0928789824, -0.0484242737, -0.1729550511, 0.070088461, -0.1718711853, 0.3322657049, 0.1102283075, -0.0799867138, 0.0745818689, -0.064012073, 0.0829298943, -0.342...
https://github.com/huggingface/datasets/issues/2083
`concatenate_datasets` throws error when changing the order of datasets to concatenate
Hi, this bug is related to `Dataset.{remove_columns, rename_column, flatten}` not propagating the change to the schema metadata when the info features are updated, so this line is the culprit: ```python common_voice_train = common_voice_train.remove_columns(['client_id', 'up_votes', 'down_votes', 'age', 'gender', ...
Hey, I played around with the `concatenate_datasets(...)` function: https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate_datasets#datasets.concatenate_datasets and noticed that when the order in which the datasets are concatenated changes an error is thrown where it shou...
70
`concatenate_datasets` throws error when changing the order of datasets to concatenate Hey, I played around with the `concatenate_datasets(...)` function: https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate_datasets#datasets.concatenate_datasets and noticed that when ...
[ -0.0796852559, -0.0035570066, 0.049324166, 0.0962453336, 0.4001434445, 0.2031108588, 0.1476462334, 0.1685889214, -0.4686339498, 0.0390346982, -0.1133435071, 0.223585844, 0.1865084916, 0.2279958427, -0.1524719894, -0.3764881492, 0.1803440005, -0.0335423462, -0.0318524428, 0.1589...
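A minimal sketch of the pattern that triggered this, following the `remove_columns` culprit identified above (column names are illustrative; on fixed versions both orders succeed):

```python
from datasets import Dataset, concatenate_datasets

a = Dataset.from_dict({"text": ["x", "y"], "client_id": ["1", "2"]})
b = Dataset.from_dict({"text": ["z"]})

a = a.remove_columns(["client_id"])

# a and b now have identical features, but at the time of this issue the
# underlying schema metadata of `a` was stale, so one concatenation order
# could fail while the other succeeded.
print(concatenate_datasets([a, b]).num_rows)
print(concatenate_datasets([b, a]).num_rows)
```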
https://github.com/huggingface/datasets/issues/2080
Multidimensional arrays in a Dataset
Hi ! This is actually supported, but not yet in `from_pandas`. You can use `from_dict` for now instead: ```python from datasets import Dataset, Array2D, Features, Value import pandas as pd import numpy as np dataset = { 'bbox': [ np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]), np.array([[1...
Hi, I'm trying to put together a `datasets.Dataset` to be used with LayoutLM which is available in `transformers`. This model requires as input the bounding boxes of each of the token of a sequence. This is when I realized that `Dataset` does not support multi-dimensional arrays as a value for a column in a row. ...
165
Multidimensional arrays in a Dataset Hi, I'm trying to put together a `datasets.Dataset` to be used with LayoutLM which is available in `transformers`. This model requires as input the bounding boxes of each of the token of a sequence. This is when I realized that `Dataset` does not support multi-dimensional array...
[ 0.0149022182, -0.2567944527, -0.0799354538, 0.1915314198, 0.449570477, 0.054927513, 0.8991121054, 0.101094313, 0.1804243475, 0.0678302348, -0.1109768748, 0.3204249144, -0.2819291949, 0.052211158, 0.1784114987, -0.4505749643, 0.1631147712, 0.2940376103, -0.3680664897, 0.09732827...
https://github.com/huggingface/datasets/issues/2080
Multidimensional arrays in a Dataset
Thanks for the explanation. With my original DataFrame, I did ``` dataset = dataset.to_dict("list") ``` and then the rest of the transformation from dictionary works just fine.
Hi, I'm trying to put together a `datasets.Dataset` to be used with LayoutLM which is available in `transformers`. This model requires as input the bounding boxes of each of the token of a sequence. This is when I realized that `Dataset` does not support multi-dimensional arrays as a value for a column in a row. ...
27
Multidimensional arrays in a Dataset Hi, I'm trying to put together a `datasets.Dataset` to be used with LayoutLM which is available in `transformers`. This model requires as input the bounding boxes of each of the token of a sequence. This is when I realized that `Dataset` does not support multi-dimensional array...
[ 0.0149022182, -0.2567944527, -0.0799354538, 0.1915314198, 0.449570477, 0.054927513, 0.8991121054, 0.101094313, 0.1804243475, 0.0678302348, -0.1109768748, 0.3204249144, -0.2819291949, 0.052211158, 0.1784114987, -0.4505749643, 0.1631147712, 0.2940376103, -0.3680664897, 0.09732827...
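A self-contained version of the `from_dict` approach shown above, sketched for the bounding-box case (shapes and column names are assumptions):

```python
import numpy as np
from datasets import Array2D, Dataset, Features, Sequence, Value

data = {
    "bbox": [
        np.array([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]),
        np.array([[5, 6, 7, 8], [5, 6, 7, 8], [5, 6, 7, 8]]),
    ],
    "input_ids": [[101, 102, 103], [101, 102, 103]],
}

features = Features(
    {
        # one fixed-shape 2D array of box coordinates per example
        "bbox": Array2D(shape=(3, 4), dtype="int64"),
        "input_ids": Sequence(Value("int32")),
    }
)

dataset = Dataset.from_dict(data, features=features)
print(dataset[0]["bbox"])
```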
https://github.com/huggingface/datasets/issues/2078
MemoryError when computing WER metric
Hi ! Thanks for reporting. We're indeed using `jiwer` to compute the WER. Maybe instead of calling `jiwer.wer` once for all the predictions/references we can compute the WER iteratively to avoid memory issues ? I'm not too familiar with `jiwer` but this must be possible. Currently the code to compute the WER is d...
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File ...
56
MemoryError when computing WER metric Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Tra...
[ 0.1249789745, -0.2007294446, 0.0587180294, 0.3256507516, 0.4333939254, 0.0737895146, -0.2180434167, 0.2950880527, 0.0288313068, 0.4309869707, 0.0965306684, -0.0946348086, -0.3058875501, -0.5752206445, -0.1799837202, -0.3347723484, -0.1305914521, -0.060313765, -0.0939676315, 0.2...
https://github.com/huggingface/datasets/issues/2078
MemoryError when computing WER metric
Hi, I've just pushed a pull request that is related to this issue https://github.com/huggingface/datasets/pull/2169. It's not iterative, but it should avoid memory errors. It's based on the editdistance python library. An iterative implementation should be as easy as storing scores and words stepwise and dividing at...
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File ...
48
MemoryError when computing WER metric Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Tra...
[ 0.1249789745, -0.2007294446, 0.0587180294, 0.3256507516, 0.4333939254, 0.0737895146, -0.2180434167, 0.2950880527, 0.0288313068, 0.4309869707, 0.0965306684, -0.0946348086, -0.3058875501, -0.5752206445, -0.1799837202, -0.3347723484, -0.1305914521, -0.060313765, -0.0939676315, 0.2...
https://github.com/huggingface/datasets/issues/2078
MemoryError when computing WER metric
I see, this was solved in the other thread. Ok, let me know if you want to switch the implementation for any reason :)
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File ...
23
MemoryError when computing WER metric Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Tra...
[ 0.1249789745, -0.2007294446, 0.0587180294, 0.3256507516, 0.4333939254, 0.0737895146, -0.2180434167, 0.2950880527, 0.0288313068, 0.4309869707, 0.0965306684, -0.0946348086, -0.3058875501, -0.5752206445, -0.1799837202, -0.3347723484, -0.1305914521, -0.060313765, -0.0939676315, 0.2...
https://github.com/huggingface/datasets/issues/2078
MemoryError when computing WER metric
Thanks for diving into this anyway ^^' As you said this actually got solved a few days ago
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File ...
18
MemoryError when computing WER metric Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Tra...
[ 0.1249789745, -0.2007294446, 0.0587180294, 0.3256507516, 0.4333939254, 0.0737895146, -0.2180434167, 0.2950880527, 0.0288313068, 0.4309869707, 0.0965306684, -0.0946348086, -0.3058875501, -0.5752206445, -0.1799837202, -0.3347723484, -0.1305914521, -0.060313765, -0.0939676315, 0.2...
https://github.com/huggingface/datasets/issues/2078
MemoryError when computing WER metric
Someone created an issue https://github.com/jitsi/jiwer/issues/40 at jiwer which shows that this is still a problem in the current version. Would be curious to figure out how this can be fixed by jiwer... :) I assume that it runs out of memory because it's trying to compute the WER over (too many) test samples?
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File ...
53
MemoryError when computing WER metric Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Tra...
[ 0.1249789745, -0.2007294446, 0.0587180294, 0.3256507516, 0.4333939254, 0.0737895146, -0.2180434167, 0.2950880527, 0.0288313068, 0.4309869707, 0.0965306684, -0.0946348086, -0.3058875501, -0.5752206445, -0.1799837202, -0.3347723484, -0.1305914521, -0.060313765, -0.0939676315, 0.2...
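The chunking idea hinted at in the comment above can be sketched as follows. This is a hedged workaround, not jiwer's or the metric's own code; the function name and `chunk_size` are illustrative assumptions.

```python
# Hedged sketch: score the corpus in fixed-size chunks so jiwer never
# has to align the whole test set at once. `chunk_size` is arbitrary.
import jiwer

def chunked_wer(references, predictions, chunk_size=1000):
    total_errors, total_words = 0.0, 0
    for i in range(0, len(references), chunk_size):
        refs = references[i:i + chunk_size]
        hyps = predictions[i:i + chunk_size]
        # Weight each chunk by its reference word count so the result
        # approximates a corpus-level WER rather than a mean of chunk WERs.
        words = sum(len(r.split()) for r in refs)
        total_errors += jiwer.wer(refs, hyps) * words
        total_words += words
    return total_errors / total_words
```

The word counting via `split()` is an approximation of jiwer's own tokenization, so the result may differ slightly from a single full-corpus call.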
https://github.com/huggingface/datasets/issues/2078
MemoryError when computing WER metric
Hi ! It's computed iteratively, so I'm not sure what could go wrong https://github.com/huggingface/datasets/blob/8afd0ba8c27800a55ea69d9fcd702dc97d9c16d8/metrics/wer/wer.py#L100-L106 @NiklasHoltmeyer what version of `datasets` are you running ? (An element-wise sketch follows this record.)
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File ...
22
MemoryError when computing WER metric Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Tra...
[ 0.1249789745, -0.2007294446, 0.0587180294, 0.3256507516, 0.4333939254, 0.0737895146, -0.2180434167, 0.2950880527, 0.0288313068, 0.4309869707, 0.0965306684, -0.0946348086, -0.3058875501, -0.5752206445, -0.1799837202, -0.3347723484, -0.1305914521, -0.060313765, -0.0939676315, 0.2...
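The linked lines score the inputs pair by pair. A paraphrased sketch of that element-wise approach, using jiwer's per-pair `compute_measures` API (not an exact copy of the linked file):

```python
# Paraphrased sketch of an element-wise WER, in the spirit of the
# metric code linked above; not an exact copy of that file.
from jiwer import compute_measures

def elementwise_wer(predictions, references):
    incorrect, total = 0, 0
    for prediction, reference in zip(predictions, references):
        measures = compute_measures(reference, prediction)
        incorrect += measures["substitutions"] + measures["deletions"] + measures["insertions"]
        total += measures["substitutions"] + measures["deletions"] + measures["hits"]
    return incorrect / total
```

Because each call aligns only one sentence pair, peak memory stays bounded by the longest single utterance rather than by the whole test set.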
https://github.com/huggingface/datasets/issues/2078
MemoryError when computing WER metric
One possible explanation might be that it is the user who is passing all the sentences in a single element to `wer.compute`? As the current implementation iterates over the elements of `predictions` and `references`, this can be problematic if `predictions` and `references` contain a single huge element each (a concrete example follows this record). This c...
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File ...
103
MemoryError when computing WER metric Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Tra...
[ 0.1249789745, -0.2007294446, 0.0587180294, 0.3256507516, 0.4333939254, 0.0737895146, -0.2180434167, 0.2950880527, 0.0288313068, 0.4309869707, 0.0965306684, -0.0946348086, -0.3058875501, -0.5752206445, -0.1799837202, -0.3347723484, -0.1305914521, -0.060313765, -0.0939676315, 0.2...
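To make that failure mode concrete, here is a hedged illustration of the two calling conventions; the sentence lists are invented:

```python
# Illustrative only: the metric iterates over list elements, so the
# shape of the inputs decides how large each alignment gets.
from datasets import load_metric

wer = load_metric("wer")
targets = ["hello world", "nice to meet you"]    # invented data
predicted = ["hello word", "nice to meet you"]   # invented data

# Intended usage: one element per utterance -> many small alignments.
wer.compute(predictions=predicted, references=targets)

# Problematic pattern: one huge element each -> a single, very large
# alignment that can exhaust memory on a real test set.
wer.compute(predictions=[" ".join(predicted)], references=[" ".join(targets)])
```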
https://github.com/huggingface/datasets/issues/2078
MemoryError when computing WER metric
Hi all, in my case I was using an older version of datasets and, as @albertvillanova points out, passing the full list of sentences for the metric calculation. The problem was in the way jiwer implements WER, as it tries to compute WER for the full list at once instead of doing it element-wise (see the sketch after this record). I think that with th...
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File ...
82
MemoryError when computing WER metric Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Tra...
[ 0.1249789745, -0.2007294446, 0.0587180294, 0.3256507516, 0.4333939254, 0.0737895146, -0.2180434167, 0.2950880527, 0.0288313068, 0.4309869707, 0.0965306684, -0.0946348086, -0.3058875501, -0.5752206445, -0.1799837202, -0.3347723484, -0.1305914521, -0.060313765, -0.0939676315, 0.2...
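A hedged sketch of the element-wise alternative described above. Note the design choice: averaging per-sentence scores weights every sentence equally, which differs slightly from a corpus-level WER weighted by sentence length.

```python
import jiwer

# Invented parallel lists of reference and hypothesis sentences.
refs = ["the cat sat on the mat", "it was raining"]
hyps = ["the cat sat on a mat", "it was raining"]

# One alignment per pair instead of one giant alignment for the corpus.
per_sentence = [jiwer.wer(r, h) for r, h in zip(refs, hyps)]
mean_wer = sum(per_sentence) / len(per_sentence)
print(mean_wer)
```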
https://github.com/huggingface/datasets/issues/2078
MemoryError when computing WER metric
@lhoestq I was using Datasets==1.5.0; with 1.6.1 it worked (at least the first run), but 1.5.0 is not compatible with my preprocessing. I can't save my dataset to a parquet file while using the latest datasets version (a parquet-writing sketch follows this record) -> ``` File "../preprocess_dataset.py", line 132, in <module> pq.write_table(train_dataset.da...
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File ...
96
MemoryError when computing WER metric Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Tra...
[ 0.1249789745, -0.2007294446, 0.0587180294, 0.3256507516, 0.4333939254, 0.0737895146, -0.2180434167, 0.2950880527, 0.0288313068, 0.4309869707, 0.0965306684, -0.0946348086, -0.3058875501, -0.5752206445, -0.1799837202, -0.3347723484, -0.1305914521, -0.060313765, -0.0939676315, 0.2...
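For context on the `pq.write_table` call in the truncated traceback above, a minimal, self-contained sketch of writing an Arrow table to Parquet; the column names are invented, and this does not claim to reproduce or fix the user's script:

```python
# Minimal sketch: pq.write_table expects a pyarrow.Table. If a library
# hands you a wrapper object instead, the raw table usually has to be
# unwrapped first (how depends on the library version).
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"input_ids": [[101, 2023, 102]], "labels": [0]})
pq.write_table(table, "train.parquet")
```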
https://github.com/huggingface/datasets/issues/2076
Issue: Dataset download error
Hi @XuhuiZhou, thanks for reporting this issue. Indeed, the old links are no longer valid (404 Not Found error), and the script must be updated with the new links to Google Drive.
The download link in the `iwslt2017.py` file does not seem to work anymore. For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` It would be nice if we could modify the script and use the new download link.
32
Issue: Dataset download error The download link in the `iwslt2017.py` file does not seem to work anymore. For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` It would be nice if we could modify the script and use the new download link. Hi @Xuhui...
[ -0.2337625623, 0.1458082795, 0.0028069636, -0.0309510417, -0.0274383835, 0.0824486315, 0.3759452701, 0.2059573531, 0.1537597179, 0.0144034671, 0.1848477572, -0.1723967791, 0.2339598238, 0.2969176471, 0.1645151824, -0.0703358129, 0.1124436706, 0.0390217677, -0.2078586519, 0.0084...
https://github.com/huggingface/datasets/issues/2076
Issue: Dataset download error
It would be nice to update the URLs indeed ! To do this, you just need to replace the URLs in `iwslt2017.py` and then update the dataset_infos.json file with ``` datasets-cli test ./datasets/iwslt2017 --all_configs --save_infos --ignore_verifications ``` (A placeholder sketch of the URL swap follows this record.)
The download link in the `iwslt2017.py` file does not seem to work anymore. For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` It would be nice if we could modify the script and use the new download link.
37
Issue: Dataset download error The download link in the `iwslt2017.py` file does not seem to work anymore. For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` It would be nice if we could modify the script and use the new download link. It would ...
[ -0.2658807337, 0.1200054884, -0.0836952478, -0.1060318723, -0.0561512299, 0.013199131, 0.1135272607, 0.3138659894, 0.1883040071, -0.0665743649, 0.0962726325, 0.0473743118, 0.2630379498, 0.2957365513, 0.0410316512, 0.0087641925, 0.0977709815, -0.0120597463, -0.2833595276, 0.0477...
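For illustration, the URL swap inside a dataset script usually amounts to editing a module-level constant and regenerating the metadata. Everything below is a placeholder sketch; the real constant name and URL in `iwslt2017.py` may differ, and the example link is not the real mirror.

```python
# Placeholder sketch only. After editing the script, regenerate the
# metadata with:
#   datasets-cli test ./datasets/iwslt2017 --all_configs --save_infos --ignore_verifications
_DOWNLOAD_URL = "https://example.org/iwslt2017/{pair}.tgz"  # placeholder

def _url_for(pair: str) -> str:
    # Build the per-language-pair archive URL, e.g. for "zh-en".
    return _DOWNLOAD_URL.format(pair=pair)
```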