| html_url (string) | title (string) | comments (string) | body (string, nullable) | comment_length (int64) | text (string) | embeddings (list) |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/2400 | Concatenate several datasets with removed columns is not working. | Hi,
did you fill out the env info section manually or by copy-pasting the output of the `datasets-cli env` command?
This code should work without issues on version 1.6.2 (I'm working on master (version 1.6.2.dev0) and can't reproduce this error). | ## Describe the bug
You can't concatenate datasets when you removed columns before.
## Steps to reproduce the bug
```python
from datasets import load_dataset, concatenate_datasets
wikiann = load_dataset("wikiann", "en")
wikiann["train"] = wikiann["train"].remove_columns(["langs", "spans"])
wikiann["test"] =... | 40 | Concatenate several datasets with removed columns is not working.
## Describe the bug
You can't concatenate datasets when you removed columns before.
## Steps to reproduce the bug
```python
from datasets import load_dataset, concatenate_datasets
wikiann = load_dataset("wikiann", "en")
wikiann["train"] = w... | [
-0.0503960066,
-0.1630987227,
-0.0372492559,
0.2921581268,
0.0712868497,
0.1608152837,
0.4275568128,
0.3972991109,
-0.1280240417,
0.0971979126,
-0.2116854042,
0.2830113173,
0.1164147109,
0.2401472479,
-0.3315172493,
-0.2897304296,
0.3033925295,
-0.0497660302,
-0.1116108745,
0.0... |
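The schema requirement behind this error can be illustrated in plain Python (a sketch of the rule, not the library's code: `concatenate_datasets` needs every dataset to expose the same columns, so extra columns such as wikiann's `langs` and `spans` must be removed from all splits, not just one; `align_columns` is a hypothetical helper):

```python
def align_columns(columns_a, columns_b):
    # Concatenation requires both datasets to share the same column set;
    # report which columns must be removed from each side first.
    a, b = set(columns_a), set(columns_b)
    return {"remove_from_a": sorted(a - b), "remove_from_b": sorted(b - a)}

# wikiann's columns before/after remove_columns, as in the report
diff = align_columns(["tokens", "ner_tags", "langs", "spans"],
                     ["tokens", "ner_tags"])
```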
https://github.com/huggingface/datasets/issues/2396 | strange datasets from OSCAR corpus | Hi ! Thanks for reporting
cc @pjox is this an issue from the data ?
Anyway we should at least mention that OSCAR could contain such content in the dataset card, you're totally right @jerryIsHere | ![Screenshot 2021-05-13 8.51.50 PM](https://user-images.githubusercontent.com/50871412/118081285-004f6e00-b3e7-11eb-8d37-5f4cdc45f6d4.png)
![Screenshot 2021-05-13 8.52.23 PM](https://user-images.githubusercontent.com/50871412/118081302-04379000-b3e7-11eb-9aa8-27de0a6c5500.png)
From the [official site ](https://oscar-corpus.com/), the Yue Chinese dataset should have 2.2K... | 35 | strange datasets from OSCAR corpus


From the [official site ](https://oscar-corpus.com/), the... | [
0.2676994801,
0.0570944957,
-0.0023011297,
0.5770169497,
0.1565005481,
0.0609704219,
-0.012423533,
0.2492677718,
-0.264593184,
0.0486130267,
-0.5796422958,
-0.1079981253,
0.043104548,
-0.0511755645,
-0.0094941296,
-0.3019306958,
0.1105531976,
0.0599582195,
0.0344042145,
-0.3692... |
https://github.com/huggingface/datasets/issues/2396 | strange datasets from OSCAR corpus | Hi @jerryIsHere, sorry for the late response! Sadly this is normal; the problem comes from fasttext's classifier, which we used to create the original corpus. In general the classifier is not really capable of properly recognizing Yue Chinese, so the file ends up being just noise from Common Crawl. Some of these proble... | ![Screenshot 2021-05-13 8.51.50 PM](https://user-images.githubusercontent.com/50871412/118081285-004f6e00-b3e7-11eb-8d37-5f4cdc45f6d4.png)
![Screenshot 2021-05-13 8.52.23 PM](https://user-images.githubusercontent.com/50871412/118081302-04379000-b3e7-11eb-9aa8-27de0a6c5500.png)
From the [official site ](https://oscar-corpus.com/), the Yue Chinese dataset should have 2.2K... | 93 | strange datasets from OSCAR corpus


From the [official site ](https://oscar-corpus.com/), the... | [
0.2676994801,
0.0570944957,
-0.0023011297,
0.5770169497,
0.1565005481,
0.0609704219,
-0.012423533,
0.2492677718,
-0.264593184,
0.0486130267,
-0.5796422958,
-0.1079981253,
0.043104548,
-0.0511755645,
-0.0094941296,
-0.3019306958,
0.1105531976,
0.0599582195,
0.0344042145,
-0.3692... |
https://github.com/huggingface/datasets/issues/2391 | Missing original answers in kilt-TriviaQA | That could be useful indeed! Feel free to open a PR on the dataset card if you already have some code that runs, otherwise we'll take care of it soon :) | I previously opened an issue at https://github.com/facebookresearch/KILT/issues/42 but from the answer of @fabiopetroni it seems that the problem comes from HF-datasets
## Describe the bug
The `answer` field in kilt-TriviaQA, e.g. `kilt_tasks['train_triviaqa'][0]['output']['answer']` contains a list of alternative ... | 31 | Missing original answers in kilt-TriviaQA
I previously opened an issue at https://github.com/facebookresearch/KILT/issues/42 but from the answer of @fabiopetroni it seems that the problem comes from HF-datasets
## Describe the bug
The `answer` field in kilt-TriviaQA, e.g. `kilt_tasks['train_triviaqa'][0]['output'... | [
0.5064163208,
-0.3483676016,
-0.0351328589,
0.062851347,
-0.1213427335,
-0.1011348441,
0.0344418846,
0.3133408725,
0.1268100291,
0.183856383,
0.1640157402,
0.4927674234,
0.1122972146,
0.4052691162,
-0.2509844005,
0.3889524341,
-0.0829043239,
-0.0019128356,
-0.0266752802,
-0.318... |
https://github.com/huggingface/datasets/issues/2391 | Missing original answers in kilt-TriviaQA | I can open a PR but there are 2 details to fix:
- the name for the corresponding key (e.g. `original_answer`)
- how to implement it: I’m not sure what happens when you map `lambda x: {'input': ...}` as it keeps the other keys (e.g. `output`) intact but here since we want to set a nested value (e.g. `x['output']['origi... | I previously opened an issue at https://github.com/facebookresearch/KILT/issues/42 but from the answer of @fabiopetroni it seems that the problem comes from HF-datasets
## Describe the bug
The `answer` field in kilt-TriviaQA, e.g. `kilt_tasks['train_triviaqa'][0]['output']['answer']` contains a list of alternative ... | 84 | Missing original answers in kilt-TriviaQA
I previously opened an issue at https://github.com/facebookresearch/KILT/issues/42 but from the answer of @fabiopetroni it seems that the problem comes from HF-datasets
## Describe the bug
The `answer` field in kilt-TriviaQA, e.g. `kilt_tasks['train_triviaqa'][0]['output'... | [
0.5155783892,
-0.3537393212,
0.0327611715,
0.0395883359,
-0.1053856537,
-0.0799774751,
-0.0295904893,
0.2565176487,
0.0855737627,
0.1321028769,
0.0442917421,
0.5437101722,
0.1936984211,
0.4238487482,
-0.1813196242,
0.3287058473,
0.0641256198,
0.1050168648,
0.1597806513,
-0.2960... |
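The nested-update concern raised above can be sketched in plain Python (`add_original_answer` and the sample record are hypothetical, for illustration only): since the mapped function's returned top-level keys replace the example's, the whole nested `output` dict has to be rebuilt and returned.

```python
def add_original_answer(example, original_answer):
    # Rebuild the nested dict: returning {"output": {...}} replaces the
    # whole top-level "output" key, so copy its existing fields first.
    output = dict(example["output"])
    output["original_answer"] = original_answer
    return {"output": output}

example = {
    "input": "Who wrote Hamlet?",  # hypothetical record
    "output": {"answer": ["Shakespeare", "W. Shakespeare"]},
}
example.update(add_original_answer(example, "William Shakespeare"))
```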
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | Looks like there are multiple issues regarding this (#2386, #2322) and it's a WIP #2329. Currently these datasets are being loaded in-memory which is causing this issue. Quoting @mariosasko here for a quick fix:
> set `keep_in_memory` to `False` when loading a dataset (`sst = load_dataset("sst", keep_in_memory=False... | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 69 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.2158486694,
0.0203109235,
0.0422427356,
0.140415296,
0.1821745038,
0.0462313406,
0.1409807801,
0.3139567971,
0.0551588237,
-0.0644534901,
-0.2359383404,
-0.0579473227,
-0.0247007441,
-0.3315779269,
-0.1081902683,
0.01980526,
0.2144650817,
0.0230759755,
-0.1678414047,
-0.0409... |
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | Hi ! Since `datasets` 1.6.0 we no longer keep small datasets (<250MB) on disk and load them in RAM instead by default. This makes data processing and iterating on data faster. However datasets in RAM currently have no way to reload previous results from the cache (since nothing is written on disk). We are working on ma... | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 106 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.2158486694,
0.0203109235,
0.0422427356,
0.140415296,
0.1821745038,
0.0462313406,
0.1409807801,
0.3139567971,
0.0551588237,
-0.0644534901,
-0.2359383404,
-0.0579473227,
-0.0247007441,
-0.3315779269,
-0.1081902683,
0.01980526,
0.2144650817,
0.0230759755,
-0.1678414047,
-0.0409... |
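The behavior described above can be sketched as a simple size check (an illustration of the rule discussed in this thread, not the library's actual code; the 250 MB default and the env var name come from the comments):

```python
import os

DEFAULT_MAX_IN_MEMORY = 250 * 1024 * 1024  # 250 MB default from the thread

def keep_in_memory_by_default(dataset_size_bytes):
    # Datasets under the cap are loaded in RAM (faster, but nothing is
    # written to disk, so map results cannot be reloaded from the cache).
    raw = os.environ.get("MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES", "")
    max_size = float(raw) if raw else DEFAULT_MAX_IN_MEMORY
    return dataset_size_bytes <= max_size
```

With this sketch, setting the env var to `0` means no dataset fits under the cap, so nothing is kept in memory.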
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | OK, it doesn't look like we can use the proposed workaround - see https://github.com/huggingface/transformers/issues/11801
Could you please add an env var so that we can turn off this behavior, which is unwanted in our situation? It is really problematic for dev work, when one needs to restart the training very often an... | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 104 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.2158486694,
0.0203109235,
0.0422427356,
0.140415296,
0.1821745038,
0.0462313406,
0.1409807801,
0.3139567971,
0.0551588237,
-0.0644534901,
-0.2359383404,
-0.0579473227,
-0.0247007441,
-0.3315779269,
-0.1081902683,
0.01980526,
0.2144650817,
0.0230759755,
-0.1678414047,
-0.0409... |
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | Hi @stas00,
You are right: an env variable is needed to turn off this behavior. I am adding it.
For the moment there is a config parameter to turn off this behavior: `datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES = None`
You can find this info in the docs:
- in the docstring of the parameter `keep_in_m... | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 115 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.2158486694,
0.0203109235,
0.0422427356,
0.140415296,
0.1821745038,
0.0462313406,
0.1409807801,
0.3139567971,
0.0551588237,
-0.0644534901,
-0.2359383404,
-0.0579473227,
-0.0247007441,
-0.3315779269,
-0.1081902683,
0.01980526,
0.2144650817,
0.0230759755,
-0.1678414047,
-0.0409... |
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | Yes, but this still requires one to edit the standard example scripts, so if I'm doing that already I may just as well add `keep_in_memory=False`.
Maybe the low-hanging fruit is to add a `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES` env var to match the config, and if the user sets it to 0, then it'll be the same as `keep_in... | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 58 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.2158486694,
0.0203109235,
0.0422427356,
0.140415296,
0.1821745038,
0.0462313406,
0.1409807801,
0.3139567971,
0.0551588237,
-0.0644534901,
-0.2359383404,
-0.0579473227,
-0.0247007441,
-0.3315779269,
-0.1081902683,
0.01980526,
0.2144650817,
0.0230759755,
-0.1678414047,
-0.0409... |
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | @stas00, however, for the moment, setting the value to `0` is equivalent to the opposite, i.e. `keep_in_memory=True`. This means the max size up to which I load in memory is 0 bytes.
Tell me if this is logical/convenient, or I should change it. | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 42 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.2158486694,
0.0203109235,
0.0422427356,
0.140415296,
0.1821745038,
0.0462313406,
0.1409807801,
0.3139567971,
0.0551588237,
-0.0644534901,
-0.2359383404,
-0.0579473227,
-0.0247007441,
-0.3315779269,
-0.1081902683,
0.01980526,
0.2144650817,
0.0230759755,
-0.1678414047,
-0.0409... |
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | In my PR, to turn off the current default behavior, you should set the env variable to one of: `{"", "OFF", "NO", "FALSE"}`.
For example:
```
MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=
``` | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 26 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.2158486694,
0.0203109235,
0.0422427356,
0.140415296,
0.1821745038,
0.0462313406,
0.1409807801,
0.3139567971,
0.0551588237,
-0.0644534901,
-0.2359383404,
-0.0579473227,
-0.0247007441,
-0.3315779269,
-0.1081902683,
0.01980526,
0.2144650817,
0.0230759755,
-0.1678414047,
-0.0409... |
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | IMHO, this behaviour is not very intuitive, as 0 is a normal quantity of bytes. So `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=0` to me reads as don't cache ever.
Also, a "SIZE_IN_BYTES" variable that can take one of `{"", "OFF", "NO", "FALSE"}` is quite odd.
I think supporting a very simple `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTE... | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 89 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.2158486694,
0.0203109235,
0.0422427356,
0.140415296,
0.1821745038,
0.0462313406,
0.1409807801,
0.3139567971,
0.0551588237,
-0.0644534901,
-0.2359383404,
-0.0579473227,
-0.0247007441,
-0.3315779269,
-0.1081902683,
0.01980526,
0.2144650817,
0.0230759755,
-0.1678414047,
-0.0409... |
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | I understand your point @stas00, as I am not very convinced by the current implementation.
My concern is: which numerical value should a user then pass if they want `keep_in_memory=True` by default, independently of dataset size? Currently it is `0` for this case.
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 41 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.2158486694,
0.0203109235,
0.0422427356,
0.140415296,
0.1821745038,
0.0462313406,
0.1409807801,
0.3139567971,
0.0551588237,
-0.0644534901,
-0.2359383404,
-0.0579473227,
-0.0247007441,
-0.3315779269,
-0.1081902683,
0.01980526,
0.2144650817,
0.0230759755,
-0.1678414047,
-0.0409... |
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | That's a good question, and again a normal byte value can be used for that:
```
MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=1e12 # (~2**40)
```
Since it's unlikely that anybody will have more than 1TB RAM.
It's also silly that it uses BYTES and not MBYTES - that level of refinement doesn't seem to be of a practical use in ... | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 127 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.2158486694,
0.0203109235,
0.0422427356,
0.140415296,
0.1821745038,
0.0462313406,
0.1409807801,
0.3139567971,
0.0551588237,
-0.0644534901,
-0.2359383404,
-0.0579473227,
-0.0247007441,
-0.3315779269,
-0.1081902683,
0.01980526,
0.2144650817,
0.0230759755,
-0.1678414047,
-0.0409... |
https://github.com/huggingface/datasets/issues/2387 | datasets 1.6 ignores cache | Great! Thanks, @stas00.
I am implementing your suggestion to turn off the default behavior when the value is set to `0`.
For the other suggestion (allowing different metric prefixes), I will discuss with @lhoestq to agree on its implementation. | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | 35 | datasets 1.6 ignores cache
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/ope... | [
-0.2158486694,
0.0203109235,
0.0422427356,
0.140415296,
0.1821745038,
0.0462313406,
0.1409807801,
0.3139567971,
0.0551588237,
-0.0644534901,
-0.2359383404,
-0.0579473227,
-0.0247007441,
-0.3315779269,
-0.1081902683,
0.01980526,
0.2144650817,
0.0230759755,
-0.1678414047,
-0.0409... |
https://github.com/huggingface/datasets/issues/2377 | ArrowDataset.save_to_disk produces files that cannot be read using pyarrow.feather | Hi ! This is because we are actually using the arrow streaming format. We plan to switch to the arrow IPC format.
More info at #1933 | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from pyarrow import feather
dataset = load_dataset('imdb', split='train')
dataset.save_to_disk('dataset_dir')
table = feather.read_table('dataset_dir/dataset.arro... | 26 | ArrowDataset.save_to_disk produces files that cannot be read using pyarrow.feather
## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from pyarrow import feather
dataset = load_dataset('imdb', split='train')
data... | [
-0.204033345,
0.1602187455,
-0.0240917876,
0.3715009391,
0.148974359,
0.2544103265,
-0.0069091367,
0.5975213647,
-0.516123414,
-0.1188539118,
-0.3667039871,
0.6461248398,
-0.0722651407,
-0.7478834987,
0.0741809383,
0.2550879121,
0.1921529323,
0.102844052,
-0.0828898773,
-0.0657... |
https://github.com/huggingface/datasets/issues/2373 | Loading dataset from local path | The version below works; I checked again in the docs, and `data_files` should be a path.
```
ds = datasets.load_dataset('my_script.py',
data_files='/data/dir/corpus.txt',
cache_dir='.')
``` | I'm trying to load a local dataset with the code below
```
ds = datasets.load_dataset('my_script.py',
data_files='corpus.txt',
data_dir='/data/dir',
cache_dir='.')
```
But internally a BuilderConfig is created, which tries to u... | 21 | Loading dataset from local path
I'm trying to load a local dataset with the code below
```
ds = datasets.load_dataset('my_script.py',
data_files='corpus.txt',
data_dir='/data/dir',
cache_dir='.')
```
But internally a BuilderC... | [
-0.363489449,
0.2240850776,
0.0534441322,
0.4514978528,
0.1415302753,
-0.120705992,
0.4987969398,
0.0649359003,
0.1139451861,
0.1279035211,
0.1670623124,
0.116478838,
-0.0000267204,
0.1618969738,
0.2033522278,
0.2234761715,
-0.0005673284,
0.076843597,
-0.156112045,
-0.068644799... |
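The difference between the failing and working calls can be sketched as a path-resolution rule (a hypothetical helper for illustration, using `posixpath` for deterministic separators; the actual builder logic lives inside `datasets`):

```python
import posixpath

def resolve_data_file(data_files, data_dir=None):
    # The workaround above passes an absolute path in `data_files`; the
    # reporter expected a relative `data_files` to be joined onto
    # `data_dir`, as sketched here.
    if data_dir is not None and not posixpath.isabs(data_files):
        return posixpath.join(data_dir, data_files)
    return data_files
```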
https://github.com/huggingface/datasets/issues/2363 | Trying to use metric.compute but get OSError | Also, I tested the function on some small data and got the same message:
```
Python 3.8.5 (default, Jan 27 2021, 15:41:15)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_metric
>>> metric = load_metric('accuracy')
>>> metric.add_batch(pre... | I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 model.train()
197 for step, batch... | 113 | Trying to use metric.compute but get OSError
I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 ... | [
-0.394418478,
-0.2529712617,
-0.0330116637,
0.3631942272,
0.2531315088,
-0.1011047661,
0.2328527421,
0.1392672658,
0.2106888741,
0.6745792627,
-0.0271203984,
0.0674139038,
-0.0561969616,
0.0007949398,
-0.1410536468,
-0.1666644216,
-0.1239941716,
0.0129454276,
0.0000074526,
0.07... |
https://github.com/huggingface/datasets/issues/2363 | Trying to use metric.compute but get OSError | Hi @hyusterr,
If you look at the example provided in `metrics/accuracy.py`, it only does `metric.compute()` to calculate the accuracy. Here's an example:
```
from datasets import load_metric
metric = load_metric('accuracy')
output = metric.compute(predictions=[1, 1, 1, 1], references=[1, 1, 0, 0])
print(output['a... | I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 model.train()
197 for step, batch... | 44 | Trying to use metric.compute but get OSError
I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 ... | [
-0.394418478,
-0.2529712617,
-0.0330116637,
0.3631942272,
0.2531315088,
-0.1011047661,
0.2328527421,
0.1392672658,
0.2106888741,
0.6745792627,
-0.0271203984,
0.0674139038,
-0.0561969616,
0.0007949398,
-0.1410536468,
-0.1666644216,
-0.1239941716,
0.0129454276,
0.0000074526,
0.07... |
https://github.com/huggingface/datasets/issues/2363 | Trying to use metric.compute but get OSError | I thought I could use Metric to collect predictions and references; this follows the steps from Hugging Face's sample colab.
BTW, I fixed the problem by setting another cache_dir in load_metric, but I'm still wondering about the mechanism. | I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 model.train()
197 for step, batch... | 37 | Trying to use metric.compute but get OSError
I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 ... | [
-0.394418478,
-0.2529712617,
-0.0330116637,
0.3631942272,
0.2531315088,
-0.1011047661,
0.2328527421,
0.1392672658,
0.2106888741,
0.6745792627,
-0.0271203984,
0.0674139038,
-0.0561969616,
0.0007949398,
-0.1410536468,
-0.1666644216,
-0.1239941716,
0.0129454276,
0.0000074526,
0.07... |
https://github.com/huggingface/datasets/issues/2363 | Trying to use metric.compute but get OSError | I tried this code on a colab notebook and it worked fine (with gpu enabled):
```
from datasets import load_metric
metric = load_metric('accuracy')
output = metric.add_batch(predictions=[1, 1, 1, 1], references=[1, 1, 0, 0])
final_score = metric.compute()
print(final_score) # 0.5
```
Also, in `load_metric`, I s... | I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 model.train()
197 for step, batch... | 53 | Trying to use metric.compute but get OSError
I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 ... | [
-0.394418478,
-0.2529712617,
-0.0330116637,
0.3631942272,
0.2531315088,
-0.1011047661,
0.2328527421,
0.1392672658,
0.2106888741,
0.6745792627,
-0.0271203984,
0.0674139038,
-0.0561969616,
0.0007949398,
-0.1410536468,
-0.1666644216,
-0.1239941716,
0.0129454276,
0.0000074526,
0.07... |
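What the accuracy metric computes in the snippet above can be reproduced in a few lines of plain Python (a sketch for illustration, not the metric script itself):

```python
def accuracy(predictions, references):
    # Fraction of predictions that equal their reference labels.
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

score = accuracy(predictions=[1, 1, 1, 1], references=[1, 1, 0, 0])  # 0.5, matching the comment above
```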
https://github.com/huggingface/datasets/issues/2363 | Trying to use metric.compute but get OSError | Hi ! By default it caches the predictions and references used to compute the metric in `~/.cache/huggingface/datasets/metrics` (not `~/.datasets/`). Let me update the documentation @bhavitvyamalik .
The cache is used to store all the predictions and references passed to `add_batch` for example in order to compute th... | I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 model.train()
197 for step, batch... | 87 | Trying to use metric.compute but get OSError
I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?
```python
195 for epoch in range(num_train_epochs):
196 ... | [
-0.394418478,
-0.2529712617,
-0.0330116637,
0.3631942272,
0.2531315088,
-0.1011047661,
0.2328527421,
0.1392672658,
0.2106888741,
0.6745792627,
-0.0271203984,
0.0674139038,
-0.0561969616,
0.0007949398,
-0.1410536468,
-0.1666644216,
-0.1239941716,
0.0129454276,
0.0000074526,
0.07... |
https://github.com/huggingface/datasets/issues/2356 | How to Add New Metrics Guide | Hi ! sorry for the late response
It would be fantastic to have a guide for adding metrics as well ! Currently we only have this template here:
https://github.com/huggingface/datasets/blob/master/templates/new_metric_script.py
We can also include test utilities for metrics in the guide.
We have a pytest suite... | **Is your feature request related to a problem? Please describe.**
Currently there is an absolutely fantastic guide for how to contribute a new dataset to the library. However, there isn't one for adding new metrics.
**Describe the solution you'd like**
I'd like for a guide in a similar style to the dataset guide ... | 176 | How to Add New Metrics Guide
**Is your feature request related to a problem? Please describe.**
Currently there is an absolutely fantastic guide for how to contribute a new dataset to the library. However, there isn't one for adding new metrics.
**Describe the solution you'd like**
I'd like for a guide in a simi... | [
0.0184294712,
0.0091659082,
-0.0002698584,
-0.1558316499,
0.0394448303,
0.0882946551,
-0.0663277581,
0.030540023,
0.0264216457,
0.0817609876,
0.0160234831,
0.2692243457,
-0.1336125731,
0.2791261971,
0.2526021898,
-0.1617622823,
-0.1102727205,
-0.0691409409,
0.1617093384,
0.0204... |
https://github.com/huggingface/datasets/issues/2350 | `FaissIndex.save` throws error on GPU | Just in case, this is a workaround that I use in my code and it seems to do the job.
```python
if use_gpu_index:
data["train"]._indexes["text_emb"].faiss_index = faiss.index_gpu_to_cpu(data["train"]._indexes["text_emb"].faiss_index)
``` | ## Describe the bug
After training an index with a factory string `OPQ16_128,IVF512,PQ32` on GPU, `.save_faiss_index` throws this error.
```
File "index_wikipedia.py", line 119, in <module>
data["train"].save_faiss_index("text_emb", index_save_path)
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8... | 27 | `FaissIndex.save` throws error on GPU
## Describe the bug
After training an index with a factory string `OPQ16_128,IVF512,PQ32` on GPU, `.save_faiss_index` throws this error.
```
File "index_wikipedia.py", line 119, in <module>
data["train"].save_faiss_index("text_emb", index_save_path)
File "/home/v... | [
-0.1138691455,
0.2311162949,
0.0493214503,
0.2159425914,
0.3592372239,
0.1624384522,
0.5158486962,
0.5090634823,
0.2508895695,
0.224903509,
0.0669302344,
0.2369600236,
0.050932359,
-0.0680038705,
-0.1380848289,
-0.0389069952,
0.2700771391,
0.2628393471,
0.1573567986,
-0.1693147... |
https://github.com/huggingface/datasets/issues/2347 | Add an API to access the language and pretty name of a dataset | Hi ! With @bhavitvyamalik we discussed having something like
```python
from datasets import load_dataset_card
dataset_card = load_dataset_card("squad")
print(dataset_card.metadata.pretty_name)
# Stanford Question Answering Dataset (SQuAD)
print(dataset_card.metadata.languages)
# ["en"]
```
What do yo... | It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Transformers examples scripts. | 95 | Add an API to access the language and pretty name of a dataset
It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Trans... | [
-0.1883868426,
0.11107862,
-0.1189652979,
0.3318738341,
0.3078960478,
0.1275812238,
0.2714891434,
0.2705739737,
-0.1630475968,
0.2370298654,
0.1823675483,
0.5307033658,
-0.2200145423,
0.342644304,
0.2076427042,
-0.1973492205,
-0.0799135193,
-0.0694942623,
0.0691230595,
-0.06899... |
https://github.com/huggingface/datasets/issues/2347 | Add an API to access the language and pretty name of a dataset | What dataset_info method are you talking about @julien-c ? In `huggingface_hub` I can only see `model_info`. | It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Transformers examples scripts. | 16 | Add an API to access the language and pretty name of a dataset
It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Trans... | [
-0.2759426832,
-0.1491748691,
-0.0873623937,
0.4150778353,
0.34213835,
0.118972078,
0.3897389174,
0.3709772229,
-0.0223612078,
0.3746368289,
0.0779035091,
0.5090551376,
-0.0601513535,
0.3841818571,
0.3461504281,
-0.2023470253,
-0.1172479391,
0.0129679786,
-0.0803342462,
-0.1211... |
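The `load_dataset_card` API proposed in the rows above would surface fields like `pretty_name` and `languages` from the YAML-style front matter of a dataset's README. A minimal stand-in parser, assuming a flat `key: value` layout plus simple `- item` lists (the real API and card schema may differ):

```python
# Sketch only: reads the front-matter block between the first pair of "---"
# fences. The CARD string and field names are illustrative, not the real card.
CARD = """---
pretty_name: Stanford Question Answering Dataset (SQuAD)
languages:
- en
---
# Dataset Card for SQuAD
"""

def parse_card_metadata(text):
    lines = text.splitlines()
    assert lines[0] == "---", "no front matter found"
    end = lines[1:].index("---") + 1          # index of the closing fence
    meta, current_key = {}, None
    for line in lines[1:end]:
        if line.startswith("- ") and current_key:
            meta.setdefault(current_key, []).append(line[2:].strip())
        elif ":" in line:
            key, _, value = line.partition(":")
            current_key = key.strip()
            if value.strip():
                meta[current_key] = value.strip()
    return meta

metadata = parse_card_metadata(CARD)
print(metadata["pretty_name"])   # Stanford Question Answering Dataset (SQuAD)
print(metadata["languages"])     # ['en']
```

This matches the usage sketched in the comment: the card object exposes the metadata as plain attributes or keys.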
https://github.com/huggingface/datasets/issues/2345 | [Question] How to move and reuse preprocessed dataset? | <s>Hi :) Can you share with us the code you used ?</s>
EDIT: from https://github.com/huggingface/transformers/issues/11665#issuecomment-838348291 I understand you're using the run_clm.py script. Can you share your logs ?
| Hi, I am training a gpt-2 from scratch using run_clm.py.
I want to move and reuse the preprocessed dataset (it takes 2 hours to preprocess),
I tried to :
copy path_to_cache_dir/datasets to new_cache_dir/datasets
set export HF_DATASETS_CACHE="new_cache_dir/"
but the program still re-preprocess the whole dataset... | 28 | [Question] How to move and reuse preprocessed dataset?
Hi, I am training a gpt-2 from scratch using run_clm.py.
I want to move and reuse the preprocessed dataset (it takes 2 hours to preprocess),
I tried to :
copy path_to_cache_dir/datasets to new_cache_dir/datasets
set export HF_DATASETS_CACHE="new_cache_di... | [
-0.1601244658,
-0.326133579,
0.0653829128,
0.3396511972,
0.286459893,
0.1715733111,
0.1354573667,
0.1812241077,
-0.0809118152,
-0.1558371335,
-0.1377900839,
0.039877139,
-0.1689591855,
0.0587201193,
0.2985677421,
-0.2673768997,
0.155224815,
0.0188225321,
-0.3201617599,
0.047562... |
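The `export HF_DATASETS_CACHE=...` step in the issue above only takes effect if the variable is visible to the training process. A sketch of how such an override is typically resolved (explicit argument, then environment variable, then a built-in default; the exact lookup order in the library may differ):

```python
# Sketch: env-var override resolution, as commonly implemented.
# The default path mirrors the documented ~/.cache/huggingface/datasets layout.
import os

DEFAULT_CACHE = os.path.join(os.path.expanduser("~"), ".cache",
                             "huggingface", "datasets")

def resolve_cache_dir(cache_dir=None):
    # explicit argument wins, then HF_DATASETS_CACHE, then the default
    return cache_dir or os.environ.get("HF_DATASETS_CACHE", DEFAULT_CACHE)

os.environ["HF_DATASETS_CACHE"] = "/mnt/new_cache_dir/datasets"
print(resolve_cache_dir())             # /mnt/new_cache_dir/datasets
print(resolve_cache_dir("/explicit"))  # /explicit
```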
https://github.com/huggingface/datasets/issues/2345 | [Question] How to move and reuse preprocessed dataset? | Also note that for the caching to work, you must reuse the exact same parameters as in the first run. Did you change any parameter ? The `preprocessing_num_workers` should also stay the same | Hi, I am training a gpt-2 from scratch using run_clm.py.
I want to move and reuse the preprocessed dataset (it takes 2 hours to preprocess),
I tried to :
copy path_to_cache_dir/datasets to new_cache_dir/datasets
set export HF_DATASETS_CACHE="new_cache_dir/"
but the program still re-preprocess the whole dataset... | 33 | [Question] How to move and reuse preprocessed dataset?
Hi, I am training a gpt-2 from scratch using run_clm.py.
I want to move and reuse the preprocessed dataset (it takes 2 hours to preprocess),
I tried to :
copy path_to_cache_dir/datasets to new_cache_dir/datasets
set export HF_DATASETS_CACHE="new_cache_di... | [
-0.2378593534,
-0.1530077606,
0.0416560732,
0.3084846735,
0.1833168715,
0.2134884894,
-0.0576085076,
0.2500772178,
-0.1095816568,
-0.2252921611,
-0.0464637913,
0.1580040753,
-0.1260225922,
-0.0357171781,
0.2702541649,
-0.1903043389,
0.1926394999,
-0.0515301898,
-0.2087466568,
-... |
https://github.com/huggingface/datasets/issues/2345 | [Question] How to move and reuse preprocessed dataset? | > Also note that for the caching to work, you must reuse the exact same parameters as in the first run. Did you change any parameter ? The `preprocessing_num_workers` should also stay the same
I only changed the `preprocessing_num_workers` maybe it is the problem~ I will try again~ | Hi, I am training a gpt-2 from scratch using run_clm.py.
I want to move and reuse the preprocessed dataset (it takes 2 hours to preprocess),
I tried to :
copy path_to_cache_dir/datasets to new_cache_dir/datasets
set export HF_DATASETS_CACHE="new_cache_dir/"
but the program still re-preprocess the whole dataset... | 48 | [Question] How to move and reuse preprocessed dataset?
Hi, I am training a gpt-2 from scratch using run_clm.py.
I want to move and reuse the preprocessed dataset (it takes 2 hours to preprocess),
I tried to :
copy path_to_cache_dir/datasets to new_cache_dir/datasets
set export HF_DATASETS_CACHE="new_cache_di... | [
-0.2302791923,
-0.1823707819,
0.0412170999,
0.3404656649,
0.198071003,
0.2086262852,
-0.0375071615,
0.2514613271,
-0.103703633,
-0.2087677568,
-0.0314360633,
0.1630257368,
-0.1305733025,
-0.044030685,
0.2582703829,
-0.1911839098,
0.1755549014,
-0.0452621803,
-0.1915272325,
-0.0... |
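The exchange above hinges on cache fingerprinting: the cache key is (roughly) a hash of the processing function and its parameters, so changing `preprocessing_num_workers` produces a new fingerprint and forces a fresh preprocessing run. A simplified sketch of that mechanism, not the library's actual implementation:

```python
# Sketch: why changing any map() parameter invalidates the cache.
import hashlib
import json

def fingerprint(fn_source: str, **params) -> str:
    payload = fn_source + json.dumps(params, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()[:16]

first_run  = fingerprint("def tokenize(x): ...", num_proc=8, batched=True)
same_again = fingerprint("def tokenize(x): ...", num_proc=8, batched=True)
changed    = fingerprint("def tokenize(x): ...", num_proc=16, batched=True)

print(first_run == same_again)  # True  -> cache hit
print(first_run == changed)     # False -> re-preprocess from scratch
```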
https://github.com/huggingface/datasets/issues/2344 | Is there a way to join multiple datasets in one? | Hi ! We don't have `join`/`merge` on a certain column as in pandas.
Maybe you can just use the [concatenate_datasets](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate#datasets.concatenate_datasets) function.
| **Is your feature request related to a problem? Please describe.**
I need to join 2 datasets, one that is in the hub and another I've created from my files. Is there an easy way to join these 2?
**Describe the solution you'd like**
I'd like to join them with a merge or join method, just like pandas dataframes.
**Add... | 21 | Is there a way to join multiple datasets in one?
**Is your feature request related to a problem? Please describe.**
I need to join 2 datasets, one that is in the hub and another I've created from my files. Is there an easy way to join these 2?
**Describe the solution you'd like**
I'd like to join them with a merge o... | [
-0.4773366153,
-0.6935089827,
-0.0979439393,
0.179587394,
0.1399443597,
0.3247910142,
-0.2019500881,
0.0913564041,
0.0232806709,
0.0314715132,
-0.5830706954,
-0.0147506073,
0.2058378011,
0.4802642167,
0.0973033533,
-0.3352278173,
0.2061463147,
0.0533262491,
-0.0682544932,
0.190... |
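As the answer above notes, `concatenate_datasets` only stacks rows; it does not align two datasets on a shared key the way a pandas merge does. A plain-Python sketch of the requested join semantics (the column names `id`, `text`, `label` are made up for illustration):

```python
# Sketch: a pandas-style inner join on a key column, in plain Python.
def inner_join(left, right, on):
    index = {row[on]: row for row in right}
    return [{**l, **index[l[on]]} for l in left if l[on] in index]

hub_rows   = [{"id": 1, "text": "hello"}, {"id": 2, "text": "world"}]
local_rows = [{"id": 2, "label": 0}, {"id": 3, "label": 1}]

joined = inner_join(hub_rows, local_rows, on="id")
print(joined)  # [{'id': 2, 'text': 'world', 'label': 0}]
```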
https://github.com/huggingface/datasets/issues/2337 | NonMatchingChecksumError for web_of_science dataset | I've raised a PR for this. Should work with `dataset = load_dataset("web_of_science", "WOS11967", ignore_verifications=True)` once it gets merged into the main branch. Thanks for reporting this! | NonMatchingChecksumError when trying to download the web_of_science dataset.
>NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://data.mendeley.com/datasets/9rw3vkcfy4/6/files/c9ea673d-5542-44c0-ab7b-f1311f7d61df/WebOfScience.zip?dl=1']
Setting `ignore_verfications=True` results... | 25 | NonMatchingChecksumError for web_of_science dataset
NonMatchingChecksumError when trying to download the web_of_science dataset.
>NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://data.mendeley.com/datasets/9rw3vkcfy4/6/files/c9ea673d-5542-44c0-ab7b-f1311f7d61df/WebOfScience.zi... | [
-0.0851586312,
-0.0791751593,
-0.0687736571,
0.2122484297,
0.1446296424,
0.1670501083,
-0.0674465969,
0.356159687,
0.335711211,
0.1703766286,
-0.1841287464,
-0.0882943049,
0.032024473,
-0.0491008647,
-0.1792023033,
0.3910675943,
0.1115435809,
0.0704057142,
0.1489616632,
-0.0231... |
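The error in this row comes from checksum verification: the loader compares a hash of the downloaded bytes against a recorded value, and `ignore_verifications=True` simply skips the comparison. A sketch of that check (the real library's verification covers more than this):

```python
# Sketch: checksum verification of downloaded data, and what skipping it means.
import hashlib

def verify_checksum(data: bytes, expected_sha256: str,
                    ignore_verifications: bool = False) -> bool:
    if ignore_verifications:
        return True                      # skip the comparison entirely
    return hashlib.sha256(data).hexdigest() == expected_sha256

payload = b"WebOfScience archive bytes (stand-in for the real zip)"
recorded = hashlib.sha256(payload).hexdigest()

ok      = verify_checksum(payload, recorded)
stale   = verify_checksum(b"re-uploaded file with different bytes", recorded)
skipped = verify_checksum(b"re-uploaded file with different bytes", recorded,
                          ignore_verifications=True)
print(ok, stale, skipped)  # True False True
```

A `False` here is what surfaces as `NonMatchingChecksumError`: it usually means the host re-uploaded the file, so the recorded checksum is stale.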
https://github.com/huggingface/datasets/issues/2330 | Allow passing `desc` to `tqdm` in `Dataset.map()` | I think the user could pass the `desc` parameter to `map` so that it can be displayed in the tqdm progress bar, as suggested by @cccntu.
When there's no multiprocessing, the `desc` of the progress bar could be the `desc` passed by the user.
In multiprocessing, we were already using a `desc` equal to `"#" + str(rank... | It's normal to have many `map()` calls, and some of them can take a few minutes,
it would be nice to have a description on the progress bar.
Alternative solution:
Print the description before/after the `map()` call. | 145 | Allow passing `desc` to `tqdm` in `Dataset.map()`
It's normal to have many `map()` calls, and some of them can take a few minutes,
it would be nice to have a description on the progress bar.
Alternative solution:
Print the description before/after the `map()` call.
I think the user could pass the `desc` parame... | [
-0.3543033898,
0.0359196924,
-0.0605330952,
-0.1690315008,
0.3744508326,
-0.1667571962,
0.2543699741,
0.2177438587,
-0.3104321063,
0.4162535965,
0.2674727738,
0.6729351282,
-0.0182512179,
0.0839985982,
-0.1400042027,
0.0010157244,
-0.2245402187,
0.2880353034,
-0.1542478055,
0.1... |
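What the requested `desc` buys you is a labelled progress bar instead of an anonymous one. A dependency-free stand-in that collects the labelled progress lines tqdm would render in place (the real `map` hands `desc` straight through to tqdm):

```python
# Sketch: a map() that labels its progress with a user-supplied desc.
def map_with_desc(examples, fn, desc=""):
    out, lines = [], []
    total = len(examples)
    for i, ex in enumerate(examples, 1):
        out.append(fn(ex))
        lines.append(f"{desc}: {i}/{total}")  # tqdm would redraw this in place
    return out, lines

result, progress = map_with_desc([1, 2, 3], lambda x: x * 2, desc="tokenizing")
print(progress[-1])  # tokenizing: 3/3
```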
https://github.com/huggingface/datasets/issues/2327 | A syntax error in example | cc @beurkinger but I think this has been fixed internally and will soon be updated right ? | 
Sorry to report with an image, I can't find the template source code of this snippet. | 17 | A syntax error in example

Sorry to report with an image, I can't find the template source code of this snippet.
cc @beurkinger but I think this has been fixed internally and will soon be updated right ? | [
0.0715635717,
-0.4951070547,
-0.1797280759,
-0.1540905684,
0.0774733424,
-0.2150442153,
0.1975010335,
0.2825199366,
-0.3641222715,
0.1687916964,
0.2558272183,
0.3997047842,
0.0529386811,
0.0039085615,
0.0124355946,
-0.1999748051,
0.0728852749,
0.2567011416,
0.0948016867,
0.0897... |
https://github.com/huggingface/datasets/issues/2323 | load_dataset("timit_asr") gives back duplicates of just one sample text | Thanks @ekeleshian for having reported.
I am closing this issue now that you have updated `datasets`. Feel free to reopen it if the problem persists. | ## Describe the bug
When you look up on key ["train"] and then ['text'], you get back a list with just one sentence duplicated 4620 times. Namely, the sentence "Would such an act of refusal be useful?". Similarly when you look up ['test'] and then ['text'], the list is one sentence repeated "The bungalow was pleasant... | 24 | load_dataset("timit_asr") gives back duplicates of just one sample text
## Describe the bug
When you look up on key ["train"] and then ['text'], you get back a list with just one sentence duplicated 4620 times. Namely, the sentence "Would such an act of refusal be useful?". Similarly when you look up ['test'] and t... | [
0.2821879983,
-0.4890302122,
0.0636293143,
0.3835483789,
0.155076921,
0.0093224104,
0.2025513202,
0.1971772313,
-0.1960432082,
0.0225639325,
-0.0537860133,
0.2588160634,
-0.06076096,
-0.0019164644,
0.1897579581,
0.0341103002,
0.1301133335,
-0.0039514499,
-0.0855849013,
-0.21644... |
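A bug like the one reported here, every row carrying the same sentence, is cheap to catch with a duplication sanity check on the text column:

```python
# Sketch: a sanity check that would have flagged the timit_asr duplication bug.
def duplication_report(texts):
    unique = set(texts)
    return {"rows": len(texts), "unique": len(unique),
            "all_identical": len(unique) == 1}

buggy_split = ["Would such an act of refusal be useful?"] * 5
report = duplication_report(buggy_split)
print(report)  # {'rows': 5, 'unique': 1, 'all_identical': True}
```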
https://github.com/huggingface/datasets/issues/2322 | Calls to map are not cached. | I tried upgrading to `datasets==1.6.2` and downgrading to `1.6.0`. Both versions produce the same output.
Downgrading to `1.5.0` works and produces the following output for me:
```bash
Downloading: 9.20kB [00:00, 3.94MB/s]
Downloading: 5.99kB [00:00, 3.29MB/s]
No config sp... | ## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):
print("executed", i[:10])... | 387 | Calls to map are not cached.
## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):... | [
-0.1652985811,
-0.3959718943,
-0.0088476408,
0.1456609815,
0.2062432468,
-0.1019177735,
0.300268203,
0.1251289845,
0.3953082561,
0.0206205156,
-0.0369779579,
0.2428550422,
0.1699712873,
-0.2849352956,
0.188086763,
0.2596048415,
0.2713772058,
-0.0385336578,
-0.2219657153,
-0.174... |
https://github.com/huggingface/datasets/issues/2322 | Calls to map are not cached. | Hi,
set `keep_in_memory` to False when loading a dataset (`sst = load_dataset("sst", keep_in_memory=False)`) to prevent it from loading in-memory. Currently, in-memory datasets fail to find cached files due to this check (always False for them):
https://github.com/huggingface/datasets/blob/241a0b4a3a868778ee91e76... | ## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):
print("executed", i[:10])... | 46 | Calls to map are not cached.
## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):... | [
-0.1652985811,
-0.3959718943,
-0.0088476408,
0.1456609815,
0.2062432468,
-0.1019177735,
0.300268203,
0.1251289845,
0.3953082561,
0.0206205156,
-0.0369779579,
0.2428550422,
0.1699712873,
-0.2849352956,
0.188086763,
0.2596048415,
0.2713772058,
-0.0385336578,
-0.2219657153,
-0.174... |
https://github.com/huggingface/datasets/issues/2322 | Calls to map are not cached. | Hi @villmow, thanks for reporting.
As @mariosasko has pointed out, we did not consider this case when introducing the feature of automatic in-memory for small datasets. This needs to be fixed. | ## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):
print("executed", i[:10])... | 31 | Calls to map are not cached.
## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):... | [
-0.1652985811,
-0.3959718943,
-0.0088476408,
0.1456609815,
0.2062432468,
-0.1019177735,
0.300268203,
0.1251289845,
0.3953082561,
0.0206205156,
-0.0369779579,
0.2428550422,
0.1699712873,
-0.2849352956,
0.188086763,
0.2596048415,
0.2713772058,
-0.0385336578,
-0.2219657153,
-0.174... |
https://github.com/huggingface/datasets/issues/2322 | Calls to map are not cached. | Hi ! Currently a dataset that is in memory doesn't know in which directory it has to read/write cache files.
On the other hand, a dataset loaded from disk (via memory mapping) uses the directory in which the dataset is located to read/write cache files.
Because of that, currently in-memory d... | ## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):
print("executed", i[:10])... | 82 | Calls to map are not cached.
## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):... | [
-0.1652985811,
-0.3959718943,
-0.0088476408,
0.1456609815,
0.2062432468,
-0.1019177735,
0.300268203,
0.1251289845,
0.3953082561,
0.0206205156,
-0.0369779579,
0.2428550422,
0.1699712873,
-0.2849352956,
0.188086763,
0.2596048415,
0.2713772058,
-0.0385336578,
-0.2219657153,
-0.174... |
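The behaviour described in this comment can be sketched as follows: a memory-mapped dataset derives its cache-file location from the directory of its on-disk arrow file, while an in-memory dataset has no such directory to write to. The paths and naming scheme below are illustrative, not the library's exact ones:

```python
# Sketch: why in-memory datasets had nowhere to write cache files.
import os

def cache_file_path(dataset_file, fingerprint):
    if dataset_file is None:          # in-memory dataset: no backing directory
        return None
    return os.path.join(os.path.dirname(dataset_file),
                        f"cache-{fingerprint}.arrow")

on_disk = cache_file_path("/data/sst/train.arrow", "abc123")
in_mem  = cache_file_path(None, "abc123")
print(on_disk)  # e.g. /data/sst/cache-abc123.arrow
print(in_mem)   # None
```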
https://github.com/huggingface/datasets/issues/2319 | UnicodeDecodeError for OSCAR (Afrikaans) | Thanks for reporting, @sgraaf.
I am going to have a look at it.
I guess the expected codec is "UTF-8". Normally, when no codec is explicitly passed, Python uses one which is platform-dependent. For Linux machines, the default codec is `utf_8`, which is OK. However for Windows machines, the default codec is `cp125... | ## Describe the bug
When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("oscar", "unshuffled_deduplicated_af")
```... | 57 | UnicodeDecodeError for OSCAR (Afrikaans)
## Describe the bug
When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset(... | [
-0.0694421232,
-0.0689754039,
-0.0686591119,
0.5462195277,
0.4998695552,
0.0235808808,
0.1604427248,
0.1600640714,
-0.2532078028,
0.2616167367,
0.1081411839,
0.074565649,
-0.1329876184,
-0.1621267796,
0.0694556236,
-0.2411272228,
0.0238732491,
-0.0095883496,
0.1490587145,
-0.11... |
https://github.com/huggingface/datasets/issues/2319 | UnicodeDecodeError for OSCAR (Afrikaans) | @sgraaf, I have just merged the fix in the master branch.
You can either:
- install `datasets` from source code
- wait until we make the next release of `datasets`
- set the `utf-8` codec as your default instead of `cp1252`. This can be done by activating the Python [UTF-8 mode](https://www.python.org/dev/peps/pe... | ## Describe the bug
When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("oscar", "unshuffled_deduplicated_af")
```... | 66 | UnicodeDecodeError for OSCAR (Afrikaans)
## Describe the bug
When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset(... | [
-0.0694421232,
-0.0689754039,
-0.0686591119,
0.5462195277,
0.4998695552,
0.0235808808,
0.1604427248,
0.1600640714,
-0.2532078028,
0.2616167367,
0.1081411839,
0.074565649,
-0.1329876184,
-0.1621267796,
0.0694556236,
-0.2411272228,
0.0238732491,
-0.0095883496,
0.1490587145,
-0.11... |
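The failure mode behind this issue can be shown in a few lines: OSCAR files are UTF-8, but decoding the same bytes with the Windows default `cp1252` either raises `UnicodeDecodeError` or silently produces mojibake. The example string is illustrative:

```python
# Sketch: UTF-8 bytes decoded with the right codec vs. the Windows default.
text = "Afrikaans: môre sê"
raw = text.encode("utf-8")

good = raw.decode("utf-8")      # what open(..., encoding="utf-8") does
mangled = raw.decode("cp1252")  # what the platform default did on Windows
print(good)     # Afrikaans: môre sê
print(mangled)  # mojibake: each multi-byte character splits into two glyphs
```

This is why the fix is to pass the codec explicitly (or enable PEP 540 UTF-8 mode) rather than rely on the platform default.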
https://github.com/huggingface/datasets/issues/2318 | [api request] API to obtain "dataset_module" dynamic path? | Hi @richardliaw,
First, thanks for the compliments.
In relation to your request: currently, the dynamic modules path is obtained this way:
```python
from datasets.load import init_dynamic_modules, MODULE_NAME_FOR_DYNAMIC_MODULES
dynamic_modules_path = init_dynamic_modules(MODULE_NAME_FOR_DYNAMIC_MODULES)
... | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
This is an awesome library.
It seems like the dynamic module path in this library has broken some of hyperparameter tuning functionality: https://discuss.huggingface.co/t/using-hyperparamet... | 63 | [api request] API to obtain "dataset_module" dynamic path?
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
This is an awesome library.
It seems like the dynamic module path in this library has broken some of hyperparameter tuning funct... | [
-0.1038031429,
-0.2795982659,
-0.1310286522,
0.0527427234,
0.3254469037,
-0.2634875178,
-0.0613951944,
0.1563363969,
-0.2127551734,
0.3393892348,
-0.0749454126,
0.7778300643,
-0.4725227654,
0.3166540861,
0.2836634815,
-0.4223525226,
-0.1075098291,
0.0503051355,
-0.2626202703,
0... |
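Based on the snippet quoted above, a simplified sketch of what an `init_dynamic_modules`-style helper does: pick (or create) a directory for dynamically written dataset modules and make its parent importable. This is not the real implementation; the env-var name and defaults are assumptions for illustration:

```python
# Sketch: resolving and registering a dynamic-modules directory.
import os
import sys
import tempfile

def init_dynamic_modules(name="datasets_modules", hf_modules_cache=None):
    root = hf_modules_cache or os.environ.get(
        "HF_MODULES_CACHE", os.path.join(tempfile.gettempdir(), "hf_modules"))
    dynamic_modules_path = os.path.join(root, name)
    os.makedirs(dynamic_modules_path, exist_ok=True)
    if root not in sys.path:        # so the generated modules are importable
        sys.path.append(root)
    return dynamic_modules_path

path = init_dynamic_modules(hf_modules_cache=tempfile.mkdtemp())
print(os.path.isdir(path))  # True
```

Exposing this path programmatically is exactly what the tuning frameworks in the issue need, since worker processes must point at the same directory.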
https://github.com/huggingface/datasets/issues/2318 | [api request] API to obtain "dataset_module" dynamic path? | Hi @richardliaw, the feature is on the master branch and will be included in the next release in a couple of weeks. | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
This is an awesome library.
It seems like the dynamic module path in this library has broken some of hyperparameter tuning functionality: https://discuss.huggingface.co/t/using-hyperparamet... | 22 | [api request] API to obtain "dataset_module" dynamic path?
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
This is an awesome library.
It seems like the dynamic module path in this library has broken some of hyperparameter tuning funct... | [
-0.127039969,
-0.2253050208,
-0.1368042678,
0.1205857098,
0.2940101326,
-0.3005486727,
-0.0833771154,
0.1747500002,
-0.2136731446,
0.3675939441,
-0.0689019337,
0.7768820524,
-0.4621844292,
0.3776901066,
0.2999123037,
-0.4458759129,
-0.0975355133,
0.0067401747,
-0.3226865232,
0.... |
https://github.com/huggingface/datasets/issues/2308 | Add COCO evaluation metrics | Hi @NielsRogge,
I'd like to contribute these metrics to datasets. Let's start with `CocoEvaluator` first? Currently how are you sending the ground truths and predictions in coco_evaluator?
| I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects which are defined in the original repository ([here... | 28 | Add COCO evaluation metrics
I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects which are defined in ... | [
-0.309964776,
-0.2109861821,
-0.0590806678,
0.0364998505,
0.1357953995,
-0.128011927,
0.0327848941,
-0.1446953267,
-0.2019299418,
0.1269809604,
-0.682729125,
0.123969458,
-0.1395808607,
0.1238020509,
-0.2454463989,
-0.0975499526,
-0.004918267,
-0.0187487733,
-0.2175132483,
0.12... |
https://github.com/huggingface/datasets/issues/2308 | Add COCO evaluation metrics | Great!
Here's a notebook that illustrates how I'm using `CocoEvaluator`: https://drive.google.com/file/d/1VV92IlaUiuPOORXULIuAdtNbBWCTCnaj/view?usp=sharing
The evaluation is near the end of the notebook.
| I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects which are defined in the original repository ([here... | 20 | Add COCO evaluation metrics
I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects which are defined in ... | [
-0.2997464538,
-0.217308417,
-0.0662909895,
0.020168744,
0.1273828,
-0.1438954771,
0.0299046244,
-0.1443724185,
-0.2041878402,
0.1345137656,
-0.6726329327,
0.1221150458,
-0.1353888959,
0.1291401982,
-0.2434386462,
-0.086376369,
-0.0150744235,
-0.0097321682,
-0.2172837108,
0.126... |
https://github.com/huggingface/datasets/issues/2308 | Add COCO evaluation metrics | I went through the code you've [mentioned](https://github.com/facebookresearch/detr/blob/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5/datasets/coco_eval.py) and I think there are 2 options on how we can go ahead:
1) Implement how DETR people have done this (they're relying very heavily on the official implementation and... | I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects which are defined in the original repository ([here... | 133 | Add COCO evaluation metrics
I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects which are defined in ... | [
-0.2762453556,
-0.2085914016,
-0.0654033571,
0.0466304719,
0.1744097322,
-0.1518914253,
0.0246489551,
-0.1304700524,
-0.1931970417,
0.1273567677,
-0.6784847379,
0.1213096976,
-0.1210909337,
0.11382626,
-0.2690064609,
-0.095024012,
-0.0252746753,
-0.0111771049,
-0.2147165835,
0.... |
https://github.com/huggingface/datasets/issues/2308 | Add COCO evaluation metrics | Ok, thanks for the update.
Indeed, the metrics API of Datasets is framework agnostic, so we can't rely on a PyTorch-only implementation.
[This file](https://github.com/cocodataset/cocoapi/blob/ed842bffd41f6ff38707c4f0968d2cfd91088688/PythonAPI/pycocotools/cocoeval.py) is probably what we need to implement.
| I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects which are defined in the original repository ([here... | 31 | Add COCO evaluation metrics
I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects which are defined in ... | [
-0.2992970049,
-0.2228717506,
-0.0646217316,
0.0151258446,
0.1335905045,
-0.1412789524,
0.0264431983,
-0.1363360733,
-0.1954928935,
0.1230500937,
-0.680290401,
0.1259328127,
-0.1547849774,
0.1198724136,
-0.2277212143,
-0.0956799686,
-0.0067188358,
-0.025702687,
-0.2065681964,
0... |
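The file referenced above (`cocoeval.py`) is ultimately built on a box-IoU primitive, which is easy to express in framework-agnostic Python, in line with the constraint that the metrics API cannot be PyTorch-only. Boxes are `[x1, y1, x2, y2]`; this is the core idea only, not a full COCO evaluator:

```python
# Sketch: intersection-over-union of two axis-aligned boxes.
def box_iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(box_iou([0, 0, 2, 2], [0, 0, 2, 2]))  # 1.0
print(box_iou([0, 0, 2, 2], [1, 1, 3, 3]))  # 1/7, about 0.1428
```

COCO's AP metrics then sweep IoU thresholds (0.5 to 0.95) over matched detections, which is where the bulk of `cocoeval.py` comes in.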
https://github.com/huggingface/datasets/issues/2301 | Unable to setup dev env on Windows | Hi @gchhablani,
There are some 3rd-party dependencies that require building code in C. In this case, it is the library `python-Levenshtein`.
On Windows, in order to be able to build C code, you need to install at least `Microsoft C++ Build Tools` version 14. You can find more info here: https://visualstudio.micr... | Hi
I tried installing the `".[dev]"` version on Windows 10 after cloning.
Here is the error I'm facing:
```bat
(env) C:\testing\datasets>pip install -e ".[dev]"
Obtaining file:///C:/testing/datasets
Requirement already satisfied: numpy>=1.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datas... | 52 | Unable to setup dev env on Windows
Hi
I tried installing the `".[dev]"` version on Windows 10 after cloning.
Here is the error I'm facing:
```bat
(env) C:\testing\datasets>pip install -e ".[dev]"
Obtaining file:///C:/testing/datasets
Requirement already satisfied: numpy>=1.17 in c:\programdata\anaconda3\e... | [
-0.4121342003,
-0.0855781734,
-0.0977277234,
-0.1387810409,
0.3114473522,
0.0683752969,
0.3448514938,
0.0465875827,
-0.1001304463,
0.1562346071,
0.0433797985,
0.2424266189,
0.1581841856,
0.2162972838,
0.0545784608,
0.0232142229,
0.1360676885,
0.43907547,
-0.2185963988,
0.037438... |
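The build failure above is triggered because `python-Levenshtein` compiles a C extension, which is why MSVC build tools are needed. For context, the distance itself is a small dynamic program; a pure-Python fallback like this needs no compiler, at the cost of speed:

```python
# Sketch: two-row Levenshtein edit distance, pure Python.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```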
https://github.com/huggingface/datasets/issues/2300 | Add VoxPopuli | I'm happy to take this on:) One question: The original unlabelled data is stored unsegmented (see e.g. https://github.com/facebookresearch/voxpopuli/blob/main/voxpopuli/get_unlabelled_data.py#L30), but segmenting the audio in the dataset would require a dependency on something like soundfile or torchaudio. An alternati... | ## Adding a Dataset
- **Name:** Voxpopuli
- **Description:** VoxPopuli's raw data is collected from 2009-2020 European Parliament event recordings
- **Paper:** https://arxiv.org/abs/2101.00390
- **Data:** https://github.com/facebookresearch/voxpopuli
- **Motivation:** biggest unlabeled speech dataset
**Note**:... | 65 | Add VoxPopuli
## Adding a Dataset
- **Name:** Voxpopuli
- **Description:** VoxPopuli's raw data is collected from 2009-2020 European Parliament event recordings
- **Paper:** https://arxiv.org/abs/2101.00390
- **Data:** https://github.com/facebookresearch/voxpopuli
- **Motivation:** biggest unlabeled speech data... | [
-0.2925266922,
0.2328723967,
-0.0486934707,
-0.0753520727,
-0.1558789164,
-0.1852182299,
0.3809294999,
0.2100731432,
-0.0260843709,
0.2548183203,
-0.2955556214,
0.0939443707,
-0.51401335,
0.2282820791,
-0.0125597799,
-0.3366333246,
0.0352493264,
0.1592860818,
0.1379972845,
-0.1... |
https://github.com/huggingface/datasets/issues/2300 | Add VoxPopuli | Hey @jfainberg,
This sounds great! I think adding a dependency would not be a big problem, however automatically segmenting the data probably means that it would take a very long time to do:
```python
dataset = load_dataset("voxpopuli", "french")
```
=> so as a start I think your option 2 is the way to go! | ## Adding a Dataset
- **Name:** Voxpopuli
- **Description:** VoxPopuli's raw data is collected from 2009-2020 European Parliament event recordings
- **Paper:** https://arxiv.org/abs/2101.00390
- **Data:** https://github.com/facebookresearch/voxpopuli
- **Motivation:** biggest unlabeled speech dataset
**Note**:... | 54 | Add VoxPopuli
## Adding a Dataset
- **Name:** Voxpopuli
- **Description:** VoxPopuli's raw data is collected from 2009-2020 European Parliament event recordings
- **Paper:** https://arxiv.org/abs/2101.00390
- **Data:** https://github.com/facebookresearch/voxpopuli
- **Motivation:** biggest unlabeled speech data... | [
-0.3622049093,
0.206050843,
-0.0382299051,
0.0014165512,
-0.121406801,
-0.1296065599,
0.285892576,
0.348855257,
0.1810038835,
0.2673178911,
-0.1220444217,
0.1734329909,
-0.5791188478,
0.2486566752,
0.1030064523,
-0.1610256284,
0.1628734469,
0.2866097987,
0.0956862718,
-0.188000... |
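Option 2 discussed above amounts to shipping the unsegmented waveform plus segment timestamps and letting the user cut it. A sketch of that cutting step; the field layout and the 16 kHz rate are assumptions for illustration, not the real VoxPopuli schema:

```python
# Sketch: slicing an unsegmented recording by (start, end) timestamps.
SAMPLE_RATE = 16_000

def cut_segments(waveform, segments, sample_rate=SAMPLE_RATE):
    """waveform: sequence of samples; segments: list of (start_sec, end_sec)."""
    return [waveform[int(s * sample_rate):int(e * sample_rate)]
            for s, e in segments]

full_audio = list(range(3 * SAMPLE_RATE))            # 3 s stand-in recording
pieces = cut_segments(full_audio, [(0.0, 1.0), (1.5, 3.0)])
print([len(p) for p in pieces])  # [16000, 24000]
```

This keeps the dataset script free of audio-decoding dependencies while still making the official segmentation usable.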
https://github.com/huggingface/datasets/issues/2294 | Slow #0 when using map to tokenize. | Hi ! Have you tried other values for `preprocessing_num_workers` ? Is it always process 0 that is slower ?
There is no difference between process 0 and the others except that it processes the first shard of the dataset. | Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
loa... | 39 | Slow #0 when using map to tokenize.
Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_... | [
-0.4360867441,
-0.3340416849,
-0.0331812017,
-0.0555870757,
0.0697018057,
-0.194478631,
0.3812373281,
0.2073156983,
-0.2748003006,
0.0517778769,
0.315757215,
0.3878785372,
-0.2385891527,
-0.0081706708,
-0.2220513821,
0.1165851876,
0.1717215329,
0.1350862235,
0.369813174,
-0.136... |
https://github.com/huggingface/datasets/issues/2294 | Slow #0 when using map to tokenize. | Hi, I have found the reason for it. Before using the map function to tokenize the data, I concatenate the wikipedia and bookcorpus first, like this:
```if args.dataset_name1 is not None:
dataset1 = load_dataset(args.dataset_name1, args.dataset_config_name1, split="train")
dataset1 = dataset1.remove_co... | Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
loa... | 172 | Slow #0 when using map to tokenize.
Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_... | [
https://github.com/huggingface/datasets/issues/2294 | Slow #0 when using map to tokenize. | That makes sense ! You can indeed use `map` on both datasets separately and then concatenate.
Another option is to concatenate, then shuffle, and then `map`. | Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
loa... | 26 | Slow #0 when using map to tokenize.
Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_... | [
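The slowdown described in this thread is easy to see without `datasets` at all: with `num_proc=N`, `map` hands each worker one contiguous shard, so if the concatenated corpus puts all the long wikipedia documents first, worker 0 ends up with nearly all the tokenization work. A minimal stdlib sketch with toy documents (the shard helper below is an illustration, not the real implementation):

```python
import random

# Toy corpus: long "wikipedia-like" docs first, short "bookcorpus-like" docs after,
# mimicking two datasets concatenated without shuffling.
docs = ["w" * 400] * 4 + ["b" * 50] * 4

def contiguous_shards(items, num_proc):
    """Split items into num_proc contiguous slices, the way map(num_proc=...) shards."""
    size = len(items) // num_proc
    return [items[i * size:(i + 1) * size] for i in range(num_proc)]

# Characters each worker must tokenize: shard 0 carries almost all the work.
work = [sum(len(doc) for doc in shard) for shard in contiguous_shards(docs, 2)]
# work == [1600, 200]

# Shuffling before sharding spreads long and short docs across workers.
random.seed(0)
shuffled = docs[:]
random.shuffle(shuffled)
balanced = [sum(len(doc) for doc in shard) for shard in contiguous_shards(shuffled, 2)]
```

This is why the `concatenate`, then `shuffle`, then `map` order suggested above balances the per-process load.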
https://github.com/huggingface/datasets/issues/2288 | Load_dataset for local CSV files | Hi,
this is not a standard CSV file (requires additional preprocessing) so I wouldn't label this as a bug. You could parse the examples with the regex module or the string API to extract the data, but the following approach is probably the easiest (once you load the data):
```python
import ast
# load the dataset ... | The method load_dataset fails to correctly load a dataset from csv.
Moreover, I am working on a token-classification task ( POS tagging) , where each row in my CSV contains two columns each of them having a list of strings.
row example:
```tokens | labels
['I' , 'am', 'John'] | ['PRON', 'AUX', 'PROPN' ]
``... | 72 | Load_dataset for local CSV files
The method load_dataset fails to correctly load a dataset from csv.
Moreover, I am working on a token-classification task ( POS tagging) , where each row in my CSV contains two columns each of them having a list of strings.
row example:
```tokens | labels
['I' , 'am', 'John']... | [
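The parsing step suggested above is plain Python: each cell is a string that looks like a list, and `ast.literal_eval` turns it back into one. A small sketch with made-up rows (the column names follow the example in this thread):

```python
import ast

# Rows as a naive CSV load would return them: list-valued columns arrive as strings.
rows = [
    {"tokens": "['I' , 'am', 'John']", "labels": "['PRON', 'AUX', 'PROPN' ]"},
]

def parse_row(row):
    """Safely evaluate the string-encoded lists back into Python lists."""
    return {key: ast.literal_eval(value) for key, value in row.items()}

parsed = [parse_row(row) for row in rows]
# parsed[0]["tokens"] == ['I', 'am', 'John']
```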
https://github.com/huggingface/datasets/issues/2288 | Load_dataset for local CSV files | Hi,
Thanks for the reply.
I have already used ```ast.literal_eval``` to evaluate the string into a list, but I was getting another error:
```
ArrowInvalid: Could not convert X with type str: tried to convert to int
```
Why does this happen? Should labels be mapped to their ids and stored as int instead of str? | The method load_dataset fails to correctly load a dataset from csv.
Moreover, I am working on a token-classification task ( POS tagging) , where each row in my CSV contains two columns each of them having a list of strings.
row example:
```tokens | labels
['I' , 'am', 'John'] | ['PRON', 'AUX', 'PROPN' ]
``... | 55 | Load_dataset for local CSV files
The method load_dataset fails to correctly load a dataset from csv.
Moreover, I am working on a token-classification task ( POS tagging) , where each row in my CSV contains two columns each of them having a list of strings.
row example:
```tokens | labels
['I' , 'am', 'John']... | [
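As the error suggests, Arrow expected an integer column. Mapping the string tags to ids before building the dataset avoids the `ArrowInvalid`; here is a sketch with a hypothetical tag set (your real label inventory may differ):

```python
# Hypothetical tag inventory; replace it with the tags that actually occur in your CSV.
label_names = ["PRON", "AUX", "PROPN"]
label2id = {name: idx for idx, name in enumerate(label_names)}

def encode_labels(example):
    """Replace string tags with integer ids so Arrow can store an int column."""
    example = dict(example)  # avoid mutating the caller's row
    example["labels"] = [label2id[tag] for tag in example["labels"]]
    return example

example = {"tokens": ["I", "am", "John"], "labels": ["PRON", "AUX", "PROPN"]}
encoded = encode_labels(example)
# encoded["labels"] == [0, 1, 2]
```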
https://github.com/huggingface/datasets/issues/2285 | Help understanding how to build a dataset for language modeling as with the old TextDataset |
I received an answer for this question on the HuggingFace Datasets forum by @lhoestq
Hi !
If you want to tokenize line by line, you can use this:
```
max_seq_length = 512
num_proc = 4
def tokenize_function(examples):
# Remove empty lines
examples["text"] = [line for line in examples["text"] if len(lin... | Hello,
I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file that has a whole document in each line, meaning that each line exceeds the normal 512-token limit of most tokenizers.
I would like to understand what is the process to build a text datas... | 270 | Help understanding how to build a dataset for language modeling as with the old TextDataset
Hello,
I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file that has a whole document in each line, meaning that each line exceeds the normal 512 tokens lim... | [
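The `tokenize_function` quoted above is cut off, but its shape is: drop empty lines, then tokenize each remaining line with truncation at `max_seq_length`. A self-contained sketch with a toy whitespace tokenizer in place of a real one (the exact filter condition and the tokenizer here are stand-ins, not the forum answer verbatim):

```python
max_seq_length = 8  # 512 in the thread; kept small here so the example is readable

def toy_tokenize(text, max_length):
    """Stand-in for a real tokenizer: whitespace split, truncated to max_length."""
    return text.split()[:max_length]

def tokenize_function(examples):
    # Remove empty (or whitespace-only) lines before tokenizing
    lines = [line for line in examples["text"] if len(line) > 0 and not line.isspace()]
    return {"input_ids": [toy_tokenize(line, max_seq_length) for line in lines]}

batch = {"text": ["a short document", "", "   ", "one two three four five six seven eight nine"]}
out = tokenize_function(batch)
# Two non-empty lines survive; the long one is truncated to max_seq_length tokens.
```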
https://github.com/huggingface/datasets/issues/2279 | Compatibility with Ubuntu 18 and GLIBC 2.27? | From the trace this seems like an error in the tokenizer library instead.
Do you mind opening an issue at https://github.com/huggingface/tokenizers instead? | ## Describe the bug
For use on Ubuntu systems, it seems that datasets requires GLIBC 2.29. However, Ubuntu 18 runs with GLIBC 2.27 and it seems [non-trivial to upgrade GLIBC to 2.29 for Ubuntu 18 users](https://www.digitalocean.com/community/questions/how-install-glibc-2-29-or-higher-in-ubuntu-18-04).
I'm not sure... | 22 | Compatibility with Ubuntu 18 and GLIBC 2.27?
## Describe the bug
For use on Ubuntu systems, it seems that datasets requires GLIBC 2.29. However, Ubuntu 18 runs with GLIBC 2.27 and it seems [non-trivial to upgrade GLIBC to 2.29 for Ubuntu 18 users](https://www.digitalocean.com/community/questions/how-install-glibc-2-... | [
https://github.com/huggingface/datasets/issues/2279 | Compatibility with Ubuntu 18 and GLIBC 2.27? | Hi @tginart, thanks for reporting.
I think this issue is already open at `tokenizers` library: https://github.com/huggingface/tokenizers/issues/685 | ## Describe the bug
For use on Ubuntu systems, it seems that datasets requires GLIBC 2.29. However, Ubuntu 18 runs with GLIBC 2.27 and it seems [non-trivial to upgrade GLIBC to 2.29 for Ubuntu 18 users](https://www.digitalocean.com/community/questions/how-install-glibc-2-29-or-higher-in-ubuntu-18-04).
I'm not sure... | 16 | Compatibility with Ubuntu 18 and GLIBC 2.27?
## Describe the bug
For use on Ubuntu systems, it seems that datasets requires GLIBC 2.29. However, Ubuntu 18 runs with GLIBC 2.27 and it seems [non-trivial to upgrade GLIBC to 2.29 for Ubuntu 18 users](https://www.digitalocean.com/community/questions/how-install-glibc-2-... | [
https://github.com/huggingface/datasets/issues/2278 | Loss result inGptNeoForCasual | Hi ! I think you might have to ask on the `transformers` repo or on the forum at https://discuss.huggingface.co/
Closing since it's not related to this library | Is there any way you give the " loss" and "logits" results in the gpt neo api? | 27 | Loss result inGptNeoForCasual
Is there any way you give the " loss" and "logits" results in the gpt neo api?
Hi ! I think you might have to ask on the `transformers` repo or on the forum at https://discuss.huggingface.co/
Closing since it's not related to this library | [
https://github.com/huggingface/datasets/issues/2276 | concatenate_datasets loads all the data into memory | Therefore, when I try to concatenate larger datasets (5x 35GB data sets) I also get an out of memory error, since over 90GB of swap space was used at the time of the crash:
```
---------------------------------------------------------------------------
MemoryError Traceback (most rece... | ## Describe the bug
When I try to concatenate 2 datasets (10GB each) , the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
 , the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
![image]... | [
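A quick way to check whether an operation really materializes data in RAM is the stdlib `tracemalloc` module. The sketch below traces a toy allocation standing in for the concatenation step; none of the `datasets` internals appear here:

```python
import tracemalloc

tracemalloc.start()

# Stand-in for an operation suspected of loading data into memory.
buffers = [bytes(1024) for _ in range(1000)]  # roughly 1 MB of live allocations

current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
# `current` reflects the ~1 MB held by `buffers`; a purely memory-mapped
# operation would report almost nothing here.
```

For the real case you would wrap the `concatenate_datasets(...)` call in the same start/measure pair.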
https://github.com/huggingface/datasets/issues/2276 | concatenate_datasets loads all the data into memory | Hi ! This looks like an important issue. Let me try to reproduce this.
Cc @samsontmr this might be related to the memory issue you have in #2134 | ## Describe the bug
When I try to concatenate 2 datasets (10GB each) , the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
 , the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
![image]... | [
https://github.com/huggingface/datasets/issues/2276 | concatenate_datasets loads all the data into memory | @lhoestq Just went to open a similar issue.
It seems like deep copying the dataset object (tested on master) writes the table's record batches (`dset._data._batches`) into RAM.
To find the bug, I modified the `_deepcopy` function in `table.py` as follows:
```python
def _deepcopy(x, memo: dict):
"""deepcopy... | ## Describe the bug
When I try to concatenate 2 datasets (10GB each) , the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
 , the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
![image]... | [
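The observation above, that deep copying pulls the table's record batches into RAM, comes down to `copy.deepcopy` duplicating every nested buffer while a shallow copy shares them. A stdlib illustration with a toy table (not the real `pyarrow` objects):

```python
import copy

# Toy stand-in for a table whose "batches" are large memory buffers.
table = {"batches": [bytearray(1024) for _ in range(4)]}

shallow = copy.copy(table)   # new dict, but the very same batch objects
deep = copy.deepcopy(table)  # every batch duplicated into fresh memory

assert shallow["batches"][0] is table["batches"][0]
assert deep["batches"][0] is not table["batches"][0]
assert deep["batches"][0] == table["batches"][0]  # equal contents, separate storage
```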
https://github.com/huggingface/datasets/issues/2276 | concatenate_datasets loads all the data into memory | Thanks for the insights @mariosasko ! I'm working on a fix.
Since this is a big issue I'll make a patch release as soon as this is fixed | ## Describe the bug
When I try to concatenate 2 datasets (10GB each) , the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
 , the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
![image]... | [
https://github.com/huggingface/datasets/issues/2276 | concatenate_datasets loads all the data into memory | Hi @samsontmr @TaskManager91 the fix is on the master branch, feel free to install `datasets` from source and let us know if you still have issues | ## Describe the bug
When I try to concatenate 2 datasets (10GB each) , the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
 , the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
![image]... | [
https://github.com/huggingface/datasets/issues/2275 | SNLI dataset has labels of -1 | Hi @puzzler10,
For the examples where the `gold_label` field was empty, a label of -1 was allotted. To remove them, you can filter the samples from the train/val/test splits. Here's how you can drop those rows from the dataset:
`dataset = load_dataset("snli")`
`dataset_test_filter = dataset['test'].filter(lambda exampl... | There are a number of rows with a label of -1 in the SNLI dataset. The dataset descriptions [here](https://nlp.stanford.edu/projects/snli/) and [here](https://github.com/huggingface/datasets/tree/master/datasets/snli) don't list -1 as a label possibility, and neither does the dataset viewer. As examples, see index 107... | 69 | SNLI dataset has labels of -1
There are a number of rows with a label of -1 in the SNLI dataset. The dataset descriptions [here](https://nlp.stanford.edu/projects/snli/) and [here](https://github.com/huggingface/datasets/tree/master/datasets/snli) don't list -1 as a label possibility, and neither does the dataset v... | [
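The filtering step suggested above amounts to dropping every example whose label is -1. In plain Python over toy rows (the real call is the `dataset.filter(...)` shown in the comment):

```python
# Toy split: two labelled examples and one with no gold label (-1).
examples = [
    {"premise": "A man inspects a uniform.", "label": 2},
    {"premise": "Two men are smiling at the camera.", "label": 1},
    {"premise": "A pair with no annotator consensus.", "label": -1},
]

def drop_unlabelled(rows):
    """Keep only rows whose label is a real class id (0, 1 or 2 for SNLI)."""
    return [row for row in rows if row["label"] != -1]

clean = drop_unlabelled(examples)
# One row removed; every remaining label is a valid class id.
```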
https://github.com/huggingface/datasets/issues/2272 | Bug in Dataset.class_encode_column | This has been fixed in this commit: https://github.com/huggingface/datasets/pull/2254/commits/88676c930216cd4cc31741b99827b477d2b46cb6
It was introduced in #2246 : using map with `input_columns` doesn't return the other columns anymore | ## Describe the bug
All the rest of the columns except the one passed to `Dataset.class_encode_column` are discarded.
## Expected results
All the original columns should be kept.
This needs regression tests.
| 24 | Bug in Dataset.class_encode_column
## Describe the bug
All the rest of the columns except the one passed to `Dataset.class_encode_column` are discarded.
## Expected results
All the original columns should be kept.
This needs regression tests.
This has been fixed in this commit: https://github.com/hugg... | [
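For reference, the intended behaviour of `class_encode_column`, encoding one string column to integer ids while leaving every other column untouched, can be sketched in plain Python (an illustration of the expected result, not the library's implementation):

```python
def class_encode_column(rows, column):
    """Encode `column` as integer class ids; keep every other column intact."""
    names = sorted({row[column] for row in rows})
    str2int = {name: idx for idx, name in enumerate(names)}
    encoded = [{**row, column: str2int[row[column]]} for row in rows]
    return encoded, names

rows = [
    {"text": "good movie", "label": "pos"},
    {"text": "bad movie", "label": "neg"},
]
encoded, names = class_encode_column(rows, "label")
# names == ['neg', 'pos'] and the 'text' column survives the encoding.
```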
https://github.com/huggingface/datasets/issues/2267 | DatasetDict save load Failing test in 1.6 not in 1.5 | I'm not able to reproduce this, do you think you can provide a code that creates a DatasetDict that has this issue when saving and reloading ? | ## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` fixes the problem.
## Steps to reproduce the bug
```python
### Load a dataset dict from jsonl
path = '/test/foo'
ds_dict.s... | 27 | DatasetDict save load Failing test in 1.6 not in 1.5
## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` fixes the problem.
## Steps to reproduce the bug
```python
### Load a data... | [
https://github.com/huggingface/datasets/issues/2267 | DatasetDict save load Failing test in 1.6 not in 1.5 | Hi, I just ran into a similar error. Here is the minimal code to reproduce:
```python
from datasets import load_dataset, DatasetDict
ds = load_dataset('super_glue', 'multirc')
ds.save_to_disk('tempds')
ds = DatasetDict.load_from_disk('tempds')
```
```bash
Reusing dataset super_glue (/home/idahl/.cache/h... | ## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` fixes the problem.
## Steps to reproduce the bug
```python
### Load a dataset dict from jsonl
path = '/test/foo'
ds_dict.s... | 226 | DatasetDict save load Failing test in 1.6 not in 1.5
## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` fixes the problem.
## Steps to reproduce the bug
```python
### Load a data... | [
https://github.com/huggingface/datasets/issues/2267 | DatasetDict save load Failing test in 1.6 not in 1.5 | My current workaround is to remove the idx feature:
```
from datasets import load_dataset, DatasetDict, Value
ds = load_dataset('super_glue', 'multirc')
ds = ds.remove_columns('idx')
ds.save_to_disk('tempds')
ds = DatasetDict.load_from_disk('tempds')
```
works. | ## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` fixes the problem.
## Steps to reproduce the bug
```python
### Load a dataset dict from jsonl
path = '/test/foo'
ds_dict.s... | 29 | DatasetDict save load Failing test in 1.6 not in 1.5
## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` fixes the problem.
## Steps to reproduce the bug
```python
### Load a data... | [
https://github.com/huggingface/datasets/issues/2267 | DatasetDict save load Failing test in 1.6 not in 1.5 | It looks like this issue comes from the order of the fields in the 'idx' struct that is different for some reason.
I'm looking into it. Note that as a workaround you can also flatten the nested features with `ds = ds.flatten()` | ## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` fixes the problem.
## Steps to reproduce the bug
```python
### Load a dataset dict from jsonl
path = '/test/foo'
ds_dict.s... | 42 | DatasetDict save load Failing test in 1.6 not in 1.5
## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` fixes the problem.
## Steps to reproduce the bug
```python
### Load a data... | [
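The `ds.flatten()` workaround sidesteps the struct-field-order problem by replacing the nested `idx` struct with top-level columns. What flattening does to a single example can be sketched in plain Python:

```python
def flatten_example(example, prefix=""):
    """Turn nested dict fields into dotted top-level keys, like Dataset.flatten()."""
    flat = {}
    for key, value in example.items():
        name = prefix + key
        if isinstance(value, dict):
            flat.update(flatten_example(value, prefix=name + "."))
        else:
            flat[name] = value
    return flat

example = {"idx": {"paragraph": 0, "question": 3, "answer": 7}, "label": 1}
flat = flatten_example(example)
# {'idx.paragraph': 0, 'idx.question': 3, 'idx.answer': 7, 'label': 1}
```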
https://github.com/huggingface/datasets/issues/2267 | DatasetDict save load Failing test in 1.6 not in 1.5 | I just pushed a fix on `master`. We'll do a new release soon !
Thanks for reporting | ## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` fixes the problem.
## Steps to reproduce the bug
```python
### Load a dataset dict from jsonl
path = '/test/foo'
ds_dict.s... | 17 | DatasetDict save load Failing test in 1.6 not in 1.5
## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` fixes the problem.
## Steps to reproduce the bug
```python
### Load a data... | [
https://github.com/huggingface/datasets/issues/2262 | NewsPH NLI dataset script fails to access test data. | Thanks @bhavitvyamalik for the fix !
The fix will be available in the next release.
It's already available on the `master` branch. For now you can either install `datasets` from source or use `script_version="master"` in `load_dataset` to use the fixed version of this dataset. | In Newsph-NLI Dataset (#1192), it fails to access test data.
According to the script below, the download manager will download the train data when trying to download the test data.
https://github.com/huggingface/datasets/blob/2a2dd6316af2cc7fdf24e4779312e8ee0c7ed98b/datasets/newsph_nli/newsph_nli.py#L71
If yo... | 44 | NewsPH NLI dataset script fails to access test data.
In Newsph-NLI Dataset (#1192), it fails to access test data.
According to the script below, the download manager will download the train data when trying to download the test data.
https://github.com/huggingface/datasets/blob/2a2dd6316af2cc7fdf24e4779312e8ee... | [
https://github.com/huggingface/datasets/issues/2256 | Running `dataset.map` with `num_proc > 1` uses a lot of memory | Thanks for reporting ! We are working on this and we'll do a patch release very soon. | ## Describe the bug
Running `dataset.map` with `num_proc > 1` leads to tremendous memory usage that requires swapping to disk, and it becomes very slow.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dstc8_datset = load_dataset("roskoN/dstc8-reddit-corpus", keep_in_memory=False)
... | 17 | Running `dataset.map` with `num_proc > 1` uses a lot of memory
## Describe the bug
Running `dataset.map` with `num_proc > 1` leads to tremendous memory usage that requires swapping to disk, and it becomes very slow.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dstc8_datset = load... | [
https://github.com/huggingface/datasets/issues/2256 | Running `dataset.map` with `num_proc > 1` uses a lot of memory | We did a patch release to fix this issue.
It should be fixed in the new version 1.6.1
Thanks again for reporting and for the details :) | ## Describe the bug
Running `dataset.map` with `num_proc > 1` leads to tremendous memory usage that requires swapping to disk, and it becomes very slow.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dstc8_datset = load_dataset("roskoN/dstc8-reddit-corpus", keep_in_memory=False)
... | 27 | Running `dataset.map` with `num_proc > 1` uses a lot of memory
## Describe the bug
Running `dataset.map` with `num_proc > 1` leads to tremendous memory usage that requires swapping to disk, and it becomes very slow.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dstc8_datset = load... | [
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Hi ! Sorry to hear that. This may come from another issue then.
First, can we check whether this latency comes from the dataset itself?
Can you try to load your dataset and benchmark the speed of querying random examples inside it?
```python
import time
import numpy as np
from datasets import load_from_disk
da... | Hi,
I reported too slow data fetching when data is large (#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 101 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large (#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
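The benchmark suggested in this thread is cut off above; its skeleton is just "time N random lookups". A stdlib version over a toy in-memory list (swap in the dataset returned by `load_from_disk(...)` for the actual measurement):

```python
import random
import time

# Toy in-memory stand-in; for the real measurement, build `dataset` with
# load_from_disk(...) instead and keep the timing loop identical.
dataset = [{"input_ids": list(range(16))} for _ in range(10_000)]

random.seed(0)
start = time.perf_counter()
for _ in range(100):
    _ = dataset[random.randrange(len(dataset))]
elapsed = time.perf_counter() - start
# `elapsed` is the total time for 100 random accesses; divide by 100 for per-query latency.
```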
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Hi @lhoestq, here is the result. I additionally measured time to `load_from_disk`:
* 60GB
```
loading took: 22.618776321411133
random indexing 100 times took: 0.10214924812316895
```
* 600GB
```
loading took: 1176.1764674186707
random indexing 100 times took: 2.853600025177002
```
Hmm.. I double checke... | Hi,
I reported too slow data fetching when data is large (#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 59 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large (#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | I'm surprised by the speed change. Can you give more details about your dataset ?
The speed depends on the number of batches in the arrow tables and the distribution of the lengths of the batches.
You can access the batches by doing `dataset.data.to_batches()` (use only for debugging) (it doesn't bring data in memory... | Hi,
I reported too slow data fetching when data is large (#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 84 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large (#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Also, if you could give us more info about your env like your OS, version of pyarrow, and whether you're using an HDD or an SSD | Hi,
I reported too slow data fetching when data is large (#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | 26 | Slow dataloading with big datasets issue persists
Hi,
I reported too slow data fetching when data is large (#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here is the profiled results:
1) Running with 60GB
```
Action ... | [
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Here are some details of my 600GB dataset. This is a dataset AFTER the `map` function and once I load this dataset, I do not use `map` anymore in the training. Regarding the distribution of the lengths, it is almost uniform (90% is 512 tokens, and 10% is randomly shorter than that -- typical setting for language modeli...
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Regarding the environment, I am running the code on a cloud server. Here is some info:
```
Ubuntu 18.04.5 LTS # cat /etc/issue
pyarrow 3.0.0 # pip list | grep pyarrow
```
The data is stored in SSD and it is mounted to the machine via Network File System.
If you could point me to some of the ...
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | I am not sure how I could provide you with the reproducible code, since the problem only arises when the data is big. For the moment, I would share the part that I think is relevant. Feel free to ask me for more info.
```python
class MyModel(pytorch_lightning.LightningModule):
    def setup(self, stage):
        s...
```
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Hi ! Sorry for the delay, I haven't had a chance to take a look at this yet. Are you still experiencing this issue ?
I'm asking because the latest patch release 1.6.2 fixed a few memory issues that could have led to slowdowns
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Hi! I just ran the same code with different datasets (one is 60 GB and another 600 GB), and the latter runs much slower. ETA differs by 10x.
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | @lhoestq and @hwijeen
Despite upgrading to datasets 1.6.2, still experiencing extremely slow (2h00) loading for a 300Gb local dataset with shard size 1.1Gb on a local HDD (40Mb/s read speed). This corresponds almost exactly to total data divided by reading speed, implying that it reads the entire dataset at each load.
St...
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Hi @lhoestq thanks for the quick turn-around, actually the plain vanilla way, without any particular knack or fashion, I tried to look into the documentation for some alternative but couldn't find any
> dataset = load_from_disk(dataset_path=os.path.join(datasets_dir,dataset_dir))
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | I’m facing the same issue when loading a 900GB dataset (stored via `save_to_disk`): `load_from_disk(path_to_dir)` takes 1.5 hours and htop consistently shows high IO rates > 120 M/s.
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | @tsproisl same here, smells like ~~teen spirit~~ intended generator inadvertently ending up an iterator
@lhoestq perhaps a solution to detect the bug location in code is to track its signature via HD read usage monitoring; an option is to add a tracking decorator on top of each function and sequentially close all hatches from top to...
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | I wasn't able to reproduce this on a toy dataset of around 300GB:
```python
import datasets as ds
s = ds.load_dataset("squad", split="train")
s4000 = ds.concatenate_datasets([s] * 4000)
print(ds.utils.size_str(s4000.data.nbytes))  # '295.48 GiB'
s4000.save_to_disk("tmp/squad_4000")
```
```python
import...
```
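The second snippet above is cut off; presumably it timed reloading the saved dataset with `load_from_disk`. A hedged sketch of such a timing check — the `timed` helper is illustrative, not the original code:

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return its result together with wall-clock seconds elapsed."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Hypothetical usage against a dataset saved as in the snippet above:
# from datasets import load_from_disk
# dset, secs = timed(load_from_disk, "tmp/squad_4000")
# print(f"loaded in {secs:.1f}s")
```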
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Just tried on google colab and got ~1min for a 15GB dataset (only 200 times SQuAD), while it should be instantaneous. The time is spent reading the Apache Arrow table from the memory mapped file. This might come from a virtual disk management issue. I'm trying to see if I can still speed it up on colab.
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | @lhoestq what is Google Colab's HD read speed, is it possible to introspect incl. make like SSD or HDD ?
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | @lhoestq Thank you! The issue is getting more interesting. The second script is still running, but it's definitely taking much longer than 15 seconds.
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Okay, here’s the output:
Blocks read 158396
Elapsed time: 529.10s
Also using datasets 1.6.2. Do you have any ideas how to pinpoint the problem?
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | @lhoestq, @tsproisl mmmh still writing on my side, about 1h to go; thinking on it, are your large datasets all monoblock unsharded ? Mine is 335 times 1.18Gb shards.
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | The 529.10s was a bit too optimistic. I cancelled the reading process once before running it completely, therefore the harddrive cache probably did its work.
Here are three consecutive runs:
First run (freshly written to disk):
Blocks read 309702
Elapsed time: 1267.74s
Second run (immediately after):
Blocks read...
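The comment above attributes the faster repeat run to the hard-drive cache; the OS page cache behaves the same way for memory-mapped files — a cold first read faults pages in from disk, and subsequent accesses hit RAM. A stdlib-only sketch of the mechanism (illustrative, with an arbitrary 1 MiB file):

```python
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * (1 << 20))  # 1 MiB file

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # First access may fault the page in from disk; later accesses
    # (and later runs, while the page cache is warm) are served from RAM.
    first_byte = mm[0]
    mm.close()
print(first_byte)  # prints 0
```

This is why back-to-back benchmark runs on the same file understate the cold-cache loading time.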
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | @lhoestq
First test
> elapsed time: 11219.05s
Second test running, bear with me; for Windows users a slight trick to modify the original "disk0" string:
First find the physical unit's relevant key in the dictionary
```
import psutil
psutil.disk_io_counters(perdisk=True)
```
> {'PhysicalDrive0': sdiskio(read_count=18453...
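The "Blocks read" figures in this thread come from system-wide counters (`psutil.disk_io_counters`). A per-process alternative using only the standard library is `ru_inblock` from `getrusage` — a Unix-only sketch, not equivalent to the psutil measurement (it counts only block-input operations attributed to the current process):

```python
import resource

def blocks_read_during(fn, *args, **kwargs):
    """Run fn; return its result and the block-input ops this process issued meanwhile."""
    before = resource.getrusage(resource.RUSAGE_SELF).ru_inblock
    result = fn(*args, **kwargs)
    after = resource.getrusage(resource.RUSAGE_SELF).ru_inblock
    return result, after - before

# Hypothetical usage:
# dset, blocks = blocks_read_during(load_from_disk, "/path/to/dataset")
```

Note that reads served from the page cache do not increment the counter, so a warm-cache load can legitimately report near-zero blocks.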
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Unfortunately no. Thanks for running the benchmark though, it shows that your machine does a lot of read operations. This is not expected: on other machines it does almost no read operations, which enables very fast loading.
I did some tests on google colab and have the same issue. The first time the dataset arrow f...
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Just want to say that I am seeing the same issue. Dataset size is 268GB and it takes **3 hours** to load with `load_from_disk`, using dataset version `1.9.0`. Filesystem underneath is `Lustre`
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Hi @lhoestq, confirmed Windows issue; exact same code running on Linux OS, total loading time about 3 minutes.
https://github.com/huggingface/datasets/issues/2252 | Slow dataloading with big datasets issue persists | Hmm that's different from what I got. I was on Ubuntu when reporting the initial issue.
https://github.com/huggingface/datasets/issues/2250 | some issue in loading local txt file as Dataset for run_mlm.py | Hi,
1. try
```python
dataset = load_dataset("text", data_files={"train": ["a1.txt", "b1.txt"], "test": ["c1.txt"]})
```
instead.
Sadly, I can't reproduce the error on my machine. If the above code doesn't resolve the issue, try to update the library to the newest version (`pip instal...