| html_url (string, 48-51 chars) | title (string, 5-268 chars) | comments (string, 63-51.8k chars) | body (string, 0-36.2k chars, nullable) | comment_length (int64, 16-1.52k) | text (string, 164-54.1k chars) | embeddings (list) |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/2691 | xtreme / pan-x cannot be downloaded | Hmmm, the file (https://www.dropbox.com/s/dl/12h3qqog6q4bjve/panx_dataset.tar) really seems to be unavailable... I tried from various connections and machines and got the same 404 error. Maybe the dataset has been loaded from the cache in your case? | ## Describe the bug
Dataset xtreme / pan-x cannot be loaded
Seems related to https://github.com/huggingface/datasets/pull/2326
## Steps to reproduce the bug
```python
dataset = load_dataset("xtreme", "PAN-X.fr")
```
## Expected results
Load the dataset
## Actual results
```
FileNotFoundError:... | 34 | xtreme / pan-x cannot be downloaded
## Describe the bug
Dataset xtreme / pan-x cannot be loaded
Seems related to https://github.com/huggingface/datasets/pull/2326
## Steps to reproduce the bug
```python
dataset = load_dataset("xtreme", "PAN-X.fr")
```
## Expected results
Load the dataset
## Act... | [
-0.2679746151,
-0.4418176115,
-0.1176262721,
0.263651669,
0.2272866368,
0.109441027,
-0.1848931462,
0.2379712164,
0.1063496098,
0.1472173631,
-0.1563760489,
0.2612070739,
0.0340301096,
0.0054812445,
0.1887653619,
-0.2929279804,
-0.0231134109,
0.1571930051,
0.0854806453,
-0.0127... |
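To rule out the cache hypothesis raised in the comment above, a fresh download can be forced so the Dropbox URL is actually hit. A minimal diagnostic sketch, assuming a `datasets` version that accepts the string form of `download_mode`:

```python
from datasets import load_dataset

# Bypass any cached copy so the download path is actually exercised
dataset = load_dataset("xtreme", "PAN-X.fr", download_mode="force_redownload")
```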
https://github.com/huggingface/datasets/issues/2691 | xtreme / pan-x cannot be downloaded | Yes @severo, weird... I could access the file when I answered you, but now I can no longer access it either... Maybe it was from the cache as you point out.
Anyway, I have opened an issue in the GitHub repository responsible for the original dataset: https://github.com/afshinrahimi/mmner/issues/4
I have also con... | ## Describe the bug
Dataset xtreme / pan-x cannot be loaded
Seems related to https://github.com/huggingface/datasets/pull/2326
## Steps to reproduce the bug
```python
dataset = load_dataset("xtreme", "PAN-X.fr")
```
## Expected results
Load the dataset
## Actual results
```
FileNotFoundError:... | 62 | xtreme / pan-x cannot be downloaded
## Describe the bug
Dataset xtreme / pan-x cannot be loaded
Seems related to https://github.com/huggingface/datasets/pull/2326
## Steps to reproduce the bug
```python
dataset = load_dataset("xtreme", "PAN-X.fr")
```
## Expected results
Load the dataset
## Act... | [
-0.3182061613,
-0.3272893429,
-0.1103756279,
0.2835795581,
0.2328956872,
0.0658766776,
-0.0919084772,
0.1860668957,
0.046389956,
0.2400703877,
-0.2260717005,
0.2097330093,
-0.0343384668,
-0.148910448,
0.1411536634,
-0.2330800295,
-0.0239751991,
0.0649311766,
0.0869292766,
-0.00... |
https://github.com/huggingface/datasets/issues/2691 | xtreme / pan-x cannot be downloaded | Reply from the author/maintainer:
> Will fix the issue and let you know during the weekend. | ## Describe the bug
Dataset xtreme / pan-x cannot be loaded
Seems related to https://github.com/huggingface/datasets/pull/2326
## Steps to reproduce the bug
```python
dataset = load_dataset("xtreme", "PAN-X.fr")
```
## Expected results
Load the dataset
## Actual results
```
FileNotFoundError:... | 16 | xtreme / pan-x cannot be downloaded
## Describe the bug
Dataset xtreme / pan-x cannot be loaded
Seems related to https://github.com/huggingface/datasets/pull/2326
## Steps to reproduce the bug
```python
dataset = load_dataset("xtreme", "PAN-X.fr")
```
## Expected results
Load the dataset
## Act... | [
-0.3508208096,
-0.488174051,
-0.1012571454,
0.2679371834,
0.2424272746,
0.0743882954,
-0.1332193464,
0.2452751994,
0.1442700922,
0.1187145188,
-0.2436652482,
0.2646327913,
0.0461621732,
0.0465012416,
0.1917270422,
-0.3267599642,
0.0447654724,
0.1279416531,
0.0111563457,
-0.0376... |
https://github.com/huggingface/datasets/issues/2691 | xtreme / pan-x cannot be downloaded | The author told that apparently Dropbox has changed their policy and no longer allow downloading the file without having signed in first. The author asked Hugging Face to host their dataset. | ## Describe the bug
Dataset xtreme / pan-x cannot be loaded
Seems related to https://github.com/huggingface/datasets/pull/2326
## Steps to reproduce the bug
```python
dataset = load_dataset("xtreme", "PAN-X.fr")
```
## Expected results
Load the dataset
## Actual results
```
FileNotFoundError:... | 31 | xtreme / pan-x cannot be downloaded
## Describe the bug
Dataset xtreme / pan-x cannot be loaded
Seems related to https://github.com/huggingface/datasets/pull/2326
## Steps to reproduce the bug
```python
dataset = load_dataset("xtreme", "PAN-X.fr")
```
## Expected results
Load the dataset
## Act... | [
-0.2759370506,
-0.3480377793,
-0.0593666956,
0.347079277,
0.108641468,
0.0554847941,
0.0214384086,
0.2132672668,
0.3107881844,
0.0681135803,
-0.237227127,
0.2172870934,
-0.0742675662,
0.1829393953,
0.2306468487,
-0.167099461,
0.0182071999,
0.038396161,
0.1040742174,
-0.10596650... |
https://github.com/huggingface/datasets/issues/2689 | cannot save the dataset to disk after rename_column | Hi ! That's because you are trying to overwrite a file that is already open and being used.
Indeed `foo/dataset.arrow` is open and used by your `dataset` object.
When you do `rename_column`, the resulting dataset reads the data from the same arrow file.
In other cases like when using `map` on the other hand, the r... | ## Describe the bug
If you use `rename_column` and do no other modification, you will be unable to save the dataset using `save_to_disk`
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
In [1]: from datasets import Dataset, load_from_disk
In [5]: dataset=Dataset.from_dict({'foo': [0]})... | 102 | cannot save the dataset to disk after rename_column
## Describe the bug
If you use `rename_column` and do no other modification, you will be unable to save the dataset using `save_to_disk`
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
In [1]: from datasets import Dataset, load_from... | [
-0.0523224063,
0.2342096865,
-0.023168223,
-0.0133465482,
0.3893550038,
0.2707955241,
0.4929156899,
0.2463013977,
0.0219599847,
0.1201096177,
-0.0814654306,
0.4607100785,
-0.1719952822,
-0.1512236893,
-0.0136408638,
-0.0714223087,
0.2988218069,
-0.1098155007,
-0.0071084313,
0.1... |
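A minimal sketch of the workaround implied by the explanation above: save the renamed dataset to a different directory than the one backing it, so the open arrow file is never overwritten (paths are illustrative):

```python
from datasets import Dataset, load_from_disk

ds = Dataset.from_dict({"foo": [0]})
ds.save_to_disk("data/original")      # writes data/original/dataset.arrow
ds = load_from_disk("data/original")  # `ds` now memory-maps that file
ds = ds.rename_column("foo", "bar")   # still backed by the same arrow file
ds.save_to_disk("data/renamed")       # a different directory, so no clash
```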
https://github.com/huggingface/datasets/issues/2688 | hebrew language codes he and iw should be treated as aliases | Hi @eyaler, thanks for reporting.
While you are right about the Hebrew language tag ("iw" is deprecated and "he" is the preferred value), in the "mc4" dataset (which is a derived dataset) we have kept the language tags present in the original dataset: [Google C4](https://www.tensorflow.org/datasets/catalog/c4)... | https://huggingface.co/datasets/mc4 not listed when searching for hebrew datasets (he) as it uses the older language code iw, preventing discoverability. | 46 | hebrew language codes he and iw should be treated as aliases
https://huggingface.co/datasets/mc4 not listed when searching for hebrew datasets (he) as it uses the older language code iw, preventing discoverability.
Hi @eyaler, thanks for reporting.
While you are right about the Hebrew language tag ("iw" i... | [
-0.184782818,
-0.0222231038,
-0.154320851,
0.0329203978,
0.0282861479,
0.0720351189,
0.5509454012,
0.3117235005,
0.0522168428,
0.1043042615,
-0.3407954872,
-0.2625412941,
0.0178138409,
0.0098183248,
0.1778060198,
0.1035924926,
0.0763272569,
0.0028081359,
0.0307024829,
-0.259934... |
https://github.com/huggingface/datasets/issues/2688 | hebrew language codes he and iw should be treated as aliases | For discoverability on the website I updated the YAML tags at the top of the mC4 dataset card https://github.com/huggingface/datasets/commit/38288087b1b02f97586e0346e8f28f4960f1fd37
Once the website is updated, mC4 will be listed in https://huggingface.co/datasets?filter=languages:he
| https://huggingface.co/datasets/mc4 not listed when searching for hebrew datasets (he) as it uses the older language code iw, preventing discoverability. | 30 | hebrew language codes he and iw should be treated as aliases
https://huggingface.co/datasets/mc4 not listed when searching for hebrew datasets (he) as it uses the older language code iw, preventing discoverability.
For discoverability on the website I updated the YAML tags at the top of the mC4 dataset card https:... | [
-0.1667789668,
-0.0835686401,
-0.0892025158,
0.0611215942,
0.2324799001,
-0.0180357527,
0.2219813615,
0.4122387469,
-0.0158844683,
-0.0184126757,
-0.3140927851,
-0.1698117554,
0.0033470078,
0.2545408607,
0.1932535022,
0.1975349784,
0.0163776055,
-0.0476837158,
-0.0980866328,
-0... |
https://github.com/huggingface/datasets/issues/2681 | 5 duplicate datasets | Yes this was documented in the PR that added this hf->paperswithcode mapping (https://github.com/huggingface/datasets/pull/2404) and AFAICT those are slightly distinct datasets so I think it's a wontfix
For context on the paperswithcode mapping you can also refer to https://github.com/huggingface/huggingface_hub/pul... | ## Describe the bug
In 5 cases, I could find a dataset on Paperswithcode which references two Hugging Face datasets as dataset loaders. They are:
- https://paperswithcode.com/dataset/multinli -> https://huggingface.co/datasets/multi_nli and https://huggingface.co/datasets/multi_nli_mismatch
<img width="838... | 45 | 5 duplicate datasets
## Describe the bug
In 5 cases, I could find a dataset on Paperswithcode which references two Hugging Face datasets as dataset loaders. They are:
- https://paperswithcode.com/dataset/multinli -> https://huggingface.co/datasets/multi_nli and https://huggingface.co/datasets/multi_nli_mismatch... | [
0.0702891797,
-0.0303928498,
0.0002650926,
0.3026072085,
0.1615349054,
0.0224872008,
0.3812558949,
0.1487318873,
0.0632144287,
0.2060805708,
-0.1454353482,
0.0585725456,
0.1328155994,
-0.0630716011,
0.0719404891,
-0.0062968764,
0.2474245131,
-0.1418851316,
-0.227530852,
-0.0822... |
https://github.com/huggingface/datasets/issues/2679 | Cannot load the blog_authorship_corpus due to codec errors | Hi @izaskr, thanks for reporting.
However, the traceback you attached does not correspond to the codec error message: it is about another error, `NonMatchingSplitsSizesError`. Maybe you missed some important part of your traceback...
I'm going to have a look at the dataset anyway... | ## Describe the bug
A codec error is raised while loading the blog_authorship_corpus.
## Steps to reproduce the bug
```
from datasets import load_dataset
raw_datasets = load_dataset("blog_authorship_corpus")
```
## Expected results
Loading the dataset without errors.
## Actual results
An error simila... | 43 | Cannot load the blog_authorship_corpus due to codec errors
## Describe the bug
A codec error is raised while loading the blog_authorship_corpus.
## Steps to reproduce the bug
```
from datasets import load_dataset
raw_datasets = load_dataset("blog_authorship_corpus")
```
## Expected results
Loading the ... | [
-0.2084891796,
0.4780813158,
-0.0540075712,
0.3998492062,
0.3617139459,
0.22684066,
0.0785164982,
0.4288443029,
-0.1586737037,
0.24357526,
0.0468283147,
0.2679276466,
-0.0666647255,
-0.2108727545,
0.0490831584,
0.0175639372,
-0.088299796,
0.1426827163,
0.2282307148,
-0.23428109... |
https://github.com/huggingface/datasets/issues/2679 | Cannot load the blog_authorship_corpus due to codec errors | Hi @izaskr, thanks again for reporting this issue.
After investigation, I have created a Pull Request (#2685) to fix several issues with this dataset:
- the `NonMatchingSplitsSizesError`
- the `UnicodeDecodeError`
Once the Pull Request merged into master, you will be able to load this dataset if you insta... | ## Describe the bug
A codec error is raised while loading the blog_authorship_corpus.
## Steps to reproduce the bug
```
from datasets import load_dataset
raw_datasets = load_dataset("blog_authorship_corpus")
```
## Expected results
Loading the dataset without errors.
## Actual results
An error simila... | 75 | Cannot load the blog_authorship_corpus due to codec errors
## Describe the bug
A codec error is raised while loading the blog_authorship_corpus.
## Steps to reproduce the bug
```
from datasets import load_dataset
raw_datasets = load_dataset("blog_authorship_corpus")
```
## Expected results
Loading the ... | [
-0.2084891796,
0.4780813158,
-0.0540075712,
0.3998492062,
0.3617139459,
0.22684066,
0.0785164982,
0.4288443029,
-0.1586737037,
0.24357526,
0.0468283147,
0.2679276466,
-0.0666647255,
-0.2108727545,
0.0490831584,
0.0175639372,
-0.088299796,
0.1426827163,
0.2282307148,
-0.23428109... |
https://github.com/huggingface/datasets/issues/2679 | Cannot load the blog_authorship_corpus due to codec errors | @albertvillanova
Can you shed light on how this fix works?
We're experiencing a similar issue.
If we launch several runs (e.g. in a Wandb sweep), the first run "works" but then we get `NonMatchingSplitsSizesError`
| run num | actual train examples # | expected example # | recorded example # |
| ------- | -------... | ## Describe the bug
A codec error is raised while loading the blog_authorship_corpus.
## Steps to reproduce the bug
```
from datasets import load_dataset
raw_datasets = load_dataset("blog_authorship_corpus")
```
## Expected results
Loading the dataset without errors.
## Actual results
An error simila... | 135 | Cannot load the blog_authorship_corpus due to codec errors
## Describe the bug
A codec error is raised while loading the blog_authorship_corpus.
## Steps to reproduce the bug
```
from datasets import load_dataset
raw_datasets = load_dataset("blog_authorship_corpus")
```
## Expected results
Loading the ... | [
-0.2084891796,
0.4780813158,
-0.0540075712,
0.3998492062,
0.3617139459,
0.22684066,
0.0785164982,
0.4288443029,
-0.1586737037,
0.24357526,
0.0468283147,
0.2679276466,
-0.0666647255,
-0.2108727545,
0.0490831584,
0.0175639372,
-0.088299796,
0.1426827163,
0.2282307148,
-0.23428109... |
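For readers hitting `NonMatchingSplitsSizesError` across repeated runs, one common mitigation (an assumption here, not a fix confirmed in this thread) is to discard the cached copy whose recorded split sizes went stale:

```python
from datasets import load_dataset

# Forcing a redownload discards cached files whose recorded split sizes
# no longer match the source data
raw_datasets = load_dataset("blog_authorship_corpus",
                            download_mode="force_redownload")
```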
https://github.com/huggingface/datasets/issues/2678 | Import Error in Kaggle notebook | @lhoestq I did, and then let pip handle the installation in `pip install datasets`. I also tried using conda but it gives the same error.
Edit: the pyarrow version on kaggle is 4.0.0; it gets replaced with 4.0.1. So I don't think uninstalling will change anything.
```
Install Trace of datasets:
Collecting datasets
... | ## Describe the bug
Not able to import datasets library in kaggle notebooks
## Steps to reproduce the bug
```python
!pip install datasets
import datasets
```
## Expected results
No such error
## Actual results
```
ImportError Traceback (most recent call last)
<ipython-inp... | 322 | Import Error in Kaggle notebook
## Describe the bug
Not able to import datasets library in kaggle notebooks
## Steps to reproduce the bug
```python
!pip install datasets
import datasets
```
## Expected results
No such error
## Actual results
```
ImportError Traceback (mo... | [
-0.3157350421,
-0.0831314325,
-0.110972181,
0.1658832282,
0.1794454604,
-0.0220795088,
0.3208148181,
0.2708317041,
-0.028439695,
-0.0525300987,
-0.1436262727,
0.8592839241,
0.0922810435,
0.2378657758,
0.0209103487,
-0.0479101799,
0.1613053977,
0.2256400436,
-0.0434861705,
0.051... |
https://github.com/huggingface/datasets/issues/2678 | Import Error in Kaggle notebook | You may need to restart your kaggle notebook after installing a newer version of `pyarrow`.
If it doesn't work we'll probably have to create an issue on [arrow's JIRA](https://issues.apache.org/jira/projects/ARROW/issues/), and maybe ask kaggle why it could fail | ## Describe the bug
Not able to import datasets library in kaggle notebooks
## Steps to reproduce the bug
```python
!pip install datasets
import datasets
```
## Expected results
No such error
## Actual results
```
ImportError Traceback (most recent call last)
<ipython-inp... | 37 | Import Error in Kaggle notebook
## Describe the bug
Not able to import datasets library in kaggle notebooks
## Steps to reproduce the bug
```python
!pip install datasets
import datasets
```
## Expected results
No such error
## Actual results
```
ImportError Traceback (mo... | [
-0.3392024338,
-0.0755515173,
-0.114092052,
0.1258437485,
0.2003503591,
0.0130887469,
0.3512436152,
0.3286039829,
-0.030617414,
-0.0346203335,
-0.1804618388,
0.8273154497,
0.0816760883,
0.2334336936,
0.0459382161,
-0.0699089319,
0.1140711382,
0.2013819665,
-0.049039986,
0.07037... |
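A quick sanity check for the situation above; after upgrading `pyarrow` the kernel must be restarted, otherwise the stale module stays loaded (a diagnostic sketch, not an official procedure):

```python
import pyarrow
import datasets

# If the kernel was not restarted, the old pyarrow 4.0.0 module object is
# still loaded and `import datasets` keeps failing despite the upgrade
print(pyarrow.__version__)   # expect 4.0.1 or newer after the restart
print(datasets.__version__)
```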
https://github.com/huggingface/datasets/issues/2678 | Import Error in Kaggle notebook | > You may need to restart your kaggle notebook before after installing a newer version of `pyarrow`.
>
> If it doesn't work we'll probably have to create an issue on [arrow's JIRA](https://issues.apache.org/jira/projects/ARROW/issues/), and maybe ask kaggle why it could fail
It works after restarting.
My bad, I ... | ## Describe the bug
Not able to import datasets library in kaggle notebooks
## Steps to reproduce the bug
```python
!pip install datasets
import datasets
```
## Expected results
No such error
## Actual results
```
ImportError Traceback (most recent call last)
<ipython-inp... | 57 | Import Error in Kaggle notebook
## Describe the bug
Not able to import datasets library in kaggle notebooks
## Steps to reproduce the bug
```python
!pip install datasets
import datasets
```
## Expected results
No such error
## Actual results
```
ImportError Traceback (mo... | [
-0.3438678086,
-0.077080369,
-0.1142193526,
0.1320917755,
0.1916217208,
0.0109462254,
0.3406417668,
0.3310462236,
-0.0405513309,
-0.0315703414,
-0.1770030409,
0.8259584308,
0.0784069076,
0.2167555094,
0.0422965325,
-0.0751288161,
0.1151350588,
0.2059999555,
-0.0540337972,
0.068... |
https://github.com/huggingface/datasets/issues/2677 | Error when downloading C4 | Hi ! Thanks for reporting !
It looks like these files are not correctly reported in the list of expected files to download, let me fix that ;) | Hi,
I am trying to download the `en` corpus from the C4 dataset. However, I get an error caused by the validation files download (see image). My code is very primitive:
`datasets.load_dataset('c4', 'en')`
Is this a bug or do I have some configurations missing on my server?
Thanks!
<img width="1014" alt="Снимок экрана 2... | 27 | Error when downloading C4
Hi,
I am trying to download the `en` corpus from the C4 dataset. However, I get an error caused by the validation files download (see image). My code is very primitive:
`datasets.load_dataset('c4', 'en')`
Is this a bug or do I have some configurations missing on my server?
Thanks!
<img width... | [
0.0968456864,
-0.0686113238,
-0.0539070442,
0.475012958,
0.2311237305,
0.2395801097,
0.0058157924,
0.1676317602,
-0.017036479,
-0.1058176458,
-0.0259848945,
-0.2992730737,
0.1618456542,
-0.0318005756,
0.0029822944,
-0.1193321347,
0.1358549297,
0.0848606601,
-0.1391198337,
-0.35... |
https://github.com/huggingface/datasets/issues/2677 | Error when downloading C4 | Alright this is fixed now. We'll do a new release soon to make the fix available.
In the meantime feel free to simply pass `ignore_verifications=True` to `load_dataset` to skip this error | Hi,
I am trying to download the `en` corpus from the C4 dataset. However, I get an error caused by the validation files download (see image). My code is very primitive:
`datasets.load_dataset('c4', 'en')`
Is this a bug or do I have some configurations missing on my server?
Thanks!
<img width="1014" alt="Снимок экрана 2... | 31 | Error when downloading C4
Hi,
I am trying to download the `en` corpus from the C4 dataset. However, I get an error caused by the validation files download (see image). My code is very primitive:
`datasets.load_dataset('c4', 'en')`
Is this a bug or do I have some configurations missing on my server?
Thanks!
<img width... | [
0.0360018574,
-0.0390106738,
-0.0446486548,
0.420443505,
0.2243452966,
0.2426007539,
-0.0268923435,
0.1530850083,
-0.0672713369,
-0.0291625503,
0.0164736286,
-0.2508124709,
0.156677559,
0.049930647,
0.0004604336,
-0.0572999716,
0.1233926639,
0.076857686,
-0.1300050616,
-0.29652... |
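The interim workaround from the comment above, spelled out as a runnable call:

```python
from datasets import load_dataset

# Skips checksum/size verification of the downloaded files until the fix
# lands in a release, per the maintainer's suggestion above
dataset = load_dataset("c4", "en", ignore_verifications=True)
```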
https://github.com/huggingface/datasets/issues/2669 | Metric kwargs are not passed to underlying external metric f1_score | Hi @BramVanroy, thanks for reporting.
First, note that `"min"` is not an allowed value for `average`. According to scikit-learn [documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html), `average` can only take the values: `{"micro", "macro", "samples", "weighted", "binary"} or... | ## Describe the bug
When I want to use F1 score with average="min", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) throws an error telling me so.
## Steps to... | 96 | Metric kwargs are not passed to underlying external metric f1_score
## Describe the bug
When I want to use F1 score with average="min", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https://scikit-learn.org/stable/modules/generated/sklear... | [
-0.0516288616,
-0.5501688123,
0.0873142183,
0.2038567215,
0.4681912363,
-0.0643797219,
0.1748596728,
-0.1953882575,
0.3451756835,
0.2828974426,
0.0679135993,
0.4462165534,
0.178371802,
0.024255123,
-0.0182034746,
0.0616982765,
0.1058043242,
-0.358712703,
-0.0846229792,
-0.29872... |
https://github.com/huggingface/datasets/issues/2669 | Metric kwargs are not passed to underlying external metric f1_score | Thanks, that was it. A bit strange though, since `load_metric` had an argument `metric_init_kwargs`. I assume that that's for specific initialisation arguments whereas `average` is for the function itself. | ## Describe the bug
When I want to use F1 score with average="min", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) throws an error telling me so.
## Steps to... | 29 | Metric kwargs are not passed to underlying external metric f1_score
## Describe the bug
When I want to use F1 score with average="min", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https://scikit-learn.org/stable/modules/generated/sklear... | [
-0.0516288616,
-0.5501688123,
0.0873142183,
0.2038567215,
0.4681912363,
-0.0643797219,
0.1748596728,
-0.1953882575,
0.3451756835,
0.2828974426,
0.0679135993,
0.4462165534,
0.178371802,
0.024255123,
-0.0182034746,
0.0616982765,
0.1058043242,
-0.358712703,
-0.0846229792,
-0.29872... |
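For context, a small sketch showing where `average` actually goes: it is a `compute`-time keyword forwarded to sklearn's `f1_score`, not a `load_metric` init kwarg (values are illustrative):

```python
from datasets import load_metric

f1 = load_metric("f1")
# Kwargs like `average` belong to `compute`, which forwards them to
# sklearn's f1_score at computation time
result = f1.compute(predictions=[0, 1, 1, 2], references=[0, 1, 0, 2],
                    average="macro")
print(result)  # {'f1': ...}
```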
https://github.com/huggingface/datasets/issues/2663 | [`to_json`] add multi-proc sharding support | Hi @stas00,
I want to work on this issue, and I was thinking: why don't we use `imap` [in this loop](https://github.com/huggingface/datasets/blob/440b14d0dd428ae1b25881aa72ba7bbb8ad9ff84/src/datasets/io/json.py#L99)? This way, using the offset (which is used to slice the pyarrow table) we can convert the pyarrow table t...
I implemented sharded saving, which is much much faster - but the tqdm bars all overwrite each other, so it's hard to make sense of the progress, so if possible ideally this multi-proc support could be implemented internally i... | 139 | [`to_json`] add multi-proc sharding support
As discussed on slack it appears that `to_json` is quite slow on huge datasets like OSCAR.
I implemented sharded saving, which is much much faster - but the tqdm bars all overwrite each other, so it's hard to make sense of the progress, so if possible ideally this multi-... | [
-0.2420467287,
-0.3245051801,
-0.0334172957,
-0.0440205336,
-0.0977220982,
-0.0860113725,
0.4022953808,
0.1104016379,
-0.0120916683,
0.332368046,
-0.0399443097,
0.2764455676,
-0.1313704252,
0.2204845101,
-0.2396436185,
-0.0810557604,
0.1407508999,
-0.1243922263,
0.4335383475,
0... |
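A rough sketch of the `imap`-over-offsets idea from the comment above, assuming workers can see `dataset` (e.g. a fork start method); the names and batch size are illustrative, not the eventual implementation. The trailing-newline guard is an extra precaution so shards don't fuse at batch boundaries:

```python
from multiprocessing import Pool
from datasets import Dataset

dataset = Dataset.from_dict({"foo": list(range(100_000))})
batch_size = 10_000

def write_shard(offset):
    # Slice the underlying Arrow table and serialize only that slice
    batch = dataset.data.slice(offset, batch_size).to_pandas()
    return batch.to_json(orient="records", lines=True)

if __name__ == "__main__":
    offsets = range(0, len(dataset), batch_size)
    with Pool(4) as pool, open("out.jsonl", "w", encoding="utf-8") as f:
        for shard in pool.imap(write_shard, offsets):  # order-preserving
            f.write(shard if shard.endswith("\n") else shard + "\n")
```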
https://github.com/huggingface/datasets/issues/2655 | Allow the selection of multiple columns at once | Hi! I was looking into this and hope you can clarify a point. Your my_dataset variable would be of type DatasetDict which means the alternative you've described (dict comprehension) is what makes sense.
Is there a reason why you wouldn't want to convert my_dataset to a pandas df if you'd like to use it like one? Plea... | **Is your feature request related to a problem? Please describe.**
Similar to pandas, it would be great if we could select multiple columns at once.
**Describe the solution you'd like**
```python
my_dataset = ... # Has columns ['idx', 'sentence', 'label']
idx, label = my_dataset[['idx', 'label']]
```
**... | 64 | Allow the selection of multiple columns at once
**Is your feature request related to a problem? Please describe.**
Similar to pandas, it would be great if we could select multiple columns at once.
**Describe the solution you'd like**
```python
my_dataset = ... # Has columns ['idx', 'sentence', 'label']
id... | [
-0.0573707223,
-0.2296810895,
-0.1969854683,
0.0512580872,
0.201422736,
0.218650192,
0.5181803703,
0.1173711419,
0.3343808353,
0.4082848132,
-0.1473215967,
0.3830781877,
0.0007235444,
0.2195995152,
-0.2610400915,
-0.3490297794,
-0.1463272274,
0.1411354691,
0.0857270882,
0.01358... |
https://github.com/huggingface/datasets/issues/2655 | Allow the selection of multiple columns at once | Hi! Sorry for the delay.
In this case, the dataset would be a `datasets.Dataset` and we want to select multiple columns, the `idx` and `label` columns for example.
My issue is that my dataset is too big for memory if I load everything into pandas. | **Is your feature request related to a problem? Please describe.**
Similar to pandas, it would be great if we could select multiple columns at once.
**Describe the solution you'd like**
```python
my_dataset = ... # Has columns ['idx', 'sentence', 'label']
idx, label = my_dataset[['idx', 'label']]
```
**... | 45 | Allow the selection of multiple columns at once
**Is your feature request related to a problem? Please describe.**
Similar to pandas, it would be great if we could select multiple columns at once.
**Describe the solution you'd like**
```python
my_dataset = ... # Has columns ['idx', 'sentence', 'label']
id... | [
-0.1123059094,
-0.3079829514,
-0.1617852449,
0.1745086908,
0.2768076658,
0.2954531014,
0.4830074012,
0.1078619808,
0.2709564865,
0.4354279041,
-0.1071547046,
0.1811607331,
-0.0057087317,
0.1110231802,
-0.120576553,
-0.4149239063,
-0.146011427,
0.1239383668,
0.1093642265,
0.1609... |
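Until multi-column indexing exists, a memory-friendly workaround consistent with the constraint above is to drop the unwanted columns and index the rest; a sketch:

```python
from datasets import Dataset

ds = Dataset.from_dict({"idx": [0, 1], "sentence": ["a", "b"], "label": [0, 1]})

# Keep only the wanted columns, then read them; no full pandas copy needed
subset = ds.remove_columns(
    [c for c in ds.column_names if c not in ("idx", "label")]
)
idx, label = subset["idx"], subset["label"]
```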
https://github.com/huggingface/datasets/issues/2654 | Give a user feedback if the dataset he loads is streamable or not | I understand it already raises a `NotImplementedError` exception, eg:
```
>>> dataset = load_dataset("journalists_questions", name="plain_text", split="train", streaming=True)
[...]
NotImplementedError: Extraction protocol for file at https://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_... | **Is your feature request related to a problem? Please describe.**
I would love to know whether a `dataset` is streamable or not with the current implementation.
**Describe the solution you'd like**
We could show a warning when a dataset is loaded with `load_dataset('...', streaming=True)` when it's not streamable, e.g....
**Is your feature request related to a problem? Please describe.**
I would love to know whether a `dataset` is streamable or not with the current implementation.
**Describe the solution you'd like**
We could show a warning when a dataset is loaded wit... | [
-0.3121465743,
0.1084253937,
-0.0998006389,
0.061705064,
0.1377474368,
-0.1033981517,
0.1825838089,
0.2757679522,
-0.0348221101,
0.2482771128,
0.2372251749,
0.2212014049,
-0.4187543988,
0.2228206992,
-0.2498126775,
-0.105773896,
-0.1940761656,
0.2266801894,
0.1973601878,
0.0241... |
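In the meantime, catching the exception is one way to get that feedback programmatically; a user-side sketch, not the proposed built-in warning:

```python
from datasets import load_dataset

try:
    ds = load_dataset("journalists_questions", name="plain_text",
                      split="train", streaming=True)
except NotImplementedError as err:
    print(f"Not streamable with the current implementation: {err}")
```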
https://github.com/huggingface/datasets/issues/2653 | Add SD task for SUPERB | Note that this subset requires us to:
* generate the LibriMix corpus from LibriSpeech
* prepare the corpus for diarization
As suggested by @lhoestq we should perform these steps locally and add the prepared data to this public repo on the Hub: https://huggingface.co/datasets/superb/superb-data
Then we can use... | Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization).
Steps:
- [x] Generate the LibriMix corpus
- [x] Prepare the corpus for diarization
- [x] Up... | 94 | Add SD task for SUPERB
Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization).
Steps:
- [x] Generate the LibriMix corpus
- [x] Prepare the corpus ... | [
-0.2387333661,
-0.1557696164,
0.006651578,
0.1112359911,
0.3777109385,
-0.1991262138,
0.0082018431,
-0.1370198429,
0.1117519662,
0.3464578688,
-0.3380067348,
0.4965120256,
-0.1501336843,
0.4443321824,
0.2138431221,
0.2012201846,
0.0950578973,
0.2073051333,
-0.2991645932,
0.0513... |
https://github.com/huggingface/datasets/issues/2653 | Add SD task for SUPERB | @lewtun @lhoestq:
I have already generated the LibriMix corpus and prepared the corpus for diarization. The output is 3 dirs (train, dev, test), each containing 6 files: reco2dur, rttm, segments, spk2utt, utt2spk, wav.scp
Next steps:
- Upload these files to the superb-data repo
- Transcribe the correspondi... | Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization).
Steps:
- [x] Generate the LibriMix corpus
- [x] Prepare the corpus for diarization
- [x] Up... | 73 | Add SD task for SUPERB
Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization).
Steps:
- [x] Generate the LibriMix corpus
- [x] Prepare the corpus ... | [
-0.230632484,
-0.3324862719,
-0.0547318049,
0.0661656335,
0.3352738619,
-0.2812387347,
0.0837416053,
-0.0611490868,
-0.1144545525,
0.4088608325,
-0.4140784144,
0.4271571934,
-0.157126233,
0.3467776477,
0.1571530849,
0.0596583597,
0.0408409312,
0.2455735356,
-0.3144540191,
-0.04... |
https://github.com/huggingface/datasets/issues/2651 | Setting log level higher than warning does not suppress progress bar | Hi,
you can suppress progress bars by patching logging as follows:
```python
import datasets
import logging
datasets.logging.get_verbosity = lambda: logging.NOTSET
# map call ...
``` | ## Describe the bug
I would like to disable progress bars for the `.map` method (and other methods like `.filter` and `load_dataset` as well).
According to #1627 one can suppress it by setting log level higher than `warning`, however doing so doesn't suppress it with version 1.9.0.
I also tried to set `DATASETS_VERBOS... | 25 | Setting log level higher than warning does not suppress progress bar
## Describe the bug
I would like to disable progress bars for the `.map` method (and other methods like `.filter` and `load_dataset` as well).
According to #1627 one can suppress it by setting log level higher than `warning`, however doing so doesn't ... | [
-0.4419383407,
-0.1637800634,
0.087679781,
-0.1625065655,
0.1394934207,
-0.0227062069,
0.4293663502,
0.2288746685,
0.0989456102,
0.1445455998,
0.1754807979,
0.6159741282,
-0.1407675296,
0.1008442417,
-0.193881169,
0.1940041333,
0.0229355339,
0.0618276671,
0.097369723,
-0.005187... |
https://github.com/huggingface/datasets/issues/2651 | Setting log level higher than warning does not suppress progress bar | Note also that you can disable the progress bar with
```python
from datasets.utils import disable_progress_bar
disable_progress_bar()
```
See https://github.com/huggingface/datasets/blob/8814b393984c1c2e1800ba370de2a9f7c8644908/src/datasets/utils/tqdm_utils.py#L84 | ## Describe the bug
I would like to disable progress bars for the `.map` method (and other methods like `.filter` and `load_dataset` as well).
According to #1627 one can suppress it by setting log level higher than `warning`, however doing so doesn't suppress it with version 1.9.0.
I also tried to set `DATASETS_VERBOS... | 19 | Setting log level higher than warning does not suppress progress bar
## Describe the bug
I would like to disable progress bars for the `.map` method (and other methods like `.filter` and `load_dataset` as well).
According to #1627 one can suppress it by setting log level higher than `warning`, however doing so doesn't ... | [
-0.4388270676,
-0.2037363797,
0.0796425939,
-0.1522615999,
0.1692188233,
-0.0159657318,
0.4777555466,
0.2021974921,
0.0318350941,
0.1475352794,
0.1682005376,
0.5794952512,
-0.1528974026,
0.1000313386,
-0.1835142821,
0.1944967359,
0.0263120402,
0.0424374044,
0.0457619987,
0.0004... |
https://github.com/huggingface/datasets/issues/2646 | downloading of yahoo_answers_topics dataset failed | Hi ! I just tested and it worked fine today for me.
I think this is because the dataset is stored on Google Drive which has a quota limit for the number of downloads per day, see this similar issue https://github.com/huggingface/datasets/issues/996
Feel free to try again today, now that the quota was reset | ## Describe the bug
I get an error datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files when I try to download the yahoo_answers_topics dataset
## Steps to reproduce the bug
self.dataset = load_dataset(
'yahoo_answers_topics', cache_dir=self.config... | 53 | downloading of yahoo_answers_topics dataset failed
## Describe the bug
I get an error datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files when I try to download the yahoo_answers_topics dataset
## Steps to reproduce the bug
self.dataset = load_dataset(
... | [
-0.4173351228,
0.1438480616,
-0.0522664301,
0.21039249,
0.2381460518,
-0.0566483699,
0.2377112359,
0.2905798256,
0.1442445517,
0.0573150925,
-0.0823222175,
-0.0347227193,
0.0095931776,
0.2967810929,
-0.114185296,
0.153219834,
0.1207121015,
-0.2423341125,
-0.3290194273,
0.187282... |
https://github.com/huggingface/datasets/issues/2645 | load_dataset processing failed with OS error after downloading a dataset | Hi ! It looks like an issue with pytorch.
Could you try to run `import torch` and see if it raises an error ? | ## Describe the bug
After downloading a dataset like opus100, there is a bug:
OSError: Cannot find data file.
Original error:
dlopen: cannot load any more object with static TLS
## Steps to reproduce the bug
```python
from datasets import load_dataset
this_dataset = load_dataset('opus100', 'af-en')
```
... | 24 | load_dataset processing failed with OS error after downloading a dataset
## Describe the bug
After downloading a dataset like opus100, there is a bug:
OSError: Cannot find data file.
Original error:
dlopen: cannot load any more object with static TLS
## Steps to reproduce the bug
```python
from datasets... | [
-0.4430501163,
0.2031194568,
-0.0454363041,
0.4847755432,
0.3039819002,
-0.0233754013,
0.234968856,
0.3660462201,
-0.1082288623,
0.0927523896,
-0.1625689119,
0.4678140879,
-0.0074350247,
-0.0836083069,
-0.0174170695,
-0.1323248595,
0.0387221277,
0.2778642476,
-0.6045761108,
-0.... |
https://github.com/huggingface/datasets/issues/2645 | load_dataset processing failed with OS error after downloading a dataset | > Hi ! It looks like an issue with pytorch.
>
> Could you try to run `import torch` and see if it raises an error ?
It works. Thank you! | ## Describe the bug
After downloading a dataset like opus100, there is a bug:
OSError: Cannot find data file.
Original error:
dlopen: cannot load any more object with static TLS
## Steps to reproduce the bug
```python
from datasets import load_dataset
this_dataset = load_dataset('opus100', 'af-en')
```
... | 31 | load_dataset processing failed with OS error after downloading a dataset
## Describe the bug
After downloading a dataset like opus100, there is a bug:
OSError: Cannot find data file.
Original error:
dlopen: cannot load any more object with static TLS
## Steps to reproduce the bug
```python
from datasets... | [
-0.4430501163,
0.2031194568,
-0.0454363041,
0.4847755432,
0.3039819002,
-0.0233754013,
0.234968856,
0.3660462201,
-0.1082288623,
0.0927523896,
-0.1625689119,
0.4678140879,
-0.0074350247,
-0.0836083069,
-0.0174170695,
-0.1323248595,
0.0387221277,
0.2778642476,
-0.6045761108,
-0.... |
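The exchange above suggests the classic static-TLS workaround: import torch before anything that loads it indirectly. A sketch (an assumption based on this thread, not verified on every platform):

```python
# Importing torch first can avoid "dlopen: cannot load any more object with
# static TLS" on older glibc setups (a workaround, not a datasets fix)
import torch  # noqa: F401
from datasets import load_dataset

this_dataset = load_dataset("opus100", "af-en")
```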
https://github.com/huggingface/datasets/issues/2644 | Batched `map` not allowed to return 0 items | Hi ! Thanks for reporting. Indeed it looks like type inference makes it fail. We should probably just ignore this step until a non-empty batch is passed. | ## Describe the bug
I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingface.co/docs/datasets/processing.html#augmenting... | 27 | Batched `map` not allowed to return 0 items
## Describe the bug
I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingfa... | [
-0.2328358591,
-0.3903884888,
-0.0444569401,
0.1644898802,
-0.1085447446,
-0.0687688664,
0.1163394749,
0.4056321681,
0.6400415897,
0.1178811938,
-0.0565715581,
0.2460853755,
-0.3954411447,
0.1043542847,
-0.1629213542,
0.1608026028,
-0.0449330881,
0.1405792534,
-0.1691172719,
-0... |
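A distilled sketch of the failing pattern described above, where a batch legitimately shrinks to zero rows and type inference on the empty batch raises (file names are illustrative; this reproduces the bug rather than fixing it):

```python
import os
from datasets import Dataset

ds = Dataset.from_dict({"path": ["exists.txt", "missing.txt"]})

def keep_existing(batch):
    kept = [p for p in batch["path"] if os.path.exists(p)]
    return {"path": kept}  # may be an empty batch, which triggers the error

ds = ds.map(keep_existing, batched=True, batch_size=1)
```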
https://github.com/huggingface/datasets/issues/2644 | Batched `map` not allowed to return 0 items | Sounds good! Do you want me to propose a PR? I'm quite busy right now, but if it's not too urgent I could take a look next week. | ## Describe the bug
I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingface.co/docs/datasets/processing.html#augmenting... | 28 | Batched `map` not allowed to return 0 items
## Describe the bug
I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingfa... | [
-0.2328358591,
-0.3903884888,
-0.0444569401,
0.1644898802,
-0.1085447446,
-0.0687688664,
0.1163394749,
0.4056321681,
0.6400415897,
0.1178811938,
-0.0565715581,
0.2460853755,
-0.3954411447,
0.1043542847,
-0.1629213542,
0.1608026028,
-0.0449330881,
0.1405792534,
-0.1691172719,
-0... |
https://github.com/huggingface/datasets/issues/2644 | Batched `map` not allowed to return 0 items | Sure if you're interested feel free to open a PR :)
You can also ping me anytime if you have questions or if I can help ! | ## Describe the bug
I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingface.co/docs/datasets/processing.html#augmenting... | 27 | Batched `map` not allowed to return 0 items
## Describe the bug
I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingfa... | [
-0.2328358591,
-0.3903884888,
-0.0444569401,
0.1644898802,
-0.1085447446,
-0.0687688664,
0.1163394749,
0.4056321681,
0.6400415897,
0.1178811938,
-0.0565715581,
0.2460853755,
-0.3954411447,
0.1043542847,
-0.1629213542,
0.1608026028,
-0.0449330881,
0.1405792534,
-0.1691172719,
-0... |
https://github.com/huggingface/datasets/issues/2644 | Batched `map` not allowed to return 0 items | Sorry to ping you, @lhoestq, did you have a chance to take a look at the proposed PR? Thank you! | ## Describe the bug
I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingface.co/docs/datasets/processing.html#augmenting... | 20 | Batched `map` not allowed to return 0 items
## Describe the bug
I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingfa... | [
-0.2328358591,
-0.3903884888,
-0.0444569401,
0.1644898802,
-0.1085447446,
-0.0687688664,
0.1163394749,
0.4056321681,
0.6400415897,
0.1178811938,
-0.0565715581,
0.2460853755,
-0.3954411447,
0.1043542847,
-0.1629213542,
0.1608026028,
-0.0449330881,
0.1405792534,
-0.1691172719,
-0... |
https://github.com/huggingface/datasets/issues/2644 | Batched `map` not allowed to return 0 items | Yes and it's all good, thank you :)
Feel free to close this issue if it's good for you | ## Describe the bug
I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingface.co/docs/datasets/processing.html#augmenting... | 19 | Batched `map` not allowed to return 0 items
## Describe the bug
I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingfa... | [
-0.2328358591,
-0.3903884888,
-0.0444569401,
0.1644898802,
-0.1085447446,
-0.0687688664,
0.1163394749,
0.4056321681,
0.6400415897,
0.1178811938,
-0.0565715581,
0.2460853755,
-0.3954411447,
0.1043542847,
-0.1629213542,
0.1608026028,
-0.0449330881,
0.1405792534,
-0.1691172719,
-0... |
https://github.com/huggingface/datasets/issues/2643 | Enum used in map functions will raise a RecursionError with dill. | I'm running into this as well. (Thank you so much for reporting @jorgeecardona — was staring at this massive stack trace and unsure what exactly was wrong!) | ## Describe the bug
Enums used in functions passed to `map` will fail at pickling with a maximum recursion exception as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284
In my particular case, I use an enum to define an argument with fixed options using the `TrainingArguments` ... | 27 | Enum used in map functions will raise a RecursionError with dill.
## Describe the bug
Enums used in functions passed to `map` will fail at pickling with a maximum recursion exception as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284
In my particular case, I use an enum to d... | [
0.1053605825,
0.1992539614,
0.0405321009,
0.1406992674,
-0.0743715391,
-0.0783434883,
0.1477603465,
0.2337794751,
0.1263177246,
0.1690645218,
0.1453329176,
0.9332158566,
-0.290869683,
-0.3072979152,
-0.0783376619,
0.0934458673,
0.022367673,
0.0455224141,
-0.618113935,
-0.245678... |
https://github.com/huggingface/datasets/issues/2643 | Enum used in map functions will raise a RecursionError with dill. | Hi ! Thanks for reporting :)
Until this is fixed on `dill`'s side, we could implement a custom save in our Pickler defined in utils/py_utils.py
There is already a suggestion in this message about how to do it:
https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284
Let me know if such a worka... | ## Describe the bug
Enums used in functions passed to `map` will fail at pickling with a maximum recursion exception as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284
In my particular case, I use an enum to define an argument with fixed options using the `TrainingArguments` ... | 61 | Enum used in map functions will raise a RecursionError with dill.
## Describe the bug
Enums used in functions passed to `map` will fail at pickling with a maximum recursion exception as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284
In my particular case, I use an enum to d... | [
0.1053605825,
0.1992539614,
0.0405321009,
0.1406992674,
-0.0743715391,
-0.0783434883,
0.1477603465,
0.2337794751,
0.1263177246,
0.1690645218,
0.1453329176,
0.9332158566,
-0.290869683,
-0.3072979152,
-0.0783376619,
0.0934458673,
0.022367673,
0.0455224141,
-0.618113935,
-0.245678... |
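Until the Pickler-side fix lands, one user-side workaround (an assumption, not the fix discussed above) is to close over the enum's plain value so dill never has to pickle the Enum class; the enum here is hypothetical:

```python
from enum import Enum
from datasets import Dataset

class Masking(Enum):  # hypothetical enum, for illustration only
    SPAN = "span"
    TOKEN = "token"

ds = Dataset.from_dict({"text": ["a", "b"]})

mode = Masking.SPAN.value  # a plain str; the Enum member never enters the closure
ds = ds.map(lambda ex: {"text": ex["text"], "mode": mode})
```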
https://github.com/huggingface/datasets/issues/2643 | Enum used in map functions will raise a RecursionError with dill. | I have the same bug.
the code is as follows:

the error is:

Look for the solution for thi... | ## Describe the bug
Enums used in functions passed to `map` will fail at pickling with a maximum recursion exception as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284
In my particular case, I use an enum to define an argument with fixed options using the `TrainingArguments` ... | 22 | Enum used in map functions will raise a RecursionError with dill.
## Describe the bug
Enums used in functions passed to `map` will fail at pickling with a maximum recursion exception as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284
In my particular case, I use an enum to d... | [
0.1053605825,
0.1992539614,
0.0405321009,
0.1406992674,
-0.0743715391,
-0.0783434883,
0.1477603465,
0.2337794751,
0.1263177246,
0.1690645218,
0.1453329176,
0.9332158566,
-0.290869683,
-0.3072979152,
-0.0783376619,
0.0934458673,
0.022367673,
0.0455224141,
-0.618113935,
-0.245678... |
https://github.com/huggingface/datasets/issues/2643 | Enum used in map functions will raise a RecursionError with dill. | Hi ! I think your RecursionError comes from a different issue @BitcoinNLPer , could you open a separate issue please ?
Also which dataset are you using ? I tried loading `CodedotAI/code_clippy` but I get a different error
```python
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "... | ## Describe the bug
Enums used in functions passed to `map` will fail at pickling with a maximum recursion exception as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284
In my particular case, I use an enum to define an argument with fixed options using the `TrainingArguments` ... | 144 | Enum used in map functions will raise a RecursionError with dill.
## Describe the bug
Enums used in functions passed to `map` will fail at pickling with a maximum recursion exception as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284
In my particular case, I use an enum to d... | [
0.1053605825,
0.1992539614,
0.0405321009,
0.1406992674,
-0.0743715391,
-0.0783434883,
0.1477603465,
0.2337794751,
0.1263177246,
0.1690645218,
0.1453329176,
0.9332158566,
-0.290869683,
-0.3072979152,
-0.0783376619,
0.0934458673,
0.022367673,
0.0455224141,
-0.618113935,
-0.245678... |
https://github.com/huggingface/datasets/issues/2642 | Support multi-worker with streaming dataset (IterableDataset). | Hi ! This is a great idea :)
I think we could have something similar to what we have in `datasets.Dataset.map`, i.e. a `num_proc` parameter that tells how many processes to spawn to parallelize the data processing.
Regarding AUTOTUNE, this could be a nice feature as well, we could see how to add it in a second ste... | **Is your feature request related to a problem? Please describe.**
The current `.map` does not support multi-processing; the CPU can become a bottleneck if the pre-processing is complex (e.g. t5 span masking).
**Describe the solution you'd like**
Ideally `.map` should support multi-worker like tfds, with `AUTOTUNE`.
**D... | 58 | Support multi-worker with streaming dataset (IterableDataset).
**Is your feature request related to a problem? Please describe.**
The current `.map` does not support multi-processing; the CPU can become a bottleneck if the pre-processing is complex (e.g. t5 span masking).
**Describe the solution you'd like**
Ideally `.ma... | [
-0.6293720007,
-0.5262187719,
-0.1424406171,
-0.0420490392,
-0.1927179545,
-0.0104304729,
0.5165581107,
0.1782798469,
0.0827182531,
0.1480464339,
-0.084755674,
0.3020121157,
-0.2687704265,
0.2669042945,
-0.0826475248,
-0.2372147143,
-0.1250283271,
0.0643195212,
-0.0082344916,
0... |
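For reference, the existing non-streaming API the comment points to; the feature request asks for an analogous knob on the streaming `IterableDataset`, which does not exist as of this thread:

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")  # regular, non-streaming Dataset
# The parallelism the comment refers to: spawn 4 processes for preprocessing
ds = ds.map(lambda ex: {"n_chars": len(ex["text"])}, num_proc=4)
```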
https://github.com/huggingface/datasets/issues/2641 | load_dataset("financial_phrasebank") NonMatchingChecksumError | Hi! It's probably because this dataset is stored on Google Drive, which has a per-day quota limit. It should work if you retry; I was able to initiate the download.
Similar issue [here](https://github.com/huggingface/datasets/issues/2646) | ## Describe the bug
Attempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("financial_phrasebank", 'sentences_allagree')
```
## Expected results
I expect to see the financi... | 35 | load_dataset("financial_phrasebank") NonMatchingChecksumError
## Describe the bug
Attempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("financial_phrasebank", 'sentences_all... | [
-0.1221616641,
0.2191511244,
-0.078407377,
0.2884526253,
0.2203613222,
0.1347172856,
0.0658891574,
0.3310612738,
0.2095744461,
0.1821720004,
-0.1414381266,
0.0533877052,
0.1674160063,
0.0750156567,
-0.1415329278,
0.0439131409,
0.042703405,
-0.0287594274,
0.0304530486,
-0.108990... |
https://github.com/huggingface/datasets/issues/2641 | load_dataset("financial_phrasebank") NonMatchingChecksumError | Hi ! Loading the dataset works on my side as well.
Feel free to try again and let us know if it works for you now
Attempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("financial_phrasebank", 'sentences_allagree')
```
## Expected results
I expect to see the financi... | 26 | load_dataset("financial_phrasebank") NonMatchingChecksumError
## Describe the bug
Attempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("financial_phrasebank", 'sentences_all... | [
-0.1198335662,
0.1794099808,
-0.0883816183,
0.3396990001,
0.1879687905,
0.1173867434,
0.0060674399,
0.3996657729,
0.1419465989,
0.1976888478,
-0.1613842696,
0.2437269837,
0.2304035574,
-0.1089785174,
-0.0927742943,
0.1406146437,
0.133822009,
0.0298897363,
0.0345544443,
-0.16684... |
https://github.com/huggingface/datasets/issues/2641 | load_dataset("financial_phrasebank") NonMatchingChecksumError | Thank you! I've been trying periodically for the past month, and no luck yet with this particular dataset. Just tried again and still hitting the checksum error.
Code:
`dataset = load_dataset("financial_phrasebank", "sentences_allagree") `
Traceback:
```
----------------------------------------------------... | ## Describe the bug
Attempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("financial_phrasebank", 'sentences_allagree')
```
## Expected results
I expect to see the financi... | 174 | load_dataset("financial_phrasebank") NonMatchingChecksumError
## Describe the bug
Attempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("financial_phrasebank", 'sentences_all... | [
-0.1219039708,
0.2504357994,
-0.0632825419,
0.3291989267,
0.1633205563,
0.1306017339,
0.0294304229,
0.3981848061,
0.1042742431,
0.1001081094,
-0.237029925,
0.2519071102,
0.2408211976,
-0.180886507,
-0.1394714266,
0.1404774636,
0.1074731424,
0.0431748442,
-0.0138182454,
-0.09089... |
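When retrying after the quota reset, discarding any cached partial download may help, since a stale file keeps failing the checksum check (a suggestion based on the similar Google Drive issues cited above, not confirmed here):

```python
from datasets import load_dataset

# A stale partial download can keep tripping the checksum verification,
# so force a fresh one on the retry
dataset = load_dataset("financial_phrasebank", "sentences_allagree",
                       download_mode="force_redownload")
```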
https://github.com/huggingface/datasets/issues/2630 | Progress bars are not properly rendered in Jupyter notebook | To add my experience when trying to debug this issue:
It seems the workaround given [here](https://github.com/tqdm/tqdm/issues/485#issuecomment-473338308) previously resolved this issue. But with the latest version of jupyter/tqdm I still get terminal warnings that IPython tried to send a message from a forke...
The progress bars are not Jupyter widgets; regular progress bars appear (like in a terminal).
## Steps to reproduce the bug
```python
ds.map(tokenize, num_proc=10)
```
## Expected results
Jupyter widgets displaying the progress bars.
## Actual results
Simple plain progress bars.
cc... | 44 | Progress bars are not properly rendered in Jupyter notebook
## Describe the bug
The progress bars are not Jupyter widgets; regular progress bars appear (like in a terminal).
## Steps to reproduce the bug
```python
ds.map(tokenize, num_proc=10)
```
## Expected results
Jupyter widgets displaying the progress... | [
0.1817371398,
0.0266525224,
-0.0048843366,
0.056550622,
0.1184259281,
-0.3483625352,
0.5567898154,
0.417384088,
-0.2593961954,
0.0024574315,
-0.2125668824,
0.6025976539,
0.2658940554,
0.1672157794,
0.0038400502,
-0.2379226089,
-0.1584529132,
0.1035000682,
-0.3469717205,
0.00248... |
https://github.com/huggingface/datasets/issues/2630 | Progress bars are not properly rendered in Jupyter notebook | Hi @mludv, thanks for the hint!!! :)
We will definitely take it into account to try to fix this issue... It seems somehow related to `multiprocessing` and `tqdm`... | ## Describe the bug
The progress bars are not Jupyter widgets; regular progress bars appear (like in a terminal).
## Steps to reproduce the bug
```python
ds.map(tokenize, num_proc=10)
```
## Expected results
Jupyter widgets displaying the progress bars.
## Actual results
Simple plain progress bars.
cc... | 28 | Progress bars are not properly rendered in Jupyter notebook
## Describe the bug
The progress bars are not Jupyter widgets; regular progress bars appear (like in a terminal).
## Steps to reproduce the bug
```python
ds.map(tokenize, num_proc=10)
```
## Expected results
Jupyter widgets displaying the progress... | [
0.0840750486,
-0.017447263,
-0.0553434193,
0.1467485279,
0.1418052316,
-0.2826744616,
0.4365518093,
0.3081711829,
-0.2813869119,
0.1335178465,
-0.1734684259,
0.495898366,
0.3021129668,
0.3606436551,
-0.0934586599,
-0.2819550335,
-0.0838801637,
0.1640948057,
-0.3097081482,
-0.00... |
https://github.com/huggingface/datasets/issues/2629 | Load datasets from the Hub without requiring a dataset script | This is so cool, let us know if we can help with anything on the hub side (@Pierrci @elishowk) 🎉 | As a user I would like to be able to upload my csv/json/text/parquet/etc. files in a dataset repository on the Hugging Face Hub and be able to load this dataset with `load_dataset` without having to implement a dataset script.
Moreover, I would like to be able to specify which file goes into which split using the `da...
As a user I would like to be able to upload my csv/json/text/parquet/etc. files in a dataset repository on the Hugging Face Hub and be able to load this dataset with `load_dataset` without having to implement a dataset script.
Moreover I would like to b... | [
-0.4575759768,
-0.0447010323,
-0.0421365835,
0.1991640925,
-0.1262193471,
0.1635168195,
0.3772644997,
0.1787042916,
0.4132858813,
0.1388911754,
-0.239180699,
0.3387720585,
-0.0637955964,
0.5494148135,
0.2678031325,
0.1731882095,
0.1355400532,
0.3373375237,
0.0156678688,
0.10871... |
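A sketch of what the request describes, using the existing `data_files` mapping; the repo name and file names are hypothetical:

```python
from datasets import load_dataset

# Plain CSV files in a Hub dataset repo, no dataset script required
ds = load_dataset(
    "username/my-dataset",  # hypothetical repo on the Hugging Face Hub
    data_files={"train": "train.csv", "test": "test.csv"},
)
```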
https://github.com/huggingface/datasets/issues/2622 | Integration with AugLy | Hi,
you can define your own custom formatting with `Dataset.set_transform()` and then run the tokenizer with the batches of augmented data as follows:
```python
dset = load_dataset("imdb", split="train") # Let's say we are working with the IMDB dataset
dset.set_transform(lambda ex: {"text": augly_text_augmentati... | **Is your feature request related to a problem? Please describe.**
Facebook recently launched a library, [AugLy](https://github.com/facebookresearch/AugLy) , that has a unified API for augmentations for image, video and text.
It would be pretty exciting to have it hooked up to HF libraries so that we can make NLP m... | 68 | Integration with AugLy
**Is your feature request related to a problem? Please describe.**
Facebook recently launched a library, [AugLy](https://github.com/facebookresearch/AugLy) , that has a unified API for augmentations for image, video and text.
It would be pretty exciting to have it hooked up to HF libraries ... | [
-0.1749235094,
-0.1920809746,
-0.143452093,
-0.2396784276,
0.1912635118,
0.0263555869,
0.1673914492,
0.3273122907,
-0.3234901428,
0.0160068404,
0.0447254367,
-0.0170346815,
-0.2608953416,
0.1105001718,
-0.0723291188,
-0.1290824413,
0.1623858362,
0.128782928,
0.0835959092,
-0.06... |
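A fuller sketch of the `set_transform` pattern from the AugLy record above; the augmentation function (`augly.text.simulate_typos`) is an assumed stand-in, since the original call is truncated:
```python
# Sketch, not the original snippet: lazy AugLy text augmentation via set_transform.
# Assumes `pip install augly`; simulate_typos is just one example augmentation.
import augly.text as txtaugs
from datasets import load_dataset

dset = load_dataset("imdb", split="train")

# The transform is applied lazily to each accessed batch, so augmented text
# is produced on the fly and never written to the Arrow cache.
dset.set_transform(lambda batch: {"text": txtaugs.simulate_typos(batch["text"])})

print(dset[0]["text"])  # augmented on access
```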
https://github.com/huggingface/datasets/issues/2618 | `filelock.py` Error | Hi @liyucheng09, thanks for reporting.
Apparently this issue has to do with your environment setup. One question: is your data in an NFS share? Some people have reported this error when using `fcntl` to write to an NFS share... If this is the case, then it might be that your NFS just may not be set up to provide fil... | ## Describe the bug
It seems that `filelock.py` raised an error.
```
>>> ds=load_dataset('xsum')
^CTraceback (most recent call last):
File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire
fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
... | 84 | `filelock.py` Error
## Describe the bug
It seems that `filelock.py` raised an error.
```
>>> ds=load_dataset('xsum')
^CTraceback (most recent call last):
File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire
fcntl.flock(fd, fcntl.LOC... | [
0.1390440613,
-0.3473014235,
0.0151471188,
0.0418611653,
0.0302763786,
0.090704754,
0.1623595655,
0.1356667578,
0.0592381507,
0.0936225057,
-0.1010854244,
0.3403193057,
-0.0329611972,
-0.4240654111,
-0.4592514932,
0.1141828448,
0.0069275438,
-0.0698582307,
-0.1149569228,
-0.013... |
https://github.com/huggingface/datasets/issues/2615 | Jsonlines export error | For some reason this happens (both `datasets` versions are on master) only on Python 3.6 and not Python 3.8. | ## Describe the bug
When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is 10000 by default
## Steps to reproduce the bug
This wha... | 19 | Jsonlines export error
## Describe the bug
When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is 10000 by default
## Steps to re... | [
-0.3504131138,
0.1333581656,
-0.0069016716,
0.3097095191,
0.0389033817,
0.0805481821,
0.2188648731,
0.3883451223,
0.0006039952,
-0.0402303115,
0.323100239,
0.0345941819,
0.0673165172,
0.0446975082,
-0.1680246592,
-0.1674066037,
0.2194831222,
-0.0308502372,
0.0336684026,
0.22254... |
https://github.com/huggingface/datasets/issues/2615 | Jsonlines export error | @TevenLeScao we are using `pandas` to serialize the dataset to JSON Lines. So it must be due to pandas. Could you please check the pandas version causing the issue? | ## Describe the bug
When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is 10000 by default
## Steps to reproduce the bug
This wha... | 29 | Jsonlines export error
## Describe the bug
When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is 10000 by default
## Steps to re... | [
-0.3529365957,
0.13444525,
-0.0139708584,
0.26991117,
0.0913658589,
0.0805773661,
0.275898993,
0.42092067,
-0.0012246571,
-0.054687053,
0.3439743519,
0.0168314911,
0.1050094962,
0.1663787067,
-0.2123270035,
-0.219845593,
0.2366725057,
0.0008693534,
-0.0046990626,
0.2111836821,
... |
https://github.com/huggingface/datasets/issues/2615 | Jsonlines export error | @TevenLeScao I have just checked it: this was a bug in `pandas` and it was fixed in version 1.2: https://github.com/pandas-dev/pandas/pull/36898 | ## Describe the bug
When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is 10000 by default
## Steps to reproduce the bug
This wha... | 20 | Jsonlines export error
## Describe the bug
When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is 10000 by default
## Steps to re... | [
-0.3412717581,
0.1240301281,
-0.0055020838,
0.2580175102,
0.1083405316,
0.0757044032,
0.3035868406,
0.4069504142,
0.0264131054,
-0.0411830097,
0.2745396495,
0.0067484351,
0.1387159526,
0.1741789728,
-0.2055494934,
-0.2439864427,
0.2017902285,
-0.0212470349,
0.0084575042,
0.2275... |
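Given that the comments above trace the bug to pandas and note it was fixed in pandas 1.2 (pandas-dev/pandas#36898), a quick environment check (an illustrative snippet, not from the thread):
```python
# Illustrative triage step: the concatenated-lines bug was fixed in pandas 1.2,
# so verifying the installed version tells you whether you are affected.
import pandas as pd

print(pd.__version__)  # if < 1.2, upgrade: pip install -U "pandas>=1.2"
```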
https://github.com/huggingface/datasets/issues/2615 | Jsonlines export error | Sorry, I was also talking to teven offline so I already had the PR ready before noticing x) | ## Describe the bug
When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is 10000 by default
## Steps to reproduce the bug
This wha... | 18 | Jsonlines export error
## Describe the bug
When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is 10000 by default
## Steps to re... | [
-0.3764410317,
0.0621024482,
-0.0249255598,
0.2861750722,
0.0446195416,
0.0809500068,
0.1558461785,
0.3971742094,
0.0510586612,
-0.0476371646,
0.3336996436,
0.0041158954,
0.0834180042,
0.1582750827,
-0.1321930289,
-0.1995780915,
0.1929605901,
-0.0722631887,
0.0871970132,
0.2302... |
https://github.com/huggingface/datasets/issues/2615 | Jsonlines export error | I was also already working in my PR... Nevermind. Next time we should pay attention if there is somebody (self-)assigned to an issue and if he/she is still working on it before overtaking it... 😄 | ## Describe the bug
When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is 10000 by default
## Steps to reproduce the bug
This wha... | 35 | Jsonlines export error
## Describe the bug
When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is 10000 by default
## Steps to re... | [
-0.3961874247,
0.1412501335,
-0.0447770469,
0.2652051747,
0.055289939,
0.0680766329,
0.1558923125,
0.3812046051,
-0.0110890446,
-0.0350051261,
0.3722907007,
0.0118640503,
0.0816402584,
0.1777884662,
-0.1461666673,
-0.1732497513,
0.1780271083,
-0.0427164473,
0.0344857797,
0.2413... |
https://github.com/huggingface/datasets/issues/2607 | Streaming local gzip compressed JSON line files is not working | Hi @thomwolf, thanks for reporting.
It seems this might be due to the fact that the JSON Dataset builder uses `pyarrow.json` (`paj.read_json`) to read the data without using the Python standard `open(file,...` (which is the one patched with `xopen` to work in streaming mode).
This has to be fixed. | ## Describe the bug
Using streaming to iterate on local gzip compressed JSON files raise a file not exist error
## Steps to reproduce the bug
```python
from datasets import load_dataset
streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True)
next(iter(streamed_dataset))... | 49 | Streaming local gzip compressed JSON line files is not working
## Describe the bug
Using streaming to iterate on local gzip compressed JSON files raise a file not exist error
## Steps to reproduce the bug
```python
from datasets import load_dataset
streamed_dataset = load_dataset('json', split='train', data_... | [
-0.19924891,
-0.1339782476,
-0.0091442792,
0.2391349524,
0.0777639374,
0.0168800782,
0.39767465,
0.5178916454,
0.1970214248,
0.0789983049,
0.1030014679,
0.3306829929,
-0.0883667767,
0.020873908,
0.2288867384,
-0.1809342057,
-0.0436619073,
0.3560612798,
-0.0251666978,
0.00158500... |
https://github.com/huggingface/datasets/issues/2607 | Streaming local gzip compressed JSON line files is not working | Sorry for reopening this, but I'm having the same issue as @thomwolf when streaming a gzipped JSON Lines file from the hub. Or is that just not possible by definition?
I installed `datasets` in editable mode from source (so it probably includes the fix from #2608?):
```
>>> datasets.__version__
'1.9.1.dev0'
```
`... | ## Describe the bug
Using streaming to iterate on local gzip compressed JSON files raise a file not exist error
## Steps to reproduce the bug
```python
from datasets import load_dataset
streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True)
next(iter(streamed_dataset))... | 167 | Streaming local gzip compressed JSON line files is not working
## Describe the bug
Using streaming to iterate on local gzip compressed JSON files raise a file not exist error
## Steps to reproduce the bug
```python
from datasets import load_dataset
streamed_dataset = load_dataset('json', split='train', data_... | [
-0.19924891,
-0.1339782476,
-0.0091442792,
0.2391349524,
0.0777639374,
0.0168800782,
0.39767465,
0.5178916454,
0.1970214248,
0.0789983049,
0.1030014679,
0.3306829929,
-0.0883667767,
0.020873908,
0.2288867384,
-0.1809342057,
-0.0436619073,
0.3560612798,
-0.0251666978,
0.00158500... |
https://github.com/huggingface/datasets/issues/2607 | Streaming local gzip compressed JSON line files is not working | Hi ! To make the streaming work, we extend `open` in the dataset builder to work with urls.
Therefore you just need to use `open` before using `gzip.open`:
```diff
- with gzip.open(file, "rt", encoding="utf-8") as f:
+ with gzip.open(open(file, "rb"), "rt", encoding="utf-8") as f:
```
You can see that it is t... | ## Describe the bug
Using streaming to iterate on local gzip compressed JSON files raise a file not exist error
## Steps to reproduce the bug
```python
from datasets import load_dataset
streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True)
next(iter(streamed_dataset))... | 61 | Streaming local gzip compressed JSON line files is not working
## Describe the bug
Using streaming to iterate on local gzip compressed JSON files raise a file not exist error
## Steps to reproduce the bug
```python
from datasets import load_dataset
streamed_dataset = load_dataset('json', split='train', data_... | [
-0.19924891,
-0.1339782476,
-0.0091442792,
0.2391349524,
0.0777639374,
0.0168800782,
0.39767465,
0.5178916454,
0.1970214248,
0.0789983049,
0.1030014679,
0.3306829929,
-0.0883667767,
0.020873908,
0.2288867384,
-0.1809342057,
-0.0436619073,
0.3560612798,
-0.0251666978,
0.00158500... |
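A minimal sketch of that pattern inside a loading-script generator (the function body and the `json` parsing are assumptions; only the `gzip.open(open(file, "rb"), ...)` idiom comes from the comment above):
```python
# Sketch: reading a (possibly remote) gzipped JSON Lines file so it also works
# in streaming mode, where `datasets` extends the builtin `open` to accept URLs.
import gzip
import json

def generate_examples(file):
    with gzip.open(open(file, "rb"), "rt", encoding="utf-8") as f:
        for i, line in enumerate(f):
            yield i, json.loads(line)
```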
https://github.com/huggingface/datasets/issues/2604 | Add option to delete temporary files (e.g. extracted files) when loading dataset | Hi !
If we want something more general, we could either
1. delete the extracted files after the arrow data generation automatically, or
2. delete each extracted file during the arrow generation right after it has been closed.
Solution 2 is better to save disk space during the arrow generation. Is it what you had... | I'm loading a dataset constituted of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180GB of arrow cache tables
Having a simple way to delete the extracted files after usage (or even better, to strea... | 129 | Add option to delete temporary files (e.g. extracted files) when loading dataset
I'm loading a dataset constituted of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180GB of arrow cache tables
Havi... | [
-0.0422240309,
-0.0520783477,
-0.137564823,
0.2491360605,
-0.0890201479,
0.104098551,
-0.1198281944,
0.1847193539,
0.2342312187,
0.2407585382,
0.1567290723,
0.5178807974,
-0.3404890597,
-0.0508526862,
-0.1349283308,
-0.0202004369,
-0.19922553,
0.3400240242,
-0.0047849817,
0.230... |
https://github.com/huggingface/datasets/issues/2604 | Add option to delete temporary files (e.g. extracted files) when loading dataset | Also, if I delete the extracted files they need to be re-extracted again instead of loading from the Arrow cache files | I'm loading a dataset constituted of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180GB of arrow cache tables
Having a simple way to delete the extracted files after usage (or even better, to strea... | 21 | Add option to delete temporary files (e.g. extracted files) when loading dataset
I'm loading a dataset constituted of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180GB of arrow cache tables
Havi... | [
-0.0280626472,
-0.1268453449,
-0.1462636888,
0.1517365426,
-0.1755877435,
0.1943834573,
-0.2197523117,
0.2515493631,
0.1799500734,
0.1638498753,
0.1056419984,
0.4813864231,
-0.240722701,
-0.1040742695,
-0.1887396127,
-0.0030701694,
-0.2355996519,
0.288969934,
-0.007610328,
0.16... |
https://github.com/huggingface/datasets/issues/2604 | Add option to delete temporary files (e.g. extracted files) when loading dataset | I think we already opened an issue about this topic (suggested by @stas00): duplicated of #2481?
This is in our TODO list... 😅 | I'm loading a dataset constituted of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180GB of arrow cache tables
Having a simple way to delete the extracted files after usage (or even better, to strea... | 23 | Add option to delete temporary files (e.g. extracted files) when loading dataset
I'm loading a dataset constituted of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180GB of arrow cache tables
Havi... | [
-0.0192101859,
-0.1269765198,
-0.148183018,
0.0798468217,
-0.132169202,
0.1765159518,
-0.1987022758,
0.2431637049,
0.2124011517,
0.2022784203,
0.1615421772,
0.4751735926,
-0.2215891033,
-0.0703335702,
-0.2046113312,
0.0402210206,
-0.1827223003,
0.2646827102,
0.0021854679,
0.182... |
https://github.com/huggingface/datasets/issues/2604 | Add option to delete temporary files (e.g. extracted files) when loading dataset | I think the deletion of each extracted file could be implemented in our CacheManager and ExtractManager (once merged to master: #2295, #2277). 😉 | I'm loading a dataset constituted of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180GB of arrow cache tables
Having a simple way to delete the extracted files after usage (or even better, to strea... | 23 | Add option to delete temporary files (e.g. extracted files) when loading dataset
I'm loading a dataset constituted of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180GB of arrow cache tables
Havi... | [
-0.0226010922,
-0.1121035144,
-0.1641411036,
0.1639863104,
-0.1408298314,
0.156821385,
-0.1947339475,
0.287306428,
0.2321141958,
0.1775879413,
0.0999827161,
0.4532558322,
-0.2333154678,
-0.1064828634,
-0.2285244763,
0.0652217269,
-0.2182046622,
0.273180604,
-0.0273973402,
0.202... |
https://github.com/huggingface/datasets/issues/2604 | Add option to delete temporary files (e.g. extracted files) when loading dataset | Nevermind @thomwolf, I just mentioned the other issue so that both appear linked in GitHub and we do not forget to close both once we make the corresponding Pull Request... That was the main reason! 😄 | I'm loading a dataset constituted of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180GB of arrow cache tables
Having a simple way to delete the extracted files after usage (or even better, to strea... | 36 | Add option to delete temporary files (e.g. extracted files) when loading dataset
I'm loading a dataset constituted of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180GB of arrow cache tables
Havi... | [
-0.0855272636,
-0.0671882406,
-0.1844741851,
0.104787223,
-0.0681768283,
0.0940924808,
-0.2069168389,
0.357980907,
0.1929913014,
0.1803302169,
0.1256365329,
0.5178527832,
-0.1990654916,
-0.0532913134,
-0.2303044349,
0.1003466025,
-0.171078369,
0.2507985234,
-0.0966363847,
0.172... |
https://github.com/huggingface/datasets/issues/2604 | Add option to delete temporary files (e.g. extracted files) when loading dataset | Ok yes. I think this is an important feature to be able to use large datasets which are pretty much always compressed files.
In particular, this now requires keeping the extracted file on the drive if you want to avoid reprocessing the dataset, so in my case this requires always using ~400GB of drive instead of just 2... | I'm loading a dataset constituted of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180GB of arrow cache tables
Having a simple way to delete the extracted files after usage (or even better, to strea... | 116 | Add option to delete temporary files (e.g. extracted files) when loading dataset
I'm loading a dataset constituted of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180GB of arrow cache tables
Havi... | [
-0.0659783408,
-0.0675048828,
-0.1214438975,
0.1361942589,
-0.1881688982,
0.237156108,
-0.1601296663,
0.2473284006,
0.1624708176,
0.1584105492,
0.1043193936,
0.4483771026,
-0.3028531671,
-0.1058055386,
-0.1939978749,
-0.0391124263,
-0.1808358878,
0.3018628359,
0.0600305572,
0.1... |
https://github.com/huggingface/datasets/issues/2604 | Add option to delete temporary files (e.g. extracted files) when loading dataset | Note that I'm confirming that with the current master branch of datasets, deleting extracted files (without deleting the arrow cache file) leads to **re-extracting** these files when reloading the dataset instead of directly loading the arrow cache file. | I'm loading a dataset constituted of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180GB of arrow cache tables
Having a simple way to delete the extracted files after usage (or even better, to strea... | 38 | Add option to delete temporary files (e.g. extracted files) when loading dataset
I'm loading a dataset constituted of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180GB of arrow cache tables
Havi... | [
-0.1638349146,
0.038030412,
-0.1447686851,
0.1780928373,
-0.1592692435,
0.235714063,
-0.1746573299,
0.2873373628,
0.1265913099,
0.1585223824,
0.0674441904,
0.5160762072,
-0.2275414467,
-0.069087632,
-0.2018714398,
0.0688018426,
-0.2498382479,
0.2992703915,
-0.1215835959,
0.1475... |
https://github.com/huggingface/datasets/issues/2604 | Add option to delete temporary files (e.g. extracted files) when loading dataset | Hi ! That's weird, it doesn't do that on my side (tested on master on my laptop by deleting the `extracted` folder in the download cache directory). You tested with one of the files at https://huggingface.co/datasets/thomwolf/github-python that you have locally ? | I'm loading a dataset constituted of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180GB of arrow cache tables
Having a simple way to delete the extracted files after usage (or even better, to strea... | 41 | Add option to delete temporary files (e.g. extracted files) when loading dataset
I'm loading a dataset constituted of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180GB of arrow cache tables
Havi... | [
-0.0332568213,
-0.0617315099,
-0.151805073,
0.2321403623,
-0.0794416443,
0.1899539381,
-0.1519190967,
0.3226886988,
0.2679442763,
0.1859115064,
-0.0086605931,
0.478590101,
-0.2064821273,
0.0198205672,
-0.1249554679,
0.0193795711,
-0.2612202466,
0.267311424,
-0.0352826379,
0.102... |
https://github.com/huggingface/datasets/issues/2604 | Add option to delete temporary files (e.g. extracted files) when loading dataset | @thomwolf I'm sorry but I can't reproduce this problem. I'm also using:
```python
ds = load_dataset("json", split="train", data_files=data_files, cache_dir=cache_dir)
```
after having removed the extracted files:
```python
assert sorted((cache_dir / "downloads" / "extracted").iterdir()) == []
```
I get the l... | I'm loading a dataset constituted of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180GB of arrow cache tables
Having a simple way to delete the extracted files after usage (or even better, to strea... | 49 | Add option to delete temporary files (e.g. extracted files) when loading dataset
I'm loading a dataset constituted of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180GB of arrow cache tables
Havi... | [
0.0387397856,
0.0011512297,
-0.0731728002,
0.2888254225,
-0.0225977376,
0.2442346811,
0.004371508,
0.2252015471,
0.2111156583,
0.1112265512,
0.1300096363,
0.4447042346,
-0.314945668,
-0.1522338837,
-0.1879175305,
0.0685453415,
-0.1822635531,
0.2303555459,
0.1179902405,
0.153884... |
https://github.com/huggingface/datasets/issues/2604 | Add option to delete temporary files (e.g. extracted files) when loading dataset | >
>
> Do you confirm the extracted folder stays empty after reloading?
Yes, I have the above mentioned assertion on the emptiness of the extracted folder:
```python
assert sorted((cache_dir / "downloads" / "extracted").iterdir()) == []
```
| I'm loading a dataset constituted of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180GB of arrow cache tables
Having a simple way to delete the extracted files after usage (or even better, to strea... | 37 | Add option to delete temporary files (e.g. extracted files) when loading dataset
I'm loading a dataset constituted of 44 GB of compressed JSON files.
When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180GB of arrow cache tables
Havi... | [
0.041307278,
-0.0485376976,
-0.1449723691,
0.2248298228,
-0.0466053151,
0.201789245,
-0.1690856069,
0.3027114272,
0.2245081812,
0.1342939585,
0.0109499944,
0.4801860452,
-0.2686918676,
-0.1230432466,
-0.1824235767,
-0.0093838936,
-0.234908849,
0.2767877579,
-0.0039627482,
0.120... |
https://github.com/huggingface/datasets/issues/2598 | Unable to download omp dataset | Hi @erikadistefano , thanks for reporting the issue.
I have created a Pull Request that should fix it.
Once merged into master, feel free to update your installed `datasets` library (either by installing it from our GitHub master branch or waiting until our next release) to be able to load omp dataset. | ## Describe the bug
The omp dataset cannot be downloaded because of a DuplicatedKeysError
## Steps to reproduce the bug
from datasets import load_dataset
omp = load_dataset('omp', 'posts_labeled')
print(omp)
## Expected results
This code should download the omp dataset and print the dictionary
## Actual r... | 52 | Unable to download omp dataset
## Describe the bug
The omp dataset cannot be downloaded because of a DuplicatedKeysError
## Steps to reproduce the bug
from datasets import load_dataset
omp = load_dataset('omp', 'posts_labeled')
print(omp)
## Expected results
This code should download the omp dataset and pr... | [
-0.2633761466,
-0.1163595468,
-0.08252877,
0.1005098149,
0.2311993688,
-0.0217429902,
0.2052314132,
0.2830140293,
0.0483126529,
0.2094640285,
-0.2077923119,
0.5524355173,
-0.2091249973,
0.2359021604,
0.1329851002,
-0.2663645148,
-0.1581267565,
0.1020684019,
-0.3342055976,
-0.09... |
https://github.com/huggingface/datasets/issues/2596 | Transformer Class on dataset | Hi ! Do you have an example in mind that shows how this could be useful ? | Just wondering if you have intention to create
TransformerClass :
dataset --> dataset
and make deterministic transformation (ie not fit).
| 17 | Transformer Class on dataset
Just wondering if you have intention to create
TransformerClass :
dataset --> dataset
and make deterministic transformation (ie not fit).
Hi ! Do you have an example in mind that shows how this could be useful ? | [
-0.4188196957,
-0.1118555441,
-0.0211096797,
0.164978385,
0.4431478679,
-0.0737168714,
0.5200479031,
0.0255108308,
-0.0000911348,
-0.0612897724,
0.1383577585,
0.3554062843,
-0.3664283156,
0.1816853583,
0.1437229067,
-0.1267320961,
-0.0193030704,
0.3366392553,
-0.3813782632,
-0.... |
https://github.com/huggingface/datasets/issues/2596 | Transformer Class on dataset | Example:
Merge 2 datasets into one dataset
Label extraction from dataset
dataset(text, label)
—> dataset(text, newlabel)
TextCleaning.
For image dataset,
Transformations are easier (ie linear algebra).
| Just wondering if you have intention to create
TransformerClass :
dataset --> dataset
and make deterministic transformation (ie not fit).
| 83 | Transformer Class on dataset
Just wondering if you have intention to create
TransformerClass :
dataset --> dataset
and make deterministic transformation (ie not fit).
Example:
Merge 2 datasets into one dataset
Label extraction from dataset
dataset(text, label)
—> dataset(text, newlabel... | [
-0.4854317307,
0.1529307961,
-0.0886403918,
0.1896276474,
0.4006464481,
0.1068503633,
0.5075283647,
0.1244742274,
-0.0116828959,
-0.062044438,
0.0533829257,
0.3345319331,
-0.3507429063,
0.1875823438,
0.0721606761,
-0.1314891726,
-0.0283113681,
0.3180098534,
-0.4435119629,
-0.12... |
https://github.com/huggingface/datasets/issues/2596 | Transformer Class on dataset | There are already a few transformations that you can apply on a dataset using methods like `dataset.map()`.
You can find examples in the documentation here:
https://huggingface.co/docs/datasets/processing.html
You can merge two datasets with `concatenate_datasets()` or do label extraction with `dataset.map()` for ... | Just wondering if you have intention to create
TransformerClass :
dataset --> dataset
and make deterministic transformation (ie not fit).
| 41 | Transformer Class on dataset
Just wondering if you have intention to create
TransformerClass :
dataset --> dataset
and make deterministic transformation (ie not fit).
There are already a few transformations that you can apply on a dataset using methods like `dataset.map()`.
You can find exam... | [
-0.4685246944,
-0.230680421,
-0.0531574041,
0.1410392523,
0.3790984452,
-0.1987149417,
0.3367204964,
0.1695586294,
0.1045029014,
0.1931906343,
-0.1746758819,
0.2893975973,
-0.2316430658,
0.5593627095,
0.0868366882,
-0.2147053182,
-0.1576153487,
0.3244948089,
-0.4510014057,
-0.0... |
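A toy sketch of the deterministic Dataset -> Dataset transforms the comment above refers to (data and column names are made up):
```python
# Toy sketch: deterministic dataset -> dataset transforms with the existing API.
from datasets import Dataset, concatenate_datasets

ds = Dataset.from_dict({"text": ["good movie", "bad movie"], "label": [1, 0]})

# "label extraction": derive a new label column with a deterministic map
relabeled = ds.map(lambda ex: {"newlabel": "pos" if ex["label"] == 1 else "neg"})

# merge two datasets with identical features into one
merged = concatenate_datasets([ds, relabeled.remove_columns("newlabel")])
```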
https://github.com/huggingface/datasets/issues/2596 | Transformer Class on dataset | Ok, sure.
Thanks for pointing out the functional part.
My question is more
“Philosophical”/Design perspective.
There are 2 perspectives:
Add transformation methods to
Dataset Class
OR Create a Transformer Class
which operates on Dataset Class.
T(Dataset) —> Dataset
datasetnew = MyTransform.transform(dataset)... | Just wondering if you have intention to create
TransformerClass :
dataset --> dataset
and make deterministic transformation (ie not fit).
| 142 | Transformer Class on dataset
Just wondering if you have intention to create
TransformerClass :
dataset --> dataset
and make deterministic transformation (ie not fit).
Ok, sure.
Thanks for pointing out the functional part.
My question is more
“Philosophical”/Design perspective.
There are 2 perspe... | [
-0.1065422073,
-0.0538383983,
0.0260535572,
0.1430310309,
0.3365960419,
-0.1399591118,
0.5725405216,
-0.0339157358,
-0.0521858819,
0.1631579697,
0.116856575,
0.2448584139,
-0.4620172977,
0.4327046573,
0.2910210192,
-0.3197536469,
0.0604295954,
0.2110386193,
-0.5410815477,
-0.05... |
https://github.com/huggingface/datasets/issues/2596 | Transformer Class on dataset | I can imagine that this would be a useful API to implement processing pipelines as transforms. They could be used to perform higher level transforms compared to the atomic transforms allowed by methods like map, filter, etc.
I guess if you find any transform that could be useful for text dataset processing, image da... | Just wondering if you have intention to create
TransformerClass :
dataset --> dataset
and make deterministic transformation (ie not fit).
| 64 | Transformer Class on dataset
Just wondering if you have intention to create
TransformerClass :
dataset --> dataset
and make deterministic transformation (ie not fit).
I can imagine that this would be a useful API to implement processing pipelines as transforms. They could be used to perform h... | [
-0.4904250801,
0.0872301683,
-0.1906813532,
-0.0839950815,
0.3473588526,
-0.1753203273,
0.3381086588,
0.2574364543,
0.0903777555,
0.0197906066,
0.2580315471,
0.3618628383,
-0.3850793242,
0.4767217338,
0.1930704117,
-0.2324405164,
-0.1511344314,
0.2651835382,
-0.4250874817,
-0.0... |
https://github.com/huggingface/datasets/issues/2596 | Transformer Class on dataset | Thanks for the reply.
What would be the constraints
to have
Dataset —> Dataset consistency ?
Main issue would be
larger than memory dataset and
serialization on disk.
Technically,
one still processes at the atomic level
and tries to wrap the full results
into Dataset…. (!)
What would you think ?
| Just wondering if you have intention to create
TransformerClass :
dataset --> dataset
and make deterministic transformation (ie not fit).
| 155 | Transformer Class on dataset
Just wondering if you have intention to create
TransformerClass :
dataset --> dataset
and make deterministic transformation (ie not fit).
Thanks for the reply.
What would be the constraints
to have
Dataset —> Dataset consistency ?
Main issue would be
larger than mem... | [
-0.4273888767,
0.2038236111,
-0.0892711431,
0.0677170008,
0.5223816633,
-0.1340120882,
0.3225384355,
0.1911120713,
0.0130914599,
0.0615380183,
0.2266188562,
0.3506037593,
-0.4103447795,
0.2984672487,
0.0133419083,
-0.0542736165,
-0.0388859436,
0.2965454459,
-0.5386688113,
-0.07... |
https://github.com/huggingface/datasets/issues/2596 | Transformer Class on dataset | We can be pretty flexible and not impose any constraints for transforms.
Moreover, this library is designed to support datasets bigger than memory. The datasets are loaded from the disk via memory mapping, without filling up RAM. Even processing functions like `map` work in a batched fashion to not fill up your RAM.... | Just wondering if you have intention to create
TransformerClass :
dataset --> dataset
and make deterministic transformation (ie not fit).
| 59 | Transformer Class on dataset
Just wondering if you have intention to create
TransformerClass :
dataset --> dataset
and make deterministic transformation (ie not fit).
We can be pretty flexible and not impose any constraints for transforms.
Moreover, this library is designed to support data... | [
-0.5401790142,
-0.0189926215,
-0.144008249,
0.1232814044,
0.4372493923,
-0.2170050591,
0.2453126311,
0.193660751,
0.219227314,
0.1304917336,
0.0615158863,
0.1877675354,
-0.2603259385,
0.0723871291,
0.0481921658,
-0.011597842,
-0.0782924369,
0.2525765598,
-0.5557224154,
-0.05736... |
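The batched, memory-friendly processing described in that comment, as a small illustrative sketch:
```python
# Sketch of batched map(): the function receives fixed-size batches
# (dicts of lists), so the whole dataset never has to fit in RAM at once.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"]})
ds2 = ds.map(
    lambda batch: {"upper": [t.upper() for t in batch["text"]]},
    batched=True,
    batch_size=2,  # processed as two batches of 2 examples
)
print(ds2["upper"])  # ['A', 'B', 'C', 'D']
```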
https://github.com/huggingface/datasets/issues/2596 | Transformer Class on dataset | Ok thanks.
But, Dataset has various flavors.
In current design of Dataset,
how the serialization on disk is done (?)
The main issue is serialization
of newdataset = Transform(Dataset)
(ie that's why I am referring to out-of-memory datasets…):
Should be part of Transform or part of dataset ?
Maybe, not, sin... | Just wondering if you have intention to create
TransformerClass :
dataset --> dataset
and make deterministic transformation (ie not fit).
| 162 | Transformer Class on dataset
Just wondering if you have intention to create
TransformerClass :
dataset --> dataset
and make deterministic transformation (ie not fit).
Ok thanks.
But, Dataset has various flavors.
In current design of Dataset,
how the serialization on disk is done (?)
The... | [
-0.3028286695,
-0.0752466395,
-0.0520500429,
0.2568612397,
0.4963203967,
0.033301089,
0.3464811146,
0.1359570026,
0.1084692329,
-0.0489085689,
0.1433253288,
0.1807946265,
-0.2497083545,
0.0995198712,
0.1433494985,
0.0294339936,
-0.0367590338,
0.208275333,
-0.6057528257,
-0.1103... |
https://github.com/huggingface/datasets/issues/2596 | Transformer Class on dataset | I'm not sure I understand, could you elaborate a bit more please ?
Each dataset is a wrapper of a PyArrow Table that contains all the data. The table is loaded from an arrow file on the disk.
We have an ArrowWriter and ArrowReader class to write/read arrow tables on disk or in in-memory buffers. | Just wondering if you have intention to create
TransformerClass :
dataset --> dataset
and make deterministic transformation (ie not fit).
| 55 | Transformer Class on dataset
Just wondering if you have intention to create
TransformerClass :
dataset --> dataset
and make deterministic transformation (ie not fit).
I'm not sure I understand, could you elaborate a bit more please ?
Each dataset is a wrapper of a PyArrow Table that contai... | [
-0.3604477048,
-0.0561249405,
0.0116533022,
0.2704980373,
0.3946088254,
-0.0971963257,
0.3508632779,
0.048223,
-0.1000289321,
-0.1611097902,
0.13179335,
0.3815631866,
-0.3154996932,
-0.0833917484,
0.2542790174,
-0.2325900346,
0.0182950441,
0.4430122375,
-0.3059170544,
-0.153789... |
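The Arrow-backed design described in that comment can be inspected directly (a small sketch using the public API):
```python
# Sketch: a Dataset wraps a pyarrow Table; when loaded from the cache it is
# memory-mapped from an arrow file rather than read into RAM.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})
print(ds.data)         # the underlying Arrow table
print(ds.cache_files)  # on-disk arrow files backing it (empty for in-memory data)
```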
https://github.com/huggingface/datasets/issues/2595 | ModuleNotFoundError: No module named 'datasets.tasks' while importing common voice datasets | Hi @profsatwinder.
It looks like you are using an old version of `datasets`. Please update it with `pip install -U datasets` and indicate if the problem persists. | Error traceback:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-8-a7b592d3bca0> in <module>()
1 from datasets import load_dataset, load_metric
2
----> 3 common_voice_train = load_da... | 27 | ModuleNotFoundError: No module named 'datasets.tasks' while importing common voice datasets
Error traceback:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-8-a7b592d3bca0> in <module>()
1 ... | [
-0.4680481851,
-0.2621824741,
-0.042259559,
-0.1249681264,
0.2041024268,
0.1450623274,
0.440243274,
0.254309237,
0.2220352739,
0.1348175406,
-0.2110196203,
0.208887428,
-0.2797671258,
-0.0780611262,
0.0812452063,
-0.0557082966,
0.042319715,
0.2207898945,
0.0042472086,
-0.191105... |
https://github.com/huggingface/datasets/issues/2595 | ModuleNotFoundError: No module named 'datasets.tasks' while importing common voice datasets | @albertvillanova Thanks for the information. I updated it to 1.9.0 and the issue is resolved. Thanks again. | Error traceback:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-8-a7b592d3bca0> in <module>()
1 from datasets import load_dataset, load_metric
2
----> 3 common_voice_train = load_da... | 17 | ModuleNotFoundError: No module named 'datasets.tasks' while importing common voice datasets
Error traceback:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-8-a7b592d3bca0> in <module>()
1 ... | [
-0.3636059463,
-0.3833418489,
-0.0181182213,
-0.1324387044,
0.1585255265,
0.1734298915,
0.408313036,
0.2774199545,
0.1942014247,
0.0959576592,
-0.1962040365,
0.2137221396,
-0.2857518494,
-0.0307673123,
0.1209338456,
0.0243295953,
0.0525843427,
0.1785369217,
0.0639958382,
-0.228... |
https://github.com/huggingface/datasets/issues/2591 | Cached dataset overflowing disk space | I'm using the datasets `concatenate_datasets` function to combine the datasets and then train.
train_dataset = concatenate_datasets([dataset1, dataset2, common_voice_train])
| I'm training a Swedish Wav2vec2 model on a Linux GPU and having issues that the huggingface cached dataset folder is completely filling up my disk space (I'm training on a dataset of around 500 gb).
The cache folder is 500gb (and now my disk space is full).
Is there a way to toggle caching or set the caching to b... | 18 | Cached dataset overflowing disk space
I'm training a Swedish Wav2vec2 model on a Linux GPU and having issues that the huggingface cached dataset folder is completely filling up my disk space (I'm training on a dataset of around 500 gb).
The cache folder is 500gb (and now my disk space is full).
Is there a way t... | [
0.0375595018,
-0.4222130477,
0.0985903442,
0.4514757097,
0.1011739373,
0.2050562799,
0.0744854808,
0.1419046074,
0.1445285976,
0.0556369573,
0.4010793567,
-0.2280428708,
-0.1242995039,
0.112719655,
0.1466888636,
0.0686895773,
0.2591466308,
-0.2458596975,
-0.0610486828,
0.106089... |
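A self-contained version of the snippet above; the import is the part the original leaves implicit, and the toy datasets stand in for the real ones:
```python
# Self-contained sketch of the concatenation above (toy data as placeholders).
from datasets import Dataset, concatenate_datasets

dataset1 = Dataset.from_dict({"text": ["a", "b"]})
dataset2 = Dataset.from_dict({"text": ["c"]})

train_dataset = concatenate_datasets([dataset1, dataset2])
print(len(train_dataset))  # 3
```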
https://github.com/huggingface/datasets/issues/2591 | Cached dataset overflowing disk space | Hi @BirgerMoell.
You have several options:
- to set caching to be stored on a different path location, other than the default one (`~/.cache/huggingface/datasets`):
- either setting the environment variable `HF_DATASETS_CACHE` with the path to the new cache location
- or by passing it with the parameter `cach... | I'm training a Swedish Wav2vec2 model on a Linux GPU and having issues that the huggingface cached dataset folder is completely filling up my disk space (I'm training on a dataset of around 500 gb).
The cache folder is 500gb (and now my disk space is full).
Is there a way to toggle caching or set the caching to b... | 127 | Cached dataset overflowing disk space
I'm training a Swedish Wav2vec2 model on a Linux GPU and having issues that the huggingface cached dataset folder is completely filling up my disk space (I'm training on a dataset of around 500 gb).
The cache folder is 500gb (and now my disk space is full).
Is there a way t... | [
0.0375595018,
-0.4222130477,
0.0985903442,
0.4514757097,
0.1011739373,
0.2050562799,
0.0744854808,
0.1419046074,
0.1445285976,
0.0556369573,
0.4010793567,
-0.2280428708,
-0.1242995039,
0.112719655,
0.1466888636,
0.0686895773,
0.2591466308,
-0.2458596975,
-0.0610486828,
0.106089... |
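The two cache-relocation options listed in that comment, sketched (the path and the dataset/config are illustrative):
```python
# Sketch of the two options above for moving the cache off the full disk.
import os

# Option 1: environment variable, set before `datasets` is imported
os.environ["HF_DATASETS_CACHE"] = "/mnt/bigdisk/hf_datasets_cache"

from datasets import load_dataset

# Option 2: per-call parameter (dataset and config shown here are illustrative)
ds = load_dataset("common_voice", "sv-SE", cache_dir="/mnt/bigdisk/hf_datasets_cache")
```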
https://github.com/huggingface/datasets/issues/2591 | Cached dataset overflowing disk space | Hi @BirgerMoell,
We are planning to add a new feature to datasets, which could be interesting in your case: Add the option to delete temporary files (decompressed files) from the cache directory (see: #2481, #2604).
We will ping you once this feature is implemented, so that the size of your cache directory will b... | I'm training a Swedish Wav2vec2 model on a Linux GPU and having issues that the huggingface cached dataset folder is completely filling up my disk space (I'm training on a dataset of around 500 gb).
The cache folder is 500gb (and now my disk space is full).
Is there a way to toggle caching or set the caching to b... | 56 | Cached dataset overflowing disk space
I'm training a Swedish Wav2vec2 model on a Linux GPU and having issues that the huggingface cached dataset folder is completely filling up my disk space (I'm training on a dataset of around 500 gb).
The cache folder is 500gb (and now my disk space is full).
Is there a way t... | [
0.0375595018,
-0.4222130477,
0.0985903442,
0.4514757097,
0.1011739373,
0.2050562799,
0.0744854808,
0.1419046074,
0.1445285976,
0.0556369573,
0.4010793567,
-0.2280428708,
-0.1242995039,
0.112719655,
0.1466888636,
0.0686895773,
0.2591466308,
-0.2458596975,
-0.0610486828,
0.106089... |
https://github.com/huggingface/datasets/issues/2585 | sqaud_v2 dataset contains misalignment between the answer text and the context value at the answer index | Hi @mmajurski, thanks for reporting this issue.
Indeed this misalignment arises because the source dataset context field contains leading blank spaces (and these are counted within the answer_start), while our datasets loading script removes these leading blank spaces.
I'm going to fix our script so that all lead... | ## Describe the bug
The built in huggingface squad_v2 dataset that you can access via datasets.load_dataset contains mis-alignment between the answers['text'] and the characters in the context at the location specified by answers['answer_start'].
For example:
id = '56d1f453e7d4791d009025bd'
answers = {'text': ['P... | 71 | sqaud_v2 dataset contains misalignment between the answer text and the context value at the answer index
## Describe the bug
The built in huggingface squad_v2 dataset that you can access via datasets.load_dataset contains mis-alignment between the answers['text'] and the characters in the context at the location spe... | [
-0.2330303192,
-0.3295378387,
-0.0441100672,
0.3749360144,
0.0878229886,
-0.0763422325,
0.1057570949,
0.2279032767,
-0.2927473187,
0.1577833146,
-0.0841631815,
0.0450685136,
0.3250356019,
0.0685454607,
-0.0926084593,
0.1523637623,
0.1756329089,
0.0104245786,
0.0099454969,
-0.17... |
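The offset bookkeeping implied by that comment (stripping leading blanks from `context` shifts every `answer_start`) in a toy sketch:
```python
# Toy sketch of the misalignment described above: removing leading blanks from
# the context shifts character offsets, so answer_start must be shifted too.
context = "  Paris is the capital of France."
answer = {"text": "Paris", "answer_start": 2}   # offset counted in the raw context

stripped = context.lstrip()
shift = len(context) - len(stripped)
fixed_start = answer["answer_start"] - shift

assert stripped[fixed_start:fixed_start + len(answer["text"])] == answer["text"]
```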
https://github.com/huggingface/datasets/issues/2585 | sqaud_v2 dataset contains misalignment between the answer text and the context value at the answer index | If you are going to be altering the data cleaning from the source Squad dataset, here is one thing to consider.
There are occasional double spaces separating words which it might be nice to get rid of.
Either way, thank you. | ## Describe the bug
The built in huggingface squad_v2 dataset that you can access via datasets.load_dataset contains mis-alignment between the answers['text'] and the characters in the context at the location specified by answers['answer_start'].
For example:
id = '56d1f453e7d4791d009025bd'
answers = {'text': ['P... | 41 | sqaud_v2 dataset contains misalignment between the answer text and the context value at the answer index
## Describe the bug
The built in huggingface squad_v2 dataset that you can access via datasets.load_dataset contains mis-alignment between the answers['text'] and the characters in the context at the location spe... | [
-0.2330303192,
-0.3295378387,
-0.0441100672,
0.3749360144,
0.0878229886,
-0.0763422325,
0.1057570949,
0.2279032767,
-0.2927473187,
0.1577833146,
-0.0841631815,
0.0450685136,
0.3250356019,
0.0685454607,
-0.0926084593,
0.1523637623,
0.1756329089,
0.0104245786,
0.0099454969,
-0.17... |
https://github.com/huggingface/datasets/issues/2583 | Error iteration over IterableDataset using Torch DataLoader | Hi ! This is because you first need to format the dataset for pytorch:
```python
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True)
>>> torch_iterable_dataset = dataset.with_format("torch")
>>> assert isinstance... | ## Describe the bug
I have an IterableDataset (created using streaming=True) and I am trying to create batches using Torch DataLoader class by passing this IterableDataset to it. This throws error which is pasted below. I can do the same by using Torch IterableDataset. One thing I noticed is that in the former case wh... | 93 | Error iteration over IterableDataset using Torch DataLoader
## Describe the bug
I have an IterableDataset (created using streaming=True) and I am trying to create batches using Torch DataLoader class by passing this IterableDataset to it. This throws error which is pasted below. I can do the same by using Torch Iter... | [
-0.1759884208,
-0.3325827718,
-0.0121411253,
0.2365849912,
0.1552749872,
0.0204123836,
0.4311745465,
0.001804909,
-0.1689607054,
0.2460095882,
0.0963452831,
0.2442089915,
-0.335481137,
-0.4065967202,
-0.2495773584,
-0.2147274017,
0.0861799493,
-0.2172201574,
-0.2279188931,
-0.0... |
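A completed version of the truncated snippet above; the `assert` line and the `DataLoader` usage are the assumed continuation:
```python
# Sketch completing the snippet above: with_format("torch") makes the streaming
# dataset a torch IterableDataset, which DataLoader accepts directly.
import torch
from datasets import load_dataset

dataset = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)
torch_iterable_dataset = dataset.with_format("torch")
assert isinstance(torch_iterable_dataset, torch.utils.data.IterableDataset)

dataloader = torch.utils.data.DataLoader(torch_iterable_dataset, batch_size=4)
batch = next(iter(dataloader))
```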
https://github.com/huggingface/datasets/issues/2583 | Error iteration over IterableDataset using Torch DataLoader | Thank you for that and the example!
What you said makes total sense; I just somehow missed that and assumed HF IterableDataset was a subclass of Torch IterableDataset. | ## Describe the bug
I have an IterableDataset (created using streaming=True) and I am trying to create batches using Torch DataLoader class by passing this IterableDataset to it. This throws error which is pasted below. I can do the same by using Torch IterableDataset. One thing I noticed is that in the former case wh... | 28 | Error iteration over IterableDataset using Torch DataLoader
## Describe the bug
I have an IterableDataset (created using streaming=True) and I am trying to create batches using Torch DataLoader class by passing this IterableDataset to it. This throws error which is pasted below. I can do the same by using Torch Iter... | [
-0.1759884208,
-0.3325827718,
-0.0121411253,
0.2365849912,
0.1552749872,
0.0204123836,
0.4311745465,
0.001804909,
-0.1689607054,
0.2460095882,
0.0963452831,
0.2442089915,
-0.335481137,
-0.4065967202,
-0.2495773584,
-0.2147274017,
0.0861799493,
-0.2172201574,
-0.2279188931,
-0.0... |
https://github.com/huggingface/datasets/issues/2573 | Finding right block-size with JSON loading difficult for user | This was actually a second error arising from a too small block-size in the json reader.
Finding the right block size is difficult for the layman user | As reported by @thomwolf, while loading a JSON Lines file with "json" loading script, he gets
> json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 383)
| 27 | Finding right block-size with JSON loading difficult for user
As reported by @thomwolf, while loading a JSON Lines file with "json" loading script, he gets
> json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 383)
This was actually a second error arising from a too small block-size in the json reade... | [
0.0956071168,
0.0020218808,
-0.2247489989,
0.4068728983,
0.1341334283,
-0.0870382935,
0.3515782952,
0.3959696591,
0.5053704977,
0.4565080106,
0.2659342885,
-0.0984665453,
0.0224604625,
-0.0638387874,
-0.0379773863,
-0.2568932474,
-0.2618598342,
0.2319743037,
0.1568870693,
0.303... |
https://github.com/huggingface/datasets/issues/2569 | Weights of model checkpoint not initialized for RobertaModel for Bertscore | Hi @suzyahyah, thanks for reporting.
The message you get is indeed not an error message, but a warning coming from Hugging Face `transformers`. The complete warning message is:
```
Some weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_hea... | When applying bertscore out of the box,
```Some weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight']```
Following the typical ... | 167 | Weights of model checkpoint not initialized for RobertaModel for Bertscore
When applying bertscore out of the box,
```Some weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_... | [
-0.0595505349,
-0.3645300865,
0.090109922,
0.1289699972,
0.4273766577,
0.0748272091,
0.2718202174,
0.1242346913,
0.1251104474,
0.1333976686,
-0.1491384059,
0.2172134519,
-0.1455290467,
-0.0643485487,
-0.0716612935,
-0.219649002,
0.1432189047,
-0.049943801,
-0.2094155401,
-0.176... |
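Until that filter lands, one user-side way to hide the expected warning is to lower the `transformers` logging verbosity (an assumed workaround, not from the thread):
```python
# Assumed user-side workaround: suppress the expected initialization warning
# by lowering transformers' logging verbosity before loading the metric/model.
from transformers import logging

logging.set_verbosity_error()
```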
https://github.com/huggingface/datasets/issues/2569 | Weights of model checkpoint not initialized for RobertaModel for Bertscore | Hi @suzyahyah, I have created a Pull Request to filter out that warning message in this specific case, since the behavior is as expected and the warning message can only cause confusion for users (as in your case). | When applying bertscore out of the box,
```Some weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight']```
Following the typical ... | 38 | Weights of model checkpoint not initialized for RobertaModel for Bertscore
When applying bertscore out of the box,
```Some weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_... | [
-0.1506307572,
-0.2975347936,
0.0790533349,
0.0513517894,
0.4647553861,
0.0405766107,
0.197935611,
0.1907866746,
0.1385856122,
0.2084497064,
-0.1303773075,
0.2869917452,
-0.1275678426,
-0.0866210759,
-0.123821564,
-0.1949993819,
0.1129314452,
-0.0918190479,
-0.2066622972,
-0.17... |
https://github.com/huggingface/datasets/issues/2561 | Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True` | Hi ! I just tried to reproduce what you said:
- create a local builder class
- use `load_dataset`
- update the builder class code
- use `load_dataset` again (with or without `ignore_verifications=True`)
And it creates a new cache, as expected.
What modifications did you do to your builder's code ? | ## Describe the bug
If I have a local file defining a dataset builder class and I load it using the `load_dataset` functionality, the existing cache is ignored whenever the file is updated, even with `ignore_verifications=True`. This slows down debugging and cache generation for very large datasets.
## Steps to reproduce th... | 51 | Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True`
## Describe the bug
If I have a local file defining a dataset builder class and I load it using the `load_dataset` functionality, the existing cache is ignored whenever the file is updated, even with `ignore_verifications=True`... | [
-0.3804820776,
0.4886573255,
0.0326323584,
0.1935643703,
0.1446635276,
0.150725916,
0.319269985,
0.3748852611,
0.1290489882,
0.218332082,
0.2149605453,
0.363037169,
0.1198909357,
-0.2817021608,
0.0510922149,
0.2790058553,
0.1163432673,
0.143495813,
0.2144533694,
-0.07715635,
... |
https://github.com/huggingface/datasets/issues/2561 | Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True` | Hi @lhoestq. Thanks for your reply. I just did minor modifications which should not regenerate the cache (e.g. adding a print statement). Overall, regardless of cache misses, there should be an explicit option to allow reuse of the existing cache if the author knows the cache shouldn't be affected. | ## Describe the bug
If I have a local file defining a dataset builder class and I load it using the `load_dataset` functionality, the existing cache is ignored whenever the file is updated, even with `ignore_verifications=True`. This slows down debugging and cache generation for very large datasets.
## Steps to reproduce th... | 48 | Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True`
## Describe the bug
If I have a local file defining a dataset builder class and I load it using the `load_dataset` functionality, the existing cache is ignored whenever the file is updated, even with `ignore_verifications=True`... | [
-0.296610266,
0.4845600128,
0.0497075468,
0.1848860383,
0.0803923681,
0.2186553478,
0.282461524,
0.3745070696,
0.1949125379,
0.1779459566,
0.2753896713,
0.354469955,
0.0864099339,
-0.251421392,
-0.0044402927,
0.2905792296,
0.0580529571,
0.1453521997,
0.185672015,
-0.006389664,
... |
https://github.com/huggingface/datasets/issues/2561 | Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True` | The cache is based on the hash of the dataset builder's code, so changing the code makes it recompute the cache.
You could still rename the cache directory of your previous computation to the new expected cache directory if you want to avoid having to recompute it and if you're sure that it would generate the exact ... | ## Describe the bug
If I have a local file defining a dataset builder class and I load it using the `load_dataset` functionality, the existing cache is ignored whenever the file is updated, even with `ignore_verifications=True`. This slows down debugging and cache generation for very large datasets.
## Steps to reproduce th... | 82 | Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True`
## Describe the bug
If I have a local file defining a dataset builder class and I load it using the `load_dataset` functionality, the existing cache is ignored whenever the file is updated, even with `ignore_verifications=True`... | [
-0.380987674,
0.4386590719,
-0.0031961675,
0.1750698388,
0.0950203314,
0.1552424729,
0.2317516208,
0.4199699461,
0.1590618044,
0.3092048168,
0.1621155143,
0.2830115855,
0.1100005358,
-0.2082075626,
-0.0321415477,
0.2218123227,
0.0682656914,
0.1848928034,
0.150418818,
-0.0182193... |
https://github.com/huggingface/datasets/issues/2561 | Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True` | Hi @apsdehal,
If you decide to follow @lhoestq's suggestion to rename the cache directory of your previous computation to the new expected cache directory, you can do the following to get the name of the new expected cache directory once #2500 is merged:
```python
from datasets import load_dataset_builder
dataset... | ## Describe the bug
If I have a local file defining a dataset builder class and I load it using the `load_dataset` functionality, the existing cache is ignored whenever the file is updated, even with `ignore_verifications=True`. This slows down debugging and cache generation for very large datasets.
## Steps to reproduce th... | 73 | Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True`
## Describe the bug
If I have a local file defining a dataset builder class and I load it using the `load_dataset` functionality, the existing cache is ignored whenever the file is updated, even with `ignore_verifications=True`... | [
-0.4378628731, 0.478214711, -0.0076683937, 0.1180528924, 0.1569269598, 0.1881317496, 0.2960322499, 0.4827200174, 0.1750374585, 0.3437813818, 0.2252853364, 0.3112471104, 0.0643950775, -0.1951676458, 0.0266977567, 0.3226769865, 0.0457071178, 0.1662352681, 0.0950703919, 0.00560992... |
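The snippet in the comment above is cut off; a possible completion, assuming the `cache_dir` attribute exposed once #2500 landed (the exact attribute name may vary across `datasets` versions):

```python
# Hedged sketch: ask the builder where it would cache data for the current
# version of the script, without downloading or generating anything.
from datasets import load_dataset_builder

dataset_builder = load_dataset_builder("path/to/local/dataset_script.py")
print(dataset_builder.cache_dir)  # expected cache directory for this script hash
```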
https://github.com/huggingface/datasets/issues/2559 | Memory usage consistently increases when processing a dataset with `.map` | Hi! Can you share the function you pass to `map`?
I know you mentioned it would be hard to share some code, but this would really help us understand what happened. | ## Describe the bug
I have a HF dataset with image paths stored in it and I am trying to load those image paths using `.map` with `num_proc=80`. I am noticing that the memory usage consistently keeps on increasing with time. I tried using `DEFAULT_WRITER_BATCH_SIZE=10` in the builder to decrease arrow writer's batch... | 33 | Memory usage consistently increases when processing a dataset with `.map`
## Describe the bug
I have a HF dataset with image paths stored in it and I am trying to load those image paths using `.map` with `num_proc=80`. I am noticing that the memory usage consistently keeps on increasing with time. I tried using `D... | [
-0.1570354104, -0.1291137189, 0.0070895329, 0.4211395383, 0.1809349358, -0.0064078951, 0.058969263, 0.1185614467, 0.2151498049, 0.1314364374, 0.4036096036, 0.5179604292, -0.1873754263, -0.0342992395, -0.0775476918, 0.0739516914, 0.1007091179, 0.0621855743, -0.0024835395, 0.0814... |
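For readers hitting the same memory growth, a minimal sketch of bounding the writer's memory via `writer_batch_size`; the `load_image` function and file names are stand-ins, not the reported pipeline:

```python
# Hedged sketch of capping writer memory in `.map`; placeholder files are
# created so the snippet actually runs end to end.
from datasets import Dataset

# Two tiny placeholder "images" standing in for real image files.
for name in ("a.jpg", "b.jpg"):
    with open(name, "wb") as f:
        f.write(b"\x00" * 16)

def load_image(example):
    with open(example["image_path"], "rb") as f:
        example["image_bytes"] = f.read()
    return example

ds = Dataset.from_dict({"image_path": ["a.jpg", "b.jpg"]})
# `writer_batch_size` caps how many processed examples are held in memory
# before being flushed to the on-disk Arrow file.
ds = ds.map(load_image, num_proc=2, writer_batch_size=100)
```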
https://github.com/huggingface/datasets/issues/2554 | Multilabel metrics not supported | Hi @GuillemGSubies, thanks for reporting.
I have made a PR to fix this issue and allow metrics to also be computed for multilabel classification problems. | When I try to use a metric like F1 macro I get the following error:
```
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
```
There is an explicit casting here:
https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17e768a075/src/datasets/features.py#L... | 25 | Multilabel metrics not supported
When I try to use a metric like F1 macro I get the following error:
```
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
```
There is an explicit casting here:
https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17... | [
-0.2212651223, -0.1788477153, 0.004477466, 0.3008990586, 0.6725277901, -0.1359904855, 0.4806859493, -0.1023351923, 0.2283082604, 0.3166793287, -0.1240744665, 0.2981927991, -0.2808498442, 0.404633224, -0.2508814037, -0.1761212498, -0.2169737965, -0.3327275515, -0.0599903204, 0.1... |
https://github.com/huggingface/datasets/issues/2554 | Multilabel metrics not supported | Sorry for reopening but I just noticed that the `_compute` method for the F1 metric is still not good enough for multilabel problems:
https://github.com/huggingface/datasets/blob/92a3ee549705aa0a107c9fa5caf463b3b3da2616/metrics/f1/f1.py#L115
Somehow we should be able to change at least the `average` parameter. | When I try to use a metric like F1 macro I get the following error:
```
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
```
There is an explicit casting here:
https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17e768a075/src/datasets/features.py#L... | 36 | Multilabel metrics not supported
When I try to use a metric like F1 macro I get the following error:
```
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
```
There is an explicit casting here:
https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17... | [
-0.3035683036, -0.1394245178, -0.0115921395, 0.2726673484, 0.5279554725, -0.1952756643, 0.5079543591, -0.0628860667, 0.2076374888, 0.3886967599, -0.0487108938, 0.2973477244, -0.2217354923, 0.4341044128, -0.2341410816, -0.0707550719, -0.200794518, -0.3434455097, 0.0283415653, 0.... |
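A reference for what the requested `average` support looks like in scikit-learn, which the `f1` metric script wraps; treating the pass-through of an `average` keyword as the behaviour the fix should expose is an assumption here:

```python
# Multilabel macro-F1 with scikit-learn; the fixed metric would need to
# forward an `average` keyword to reach the same result.
from sklearn.metrics import f1_score

y_true = [[1, 0, 1], [0, 1, 0]]  # multilabel indicator format
y_pred = [[1, 0, 0], [0, 1, 0]]
print(f1_score(y_true, y_pred, average="macro"))
```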
https://github.com/huggingface/datasets/issues/2553 | load_dataset("web_nlg") NonMatchingChecksumError | Hi! Thanks for reporting. This happened because the WebNLG repository was updated today.
I just pushed a fix at #2558; this shouldn't happen again in the future. | Hi! It seems the WebNLG dataset gives a NonMatchingChecksumError.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('web_nlg', name="release_v3.0_en", split="dev")
```
Gives
```
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['h... | 31 | load_dataset("web_nlg") NonMatchingChecksumError
Hi! It seems the WebNLG dataset gives a NonMatchingChecksumError.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('web_nlg', name="release_v3.0_en", split="dev")
```
Gives
```
NonMatchingChecksumError: Ch... | [
-0.1927949041, 0.1298391968, -0.1363759488, 0.0406425297, 0.2651570141, 0.0086786011, 0.1319974065, 0.4511446953, 0.2963728905, 0.0861961618, -0.0655138269, 0.2723963559, 0.0220884588, -0.0255217608, -0.2154659778, 0.5214449763, 0.0755514726, 0.016244553, -0.1093695909, -0.1461... |
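Until such a fix is released, the usual stopgap in `datasets` 1.x is to skip checksum verification; a hedged sketch (this bypasses an integrity check, so only use it if you trust the updated source files):

```python
from datasets import load_dataset

# Skipping verification avoids the NonMatchingChecksumError at the cost
# of not validating the downloaded files against recorded checksums.
dataset = load_dataset(
    "web_nlg",
    name="release_v3.0_en",
    split="dev",
    ignore_verifications=True,
)
```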
https://github.com/huggingface/datasets/issues/2552 | Keys should be unique error on code_search_net | Two questions:
- With `datasets-cli env` we don't have any information on the dataset script version used. Should we give access to this somehow, either as a note in the error message or as an argument with the name of the dataset to `datasets-cli env`?
- I don't really understand why the id is duplicated in the code... | ## Describe the bug
Loading `code_search_net` seems not possible at the moment.
## Steps to reproduce the bug
```python
>>> load_dataset('code_search_net')
Downloading: 8.50kB [00:00, 3.09MB/s] ... | 66 | Keys should be unique error on code_search_net
## Describe the bug
Loading `code_search_net` seems not possible at the moment.
## Steps to reproduce the bug
```python
>>> load_dataset('code_search_net')
Downloading: 8.50kB [00:00, 3.09MB/s] ... | [
-0.0094925659, 0.0049084597, -0.1037804782, 0.3735045493, 0.0469305515, -0.0595814027, 0.2355327457, 0.2376768738, 0.0580708347, 0.0420987643, -0.0894130915, 0.4072092772, -0.1577005982, 0.104338564, 0.1628590375, 0.0670309365, 0.0143384207, 0.1504512429, 0.1202021912, -0.12889... |
https://github.com/huggingface/datasets/issues/2552 | Keys should be unique error on code_search_net | Thanks for reporting. There was indeed an issue with the keys. The key was the addition of the file id and row id, which resulted in collisions. I just opened a PR to fix this at https://github.com/huggingface/datasets/pull/2555
To help users debug this kind of errors we could try to show a message like this
```pyt... | ## Describe the bug
Loading `code_search_net` seems not possible at the moment.
## Steps to reproduce the bug
```python
>>> load_dataset('code_search_net')
Downloading: 8.50kB [00:00, 3.09MB/s] ... | 97 | Keys should be unique error on code_search_net
## Describe the bug
Loading `code_search_net` seems not possible at the moment.
## Steps to reproduce the bug
```python
>>> load_dataset('code_search_net')
Downloading: 8.50kB [00:00, 3.09MB/s] ... | [
-0.039390035, -0.0147920158, -0.0928234309, 0.3366982639, 0.0873197168, -0.0139397718, 0.200390473, 0.2712211311, 0.0882732868, 0.0667393878, -0.0861996189, 0.3934497237, -0.162392363, 0.1329390556, 0.1347278208, 0.0915051699, -0.0428926237, 0.1898691654, 0.1181888953, -0.05612... |
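To make the collision concrete, a stripped-down sketch of a `_generate_examples` generator; the loader below is hypothetical, not the actual `code_search_net` script:

```python
# Adding ids collides because addition is commutative: file 0 / row 1 and
# file 1 / row 0 both yield key 1. A composite string key stays unique.
def _generate_examples(filepaths):
    for file_id, filepath in enumerate(filepaths):
        with open(filepath, encoding="utf-8") as f:
            for row_id, line in enumerate(f):
                yield f"{file_id}_{row_id}", {"text": line.strip()}
```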
https://github.com/huggingface/datasets/issues/2552 | Keys should be unique error on code_search_net | And are we sure there aren't a lot of datasets that are now broken by this change? | ## Describe the bug
Loading `code_search_net` seems not possible at the moment.
## Steps to reproduce the bug
```python
>>> load_dataset('code_search_net')
Downloading: 8.50kB [00:00, 3.09MB/s] ... | 18 | Keys should be unique error on code_search_net
## Describe the bug
Loading `code_search_net` seems not possible at the moment.
## Steps to reproduce the bug
```python
>>> load_dataset('code_search_net')
Downloading: 8.50kB [00:00, 3.09MB/s] ... | [
-0.046908617, -0.0271528233, -0.1106119379, 0.3490993977, 0.0812767223, -0.0156192472, 0.1811807752, 0.2770841122, 0.0815913603, 0.054418277, -0.0599071719, 0.3916930556, -0.1846611947, 0.1166166067, 0.156404376, 0.0785155669, -0.0312843956, 0.1728185117, 0.1106828228, -0.05424... |
https://github.com/huggingface/datasets/issues/2552 | Keys should be unique error on code_search_net | Thanks to the dummy data, we know for sure that most of them work as expected.
`code_search_net` wasn't caught because the dummy data only has one dummy data file, while the dataset script can actually load several of them using `os.listdir`. Let me take a look at all the other datasets that use `os.listdir` to see if... | ## Describe the bug
Loading `code_search_net` seems not possible at the moment.
## Steps to reproduce the bug
```python
>>> load_dataset('code_search_net')
Downloading: 8.50kB [00:00, 3.09MB/s] ... | 61 | Keys should be unique error on code_search_net
## Describe the bug
Loading `code_search_net` seems not possible at the moment.
## Steps to reproduce the bug
```python
>>> load_dataset('code_search_net')
Downloading: 8.50kB [00:00, 3.09MB/s] ... | [
-0.0364267603, -0.020195568, -0.0964382589, 0.3276670277, 0.0761768669, -0.0132121714, 0.208099708, 0.2912015319, 0.0984991491, 0.0711847693, -0.0670372322, 0.3835589886, -0.1711657792, 0.1363998652, 0.1288101971, 0.0837999135, -0.0438018404, 0.178087607, 0.1199664921, -0.05948... |
https://github.com/huggingface/datasets/issues/2552 | Keys should be unique error on code_search_net | I found one issue on `fever` (PR here: https://github.com/huggingface/datasets/pull/2557)
All the other ones seem fine :) | ## Describe the bug
Loading `code_search_net` seems not possible at the moment.
## Steps to reproduce the bug
```python
>>> load_dataset('code_search_net')
Downloading: 8.50kB [00:00, 3.09MB/s] ... | 16 | Keys should be unique error on code_search_net
## Describe the bug
Loading `code_search_net` seems not possible at the moment.
## Steps to reproduce the bug
```python
>>> load_dataset('code_search_net')
Downloading: 8.50kB [00:00, 3.09MB/s] ... | [
-0.0202084873, 0.000768123, -0.0907565504, 0.3496334255, 0.0851367489, -0.0248721372, 0.1991795599, 0.2581067383, 0.0932297036, 0.0736749917, -0.0927867666, 0.3846125305, -0.1742853373, 0.1145090014, 0.1579978913, 0.097871393, -0.0322048962, 0.1860357523, 0.1146631092, -0.06635... |