html_url stringlengths 48 51 | title stringlengths 5 268 | comments stringlengths 70 51.8k | body stringlengths 0 29.8k | comment_length int64 16 1.52k | text stringlengths 164 54.1k | embeddings list |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/1641 | muchocine dataset cannot be dowloaded | Hi @mrm8488 and @amoux!
The datasets you are trying to load have been added to the library during the community sprint for v2 last month. They will be available with the v2 release!
For now, there are still a couple of solutions to load the datasets:
1. As suggested by @amoux, you can clone the git repo and pass th... | ```python
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, ... | 81 | muchocine dataset cannot be dowloaded
```python
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, do... | [
-0.36579957604408264,
-0.1491081863641739,
-0.053581614047288895,
0.33292171359062195,
0.43087688088417053,
0.12308886647224426,
0.3599627912044525,
0.3173547089099884,
0.3176315128803253,
0.06259830296039581,
-0.2380271553993225,
0.001200711471028626,
-0.088910311460495,
0.061044182628393... |
https://github.com/huggingface/datasets/issues/1641 | muchocine dataset cannot be dowloaded | If you don't want to clone entire `datasets` repo, just download the `muchocine` directory and pass the local path to the directory. Cheers! | ```python
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, ... | 23 | muchocine dataset cannot be dowloaded
```python
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, do... | [
-0.36579957604408264,
-0.1491081863641739,
-0.053581614047288895,
0.33292171359062195,
0.43087688088417053,
0.12308886647224426,
0.3599627912044525,
0.3173547089099884,
0.3176315128803253,
0.06259830296039581,
-0.2380271553993225,
0.001200711471028626,
-0.088910311460495,
0.061044182628393... |
https://github.com/huggingface/datasets/issues/1641 | muchocine dataset cannot be dowloaded | Muchocine was added recently, that's why it wasn't available yet.
To load it you can just update `datasets`
```
pip install --upgrade datasets
```
and then you can load `muchocine` with
```python
from datasets import load_dataset
dataset = load_dataset("muchocine", split="train")
``` | ```python
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, ... | 41 | muchocine dataset cannot be dowloaded
```python
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, do... | [
-0.36579957604408264,
-0.1491081863641739,
-0.053581614047288895,
0.33292171359062195,
0.43087688088417053,
0.12308886647224426,
0.3599627912044525,
0.3173547089099884,
0.3176315128803253,
0.06259830296039581,
-0.2380271553993225,
0.001200711471028626,
-0.088910311460495,
0.061044182628393... |
https://github.com/huggingface/datasets/issues/1639 | bug with sst2 in glue | Maybe you can use nltk's treebank detokenizer ?
```python
from nltk.tokenize.treebank import TreebankWordDetokenizer
TreebankWordDetokenizer().detokenize("it 's a charming and often affecting journey . ".split())
# "it's a charming and often affecting journey."
``` | Hi
I am getting very low accuracy on SST2 I investigate this and observe that for this dataset sentences are tokenized, while this is correct for the other datasets in GLUE, please see below.
Is there any alternatives I could get untokenized sentences? I am unfortunately under time pressure to report some results on ... | 32 | bug with sst2 in glue
Hi
I am getting very low accuracy on SST2 I investigate this and observe that for this dataset sentences are tokenized, while this is correct for the other datasets in GLUE, please see below.
Is there any alternatives I could get untokenized sentences? I am unfortunately under time pressure t... | [
0.1141936331987381,
-0.19111455976963043,
0.05352902412414551,
0.1571248471736908,
0.15624050796031952,
-0.3605673313140869,
0.1134418398141861,
0.4331328868865967,
-0.09326575696468353,
0.023026524111628532,
-0.12253914028406143,
0.13916973769664764,
-0.09749671816825867,
0.05867533758282... |
https://github.com/huggingface/datasets/issues/1639 | bug with sst2 in glue | I don't know if there exists a detokenized version somewhere. Even the version on kaggle is tokenized | Hi
I am getting very low accuracy on SST2 I investigate this and observe that for this dataset sentences are tokenized, while this is correct for the other datasets in GLUE, please see below.
Is there any alternatives I could get untokenized sentences? I am unfortunately under time pressure to report some results on ... | 17 | bug with sst2 in glue
Hi
I am getting very low accuracy on SST2 I investigate this and observe that for this dataset sentences are tokenized, while this is correct for the other datasets in GLUE, please see below.
Is there any alternatives I could get untokenized sentences? I am unfortunately under time pressure t... | [
0.10537703335285187,
-0.19167111814022064,
0.05356001853942871,
0.08271010965108871,
0.16840626299381256,
-0.328156054019928,
0.1854296624660492,
0.43216732144355774,
-0.09194087982177734,
0.030973872169852257,
-0.09434890747070312,
0.146273672580719,
-0.10577068477869034,
0.13801614940166... |
https://github.com/huggingface/datasets/issues/1636 | winogrande cannot be dowloaded | I have same issue for other datasets (`myanmar_news` in my case).
A version of `datasets` runs correctly on my local machine (**without GPU**) which looking for the dataset at
```
https://raw.githubusercontent.com/huggingface/datasets/master/datasets/myanmar_news/myanmar_news.py
```
Meanwhile, other version r... | Hi,
I am getting this error when trying to run the codes on the cloud. Thank you for any suggestion and help on this @lhoestq
```
File "./finetune_trainer.py", line 318, in <module>
main()
File "./finetune_trainer.py", line 148, in main
for task in data_args.tasks]
File "./finetune_trainer.py", ... | 90 | winogrande cannot be dowloaded
Hi,
I am getting this error when trying to run the codes on the cloud. Thank you for any suggestion and help on this @lhoestq
```
File "./finetune_trainer.py", line 318, in <module>
main()
File "./finetune_trainer.py", line 148, in main
for task in data_args.tasks]... | [
-0.36869677901268005,
0.10485056787729263,
-0.06603816896677017,
0.17274318635463715,
0.29317447543144226,
0.06464018672704697,
0.6664783954620361,
0.07428350299596786,
0.320622980594635,
0.03763075917959213,
-0.12091880291700363,
0.08111235499382019,
-0.027439886704087257,
0.3085550367832... |
https://github.com/huggingface/datasets/issues/1636 | winogrande cannot be dowloaded | It looks like they're two different issues
----------
First for `myanmar_news`:
It must come from the way you installed `datasets`.
If you install `datasets` from source, then the `myanmar_news` script will be loaded from `master`.
However if you install from `pip` it will get it using the version of the li... | Hi,
I am getting this error when trying to run the codes on the cloud. Thank you for any suggestion and help on this @lhoestq
```
File "./finetune_trainer.py", line 318, in <module>
main()
File "./finetune_trainer.py", line 148, in main
for task in data_args.tasks]
File "./finetune_trainer.py", ... | 141 | winogrande cannot be dowloaded
Hi,
I am getting this error when trying to run the codes on the cloud. Thank you for any suggestion and help on this @lhoestq
```
File "./finetune_trainer.py", line 318, in <module>
main()
File "./finetune_trainer.py", line 148, in main
for task in data_args.tasks]... | [
-0.36869677901268005,
0.10485056787729263,
-0.06603816896677017,
0.17274318635463715,
0.29317447543144226,
0.06464018672704697,
0.6664783954620361,
0.07428350299596786,
0.320622980594635,
0.03763075917959213,
-0.12091880291700363,
0.08111235499382019,
-0.027439886704087257,
0.3085550367832... |
https://github.com/huggingface/datasets/issues/1634 | Inspecting datasets per category | That's interesting, can you tell me what you think would be useful to access to inspect a dataset?
You can filter them in the hub with the search by the way: https://huggingface.co/datasets have you seen it? | Hi
Is there a way I could get all NLI datasets/all QA datasets to get some understanding of available datasets per category? this is hard for me to inspect the datasets one by one in the webpage, thanks for the suggestions @lhoestq | 36 | Inspecting datasets per category
Hi
Is there a way I could get all NLI datasets/all QA datasets to get some understanding of available datasets per category? this is hard for me to inspect the datasets one by one in the webpage, thanks for the suggestions @lhoestq
That's interesting, can you tell me what you thin... | [
-0.24553193151950836,
-0.1529335230588913,
-0.22945666313171387,
0.3999655544757843,
0.0459514781832695,
0.2248050570487976,
0.07310562580823898,
0.4707448184490204,
-0.0032280252780765295,
-0.14973489940166473,
-0.5485339164733887,
-0.11188453435897827,
-0.10111634433269501,
0.32587620615... |
https://github.com/huggingface/datasets/issues/1634 | Inspecting datasets per category | Hi @thomwolf
thank you, I was not aware of this, I was looking into the data viewer linked into readme page.
This is exactly what I was looking for, but this does not work currently, please see the attached
I am selecting to see all nli datasets in english and it retrieves none. thanks
, we will focus on that in January (cc @yjernite): https://huggingface.co/datasets?filter=task_ids:natural-language-inference,languages:en | Hi
Is there a way I could get all NLI datasets/all QA datasets to get some understanding of available datasets per category? this is hard for me to inspect the datasets one by one in the webpage, thanks for the suggestions @lhoestq | 28 | Inspecting datasets per category
Hi
Is there a way I could get all NLI datasets/all QA datasets to get some understanding of available datasets per category? this is hard for me to inspect the datasets one by one in the webpage, thanks for the suggestions @lhoestq
I see 4 results for NLI in English but indeed som... | [
-0.11279863864183426,
0.015689359977841377,
-0.2197401225566864,
0.4125653803348541,
-0.007914373651146889,
0.02699165605008602,
0.08438295125961304,
0.4735390841960907,
-0.06211090087890625,
-0.1604091078042984,
-0.5733583569526672,
-0.12270792573690414,
0.05213504284620285,
0.34268009662... |
https://github.com/huggingface/datasets/issues/1633 | social_i_qa wrong format of labels | @lhoestq, should I raise a PR for this? Just a minor change while reading labels text file | Hi,
there is extra "\n" in labels of social_i_qa datasets, no big deal, but I was wondering if you could remove it to make it consistent.
so label is 'label': '1\n', not '1'
thanks
```
>>> import datasets
>>> from datasets import load_dataset
>>> dataset = load_dataset(
... 'social_i_qa')
cahce dir /jul... | 17 | social_i_qa wrong format of labels
Hi,
there is extra "\n" in labels of social_i_qa datasets, no big deal, but I was wondering if you could remove it to make it consistent.
so label is 'label': '1\n', not '1'
thanks
```
>>> import datasets
>>> from datasets import load_dataset
>>> dataset = load_dataset(
.... | [
0.013278299011290073,
-0.22193120419979095,
-0.0782267153263092,
0.37242206931114197,
0.12995882332324982,
-0.19164705276489258,
0.06086030229926109,
0.2249794900417328,
-0.2229049950838089,
0.3125813901424408,
-0.16639092564582825,
-0.34459587931632996,
-0.07857673615217209,
0.53735131025... |
https://github.com/huggingface/datasets/issues/1630 | Adding UKP Argument Aspect Similarity Corpus | Adding a link to the guide on adding a dataset if someone want to give it a try: https://github.com/huggingface/datasets#add-a-new-dataset-to-the-hub
we should add this guide to the issue template @lhoestq | Hi, this would be great to have this dataset included.
## Adding a Dataset
- **Name:** UKP Argument Aspect Similarity Corpus
- **Description:** The UKP Argument Aspect Similarity Corpus (UKP ASPECT) includes 3,595 sentence pairs over 28 controversial topics. Each sentence pair was annotated via crowdsourcing as ei... | 29 | Adding UKP Argument Aspect Similarity Corpus
Hi, this would be great to have this dataset included.
## Adding a Dataset
- **Name:** UKP Argument Aspect Similarity Corpus
- **Description:** The UKP Argument Aspect Similarity Corpus (UKP ASPECT) includes 3,595 sentence pairs over 28 controversial topics. Each sent... | [
-0.3177592158317566,
-0.26209187507629395,
-0.07429405301809311,
-0.1360725611448288,
-0.11523175984621048,
0.17070846259593964,
0.3080383837223053,
0.20302270352840424,
-0.23973609507083893,
0.055295173078775406,
-0.23521345853805542,
0.4752633571624756,
-0.09524598717689514,
-0.119330093... |
https://github.com/huggingface/datasets/issues/1630 | Adding UKP Argument Aspect Similarity Corpus | thanks @thomwolf , this is added now. The template is correct, sorry my mistake not to include it. | Hi, this would be great to have this dataset included.
## Adding a Dataset
- **Name:** UKP Argument Aspect Similarity Corpus
- **Description:** The UKP Argument Aspect Similarity Corpus (UKP ASPECT) includes 3,595 sentence pairs over 28 controversial topics. Each sentence pair was annotated via crowdsourcing as ei... | 18 | Adding UKP Argument Aspect Similarity Corpus
Hi, this would be great to have this dataset included.
## Adding a Dataset
- **Name:** UKP Argument Aspect Similarity Corpus
- **Description:** The UKP Argument Aspect Similarity Corpus (UKP ASPECT) includes 3,595 sentence pairs over 28 controversial topics. Each sent... | [
-0.2550906836986542,
-0.21735580265522003,
-0.08474929630756378,
-0.2169908583164215,
-0.09479812532663345,
0.14514227211475372,
0.36726924777030945,
0.20975413918495178,
-0.28977081179618835,
0.06342780590057373,
-0.1665051132440567,
0.45421457290649414,
-0.029843417927622795,
-0.18061642... |
https://github.com/huggingface/datasets/issues/1627 | `Dataset.map` disable progress bar | Progress bar can be disabled like this:
```python
from datasets.utils.logging import set_verbosity_error
set_verbosity_error()
```
There is this line in `Dataset.map`:
```python
not_verbose = bool(logger.getEffectiveLevel() > WARNING)
```
So any logging level higher than `WARNING` turns off the progress ba... | I can't find anything to turn off the `tqdm` progress bars while running a preprocessing function using `Dataset.map`. I want to do akin to `disable_tqdm=True` in the case of `transformers`. Is there something like that? | 39 | `Dataset.map` disable progress bar
I can't find anything to turn off the `tqdm` progress bars while running a preprocessing function using `Dataset.map`. I want to do akin to `disable_tqdm=True` in the case of `transformers`. Is there something like that?
Progress bar can be disabled like this:
```python
from dat... | [
-0.34583428502082825,
-0.31814858317375183,
-0.05070163309574127,
-0.18781772255897522,
0.3254137635231018,
-0.0002217930305050686,
0.23117858171463013,
0.15339909493923187,
-0.3758890628814697,
0.21497279405593872,
0.17821292579174042,
0.6198930144309998,
-0.20778515934944153,
0.081350669... |
https://github.com/huggingface/datasets/issues/1624 | Cannot download ade_corpus_v2 | Hi @him1411, the dataset you are trying to load has been added during the community sprint and has not been released yet. It will be available with the v2 of `datasets`.
For now, you should be able to load the datasets after installing the latest (master) version of `datasets` using pip:
`pip install git+https://gith... | I tried this to get the dataset following this url : https://huggingface.co/datasets/ade_corpus_v2
but received this error :
`Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_con... | 54 | Cannot download ade_corpus_v2
I tried this to get the dataset following this url : https://huggingface.co/datasets/ade_corpus_v2
but received this error :
`Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cac... | [
-0.03990993648767471,
-0.19826661050319672,
-0.1133943572640419,
0.27419033646583557,
0.2603810727596283,
0.29674506187438965,
-0.04101129248738289,
0.3299032151699066,
0.04059939086437225,
-0.16820748150348663,
-0.2708151936531067,
-0.2759157717227936,
0.07905684411525726,
-0.206055536866... |
https://github.com/huggingface/datasets/issues/1624 | Cannot download ade_corpus_v2 | `ade_corpus_v2` was added recently, that's why it wasn't available yet.
To load it you can just update `datasets`
```
pip install --upgrade datasets
```
and then you can load `ade_corpus_v2` with
```python
from datasets import load_dataset
dataset = load_dataset("ade_corpus_v2", "Ade_corpos_v2_drug_ade_... | I tried this to get the dataset following this url : https://huggingface.co/datasets/ade_corpus_v2
but received this error :
`Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_con... | 61 | Cannot download ade_corpus_v2
I tried this to get the dataset following this url : https://huggingface.co/datasets/ade_corpus_v2
but received this error :
`Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cac... | [
-0.03990993648767471,
-0.19826661050319672,
-0.1133943572640419,
0.27419033646583557,
0.2603810727596283,
0.29674506187438965,
-0.04101129248738289,
0.3299032151699066,
0.04059939086437225,
-0.16820748150348663,
-0.2708151936531067,
-0.2759157717227936,
0.07905684411525726,
-0.206055536866... |
https://github.com/huggingface/datasets/issues/1618 | Can't filter language:EN on https://huggingface.co/datasets | Cool @mapmeld ! My 2 cents (for a next iteration), it would be cool to have a small search widget in the filter dropdown as you have a ton of languages now here! Closing this in the meantime. | When visiting https://huggingface.co/datasets, I don't see an obvious way to filter only English datasets. This is unexpected for me, am I missing something? I'd expect English to be selectable in the language widget. This problem reproduced on Mozilla Firefox and MS Edge:
, I checked the size of the destination directory.
What version of Datasets are you using?
| Hello,
I'm having issue downloading TriviaQA dataset with `load_dataset`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```python
import datasets
dataset = datasets.load_dataset("trivia_qa", "rc", c... | 44 | Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir`
Hello,
I'm having issue downloading TriviaQA dataset with `load_dataset`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```pytho... | [
-0.07275497168302536,
0.16264642775058746,
-0.05210091546177864,
0.4159295856952667,
0.0719529315829277,
0.17672500014305115,
0.19856643676757812,
0.0667586699128151,
0.1333528459072113,
-0.003992323763668537,
-0.21094301342964172,
0.2015080451965332,
-0.047218095511198044,
0.0925027802586... |
https://github.com/huggingface/datasets/issues/1615 | Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir` | I'm using datasets version: 1.1.3. I think you should drop `cache_dir` and use only
`dataset = datasets.load_dataset("trivia_qa", "rc")`
Tried that on colab and it's working there too

| Hello,
I'm having issue downloading TriviaQA dataset with `load_dataset`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```python
import datasets
dataset = datasets.load_dataset("trivia_qa", "rc", c... | 28 | Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir`
Hello,
I'm having issue downloading TriviaQA dataset with `load_dataset`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```pytho... | [
-0.032445378601551056,
0.27497023344039917,
-0.021824920549988747,
0.3239804804325104,
0.13339541852474213,
0.19682002067565918,
0.2799462378025055,
0.05830225721001625,
0.09485325962305069,
-0.05971772223711014,
-0.24502748250961304,
0.27101460099220276,
-0.0008887064759619534,
0.10607985... |
https://github.com/huggingface/datasets/issues/1615 | Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir` | Train, Validation, and Test splits contain 138384, 18669, and 17210 samples respectively. It takes some time to read the samples. Even in your colab notebook it was reading the samples before you killed the process. Let me know if it works now! | Hello,
I'm having issue downloading TriviaQA dataset with `load_dataset`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```python
import datasets
dataset = datasets.load_dataset("trivia_qa", "rc", c... | 42 | Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir`
Hello,
I'm having issue downloading TriviaQA dataset with `load_dataset`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```pytho... | [
-0.09542669355869293,
0.21500161290168762,
-0.02827032282948494,
0.3146701455116272,
0.08805207163095474,
0.15481962263584137,
0.25185665488243103,
0.05872320383787155,
0.13171784579753876,
0.013548278249800205,
-0.2907208800315857,
0.2942447066307068,
-0.06302734464406967,
0.1593241244554... |
https://github.com/huggingface/datasets/issues/1615 | Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir` | Hi, it works on colab but it still doesn't work on my computer, same problem as before - overly large and long extraction process.
I have to use a custom 'cache_dir' because I don't have any space left in my home directory where it is defaulted, maybe this could be the issue? | Hello,
I'm having issue downloading TriviaQA dataset with `load_dataset`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```python
import datasets
dataset = datasets.load_dataset("trivia_qa", "rc", c... | 52 | Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir`
Hello,
I'm having issue downloading TriviaQA dataset with `load_dataset`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```pytho... | [
-0.1124391108751297,
0.20396307110786438,
-0.07271020114421844,
0.4566595256328583,
0.10538526624441147,
0.13809765875339508,
0.2412601262331009,
0.03996584191918373,
0.1459220051765442,
0.07280449569225311,
-0.257124662399292,
0.17974019050598145,
-0.017512431368231773,
0.0835771113634109... |
https://github.com/huggingface/datasets/issues/1615 | Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir` | I tried running this again - More details of the problem:
Code:
```
datasets.load_dataset("trivia_qa", "rc", cache_dir="/path/to/cache")
```
The output:
```
Downloading and preparing dataset trivia_qa/rc (download: 2.48 GiB, generated: 14.92 GiB, post-processed: Unknown size, total: 17.40 GiB) to path/to/cache... | Hello,
I'm having issue downloading TriviaQA dataset with `load_dataset`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```python
import datasets
dataset = datasets.load_dataset("trivia_qa", "rc", c... | 81 | Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir`
Hello,
I'm having issue downloading TriviaQA dataset with `load_dataset`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```pytho... | [
-0.019113028421998024,
0.23462261259555817,
-0.035867705941200256,
0.3387273848056793,
0.14120131731033325,
0.19561465084552765,
0.2685052454471588,
0.02842009626328945,
0.1188834086060524,
-0.08269984275102615,
-0.23905251920223236,
0.2821522057056427,
0.02168474905192852,
0.0726566910743... |
https://github.com/huggingface/datasets/issues/1615 | Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir` | 1) You can clear the huggingface folder in your `.cache` directory to use default directory for datasets. Speed of extraction and loading of samples depends a lot on your machine's configurations too.
2) I tried on colab `dataset = datasets.load_dataset("trivia_qa", "rc", cache_dir = "./datasets")`. After memory usa... | Hello,
I'm having issue downloading TriviaQA dataset with `load_dataset`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```python
import datasets
dataset = datasets.load_dataset("trivia_qa", "rc", c... | 73 | Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir`
Hello,
I'm having issue downloading TriviaQA dataset with `load_dataset`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```pytho... | [
-0.037820324301719666,
0.19693605601787567,
-0.021391872316598892,
0.37703564763069153,
0.07148026674985886,
0.1753476858139038,
0.24096976220607758,
0.05739424750208855,
0.12787044048309326,
-0.022504927590489388,
-0.2598694860935211,
0.26174288988113403,
-0.009975001215934753,
0.15392731... |
https://github.com/huggingface/datasets/issues/1615 | Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir` | Facing the same issue.
I am able to download datasets without `cache_dir`, however, when I specify the `cache_dir`, the process hangs indefinitely after partial download.
Tried for `data = load_dataset("cnn_dailymail", "3.0.0")` | Hello,
I'm having issue downloading TriviaQA dataset with `load_dataset`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```python
import datasets
dataset = datasets.load_dataset("trivia_qa", "rc", c... | 31 | Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir`
Hello,
I'm having issue downloading TriviaQA dataset with `load_dataset`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```pytho... | [
0.0042784931138157845,
0.25041186809539795,
-0.04580298066139221,
0.32035890221595764,
0.17405904829502106,
0.22033658623695374,
0.2745954394340515,
0.018866846337914467,
0.1395893096923828,
-0.08359896391630173,
-0.24111001193523407,
0.2049974501132965,
-0.015735357999801636,
0.0229580532... |
https://github.com/huggingface/datasets/issues/1615 | Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir` | Hi @ashutoshml,
I tried this and it worked for me:
`data = load_dataset("cnn_dailymail", "3.0.0", cache_dir="./dummy")`
I'm using datasets==1.8.0. It took around 3-4 mins for dataset to unpack and start loading examples. | Hello,
I'm having issue downloading TriviaQA dataset with `load_dataset`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```python
import datasets
dataset = datasets.load_dataset("trivia_qa", "rc", c... | 31 | Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir`
Hello,
I'm having issue downloading TriviaQA dataset with `load_dataset`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```pytho... | [
-0.016170630231499672,
0.22978444397449493,
-0.03223977982997894,
0.34343016147613525,
0.1108233854174614,
0.22201845049858093,
0.30555278062820435,
0.037281155586242676,
0.10845688730478287,
-0.09214194864034653,
-0.22216461598873138,
0.2408728450536728,
-0.038552574813365936,
0.116200506... |
https://github.com/huggingface/datasets/issues/1615 | Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir` | Ok. I waited for 20-30 mins, and it still is stuck.
I am using datasets==1.8.0.
Is there anyway to check what is happening? like a` --verbose` flag?

| Hello,
I'm having issue downloading TriviaQA dataset with `load_dataset`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```python
import datasets
dataset = datasets.load_dataset("trivia_qa", "rc", c... | 34 | Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir`
Hello,
I'm having issue downloading TriviaQA dataset with `load_dataset`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```pytho... | [
-0.005245869047939777,
0.13088785111904144,
-0.06461629271507263,
0.3441568911075592,
0.07185829430818558,
0.21029171347618103,
0.1724407821893692,
0.13109226524829865,
0.13916942477226257,
-0.071065254509449,
-0.16993333399295807,
0.23123103380203247,
-0.019225582480430603,
0.006620461121... |
https://github.com/huggingface/datasets/issues/1611 | shuffle with torch generator | Is there a way one can convert the two generator? not sure overall what alternatives I could have to shuffle the datasets with a torch generator, thanks | Hi
I need to shuffle mutliple large datasets with `generator = torch.Generator()` for a distributed sampler which needs to make sure datasets are consistent across different cores, for this, this is really necessary for me to use torch generator, based on documentation this generator is not supported with datasets, I... | 27 | shuffle with torch generator
Hi
I need to shuffle mutliple large datasets with `generator = torch.Generator()` for a distributed sampler which needs to make sure datasets are consistent across different cores, for this, this is really necessary for me to use torch generator, based on documentation this generator i... | [
-0.41417816281318665,
-0.22763441503047943,
-0.030986107885837555,
0.43630221486091614,
0.27918341755867004,
0.043277665972709656,
0.03506441414356232,
0.08620297163724899,
-0.07051491737365723,
0.5191637277603149,
-0.239180326461792,
0.49696728587150574,
-0.3308678865432739,
-0.3823136389... |
https://github.com/huggingface/datasets/issues/1611 | shuffle with torch generator | @lhoestq let me please expalin in more details, maybe you could help me suggesting an alternative to solve the issue for now, I have multiple large datasets using huggingface library, then I need to define a distributed sampler on top of it, for this I need to shard the datasets and give each shard to each core, but b... | Hi
I need to shuffle mutliple large datasets with `generator = torch.Generator()` for a distributed sampler which needs to make sure datasets are consistent across different cores, for this, this is really necessary for me to use torch generator, based on documentation this generator is not supported with datasets, I... | 136 | shuffle with torch generator
Hi
I need to shuffle mutliple large datasets with `generator = torch.Generator()` for a distributed sampler which needs to make sure datasets are consistent across different cores, for this, this is really necessary for me to use torch generator, based on documentation this generator i... | [
-0.3458523750305176,
-0.4519605338573456,
-0.005385358352214098,
0.3233453333377838,
0.200884610414505,
-0.17069607973098755,
0.11381731927394867,
0.0025902152992784977,
0.012134021148085594,
0.5316620469093323,
-0.15948058664798737,
0.43233516812324524,
-0.3134092092514038,
-0.21658900380... |
https://github.com/huggingface/datasets/issues/1611 | shuffle with torch generator | @lhoestq Is there a way I could shuffle the datasets from this library with a custom defined shuffle function? thanks for your help on this. | Hi
I need to shuffle mutliple large datasets with `generator = torch.Generator()` for a distributed sampler which needs to make sure datasets are consistent across different cores, for this, this is really necessary for me to use torch generator, based on documentation this generator is not supported with datasets, I... | 25 | shuffle with torch generator
Hi
I need to shuffle mutliple large datasets with `generator = torch.Generator()` for a distributed sampler which needs to make sure datasets are consistent across different cores, for this, this is really necessary for me to use torch generator, based on documentation this generator i... | [
-0.303578644990921,
-0.15417614579200745,
-0.04289385676383972,
0.33737799525260925,
0.12449656426906586,
-0.040524452924728394,
0.14959953725337982,
0.06986864656209946,
-0.09886626899242401,
0.49832332134246826,
-0.14780068397521973,
0.5367980599403381,
-0.3674837052822113,
-0.3628568053... |
https://github.com/huggingface/datasets/issues/1611 | shuffle with torch generator | Right now the shuffle method only accepts the `seed` (optional int) or `generator` (optional `np.random.Generator`) parameters.
Here is a suggestion to shuffle the data using your own shuffle method using `select`.
`select` can be used to re-order the dataset samples or simply pick a few ones if you want.
It's wha... | Hi
I need to shuffle mutliple large datasets with `generator = torch.Generator()` for a distributed sampler which needs to make sure datasets are consistent across different cores, for this, this is really necessary for me to use torch generator, based on documentation this generator is not supported with datasets, I... | 120 | shuffle with torch generator
Hi
I need to shuffle mutliple large datasets with `generator = torch.Generator()` for a distributed sampler which needs to make sure datasets are consistent across different cores, for this, this is really necessary for me to use torch generator, based on documentation this generator i... | [
-0.23207035660743713,
-0.1594075858592987,
-0.013259691186249256,
0.24928660690784454,
0.19171589612960815,
0.016128459945321083,
0.08609315007925034,
0.12564094364643097,
-0.07389256358146667,
0.5062562823295593,
-0.05656023323535919,
0.6657480597496033,
-0.3163607716560364,
-0.3614482283... |
https://github.com/huggingface/datasets/issues/1611 | shuffle with torch generator | thank you @lhoestq thank you very much for responding to my question, this greatly helped me and remove the blocking for continuing my work, thanks. | Hi
I need to shuffle mutliple large datasets with `generator = torch.Generator()` for a distributed sampler which needs to make sure datasets are consistent across different cores, for this, this is really necessary for me to use torch generator, based on documentation this generator is not supported with datasets, I... | 25 | shuffle with torch generator
Hi
I need to shuffle mutliple large datasets with `generator = torch.Generator()` for a distributed sampler which needs to make sure datasets are consistent across different cores, for this, this is really necessary for me to use torch generator, based on documentation this generator i... | [
-0.41175633668899536,
-0.28181108832359314,
-0.02333904430270195,
0.34293094277381897,
0.2881666421890259,
-0.008327564224600792,
0.18536485731601715,
0.02024802193045616,
-0.038213085383176804,
0.5751110911369324,
-0.0707363560795784,
0.506485104560852,
-0.246088445186615,
-0.398516893386... |
https://github.com/huggingface/datasets/issues/1611 | shuffle with torch generator | @lhoestq could you confirm the method proposed does not bring the whole data into memory? thanks | Hi
I need to shuffle mutliple large datasets with `generator = torch.Generator()` for a distributed sampler which needs to make sure datasets are consistent across different cores, for this, this is really necessary for me to use torch generator, based on documentation this generator is not supported with datasets, I... | 16 | shuffle with torch generator
Hi
I need to shuffle mutliple large datasets with `generator = torch.Generator()` for a distributed sampler which needs to make sure datasets are consistent across different cores, for this, this is really necessary for me to use torch generator, based on documentation this generator i... | [
-0.38578668236732483,
-0.22824153304100037,
-0.016249267384409904,
0.40480920672416687,
0.20821085572242737,
-0.03419066220521927,
0.057011887431144714,
0.08082731813192368,
-0.06658587604761124,
0.5318526029586792,
-0.07527392357587814,
0.48124876618385315,
-0.2632020115852356,
-0.4862928... |
https://github.com/huggingface/datasets/issues/1610 | shuffle does not accept seed | Hi Thomas
thanks for reponse, yes, I did checked it, but this does not work for me please see
```
(internship) rkarimi@italix17:/idiap/user/rkarimi/dev$ python
Python 3.7.9 (default, Aug 31 2020, 12:42:55)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more inform... | Hi
I need to shuffle the dataset, but this needs to be based on epoch+seed to be consistent across the cores, when I pass seed to shuffle, this does not accept seed, could you assist me with this? thanks @lhoestq
| 134 | shuffle does not accept seed
Hi
I need to shuffle the dataset, but this needs to be based on epoch+seed to be consistent across the cores, when I pass seed to shuffle, this does not accept seed, could you assist me with this? thanks @lhoestq
Hi Thomas
thanks for reponse, yes, I did checked it, but this does ... | [
-0.36114031076431274,
-0.2253303974866867,
-0.05234286189079285,
0.10026116669178009,
0.2563576102256775,
-0.1110152006149292,
0.1722034066915512,
0.08529658615589142,
-0.00571319367736578,
0.36125168204307556,
0.14410842955112457,
0.382855623960495,
-0.2792424261569977,
0.4236408770084381... |
https://github.com/huggingface/datasets/issues/1610 | shuffle does not accept seed | Thanks for reporting !
Indeed it looks like an issue with `suffle` on `DatasetDict`. We're going to fix that.
In the meantime you can shuffle each split (train, validation, test) separately:
```python
shuffled_train_dataset = data["train"].shuffle(seed=42)
```
| Hi
I need to shuffle the dataset, but this needs to be based on epoch+seed to be consistent across the cores, when I pass seed to shuffle, this does not accept seed, could you assist me with this? thanks @lhoestq
| 36 | shuffle does not accept seed
Hi
I need to shuffle the dataset, but this needs to be based on epoch+seed to be consistent across the cores, when I pass seed to shuffle, this does not accept seed, could you assist me with this? thanks @lhoestq
Thanks for reporting !
Indeed it looks like an issue with `suffl... | [
-0.13478028774261475,
-0.22007255256175995,
-0.07214224338531494,
0.023678524419665337,
0.291415274143219,
0.06661561131477356,
0.19130456447601318,
0.13251495361328125,
-0.11940495669841766,
0.31389477849006653,
0.13474319875240326,
0.3167940378189087,
-0.3220251798629761,
0.3731395304203... |
https://github.com/huggingface/datasets/issues/1609 | Not able to use 'jigsaw_toxicity_pred' dataset | Hi @jassimran,
The `jigsaw_toxicity_pred` dataset has not been released yet, it will be available with version 2 of `datasets`, coming soon.
You can still access it by installing the master (unreleased) version of datasets directly :
`pip install git+https://github.com/huggingface/datasets.git@master`
Please let me... | When trying to use jigsaw_toxicity_pred dataset, like this in a [colab](https://colab.research.google.com/drive/1LwO2A5M2X5dvhkAFYE4D2CUT3WUdWnkn?usp=sharing):
```
from datasets import list_datasets, list_metrics, load_dataset, load_metric
ds = load_dataset("jigsaw_toxicity_pred")
```
I see below error:
>... | 46 | Not able to use 'jigsaw_toxicity_pred' dataset
When trying to use jigsaw_toxicity_pred dataset, like this in a [colab](https://colab.research.google.com/drive/1LwO2A5M2X5dvhkAFYE4D2CUT3WUdWnkn?usp=sharing):
```
from datasets import list_datasets, list_metrics, load_dataset, load_metric
ds = load_dataset("jigsaw... | [
-0.1765049397945404,
-0.07363665848970413,
-0.060839299112558365,
0.2410435825586319,
0.3765500783920288,
0.2648484408855438,
0.19602495431900024,
0.007784516550600529,
-0.13723605871200562,
0.18303577601909637,
-0.2765671908855438,
0.3002055883407593,
-0.09545636177062988,
0.0872110575437... |
https://github.com/huggingface/datasets/issues/1600 | AttributeError: 'DatasetDict' object has no attribute 'train_test_split' | Hi @david-waterworth!
As indicated in the error message, `load_dataset("csv")` returns a `DatasetDict` object, which is mapping of `str` to `Dataset` objects. I believe in this case the behavior is to return a `train` split with all the data.
`train_test_split` is a method of the `Dataset` object, so you will need ... | The following code fails with "'DatasetDict' object has no attribute 'train_test_split'" - am I doing something wrong?
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files='data.txt')
dataset = dataset.train_test_split(test_size=0.1)
```
> AttributeError: 'DatasetDict' object has no at... | 76 | AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
The following code fails with "'DatasetDict' object has no attribute 'train_test_split'" - am I doing something wrong?
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files='data.txt')
dataset = dataset.train_test_sp... | [
-0.07218451052904129,
-0.1323111355304718,
-0.08616770803928375,
0.2685682475566864,
0.3532630205154419,
0.17875216901302338,
0.4272960424423218,
0.31180649995803833,
0.33190974593162537,
0.16632086038589478,
0.2769961357116699,
0.18330319225788116,
-0.29801082611083984,
0.3075742125511169... |
https://github.com/huggingface/datasets/issues/1600 | AttributeError: 'DatasetDict' object has no attribute 'train_test_split' | Thanks, that's working - the same issue also tripped me up with training.
I also agree https://github.com/huggingface/datasets/issues/767 would be a useful addition. | The following code fails with "'DatasetDict' object has no attribute 'train_test_split'" - am I doing something wrong?
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files='data.txt')
dataset = dataset.train_test_split(test_size=0.1)
```
> AttributeError: 'DatasetDict' object has no at... | 22 | AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
The following code fails with "'DatasetDict' object has no attribute 'train_test_split'" - am I doing something wrong?
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files='data.txt')
dataset = dataset.train_test_sp... | [
-0.18468602001667023,
-0.19844816625118256,
-0.05458797514438629,
0.2467511147260666,
0.3953750729560852,
0.10302431136369705,
0.4267283082008362,
0.38310331106185913,
0.32619932293891907,
0.19565406441688538,
0.15076839923858643,
0.0640655905008316,
-0.26865872740745544,
0.450716763734817... |
https://github.com/huggingface/datasets/issues/1600 | AttributeError: 'DatasetDict' object has no attribute 'train_test_split' | > ```python
> dataset_dict = load_dataset(`'csv', data_files='data.txt')
> dataset = dataset_dict['split name, eg train']
> dataset.train_test_split(test_size=0.1)
> ```
I am getting error like
KeyError: 'split name, eg train'
Could you please tell me how to solve this? | The following code fails with "'DatasetDict' object has no attribute 'train_test_split'" - am I doing something wrong?
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files='data.txt')
dataset = dataset.train_test_split(test_size=0.1)
```
> AttributeError: 'DatasetDict' object has no at... | 37 | AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
The following code fails with "'DatasetDict' object has no attribute 'train_test_split'" - am I doing something wrong?
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files='data.txt')
dataset = dataset.train_test_sp... | [
-0.1205269992351532,
-0.20445972681045532,
-0.11787382513284683,
0.34275099635124207,
0.2748532295227051,
0.2128630131483078,
0.4176216423511505,
0.27919116616249084,
0.3849133849143982,
0.1766943633556366,
0.2599658966064453,
0.2442348748445511,
-0.3558095693588257,
0.40158531069755554,
... |
https://github.com/huggingface/datasets/issues/1594 | connection error | This happen quite often when they are too many concurrent requests to github.
i can understand it’s a bit cumbersome to handle on the user side. Maybe we should try a few times in the lib (eg with timeout) before failing, what do you think @lhoestq ? | Hi
I am hitting to this error, thanks
```
> Traceback (most recent call last):
File "finetune_t5_trainer.py", line 379, in <module>
main()
File "finetune_t5_trainer.py", line 208, in main
if training_args.do_eval or training_args.evaluation_strategy != EvaluationStrategy.NO
File "finetune_t5_tr... | 47 | connection error
Hi
I am hitting to this error, thanks
```
> Traceback (most recent call last):
File "finetune_t5_trainer.py", line 379, in <module>
main()
File "finetune_t5_trainer.py", line 208, in main
if training_args.do_eval or training_args.evaluation_strategy != EvaluationStrategy.NO
... | [
-0.32930827140808105,
-0.13458459079265594,
-0.201692596077919,
0.3231748044490814,
0.5001869797706604,
-0.18860074877738953,
0.16027376055717468,
0.29970741271972656,
-0.30557873845100403,
0.1561729907989502,
0.012657275423407555,
-0.0056439596228301525,
0.22510136663913727,
0.16426494717... |
https://github.com/huggingface/datasets/issues/1594 | connection error | Hi @lhoestq thank you for the modification, I will use`script_version="master"` for now :), to my experience, also setting timeout to a larger number like 3*60 which I normally use helps a lot on this.
| Hi
I am hitting to this error, thanks
```
> Traceback (most recent call last):
File "finetune_t5_trainer.py", line 379, in <module>
main()
File "finetune_t5_trainer.py", line 208, in main
if training_args.do_eval or training_args.evaluation_strategy != EvaluationStrategy.NO
File "finetune_t5_tr... | 34 | connection error
Hi
I am hitting to this error, thanks
```
> Traceback (most recent call last):
File "finetune_t5_trainer.py", line 379, in <module>
main()
File "finetune_t5_trainer.py", line 208, in main
if training_args.do_eval or training_args.evaluation_strategy != EvaluationStrategy.NO
... | [
-0.32930827140808105,
-0.13458459079265594,
-0.201692596077919,
0.3231748044490814,
0.5001869797706604,
-0.18860074877738953,
0.16027376055717468,
0.29970741271972656,
-0.30557873845100403,
0.1561729907989502,
0.012657275423407555,
-0.0056439596228301525,
0.22510136663913727,
0.16426494717... |
https://github.com/huggingface/datasets/issues/1593 | Access to key in DatasetDict map | Indeed that would be cool
Also FYI right now the easiest way to do this is
```python
dataset_dict["train"] = dataset_dict["train"].map(my_transform_for_the_train_set)
dataset_dict["test"] = dataset_dict["test"].map(my_transform_for_the_test_set)
``` | It is possible that we want to do different things in the `map` function (and possibly other functions too) of a `DatasetDict`, depending on the key. I understand that `DatasetDict.map` is a really thin wrapper of `Dataset.map`, so it is easy to directly implement this functionality in the client code. Still, it'd be n... | 24 | Access to key in DatasetDict map
It is possible that we want to do different things in the `map` function (and possibly other functions too) of a `DatasetDict`, depending on the key. I understand that `DatasetDict.map` is a really thin wrapper of `Dataset.map`, so it is easy to directly implement this functionality i... | [
0.012361436150968075,
-0.0980539619922638,
-0.15044139325618744,
0.03158802539110184,
0.14852547645568848,
0.051978882402181625,
0.215781107544899,
0.18562687933444977,
0.2782221734523773,
0.11376120895147324,
0.01877688616514206,
0.6971399188041687,
-0.19462822377681732,
0.362470835447311... |
https://github.com/huggingface/datasets/issues/1591 | IWSLT-17 Link Broken | Sorry, this is a duplicate of #1287. Not sure why it didn't come up when I searched `iwslt` in the issues list. | ```
FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz
``` | 22 | IWSLT-17 Link Broken
```
FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz
```
Sorry, this is a duplicate of #1287. Not sure why it didn't come up when I searched `iwslt` in the issues list. | [
0.0006843995652161539,
-0.4732922911643982,
-0.047303032130002975,
-0.025245849043130875,
-0.07237210869789124,
-0.13779973983764648,
0.4260438084602356,
0.21967223286628723,
0.08615799993276596,
-0.05100829526782036,
0.1014910340309143,
0.011178182438015938,
0.23412485420703888,
0.1399888... |
https://github.com/huggingface/datasets/issues/1590 | Add helper to resolve namespace collision | I was thinking about using something like [importlib](https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly) to over-ride the collision.
**Reason requested**: I use the [following template](https://github.com/jramapuram/ml_base/) repo where I house all my datasets as a submodule. | Many projects use a module called `datasets`, however this is incompatible with huggingface datasets. It would be great if there if there was some helper or similar function to resolve such a common conflict. | 29 | Add helper to resolve namespace collision
Many projects use a module called `datasets`, however this is incompatible with huggingface datasets. It would be great if there if there was some helper or similar function to resolve such a common conflict.
I was thinking about using something like [importlib](https://do... | [
-0.15694521367549896,
0.0858055129647255,
-0.045953672379255295,
0.27503684163093567,
0.1614484190940857,
-0.04710717871785164,
0.18794690072536469,
0.23515480756759644,
0.13997302949428558,
0.06762176007032394,
-0.10601531714200974,
0.07263168692588806,
-0.3794683814048767,
0.365108340978... |
https://github.com/huggingface/datasets/issues/1590 | Add helper to resolve namespace collision | Alternatively huggingface could consider some submodule type structure like:
`import huggingface.datasets`
`import huggingface.transformers`
`datasets` is a very common module in ML and should be an end-user decision and not scope all of python ¯\_(ツ)_/¯
| Many projects use a module called `datasets`, however this is incompatible with huggingface datasets. It would be great if there if there was some helper or similar function to resolve such a common conflict. | 34 | Add helper to resolve namespace collision
Many projects use a module called `datasets`, however this is incompatible with huggingface datasets. It would be great if there if there was some helper or similar function to resolve such a common conflict.
Alternatively huggingface could consider some submodule type str... | [
-0.02022393047809601,
-0.265224814414978,
-0.09271939843893051,
0.1554591953754425,
0.3644939661026001,
-0.1625666320323944,
0.22060473263263702,
0.1671951413154602,
0.0891764909029007,
0.2554263770580292,
-0.2716359496116638,
0.16977296769618988,
-0.23846887052059174,
0.2573797106742859,
... |
https://github.com/huggingface/datasets/issues/1590 | Add helper to resolve namespace collision | It also wasn't initially obvious to me that the samples which contain `import datasets` were in fact importing a huggingface library (in fact all the huggingface imports are very generic - transformers, tokenizers, datasets...) | Many projects use a module called `datasets`, however this is incompatible with huggingface datasets. It would be great if there if there was some helper or similar function to resolve such a common conflict. | 34 | Add helper to resolve namespace collision
Many projects use a module called `datasets`, however this is incompatible with huggingface datasets. It would be great if there if there was some helper or similar function to resolve such a common conflict.
It also wasn't initially obvious to me that the samples which co... | [
-0.17162343859672546,
-0.05433178320527077,
-0.10609053075313568,
0.16611701250076294,
0.32007652521133423,
-0.2631007730960846,
0.22331497073173523,
0.2214924842119217,
0.11521472781896591,
0.3468361794948578,
-0.21770727634429932,
0.07699406892061234,
-0.2270480841398239,
0.2291832119226... |
https://github.com/huggingface/datasets/issues/1585 | FileNotFoundError for `amazon_polarity` | Hi @phtephanx , the `amazon_polarity` dataset has not been released yet. It will be available in the coming soon v2of `datasets` :)
You can still access it now if you want, but you will need to install datasets via the master branch:
`pip install git+https://github.com/huggingface/datasets.git@master` | Version: `datasets==v1.1.3`
### Reproduction
```python
from datasets import load_dataset
data = load_dataset("amazon_polarity")
```
crashes with
```bash
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amazon_polarity/amazon_polarity.py
```
and
... | 45 | FileNotFoundError for `amazon_polarity`
Version: `datasets==v1.1.3`
### Reproduction
```python
from datasets import load_dataset
data = load_dataset("amazon_polarity")
```
crashes with
```bash
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/amaz... | [
-0.34798744320869446,
-0.4079696238040924,
-0.1594301015138626,
0.14680100977420807,
0.34096136689186096,
0.12450356781482697,
0.13682961463928223,
0.07088252156972885,
-0.0147831030189991,
0.21001499891281128,
-0.03047691471874714,
-0.0856965035200119,
-0.27529624104499817,
0.007995410822... |
https://github.com/huggingface/datasets/issues/1581 | Installing datasets and transformers in a tensorflow docker image throws Permission Error on 'import transformers' | Thanks for reporting !
You can override the directory in which cache file are stored using for example
```
ENV HF_HOME="/root/cache/hf_cache_home"
```
This way both `transformers` and `datasets` will use this directory instead of the default `.cache` | I am using a docker container, based on latest tensorflow-gpu image, to run transformers and datasets (4.0.1 and 1.1.3 respectively - Dockerfile attached below). Importing transformers throws a Permission Error to access `/.cache`:
```
$ docker run --gpus=all --rm -it -u $(id -u):$(id -g) -v $(pwd)/data:/root/data ... | 37 | Installing datasets and transformers in a tensorflow docker image throws Permission Error on 'import transformers'
I am using a docker container, based on latest tensorflow-gpu image, to run transformers and datasets (4.0.1 and 1.1.3 respectively - Dockerfile attached below). Importing transformers throws a Permissio... | [
-0.2601940631866455,
0.15375785529613495,
-0.08791602402925491,
0.13611888885498047,
0.05338246002793312,
0.05931965634226799,
0.6560069918632507,
0.19603128731250763,
0.1612774133682251,
-0.01367007102817297,
-0.3051157295703888,
-0.1683906763792038,
-0.032444197684526443,
-0.510906398296... |
https://github.com/huggingface/datasets/issues/1581 | Installing datasets and transformers in a tensorflow docker image throws Permission Error on 'import transformers' | > Thanks for reporting !
> You can override the directory in which cache file are stored using for example
>
> ```
> ENV HF_HOME="/root/cache/hf_cache_home"
> ```
>
> This way both `transformers` and `datasets` will use this directory instead of the default `.cache`
can we disable caching directly? | I am using a docker container, based on latest tensorflow-gpu image, to run transformers and datasets (4.0.1 and 1.1.3 respectively - Dockerfile attached below). Importing transformers throws a Permission Error to access `/.cache`:
```
$ docker run --gpus=all --rm -it -u $(id -u):$(id -g) -v $(pwd)/data:/root/data ... | 50 | Installing datasets and transformers in a tensorflow docker image throws Permission Error on 'import transformers'
I am using a docker container, based on latest tensorflow-gpu image, to run transformers and datasets (4.0.1 and 1.1.3 respectively - Dockerfile attached below). Importing transformers throws a Permissio... | [
-0.2601940631866455,
0.15375785529613495,
-0.08791602402925491,
0.13611888885498047,
0.05338246002793312,
0.05931965634226799,
0.6560069918632507,
0.19603128731250763,
0.1612774133682251,
-0.01367007102817297,
-0.3051157295703888,
-0.1683906763792038,
-0.032444197684526443,
-0.510906398296... |
https://github.com/huggingface/datasets/issues/1581 | Installing datasets and transformers in a tensorflow docker image throws Permission Error on 'import transformers' | Hi ! Unfortunately no since we need this directory to load datasets.
When you load a dataset, it downloads the raw data files in the cache directory inside <cache_dir>/downloads. Then it builds the dataset and saves it as arrow data inside <cache_dir>/<dataset_name>.
However you can specify the directory of your ch... | I am using a docker container, based on latest tensorflow-gpu image, to run transformers and datasets (4.0.1 and 1.1.3 respectively - Dockerfile attached below). Importing transformers throws a Permission Error to access `/.cache`:
```
$ docker run --gpus=all --rm -it -u $(id -u):$(id -g) -v $(pwd)/data:/root/data ... | 68 | Installing datasets and transformers in a tensorflow docker image throws Permission Error on 'import transformers'
I am using a docker container, based on latest tensorflow-gpu image, to run transformers and datasets (4.0.1 and 1.1.3 respectively - Dockerfile attached below). Importing transformers throws a Permissio... | [
-0.2601940631866455,
0.15375785529613495,
-0.08791602402925491,
0.13611888885498047,
0.05338246002793312,
0.05931965634226799,
0.6560069918632507,
0.19603128731250763,
0.1612774133682251,
-0.01367007102817297,
-0.3051157295703888,
-0.1683906763792038,
-0.032444197684526443,
-0.510906398296... |
https://github.com/huggingface/datasets/issues/1541 | connection issue while downloading data | could you tell me how I can avoid download, by pre-downloading the data first, put them in a folder so the code does not try to redownload? could you tell me the path to put the downloaded data, and how to do it? thanks
@lhoestq | Hi
I am running my codes on google cloud, and I am getting this error resulting in the failure of the codes when trying to download the data, could you assist me to solve this? also as a temporary solution, could you tell me how I can increase the number of retries and timeout to at least let the models run for now. t... | 45 | connection issue while downloading data
Hi
I am running my codes on google cloud, and I am getting this error resulting in the failure of the codes when trying to download the data, could you assist me to solve this? also as a temporary solution, could you tell me how I can increase the number of retries and timeout... | [
-0.23484064638614655,
-0.026418551802635193,
-0.16849590837955475,
0.3393736779689789,
0.43915751576423645,
-0.14071203768253326,
0.1563979983329773,
0.3509827256202698,
-0.2138078212738037,
0.06850875169038773,
-0.020897213369607925,
-0.10968578606843948,
0.07043521106243134,
0.5369911789... |
https://github.com/huggingface/datasets/issues/1541 | connection issue while downloading data | Does your instance have an internet connection ?
If you don't have an internet connection you'll need to have the dataset on the instance disk.
To do so first download the dataset on another machine using `load_dataset` and then you can save it in a folder using `my_dataset.save_to_disk("path/to/folder")`. Once the... | Hi
I am running my codes on google cloud, and I am getting this error resulting in the failure of the codes when trying to download the data, could you assist me to solve this? also as a temporary solution, could you tell me how I can increase the number of retries and timeout to at least let the models run for now. t... | 63 | connection issue while downloading data
Hi
I am running my codes on google cloud, and I am getting this error resulting in the failure of the codes when trying to download the data, could you assist me to solve this? also as a temporary solution, could you tell me how I can increase the number of retries and timeout... | [
-0.23484064638614655,
-0.026418551802635193,
-0.16849590837955475,
0.3393736779689789,
0.43915751576423645,
-0.14071203768253326,
0.1563979983329773,
0.3509827256202698,
-0.2138078212738037,
0.06850875169038773,
-0.020897213369607925,
-0.10968578606843948,
0.07043521106243134,
0.5369911789... |
https://github.com/huggingface/datasets/issues/1514 | how to get all the options of a property in datasets | In a dataset, labels correspond to the `ClassLabel` feature that has the `names` property that returns string represenation of the integer classes (or `num_classes` to get the number of different classes). | Hi
could you tell me how I can get all unique options of a property of dataset?
for instance in case of boolq, if the user wants to know which unique labels it has, is there a way to access unique labels without getting all training data lables and then forming a set i mean? thanks | 31 | how to get all the options of a property in datasets
Hi
could you tell me how I can get all unique options of a property of dataset?
for instance in case of boolq, if the user wants to know which unique labels it has, is there a way to access unique labels without getting all training data lables and then forming ... | [
-0.35085099935531616,
-0.5276567935943604,
-0.032890141010284424,
0.3197905421257019,
-0.008852461352944374,
0.1327081322669983,
0.0669214054942131,
0.06275560706853867,
-0.1925356239080429,
0.270304799079895,
-0.2775319516658783,
0.29905879497528076,
-0.2552962899208069,
0.122093044221401... |
https://github.com/huggingface/datasets/issues/1514 | how to get all the options of a property in datasets | I think the `features` attribute of the dataset object is what you are looking for:
```
>>> dataset.features
{'sentence1': Value(dtype='string', id=None),
'sentence2': Value(dtype='string', id=None),
'label': ClassLabel(num_classes=2, names=['not_equivalent', 'equivalent'], names_file=None, id=None),
'idx': Va... | Hi
could you tell me how I can get all unique options of a property of dataset?
for instance in case of boolq, if the user wants to know which unique labels it has, is there a way to access unique labels without getting all training data lables and then forming a set i mean? thanks | 42 | how to get all the options of a property in datasets
Hi
could you tell me how I can get all unique options of a property of dataset?
for instance in case of boolq, if the user wants to know which unique labels it has, is there a way to access unique labels without getting all training data lables and then forming ... | [
-0.22874321043491364,
-0.705501139163971,
0.002190690254792571,
0.36623743176460266,
0.05097472667694092,
0.14188794791698456,
0.026728369295597076,
0.06085136532783508,
-0.15471282601356506,
0.26906898617744446,
-0.40873822569847107,
0.2562119960784912,
-0.25112611055374146,
0.41049697995... |
https://github.com/huggingface/datasets/issues/1478 | Inconsistent argument names. | Also for the `Accuracy` metric the `accuracy_score` method should have its args in the opposite order so `accuracy_score(predictions, references,,,)`. | Just find it a wee bit odd that in the transformers library `predictions` are those made by the model:
https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_utils.py#L51-L61
While in many datasets metrics they are the ground truth labels:
https://github.com/huggingface/datasets/blob/c3f5... | 19 | Inconsistent argument names.
Just find it a wee bit odd that in the transformers library `predictions` are those made by the model:
https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_utils.py#L51-L61
While in many datasets metrics they are the ground truth labels:
https://github.com... | [
0.2731815576553345,
-0.36698707938194275,
0.01052460540086031,
0.2615755498409271,
0.406415194272995,
-0.22076712548732758,
0.24355220794677734,
-0.03292868658900261,
-0.19604937732219696,
0.04979119077324867,
-0.17125891149044037,
0.11039980500936508,
0.02398989163339138,
0.10108134895563... |
https://github.com/huggingface/datasets/issues/1478 | Inconsistent argument names. | Thanks for pointing this out ! 🕵🏻
Predictions and references should indeed be swapped in the docstring.
However, the call to `accuracy_score` should not be changed, it [signature](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html#sklearn.metrics.accuracy_score) being:
```
skle... | Just find it a wee bit odd that in the transformers library `predictions` are those made by the model:
https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_utils.py#L51-L61
While in many datasets metrics they are the ground truth labels:
https://github.com/huggingface/datasets/blob/c3f5... | 49 | Inconsistent argument names.
Just find it a wee bit odd that in the transformers library `predictions` are those made by the model:
https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_utils.py#L51-L61
While in many datasets metrics they are the ground truth labels:
https://github.com... | [
0.25693175196647644,
-0.4395505487918854,
-0.010300193913280964,
0.16327065229415894,
0.4465915560722351,
-0.21154089272022247,
0.1434418261051178,
-0.033495355397462845,
0.041428498923778534,
0.08147896826267242,
-0.040513329207897186,
0.18038822710514069,
-0.05485958978533745,
0.13490684... |
https://github.com/huggingface/datasets/issues/1452 | SNLI dataset contains labels with value -1 | I believe the `-1` label is used for missing/NULL data as per HuggingFace Dataset conventions. If I recall correctly SNLI has some entries with no (gold) labels in the dataset. | ```
import datasets
nli_data = datasets.load_dataset("snli")
train_data = nli_data['train']
train_labels = train_data['label']
label_set = set(train_labels)
print(label_set)
```
**Output:**
`{0, 1, 2, -1}` | 30 | SNLI dataset contains labels with value -1
```
import datasets
nli_data = datasets.load_dataset("snli")
train_data = nli_data['train']
train_labels = train_data['label']
label_set = set(train_labels)
print(label_set)
```
**Output:**
`{0, 1, 2, -1}`
I believe the `-1` label is used for missing/NULL data a... | [
0.23513014614582062,
-0.4906347692012787,
-0.17801444232463837,
0.3363339602947235,
0.23378238081932068,
0.02110082283616066,
0.3015904724597931,
0.2003285437822342,
0.08077318966388702,
0.28642940521240234,
-0.23511511087417603,
0.35362890362739563,
-0.20612071454524994,
0.278016746044158... |
https://github.com/huggingface/datasets/issues/1444 | FileNotFound remotly, can't load a dataset | This dataset will be available in version-2 of the library. If you want to use this dataset now, install datasets from `master` branch rather.
Command to install datasets from `master` branch:
`!pip install git+https://github.com/huggingface/datasets.git@master` | ```py
!pip install datasets
import datasets as ds
corpus = ds.load_dataset('large_spanish_corpus')
```
gives the error
> FileNotFoundError: Couldn't find file locally at large_spanish_corpus/large_spanish_corpus.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/large_spa... | 34 | FileNotFound remotly, can't load a dataset
```py
!pip install datasets
import datasets as ds
corpus = ds.load_dataset('large_spanish_corpus')
```
gives the error
> FileNotFoundError: Couldn't find file locally at large_spanish_corpus/large_spanish_corpus.py, or remotely at https://raw.githubusercontent.com/... | [
-0.3270966708660126,
-0.38887128233909607,
-0.02118534967303276,
0.3421649932861328,
0.3611238896846771,
0.053929828107357025,
-0.0934886485338211,
0.21286128461360931,
-0.06176580861210823,
0.2608966827392578,
-0.3553181290626526,
0.17308634519577026,
0.030048852786421776,
-0.159437075257... |
https://github.com/huggingface/datasets/issues/1422 | Can't map dataset (loaded from csv) | Please could you post the whole script? I can't reproduce your issue. After updating the feature names/labels to match with the data, everything works fine for me. Try to update datasets/transformers to the newest version. | Hello! I am trying to load single csv file with two columns: ('label': str, 'text' str), where is label is str of two possible classes.
Below steps are similar with [this notebook](https://colab.research.google.com/drive/1-JIJlao4dI-Ilww_NnTc0rxtp-ymgDgM?usp=sharing), where bert model and tokenizer are used to class... | 35 | Can't map dataset (loaded from csv)
Hello! I am trying to load single csv file with two columns: ('label': str, 'text' str), where is label is str of two possible classes.
Below steps are similar with [this notebook](https://colab.research.google.com/drive/1-JIJlao4dI-Ilww_NnTc0rxtp-ymgDgM?usp=sharing), where bert... | [
-0.0363197922706604,
-0.04933058097958565,
0.059148337692022324,
0.19603030383586884,
0.31758007407188416,
0.12963463366031647,
0.6318865418434143,
0.37842679023742676,
0.2701599597930908,
-0.014723473228514194,
-0.22110891342163086,
0.26376453042030334,
-0.06318508833646774,
-0.0150063317... |
https://github.com/huggingface/datasets/issues/1422 | Can't map dataset (loaded from csv) | Actually, the problem was how `tokenize` function was defined. This was completely my side mistake, so there are really no needs in this issue anymore | Hello! I am trying to load single csv file with two columns: ('label': str, 'text' str), where is label is str of two possible classes.
Below steps are similar with [this notebook](https://colab.research.google.com/drive/1-JIJlao4dI-Ilww_NnTc0rxtp-ymgDgM?usp=sharing), where bert model and tokenizer are used to class... | 25 | Can't map dataset (loaded from csv)
Hello! I am trying to load single csv file with two columns: ('label': str, 'text' str), where is label is str of two possible classes.
Below steps are similar with [this notebook](https://colab.research.google.com/drive/1-JIJlao4dI-Ilww_NnTc0rxtp-ymgDgM?usp=sharing), where bert... | [
-0.0363197922706604,
-0.04933058097958565,
0.059148337692022324,
0.19603030383586884,
0.31758007407188416,
0.12963463366031647,
0.6318865418434143,
0.37842679023742676,
0.2701599597930908,
-0.014723473228514194,
-0.22110891342163086,
0.26376453042030334,
-0.06318508833646774,
-0.0150063317... |
https://github.com/huggingface/datasets/issues/1324 | ❓ Sharing ElasticSearch indexed dataset | Hello @pietrolesci , I am not sure to understand what you are trying to do here.
If you're looking for ways to save a dataset on disk, you can you the `save_to_disk` method:
```python
>>> import datasets
>>> loaded_dataset = datasets.load("dataset_name")
>>> loaded_dataset.save_to_disk("/path/on/your/disk")
```... | Hi there,
First of all, thank you very much for this amazing library. Datasets have become my preferred data structure for basically everything I am currently doing.
**Question:** I'm working with a dataset and I have an elasticsearch container running at localhost:9200. I added an elasticsearch index and I was w... | 73 | ❓ Sharing ElasticSearch indexed dataset
Hi there,
First of all, thank you very much for this amazing library. Datasets have become my preferred data structure for basically everything I am currently doing.
**Question:** I'm working with a dataset and I have an elasticsearch container running at localhost:9200.... | [
-0.1429358422756195,
-0.0638306513428688,
-0.16498704254627228,
0.11578205972909927,
-0.23901288211345673,
0.18162041902542114,
0.23754887282848358,
0.06453099101781845,
0.1375180333852768,
0.26328596472740173,
-0.2542993426322937,
0.110201895236969,
-0.040349286049604416,
-0.1499139815568... |
https://github.com/huggingface/datasets/issues/1324 | ❓ Sharing ElasticSearch indexed dataset | Hi @SBrandeis,
Thanks a lot for picking up my request.
Maybe I can clarify my use-case with a bit of context. Say I have the IMDb dataset. I create an ES index on it. Now I can save and reload the dataset from disk normally. Once I reload the dataset, it is easy to retrieve the ES index on my machine. I was wonderin... | Hi there,
First of all, thank you very much for this amazing library. Datasets have become my preferred data structure for basically everything I am currently doing.
**Question:** I'm working with a dataset and I have an elasticsearch container running at localhost:9200. I added an elasticsearch index and I was w... | 98 | ❓ Sharing ElasticSearch indexed dataset
Hi there,
First of all, thank you very much for this amazing library. Datasets have become my preferred data structure for basically everything I am currently doing.
**Question:** I'm working with a dataset and I have an elasticsearch container running at localhost:9200.... | [
-0.1988600492477417,
0.12474790960550308,
-0.11967775225639343,
0.03391892835497856,
-0.3991584777832031,
0.16318729519844055,
0.2689712941646576,
0.06411916017532349,
0.0988277792930603,
0.20210033655166626,
-0.2281736135482788,
0.1584114283323288,
0.023379765450954437,
-0.041442077606916... |
https://github.com/huggingface/datasets/issues/1324 | ❓ Sharing ElasticSearch indexed dataset | Thanks for the clarification.
I am not familiar with ElasticSearch, but if I understand well you're trying to migrate your data along with the ES index.
My advice would be to check out ES documentation, for instance, this might help you: https://www.elastic.co/guide/en/cloud/current/ec-migrate-data.html
Let me k... | Hi there,
First of all, thank you very much for this amazing library. Datasets have become my preferred data structure for basically everything I am currently doing.
**Question:** I'm working with a dataset and I have an elasticsearch container running at localhost:9200. I added an elasticsearch index and I was w... | 48 | ❓ Sharing ElasticSearch indexed dataset
Hi there,
First of all, thank you very much for this amazing library. Datasets have become my preferred data structure for basically everything I am currently doing.
**Question:** I'm working with a dataset and I have an elasticsearch container running at localhost:9200.... | [
-0.145775705575943,
-0.04534680023789406,
-0.13107475638389587,
0.08623764663934708,
-0.3029540777206421,
0.14406320452690125,
0.1689627766609192,
0.03949399292469025,
0.09051045775413513,
0.16783718764781952,
-0.22479918599128723,
0.118238665163517,
0.03831694647669792,
-0.048301257193088... |
https://github.com/huggingface/datasets/issues/1299 | can't load "german_legal_entity_recognition" dataset | Please if you could tell me more about the error?
1. Please check the directory you've been working on
2. Check for any typos | FileNotFoundError: Couldn't find file locally at german_legal_entity_recognition/german_legal_entity_recognition.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/german_legal_entity_recognition/german_legal_entity_recognition.py or https://s3.amazonaws.com/datasets.huggingface.co... | 24 | can't load "german_legal_entity_recognition" dataset
FileNotFoundError: Couldn't find file locally at german_legal_entity_recognition/german_legal_entity_recognition.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/german_legal_entity_recognition/german_legal_entity_recognition... | [
-0.246110737323761,
-0.29548466205596924,
-0.040122129023075104,
0.4629742503166199,
0.1347414255142212,
0.0751197338104248,
0.18290972709655762,
0.2415546327829361,
0.27432313561439514,
0.08433935791254044,
-0.16218896210193634,
-0.24677741527557373,
-0.04103861376643181,
0.22715811431407... |
https://github.com/huggingface/datasets/issues/1299 | can't load "german_legal_entity_recognition" dataset | > Please if you could tell me more about the error?
>
> 1. Please check the directory you've been working on
> 2. Check for any typos
Error happens during the execution of this line:
dataset = load_dataset("german_legal_entity_recognition")
Also, when I try to open mentioned links via Opera I have errors "40... | FileNotFoundError: Couldn't find file locally at german_legal_entity_recognition/german_legal_entity_recognition.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/german_legal_entity_recognition/german_legal_entity_recognition.py or https://s3.amazonaws.com/datasets.huggingface.co... | 77 | can't load "german_legal_entity_recognition" dataset
FileNotFoundError: Couldn't find file locally at german_legal_entity_recognition/german_legal_entity_recognition.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/german_legal_entity_recognition/german_legal_entity_recognition... | [
-0.12846189737319946,
-0.09706597775220871,
-0.030172018334269524,
0.5242337584495544,
0.2877427935600281,
0.13808666169643402,
0.19560953974723816,
0.22927193343639374,
0.18780918419361115,
0.18087968230247498,
-0.3449555039405823,
-0.13546445965766907,
0.045405253767967224,
0.23866581916... |
https://github.com/huggingface/datasets/issues/1299 | can't load "german_legal_entity_recognition" dataset | Hello @nataly-obr, the `german_legal_entity_recognition` dataset has not yet been released (it is part of the coming soon v2 release).
You can still access it now if you want, but you will need to install `datasets` via the master branch:
`pip install git+https://github.com/huggingface/datasets.git@master`
Pleas... | FileNotFoundError: Couldn't find file locally at german_legal_entity_recognition/german_legal_entity_recognition.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/german_legal_entity_recognition/german_legal_entity_recognition.py or https://s3.amazonaws.com/datasets.huggingface.co... | 52 | can't load "german_legal_entity_recognition" dataset
FileNotFoundError: Couldn't find file locally at german_legal_entity_recognition/german_legal_entity_recognition.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/german_legal_entity_recognition/german_legal_entity_recognition... | [
-0.28713545203208923,
-0.22930456697940826,
-0.05587424710392952,
0.3209628164768219,
0.1926872581243515,
0.10350369662046432,
0.12274926900863647,
0.2395581305027008,
0.2547087073326111,
-0.003333593253046274,
-0.12832622230052948,
-0.12318460643291473,
-0.062068480998277664,
0.3180630207... |
https://github.com/huggingface/datasets/issues/1290 | imdb dataset cannot be downloaded | Hi @rabeehk , I am unable to reproduce your problem locally.
Can you try emptying the cache (removing the content of `/idiap/temp/rkarimi/cache_home_1/datasets`) and retry ? | hi
please find error below getting imdb train spli:
thanks
`
datasets.load_dataset>>> datasets.load_dataset("imdb", split="train")`
errors
```
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
Downloading and preparing dataset imdb/plain_text (d... | 25 | imdb dataset cannot be downloaded
hi
please find error below getting imdb train spli:
thanks
`
datasets.load_dataset>>> datasets.load_dataset("imdb", split="train")`
errors
```
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
Downloading and ... | [
-0.4860099256038666,
0.02991257794201374,
-0.18373194336891174,
0.2636173665523529,
0.3486502170562744,
0.30775588750839233,
0.31570538878440857,
0.44396668672561646,
0.21628108620643616,
0.021267659962177277,
-0.09907682240009308,
-0.07242101430892944,
-0.03154214471578598,
0.158099800348... |
https://github.com/huggingface/datasets/issues/1290 | imdb dataset cannot be downloaded | Hi,
thanks, I did remove the cache and still the same error here
```
>>> a = datasets.load_dataset("imdb", split="train")
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
Downloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 1... | hi
please find error below getting imdb train spli:
thanks
`
datasets.load_dataset>>> datasets.load_dataset("imdb", split="train")`
errors
```
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
Downloading and preparing dataset imdb/plain_text (d... | 115 | imdb dataset cannot be downloaded
hi
please find error below getting imdb train spli:
thanks
`
datasets.load_dataset>>> datasets.load_dataset("imdb", split="train")`
errors
```
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
Downloading and ... | [
-0.4860099256038666,
0.02991257794201374,
-0.18373194336891174,
0.2636173665523529,
0.3486502170562744,
0.30775588750839233,
0.31570538878440857,
0.44396668672561646,
0.21628108620643616,
0.021267659962177277,
-0.09907682240009308,
-0.07242101430892944,
-0.03154214471578598,
0.158099800348... |
https://github.com/huggingface/datasets/issues/1287 | 'iwslt2017-ro-nl', cannot be downloaded | Looks like the data has been moved from its original location to google drive
New url: https://drive.google.com/u/0/uc?id=12ycYSzLIG253AFN35Y6qoyf9wtkOjakp&export=download | Hi
I am trying
`>>> datasets.load_dataset("iwslt2017", 'iwslt2017-ro-nl', split="train")`
getting this error thank you for your help
```
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
Downloading and preparing dataset iwsl_t217/iwslt2017-ro-nl (downlo... | 17 | 'iwslt2017-ro-nl', cannot be downloaded
Hi
I am trying
`>>> datasets.load_dataset("iwslt2017", 'iwslt2017-ro-nl', split="train")`
getting this error thank you for your help
```
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
Downloading and preparin... | [
-0.341976135969162,
-0.30060911178588867,
-0.15237276256084442,
0.27061399817466736,
0.21390970051288605,
0.21579669415950775,
0.25694066286087036,
0.36505481600761414,
0.1976393610239029,
-0.020129917189478874,
-0.06545224040746689,
-0.11762124300003052,
0.14409954845905304,
-0.0291722137... |
https://github.com/huggingface/datasets/issues/1286 | [libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted | I remember also getting the same issue for several other translation datasets like all the iwslt2017 group, this is blokcing me and I really need to fix it and I was wondering if you have an idea on this. @lhoestq thanks,. | Hi
I am getting this error when evaluating on wmt16-ro-en using finetune_trainer.py of huggingface repo. thank for your help
{'epoch': 20.0}
100%|████████████████████████████... | 41 | [libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted
Hi
I am getting this error when evaluating on wm... | [
-0.01973504200577736,
-0.7191824913024902,
0.003932482097297907,
0.3493676781654358,
0.48424214124679565,
0.007673418615013361,
0.24244433641433716,
0.12168796360492706,
-0.25134631991386414,
0.3279396593570709,
-0.08214803785085678,
-0.12255542725324631,
-0.12133372575044632,
0.3986015021... |
https://github.com/huggingface/datasets/issues/1286 | [libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted | maybe there is an empty line or something inside these datasets? could you tell me why this is happening? thanks | Hi
I am getting this error when evaluating on wmt16-ro-en using finetune_trainer.py of huggingface repo. thank for your help
{'epoch': 20.0}
100%|████████████████████████████... | 20 | [libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted
Hi
I am getting this error when evaluating on wm... | [
-0.01973504200577736,
-0.7191824913024902,
0.003932482097297907,
0.3493676781654358,
0.48424214124679565,
0.007673418615013361,
0.24244433641433716,
0.12168796360492706,
-0.25134631991386414,
0.3279396593570709,
-0.08214803785085678,
-0.12255542725324631,
-0.12133372575044632,
0.3986015021... |
https://github.com/huggingface/datasets/issues/1286 | [libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted | I just checked and the wmt16 en-ro doesn't have empty lines
```python
from datasets import load_dataset
d = load_dataset("wmt16", "ro-en", split="train")
len(d) # 610320
len(d.filter(lambda x: len(x["translation"]["en"].strip()) > 0)) # 610320
len(d.filter(lambda x: len(x["translation"]["ro"].strip()) > 0)) ... | Hi
I am getting this error when evaluating on wmt16-ro-en using finetune_trainer.py of huggingface repo. thank for your help
{'epoch': 20.0}
100%|████████████████████████████... | 59 | [libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted
Hi
I am getting this error when evaluating on wm... | [
-0.01973504200577736,
-0.7191824913024902,
0.003932482097297907,
0.3493676781654358,
0.48424214124679565,
0.007673418615013361,
0.24244433641433716,
0.12168796360492706,
-0.25134631991386414,
0.3279396593570709,
-0.08214803785085678,
-0.12255542725324631,
-0.12133372575044632,
0.3986015021... |
https://github.com/huggingface/datasets/issues/1286 | [libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted | Hi @lhoestq
I am not really sure which part is causing this, to me this is more related to dataset library as this is happening for some of the datassets below please find the information to reprodcue the bug, this is really blocking me and I appreciate your help
## Environment info
- `transformers` version: 3.... | Hi
I am getting this error when evaluating on wmt16-ro-en using finetune_trainer.py of huggingface repo. thank for your help
{'epoch': 20.0}
100%|████████████████████████████... | 1,524 | [libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted
Hi
I am getting this error when evaluating on wm... | [
-0.01973504200577736,
-0.7191824913024902,
0.003932482097297907,
0.3493676781654358,
0.48424214124679565,
0.007673418615013361,
0.24244433641433716,
0.12168796360492706,
-0.25134631991386414,
0.3279396593570709,
-0.08214803785085678,
-0.12255542725324631,
-0.12133372575044632,
0.3986015021... |
https://github.com/huggingface/datasets/issues/1285 | boolq does not work | here is the minimal code to reproduce
`datasets>>> datasets.load_dataset("boolq", "train")
the errors
```
`cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
Using custom data configuration train
Downloading and preparing dataset boolq/train (download:... | Hi
I am getting this error when trying to load boolq, thanks for your help
ts_boolq_default_0.1.0_2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11.lock
Traceback (most recent call last):
File "finetune_t5_trainer.py", line 274, in <module>
main()
File "finetune_t5_trainer.py", line 147, ... | 115 | boolq does not work
Hi
I am getting this error when trying to load boolq, thanks for your help
ts_boolq_default_0.1.0_2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11.lock
Traceback (most recent call last):
File "finetune_t5_trainer.py", line 274, in <module>
main()
File "finetune_t5_... | [
-0.21914304792881012,
-0.20522330701351166,
-0.10218259692192078,
0.05680425092577934,
0.04338549077510834,
0.07777059823274612,
0.3575466573238373,
0.25291958451271057,
0.2545487582683563,
-0.029093649238348007,
-0.137118399143219,
0.318046897649765,
-0.2989821135997772,
0.435960143804550... |
https://github.com/huggingface/datasets/issues/1285 | boolq does not work | This has been fixed by #881
this fix will be available in the next release soon.
If you don't want to wait for the release you can actually load the latest version of boolq by specifying `script_version="master"` in `load_dataset` | Hi
I am getting this error when trying to load boolq, thanks for your help
ts_boolq_default_0.1.0_2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11.lock
Traceback (most recent call last):
File "finetune_t5_trainer.py", line 274, in <module>
main()
File "finetune_t5_trainer.py", line 147, ... | 39 | boolq does not work
Hi
I am getting this error when trying to load boolq, thanks for your help
ts_boolq_default_0.1.0_2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11.lock
Traceback (most recent call last):
File "finetune_t5_trainer.py", line 274, in <module>
main()
File "finetune_t5_... | [
-0.21914304792881012,
-0.20522330701351166,
-0.10218259692192078,
0.05680425092577934,
0.04338549077510834,
0.07777059823274612,
0.3575466573238373,
0.25291958451271057,
0.2545487582683563,
-0.029093649238348007,
-0.137118399143219,
0.318046897649765,
-0.2989821135997772,
0.435960143804550... |
https://github.com/huggingface/datasets/issues/1167 | ❓ On-the-fly tokenization with datasets, tokenizers, and torch Datasets and Dataloaders | We're working on adding on-the-fly transforms in datasets.
Currently the only on-the-fly functions that can be applied are in `set_format` in which we transform the data in either numpy/torch/tf tensors or pandas.
For example
```python
dataset.set_format("torch")
```
applies `torch.Tensor` to the dataset entries ... | Hi there,
I have a question regarding "on-the-fly" tokenization. This question was elicited by reading the "How to train a new language model from scratch using Transformers and Tokenizers" [here](https://huggingface.co/blog/how-to-train). Towards the end there is this sentence: "If your dataset is very large, you c... | 63 | ❓ On-the-fly tokenization with datasets, tokenizers, and torch Datasets and Dataloaders
Hi there,
I have a question regarding "on-the-fly" tokenization. This question was elicited by reading the "How to train a new language model from scratch using Transformers and Tokenizers" [here](https://huggingface.co/blog/ho... | [
-0.05715365335345268,
-0.03786188364028931,
0.05116446688771248,
-0.09090297669172287,
0.178022101521492,
-0.02216704934835434,
0.5987793207168579,
0.08288965374231339,
-0.3151784837245941,
-0.07321522384881973,
0.10414644330739975,
0.26249703764915466,
-0.28402194380760193,
0.088003613054... |
https://github.com/huggingface/datasets/issues/1110 | Using a feature named "_type" fails with certain operations | Thanks for reporting !
Indeed this is a keyword in the library that is used to encode/decode features to a python dictionary that we can save/load to json.
We can probably change `_type` to something that is less likely to collide with user feature names.
In this case we would want something backward compatible th... | A column named `_type` leads to a `TypeError: unhashable type: 'dict'` for certain operations:
```python
from datasets import Dataset, concatenate_datasets
ds = Dataset.from_dict({"_type": ["whatever"]}).map()
concatenate_datasets([ds])
# or simply
Dataset(ds._data)
```
Context: We are using datasets to persi... | 74 | Using a feature named "_type" fails with certain operations
A column named `_type` leads to a `TypeError: unhashable type: 'dict'` for certain operations:
```python
from datasets import Dataset, concatenate_datasets
ds = Dataset.from_dict({"_type": ["whatever"]}).map()
concatenate_datasets([ds])
# or simply
D... | [
0.007838190533220768,
-0.2019331008195877,
0.033574461936950684,
-0.045919932425022125,
0.3190201222896576,
0.13687825202941895,
0.4551842212677002,
0.4090704023838043,
0.15967422723770142,
0.030558796599507332,
0.4140094518661499,
0.4669056832790375,
0.03611650690436363,
0.519295036792755... |
https://github.com/huggingface/datasets/issues/1103 | Add support to download kaggle datasets | Hey, I think this is great idea. Any plan to integrate kaggle private datasets loading to `datasets`? | We can use API key | 17 | Add support to download kaggle datasets
We can use API key
Hey, I think this is great idea. Any plan to integrate kaggle private datasets loading to `datasets`? | [
-0.09296482056379318,
-0.05358406901359558,
-0.3102498948574066,
-0.003568349638953805,
0.11542533338069916,
-0.019987260922789574,
0.21321535110473633,
0.12746118009090424,
0.4255775511264801,
0.10141831636428833,
-0.20426180958747864,
0.6420329809188843,
-0.08120814710855484,
0.887046933... |
https://github.com/huggingface/datasets/issues/1064 | Not support links with 302 redirect | > Hi !
> This kind of links is now supported by the library since #1316
I updated links in TLC datasets to be the github links in this pull request
https://github.com/huggingface/datasets/pull/1737
Everything works now. Thank you. | I have an issue adding this download link https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz
it might be because it is not a direct link (it returns 302 and redirects to aws that returns 403 for head requests).
```
r.head("https://github.com/jitkapat/thailitcorpus/releases/downlo... | 37 | Not support links with 302 redirect
I have an issue adding this download link https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz
it might be because it is not a direct link (it returns 302 and redirects to aws that returns 403 for head requests).
```
r.head("https://github.com... | [
-0.0889686569571495,
-0.2517872154712677,
-0.00887474324554205,
-0.15047161281108856,
0.16920582950115204,
-0.014681430533528328,
-0.07347162067890167,
0.35192424058914185,
0.03704380244016647,
-0.08535672724246979,
-0.14623351395130157,
0.1753971129655838,
-0.008547297678887844,
0.0927390... |
https://github.com/huggingface/datasets/issues/1046 | Dataset.map() turns tensors into lists? | A solution is to have the tokenizer return a list instead of a tensor, and then use `dataset_tok.set_format(type = 'torch')` to convert that list into a tensor. Still not sure if bug. | I apply `Dataset.map()` to a function that returns a dict of torch tensors (like a tokenizer from the repo transformers). However, in the mapped dataset, these tensors have turned to lists!
```import datasets
import torch
from datasets import load_dataset ... | 32 | Dataset.map() turns tensors into lists?
I apply `Dataset.map()` to a function that returns a dict of torch tensors (like a tokenizer from the repo transformers). However, in the mapped dataset, these tensors have turned to lists!
```import datasets
import torch
from datasets import load_dataset ... | [
-0.0905342623591423,
-0.17366203665733337,
-0.13390696048736572,
0.21302978694438934,
0.21620355546474457,
0.21936547756195068,
0.4914945960044861,
0.3712700307369232,
0.2574450671672821,
-0.026397651061415672,
-0.1461348682641983,
0.6802520155906677,
-0.100302554666996,
-0.416458576917648... |
https://github.com/huggingface/datasets/issues/1046 | Dataset.map() turns tensors into lists? | It is expected behavior, you should set the format to `"torch"` as you mentioned to get pytorch tensors back.
By default datasets returns pure python objects. | I apply `Dataset.map()` to a function that returns a dict of torch tensors (like a tokenizer from the repo transformers). However, in the mapped dataset, these tensors have turned to lists!
```import datasets
import torch
from datasets import load_dataset ... | 26 | Dataset.map() turns tensors into lists?
I apply `Dataset.map()` to a function that returns a dict of torch tensors (like a tokenizer from the repo transformers). However, in the mapped dataset, these tensors have turned to lists!
```import datasets
import torch
from datasets import load_dataset ... | [
-0.0905342623591423,
-0.17366203665733337,
-0.13390696048736572,
0.21302978694438934,
0.21620355546474457,
0.21936547756195068,
0.4914945960044861,
0.3712700307369232,
0.2574450671672821,
-0.026397651061415672,
-0.1461348682641983,
0.6802520155906677,
-0.100302554666996,
-0.416458576917648... |
https://github.com/huggingface/datasets/issues/1004 | how large datasets are handled under the hood | This library uses Apache Arrow under the hood to store datasets on disk.
The advantage of Apache Arrow is that it allows to memory map the dataset. This allows to load datasets bigger than memory and with almost no RAM usage. It also offers excellent I/O speed.
For example when you access one element or one batch
... | Hi
I want to use multiple large datasets with a mapping style dataloader, where they cannot fit into memory, could you tell me how you handled the datasets under the hood? is this you bring all in memory in case of mapping style ones? or is this some sharding under the hood and you bring in memory when necessary, than... | 90 | how large datasets are handled under the hood
Hi
I want to use multiple large datasets with a mapping style dataloader, where they cannot fit into memory, could you tell me how you handled the datasets under the hood? is this you bring all in memory in case of mapping style ones? or is this some sharding under the ... | [
-0.0588860809803009,
-0.4006503224372864,
-0.08413930982351303,
0.49534037709236145,
-0.005189197137951851,
-0.04104107618331909,
0.2562442123889923,
0.08263860642910004,
0.43701550364494324,
0.10284914076328278,
-0.1519438773393631,
0.030345266684889793,
-0.10561215877532959,
-0.021718868... |
https://github.com/huggingface/datasets/issues/1004 | how large datasets are handled under the hood | How can we change how much data is loaded to memory with Arrow? I think that I am having some performance issue with it. When Arrow loads the data from disk it does it in multiprocess? It's almost twice slower training with arrow than in memory.
EDIT:
My fault! I had not seen the `dataloader_num_workers` in `Traini... | Hi
I want to use multiple large datasets with a mapping style dataloader, where they cannot fit into memory, could you tell me how you handled the datasets under the hood? is this you bring all in memory in case of mapping style ones? or is this some sharding under the hood and you bring in memory when necessary, than... | 68 | how large datasets are handled under the hood
Hi
I want to use multiple large datasets with a mapping style dataloader, where they cannot fit into memory, could you tell me how you handled the datasets under the hood? is this you bring all in memory in case of mapping style ones? or is this some sharding under the ... | [
-0.12156739085912704,
-0.4438161253929138,
-0.1115838885307312,
0.4983530044555664,
0.06788340955972672,
-0.18067386746406555,
0.19715037941932678,
-0.0075911893509328365,
0.4332485795021057,
0.07328794151544571,
-0.10711450129747391,
0.2367631494998932,
-0.13384363055229187,
0.04586600884... |
https://github.com/huggingface/datasets/issues/1004 | how large datasets are handled under the hood | > How can we change how much data is loaded to memory with Arrow? I think that I am having some performance issue with it. When Arrow loads the data from disk it does it in multiprocess? It's almost twice slower training with arrow than in memory.
Loading arrow data from disk is done with memory-mapping. This allows... | Hi
I want to use multiple large datasets with a mapping style dataloader, where they cannot fit into memory, could you tell me how you handled the datasets under the hood? is this you bring all in memory in case of mapping style ones? or is this some sharding under the hood and you bring in memory when necessary, than... | 192 | how large datasets are handled under the hood
Hi
I want to use multiple large datasets with a mapping style dataloader, where they cannot fit into memory, could you tell me how you handled the datasets under the hood? is this you bring all in memory in case of mapping style ones? or is this some sharding under the ... | [
-0.10941378772258759,
-0.43085765838623047,
-0.08712811768054962,
0.49038180708885193,
0.019265061244368553,
-0.20029783248901367,
0.17106175422668457,
0.03665684536099434,
0.4349530339241028,
0.038885701447725296,
-0.1141778752207756,
0.2551892101764679,
-0.12327373772859573,
0.0230295844... |
https://github.com/huggingface/datasets/issues/996 | NotADirectoryError while loading the CNN/Dailymail dataset | Looks like the google drive download failed.
I'm getting a `Google Drive - Quota exceeded` error while looking at the downloaded file.
We should consider finding a better host than google drive for this dataset imo
related : #873 #864 |
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------... | 40 | NotADirectoryError while loading the CNN/Dailymail dataset
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642... | [
-0.15028664469718933,
0.013875752687454224,
0.002715535694733262,
0.2634350657463074,
0.47745734453201294,
0.05984543636441231,
0.49018222093582153,
0.3076288402080536,
-0.2510558068752289,
0.25677669048309326,
-0.3245874047279358,
-0.0494808591902256,
-0.4837742745876312,
-0.1979145407676... |
https://github.com/huggingface/datasets/issues/996 | NotADirectoryError while loading the CNN/Dailymail dataset | It is working now, thank you.
Should I leave this issue open to address the Quota-exceeded error? |
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------... | 17 | NotADirectoryError while loading the CNN/Dailymail dataset
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642... | [
-0.15028664469718933,
0.013875752687454224,
0.002715535694733262,
0.2634350657463074,
0.47745734453201294,
0.05984543636441231,
0.49018222093582153,
0.3076288402080536,
-0.2510558068752289,
0.25677669048309326,
-0.3245874047279358,
-0.0494808591902256,
-0.4837742745876312,
-0.1979145407676... |
https://github.com/huggingface/datasets/issues/996 | NotADirectoryError while loading the CNN/Dailymail dataset | I've looked into it and couldn't find a solution. This looks like a Google Drive limitation..
Please try to use other hosts when possible |
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------... | 24 | NotADirectoryError while loading the CNN/Dailymail dataset
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642... | [
-0.15028664469718933,
0.013875752687454224,
0.002715535694733262,
0.2634350657463074,
0.47745734453201294,
0.05984543636441231,
0.49018222093582153,
0.3076288402080536,
-0.2510558068752289,
0.25677669048309326,
-0.3245874047279358,
-0.0494808591902256,
-0.4837742745876312,
-0.1979145407676... |
https://github.com/huggingface/datasets/issues/996 | NotADirectoryError while loading the CNN/Dailymail dataset | The original links are google drive links. Would it be feasible for HF to maintain their own servers for this? Also, I think the same issue must also exist with TFDS. |
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------... | 31 | NotADirectoryError while loading the CNN/Dailymail dataset
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642... | [
-0.15028664469718933,
0.013875752687454224,
0.002715535694733262,
0.2634350657463074,
0.47745734453201294,
0.05984543636441231,
0.49018222093582153,
0.3076288402080536,
-0.2510558068752289,
0.25677669048309326,
-0.3245874047279358,
-0.0494808591902256,
-0.4837742745876312,
-0.1979145407676... |
https://github.com/huggingface/datasets/issues/996 | NotADirectoryError while loading the CNN/Dailymail dataset | It's possible to host data on our side but we should ask the authors. TFDS has the same issue and doesn't have a solution either afaik.
Otherwise you can use the google drive link, but it it's not that convenient because of this quota issue. |
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------... | 45 | NotADirectoryError while loading the CNN/Dailymail dataset
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642... | [
-0.15028664469718933,
0.013875752687454224,
0.002715535694733262,
0.2634350657463074,
0.47745734453201294,
0.05984543636441231,
0.49018222093582153,
0.3076288402080536,
-0.2510558068752289,
0.25677669048309326,
-0.3245874047279358,
-0.0494808591902256,
-0.4837742745876312,
-0.1979145407676... |
https://github.com/huggingface/datasets/issues/996 | NotADirectoryError while loading the CNN/Dailymail dataset | Okay. I imagine asking every author who shares their dataset on Google Drive will also be cumbersome. |
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------... | 17 | NotADirectoryError while loading the CNN/Dailymail dataset
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642... | [
-0.15028664469718933,
0.013875752687454224,
0.002715535694733262,
0.2634350657463074,
0.47745734453201294,
0.05984543636441231,
0.49018222093582153,
0.3076288402080536,
-0.2510558068752289,
0.25677669048309326,
-0.3245874047279358,
-0.0494808591902256,
-0.4837742745876312,
-0.1979145407676... |
https://github.com/huggingface/datasets/issues/996 | NotADirectoryError while loading the CNN/Dailymail dataset | Not as long as the data is stored on GG drive unfortunately.
Maybe we can ask if there's a mirror ?
Hi @JafferWilson is there a download link to get cnn dailymail from another host than GG drive ?
To give you some context, this library provides tools to download and process datasets. For CNN DailyMail the data a... |
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...
---------------------------------------... | 84 | NotADirectoryError while loading the CNN/Dailymail dataset
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642... | [
-0.15028664469718933,
0.013875752687454224,
0.002715535694733262,
0.2634350657463074,
0.47745734453201294,
0.05984543636441231,
0.49018222093582153,
0.3076288402080536,
-0.2510558068752289,
0.25677669048309326,
-0.3245874047279358,
-0.0494808591902256,
-0.4837742745876312,
-0.1979145407676... |
https://github.com/huggingface/datasets/issues/993 | Problem downloading amazon_reviews_multi | Hi @hfawaz ! This is working fine for me. Is it a repeated occurence? Have you tried from the latest verion? | Thanks for adding the dataset.
After trying to load the dataset, I am getting the following error:
`ConnectionError: Couldn't reach https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json
`
I used the following code to load the dataset:
`load_dataset(
dataset_name,
... | 21 | Problem downloading amazon_reviews_multi
Thanks for adding the dataset.
After trying to load the dataset, I am getting the following error:
`ConnectionError: Couldn't reach https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json
`
I used the following code to load the dataset:
`l... | [
-0.3760577440261841,
-0.06871706247329712,
-0.15121592581272125,
0.3995555639266968,
0.2218291461467743,
0.08718260377645493,
0.22712312638759613,
0.0012163223000243306,
-0.19542576372623444,
-0.17005369067192078,
-0.13868173956871033,
0.0604703463613987,
0.1707974374294281,
0.033273577690... |
https://github.com/huggingface/datasets/issues/988 | making sure datasets are not loaded in memory and distributed training of them | My implementation of sharding per TPU core: https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/trainers/t5_trainer.py#L316
My implementation of the dataloader for this case: https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/tasks/tasks.... | Hi
I am dealing with large-scale datasets which I need to train in a distributed fashion. I used the shard function to divide the dataset across the cores, but without any sampler this does not work for distributed training and does not become any faster than 1 TPU core. 1) How can I make sure the data is not loaded in memory? 2) in cas... | 16 | making sure datasets are not loaded in memory and distributed training of them
Hi
I am dealing with large-scale datasets which I need to train in a distributed fashion. I used the shard function to divide the dataset across the cores, but without any sampler this does not work for distributed training and does not become any fast... | [
-0.2850978672504425,
-0.4469245672225952,
-0.10164061188697815,
0.2859600782394409,
0.08568144589662552,
-0.14910830557346344,
0.23839902877807617,
-0.14658623933792114,
0.024080155417323112,
0.4799138605594635,
0.018180396407842636,
-0.34881794452667236,
0.0343003049492836,
0.027509305626... |
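On the questions in the row above: Arrow-backed datasets are memory-mapped from disk rather than loaded into RAM, and per-process sharding can be sketched with `Dataset.shard`. Here `num_replicas`, `rank`, and the dataset choice are hypothetical stand-ins for whatever the training framework reports:

```python
from datasets import load_dataset

# Hypothetical per-process sharding; num_replicas/rank stand in for the
# values the distributed framework would report (e.g. 8 TPU cores, ordinal 0).
num_replicas, rank = 8, 0
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
shard = dataset.shard(num_shards=num_replicas, index=rank)
print(len(dataset), len(shard))  # each process sees ~1/num_replicas of the rows
```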
https://github.com/huggingface/datasets/issues/961 | sample multiple datasets | Here I share my current dataloader for multiple tasks: https://gist.github.com/rabeehkarimimahabadi/39f9444a4fb6f53dcc4fca5d73bf8195
I need to train my model in a distributed fashion with this dataloader, "MultiTasksataloader"; currently this does not work in a distributed setting.
To save on memory, I tried to use iterative d... | Hi
I am dealing with multiple datasets, and I need a dataloader over them with the condition that in each batch the samples come from one of the datasets. My main question is:
- I need a way to sample the datasets first with some weights, let's say 2x dataset1, 1x dataset2; could you point me how I c... | 109 | sample multiple datasets
Hi
I am dealing with multiple datasets, and I need a dataloader over them with the condition that in each batch the samples come from one of the datasets. My main question is:
- I need a way to sample the datasets first with some weights, let's say 2x dataset1, 1x dataset2... | [
-0.39164042472839355,
-0.06415629386901855,
-0.07151347398757935,
0.18023233115673065,
0.030831437557935715,
-0.19643978774547577,
0.3828044831752777,
-0.0021055317483842373,
0.3755803108215332,
0.30514296889305115,
0.021049683913588524,
0.16712519526481628,
-0.19928698241710663,
0.2304516... |
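A sketch of the 2x/1x weighting asked about in the row above, using `interleave_datasets` (available in later versions of the library); the GLUE splits are placeholders chosen only because interleaving requires identical features:

```python
from datasets import load_dataset, interleave_datasets

# Placeholder datasets with identical features; probabilities encode the
# "2x dataset1, 1x dataset2" weighting from the question above.
d1 = load_dataset("glue", "mrpc", split="train")
d2 = load_dataset("glue", "mrpc", split="validation")
mixed = interleave_datasets([d1, d2], probabilities=[2 / 3, 1 / 3], seed=42)
```

Note this mixes at the example level; if each batch must come entirely from one dataset, a custom batch sampler is still needed on top.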
https://github.com/huggingface/datasets/issues/937 | Local machine/cluster Beam Datasets example/tutorial | I tried to make it run once on the SparkRunner but it seems that this runner has some issues when it is run locally.
In my experience the DirectRunner is fine though, even if it's clearly not memory efficient.
Still, it would be awesome to make it work locally on a SparkRunner!
Did you manage to make your proce... | Hi,
I'm wondering if https://huggingface.co/docs/datasets/beam_dataset.html has a non-GCP or non-Dataflow example/tutorial? I tried to migrate it to run on DirectRunner and SparkRunner; however, there were way too many runtime errors that I had to fix during the process, and even so I wasn't able to get eit... | 62 | Local machine/cluster Beam Datasets example/tutorial
Hi,
I'm wondering if https://huggingface.co/docs/datasets/beam_dataset.html has a non-GCP or non-Dataflow example/tutorial? I tried to migrate it to run on DirectRunner and SparkRunner; however, there were way too many runtime errors that I had to fix d... | [
-0.30291640758514404,
-0.25978517532348633,
-0.004164572339504957,
0.08882158994674683,
0.06116870418190956,
-0.22946017980575562,
0.2730875313282013,
-0.12529593706130981,
0.1471407562494278,
0.2653595805168152,
0.1793321669101715,
0.29214242100715637,
-0.46417108178138733,
0.694486498832... |
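A minimal sketch of a local, non-GCP run as discussed in the row above, passing `beam_runner` to `load_dataset`; the small Wikipedia config is an assumption, picked because the DirectRunner is not memory efficient:

```python
from datasets import load_dataset

# Hypothetical local run with the DirectRunner discussed above; a small
# Wikipedia dump is assumed so that memory stays manageable.
dataset = load_dataset(
    "wikipedia",
    "20200501.sw",
    beam_runner="DirectRunner",
)
```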
https://github.com/huggingface/datasets/issues/919 | wrong length with datasets | Also, I cannot first convert it to torch format, since the huggingface seq2seq_trainer code processes the datasets afterwards, in the data collator function, to optimize them for TPUs. | Hi
I have an MRPC dataset which I convert to seq2seq format; it then has this format:
`Dataset(features: {'src_texts': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 10)
`
I feed it to a dataloader:
```
dataloader = DataLoader(
train_dataset,
... | 26 | wrong length with datasets
Hi
I have an MRPC dataset which I convert to seq2seq format; it then has this format:
`Dataset(features: {'src_texts': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 10)
`
I feed it to a dataloader:
```
dataloader = DataLoader(
... | [
-0.08989919722080231,
-0.3192080557346344,
-0.03239407762885094,
0.6070460081100464,
0.27657824754714966,
-0.033613186329603195,
0.479080468416214,
0.07050269842147827,
-0.5328690409660339,
0.20781123638153076,
0.19010338187217712,
-0.1201387420296669,
0.013927141204476357,
0.3443427085876... |
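A toy sketch of the setup in the row above, with hypothetical data standing in for the converted MRPC dataset; a plain `collate_fn` keeps the examples as Python objects so a trainer can process them later:

```python
from datasets import Dataset
from torch.utils.data import DataLoader

# Toy stand-in for the converted MRPC dataset described above.
train_dataset = Dataset.from_dict({
    "src_texts": [f"source {i}" for i in range(10)],
    "tgt_texts": [f"target {i}" for i in range(10)],
})

def collate(examples):
    # Group a list of example dicts into a dict of lists, leaving the
    # strings untouched so downstream code can tokenize them later.
    return {key: [ex[key] for ex in examples] for key in examples[0]}

dataloader = DataLoader(train_dataset, batch_size=4, collate_fn=collate)
print(len(train_dataset), len(dataloader))  # 10 examples -> 3 batches of <=4
```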
https://github.com/huggingface/datasets/issues/915 | Shall we change the hashing to encoding to reduce potential replicated cache files? | This is an interesting idea!
Do you have ideas about how to approach the decoding and the normalization? | Hi there. For now, we are using `xxhash` to hash the transformations to fingerprint and we will save a copy of the processed dataset to disk if there is a new hash value. However, there are some transformations that are idempotent or commutative to each other. I think that encoding the transformation chain as the finge... | 20 | Shall we change the hashing to encoding to reduce potential replicated cache files?
Hi there. For now, we are using `xxhash` to hash the transformations to fingerprint and we will save a copy of the processed dataset to disk if there is a new hash value. However, there are some transformations that are idempotent or ... | [
0.05798809975385666,
0.16470485925674438,
-0.020648302510380745,
-0.09999579191207886,
0.22929450869560242,
-0.05592697113752365,
0.21666912734508514,
0.4347034692764282,
-0.2473880797624588,
-0.16177046298980713,
0.009789476171135902,
-0.04963209852576256,
-0.1669541895389557,
0.289674639... |
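A toy illustration of the encoding idea in the row above, under the simplifying assumptions that every op is idempotent and serializable to a string; the op names are purely hypothetical:

```python
import xxhash

# Toy normalization before hashing: immediate repeats are dropped, so an
# idempotent op applied twice fingerprints the same as applying it once.
def fingerprint(chain):
    normalized = []
    for op in chain:
        if not normalized or normalized[-1] != op:
            normalized.append(op)
    return xxhash.xxh64(",".join(normalized)).hexdigest()

assert fingerprint(["lowercase", "lowercase"]) == fingerprint(["lowercase"])
```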
https://github.com/huggingface/datasets/issues/915 | Shall we change the hashing to encoding to reduce potential replicated cache files? | @lhoestq
I think we first need to save the transformation chain to a list in `self._fingerprint`. Then we can
- decode all the currently saved datasets to see if there is already one that is equivalent to the transformation we need now.
- or, calculate all the possible hash values of the current chain for comparison so... | Hi there. For now, we are using `xxhash` to hash the transformations to fingerprint and we will save a copy of the processed dataset to disk if there is a new hash value. However, there are some transformations that are idempotent or commutative to each other. I think that encoding the transformation chain as the finge... | 191 | Shall we change the hashing to encoding to reduce potential replicated cache files?
Hi there. For now, we are using `xxhash` to hash the transformations to fingerprint and we will save a copy of the processed dataset to disk if there is a new hash value. However, there are some transformations that are idempotent or ... | [
0.1285303682088852,
0.08636417984962463,
-0.04908743128180504,
-0.07721101492643356,
0.1253056675195694,
-0.05366494506597519,
0.22149795293807983,
0.4482250511646271,
-0.12496168166399002,
-0.04003752022981644,
-0.1387322098016739,
0.03347310796380043,
-0.12813976407051086,
0.359291017055... |
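Extending the toy sketch above to commutative transformations: ops declared commutative are sorted within a run before hashing, so their relative order stops affecting the fingerprint. The op names are again hypothetical:

```python
import xxhash

# Runs of commutative ops are sorted before hashing; everything else
# keeps its original order. Purely illustrative, not the library's API.
COMMUTATIVE = {"drop_column:a", "drop_column:b"}

def fingerprint(chain):
    normalized, run = [], []
    for op in chain + [None]:  # None is a sentinel that flushes the last run
        if op in COMMUTATIVE:
            run.append(op)
        else:
            normalized.extend(sorted(run))
            run = []
            if op is not None:
                normalized.append(op)
    return xxhash.xxh64(",".join(normalized)).hexdigest()

assert fingerprint(["drop_column:a", "drop_column:b"]) == fingerprint(
    ["drop_column:b", "drop_column:a"]
)
```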
https://github.com/huggingface/datasets/issues/897 | Dataset viewer issues | Thanks for reporting!
cc @srush for the empty feature list issue and the encoding issue
cc @julien-c maybe we can update the URL and just have a redirection from the old URL to the new one? | I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though:
- the URL is still under `nlp`, perhaps an alias for `datasets` can be made
- when I remove a **feature** (and the feature list is empty), I get an error. T... | 38 | Dataset viewer issues
I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though:
- the URL is still under `nlp`, perhaps an alias for `datasets` can be made
- when I remove a **feature** (and the feature list is e... | [
-0.2186933010816574,
0.2789268493652344,
-0.026432735845446587,
0.28165122866630554,
-0.002390423556789756,
0.11150456964969635,
0.2981255054473877,
0.29886332154273987,
-0.1956239491701126,
0.06475906819105148,
-0.007638650014996529,
0.26640623807907104,
-0.39500224590301514,
0.0988745763... |
https://github.com/huggingface/datasets/issues/897 | Dataset viewer issues | OK, I redirected on our side to a new URL. ⚠️ @srush: if you also update the Streamlit config to `/datasets/viewer`, let me know, because I'll need to change our nginx config at the same time. | I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though:
- the URL is still under `nlp`, perhaps an alias for `datasets` can be made
- when I remove a **feature** (and the feature list is empty), I get an error. T... | 36 | Dataset viewer issues
I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though:
- the URL is still under `nlp`, perhaps an alias for `datasets` can be made
- when I remove a **feature** (and the feature list is e... | [
-0.2186933010816574,
0.2789268493652344,
-0.026432735845446587,
0.28165122866630554,
-0.002390423556789756,
0.11150456964969635,
0.2981255054473877,
0.29886332154273987,
-0.1956239491701126,
0.06475906819105148,
-0.007638650014996529,
0.26640623807907104,
-0.39500224590301514,
0.0988745763... |