html_url (string, 48–51 chars) | title (string, 5–268 chars) | comments (string, 63–51.8k chars) | body (string, 0–36.2k chars, nullable ⌀) | comment_length (int64, 16–1.52k) | text (string, 164–54.1k chars) | embeddings (list) |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/2860 | Cannot download TOTTO dataset | Hello @mrm8488, thanks for reporting.
Apparently, the data source host changed their URL one week ago: https://github.com/google-research-datasets/ToTTo/commit/cebeb430ec2a97747e704d16a9354f7d9073ff8f
I'm fixing it. | Error: Couldn't find file at https://storage.googleapis.com/totto/totto_data.zip
`datasets version: 1.11.0`
# How to reproduce:
```py
from datasets import load_dataset
dataset = load_dataset('totto')
```
| 20 | Cannot download TOTTO dataset
Error: Couldn't find file at https://storage.googleapis.com/totto/totto_data.zip
`datasets version: 1.11.0`
# How to reproduce:
```py
from datasets import load_dataset
dataset = load_dataset('totto')
```
Hello @mrm8488, thanks for reporting.
Apparently, the data sourc... | [
-0.2944106758,
0.390093267,
-0.1317224354,
0.0641969666,
0.4353683889,
0.1393281817,
0.1679637283,
0.587985158,
-0.0261457115,
0.2343415916,
-0.1947767287,
-0.0473089851,
0.0594172031,
0.3187091053,
0.0643601269,
-0.3084109724,
0.0652797148,
-0.0130064292,
-0.1395521462,
0.0308... |
https://github.com/huggingface/datasets/issues/2842 | always requiring the username in the dataset name when there is one | From what I can understand, you want the saved arrow file directory to include the username as well, instead of just the dataset name, if it was downloaded with the user prefix? | Another person and I have now been bitten by the `datasets` library's non-strictness about requiring a dataset creator's username when one is due.
So both of us started with `stas/openwebtext-10k`, somewhere along the line lost `stas/` and continued using `openwebtext-10k`, and it all was good until we published the software an... | 30 | always requiring the username in the dataset name when there is one
Another person and I have now been bitten by the `datasets` library's non-strictness about requiring a dataset creator's username when one is due.
So both of us started with `stas/openwebtext-10k`, somewhere along the line lost `stas/` and continued using `o... | [
-0.1190379411,
0.1679283381,
-0.047179047,
0.0831829533,
0.0581347719,
-0.1882434636,
0.2695405185,
0.090119943,
0.0316710621,
0.3004505634,
0.157178849,
-0.0406883359,
-0.0339303538,
0.0137739284,
-0.2211272866,
0.0345356502,
-0.0375725999,
0.4607705474,
-0.1611818969,
-0.0188... |
https://github.com/huggingface/datasets/issues/2842 | always requiring the username in the dataset name when there is one | I don't think the user cares of how this is done, but the 2nd command should fail, IMHO, as its dataset name is invalid:
```
# first run
python -c "from datasets import load_dataset; load_dataset('stas/openwebtext-10k')"
# now run immediately
python -c "from datasets import load_dataset; load_dataset('openwebtext-... | Another person and I have now been bitten by the `datasets` library's non-strictness about requiring a dataset creator's username when one is due.
So both of us started with `stas/openwebtext-10k`, somewhere along the line lost `stas/` and continued using `openwebtext-10k`, and it all was good until we published the software an... | 115 | always requiring the username in the dataset name when there is one
Another person and I have now been bitten by the `datasets` library's non-strictness about requiring a dataset creator's username when one is due.
So both of us started with `stas/openwebtext-10k`, somewhere along the line lost `stas/` and continued using `o... | [
-0.0112161068,
0.1447481215,
-0.0393612012,
0.0958191305,
0.0011398974,
-0.21814017,
0.4295823574,
0.1111667305,
0.106905356,
0.3369615972,
0.1158717498,
-0.0281226411,
0.0120588709,
0.0387605019,
-0.1626236737,
0.1540511698,
-0.06216361,
0.3452987373,
-0.1390583366,
0.00825685... |
https://github.com/huggingface/datasets/issues/2842 | always requiring the username in the dataset name when there is one | This has been fixed now, and we'll do a new release of the library today.
Now the stas/openwebtext-10k dataset is cached at `.cache/huggingface/datasets/stas___openwebtext10k` and openwebtext-10k would be at `.cache/huggingface/datasets/openwebtext10k`. Since they are different, the cache won't fall back on loading ... | Another person and I have now been bitten by the `datasets` library's non-strictness about requiring a dataset creator's username when one is due.
So both of us started with `stas/openwebtext-10k`, somewhere along the line lost `stas/` and continued using `openwebtext-10k`, and it all was good until we published the software an... | 65 | always requiring the username in the dataset name when there is one
Another person and I have now been bitten by the `datasets` library's non-strictness about requiring a dataset creator's username when one is due.
So both of us started with `stas/openwebtext-10k`, somewhere along the line lost `stas/` and continued using `o... | [
0.0375942104,
0.0824941769,
-0.0623897277,
0.1891893297,
0.0797620192,
-0.1698654294,
0.3240479231,
0.1753910184,
0.098809801,
0.2571045458,
-0.0334546044,
-0.0057137166,
0.0868410617,
0.0152155412,
-0.0175971426,
0.1378325522,
-0.0254064966,
0.3296597302,
-0.1355325878,
0.0548... |
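The cache-directory naming fix quoted in the comment above can be sketched as a tiny helper. This is a hypothetical illustration of the described scheme ('/' becomes '___' and '-' is dropped), not the actual `datasets` internals:

```python
def cache_dir_name(dataset_name: str) -> str:
    """Sketch of the naming described above: the namespace separator '/'
    becomes '___' and '-' is dropped, so a namespaced name and its bare
    counterpart can no longer collide in the cache."""
    return dataset_name.replace("/", "___").replace("-", "")

print(cache_dir_name("stas/openwebtext-10k"))  # stas___openwebtext10k
print(cache_dir_name("openwebtext-10k"))       # openwebtext10k
```

Because the two results differ, loading the bare name can no longer silently pick up the cache of the namespaced dataset.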
https://github.com/huggingface/datasets/issues/2841 | Adding GLUECoS Hinglish and Spanglish code-switching benchmark | Hi @yjernite, I am interested in adding this dataset.
In the repo they have also added a code mixed MT task from English to Hinglish [here](https://github.com/microsoft/GLUECoS#code-mixed-machine-translation-task). I think this could be a good dataset addition in itself and then I can add the rest of the GLUECoS tasks... | ## Adding a Dataset
- **Name:** GLUECoS
- **Description:** a Microsoft Benchmark to evaluate code-switching for only two language pairs but a variety of tasks
- **Paper:** https://aclanthology.org/2020.acl-main.329/
- **Data:** https://github.com/microsoft/GLUECoS
- **Motivation:** We currently only have [one othe... | 55 | Adding GLUECoS Hinglish and Spanglish code-switching benchmark
## Adding a Dataset
- **Name:** GLUECoS
- **Description:** a Microsoft Benchmark to evaluate code-switching for only two language pairs but a variety of tasks
- **Paper:** https://aclanthology.org/2020.acl-main.329/
- **Data:** https://github.com/micr... | [
-0.4562040567,
-0.0375739373,
-0.1403834522,
0.0984708145,
-0.0866129771,
0.1747598946,
0.103435643,
0.1597848237,
0.1243996322,
-0.1275312901,
-0.3663086295,
0.0582287125,
-0.1664722115,
0.4522483349,
0.4579001665,
-0.0750973746,
0.1682833284,
-0.1863520741,
-0.2975812554,
-0.... |
https://github.com/huggingface/datasets/issues/2839 | OpenWebText: NonMatchingSplitsSizesError | I just regenerated the verification metadata and noticed that nothing changed: the data file is fine (the checksum didn't change), and the number of examples is still 8013769. I'm not sure how you managed to get 7982430 examples.
Can you try to delete your cache (by default at `~/.cache/huggingface/datasets`) and try ... | ## Describe the bug
When downloading `openwebtext`, I'm getting:
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', num_bytes=39611023912, num_examples=7982430... | 62 | OpenWebText: NonMatchingSplitsSizesError
## Describe the bug
When downloading `openwebtext`, I'm getting:
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', ... | [
-0.232565403,
-0.1025963202,
0.0594485067,
0.4710071385,
0.0991449878,
0.0983287022,
0.0017990913,
0.5494819283,
-0.0201671273,
0.1316695958,
-0.1176460311,
0.0032898202,
-0.1183153838,
0.0913083553,
-0.0851510316,
0.1036114469,
-0.1839792281,
0.1432444751,
-0.2236062586,
0.083... |
https://github.com/huggingface/datasets/issues/2839 | OpenWebText: NonMatchingSplitsSizesError | I'll try without deleting the whole cache (we have large datasets already stored). I was under the impression that `download_mode="force_redownload"` would bypass the cache.
Sorry, the platform should be Linux (Red Hat version 8.1). | ## Describe the bug
When downloading `openwebtext`, I'm getting:
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', num_bytes=39611023912, num_examples=7982430... | 31 | OpenWebText: NonMatchingSplitsSizesError
## Describe the bug
When downloading `openwebtext`, I'm getting:
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', ... | [
-0.3607827425,
-0.2398548871,
0.0516971909,
0.4607084095,
-0.0456267223,
0.1327422857,
0.0660521686,
0.6086550355,
0.1439710706,
0.137479037,
-0.2283657789,
0.1216923892,
-0.0511875302,
0.2408794761,
0.0173507649,
0.0554529913,
-0.1288680583,
0.2090866268,
0.0327518806,
0.09639... |
https://github.com/huggingface/datasets/issues/2839 | OpenWebText: NonMatchingSplitsSizesError | Sorry I haven't had time to work on this. I'll close and re-open if I can't figure out why I'm having this issue. Thanks for taking a look ! | ## Describe the bug
When downloading `openwebtext`, I'm getting:
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', num_bytes=39611023912, num_examples=7982430... | 29 | OpenWebText: NonMatchingSplitsSizesError
## Describe the bug
When downloading `openwebtext`, I'm getting:
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', ... | [
-0.3353739381,
-0.1884129345,
0.0487628244,
0.432811141,
0.0591350868,
0.111541383,
-0.0852616429,
0.5720366836,
-0.0169445891,
0.1263843775,
-0.1240674108,
0.0030229676,
-0.0780211166,
0.084197171,
-0.1284151375,
0.0285330918,
-0.1261274219,
0.187572971,
-0.2567580938,
0.10378... |
https://github.com/huggingface/datasets/issues/2832 | Logging levels not taken into account | I just took a look at all the outputs produced by `datasets` using the different log levels.
As far as I can tell, using `datasets==1.17.0` the overall issue seems to be fixed.
However, I noticed that there is one tqdm-based progress indicator appearing on STDERR that I simply cannot suppress.
```
Resolving data... | ## Describe the bug
The `logging` module isn't working as intended with respect to the verbosity levels that are set.
## Steps to reproduce the bug
```python
from datasets import logging
logging.set_verbosity_debug()
logger = logging.get_logger()
logger.error("ERROR")
logger.warning("WARNING")
logger.info("INFO")
logge... | 159 | Logging levels not taken into account
## Describe the bug
The `logging` module isn't working as intended with respect to the verbosity levels that are set.
## Steps to reproduce the bug
```python
from datasets import logging
logging.set_verbosity_debug()
logger = logging.get_logger()
logger.error("ERROR")
logger.warnin... | [
-0.1819091141,
-0.0898382887,
0.0939668864,
0.1882541627,
0.3095968366,
-0.2192624807,
0.4732371569,
0.1431783736,
-0.3718084991,
0.0353801474,
-0.0058935136,
0.4615266323,
-0.4104681909,
0.0659599826,
-0.31302315,
-0.0980830789,
-0.0660252795,
-0.139336288,
-0.5450296998,
0.04... |
https://github.com/huggingface/datasets/issues/2832 | Logging levels not taken into account | Hi! This should disable the tqdm output:
```python
import datasets
datasets.set_progress_bar_enabled(False)
```
On a side note: I believe the issue with logging (not tqdm) is still relevant on master. | ## Describe the bug
The `logging` module isn't working as intended with respect to the verbosity levels that are set.
## Steps to reproduce the bug
```python
from datasets import logging
logging.set_verbosity_debug()
logger = logging.get_logger()
logger.error("ERROR")
logger.warning("WARNING")
logger.info("INFO")
logge... | 29 | Logging levels not taken into account
## Describe the bug
The `logging` module isn't working as intended with respect to the verbosity levels that are set.
## Steps to reproduce the bug
```python
from datasets import logging
logging.set_verbosity_debug()
logger = logging.get_logger()
logger.error("ERROR")
logger.warnin... | [
-0.1961633563,
-0.1857534349,
0.1107906997,
0.1480395347,
0.3297184408,
-0.0822730809,
0.5039986372,
0.1595654935,
-0.3525215089,
0.0196760632,
-0.0581963211,
0.399209708,
-0.3322144449,
0.0895238668,
-0.2066877037,
-0.0721537992,
-0.058295887,
-0.1462147683,
-0.7243365049,
0.0... |
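The level-filtering behaviour the issue expects can be demonstrated with the standard-library `logging` module, which `datasets.logging` builds on. This is a self-contained sketch of the semantics, not the `datasets` implementation itself:

```python
import logging

captured = []

class ListHandler(logging.Handler):
    """Collect the level names of every record that passes the filter."""
    def emit(self, record):
        captured.append(record.levelname)

logger = logging.getLogger("verbosity-demo")
logger.addHandler(ListHandler())
logger.propagate = False  # keep the demo out of the root handlers

logger.setLevel(logging.DEBUG)   # analogous to set_verbosity_debug()
logger.error("ERROR")
logger.warning("WARNING")
logger.info("INFO")
logger.debug("DEBUG")
at_debug = list(captured)
print(at_debug)   # ['ERROR', 'WARNING', 'INFO', 'DEBUG']

captured.clear()
logger.setLevel(logging.ERROR)   # analogous to set_verbosity_error()
logger.error("ERROR")
logger.warning("WARNING")
logger.info("INFO")
logger.debug("DEBUG")
at_error = list(captured)
print(at_error)   # ['ERROR'] — lower levels are filtered out
```

At the debug level everything passes through; at the error level only errors do — which is the behaviour the bug report says was not happening.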
https://github.com/huggingface/datasets/issues/2831 | ArrowInvalid when mapping dataset with missing values | Hi ! It fails because of the feature type inference.
Because the first 1000 examples all have null values in the "match" field, it infers that the type of this field is the `null` type before writing the data to disk. But as soon as it tries to map an example with a non-null "match" field, it fails.
To fix... | ## Describe the bug
I encountered an `ArrowInvalid` when mapping dataset with missing values.
Here are the files for a minimal example. The exception is only thrown when the first line in the csv has a missing value (if you move the last line to the top it isn't thrown).
[data_small.csv](https://github.com/huggingf... | 134 | ArrowInvalid when mapping dataset with missing values
## Describe the bug
I encountered an `ArrowInvalid` when mapping dataset with missing values.
Here are the files for a minimal example. The exception is only thrown when the first line in the csv has a missing value (if you move the last line to the top it isn'... | [
-0.03272672,
-0.153125599,
0.1040896326,
0.2286235839,
0.0726002157,
0.1366069913,
0.3329793215,
0.4905760586,
-0.0207699779,
0.2300734818,
0.2338508219,
0.4390176833,
0.0271517597,
-0.1494163424,
-0.1128670499,
-0.0434915684,
0.0362380184,
0.1175364777,
-0.0013273309,
-0.04473... |
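The failure mode described in the comment above — the schema being locked in from the first batch of examples — can be sketched in plain Python. This is a hypothetical illustration; Arrow's real type inference is more involved:

```python
def infer_field_type(batch):
    """Infer a column's type from a single batch, the way a writer that only
    sees the first chunk would: an all-null batch yields the 'null' type."""
    non_null = [v for v in batch if v is not None]
    return "null" if not non_null else type(non_null[0]).__name__

# The first 1000 examples all have a missing "match" value, so the field's
# type is fixed as 'null' before any data is written to disk.
first_batch = [None] * 1000
locked_in = infer_field_type(first_batch)
print(locked_in)  # null

# A later example with a real value no longer fits the locked-in type; this
# mismatch is what surfaces as ArrowInvalid when .map() reaches that row.
later_batch = [None, "yes", None]
print(infer_field_type(later_batch) != locked_in)  # True
```

Moving a non-null row to the top of the CSV changes what the first batch infers, which is why the reporter saw the exception disappear in that case.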
https://github.com/huggingface/datasets/issues/2826 | Add a Text Classification dataset: KanHope | Hi ! In your script it looks like you're trying to load the dataset `bn_hate_speech`, not KanHope.
Moreover the error `KeyError: ' '` means that you have a feature of type ClassLabel, but for a certain example of the dataset, it looks like the label is empty (it's just a string with a space). Can you make sure that... | ## Adding a Dataset
- **Name:** *KanHope*
- **Description:** *A code-mixed English-Kannada dataset for Hope speech detection*
- **Paper:** *https://arxiv.org/abs/2108.04616* (I am the author of the paper)
- **Author:** *[AdeepH](https://github.com/adeepH)*
- **Data:** *https://github.com/adeepH/KanHope/tree/main/d... | 75 | Add a Text Classification dataset: KanHope
## Adding a Dataset
- **Name:** *KanHope*
- **Description:** *A code-mixed English-Kannada dataset for Hope speech detection*
- **Paper:** *https://arxiv.org/abs/2108.04616* (I am the author of the paper)
- **Author:** *[AdeepH](https://github.com/adeepH)*
- **Data:** *... | [
-0.2438095361,
-0.0837457851,
-0.1076410487,
0.168150574,
0.2711260915,
0.0154425288,
0.1767384857,
0.4127869606,
0.146568045,
0.0868590027,
-0.1648918986,
-0.0779900402,
-0.2590736151,
0.2801797092,
-0.2317630202,
-0.2915209532,
-0.0884709507,
0.1522611678,
-0.1177388802,
-0.0... |
https://github.com/huggingface/datasets/issues/2825 | The datasets.map function does not load cached dataset after moving python script | This also happened to me on Colab.
Details:
I ran the `run_mlm.py` in two different notebooks.
In the first notebook, I do tokenization since I can get 4 CPU cores without any GPUs, and save the cache into a folder which I copy to drive.
In the second notebook, I copy the cache folder from drive and re-run the run... | ## Describe the bug
The datasets.map function caches the processed data to a certain directory. When the map function is called again with exactly the same parameters, the cached data are supposed to be reloaded instead of re-processed. However, it sometimes doesn't reuse the cached data. I use the common data pro... | 85 | The datasets.map function does not load cached dataset after moving python script
## Describe the bug
The datasets.map function caches the processed data to a certain directory. When the map function is called again with exactly the same parameters, the cached data are supposed to be reloaded instead of re-pr... | [
-0.1612100005,
0.0666725561,
0.1235987544,
0.142117694,
0.1211063862,
-0.092572242,
0.418130815,
0.1090158522,
0.1731481105,
-0.0780498907,
0.1154462993,
0.6156932712,
-0.0211399812,
-0.1331948042,
0.1611655205,
0.2321514338,
0.175963521,
0.1501923352,
0.0869101211,
-0.18116016... |
https://github.com/huggingface/datasets/issues/2825 | The datasets.map function does not load cached dataset after moving python script | #2854 fixed the issue :)
We'll do a new release of `datasets` soon to make the fix available.
In the meantime, feel free to try it out by installing `datasets` from source
If you have other issues or any question, feel free to re-open the issue :) | ## Describe the bug
The datasets.map function caches the processed data to a certain directory. When the map function is called again with exactly the same parameters, the cached data are supposed to be reloaded instead of re-processed. However, it sometimes doesn't reuse the cached data. I use the common data pro... | 47 | The datasets.map function does not load cached dataset after moving python script
## Describe the bug
The datasets.map function caches the processed data to a certain directory. When the map function is called again with exactly the same parameters, the cached data are supposed to be reloaded instead of re-pr... | [
-0.1273976862,
0.0724145621,
0.091458641,
0.0095803542,
0.1599635333,
-0.0660089105,
0.4047463536,
0.2460485399,
0.2259852588,
-0.0873799995,
0.1144408807,
0.5168569684,
-0.0583695211,
-0.1686526686,
0.1454254091,
0.1743040234,
0.2147075236,
0.1613189131,
0.015196085,
-0.174861... |
https://github.com/huggingface/datasets/issues/2823 | HF_DATASETS_CACHE variable in Windows | Agh - I'm a muppet. No quote marks are needed.
set HF_DATASETS_CACHE = C:\Datasets
works as intended. | I can't seem to use a custom cache directory on Windows. I have tried:
set HF_DATASETS_CACHE = "C:\Datasets"
set HF_DATASETS_CACHE = "C:/Datasets"
set HF_DATASETS_CACHE = "C:\\Datasets"
set HF_DATASETS_CACHE = "r'C:\Datasets'"
set HF_DATASETS_CACHE = "\Datasets"
set HF_DATASETS_CACHE = "/Datasets"
In each in... | 17 | HF_DATASETS_CACHE variable in Windows
I can't seem to use a custom cache directory on Windows. I have tried:
set HF_DATASETS_CACHE = "C:\Datasets"
set HF_DATASETS_CACHE = "C:/Datasets"
set HF_DATASETS_CACHE = "C:\\Datasets"
set HF_DATASETS_CACHE = "r'C:\Datasets'"
set HF_DATASETS_CACHE = "\Datasets"
set HF_DA... | [
-0.2557517588,
0.4681490064,
-0.0086678015,
0.0938129723,
-0.0932578966,
0.2809452116,
0.359179467,
0.0130967852,
0.5492928028,
0.1723243147,
-0.1086300313,
-0.3207072318,
0.0269633159,
-0.1362948865,
-0.0884397104,
0.1600555778,
-0.0124637978,
0.2051331699,
0.3308283091,
0.105... |
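One plausible reason the quoted `set` commands above failed: `cmd.exe` treats everything after `=` literally, so the quote characters (and surrounding spaces — the space before `=` even becomes part of the variable name) end up inside the stored value. A small stdlib sketch of the difference; the paths here are examples only:

```python
import os

# Roughly what `set HF_DATASETS_CACHE = "C:\Datasets"` stores under cmd.exe:
# the leading space and the quote characters become part of the value itself.
os.environ["HF_DATASETS_CACHE"] = ' "C:\\Datasets"'
raw = os.environ["HF_DATASETS_CACHE"]
print(repr(raw))  # the quotes are inside the value, so it is not a valid path

# Unquoted, as in the resolution above, the value is a clean path string.
os.environ["HF_DATASETS_CACHE"] = "C:\\Datasets"
clean = os.environ["HF_DATASETS_CACHE"]
print(repr(clean))
```

This is why dropping the quote marks made the variable work as intended.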
https://github.com/huggingface/datasets/issues/2821 | Cannot load linnaeus dataset | Thanks for reporting ! #2852 fixed this error
We'll do a new release of `datasets` soon :) | ## Describe the bug
The [linnaeus](https://huggingface.co/datasets/linnaeus) dataset cannot be loaded. To reproduce:
```
from datasets import load_dataset
datasets = load_dataset("linnaeus")
```
This results in:
```
Downloading and preparing dataset linnaeus/linnaeus (download: 17.36 MiB, generated: 8.74 MiB,... | 17 | Cannot load linnaeus dataset
## Describe the bug
The [linnaeus](https://huggingface.co/datasets/linnaeus) dataset cannot be loaded. To reproduce:
```
from datasets import load_dataset
datasets = load_dataset("linnaeus")
```
This results in:
```
Downloading and preparing dataset linnaeus/linnaeus (download: ... | [
-0.1098579168,
-0.1431658566,
-0.0036668191,
0.5683763027,
0.2294620275,
-0.0716871843,
0.0764337406,
0.3127034307,
0.1373579502,
0.0487763248,
-0.3296413124,
0.0350636207,
-0.0180284195,
-0.1834695041,
0.1603858322,
-0.2352235317,
-0.1053616777,
-0.0622279905,
-0.4106490314,
0... |
https://github.com/huggingface/datasets/issues/2820 | Downloading “reddit” dataset keeps timing out. | ```
Using custom data configuration default
Downloading and preparing dataset reddit/default (download: 2.93 GiB, generated: 17.64 GiB, post-processed: Unknown size, total: 20.57 GiB) to /Volumes/My Passport for Mac/og-chat-data/reddit/default/1.0.0/98ba5abea674d3178f7588aa6518a5510dc0c6fa8176d9653a3546d5afcb3969...
... | ## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_d... | 646 | Downloading “reddit” dataset keeps timing out.
## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduc... | [
-0.3638956547,
-0.142130658,
-0.0797188804,
0.1200469062,
0.1668418348,
0.1872060895,
0.0386047252,
0.2438803762,
-0.0571570881,
-0.1331620365,
-0.0627649203,
0.1441399455,
0.348828733,
-0.0019752285,
-0.0935339332,
0.2313061208,
0.0492557846,
-0.1813281327,
-0.0657733604,
0.05... |
https://github.com/huggingface/datasets/issues/2820 | Downloading “reddit” dataset keeps timing out. | It also doesn't seem to be "smart caching" and I received an error about a file not being found... | ## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_d... | 19 | Downloading “reddit” dataset keeps timing out.
## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduc... | [
-0.3514557779,
-0.1771148592,
-0.0796714202,
0.1682870537,
0.1475949287,
0.2232837975,
0.0226032771,
0.2211401314,
0.0123540033,
-0.189738065,
-0.0006051387,
0.07988175,
0.2927660048,
-0.0234992765,
0.0045965817,
0.260556519,
0.1746073961,
-0.1837053299,
0.0295602456,
0.0212488... |
https://github.com/huggingface/datasets/issues/2820 | Downloading “reddit” dataset keeps timing out. | To be clear, the error I get when I try to "re-instantiate" the download after failure is:
```
OSError: Cannot find data file.
Original error:
[Errno 20] Not a directory: <HOME>/.cache/huggingface/datasets/downloads/1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c/corpus-webis-tldr-17.json'
``` | ## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_d... | 32 | Downloading “reddit” dataset keeps timing out.
## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduc... | [
-0.3457064033,
-0.1134912223,
-0.0889912322,
0.129444018,
0.147492066,
0.2143733799,
0.048017364,
0.1998877525,
0.0338551588,
-0.0955311134,
-0.0302022602,
0.1168749481,
0.2437811345,
-0.0771486089,
-0.0491199605,
0.2314233035,
0.0577857271,
-0.1622307003,
-0.0559196025,
0.0735... |
https://github.com/huggingface/datasets/issues/2820 | Downloading “reddit” dataset keeps timing out. | Hi ! Since https://github.com/huggingface/datasets/pull/2803 we've changed the time out from 10sec to 100sec.
This should prevent the `ReadTimeoutError`. Feel free to try it out by installing `datasets` from source
```
pip install git+https://github.com/huggingface/datasets.git
```
When re-running your code you ... | ## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_d... | 111 | Downloading “reddit” dataset keeps timing out.
## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduc... | [
-0.4538249671,
-0.0359521843,
-0.1072576791,
0.2148067951,
0.2254009396,
0.0983383209,
0.1591715962,
0.2268798649,
-0.0526377186,
0.0000228251,
-0.1627775133,
0.1547372192,
0.1955918819,
-0.1334097087,
-0.0972537696,
0.0610445626,
-0.0565708056,
-0.067760244,
-0.1615537107,
0.0... |
https://github.com/huggingface/datasets/issues/2820 | Downloading “reddit” dataset keeps timing out. | @lhoestq thanks for the update. The file specified by the OSError, i.e.
```
1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c/corpus-webis-tldr-17.json
```
was not actually in that directory, so I can't delete it. | ## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_d... | 26 | Downloading “reddit” dataset keeps timing out.
## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduc... | [
-0.3095023334,
-0.1504021585,
-0.1248522922,
0.1671922356,
0.0626937002,
0.2322172672,
0.0452463068,
0.2481846809,
0.0208498687,
-0.0671561658,
-0.0702518076,
0.1633017659,
0.2610353827,
-0.1121932417,
-0.1331467181,
0.2049565464,
0.0665127486,
-0.2218915671,
-0.0956013799,
0.1... |
https://github.com/huggingface/datasets/issues/2820 | Downloading “reddit” dataset keeps timing out. | Oh, then could you try deleting the parent directory `1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c` instead ?
This way the download manager will know that it has to uncompress the data again | ## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_d... | 27 | Downloading “reddit” dataset keeps timing out.
## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduc... | [
-0.3229551315,
-0.1548750252,
-0.0958517119,
0.2122766376,
0.1727839559,
0.2089087963,
0.0311658904,
0.2200948894,
-0.0430056825,
-0.0564695187,
-0.072283946,
0.1025688946,
0.2630369067,
-0.0284359157,
-0.1015663967,
0.2591444254,
0.0751859546,
-0.1532899439,
-0.1340025663,
0.0... |
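The workaround suggested above — deleting the per-download hash directory so the archive is re-extracted on the next run — can be sketched with the standard library. The cache layout shown is an assumption based on the paths quoted in this thread:

```python
import shutil
from pathlib import Path

def purge_download(cache_root: str, hash_prefix: str) -> bool:
    """Remove one download's hash-named directory so the download manager
    re-extracts (or re-downloads) it next time. Returns True if deleted."""
    target = Path(cache_root).expanduser() / "downloads" / hash_prefix
    if target.is_dir():
        shutil.rmtree(target)
        return True
    return False
```

For the error above, the directory to remove is the long hash directory under `downloads` (the `1ec12301...` one), not the JSON file inside it.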
https://github.com/huggingface/datasets/issues/2820 | Downloading “reddit” dataset keeps timing out. | It seems to have worked. It only took about 20 min! I think the extra timeout length did the trick! One thing is that it downloaded a total of 41 GB instead of 20 GB, but at least it finished. | ## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduce the bug
```python
from datasets import load_d... | 37 | Downloading “reddit” dataset keeps timing out.
## Describe the bug
A clear and concise description of what the bug is.
Every time I try to download the reddit dataset, it times out before finishing and I have to try again.
There is some timeout error that I will post once it happens again.
## Steps to reproduc... | [
-0.3443330824,
-0.1409980804,
-0.0883192196,
0.1077194437,
0.1344687045,
0.1419221759,
0.0290933214,
0.2305128276,
-0.0500459597,
-0.1488424242,
-0.0651005954,
0.1319426894,
0.2865749002,
0.0448666178,
-0.0551886819,
0.2586468458,
0.0915640444,
-0.2118437886,
-0.0709679648,
0.0... |
https://github.com/huggingface/datasets/issues/2818 | cannot load data from my local path | Hi ! The `data_files` parameter must be a string, a list/tuple, or a Python dict.
Can you check the type of your `config.train_path`, please? Or use `data_files=str(config.train_path)`? | ## Describe the bug
I just want to load data directly from my local path, but I found a bug. I compared it with pandas to prove that my local path is real.
here is my code
```python3
# print my local path
print(config.train_path)
# read data and print data length
tarin=pd.read_csv(config.train_path)
print(len(tari... | 29 | cannot load data from my local path
## Describe the bug
I just want to load data directly from my local path, but I found a bug. I compared it with pandas to prove that my local path is real.
here is my code
```python3
# print my local path
print(config.train_path)
# read data and print data length
tarin=pd.read... | [
-0.0574387796,
-0.0158847775,
0.0742202774,
0.5506058335,
0.3027434647,
-0.1935665458,
0.4147057831,
0.0769138411,
0.0825466439,
0.0638930872,
-0.0089266161,
0.5863605142,
0.012284115,
-0.0284526348,
0.1138427183,
-0.0146788508,
0.1873943359,
0.0855283961,
-0.1241162866,
-0.256... |
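The fix suggested in the comment above can be sketched as follows, assuming `config.train_path` is a `pathlib.Path`-like object rather than the `str` that `data_files` expects; the names here are hypothetical stand-ins:

```python
from pathlib import Path

# Hypothetical stand-in for config.train_path: the comment above suspects it
# is a Path (or similar) object rather than a plain string.
train_path = Path("data") / "train.csv"

data_files = str(train_path)  # the conversion suggested in the comment
print(type(data_files).__name__)  # str
# load_dataset("csv", data_files=data_files)  # would now receive a valid type
```

pandas accepts path-like objects directly, which is why `pd.read_csv(config.train_path)` worked while `load_dataset` did not.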
https://github.com/huggingface/datasets/issues/2813 | Remove compression from xopen | After discussing with @lhoestq, a reasonable alternative:
- `download_manager.extract(urlpath)` adds prefixes to `urlpath` in the same way as `fsspec` does for protocols, but we implement custom prefixes for all compression formats:
`bz2::http://domain.org/filename.bz2`
- `xopen` parses the `urlpath` and extracts... | We implemented support for streaming with 2 requirements:
- transparent use for the end user: just needs to pass the parameter `streaming=True`
- no additional work for the contributors: previous loading scripts should also work in streaming mode with no (or minor) changes; and new loading scripts should not involve ... | 105 | Remove compression from xopen
We implemented support for streaming with 2 requirements:
- transparent use for the end user: just needs to pass the parameter `streaming=True`
- no additional work for the contributors: previous loading scripts should also work in streaming mode with no (or minor) changes; and new loa... | [
-0.4191690385,
-0.006903403,
0.005912493,
0.0238575526,
0.1248177961,
-0.3256883025,
-0.2292779237,
0.4713354111,
0.0886645913,
0.2409462631,
-0.247153759,
0.4001295567,
0.0819667354,
0.1371935159,
-0.0226641688,
-0.1370154172,
-0.0278308634,
0.1331309229,
0.088236846,
-0.00541... |
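The prefix scheme described in this comment could be parsed roughly like this (a toy sketch, not the library's actual implementation; the prefix set and the helper name are made up for illustration):

```python
# Illustrative set of compression protocols; the real list would come from
# whatever formats the library actually supports.
KNOWN_COMPRESSIONS = {"bz2", "gzip", "xz", "zstd", "lz4"}

def split_compression_prefix(urlpath):
    """Split an fsspec-style chained URL such as 'bz2::http://host/f.bz2'
    into (compression, inner_urlpath); return (None, urlpath) if no prefix."""
    prefix, sep, rest = urlpath.partition("::")
    if sep and prefix in KNOWN_COMPRESSIONS:
        return prefix, rest
    return None, urlpath
```

`xopen` could then wrap the inner path in the matching decompressor before reading.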
https://github.com/huggingface/datasets/issues/2799 | Loading JSON throws ArrowNotImplementedError | Hi @lewtun, thanks for reporting.
Apparently, `pyarrow.json` tries to cast timestamp-like fields in your JSON file to pyarrow timestamp type, and it fails with `ArrowNotImplementedError`.
I will investigate if there is a way to tell pyarrow not to try that timestamp casting. | ## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).
Curiously, there is no problem loading the dataset with `pandas` which... | 42 | Loading JSON throws ArrowNotImplementedError
## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).
Curiously, there is no ... | [
-0.0681873858,
0.1841620356,
0.0364154391,
0.378070116,
0.2355371118,
-0.0062701493,
0.4846676886,
0.3670632541,
0.4180770516,
-0.0028390361,
0.0605550855,
0.5898873806,
0.0276205242,
-0.1488580108,
-0.2172114253,
-0.1838724762,
0.0313556641,
0.1905163974,
0.1274460852,
0.02858... |
https://github.com/huggingface/datasets/issues/2799 | Loading JSON throws ArrowNotImplementedError | I think the issue is more complex than that...
I just took one of your JSON lines and pyarrow.json read it without problem. | ## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).
Curiously, there is no problem loading the dataset with `pandas` which... | 23 | Loading JSON throws ArrowNotImplementedError
## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).
Curiously, there is no ... | [
-0.0681873858,
0.1841620356,
0.0364154391,
0.378070116,
0.2355371118,
-0.0062701493,
0.4846676886,
0.3670632541,
0.4180770516,
-0.0028390361,
0.0605550855,
0.5898873806,
0.0276205242,
-0.1488580108,
-0.2172114253,
-0.1838724762,
0.0313556641,
0.1905163974,
0.1274460852,
0.02858... |
https://github.com/huggingface/datasets/issues/2799 | Loading JSON throws ArrowNotImplementedError | > I just took one of your JSON lines and pyarrow.json read it without problem.
yes, and for some peculiar reason the error is non-deterministic (i was eventually able to load the whole dataset by just re-running the `load_dataset` cell multiple times 🤔)
thanks for looking into this 🙏 ! | ## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).
Curiously, there is no problem loading the dataset with `pandas` which... | 50 | Loading JSON throws ArrowNotImplementedError
## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).
Curiously, there is no ... | [
-0.0681873858,
0.1841620356,
0.0364154391,
0.378070116,
0.2355371118,
-0.0062701493,
0.4846676886,
0.3670632541,
0.4180770516,
-0.0028390361,
0.0605550855,
0.5898873806,
0.0276205242,
-0.1488580108,
-0.2172114253,
-0.1838724762,
0.0313556641,
0.1905163974,
0.1274460852,
0.02858... |
https://github.com/huggingface/datasets/issues/2799 | Loading JSON throws ArrowNotImplementedError | The code works fine on my side.
Not sure what's going on here :/
I remember we did a few changes in the JSON loader in #2638 , did you do an update `datasets` when debugging this ?
| ## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).
Curiously, there is no problem loading the dataset with `pandas` which... | 38 | Loading JSON throws ArrowNotImplementedError
## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).
Curiously, there is no ... | [
-0.0681873858,
0.1841620356,
0.0364154391,
0.378070116,
0.2355371118,
-0.0062701493,
0.4846676886,
0.3670632541,
0.4180770516,
-0.0028390361,
0.0605550855,
0.5898873806,
0.0276205242,
-0.1488580108,
-0.2172114253,
-0.1838724762,
0.0313556641,
0.1905163974,
0.1274460852,
0.02858... |
https://github.com/huggingface/datasets/issues/2799 | Loading JSON throws ArrowNotImplementedError | OK after upgrading `datasets` to v1.12.1 the issue seems to have gone away. Closing this now :) | ## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).
Curiously, there is no problem loading the dataset with `pandas` which... | 17 | Loading JSON throws ArrowNotImplementedError
## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).
Curiously, there is no ... | [
-0.0681873858,
0.1841620356,
0.0364154391,
0.378070116,
0.2355371118,
-0.0062701493,
0.4846676886,
0.3670632541,
0.4180770516,
-0.0028390361,
0.0605550855,
0.5898873806,
0.0276205242,
-0.1488580108,
-0.2172114253,
-0.1838724762,
0.0313556641,
0.1905163974,
0.1274460852,
0.02858... |
https://github.com/huggingface/datasets/issues/2799 | Loading JSON throws ArrowNotImplementedError | Oops, I spoke too soon 😓
After deleting the cache and trying the above code snippet again I am hitting the same error. You can also reproduce it in the Colab notebook I linked to in the issue description. | ## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).
Curiously, there is no problem loading the dataset with `pandas` which... | 39 | Loading JSON throws ArrowNotImplementedError
## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).
Curiously, there is no ... | [
-0.0681873858,
0.1841620356,
0.0364154391,
0.378070116,
0.2355371118,
-0.0062701493,
0.4846676886,
0.3670632541,
0.4180770516,
-0.0028390361,
0.0605550855,
0.5898873806,
0.0276205242,
-0.1488580108,
-0.2172114253,
-0.1838724762,
0.0313556641,
0.1905163974,
0.1274460852,
0.02858... |
https://github.com/huggingface/datasets/issues/2799 | Loading JSON throws ArrowNotImplementedError | @albertvillanova @lhoestq I noticed the same issue using datasets v1.12.1. Is there an update on when this could be fixed? | ## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).
Curiously, there is no problem loading the dataset with `pandas` which... | 20 | Loading JSON throws ArrowNotImplementedError
## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).
Curiously, there is no ... | [
-0.0681873858,
0.1841620356,
0.0364154391,
0.378070116,
0.2355371118,
-0.0062701493,
0.4846676886,
0.3670632541,
0.4180770516,
-0.0028390361,
0.0605550855,
0.5898873806,
0.0276205242,
-0.1488580108,
-0.2172114253,
-0.1838724762,
0.0313556641,
0.1905163974,
0.1274460852,
0.02858... |
https://github.com/huggingface/datasets/issues/2799 | Loading JSON throws ArrowNotImplementedError | Apparently it's possible to make it work by increasing the `block_size`, let me open a PR | ## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).
Curiously, there is no problem loading the dataset with `pandas` which... | 16 | Loading JSON throws ArrowNotImplementedError
## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).
Curiously, there is no ... | [
-0.0681873858,
0.1841620356,
0.0364154391,
0.378070116,
0.2355371118,
-0.0062701493,
0.4846676886,
0.3670632541,
0.4180770516,
-0.0028390361,
0.0605550855,
0.5898873806,
0.0276205242,
-0.1488580108,
-0.2172114253,
-0.1838724762,
0.0313556641,
0.1905163974,
0.1274460852,
0.02858... |
https://github.com/huggingface/datasets/issues/2799 | Loading JSON throws ArrowNotImplementedError | I just opened a PR with a fix, feel free to install `datasets` from source from source and let me know if it helps | ## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).
Curiously, there is no problem loading the dataset with `pandas` which... | 24 | Loading JSON throws ArrowNotImplementedError
## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).
Curiously, there is no ... | [
-0.0681873858,
0.1841620356,
0.0364154391,
0.378070116,
0.2355371118,
-0.0062701493,
0.4846676886,
0.3670632541,
0.4180770516,
-0.0028390361,
0.0605550855,
0.5898873806,
0.0276205242,
-0.1488580108,
-0.2172114253,
-0.1838724762,
0.0313556641,
0.1905163974,
0.1274460852,
0.02858... |
https://github.com/huggingface/datasets/issues/2799 | Loading JSON throws ArrowNotImplementedError | @zijwang did PR #3000 solve the problem for you? It did for me, so it all is good on your end we can close this issue. Thanks again to @lhoestq for the pyarrow magic 🤯 | ## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).
Curiously, there is no problem loading the dataset with `pandas` which... | 35 | Loading JSON throws ArrowNotImplementedError
## Describe the bug
I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).
Curiously, there is no ... | [
-0.0681873858,
0.1841620356,
0.0364154391,
0.378070116,
0.2355371118,
-0.0062701493,
0.4846676886,
0.3670632541,
0.4180770516,
-0.0028390361,
0.0605550855,
0.5898873806,
0.0276205242,
-0.1488580108,
-0.2172114253,
-0.1838724762,
0.0313556641,
0.1905163974,
0.1274460852,
0.02858... |
https://github.com/huggingface/datasets/issues/2788 | How to sample every file in a list of files making up a split in a dataset when loading? | Hi ! This is not possible just with `load_dataset`.
You can do something like this instead:
```python
seed=42
data_files_dict = {
"train": [train_file1, train_file2],
"test": [test_file1, test_file2],
"val": [val_file1, val_file2]
}
dataset = datasets.load_dataset(
"csv",
data_files=dat... | I am loading a dataset with multiple train, test, and validation files like this:
```
data_files_dict = {
"train": [train_file1, train_file2],
"test": [test_file1, test_file2],
"val": [val_file1, val_file2]
}
dataset = datasets.load_dataset(
"csv",
data_files=data_files_dict,
split=[... | 67 | How to sample every file in a list of files making up a split in a dataset when loading?
I am loading a dataset with multiple train, test, and validation files like this:
```
data_files_dict = {
"train": [train_file1, train_file2],
"test": [test_file1, test_file2],
"val": [val_file1, val_file2]
}
... | [
-0.3138793409,
-0.1796858311,
-0.1142876968,
0.1454613209,
-0.0157868639,
0.3574570119,
0.4449552894,
0.4948761463,
0.556787014,
0.0195080694,
-0.0745146871,
0.1985773444,
0.0281435344,
0.143457517,
0.1276392639,
-0.323733747,
-0.0534549244,
0.1187503561,
0.271900177,
0.0101070... |
https://github.com/huggingface/datasets/issues/2787 | ConnectionError: Couldn't reach https://raw.githubusercontent.com | the buggy code is located in:
if data_args.task_name is not None:
# Downloading and loading a dataset from the hub.
datasets = load_dataset("glue", data_args.task_name, cache_dir=model_args.cache_dir) | Hello,
I am trying to run run_glue.py and it gives me this error -
Traceback (most recent call last):
File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module>
main()
File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/... | 25 | ConnectionError: Couldn't reach https://raw.githubusercontent.com
Hello,
I am trying to run run_glue.py and it gives me this error -
Traceback (most recent call last):
File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module>
main()
File "E:/BER... | [
-0.2038710266,
-0.1601333916,
-0.0819258615,
0.046901986,
0.2665442228,
-0.0723188967,
0.1546958536,
0.3012446761,
0.1090678573,
-0.0955981091,
-0.1769393533,
-0.1569308937,
0.0424281619,
0.0098086512,
0.0353739783,
-0.17579633,
-0.095591493,
-0.0462967977,
-0.2514630556,
0.187... |
https://github.com/huggingface/datasets/issues/2787 | ConnectionError: Couldn't reach https://raw.githubusercontent.com | Hi @jinec,
From time to time we get this kind of `ConnectionError` coming from the github.com website: https://raw.githubusercontent.com
Normally, it should work if you wait a little and then retry.
Could you please confirm if the problem persists? | Hello,
I am trying to run run_glue.py and it gives me this error -
Traceback (most recent call last):
File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module>
main()
File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/... | 38 | ConnectionError: Couldn't reach https://raw.githubusercontent.com
Hello,
I am trying to run run_glue.py and it gives me this error -
Traceback (most recent call last):
File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module>
main()
File "E:/BER... | [
-0.2038710266,
-0.1601333916,
-0.0819258615,
0.046901986,
0.2665442228,
-0.0723188967,
0.1546958536,
0.3012446761,
0.1090678573,
-0.0955981091,
-0.1769393533,
-0.1569308937,
0.0424281619,
0.0098086512,
0.0353739783,
-0.17579633,
-0.095591493,
-0.0462967977,
-0.2514630556,
0.187... |
https://github.com/huggingface/datasets/issues/2787 | ConnectionError: Couldn't reach https://raw.githubusercontent.com | > I can access https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py without problem...
I can not access https://raw.githubusercontent.com/huggingface/datasets either, I am in China | Hello,
I am trying to run run_glue.py and it gives me this error -
Traceback (most recent call last):
File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module>
main()
File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/... | 17 | ConnectionError: Couldn't reach https://raw.githubusercontent.com
Hello,
I am trying to run run_glue.py and it gives me this error -
Traceback (most recent call last):
File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module>
main()
File "E:/BER... | [
-0.2038710266,
-0.1601333916,
-0.0819258615,
0.046901986,
0.2665442228,
-0.0723188967,
0.1546958536,
0.3012446761,
0.1090678573,
-0.0955981091,
-0.1769393533,
-0.1569308937,
0.0424281619,
0.0098086512,
0.0353739783,
-0.17579633,
-0.095591493,
-0.0462967977,
-0.2514630556,
0.187... |
https://github.com/huggingface/datasets/issues/2787 | ConnectionError: Couldn't reach https://raw.githubusercontent.com | > Finally i can access it, by the superfast software. Thanks
Excuse me, I have the same problem as you, could you please tell me how to solve it? | Hello,
I am trying to run run_glue.py and it gives me this error -
Traceback (most recent call last):
File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module>
main()
File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/... | 29 | ConnectionError: Couldn't reach https://raw.githubusercontent.com
Hello,
I am trying to run run_glue.py and it gives me this error -
Traceback (most recent call last):
File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module>
main()
File "E:/BER... | [
-0.2038710266,
-0.1601333916,
-0.0819258615,
0.046901986,
0.2665442228,
-0.0723188967,
0.1546958536,
0.3012446761,
0.1090678573,
-0.0955981091,
-0.1769393533,
-0.1569308937,
0.0424281619,
0.0098086512,
0.0353739783,
-0.17579633,
-0.095591493,
-0.0462967977,
-0.2514630556,
0.187... |
https://github.com/huggingface/datasets/issues/2775 | `generate_random_fingerprint()` deterministic with 🤗Transformers' `set_seed()` | I dug into what I believe is the root of this issue and added a repro in my comment. If this is better addressed as a cross-team issue, let me know and I can open an issue in the Transformers repo | ## Describe the bug
**Update:** I dug into this to try to reproduce the underlying issue, and I believe it's that `set_seed()` from the `transformers` library makes the "random" fingerprint identical each time. I believe this is still a bug, because `datasets` is used exactly this way in `transformers` after `set_se... | 41 | `generate_random_fingerprint()` deterministic with 🤗Transformers' `set_seed()`
## Describe the bug
**Update:** I dug into this to try to reproduce the underlying issue, and I believe it's that `set_seed()` from the `transformers` library makes the "random" fingerprint identical each time. I believe this is still ... | [
-0.1622431874,
-0.0606820956,
0.093882665,
0.0836028904,
0.3144049346,
-0.142973423,
0.6139577627,
0.0132132014,
-0.1196163222,
0.0058160932,
0.1650342196,
0.2017954439,
-0.0787890106,
0.0945225731,
-0.0190070532,
0.2130858749,
0.0278786626,
-0.1402868778,
-0.2763955295,
-0.141... |
https://github.com/huggingface/datasets/issues/2775 | `generate_random_fingerprint()` deterministic with 🤗Transformers' `set_seed()` | Hi !
IMO we shouldn't try to modify `set_seed` from transformers but maybe make `datasets` have its own RNG just to generate random fingerprints.
Any opinion on this @LysandreJik ? | ## Describe the bug
**Update:** I dug into this to try to reproduce the underlying issue, and I believe it's that `set_seed()` from the `transformers` library makes the "random" fingerprint identical each time. I believe this is still a bug, because `datasets` is used exactly this way in `transformers` after `set_se... | 30 | `generate_random_fingerprint()` deterministic with 🤗Transformers' `set_seed()`
## Describe the bug
**Update:** I dug into this to try to reproduce the underlying issue, and I believe it's that `set_seed()` from the `transformers` library makes the "random" fingerprint identical each time. I believe this is still ... | [
-0.1622431874,
-0.0606820956,
0.093882665,
0.0836028904,
0.3144049346,
-0.142973423,
0.6139577627,
0.0132132014,
-0.1196163222,
0.0058160932,
0.1650342196,
0.2017954439,
-0.0787890106,
0.0945225731,
-0.0190070532,
0.2130858749,
0.0278786626,
-0.1402868778,
-0.2763955295,
-0.141... |
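The idea floated above — give `datasets` its own RNG just for fingerprints — can be sketched in a few lines (the names are illustrative, not the actual implementation):

```python
import random

# random.Random() seeds each new instance from OS entropy, so re-seeding the
# *global* RNG (as transformers' set_seed does) leaves this one untouched.
_fingerprint_rng = random.Random()

def generate_random_fingerprint(nbits: int = 64) -> str:
    # Zero-padded hex string, e.g. 16 hex digits for 64 bits.
    return f"{_fingerprint_rng.getrandbits(nbits):0{nbits // 4}x}"
```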
https://github.com/huggingface/datasets/issues/2768 | `ArrowInvalid: Added column's length must match table's length.` after using `select` | Hi,
the `select` method creates an indices mapping and doesn't modify the underlying PyArrow table by default for better performance. To modify the underlying table after the `select` call, call `flatten_indices` on the dataset object as follows:
```python
from datasets import load_dataset
ds = load_dataset("tw... | ## Describe the bug
I would like to add a column to a downsampled dataset. However I get an error message saying the length don't match with the length of the unsampled dataset indicated. I suspect that the dataset size is not updated when calling `select`.
## Steps to reproduce the bug
```python
from datasets im... | 53 | `ArrowInvalid: Added column's length must match table's length.` after using `select`
## Describe the bug
I would like to add a column to a downsampled dataset. However I get an error message saying the length don't match with the length of the unsampled dataset indicated. I suspect that the dataset size is not upda... | [
-0.2274189293,
-0.1898311079,
-0.0089053502,
0.0325091742,
0.0982827693,
0.0415741168,
0.0728504434,
0.183797732,
0.0604876168,
0.164753601,
0.2048401535,
0.6761354804,
0.0139969029,
-0.3138835728,
0.0691086128,
-0.2314470708,
0.0780238509,
0.1225911006,
-0.3308289945,
-0.06523... |
https://github.com/huggingface/datasets/issues/2767 | equal operation to perform unbatch for huggingface datasets | Hi @lhoestq
Maybe this is clearer to explain like this: currently the map function maps one example to "one" modified example. Let's assume we want to map one example to "multiple" examples, where we do not know in advance how many examples there would be per entry. I would greatly appreciate your telling me how I can handle this...
I need to use "unbatch" operation in tensorflow on a huggingface dataset, I could not find this operation, could you kindly direct me how I can do it, here is the problem I am trying to solve:
I am considering "record" dataset in SuperGlue and I need to replicate each entery of the dataset for each answer, to ma... | 62 | equal operation to perform unbatch for huggingface datasets
Hi
I need to use "unbatch" operation in tensorflow on a huggingface dataset, I could not find this operation, could you kindly direct me how I can do it, here is the problem I am trying to solve:
I am considering "record" dataset in SuperGlue and I need... | [
-0.0286771413,
-0.7930665612,
0.0335285813,
-0.0818501636,
-0.0049163247,
-0.1033488587,
0.2304780334,
-0.0015910362,
0.4284726977,
0.2395398915,
-0.3240568042,
-0.0459577553,
0.0236223266,
0.4699032903,
0.1768153012,
-0.2348045111,
0.0796521828,
0.112086989,
-0.2576456666,
-0.... |
https://github.com/huggingface/datasets/issues/2767 | equal operation to perform unbatch for huggingface datasets | Hi,
this is also my question: how to perform an operation similar to tensorflow's "unbatch" in the great huggingface datasets library.
thanks. | Hi
I need to use "unbatch" operation in tensorflow on a huggingface dataset, I could not find this operation, could you kindly direct me how I can do it, here is the problem I am trying to solve:
I am considering "record" dataset in SuperGlue and I need to replicate each entery of the dataset for each answer, to ma... | 22 | equal operation to perform unbatch for huggingface datasets
Hi
I need to use "unbatch" operation in tensorflow on a huggingface dataset, I could not find this operation, could you kindly direct me how I can do it, here is the problem I am trying to solve:
I am considering "record" dataset in SuperGlue and I need... | [
-0.0424977504,
-0.7478796244,
0.0441973545,
-0.0638021827,
0.0602155514,
-0.07411924,
0.2164923549,
0.0234036464,
0.4041276574,
0.2328332216,
-0.3757909536,
-0.0309476256,
0.0087435916,
0.46033445,
0.2217883319,
-0.2381003797,
0.0493123941,
0.099917382,
-0.2166884094,
-0.069288... |
https://github.com/huggingface/datasets/issues/2767 | equal operation to perform unbatch for huggingface datasets | Hi,
`Dataset.map` in the batched mode allows you to map a single row to multiple rows. So to perform "unbatch", you can do the following:
```python
import collections
def unbatch(batch):
new_batch = collections.defaultdict(list)
keys = batch.keys()
for values in zip(*batch.values()):
ex ... | Hi
I need to use "unbatch" operation in tensorflow on a huggingface dataset, I could not find this operation, could you kindly direct me how I can do it, here is the problem I am trying to solve:
I am considering "record" dataset in SuperGlue and I need to replicate each entery of the dataset for each answer, to ma... | 72 | equal operation to perform unbatch for huggingface datasets
Hi
I need to use "unbatch" operation in tensorflow on a huggingface dataset, I could not find this operation, could you kindly direct me how I can do it, here is the problem I am trying to solve:
I am considering "record" dataset in SuperGlue and I need... | [
-0.0753812939,
-0.7735245228,
0.0261516161,
-0.0693189204,
0.0228941906,
-0.0482675135,
0.3034125566,
0.0408936143,
0.3975751996,
0.2398023605,
-0.365085423,
-0.0645893514,
0.0264362544,
0.4119095206,
0.1797592342,
-0.204305321,
0.0633045137,
0.0990089998,
-0.2375573069,
-0.058... |
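Completing the truncated snippet above, the whole recipe looks roughly like this (assuming a hypothetical `answers` field holds the list to explode):

```python
import collections

def unbatch(batch):
    """Map each example to several examples: one per entry in its
    list-valued "answers" field, replicating every other field."""
    new_batch = collections.defaultdict(list)
    keys = batch.keys()
    for values in zip(*batch.values()):
        ex = dict(zip(keys, values))
        for answer in ex["answers"]:
            for k in keys:
                new_batch[k].append(answer if k == "answers" else ex[k])
    return dict(new_batch)
```

With a 🤗 dataset this would be applied as `ds.map(unbatch, batched=True)`.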
https://github.com/huggingface/datasets/issues/2767 | equal operation to perform unbatch for huggingface datasets | Dear @mariosasko
First, thank you very much for coming back to me on this, I appreciate it a lot. I tried this solution, I am getting errors, do you mind
giving me one test example to be able to run your code, to understand better the format of the inputs to your function?
in this function https://github.com/googl... | Hi
I need to use "unbatch" operation in tensorflow on a huggingface dataset, I could not find this operation, could you kindly direct me how I can do it, here is the problem I am trying to solve:
I am considering "record" dataset in SuperGlue and I need to replicate each entery of the dataset for each answer, to ma... | 90 | equal operation to perform unbatch for huggingface datasets
Hi
I need to use "unbatch" operation in tensorflow on a huggingface dataset, I could not find this operation, could you kindly direct me how I can do it, here is the problem I am trying to solve:
I am considering "record" dataset in SuperGlue and I need... | [
-0.0108778058,
-0.7745169997,
0.0500378646,
-0.0438203961,
0.058595743,
-0.1315132678,
0.2449042946,
0.0355798528,
0.4034039378,
0.2071984261,
-0.3647297621,
-0.0576844029,
0.0258303266,
0.4528595805,
0.1368794143,
-0.2415411919,
0.0354219042,
0.1300745308,
-0.3146577477,
-0.09... |
https://github.com/huggingface/datasets/issues/2767 | equal operation to perform unbatch for huggingface datasets | Hi @mariosasko
I think finally I got this, I think you mean to do things in one step, here is the full example for completeness:
```
def unbatch(batch):
new_batch = collections.defaultdict(list)
keys = batch.keys()
for values in zip(*batch.values()):
ex = {k: v for k, v in zip(keys, values... | Hi
I need to use "unbatch" operation in tensorflow on a huggingface dataset, I could not find this operation, could you kindly direct me how I can do it, here is the problem I am trying to solve:
I am considering "record" dataset in SuperGlue and I need to replicate each entery of the dataset for each answer, to ma... | 131 | equal operation to perform unbatch for huggingface datasets
Hi
I need to use "unbatch" operation in tensorflow on a huggingface dataset, I could not find this operation, could you kindly direct me how I can do it, here is the problem I am trying to solve:
I am considering "record" dataset in SuperGlue and I need... | [
-0.0244649686,
-0.8023661375,
0.0363252237,
-0.0556127727,
0.0216098335,
-0.0733310282,
0.2536011636,
0.0494493581,
0.3815230131,
0.2414882332,
-0.353425473,
-0.0654724389,
0.0393711366,
0.4445896447,
0.1729432046,
-0.2635638118,
0.0600869879,
0.1323879361,
-0.2501767576,
-0.09... |
https://github.com/huggingface/datasets/issues/2765 | BERTScore Error | Hi,
The `use_fast_tokenizer` argument has been recently added to the bert-score lib. I've opened a PR with the fix. In the meantime, you can try to downgrade the version of bert-score with the following command to make the code work:
```
pip uninstall bert-score
pip install "bert-score<0.3.10"
``` | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
predictions = ["hello there", "general kenobi"]
references = ["hello there", "general kenobi"]
bert = load_metric('bertscore')
bert.compute(predictions=predictions, references=references,lang='en')
... | 48 | BERTScore Error
## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
predictions = ["hello there", "general kenobi"]
references = ["hello there", "general kenobi"]
bert = load_metric('bertscore')
bert.compute(predictions=predictions, references=refer... | [
-0.1357685328,
0.1683022231,
0.0354214497,
0.2092981488,
0.3473314643,
0.0116619924,
0.228991881,
0.3530412912,
-0.1199427247,
0.3440086246,
0.0769439712,
0.4752254784,
-0.1141473055,
-0.1162746474,
-0.0818043947,
-0.3451916873,
0.0918747634,
0.3889159858,
0.0978737548,
-0.1766... |
https://github.com/huggingface/datasets/issues/2763 | English wikipedia datasets is not clean | Hi ! Certain users might need these data (for training or simply to explore/index the dataset).
Feel free to implement a map function that gets rid of these paragraphs and process the wikipedia dataset with it before training | ## Describe the bug
Wikipedia english dumps contain many wikipedia paragraphs like "References", "Category:" and "See Also" that should not be used for training.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
w = load_dataset('wikipedia', '20200501.e... | 38 | English wikipedia datasets is not clean
## Describe the bug
Wikipedia english dumps contain many wikipedia paragraphs like "References", "Category:" and "See Also" that should not be used for training.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
... | [
-0.0217496846,
0.2180517465,
-0.1063895747,
0.4800748527,
0.2548157275,
0.1268989593,
0.3188253045,
0.3147775829,
0.2105301917,
0.0805180445,
-0.1847252548,
0.2396584302,
0.3409661353,
-0.2226555645,
0.205966413,
-0.3597016931,
0.2095894068,
0.0540649034,
-0.3535144031,
-0.2804... |
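A map function along the lines the comment above suggests (the header list is illustrative and English-specific):

```python
# Headers after which everything is assumed to be non-prose boilerplate.
TRAILING_SECTIONS = ("References", "See also", "External links", "Category:")

def strip_trailing_sections(example):
    """Truncate a Wikipedia article at the first trailing section header."""
    text = example["text"]
    cut = len(text)
    for header in TRAILING_SECTIONS:
        idx = text.find("\n" + header)
        if idx != -1:
            cut = min(cut, idx)
    example["text"] = text[:cut]
    return example
```

Applied before training as `wiki = wiki.map(strip_trailing_sections)`.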
https://github.com/huggingface/datasets/issues/2762 | Add RVL-CDIP dataset | [labels_only.tar.gz](https://docs.google.com/uc?authuser=0&id=0B0NKIRwUL9KYcXo3bV9LU0t3SGs&export=download) on the RVL-CDIP website does not work for me.
> 404. That’s an error. The requested URL was not found on this server.
I contacted the author ( Adam Harley) regarding this, and he told me that the link works... | ## Adding a Dataset
- **Name:** RVL-CDIP
- **Description:** The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The image... | 76 | Add RVL-CDIP dataset
## Adding a Dataset
- **Name:** RVL-CDIP
- **Description:** The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000... | [
0.0768499747, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2761 | Error loading C4 realnewslike dataset | Hi @danshirron,
`c4` was updated a few days back by @lhoestq. The new configs are `['en', 'en.noclean', 'en.realnewslike', 'en.webtextlike']`. You'll need to remove any older version of this dataset you previously downloaded and then run `load_dataset` again with the new configuration. | ## Describe the bug
Error loading C4 realnewslike dataset. Validation part mismatch
## Steps to reproduce the bug
```python
raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir)
## Expected results
success on data loading
## Actual results
Downloading: 100%|███████████████████████... | 39 | Error loading C4 realnewslike dataset
## Describe the bug
Error loading C4 realnewslike dataset. Validation part mismatch
## Steps to reproduce the bug
```python
raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir)
## Expected results
success on data loading
## Actual results
D... | [
-0.2315261513, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2761 | Error loading C4 realnewslike dataset | @bhavitvyamalik @lhoestq , just tried the above and got:
>>> a=datasets.load_dataset('c4','en.realnewslike')
Downloading: 3.29kB [00:00, 1.66MB/s] ... | ## Describe the bug
Error loading C4 realnewslike dataset. Validation part mismatch
## Steps to reproduce the bug
```python
raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir)
## Expected results
success on data loading
## Actual results
Downloading: 100%|███████████████████████... | 91 | Error loading C4 realnewslike dataset
## Describe the bug
Error loading C4 realnewslike dataset. Validation part mismatch
## Steps to reproduce the bug
```python
raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir)
## Expected results
success on data loading
## Actual results
D... | [
-0.2315261513, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2761 | Error loading C4 realnewslike dataset | I think I had an older version of datasets installed and that's why I commented the old configurations in my last comment, my bad! I re-checked and updated it to latest version (`datasets==1.11.0`) and it's showing `available configs: ['en', 'realnewslike', 'en.noblocklist', 'en.noclean']`.
I tried `raw_datasets = ... | ## Describe the bug
Error loading C4 realnewslike dataset. Validation part mismatch
## Steps to reproduce the bug
```python
raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir)
## Expected results
success on data loading
## Actual results
Downloading: 100%|███████████████████████... | 79 | Error loading C4 realnewslike dataset
## Describe the bug
Error loading C4 realnewslike dataset. Validation part mismatch
## Steps to reproduce the bug
```python
raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir)
## Expected results
success on data loading
## Actual results
D... | [
-0.2315261513, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2761 | Error loading C4 realnewslike dataset | It works. I probably had some issue with the cache; after cleaning it I'm able to download the dataset. Thanks | ## Describe the bug
Error loading C4 realnewslike dataset. Validation part mismatch
## Steps to reproduce the bug
```python
raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir)
## Expected results
success on data loading
## Actual results
Downloading: 100%|███████████████████████... | 20 | Error loading C4 realnewslike dataset
## Describe the bug
Error loading C4 realnewslike dataset. Validation part mismatch
## Steps to reproduce the bug
```python
raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir)
## Expected results
success on data loading
## Actual results
D... | [
-0.2315261513, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2759 | the meteor metric seems not consist with the official version | the issue is caused by the differences between varied meteor versions:
meteor1.0 is for https://aclanthology.org/W07-0734.pdf
meteor1.5 is for https://aclanthology.org/W14-3348.pdf
here is a very similar issue in NLTK
https://github.com/nltk/nltk/issues/2655 | ## Describe the bug
The computed meteor score seems strange because the value is very different from the scores computed by other tools. For example, I use the meteor score computed by [NLGeval](https://github.com/Maluuba/nlg-eval) as the reference (which reuses the official jar file for the computation)
## Steps t... | 28 | the meteor metric seems not consist with the official version
## Describe the bug
The computed meteor score seems strange because the value is very different from the scores computed by other tools. For example, I use the meteor score computed by [NLGeval](https://github.com/Maluuba/nlg-eval) as the reference (which... | [
-0.1358161271, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2759 | the meteor metric seems not consist with the official version | Hi @jianguda, thanks for reporting.
Currently, at 🤗 `datasets` we are using METEOR 1.0 (indeed using NLTK: `from nltk.translate import meteor_score`): See the [citation here](https://github.com/huggingface/datasets/blob/master/metrics/meteor/meteor.py#L23-L35).
If there is some open source implementation of METE... | ## Describe the bug
The computed meteor score seems strange because the value is very different from the scores computed by other tools. For example, I use the meteor score computed by [NLGeval](https://github.com/Maluuba/nlg-eval) as the reference (which reuses the official jar file for the computation)
## Steps t... | 42 | the meteor metric seems not consist with the official version
## Describe the bug
The computed meteor score seems strange because the value is very different from the scores computed by other tools. For example, I use the meteor score computed by [NLGeval](https://github.com/Maluuba/nlg-eval) as the reference (which... | [
-0.1478184015, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2757 | Unexpected type after `concatenate_datasets` | Hi @JulesBelveze, thanks for your question.
Note that 🤗 `datasets` internally store their data in Apache Arrow format.
However, when accessing dataset columns, by default they are returned as native Python objects (lists in this case).
If you would like their columns to be returned in a more suitable format ... | ## Describe the bug
I am trying to concatenate two `Dataset` using `concatenate_datasets` but it turns out that after concatenation the features are casted from `torch.Tensor` to `list`.
It then leads to a weird tensors when trying to convert it to a `DataLoader`. However, if I use each `Dataset` separately everythi... | 80 | Unexpected type after `concatenate_datasets`
## Describe the bug
I am trying to concatenate two `Dataset` using `concatenate_datasets` but it turns out that after concatenation the features are casted from `torch.Tensor` to `list`.
It then leads to a weird tensors when trying to convert it to a `DataLoader`. Howev... | [
0.0659024119, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2750 | Second concatenation of datasets produces errors | Hi @Aktsvigun! We are planning to address this issue before our next release, in a couple of weeks at most. 😅
In the meantime, if you would like to contribute, feel free to open a Pull Request. You are welcome. Here you can find more information: [How to contribute to Datasets?](CONTRIBUTING.md) | Hi,
I need to concatenate my dataset with others several times, and after I concatenate it for the second time, the features of features (e.g. tag names) are collapsed. This hinders, for instance, the usage of the tokenize function with `data.map`.
```
from datasets import load_dataset, concatenate_datasets
d... | 51 | Second concatenation of datasets produces errors
Hi,
I need to concatenate my dataset with others several times, and after I concatenate it for the second time, the features of features (e.g. tag names) are collapsed. This hinders, for instance, the usage of the tokenize function with `data.map`.
```
from data... | [
-0.1504654139, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2749 | Raise a proper exception when trying to stream a dataset that requires to manually download files | Hi @severo, thanks for reporting.
As discussed, datasets requiring manual download should be:
- programmatically identifiable
- properly handled, with a clearer error message, when trying to load them with streaming
Regarding programmatic identifiability, note that for datasets requiring manual downlo... | ## Describe the bug
At least for 'reclor', 'telugu_books', 'turkish_movie_sentiment', 'ubuntu_dialogs_corpus', 'wikihow', trying to `load_dataset` in streaming mode raises a `TypeError` without any detail about why it fails.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = ... | 64 | Raise a proper exception when trying to stream a dataset that requires to manually download files
## Describe the bug
At least for 'reclor', 'telugu_books', 'turkish_movie_sentiment', 'ubuntu_dialogs_corpus', 'wikihow', trying to `load_dataset` in streaming mode raises a `TypeError` without any detail about why it... | [
-0.2240488231, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2746 | Cannot load `few-nerd` dataset | Hi @Mehrad0711,
I'm afraid there is no "canonical" Hugging Face dataset named "few-nerd".
There are 2 kinds of datasets hosted at the Hugging Face Hub:
- canonical datasets (their identifier contains no slash "/"): we, the Hugging Face team, supervise their implementation and we make sure they work correctly by ... | ## Describe the bug
Cannot load `few-nerd` dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset('few-nerd', 'supervised')
```
## Actual results
Executing above code will give the following error:
```
Using the latest cached version of the module from /Users... | 242 | Cannot load `few-nerd` dataset
## Describe the bug
Cannot load `few-nerd` dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset('few-nerd', 'supervised')
```
## Actual results
Executing above code will give the following error:
```
Using the latest cached ... | [
-0.3042272031, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2746 | Cannot load `few-nerd` dataset | Hello, @Mehrad0711; Hi, @albertvillanova !
I am the maintainer of the "dfki/few-nerd" dataset script; sorry for the very late reply, and I hope this message finds you well!
We should use
```
dataset = load_dataset("dfki-nlp/few-nerd", name="supervised")
```
instead of not specifying the "name" argument, where name ... | ## Describe the bug
Cannot load `few-nerd` dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset('few-nerd', 'supervised')
```
## Actual results
Executing above code will give the following error:
```
Using the latest cached version of the module from /Users... | 305 | Cannot load `few-nerd` dataset
## Describe the bug
Cannot load `few-nerd` dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset('few-nerd', 'supervised')
```
## Actual results
Executing above code will give the following error:
```
Using the latest cached ... | [
-0.3042272031, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2746 | Cannot load `few-nerd` dataset | Hi @chen-yuxuan, thanks for your answer.
Just a few comments:
- Please, note that as we use `datasets.load_dataset` implementation, we can pass the configuration name as the second positional argument (no need to pass explicitly `name=`) and it downloads the 3 splits:
```python
In [4]: ds = load_dataset("dfki... | ## Describe the bug
Cannot load `few-nerd` dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset('few-nerd', 'supervised')
```
## Actual results
Executing above code will give the following error:
```
Using the latest cached version of the module from /Users... | 208 | Cannot load `few-nerd` dataset
## Describe the bug
Cannot load `few-nerd` dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset('few-nerd', 'supervised')
```
## Actual results
Executing above code will give the following error:
```
Using the latest cached ... | [
-0.3042272031, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2746 | Cannot load `few-nerd` dataset | Thank you @albertvillanova for your detailed feedback!
> no need to pass explicitly `name=`
Good catch! I thought `split` stands before `name` in the argument list... but now it is all clear to me, sounds cool! Thanks for the explanation.
Anyway, in our old code it still looks a bit confusing if we only want one... | ## Describe the bug
Cannot load `few-nerd` dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset('few-nerd', 'supervised')
```
## Actual results
Executing above code will give the following error:
```
Using the latest cached version of the module from /Users... | 207 | Cannot load `few-nerd` dataset
## Describe the bug
Cannot load `few-nerd` dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset('few-nerd', 'supervised')
```
## Actual results
Executing above code will give the following error:
```
Using the latest cached ... | [
-0.3042272031, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2746 | Cannot load `few-nerd` dataset | Hi @chen-yuxuan,
I have tested on Windows and now it works perfectly, after fixing the encoding issue:
```python
In [1]: from datasets import load_dataset
In [2]: ds = load_dataset("dfki-nlp/few-nerd", "supervised")
Downloading: 100%|██████████████████████████████████████████████████████████████████████... | ## Describe the bug
Cannot load `few-nerd` dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset('few-nerd', 'supervised')
```
## Actual results
Executing above code will give the following error:
```
Using the latest cached version of the module from /Users... | 112 | Cannot load `few-nerd` dataset
## Describe the bug
Cannot load `few-nerd` dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset('few-nerd', 'supervised')
```
## Actual results
Executing above code will give the following error:
```
Using the latest cached ... | [
-0.3042272031, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2743 | Dataset JSON is incorrect | As discussed, the metadata JSON files must be regenerated because the keys were not properly generated and they will not be read by the builder:
> Indeed there is some problem/bug while reading the datasets_info.json file: there is a mismatch with the config.name keys in the file...
In the meantime, in order to be a...
The JSON file generated for https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/journalists_questions.py is https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/dataset... | 109 | Dataset JSON is incorrect
## Describe the bug
The JSON file generated for https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/journalists_questions.py is https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/j... | [
0.1457878947, … ] (embedding vector, truncated in source)
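A key mismatch like the one described above can be detected mechanically. The helper below is hypothetical (not part of the library) and simply compares the JSON file's top-level keys with the expected config names:

```python
import json
import os
import tempfile

def config_keys_match(dataset_infos_path, config_names):
    """Return True iff the top-level keys of a dataset_infos.json file
    are exactly the builder's config names."""
    with open(dataset_infos_path, encoding="utf-8") as f:
        infos = json.load(f)
    return set(infos) == set(config_names)

# Demo on a throwaway file mimicking the mismatch from the issue:
demo_path = os.path.join(tempfile.mkdtemp(), "dataset_infos.json")
with open(demo_path, "w", encoding="utf-8") as f:
    json.dump({"plain_text": {"description": ""}}, f)

ok = config_keys_match(demo_path, ["plain_text"])               # True
mismatch = config_keys_match(demo_path, ["journalists_questions"])  # False
```

Such a check could be run over regenerated metadata to confirm the builder will actually find its configs.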
https://github.com/huggingface/datasets/issues/2742 | Improve detection of streamable file types | maybe we should rather attempt to download a `Range` from the server and see if it works? | **Is your feature request related to a problem? Please describe.**
```python
from datasets import load_dataset_builder
from datasets.utils.streaming_download_manager import StreamingDownloadManager
builder = load_dataset_builder("journalists_questions", name="plain_text")
builder._split_generators(StreamingDownl... | 17 | Improve detection of streamable file types
**Is your feature request related to a problem? Please describe.**
```python
from datasets import load_dataset_builder
from datasets.utils.streaming_download_manager import StreamingDownloadManager
builder = load_dataset_builder("journalists_questions", name="plain_tex... | [
-0.4683198333, … ] (embedding vector, truncated in source)
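The `Range`-probe idea from the comment could be split into a pure check plus a network call. The helper below only interprets a server's reply; the probe in the trailing comment is an untested sketch, not an existing API:

```python
def honors_range(status_code, headers):
    """Interpret the reply to a 1-byte `Range` request: a 206 status
    (Partial Content), or an explicit `Accept-Ranges: bytes` header,
    suggests the server supports byte ranges, i.e. the file is streamable."""
    if status_code == 206:
        return True
    return headers.get("Accept-Ranges", "").lower() == "bytes"

# Hypothetical probe (network call, sketched only):
# import urllib.request
# req = urllib.request.Request(url, headers={"Range": "bytes=0-0"})
# with urllib.request.urlopen(req) as resp:
#     streamable = honors_range(resp.status, resp.headers)
```

Servers that ignore the `Range` header typically reply 200 with the full body, which the helper treats as "not streamable".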
https://github.com/huggingface/datasets/issues/2737 | SacreBLEU update | Hi @devrimcavusoglu,
I tried your code with the latest version of `datasets` and `sacrebleu==1.5.1` and it's running fine after changing one small thing:
```
sacrebleu = datasets.load_metric('sacrebleu')
predictions = ["It is a guide to action which ensures that the military always obeys the commands of the party"]
re... | With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken, and getting error.
AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER'
this happens since, in the new version of sacrebleu, there is no `DEFAULT_TOKENIZER`, but sacrebleu.py tries... | 101 | SacreBLEU update
With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken, and getting error.
AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER'
this happens since, in the new version of sacrebleu, there is no `DEFAULT_TOKENIZER`, but... | [
-0.3851111233, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2737 | SacreBLEU update | @bhavitvyamalik hmm. I forgot double brackets, but it still didn't work when I used it with double brackets. It may be an issue with the platform (using win-10 currently), or versions. What is your platform and your version info for datasets, python, and sacrebleu? | With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken, and getting error.
AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER'
this happens since, in the new version of sacrebleu, there is no `DEFAULT_TOKENIZER`, but sacrebleu.py tries... | 42 | SacreBLEU update
With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken, and getting error.
AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER'
this happens since, in the new version of sacrebleu, there is no `DEFAULT_TOKENIZER`, but... | [
-0.3610987663, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2737 | SacreBLEU update | You can check that here, I've reproduced your code in [Google colab](https://colab.research.google.com/drive/1X90fHRgMLKczOVgVk7NDEw_ciZFDjaCM?usp=sharing). Looks like there was some issue in `sacrebleu` which was fixed later from what I've found [here](https://github.com/pytorch/fairseq/issues/2049#issuecomment-622367... | With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken, and getting error.
AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER'
this happens since, in the new version of sacrebleu, there is no `DEFAULT_TOKENIZER`, but sacrebleu.py tries... | 36 | SacreBLEU update
With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken, and getting error.
AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER'
this happens since, in the new version of sacrebleu, there is no `DEFAULT_TOKENIZER`, but... | [
-0.3578206897, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2737 | SacreBLEU update | It seems that next release of `sacrebleu` (v2.0.0) will break our `datasets` implementation to compute it. See my Google Colab: https://colab.research.google.com/drive/1SKmvvjQi6k_3OHsX5NPkZdiaJIfXyv9X?usp=sharing
I'm reopening this Issue and making a Pull Request to fix it. | With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken, and getting error.
AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER'
this happens since, in the new version of sacrebleu, there is no `DEFAULT_TOKENIZER`, but sacrebleu.py tries... | 33 | SacreBLEU update
With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken, and getting error.
AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER'
this happens since, in the new version of sacrebleu, there is no `DEFAULT_TOKENIZER`, but... | [
-0.3739029467, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2737 | SacreBLEU update | > It seems that next release of `sacrebleu` (v2.0.0) will break our `datasets` implementation to compute it. See my Google Colab: https://colab.research.google.com/drive/1SKmvvjQi6k_3OHsX5NPkZdiaJIfXyv9X?usp=sharing
>
> I'm reopening this Issue and making a Pull Request to fix it.
How did you solve it? | With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken, and getting error.
AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER'
this happens since, in the new version of sacrebleu, there is no `DEFAULT_TOKENIZER`, but sacrebleu.py tries... | 41 | SacreBLEU update
With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken, and getting error.
AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER'
this happens since, in the new version of sacrebleu, there is no `DEFAULT_TOKENIZER`, but... | [
-0.3628456593, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2736 | Add Microsoft Building Footprints dataset | Motivation: this can be a useful dataset for researchers working on climate change adaptation, urban studies, geography, etc. I'll see if I can figure out how to add it! | ## Adding a Dataset
- **Name:** Microsoft Building Footprints
- **Description:** With the goal to increase the coverage of building footprint data available as open data for OpenStreetMap and humanitarian efforts, we have released millions of building footprints as open data available to download free of charge.
- *... | 29 | Add Microsoft Building Footprints dataset
## Adding a Dataset
- **Name:** Microsoft Building Footprints
- **Description:** With the goal to increase the coverage of building footprint data available as open data for OpenStreetMap and humanitarian efforts, we have released millions of building footprints as open dat... | [
-0.6230252981, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2730 | Update CommonVoice with new release | Does anybody know if there is a bundled link, which would allow direct data download instead of manual?
Something similar to: `https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/ab.tar.gz` ? cc @patil-suraj
| ## Adding a Dataset
- **Name:** CommonVoice mid-2021 release
- **Description:** more data in CommonVoice: Languages that have increased the most by percentage are Thai (almost 20x growth, from 12 hours to 250 hours), Luganda (almost 9x growth, from 8 to 80), Esperanto (7x growth, from 100 to 840), and Tamil (almost 8... | 25 | Update CommonVoice with new release
## Adding a Dataset
- **Name:** CommonVoice mid-2021 release
- **Description:** more data in CommonVoice: Languages that have increased the most by percentage are Thai (almost 20x growth, from 12 hours to 250 hours), Luganda (almost 9x growth, from 8 to 80), Esperanto (7x growth,... | [
-0.3897119164, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2728 | Concurrent use of same dataset (already downloaded) | Launching simultaneous jobs relying on the same dataset triggers some writing issues. I guess it is unexpected, since I only need to load some already-downloaded files. | ## Describe the bug
When launching several jobs at the same time that load the same dataset, some errors are triggered; see the last comments.
## Steps to reproduce the bug
export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets
for MODEL in "bert-base-uncased" "roberta-base" "distilbert-base-cased"; do # "bert-base-uncased" ... | 27 | Concurrent use of same dataset (already downloaded)
## Describe the bug
When launching several jobs at the same time that load the same dataset, some errors are triggered; see the last comments.
## Steps to reproduce the bug
export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets
for MODEL in "bert-base-uncased" "roberta-bas... | [
-0.5779123902, … ] (embedding vector, truncated in source)
https://github.com/huggingface/datasets/issues/2728 | Concurrent use of same dataset (already downloaded) | If I have two jobs that use the same dataset, I get:
File "compute_measures.py", line 181, in <module>
train_loader, val_loader, test_loader = get_dataloader(args)
File "/gpfsdswork/projects/rech/toto/intRAOcular/dataset_utils.py", line 69, in get_dataloader
dataset_train = load_dataset('paws', "la... | ## Describe the bug
When launching several jobs at the same time loading the same dataset trigger some errors see (last comments).
## Steps to reproduce the bug
export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets
for MODEL in "bert-base-uncased" "roberta-base" "distilbert-base-cased"; do # "bert-base-uncased" ... | 78 | Concurrent use of same dataset (already downloaded)
## Describe the bug
When several jobs are launched at the same time and load the same dataset, some errors are triggered (see the last comments).
## Steps to reproduce the bug
export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets
for MODEL in "bert-base-uncased" "roberta-bas... | [
-0.5779123902, ... (embedding vector truncated) |
https://github.com/huggingface/datasets/issues/2728 | Concurrent use of same dataset (already downloaded) | You can probably find a solution much faster than me (it is my first time using the library). But I suspect some write functions are used when loading the dataset from cache. | ## Describe the bug
When several jobs are launched at the same time and load the same dataset, some errors are triggered (see the last comments).
## Steps to reproduce the bug
export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets
for MODEL in "bert-base-uncased" "roberta-base" "distilbert-base-cased"; do # "bert-base-uncased" ... | 30 | Concurrent use of same dataset (already downloaded)
## Describe the bug
When several jobs are launched at the same time and load the same dataset, some errors are triggered (see the last comments).
## Steps to reproduce the bug
export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets
for MODEL in "bert-base-uncased" "roberta-bas... | [
-0.5779123902, ... (embedding vector truncated) |
https://github.com/huggingface/datasets/issues/2728 | Concurrent use of same dataset (already downloaded) | I have the same issue:
```
Traceback (most recent call last):
File "/dccstor/tslm/envs/anaconda3/envs/trf-a100/lib/python3.9/site-packages/datasets/builder.py", line 652, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/dccstor/tslm/envs/anaconda3/envs/trf-a100/l... | ## Describe the bug
When several jobs are launched at the same time and load the same dataset, some errors are triggered (see the last comments).
## Steps to reproduce the bug
export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets
for MODEL in "bert-base-uncased" "roberta-base" "distilbert-base-cased"; do # "bert-base-uncased" ... | 172 | Concurrent use of same dataset (already downloaded)
## Describe the bug
When several jobs are launched at the same time and load the same dataset, some errors are triggered (see the last comments).
## Steps to reproduce the bug
export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets
for MODEL in "bert-base-uncased" "roberta-bas... | [
-0.5779123902, ... (embedding vector truncated) |
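The concurrent-write failures discussed in these records can often be avoided by serializing the first (cache-writing) load across jobs. A minimal stdlib-only sketch follows — the lock-file path and polling interval are illustrative assumptions, not part of the `datasets` API:

```python
import os
import time

def load_with_lock(lock_path, loader, poll_seconds=0.5):
    """Run `loader` (e.g. a functools.partial wrapping load_dataset) while
    holding an exclusive lock file, so only one job prepares the cache."""
    while True:
        try:
            # O_EXCL makes creation atomic: exactly one process wins the lock.
            fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            break
        except FileExistsError:
            time.sleep(poll_seconds)  # another job holds the lock; wait
    try:
        return loader()
    finally:
        os.close(fd)
        os.remove(lock_path)
```

Once one job has fully written the shared cache, the remaining jobs only read from it and no longer conflict.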
https://github.com/huggingface/datasets/issues/2727 | Error in loading the Arabic Billion Words Corpus | I modified the dataset loading script to catch the `IndexError` and inspect the records at which the error is happening, and I found this:
For the `Techreen` config, the error happens in 36 records when trying to find the `Text` or `Dateline` tags. All these 36 records look something like:
```
<Techreen>
<ID>TRN_A... | ## Describe the bug
I get `IndexError: list index out of range` when trying to load the `Techreen` and `Almustaqbal` configs of the dataset.
## Steps to reproduce the bug
```python
load_dataset("arabic_billion_words", "Techreen")
load_dataset("arabic_billion_words", "Almustaqbal")
```
## Expected results
Th... | 128 | Error in loading the Arabic Billion Words Corpus
## Describe the bug
I get `IndexError: list index out of range` when trying to load the `Techreen` and `Almustaqbal` configs of the dataset.
## Steps to reproduce the bug
```python
load_dataset("arabic_billion_words", "Techreen")
load_dataset("arabic_billion_wor... | [
-0.1755404025, ... (embedding vector truncated) |
https://github.com/huggingface/datasets/issues/2727 | Error in loading the Arabic Billion Words Corpus | Thanks @M-Salti for reporting this issue and for your investigation.
Indeed, those `IndexError`s should be caught and the corresponding records should be ignored.
I'm opening a Pull Request to fix it. | ## Describe the bug
I get `IndexError: list index out of range` when trying to load the `Techreen` and `Almustaqbal` configs of the dataset.
## Steps to reproduce the bug
```python
load_dataset("arabic_billion_words", "Techreen")
load_dataset("arabic_billion_words", "Almustaqbal")
```
## Expected results
Th... | 31 | Error in loading the Arabic Billion Words Corpus
## Describe the bug
I get `IndexError: list index out of range` when trying to load the `Techreen` and `Almustaqbal` configs of the dataset.
## Steps to reproduce the bug
```python
load_dataset("arabic_billion_words", "Techreen")
load_dataset("arabic_billion_wor... | [
-0.1877753586, ... (embedding vector truncated) |
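The fix direction described above — ignoring records whose tags are missing instead of raising — can be sketched as follows. The tag-extraction helper is a simplified stand-in for the dataset's real loading script, not taken from it:

```python
import re

def extract_tag(record: str, tag: str) -> str:
    """Return the text inside <tag>...</tag>, or raise like the original
    positional lookup did when the tag is absent."""
    match = re.search(rf"<{tag}>(.*?)</{tag}>", record, re.DOTALL)
    if match is None:
        raise IndexError(f"missing <{tag}> in record")
    return match.group(1).strip()

def parse_records(records):
    parsed = []
    for record in records:
        try:
            parsed.append({"url": extract_tag(record, "URL"),
                           "text": extract_tag(record, "Text")})
        except IndexError:
            continue  # malformed record (e.g. no <Text>): skip it
    return parsed
```

This mirrors the Pull Request's approach: malformed records (such as the 36 `Techreen` entries without `Text`/`Dateline` tags) are dropped rather than crashing the whole load.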
https://github.com/huggingface/datasets/issues/2724 | 404 Error when loading remote data files from private repo | I guess the issue is when computing the ETags of the remote files. Indeed `use_auth_token` must be passed to `request_etags` here:
https://github.com/huggingface/datasets/blob/35b5e4bc0cb2ed896e40f3eb2a4aa3de1cb1a6c5/src/datasets/builder.py#L160-L160 | ## Describe the bug
When loading remote data files from a private repo, a 404 error is raised.
## Steps to reproduce the bug
```python
url = hf_hub_url("lewtun/asr-preds-test", "preds.jsonl", repo_type="dataset")
dset = load_dataset("json", data_files=url, use_auth_token=True)
# HTTPError: 404 Client Error: Not... | 22 | 404 Error when loading remote data files from private repo
## Describe the bug
When loading remote data files from a private repo, a 404 error is raised.
## Steps to reproduce the bug
```python
url = hf_hub_url("lewtun/asr-preds-test", "preds.jsonl", repo_type="dataset")
dset = load_dataset("json", data_files=... | [
0.0871630982, ... (embedding vector truncated) |
https://github.com/huggingface/datasets/issues/2724 | 404 Error when loading remote data files from private repo | Yes, I remember having properly implemented that:
- https://github.com/huggingface/datasets/commit/7a9c62f7cef9ecc293f629f859d4375a6bd26dc8#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R160
- https://github.com/huggingface/datasets/pull/2628/commits/6350a03b4b830339a745f7b1da46ece784ca734c
... | ## Describe the bug
When loading remote data files from a private repo, a 404 error is raised.
## Steps to reproduce the bug
```python
url = hf_hub_url("lewtun/asr-preds-test", "preds.jsonl", repo_type="dataset")
dset = load_dataset("json", data_files=url, use_auth_token=True)
# HTTPError: 404 Client Error: Not... | 18 | 404 Error when loading remote data files from private repo
## Describe the bug
When loading remote data files from a private repo, a 404 error is raised.
## Steps to reproduce the bug
```python
url = hf_hub_url("lewtun/asr-preds-test", "preds.jsonl", repo_type="dataset")
dset = load_dataset("json", data_files=... | [
0.1519909352, ... (embedding vector truncated) |
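A hedged sketch of the fix being discussed — threading `use_auth_token` through to the ETag requests so private-repo files authenticate. The function and header shapes below are simplified stand-ins, not the real `datasets` internals:

```python
def auth_headers(use_auth_token):
    """Build the authorization header to send with HEAD/ETag requests."""
    if not use_auth_token:
        return {}
    # True would normally mean "read the token from the local HF config";
    # here only an explicit token string is handled, for illustration.
    token = use_auth_token if isinstance(use_auth_token, str) else "<local-token>"
    return {"authorization": f"Bearer {token}"}

def request_etags(urls, fetch_etag, use_auth_token=None):
    # fetch_etag is an injected callable (url, headers) -> etag
    headers = auth_headers(use_auth_token)
    return [fetch_etag(url, headers=headers) for url in urls]
```

The bug report amounts to `request_etags` being called without `use_auth_token`, so private files returned 404.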
https://github.com/huggingface/datasets/issues/2722 | Missing cache file | This could be solved by going to the glue/ directory and deleting the sst2 directory; loading the dataset again will then redownload it. | Strangely missing cache file after I restart my program again.
`glue_dataset = datasets.load_dataset('glue', 'sst2')`
`FileNotFoundError: [Errno 2] No such file or directory: /Users/chris/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96d6053ad/dataset_info.json... | 25 | Missing cache file
Strangely missing cache file after I restart my program again.
`glue_dataset = datasets.load_dataset('glue', 'sst2')`
`FileNotFoundError: [Errno 2] No such file or directory: /Users/chris/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96d605... | [
-0.1489496082, ... (embedding vector truncated) |
https://github.com/huggingface/datasets/issues/2722 | Missing cache file | Hi ! Not sure why this file was missing, but yes the way to fix this is to delete the sst2 directory and to reload the dataset | Strangely missing cache file after I restart my program again.
`glue_dataset = datasets.load_dataset('glue', 'sst2')`
`FileNotFoundError: [Errno 2] No such file or directory: /Users/chris/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96d6053ad/dataset_info.json... | 27 | Missing cache file
Strangely missing cache file after I restart my program again.
`glue_dataset = datasets.load_dataset('glue', 'sst2')`
`FileNotFoundError: [Errno 2] No such file or directory: /Users/chris/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96d605... | [
-0.0967949629, ... (embedding vector truncated) |
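The suggested fix — deleting the broken `sst2` cache directory so the next load re-downloads it — can be scripted. A sketch, assuming the default cache layout under `~/.cache/huggingface/datasets` (adjust the root if `HF_DATASETS_CACHE` is set):

```python
import os
import shutil

def purge_cached_config(cache_root, dataset, config):
    """Delete e.g. <cache_root>/glue/sst2 so the next load_dataset call
    re-downloads it.  Returns True if something was removed."""
    target = os.path.join(cache_root, dataset, config)
    if os.path.isdir(target):
        shutil.rmtree(target)
        return True
    return False
```

Typical use would be `purge_cached_config(os.path.expanduser("~/.cache/huggingface/datasets"), "glue", "sst2")` followed by `load_dataset('glue', 'sst2')`.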
https://github.com/huggingface/datasets/issues/2716 | Calling shuffle on IterableDataset will disable batching in case any functions were mapped | Hi :) Good catch ! Feel free to open a PR if you want to contribute, this would be very welcome ;) | When using a dataset in streaming mode, if one applies the `shuffle` method on the dataset and a `map` method for which `batched=True`, then the batching operation will not happen; instead `batched` will be set to `False`
I did RCA on the dataset codebase; the problem emerges from [this line of code](https://github.com/h... | 22 | Calling shuffle on IterableDataset will disable batching in case any functions were mapped
When using a dataset in streaming mode, if one applies the `shuffle` method on the dataset and a `map` method for which `batched=True`, then the batching operation will not happen; instead `batched` will be set to `False`
I did RCA o... | [
-0.4098217487, ... (embedding vector truncated) |
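The root cause described in this record — the `batched` flag being reset when `shuffle` rebuilds the list of mapped functions — can be illustrated with simplified stand-ins. These dataclasses are not the real `datasets` internals, just a sketch of the failure mode and its fix:

```python
from dataclasses import dataclass, replace
from typing import Callable

@dataclass(frozen=True)
class MapTransform:
    function: Callable
    batched: bool = False

def rebuild_transforms_buggy(transforms):
    # the reported bug: only the function is carried over,
    # so `batched` silently falls back to its default (False)
    return [MapTransform(t.function) for t in transforms]

def rebuild_transforms_fixed(transforms):
    # the fix: copy every field, preserving batched=True
    return [replace(t) for t in transforms]
```

Any code path that reconstructs transforms (as `shuffle` does here) has to copy all of their parameters, not just the callables.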
https://github.com/huggingface/datasets/issues/2714 | add more precise information for size | We already have this information in the dataset_infos.json files of each dataset.
Maybe we can parse these files in the backend to return their content with the endpoint at huggingface.co/api/datasets
For now if you want to access this info you have to load the json for each dataset. For example:
- for a dataset o... | For the import into ELG, we would like a more precise description of the size of the dataset, instead of the current size categories. The size can be expressed in bytes, or any other preferred size unit. As suggested in the slack channel, perhaps this could be computed with a regex for existing datasets. | 71 | add more precise information for size
For the import into ELG, we would like a more precise description of the size of the dataset, instead of the current size categories. The size can be expressed in bytes, or any other preferred size unit. As suggested in the slack channel, perhaps this could be computed with a reg... | [
-0.0413543284, ... (embedding vector truncated) |
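The workaround mentioned above — loading the JSON per dataset — can be sketched as follows; the two size keys mirror the `download_size`/`dataset_size` fields found in `dataset_infos.json`, and the flat per-config layout is an assumption for illustration:

```python
import json

def sizes_from_infos(infos_json_text):
    """Return per-config byte sizes from a dataset_infos.json payload."""
    infos = json.loads(infos_json_text)
    return {config: {"download_size": meta.get("download_size"),
                     "dataset_size": meta.get("dataset_size")}
            for config, meta in infos.items()}
```

This gives exact byte counts instead of coarse size categories, which is what the ELG import needs.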
https://github.com/huggingface/datasets/issues/2709 | Missing documentation for wnut_17 (ner_tags) | Hi @maxpel, thanks for reporting this issue.
Indeed, the documentation in the dataset card is not complete. I’m opening a Pull Request to fix it.
As the paper explains, there are 6 entity types and we have ordered them alphabetically: `corporation`, `creative-work`, `group`, `location`, `person` and `product`.
... | On the info page of the wnut_17 data set (https://huggingface.co/datasets/wnut_17), the model output of ner-tags is only documented for these 5 cases:
`ner_tags: a list of classification labels, with possible values including O (0), B-corporation (1), I-corporation (2), B-creative-work (3), I-creative-work (4).`
... | 145 | Missing documentation for wnut_17 (ner_tags)
On the info page of the wnut_17 data set (https://huggingface.co/datasets/wnut_17), the model output of ner-tags is only documented for these 5 cases:
`ner_tags: a list of classification labels, with possible values including O (0), B-corporation (1), I-corporation (2),... | [
0.275123626, ... (embedding vector truncated) |
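The full tag set implied by the comment (six entity types, ordered alphabetically, BIO scheme) can be generated programmatically — a small sketch, not taken from the dataset script itself:

```python
ENTITY_TYPES = ["corporation", "creative-work", "group",
                "location", "person", "product"]

def bio_labels(entity_types):
    labels = ["O"]  # O (0), then B-/I- pairs in the stated order
    for entity in entity_types:
        labels += [f"B-{entity}", f"I-{entity}"]
    return labels
```

This reproduces the five documented values (`O` (0) through `I-creative-work` (4)) and continues through `I-product` (12), thirteen labels in total.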
https://github.com/huggingface/datasets/issues/2708 | QASC: incomplete training set | Hi @danyaljj, thanks for reporting.
Unfortunately, I have not been able to reproduce your problem. My train split has 8134 examples:
```ipython
In [10]: ds["train"]
Out[10]:
Dataset({
features: ['id', 'question', 'choices', 'answerKey', 'fact1', 'fact2', 'combinedfact', 'formatted_question'],
num_rows:... | ## Describe the bug
The training instances are not loaded properly.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("qasc", script_version='1.10.2')
def load_instances(split):
instances = dataset[split]
print(f"split: {split} - size: {len(instanc... | 496 | QASC: incomplete training set
## Describe the bug
The training instances are not loaded properly.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("qasc", script_version='1.10.2')
def load_instances(split):
instances = dataset[split]
print(f"sp... | [
-0.2450180203, ... (embedding vector truncated) |
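A quick sanity check along the lines of the comparison above — verifying that a loaded split has the expected number of examples (8134 for the QASC train split, per the maintainer's output). A trivial sketch, not a `datasets` API:

```python
def check_split_size(split, expected_rows):
    """Raise if a loaded split (anything with len()) has the wrong size."""
    actual = len(split)
    if actual != expected_rows:
        raise ValueError(f"split has {actual} rows, expected {expected_rows}")
    return actual
```

Running such a check right after `load_dataset` makes incomplete downloads fail loudly instead of silently training on fewer examples.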
https://github.com/huggingface/datasets/issues/2707 | 404 Not Found Error when loading LAMA dataset | Hi @dwil2444! I was able to reproduce your error when I downgraded to v1.1.2. Updating to the latest version of Datasets fixed the error for me :) | The [LAMA](https://huggingface.co/datasets/viewer/?dataset=lama) probing dataset is not available for download:
Steps to Reproduce:
1. `from datasets import load_dataset`
2. `dataset = load_dataset('lama', 'trex')`.
Results:
`FileNotFoundError: Couldn't find file locally at lama/lama.py, or remotely ... | 27 | 404 Not Found Error when loading LAMA dataset
The [LAMA](https://huggingface.co/datasets/viewer/?dataset=lama) probing dataset is not available for download:
Steps to Reproduce:
1. `from datasets import load_dataset`
2. `dataset = load_dataset('lama', 'trex')`.
Results:
`FileNotFoundError: Couldn't... | [
0.0275760554, ... (embedding vector truncated) |
https://github.com/huggingface/datasets/issues/2707 | 404 Not Found Error when loading LAMA dataset | Hi @dwil2444, thanks for reporting.
Could you please confirm which `datasets` version you were using and if the problem persists after you update it to the latest version: `pip install -U datasets`?
Thanks @stevhliu for the hint to fix this! ;) | The [LAMA](https://huggingface.co/datasets/viewer/?dataset=lama) probing dataset is not available for download:
Steps to Reproduce:
1. `from datasets import load_dataset`
2. `dataset = load_dataset('lama', 'trex')`.
Results:
`FileNotFoundError: Couldn't find file locally at lama/lama.py, or remotely ... | 41 | 404 Not Found Error when loading LAMA dataset
The [LAMA](https://huggingface.co/datasets/viewer/?dataset=lama) probing dataset is not available for download:
Steps to Reproduce:
1. `from datasets import load_dataset`
2. `dataset = load_dataset('lama', 'trex')`.
Results:
`FileNotFoundError: Couldn't... | [
0.0376230143, ... (embedding vector truncated) |
https://github.com/huggingface/datasets/issues/2707 | 404 Not Found Error when loading LAMA dataset | @stevhliu @albertvillanova updating to the latest version of datasets did in fact fix this issue. Thanks a lot for your help! | The [LAMA](https://huggingface.co/datasets/viewer/?dataset=lama) probing dataset is not available for download:
Steps to Reproduce:
1. `from datasets import load_dataset`
2. `dataset = load_dataset('lama', 'trex')`.
Results:
`FileNotFoundError: Couldn't find file locally at lama/lama.py, or remotely ... | 21 | 404 Not Found Error when loading LAMA dataset
The [LAMA](https://huggingface.co/datasets/viewer/?dataset=lama) probing dataset is not available for download:
Steps to Reproduce:
1. `from datasets import load_dataset`
2. `dataset = load_dataset('lama', 'trex')`.
Results:
`FileNotFoundError: Couldn't... | [
0.0579443052, ... (embedding vector truncated) |
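Since the resolution here was simply upgrading `datasets` (the bug reproduced on v1.1.2 and disappeared on the latest release), a small helper to compare version strings before loading can save a confusing 404. A sketch assuming purely numeric dotted versions:

```python
def version_at_least(installed: str, minimum: str) -> bool:
    """Plain tuple comparison of dotted numeric versions, e.g. '1.11.0'."""
    def parts(v):
        return tuple(int(piece) for piece in v.split("."))
    return parts(installed) >= parts(minimum)
```

One could then check `version_at_least(datasets.__version__, "1.9.0")` and advise `pip install -U datasets` when it fails.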
https://github.com/huggingface/datasets/issues/2705 | 404 not found error on loading WIKIANN dataset | Hi @ronbutan, thanks for reporting.
You are right: we have recently found that the link to the original PAN-X dataset (also called WikiANN), hosted at Dropbox, is no longer working.
We have opened an issue in the GitHub repository of the original dataset (afshinrahimi/mmner#4) and we have also contacted the autho... | ## Describe the bug
Unable to retrieve wikiann English dataset
## Steps to reproduce the bug
```python
from datasets import list_datasets, load_dataset, list_metrics, load_metric
WIKIANN = load_dataset("wikiann","en")
```
## Expected results
Colab notebook should display successful download status
## Act... | 94 | 404 not found error on loading WIKIANN dataset
## Describe the bug
Unable to retrieve wikiann English dataset
## Steps to reproduce the bug
```python
from datasets import list_datasets, load_dataset, list_metrics, load_metric
WIKIANN = load_dataset("wikiann","en")
```
## Expected results
Colab notebook sh... | [
-0.1983267367, ... (embedding vector truncated) |
https://github.com/huggingface/datasets/issues/2700 | from datasets import Dataset is failing | Hi @kswamy15, thanks for reporting.
We are fixing this critical issue and making an urgent patch release of the `datasets` library today.
In the meantime, you can circumvent this issue by updating the `tqdm` library: `!pip install -U tqdm` | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import Dataset
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or... | 39 | from datasets import Dataset is failing
## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import Dataset
```
## Expected results
A clear and concise description of the expected results.
## Ac... | [
-0.4085455239, ... (embedding vector truncated) |
https://github.com/huggingface/datasets/issues/2699 | cannot combine splits merging and streaming? | Hi ! That's missing indeed. We'll try to implement this for the next version :)
I guess we just need to implement #2564 first, and then we should be able to add support for split combinations | this does not work:
`dataset = datasets.load_dataset('mc4','iw',split='train+validation',streaming=True)`
with error:
`ValueError: Bad split: train+validation. Available splits: ['train', 'validation']`
these work:
`dataset = datasets.load_dataset('mc4','iw',split='train+validation')`
`dataset = datasets.load_d... | 36 | cannot combine splits merging and streaming?
this does not work:
`dataset = datasets.load_dataset('mc4','iw',split='train+validation',streaming=True)`
with error:
`ValueError: Bad split: train+validation. Available splits: ['train', 'validation']`
these work:
`dataset = datasets.load_dataset('mc4','iw',split='... | [
-0.5081489086, ... (embedding vector truncated) |
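Until split combinations are supported in streaming mode, the same effect can be emulated by chaining the individually streamed splits. A sketch of the workaround — not a `datasets` feature:

```python
from itertools import chain

def combined_stream(*splits):
    """Yield examples from each streaming split in order,
    emulating split='train+validation' for iterable datasets."""
    return chain.from_iterable(splits)
```

For example, one could pass the two streaming datasets obtained with `split='train'` and `split='validation'` and iterate over the combined generator.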
https://github.com/huggingface/datasets/issues/2695 | Cannot import load_dataset on Colab | I'm facing the same issue on Colab today too.
```
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-4-5833ac0f5437> in <module>()
3
4 from ray import tune
----> 5 from datasets import DatasetDict, Dataset
6 from datasets import load_dataset, load_metr... | ## Describe the bug
Got tqdm concurrent module not found error during importing load_dataset from datasets.
## Steps to reproduce the bug
Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error
On colab:
```python
!pip install dataset... | 111 | Cannot import load_dataset on Colab
## Describe the bug
Got tqdm concurrent module not found error during importing load_dataset from datasets.
## Steps to reproduce the bug
Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error
On c... | [
-0.4603254795, ... (embedding vector truncated) |
https://github.com/huggingface/datasets/issues/2695 | Cannot import load_dataset on Colab | @phosseini
I think it is related to [1.10.0](https://github.com/huggingface/datasets/actions/runs/1052653701) release done 3 hours ago. (cc: @lhoestq )
For now I just downgraded to 1.9.0 and it is working fine. | ## Describe the bug
Got tqdm concurrent module not found error during importing load_dataset from datasets.
## Steps to reproduce the bug
Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error
On colab:
```python
!pip install dataset... | 28 | Cannot import load_dataset on Colab
## Describe the bug
Got tqdm concurrent module not found error during importing load_dataset from datasets.
## Steps to reproduce the bug
Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error
On c... | [
-0.4603254795, ... (embedding vector truncated) |
https://github.com/huggingface/datasets/issues/2695 | Cannot import load_dataset on Colab | > @phosseini
> I think it is related to [1.10.0](https://github.com/huggingface/datasets/actions/runs/1052653701) release done 3 hours ago. (cc: @lhoestq )
> For now I just downgraded to 1.9.0 and it is working fine.
Same here, downgraded to 1.9.0 for now and works fine. | ## Describe the bug
Got tqdm concurrent module not found error during importing load_dataset from datasets.
## Steps to reproduce the bug
Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error
On colab:
```python
!pip install dataset... | 41 | Cannot import load_dataset on Colab
## Describe the bug
Got tqdm concurrent module not found error during importing load_dataset from datasets.
## Steps to reproduce the bug
Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error
On c... | [
-0.4603254795, ... (embedding vector truncated) |
https://github.com/huggingface/datasets/issues/2695 | Cannot import load_dataset on Colab | Hi,
updating tqdm to the newest version resolves the issue for me. You can do this as follows in Colab:
```
!pip install tqdm --upgrade
``` | ## Describe the bug
Got tqdm concurrent module not found error during importing load_dataset from datasets.
## Steps to reproduce the bug
Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error
On colab:
```python
!pip install dataset... | 26 | Cannot import load_dataset on Colab
## Describe the bug
Got tqdm concurrent module not found error during importing load_dataset from datasets.
## Steps to reproduce the bug
Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error
On c... | [
-0.4603254795, ... (embedding vector truncated) |
https://github.com/huggingface/datasets/issues/2695 | Cannot import load_dataset on Colab | Hi @bayartsogt-ya and @phosseini, thanks for reporting.
We are fixing this critical issue and making an urgent patch release of the `datasets` library today.
In the meantime, as pointed out by @mariosasko, you can circumvent this issue by updating the `tqdm` library:
```
!pip install -U tqdm
``` | ## Describe the bug
Got tqdm concurrent module not found error during importing load_dataset from datasets.
## Steps to reproduce the bug
Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error
On colab:
```python
!pip install dataset... | 48 | Cannot import load_dataset on Colab
## Describe the bug
Got tqdm concurrent module not found error during importing load_dataset from datasets.
## Steps to reproduce the bug
Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error
On c... | [
-0.4603254795, ... (embedding vector truncated) |
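The incompatibility in these records comes from `datasets` 1.10.0 importing a module that only newer `tqdm` releases ship. A defensive check before importing `datasets` — a sketch, with the submodule name inferred from the reported "tqdm concurrent module not found" error:

```python
import importlib.util

def tqdm_has_concurrent() -> bool:
    """True if tqdm.contrib.concurrent is importable (tqdm is new enough)."""
    try:
        return importlib.util.find_spec("tqdm.contrib.concurrent") is not None
    except ModuleNotFoundError:  # tqdm itself is missing
        return False
```

If the check returns False, the fixes above apply: `pip install -U tqdm`, or upgrade `datasets` once the patch release is out.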
https://github.com/huggingface/datasets/issues/2691 | xtreme / pan-x cannot be downloaded | Hi @severo, thanks for reporting.
However I have not been able to reproduce this issue. Could you please confirm if the problem persists for you?
Maybe Dropbox (where the data source is hosted) was temporarily unavailable when you tried. | ## Describe the bug
Dataset xtreme / pan-x cannot be loaded
Seems related to https://github.com/huggingface/datasets/pull/2326
## Steps to reproduce the bug
```python
dataset = load_dataset("xtreme", "PAN-X.fr")
```
## Expected results
Load the dataset
## Actual results
```
FileNotFoundError:... | 39 | xtreme / pan-x cannot be downloaded
## Describe the bug
Dataset xtreme / pan-x cannot be loaded
Seems related to https://github.com/huggingface/datasets/pull/2326
## Steps to reproduce the bug
```python
dataset = load_dataset("xtreme", "PAN-X.fr")
```
## Expected results
Load the dataset
## Act... | [
-0.3526867032, ... (embedding vector truncated) |
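Given the maintainer's suggestion that the Dropbox host may only have been temporarily unavailable, a retry wrapper with a short backoff is a reasonable client-side mitigation. A sketch, catching the `FileNotFoundError` shown in the report; the attempt count and delay are arbitrary choices:

```python
import time

def load_with_retries(loader, attempts=3, delay_seconds=2.0):
    """Call `loader` (e.g. a closure around load_dataset) up to `attempts`
    times, sleeping between failures, then re-raise if it never succeeds."""
    for attempt in range(attempts):
        try:
            return loader()
        except FileNotFoundError:
            if attempt == attempts - 1:
                raise  # still failing: the outage is probably not transient
            time.sleep(delay_seconds)
```

If every attempt fails, the original error propagates, which is the right signal that the data source really is down.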