| html_url | title | comments | body | comment_length | text | embeddings |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/3310 | Fatal error condition occurred in aws-c-io | I have also had this issue for a few days, when running scripts using PyCharm in particular, but it does not seem to prevent the script from running; it only reports this error at the end of the run. | ## Describe the bug
Fatal error when using the library
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
```
## Expected results
No fatal errors
## Actual results
```
Fatal error condition occurred in D:\bld\aws-c-io_1633633258269\work\sou... | 38 | Fatal error condition occurred in aws-c-io
## Describe the bug
Fatal error when using the library
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
```
## Expected results
No fatal errors
## Actual results
```
Fatal error condition occur... | [
-0.4791854024,
-0.142034784,
-0.1136764213,
-0.0025797929,
0.0447750874,
-0.066775687,
0.1985339224,
0.1818035543,
0.275020957,
0.1701408029,
0.2111242861,
0.5673761964,
-0.043479763,
0.1107322723,
-0.128939718,
-0.0115248915,
0.1479744762,
0.268906951,
-0.3732091784,
0.2062390... |
https://github.com/huggingface/datasets/issues/3310 | Fatal error condition occurred in aws-c-io | I also get this issue. It appears after my script has finished running. I get the following error message
```
Fatal error condition occurred in /home/conda/feedstock_root/build_artifacts/aws-c-io_1637179816120/work/source/event_loop.c:72: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_gro... | ## Describe the bug
Fatal error when using the library
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
```
## Expected results
No fatal errors
## Actual results
```
Fatal error condition occurred in D:\bld\aws-c-io_1633633258269\work\sou... | 95 | Fatal error condition occurred in aws-c-io
## Describe the bug
Fatal error when using the library
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
```
## Expected results
No fatal errors
## Actual results
```
Fatal error condition occur... | [
-0.5238192081,
0.0124545572,
-0.1510106623,
0.0529181547,
0.1022881716,
-0.056282755,
0.2720828652,
0.1647465974,
0.1394755095,
0.2164282054,
0.1449788809,
0.6392527819,
0.0272323303,
0.2679427564,
-0.1522212029,
0.0486653782,
0.0849769935,
0.2248355597,
-0.5182912946,
0.277628... |
https://github.com/huggingface/datasets/issues/3310 | Fatal error condition occurred in aws-c-io | @CallumMcMahon Do you have a small reproducer for this problem on Linux? I can reproduce this on Windows but sadly not on Linux. | ## Describe the bug
Fatal error when using the library
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
```
## Expected results
No fatal errors
## Actual results
```
Fatal error condition occurred in D:\bld\aws-c-io_1633633258269\work\sou... | 23 | Fatal error condition occurred in aws-c-io
## Describe the bug
Fatal error when using the library
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
```
## Expected results
No fatal errors
## Actual results
```
Fatal error condition occur... | [
-0.4971722364,
-0.2210783511,
-0.0642463043,
0.1765671074,
0.0318543613,
-0.1087292433,
0.1770700812,
0.0373220481,
0.2282176167,
0.225035727,
0.1644743532,
0.6636955142,
-0.0692413151,
0.1272047609,
-0.1631724089,
-0.1083404645,
0.1769797504,
0.1298456937,
-0.6801725626,
0.082... |
https://github.com/huggingface/datasets/issues/3308 | "dataset_infos.json" missing for chr_en and mc4 | Hi ! Thanks for reporting :)
We can easily add the metadata for `chr_en` IMO, but for mC4 it will take more time, since it requires counting the number of examples in each language | ## Describe the bug
In the repository, every dataset has its metadata in a file called `dataset_infos.json`. But this file is missing for two datasets: `chr_en` and `mc4`.
## Steps to reproduce the bug
Check [chr_en](https://github.com/huggingface/datasets/tree/master/datasets/chr_en) and [mc4](https://github.com/... | 35 | "dataset_infos.json" missing for chr_en and mc4
## Describe the bug
In the repository, every dataset has its metadata in a file called `dataset_infos.json`. But this file is missing for two datasets: `chr_en` and `mc4`.
## Steps to reproduce the bug
Check [chr_en](https://github.com/huggingface/datasets/tree/mas... | [
0.1193215549,
-0.1811270714,
-0.0279059187,
0.3837454915,
0.2158334255,
0.2018749863,
-0.0541832,
0.1573790461,
-0.3296636939,
0.248790428,
0.0808397904,
0.2499098778,
-0.0000339306,
0.0638483241,
0.0916150659,
-0.2367414087,
0.1538734585,
0.1002095193,
-0.1073533967,
-0.152432... |
https://github.com/huggingface/datasets/issues/3308 | "dataset_infos.json" missing for chr_en and mc4 | No problem. I am trying to do some analysis on the metadata of all available datasets. Is reading `metadata_infos.json` for each dataset the correct way to go?
I noticed that the same information is also available as special variables inside .py file of each dataset. So, I was wondering if `metadata_infos.json` has... | ## Describe the bug
In the repository, every dataset has its metadata in a file called `dataset_infos.json`. But this file is missing for two datasets: `chr_en` and `mc4`.
## Steps to reproduce the bug
Check [chr_en](https://github.com/huggingface/datasets/tree/master/datasets/chr_en) and [mc4](https://github.com/... | 55 | "dataset_infos.json" missing for chr_en and mc4
## Describe the bug
In the repository, every dataset has its metadata in a file called `dataset_infos.json`. But this file is missing for two datasets: `chr_en` and `mc4`.
## Steps to reproduce the bug
Check [chr_en](https://github.com/huggingface/datasets/tree/mas... | [
0.2620973885,
-0.1249730512,
-0.1152999774,
0.4680540562,
0.0923609883,
0.1910523921,
0.0018954643,
0.2054129988,
-0.4593905509,
-0.0279044807,
-0.0281205755,
0.2110625207,
-0.1727291644,
-0.1559252888,
-0.1255878359,
0.0337350182,
0.2851157486,
0.0156849157,
0.1599001139,
-0.0... |
https://github.com/huggingface/datasets/issues/3308 | "dataset_infos.json" missing for chr_en and mc4 | The `dataset_infos.json` files have more information and are made to be used to analyze the datasets without having to run/parse the python scripts. Moreover, some datasets on the Hugging Face Hub don't even have a python script, and for those we'll make tools to generate the JSON file automatically :) | ## Describe the bug
In the repository, every dataset has its metadata in a file called `dataset_infos.json`. But this file is missing for two datasets: `chr_en` and `mc4`.
## Steps to reproduce the bug
Check [chr_en](https://github.com/huggingface/datasets/tree/master/datasets/chr_en) and [mc4](https://github.com/... | 50 | "dataset_infos.json" missing for chr_en and mc4
## Describe the bug
In the repository, every dataset has its metadata in a file called `dataset_infos.json`. But this file is missing for two datasets: `chr_en` and `mc4`.
## Steps to reproduce the bug
Check [chr_en](https://github.com/huggingface/datasets/tree/mas... | [
0.1424958259,
-0.3361807764,
-0.0471009165,
0.2993091345,
0.1907495558,
0.1461185366,
-0.0039605163,
0.2632821798,
-0.1192552075,
0.2591179013,
0.0356880352,
0.2333019227,
-0.0015295641,
0.1996409744,
0.0464101322,
-0.1968249679,
0.0915271938,
0.1346552968,
-0.0754049793,
-0.10... |
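For the metadata-analysis use case discussed in this thread, a minimal sketch of reading a `dataset_infos.json` file directly is shown below; the local checkout path and the printed fields are illustrative assumptions, not details taken from the thread.

```python
import json

# Hypothetical path to a local checkout of the datasets repository
path = "datasets/ag_news/dataset_infos.json"

with open(path, encoding="utf-8") as f:
    infos = json.load(f)  # one entry per dataset configuration

for config_name, info in infos.items():
    # Each entry holds the features, splits, sizes, etc. mentioned in the comment above
    print(config_name, list(info.get("splits", {})))
```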
https://github.com/huggingface/datasets/issues/3304 | Dataset object has no attribute `to_tf_dataset` | The issue is due to the older version of transformers and datasets. It has been resolved by upgrading their versions.
```
# upgrade transformers and datasets to latest versions
!pip install --upgrade transformers
!pip install --upgrade datasets
```
Regards! | I am following HuggingFace Course. I am at Fine-tuning a model.
Link: https://huggingface.co/course/chapter3/2?fw=tf
I use tokenize_function and `map` as mentioned in the course to process data.
`# define a tokenize function`
`def Tokenize_function(example):`
` return tokenizer(example['sentence'], truncat... | 39 | Dataset object has no attribute `to_tf_dataset`
I am following HuggingFace Course. I am at Fine-tuning a model.
Link: https://huggingface.co/course/chapter3/2?fw=tf
I use tokenize_function and `map` as mentioned in the course to process data.
`# define a tokenize function`
`def Tokenize_function(example):`
... | [
-0.2293262035,
-0.0800562575,
0.0744439662,
0.1120376214,
0.6455809474,
0.2319770604,
0.1128883958,
0.2750467956,
-0.1273372471,
-0.0271429773,
-0.0409887768,
0.3272254467,
-0.3555476069,
0.3133002818,
0.173726812,
-0.2345678806,
0.1084550396,
0.0091280574,
-0.0873258784,
-0.08... |
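A hedged sketch of the fix discussed above: after upgrading `transformers` and `datasets`, `Dataset.to_tf_dataset` becomes available. The checkpoint, columns, and batch size below follow the Hugging Face course example and are assumptions, not values taken from this thread.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding

checkpoint = "bert-base-uncased"  # assumed checkpoint, as in the course
raw_datasets = load_dataset("glue", "mrpc")
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

def tokenize_function(example):
    return tokenizer(example["sentence1"], example["sentence2"], truncation=True)

tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")

# to_tf_dataset only exists on recent `datasets` releases, hence the upgrade above
tf_train_dataset = tokenized_datasets["train"].to_tf_dataset(
    columns=["attention_mask", "input_ids", "token_type_ids"],
    label_cols=["labels"],
    shuffle=True,
    collate_fn=data_collator,
    batch_size=8,
)
```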
https://github.com/huggingface/datasets/issues/3303 | DataCollatorWithPadding: TypeError |
>
> Input:
>
> ```
> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
> data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
> ```
>
> Output:
>
> ```
> TypeError Traceback (most recent call last)
> /tmp/ipykernel_42/1563280798.py in <m... | Hi,
I am following the HuggingFace course. I am now at Fine-tuning [https://huggingface.co/course/chapter3/3?fw=tf](https://huggingface.co/course/chapter3/3?fw=tf). When I set up `DataCollatorWithPadding` as following I got an error while trying to reproduce the course code in Kaggle. This error occurs with either a ... | 100 | DataCollatorWithPadding: TypeError
Hi,
I am following the HuggingFace course. I am now at Fine-tuning [https://huggingface.co/course/chapter3/3?fw=tf](https://huggingface.co/course/chapter3/3?fw=tf). When I set up `DataCollatorWithPadding` as following I got an error while trying to reproduce the course code in Kag... | [
0.0907466263,
-0.4077562094,
0.1217752174,
0.270572722,
0.399409622,
-0.1075448394,
0.3337470293,
0.2486715019,
-0.1562586725,
0.2049702853,
0.2183446139,
0.3773719668,
-0.1649317741,
0.1004556492,
-0.1292618066,
-0.246628508,
0.0724818334,
0.0992895141,
0.1067004874,
0.0274561... |
https://github.com/huggingface/datasets/issues/3300 | ❓ Dataset loading script from Hugging Face Hub | Hi ! In the next version of `datasets`, your train and test splits will be correctly separated (changes from #3027) if you create a dataset repository with only your CSV files.
Also it seems that you overwrite the `data_files` and `data_dir` arguments in your code, when you instantiate the AGNewsConfig objects. Thos... | Hi there,
I am trying to add my custom `ag_news` with its own loading script on the Hugging Face datasets hub. In particular, I would like to test the addition of a second configuration to the existing `ag_news` dataset. Once it works in my hub, I plan to make a PR to the original dataset. However, in trying to do s... | 133 | ❓ Dataset loading script from Hugging Face Hub
Hi there,
I am trying to add my custom `ag_news` with its own loading script on the Hugging Face datasets hub. In particular, I would like to test the addition of a second configuration to the existing `ag_news` dataset. Once it works in my hub, I plan to make a PR to... | [
-0.1013946682,
-0.2682317793,
0.0481378287,
0.2958106399,
0.005004223,
0.2030407935,
0.3185771704,
0.0778236687,
0.625675261,
0.1679820865,
-0.2501531839,
-0.0095644649,
-0.0821522623,
0.4151838124,
0.3813699484,
0.0784425512,
-0.0206500776,
0.0400116369,
0.188796103,
0.0250974... |
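As a rough illustration of the suggestion above (a dataset repository containing only CSV files, with no loading script), loading could look like the sketch below; the repository name and file names are hypothetical.

```python
from datasets import load_dataset

# Assuming a hypothetical Hub repo "my-user/ag_news_custom" with train.csv and test.csv,
# the split names are inferred from the file names (the behaviour referenced by #3027 above)
ds = load_dataset("my-user/ag_news_custom")

# Equivalent explicit form using the generic CSV loader with local files
ds = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})
print(ds)
```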
https://github.com/huggingface/datasets/issues/3300 | ❓ Dataset loading script from Hugging Face Hub | Also I think the viewer will be updated when you fix the dataset script, let me know if it doesn't | Hi there,
I am trying to add my custom `ag_news` with its own loading script on the Hugging Face datasets hub. In particular, I would like to test the addition of a second configuration to the existing `ag_news` dataset. Once it works in my hub, I plan to make a PR to the original dataset. However, in trying to do s... | 20 | ❓ Dataset loading script from Hugging Face Hub
Hi there,
I am trying to add my custom `ag_news` with its own loading script on the Hugging Face datasets hub. In particular, I would like to test the addition of a second configuration to the existing `ag_news` dataset. Once it works in my hub, I plan to make a PR to... | [
-0.0634878948,
-0.3772728741,
0.0302897785,
0.2885511518,
-0.0027499553,
0.2052664161,
0.2930764854,
0.0706355721,
0.6545083523,
0.1745841205,
-0.1952072084,
-0.0231926274,
-0.1169418022,
0.4614902437,
0.4217408299,
0.0841495171,
-0.0183683634,
0.0301568788,
0.206361562,
0.0256... |
https://github.com/huggingface/datasets/issues/3300 | ❓ Dataset loading script from Hugging Face Hub | Hi @lhoestq,
Thanks a lot for the super quick answer!
Your suggestion solves my issue. I am now able to load the dataset properly 🚀
However, the dataviewer is not working yet.
Really, thanks a lot for your help and consideration!
Best,
Pietro | Hi there,
I am trying to add my custom `ag_news` with its own loading script on the Hugging Face datasets hub. In particular, I would like to test the addition of a second configuration to the existing `ag_news` dataset. Once it works in my hub, I plan to make a PR to the original dataset. However, in trying to do s... | 43 | ❓ Dataset loading script from Hugging Face Hub
Hi there,
I am trying to add my custom `ag_news` with its own loading script on the Hugging Face datasets hub. In particular, I would like to test the addition of a second configuration to the existing `ag_news` dataset. Once it works in my hub, I plan to make a PR to... | [
-0.0676700994,
-0.3583374023,
0.0224837773,
0.2893784642,
-0.0218640827,
0.1763964146,
0.3361327648,
0.0578268841,
0.6890067458,
0.1715690345,
-0.2108742446,
-0.0025720121,
-0.1206180453,
0.4642144442,
0.3857717812,
0.0706231296,
-0.0077106263,
0.0814232901,
0.2180109918,
0.033... |
https://github.com/huggingface/datasets/issues/3300 | ❓ Dataset loading script from Hugging Face Hub | @lhoestq I think I am having a related problem.
My call to load_dataset() looks like this:
```
datasets = load_dataset(
os.path.abspath(layoutlmft.data.datasets.xfun.__file__),
f"xfun.{data_args.lang}",
additional_langs=data_args.additional_langs,
keep_in_memory=True,
... | Hi there,
I am trying to add my custom `ag_news` with its own loading script on the Hugging Face datasets hub. In particular, I would like to test the addition of a second configuration to the existing `ag_news` dataset. Once it works in my hub, I plan to make a PR to the original dataset. However, in trying to do s... | 173 | ❓ Dataset loading script from Hugging Face Hub
Hi there,
I am trying to add my custom `ag_news` with its own loading script on the Hugging Face datasets hub. In particular, I would like to test the addition of a second configuration to the existing `ag_news` dataset. Once it works in my hub, I plan to make a PR to... | [
-0.0687073544,
-0.3811146319,
0.0466552377,
0.3227541447,
0.0099622961,
0.1686694622,
0.3243077695,
0.0713837296,
0.6673383117,
0.2038330138,
-0.2203580588,
-0.0036798161,
-0.067635268,
0.4104301333,
0.4078681767,
0.0784859285,
0.0209657699,
0.0293771625,
0.1985020936,
-0.02121... |
https://github.com/huggingface/datasets/issues/3300 | ❓ Dataset loading script from Hugging Face Hub | Hi ! The `gen_kwargs` dictionary is passed to `_generate_examples`, so in your case it must be defined this way:
```python
def _generate_examples(self, filepath):
...
```
And here is an additional tip: you can use `os.path.join(downloaded_file, "dataset/testing_data")` instead of `f"{downloaded_file}/dataset/... | Hi there,
I am trying to add my custom `ag_news` with its own loading script on the Hugging Face datasets hub. In particular, I would like to test the addition of a second configuration to the existing `ag_news` dataset. Once it works in my hub, I plan to make a PR to the original dataset. However, in trying to do s... | 64 | ❓ Dataset loading script from Hugging Face Hub
Hi there,
I am trying to add my custom `ag_news` with its own loading script on the Hugging Face datasets hub. In particular, I would like to test the addition of a second configuration to the existing `ag_news` dataset. Once it works in my hub, I plan to make a PR to... | [
-0.0499408469,
-0.3830170929,
0.0404729098,
0.292393595,
0.0143705336,
0.2027331442,
0.2845098078,
0.0854050219,
0.6453068852,
0.2024909407,
-0.186206311,
0.0076824403,
-0.1077618524,
0.4451583326,
0.4212681055,
0.0472498462,
0.0251382291,
0.0623590723,
0.1722055823,
0.03227788... |
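To make the `gen_kwargs` point above concrete, here is a minimal, hypothetical builder sketch: every key passed in `gen_kwargs` must match a parameter name of `_generate_examples`. The class name, URL, and file layout are illustrative only.

```python
import os
import datasets

class MyDataset(datasets.GeneratorBasedBuilder):
    """Illustrative loading-script skeleton, not an actual dataset."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        downloaded_file = dl_manager.download_and_extract("https://example.com/data.zip")
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                # The key "filepath" must match the argument of _generate_examples below
                gen_kwargs={"filepath": os.path.join(downloaded_file, "dataset/testing_data")},
            )
        ]

    def _generate_examples(self, filepath):
        for idx, fname in enumerate(sorted(os.listdir(filepath))):
            with open(os.path.join(filepath, fname), encoding="utf-8") as f:
                yield idx, {"text": f.read()}
```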
https://github.com/huggingface/datasets/issues/3300 | ❓ Dataset loading script from Hugging Face Hub | Thanks for your quick reply @lhoestq and so sorry for my very delayed response.
We have gotten around the error another way but I will try to duplicate this when I can. We may have had "filepaths" instead of "filepath" in our def of _generate_examples() and not noticed the difference. If I find a more useful answer f... | Hi there,
I am trying to add my custom `ag_news` with its own loading script on the Hugging Face datasets hub. In particular, I would like to test the addition of a second configuration to the existing `ag_news` dataset. Once it works in my hub, I plan to make a PR to the original dataset. However, in trying to do s... | 93 | ❓ Dataset loading script from Hugging Face Hub
Hi there,
I am trying to add my custom `ag_news` with its own loading script on the Hugging Face datasets hub. In particular, I would like to test the addition of a second configuration to the existing `ag_news` dataset. Once it works in my hub, I plan to make a PR to... | [
-0.0717485547,
-0.3505972624,
0.0383090451,
0.328047812,
0.0319311991,
0.1988090426,
0.3233024776,
0.1017355695,
0.6908878088,
0.2111232132,
-0.1766379327,
-0.0175198559,
-0.1154559851,
0.4286899865,
0.4041127563,
0.0324170031,
-0.0001025203,
0.0363319814,
0.1870840937,
0.01891... |
https://github.com/huggingface/datasets/issues/3298 | Agnews dataset viewer is not working | Hi ! Thanks for reporting
We've already fixed the code that generates the preview for this dataset, we'll release the fix soon :) | ## Dataset viewer issue for '*name of the dataset*'
**Link:** https://huggingface.co/datasets/ag_news
Hi there, the `ag_news` dataset viewer is not working.
Am I the one who added this dataset? No
| 23 | Agnews dataset viewer is not working
## Dataset viewer issue for '*name of the dataset*'
**Link:** https://huggingface.co/datasets/ag_news
Hi there, the `ag_news` dataset viewer is not working.
Am I the one who added this dataset? No
Hi ! Thanks for reporting
We've already fixed the code that generates ... | [
-0.4043060839,
-0.0264237374,
0.0003687421,
0.1476446837,
0.0435507409,
0.1333729178,
0.3327427208,
0.3144618273,
0.0698371306,
0.1123041436,
-0.142061457,
0.0911887661,
0.022487497,
0.2090033889,
-0.0568054505,
0.0026442732,
0.2122282535,
0.0268218517,
-0.0084453551,
0.0673370... |
https://github.com/huggingface/datasets/issues/3297 | .map() cache is wrongfully reused - only happens when the mapping function is imported | Hi ! Thanks for reporting. Indeed this is a current limitation of how we use `dill` in `datasets`. I'd suggest you use your workaround for now until we find a way to fix this. Maybe functions that are not coming from a module installed with pip should be dumped completely, rather than only taking their lo... | ## Describe the bug
When `.map` is used with a mapping function that is imported, the cache is reused even if the mapping function has been modified.
The reason for this is that `dill`, which is used for creating the fingerprint, [pickles imported functions by reference](https://stackoverflow.com/a/67851411).
I guess... | 61 | .map() cache is wrongfully reused - only happens when the mapping function is imported
## Describe the bug
When `.map` is used with a mapping function that is imported, the cache is reused even if the mapping function has been modified.
The reason for this is that `dill` that is used for creating the fingerprint [p... | [
0.1023008078,
-0.1066755429,
0.023120366,
0.0879040956,
0.1407015622,
0.0473642983,
0.3940168917,
0.3197053075,
0.2703638077,
0.0267432407,
-0.299967438,
0.4735360444,
0.2265991718,
-0.3539206088,
0.0825730115,
0.2903339863,
0.0108863935,
-0.0799177513,
-0.2749255598,
-0.147531... |
https://github.com/huggingface/datasets/issues/3297 | .map() cache is wrongfully reused - only happens when the mapping function is imported | I agree. Sounds like a solution for it would be pretty dirty, even [cloudpickle](https://stackoverflow.com/a/16891169) doesn't help in this case.
In the meantime, I think adding a warning and the workaround somewhere in the documentation would be helpful. | ## Describe the bug
When `.map` is used with a mapping function that is imported, the cache is reused even if the mapping function has been modified.
The reason for this is that `dill`, which is used for creating the fingerprint, [pickles imported functions by reference](https://stackoverflow.com/a/67851411).
I guess... | 38 | .map() cache is wrongfully reused - only happens when the mapping function is imported
## Describe the bug
When `.map` is used with a mapping function that is imported, the cache is reused even if the mapping function has been modified.
The reason for this is that `dill` that is used for creating the fingerprint [p... | [
0.1023008078,
-0.1066755429,
0.023120366,
0.0879040956,
0.1407015622,
0.0473642983,
0.3940168917,
0.3197053075,
0.2703638077,
0.0267432407,
-0.299967438,
0.4735360444,
0.2265991718,
-0.3539206088,
0.0825730115,
0.2903339863,
0.0108863935,
-0.0799177513,
-0.2749255598,
-0.147531... |
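Until the `dill` limitation discussed above is addressed, one possible workaround (not necessarily the one from the truncated thread) is to bypass or pin the cache explicitly when calling `.map`; `my_module.my_mapping_fn` below is a hypothetical imported function.

```python
from datasets import load_dataset
from my_module import my_mapping_fn  # hypothetical imported mapping function

ds = load_dataset("ag_news", split="train")

# Option 1: ignore the fingerprint-based cache for this call
processed = ds.map(my_mapping_fn, load_from_cache_file=False)

# Option 2: set an explicit fingerprint and bump it whenever the function changes
processed = ds.map(my_mapping_fn, new_fingerprint="my_mapping_fn-v2")
```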
https://github.com/huggingface/datasets/issues/3295 | Temporary dataset_path for remote fs URIs not built properly in arrow_dataset.py::load_from_disk | Hi ! Good catch and thanks for opening a PR :)
I just responded in your PR | ## Describe the bug
When trying to build a temporary dataset path from a remote URI in this block of code:
https://github.com/huggingface/datasets/blob/42f6b1d18a4a1b6009b6e62d115491be16dfca22/src/datasets/arrow_dataset.py#L1038-L1042
the result is not the expected one when passing an absolute path in a URI like `h... | 17 | Temporary dataset_path for remote fs URIs not built properly in arrow_dataset.py::load_from_disk
## Describe the bug
When trying to build a temporary dataset path from a remote URI in this block of code:
https://github.com/huggingface/datasets/blob/42f6b1d18a4a1b6009b6e62d115491be16dfca22/src/datasets/arrow_datas... | [
0.0369561575,
-0.0552982613,
0.0234630387,
0.2166193724,
0.2328248322,
-0.3071340919,
0.2497993112,
-0.0407906696,
-0.1296836734,
-0.0436530784,
0.1355844438,
0.1818683743,
0.0551710315,
-0.5092630386,
-0.0166563839,
-0.0110697905,
-0.0112729603,
-0.0280554593,
-0.1703246832,
-... |
https://github.com/huggingface/datasets/issues/3292 | Not able to load 'wikipedia' dataset | Hi ! Indeed it looks like the code snippet on the Hugging Face Hub doesn't show the second parameter

Thanks for reporting, I'm taking a look
| ## Describe the bug
I am following the instructions for loading the wikipedia dataset using datasets. However, I am getting the below error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("wikipedia")
```
## Expected results
A clear and concise description of the expected res... | 27 | Not able to load 'wikipedia' dataset
## Describe the bug
I am following the instructions for loading the wikipedia dataset using datasets. However, I am getting the below error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("wikipedia")
```
## Expected results
A clear and ... | [
-0.0762499273,
0.1433965415,
-0.0506567694,
0.361487031,
0.1556535512,
0.3728308976,
0.3982861936,
0.2425004393,
0.1862864047,
0.0674075261,
0.1126710996,
0.3431457281,
0.0850702003,
-0.0809758008,
0.1471760273,
-0.1971467137,
-0.0270585194,
0.0480618328,
0.0294922721,
0.108122... |
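For reference, the "second parameter" mentioned above is the Wikipedia configuration name; a hedged example (the exact dump dates and language codes available depend on the `datasets` version) is:

```python
from datasets import load_dataset

# "wikipedia" needs a config combining a dump date and a language code
dataset = load_dataset("wikipedia", "20200501.en")
```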
https://github.com/huggingface/datasets/issues/3285 | Add IEMOCAP dataset | The IEMOCAP dataset is private and available only on request.
```
To obtain the IEMOCAP data you just need to fill out an electronic release form below.
```
- [Request form](https://sail.usc.edu/iemocap/release_form.php)
- [License ](https://sail.usc.edu/iemocap/Data_Release_Form_IEMOCAP.pdf)
> We do not sh... | ## Adding a Dataset
- **Name:** IEMOCAP
- **Description:** acted, multimodal and multispeaker database
- **Paper:** https://sail.usc.edu/iemocap/Busso_2008_iemocap.pdf
- **Data:** https://sail.usc.edu/iemocap/index.html
- **Motivation:** Useful multimodal dataset
cc @anton-l
Instructions to add a new datase... | 80 | Add IEMOCAP dataset
## Adding a Dataset
- **Name:** IEMOCAP
- **Description:** acted, multimodal and multispeaker database
- **Paper:** https://sail.usc.edu/iemocap/Busso_2008_iemocap.pdf
- **Data:** https://sail.usc.edu/iemocap/index.html
- **Motivation:** Useful multimodal dataset
cc @anton-l
Instructio... | [
-0.397473067,
-0.1045239344,
-0.1626392454,
0.0194149353,
-0.0553386062,
-0.1147543192,
0.5808407664,
0.0518383682,
0.0989707932,
0.3002893031,
-0.4795856476,
0.0958077982,
-0.1967371255,
0.4536911547,
0.1594685018,
-0.0247065332,
-0.0657192394,
0.1287432909,
0.0163776595,
0.00... |
https://github.com/huggingface/datasets/issues/3285 | Add IEMOCAP dataset | Hi @dnaveenr ! We can contact the authors to see if they are interested in hosting the dataset on the Hub. In the meantime, feel free to work on a script with manual download. | ## Adding a Dataset
- **Name:** IEMOCAP
- **Description:** acted, multimodal and multispeaker database
- **Paper:** https://sail.usc.edu/iemocap/Busso_2008_iemocap.pdf
- **Data:** https://sail.usc.edu/iemocap/index.html
- **Motivation:** Useful multimodal dataset
cc @anton-l
Instructions to add a new datase... | 34 | Add IEMOCAP dataset
## Adding a Dataset
- **Name:** IEMOCAP
- **Description:** acted, multimodal and multispeaker database
- **Paper:** https://sail.usc.edu/iemocap/Busso_2008_iemocap.pdf
- **Data:** https://sail.usc.edu/iemocap/index.html
- **Motivation:** Useful multimodal dataset
cc @anton-l
Instructio... | [
-0.4055472612,
-0.2296994925,
-0.1807560772,
0.0004849705,
-0.0748526528,
-0.0578590706,
0.5305740237,
0.1825390905,
0.1652227938,
0.3068114519,
-0.3846266866,
0.0953106731,
-0.2732800543,
0.4907404482,
0.2245352566,
0.0488989539,
0.0969657227,
0.1993618011,
0.047155384,
-0.000... |
https://github.com/huggingface/datasets/issues/3285 | Add IEMOCAP dataset | Hi @mariosasko. Thanks for your response. Sure, I will mail them and find out if they're open to this.
Work on a script with manual download? This is new to me; any guidelines would be helpful here.
| ## Adding a Dataset
- **Name:** IEMOCAP
- **Description:** acted, multimodal and multispeaker database
- **Paper:** https://sail.usc.edu/iemocap/Busso_2008_iemocap.pdf
- **Data:** https://sail.usc.edu/iemocap/index.html
- **Motivation:** Useful multimodal dataset
cc @anton-l
Instructions to add a new datase... | 39 | Add IEMOCAP dataset
## Adding a Dataset
- **Name:** IEMOCAP
- **Description:** acted, multimodal and multispeaker database
- **Paper:** https://sail.usc.edu/iemocap/Busso_2008_iemocap.pdf
- **Data:** https://sail.usc.edu/iemocap/index.html
- **Motivation:** Useful multimodal dataset
cc @anton-l
Instructio... | [
-0.396915406,
-0.0930215493,
-0.1669157296,
-0.0267273746,
-0.0511000119,
-0.0638297275,
0.4177331924,
0.0990112722,
0.184079811,
0.2257884145,
-0.3469266295,
0.0741997883,
-0.2861623466,
0.5310553908,
0.2502622604,
0.020621093,
0.0286565106,
0.2210894674,
-0.0193936657,
-0.034... |
https://github.com/huggingface/datasets/issues/3285 | Add IEMOCAP dataset | > Thanks for your response. Sure, I will mail them and find out if they're open to this.
It's best to leave this part to us because we have to explain how login would work and (potentially) set up a custom verification for the dataset.
> Work on a script with manual download ? This is new to me, any guidelines wo... | ## Adding a Dataset
- **Name:** IEMOCAP
- **Description:** acted, multimodal and multispeaker database
- **Paper:** https://sail.usc.edu/iemocap/Busso_2008_iemocap.pdf
- **Data:** https://sail.usc.edu/iemocap/index.html
- **Motivation:** Useful multimodal dataset
cc @anton-l
Instructions to add a new datase... | 100 | Add IEMOCAP dataset
## Adding a Dataset
- **Name:** IEMOCAP
- **Description:** acted, multimodal and multispeaker database
- **Paper:** https://sail.usc.edu/iemocap/Busso_2008_iemocap.pdf
- **Data:** https://sail.usc.edu/iemocap/index.html
- **Motivation:** Useful multimodal dataset
cc @anton-l
Instructio... | [
-0.1976326704,
-0.1684255153,
-0.0636767149,
0.0175229982,
0.0493132584,
-0.0771767125,
0.3879881501,
-0.0209067725,
0.2226244658,
0.1589051038,
-0.3142955601,
0.1245032772,
-0.2589575052,
0.493137449,
0.3352160156,
0.0632353574,
-0.0690496713,
-0.0771609321,
0.0377372243,
-0.0... |
https://github.com/huggingface/datasets/issues/3285 | Add IEMOCAP dataset | > It's best to leave this part to us because we have to explain how login would work and (potentially) set up a custom verification for the dataset.
Yes. That would be perfect. Thanks.
----
Okay. Thanks for giving a reference. This is helpful. I will go through it.
| ## Adding a Dataset
- **Name:** IEMOCAP
- **Description:** acted, multimodal and multispeaker database
- **Paper:** https://sail.usc.edu/iemocap/Busso_2008_iemocap.pdf
- **Data:** https://sail.usc.edu/iemocap/index.html
- **Motivation:** Useful multimodal dataset
cc @anton-l
Instructions to add a new datase... | 49 | Add IEMOCAP dataset
## Adding a Dataset
- **Name:** IEMOCAP
- **Description:** acted, multimodal and multispeaker database
- **Paper:** https://sail.usc.edu/iemocap/Busso_2008_iemocap.pdf
- **Data:** https://sail.usc.edu/iemocap/index.html
- **Motivation:** Useful multimodal dataset
cc @anton-l
Instructio... | [
-0.2882090211,
-0.1497914493,
-0.1636939347,
-0.0070336401,
-0.0361263268,
-0.1127439961,
0.5733915567,
-0.0719576403,
0.1140567735,
0.2705541551,
-0.3042043746,
-0.0156050269,
-0.1448670328,
0.4219715893,
0.2603910267,
-0.018376451,
0.0353237353,
0.1186271533,
0.0649689436,
-0... |
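For context on the "script with manual download" mentioned in this thread, builders for request-only datasets typically expose `manual_download_instructions` and read the user-supplied folder from `dl_manager.manual_dir`. The sketch below is illustrative only and is not the actual IEMOCAP script; the features and file layout are assumptions.

```python
import os
import datasets

_MANUAL_INSTRUCTIONS = (
    "Request IEMOCAP via https://sail.usc.edu/iemocap/release_form.php, then load with "
    'load_dataset("iemocap", data_dir="/path/to/IEMOCAP_full_release")'
)

class Iemocap(datasets.GeneratorBasedBuilder):
    """Illustrative manual-download skeleton, not the real dataset script."""

    @property
    def manual_download_instructions(self):
        return _MANUAL_INSTRUCTIONS

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"path": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # The data_dir passed via load_dataset(..., data_dir=...) ends up here
        data_dir = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"data_dir": data_dir},
            )
        ]

    def _generate_examples(self, data_dir):
        for idx, fname in enumerate(sorted(os.listdir(data_dir))):
            yield idx, {"path": os.path.join(data_dir, fname)}
```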
https://github.com/huggingface/datasets/issues/3282 | ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py | Hi ! Thanks for reporting :)
I think this is because the dataset is behind an access page. We can fix the dataset viewer
If you also have this error when you use the `datasets` library in python, you should probably pass `use_auth_token=True` to the `load_dataset()` function to use your account to access the datase... | ## Dataset viewer issue for '*oscar-corpus/OSCAR-2109*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)*
*The dataset library cannot download any language from the oscar-corpus/OSCAR-2109 dataset. By entering the URL in your browser I can access the file.*
... | 55 | ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py
## Dataset viewer issue for '*oscar-corpus/OSCAR-2109*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)*
*The dataset library cannot download a... | [
-0.2027653456,
0.1993869096,
-0.0173747595,
0.3850066364,
0.2765265405,
0.172312215,
-0.005311287,
0.2269322276,
-0.1176276058,
0.1920671612,
-0.2589607239,
-0.0070933392,
0.1979573667,
-0.0506283343,
0.0628772229,
-0.2161213905,
-0.1084234416,
-0.0997951925,
-0.0893969536,
-0.... |
https://github.com/huggingface/datasets/issues/3282 | ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py | Ah ok, I didn't realise there was a login page. I'll try `use_auth_token=True` and see if that solves it.
Regards! | ## Dataset viewer issue for '*oscar-corpus/OSCAR-2109*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)*
*The dataset library cannot download any language from the oscar-corpus/OSCAR-2109 dataset. By entering the URL in your browser I can access the file.*
... | 19 | ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py
## Dataset viewer issue for '*oscar-corpus/OSCAR-2109*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)*
*The dataset library cannot download a... | [
-0.1426579952,
0.1558704525,
0.0060984073,
0.3392142951,
0.2724205256,
0.1259438992,
-0.0063694865,
0.1940262616,
-0.1445149928,
0.2094283849,
-0.2114290446,
-0.1045742929,
0.2127493918,
0.0003539837,
0.1761928201,
-0.1613225639,
-0.0735768676,
-0.1723283082,
-0.0665445477,
-0.... |
https://github.com/huggingface/datasets/issues/3282 | ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py | Hi,
Using `use_auth_token=True` and downloading the credentials with `huggingface-cli login` (stored in .huggingface/token) solved the issue.
Should I leave the issue open until you fix the Dataset viewer issue? | ## Dataset viewer issue for '*oscar-corpus/OSCAR-2109*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)*
*The dataset library cannot download any language from the oscar-corpus/OSCAR-2109 dataset. By entering the URL in your browser I can access the file.*
... | 29 | ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py
## Dataset viewer issue for '*oscar-corpus/OSCAR-2109*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)*
*The dataset library cannot download a... | [
-0.1648682505,
0.1433244199,
0.0291342679,
0.3350146711,
0.2738420069,
0.074923791,
-0.0048634904,
0.1728971153,
-0.1965863854,
0.1607672423,
-0.2475940883,
-0.066174306,
0.0811842307,
0.0479792878,
0.1127685681,
-0.1378087401,
-0.0630305409,
-0.150430575,
-0.0512395576,
-0.063... |
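A minimal sketch of the resolution described above, after authenticating with `huggingface-cli login`; the configuration name is illustrative and not taken from the thread.

```python
from datasets import load_dataset

# use_auth_token=True makes load_dataset send the token stored by `huggingface-cli login`
ds = load_dataset("oscar-corpus/OSCAR-2109", "deduplicated_en", use_auth_token=True)
```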
https://github.com/huggingface/datasets/issues/3282 | ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py | Cool ! Yes let's keep this issue open until the viewer is fixed - I'll close it when this is fixed. Thanks | ## Dataset viewer issue for '*oscar-corpus/OSCAR-2109*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)*
*The dataset library cannot download any language from the oscar-corpus/OSCAR-2109 dataset. By entering the URL in your browser I can access the file.*
... | 22 | ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py
## Dataset viewer issue for '*oscar-corpus/OSCAR-2109*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)*
*The dataset library cannot download a... | [
-0.229873836,
0.2996897995,
0.0159762148,
0.3480393291,
0.2428966463,
0.0843781531,
0.0289748609,
0.2151712775,
-0.1925186664,
0.1596948057,
-0.2004184127,
-0.04559879,
0.0961814597,
-0.0956305414,
0.0457461104,
-0.1347217262,
-0.0746337026,
-0.0802199244,
0.0109409932,
-0.0307... |
https://github.com/huggingface/datasets/issues/3282 | ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py | The error I get when trying to load OSCAR 21.09 is this
```
ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py
```
The URL I get in the browser is this
```
https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/blob/main/OSCAR-2109.py
```
Mayb... | ## Dataset viewer issue for '*oscar-corpus/OSCAR-2109*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)*
*The dataset library cannot download any language from the oscar-corpus/OSCAR-2109 dataset. By entering the URL in your browser I can access the file.*
... | 38 | ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py
## Dataset viewer issue for '*oscar-corpus/OSCAR-2109*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)*
*The dataset library cannot download a... | [
-0.1634295881,
0.2305828333,
0.024413811,
0.3770050406,
0.0909549966,
0.030726349,
0.0639222562,
0.2214712799,
-0.2426067144,
0.2597850263,
-0.214044109,
-0.1690792143,
0.1428035945,
-0.0072289985,
0.1171832308,
-0.2361693233,
-0.0139901433,
0.0460932292,
0.138641879,
-0.063448... |
https://github.com/huggingface/datasets/issues/3282 | ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py | > The error I get when trying to load OSCAR 21.09 is this
>
> ```
> ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py
> ```
>
> The URL I get in the browser is this
>
> ```
> https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/blob/main/OS... | ## Dataset viewer issue for '*oscar-corpus/OSCAR-2109*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)*
*The dataset library cannot download any language from the oscar-corpus/OSCAR-2109 dataset. By entering the URL in your browser I can access the file.*
... | 75 | ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py
## Dataset viewer issue for '*oscar-corpus/OSCAR-2109*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)*
*The dataset library cannot download a... | [
-0.1638737023,
0.1031891331,
0.0558867417,
0.4317854941,
0.1487074941,
0.0859395117,
-0.0256002247,
0.2220758349,
-0.2109852135,
0.2274253517,
-0.2897391021,
-0.163197577,
0.113761127,
0.0779962242,
0.1040889546,
-0.3276954889,
-0.071785979,
-0.0296839159,
0.0155164571,
-0.0434... |
https://github.com/huggingface/datasets/issues/3272 | Make iter_archive work with ZIP files | Hi ! Sure this is open for any contributor. If you're interested feel free to self-assign this issue to you by commenting `#self-assign`. Then if you have any question or if I can help, feel free to ping me.
To begin with, feel free to take a look at both implementations of `iter_archive` for local downloads and for... | Currently users can use `dl_manager.iter_archive` in their dataset script to iterate over all the files of a TAR archive.
It would be nice if it could work with ZIP files too ! | 103 | Make iter_archive work with ZIP files
Currently users can use `dl_manager.iter_archive` in their dataset script to iterate over all the files of a TAR archive.
It would be nice if it could work with ZIP files too !
Hi ! Sure this is open for any contributor. If you're interested feel free to self-assign this issue... | [
-0.5271627307,
0.1634646654,
-0.0882295221,
0.1586410999,
-0.091050826,
-0.0960064158,
0.1364819556,
0.4259112179,
-0.0266859457,
0.0931349471,
-0.0515030548,
0.6554415226,
-0.1210887358,
0.4705846012,
-0.0370961763,
0.0497757569,
-0.2543893158,
0.3523830771,
-0.4338114858,
0.1... |
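For background on the request above, a rough sketch of how `dl_manager.iter_archive` is used today with TAR archives inside a loading script; the class name, URL, file filter, and feature names are assumptions.

```python
import datasets

class MyTarDataset(datasets.GeneratorBasedBuilder):
    """Illustrative only: shows the current TAR-based iter_archive pattern."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        archive = dl_manager.download("https://example.com/data.tar.gz")
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # iter_archive streams (path_inside_archive, file_object) pairs
                # without extracting the archive on disk
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        for idx, (path, f) in enumerate(files):
            if path.endswith(".txt"):
                yield idx, {"text": f.read().decode("utf-8")}
```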
https://github.com/huggingface/datasets/issues/3269 | coqa NonMatchingChecksumError | Hi @ZhaofengWu, thanks for reporting.
Unfortunately, I'm not able to reproduce your bug:
```python
In [1]: from datasets import load_dataset
In [2]: ds = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.91MB/s]
Downloading: 1.79kB [00:00, 1.79MB/s]
Using custom data configuration default
Downloading and p... | ```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | 180 | coqa NonMatchingChecksumError
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | [
-0.2934430242,
-0.1696564257,
-0.237728551,
0.1443958431,
0.175302878,
-0.0322034322,
0.1085318327,
0.3469724655,
0.1612990946,
0.3450761735,
-0.1242234036,
0.0163034536,
0.132706821,
0.3179681301,
-0.2400150299,
0.410885632,
-0.0373039171,
0.1308017224,
-0.1365240365,
-0.08464... |
https://github.com/huggingface/datasets/issues/3269 | coqa NonMatchingChecksumError | I'm getting the same error in two separate environments:
```
- `datasets` version: 1.15.1
- Platform: Linux-5.4.0-84-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.11
- PyArrow version: 6.0.0
```
```
- `datasets` version: 1.15.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.... | ```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | 43 | coqa NonMatchingChecksumError
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | [
-0.2934430242,
-0.1696564257,
-0.237728551,
0.1443958431,
0.175302878,
-0.0322034322,
0.1085318327,
0.3469724655,
0.1612990946,
0.3450761735,
-0.1242234036,
0.0163034536,
0.132706821,
0.3179681301,
-0.2400150299,
0.410885632,
-0.0373039171,
0.1308017224,
-0.1365240365,
-0.08464... |
https://github.com/huggingface/datasets/issues/3269 | coqa NonMatchingChecksumError | I'm sorry, but I am not able to reproduce the error in the Linux environment.
@mariosasko @lhoestq can you reproduce it? | ```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | 19 | coqa NonMatchingChecksumError
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | [
-0.2934430242,
-0.1696564257,
-0.237728551,
0.1443958431,
0.175302878,
-0.0322034322,
0.1085318327,
0.3469724655,
0.1612990946,
0.3450761735,
-0.1242234036,
0.0163034536,
0.132706821,
0.3179681301,
-0.2400150299,
0.410885632,
-0.0373039171,
0.1308017224,
-0.1365240365,
-0.08464... |
https://github.com/huggingface/datasets/issues/3269 | coqa NonMatchingChecksumError | I also can't reproduce the error on Windows/Linux (tested both the master and the `1.15.1` version). | ```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | 16 | coqa NonMatchingChecksumError
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | [
-0.2934430242,
-0.1696564257,
-0.237728551,
0.1443958431,
0.175302878,
-0.0322034322,
0.1085318327,
0.3469724655,
0.1612990946,
0.3450761735,
-0.1242234036,
0.0163034536,
0.132706821,
0.3179681301,
-0.2400150299,
0.410885632,
-0.0373039171,
0.1308017224,
-0.1365240365,
-0.08464... |
https://github.com/huggingface/datasets/issues/3269 | coqa NonMatchingChecksumError | Maybe the file had issues during the download? Could you try to delete your cache and try again?
By default the downloads cache is at `~/.cache/huggingface/datasets/downloads`
Also, can you check if you have a proxy that could prevent the download from succeeding? Are you able to download those files via your browser ... | ```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | 56 | coqa NonMatchingChecksumError
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | [
-0.2934430242,
-0.1696564257,
-0.237728551,
0.1443958431,
0.175302878,
-0.0322034322,
0.1085318327,
0.3469724655,
0.1612990946,
0.3450761735,
-0.1242234036,
0.0163034536,
0.132706821,
0.3179681301,
-0.2400150299,
0.410885632,
-0.0373039171,
0.1308017224,
-0.1365240365,
-0.08464... |
https://github.com/huggingface/datasets/issues/3269 | coqa NonMatchingChecksumError | I got the same error in a third environment (google cloud) as well. The internet connections for these three environments are all different, so I don't think that's the reason.
```
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-1022-gcp-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 6.0.0
```
I delet... | ```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | 380 | coqa NonMatchingChecksumError
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | [
-0.2934430242,
-0.1696564257,
-0.237728551,
0.1443958431,
0.175302878,
-0.0322034322,
0.1085318327,
0.3469724655,
0.1612990946,
0.3450761735,
-0.1242234036,
0.0163034536,
0.132706821,
0.3179681301,
-0.2400150299,
0.410885632,
-0.0373039171,
0.1308017224,
-0.1365240365,
-0.08464... |
https://github.com/huggingface/datasets/issues/3269 | coqa NonMatchingChecksumError | I can run your notebook fine, but if I create one myself, it has that error: https://colab.research.google.com/drive/107GIdhrauPO6ZiFDY7G9S74in4qqI2Kx?usp=sharing.
It's so funny -- it's like whenever you guys run it it's fine but whenever I run it it fails, whatever the environment is. | ```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | 41 | coqa NonMatchingChecksumError
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | [
-0.2934430242,
-0.1696564257,
-0.237728551,
0.1443958431,
0.175302878,
-0.0322034322,
0.1085318327,
0.3469724655,
0.1612990946,
0.3450761735,
-0.1242234036,
0.0163034536,
0.132706821,
0.3179681301,
-0.2400150299,
0.410885632,
-0.0373039171,
0.1308017224,
-0.1365240365,
-0.08464... |
https://github.com/huggingface/datasets/issues/3269 | coqa NonMatchingChecksumError | I guess it must be some connection issue: the data owner may be blocking requests coming from your country or IP range... | ```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | 22 | coqa NonMatchingChecksumError
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | [
-0.2934430242,
-0.1696564257,
-0.237728551,
0.1443958431,
0.175302878,
-0.0322034322,
0.1085318327,
0.3469724655,
0.1612990946,
0.3450761735,
-0.1242234036,
0.0163034536,
0.132706821,
0.3179681301,
-0.2400150299,
0.410885632,
-0.0373039171,
0.1308017224,
-0.1365240365,
-0.08464... |
https://github.com/huggingface/datasets/issues/3269 | coqa NonMatchingChecksumError | I mean, I don't think google colab sends the connection from my IP. Same applies to google cloud. | ```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | 18 | coqa NonMatchingChecksumError
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | [
-0.2934430242,
-0.1696564257,
-0.237728551,
0.1443958431,
0.175302878,
-0.0322034322,
0.1085318327,
0.3469724655,
0.1612990946,
0.3450761735,
-0.1242234036,
0.0163034536,
0.132706821,
0.3179681301,
-0.2400150299,
0.410885632,
-0.0373039171,
0.1308017224,
-0.1365240365,
-0.08464... |
https://github.com/huggingface/datasets/issues/3269 | coqa NonMatchingChecksumError | Hello, I am having the same error as @ZhaofengWu, first with the "social bias frames" dataset. After finding this report, I also tried "coqa" and it fails as well.
I test this on Google Colab.
```
- `datasets` version: 1.15.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- Py... | ```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | 82 | coqa NonMatchingChecksumError
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | [
-0.2934430242,
-0.1696564257,
-0.237728551,
0.1443958431,
0.175302878,
-0.0322034322,
0.1085318327,
0.3469724655,
0.1612990946,
0.3450761735,
-0.1242234036,
0.0163034536,
0.132706821,
0.3179681301,
-0.2400150299,
0.410885632,
-0.0373039171,
0.1308017224,
-0.1365240365,
-0.08464... |
https://github.com/huggingface/datasets/issues/3269 | coqa NonMatchingChecksumError | Hi, I am still not able to reproduce the issue with `coqa`. If you still have this issue, could you please run these additional commands?
```python
>>> import os
>>> from hashlib import md5
>>> from datasets.utils import DownloadManager, DownloadConfig
>>> path = DownloadManager(download_config=DownloadConfig(use_etag... | ```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | 169 | coqa NonMatchingChecksumError
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | [
-0.2934430242,
-0.1696564257,
-0.237728551,
0.1443958431,
0.175302878,
-0.0322034322,
0.1085318327,
0.3469724655,
0.1612990946,
0.3450761735,
-0.1242234036,
0.0163034536,
0.132706821,
0.3179681301,
-0.2400150299,
0.410885632,
-0.0373039171,
0.1308017224,
-0.1365240365,
-0.08464... |
https://github.com/huggingface/datasets/issues/3269 | coqa NonMatchingChecksumError | ```
>>> import os
>>> from hashlib import md5
>>> from datasets.utils import DownloadManager, DownloadConfig
>>> path = DownloadManager(download_config=DownloadConfig(use_etag=False)).download("https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json") # it returns the cached file
>>> os.path.getsize(path)
222
>>>... | ```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | 114 | coqa NonMatchingChecksumError
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | [
-0.2934430242,
-0.1696564257,
-0.237728551,
0.1443958431,
0.175302878,
-0.0322034322,
0.1085318327,
0.3469724655,
0.1612990946,
0.3450761735,
-0.1242234036,
0.0163034536,
0.132706821,
0.3179681301,
-0.2400150299,
0.410885632,
-0.0373039171,
0.1308017224,
-0.1365240365,
-0.08464... |
https://github.com/huggingface/datasets/issues/3269 | coqa NonMatchingChecksumError | `wget https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json` does work. So I suspect there might be some problem in `datasets`' networking code? Can you give me some snippet that simulates how `datasets` requests the resource which I can run on my end? | ```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | 36 | coqa NonMatchingChecksumError
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | [
-0.2934430242,
-0.1696564257,
-0.237728551,
0.1443958431,
0.175302878,
-0.0322034322,
0.1085318327,
0.3469724655,
0.1612990946,
0.3450761735,
-0.1242234036,
0.0163034536,
0.132706821,
0.3179681301,
-0.2400150299,
0.410885632,
-0.0373039171,
0.1308017224,
-0.1365240365,
-0.08464... |
https://github.com/huggingface/datasets/issues/3269 | coqa NonMatchingChecksumError | OK, this is an issue with the server that hosts the data at `https://nlp.stanford.edu/nlp/data`, which randomly returns 503 (trying several times, it also happens on my side); hopefully it can be fixed soon. I'll try to reach the people in charge of hosting the data | ```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | 46 | coqa NonMatchingChecksumError
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | [
-0.2934430242,
-0.1696564257,
-0.237728551,
0.1443958431,
0.175302878,
-0.0322034322,
0.1085318327,
0.3469724655,
0.1612990946,
0.3450761735,
-0.1242234036,
0.0163034536,
0.132706821,
0.3179681301,
-0.2400150299,
0.410885632,
-0.0373039171,
0.1308017224,
-0.1365240365,
-0.08464... |
https://github.com/huggingface/datasets/issues/3269 | coqa NonMatchingChecksumError | You're right. I just opened a PR that would show this error if it happens again:
```python
ConnectionError: Couldn't reach https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json (error 503)
``` | ```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | 24 | coqa NonMatchingChecksumError
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | [
-0.2934430242,
-0.1696564257,
-0.237728551,
0.1443958431,
0.175302878,
-0.0322034322,
0.1085318327,
0.3469724655,
0.1612990946,
0.3450761735,
-0.1242234036,
0.0163034536,
0.132706821,
0.3179681301,
-0.2400150299,
0.410885632,
-0.0373039171,
0.1308017224,
-0.1365240365,
-0.08464... |
https://github.com/huggingface/datasets/issues/3268 | Dataset viewer issue for 'liweili/c4_200m' | Hi ! I think the issue comes from this [line](https://huggingface.co/datasets/liweili/c4_200m/blob/main/c4_200m.py#L87):
```python
path = filepath + "/*.tsv*"
```
You can fix this by doing this instead:
```python
path = os.path.join(filepath, "*.tsv*")
```
Here is why:
Locally you can append `"/*.tsv*"`... | ## Dataset viewer issue for '*liweili/c4_200m*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/liweili/c4_200m)*
*Server Error*
```
Status code: 404
Exception: Status404Error
Message: Not found. Maybe the cache is missing, or maybe the ressource does not exist.
```
... | 149 | Dataset viewer issue for 'liweili/c4_200m'
## Dataset viewer issue for '*liweili/c4_200m*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/liweili/c4_200m)*
*Server Error*
```
Status code: 404
Exception: Status404Error
Message: Not found. Maybe the cache is missing, ... | [
-0.2454840243,
0.1612445265,
-0.0047895024,
0.3759571612,
0.2006412297,
0.0835294798,
0.017746523,
0.261454463,
-0.2173461169,
0.0912251621,
-0.3134583533,
0.1573549211,
0.0079844957,
0.2152266353,
0.2205264717,
0.0133887595,
-0.1173203066,
0.2433596104,
-0.2229032665,
0.087522... |
https://github.com/huggingface/datasets/issues/3268 | Dataset viewer issue for 'liweili/c4_200m' | hi @lhoestq ! thanks for the tip! i've updated the line of code but it's still not working. am i doing something else wrong? thank you! | ## Dataset viewer issue for '*liweili/c4_200m*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/liweili/c4_200m)*
*Server Error*
```
Status code: 404
Exception: Status404Error
Message: Not found. Maybe the cache is missing, or maybe the ressource does not exist.
```
... | 26 | Dataset viewer issue for 'liweili/c4_200m'
## Dataset viewer issue for '*liweili/c4_200m*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/liweili/c4_200m)*
*Server Error*
```
Status code: 404
Exception: Status404Error
Message: Not found. Maybe the cache is missing, ... | [
-0.2048722357,
0.3397707641,
-0.0439794511,
0.453977108,
-0.0323567651,
0.1852270961,
0.2257338762,
0.132736668,
-0.1553967148,
0.1312171519,
-0.1871982366,
0.0623998195,
-0.0724006072,
0.0393495634,
0.2863667309,
0.0551078729,
0.0001893812,
0.3673495948,
-0.0825177953,
0.04088... |
https://github.com/huggingface/datasets/issues/3268 | Dataset viewer issue for 'liweili/c4_200m' | Hi ! Your dataset code is all good now :)
```python
In [1]: from datasets import load_dataset
In [2]: d = load_dataset("liweili/c4_200m", streaming=True)
Downloading: 100%|█████████████████████████████████████████████| 2.79k/2.79k [00:00<00:00, 4.83MB/s]
Using custom data configuration default
In [3]: next(it... | ## Dataset viewer issue for '*liweili/c4_200m*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/liweili/c4_200m)*
*Server Error*
```
Status code: 404
Exception: Status404Error
Message: Not found. Maybe the cache is missing, or maybe the ressource does not exist.
```
... | 73 | Dataset viewer issue for 'liweili/c4_200m'
## Dataset viewer issue for '*liweili/c4_200m*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/liweili/c4_200m)*
*Server Error*
```
Status code: 404
Exception: Status404Error
Message: Not found. Maybe the cache is missing, ... | [
-0.4007691443,
0.5141014457,
-0.0849969238,
0.3988128901,
0.090029344,
0.1364438236,
0.1792555749,
0.3091340363,
-0.2108911425,
0.0321359485,
-0.1187349558,
0.0765216053,
-0.150267601,
0.1407317966,
0.0565299392,
0.0477423333,
-0.0537077226,
0.2665194273,
0.0039401096,
-0.00021... |
https://github.com/huggingface/datasets/issues/3265 | Checksum error for kilt_task_wow | Using `dataset = load_dataset("kilt_tasks", "wow", ignore_verifications=True)` may fix it, but I do not think it is an elegant solution. | ## Describe the bug
Checksum verification failed when downloading kilt_tasks_wow. See the error output for details.
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('kilt_tasks', 'wow')
```
## Expected results
Download successful
## Actual results
```
Downloading and preparing dataset kilt_ta... | 19 | Checksum error for kilt_task_wow
## Describe the bug
Checksum verification failed when downloading kilt_tasks_wow. See the error output for details.
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('kilt_tasks', 'wow')
```
## Expected results
Download successful
## Actual results
```
Downlo... | [
-0.0128258085,
-0.2781217992,
-0.0930913538,
0.2878222167,
0.3997595906,
-0.0828802958,
0.2399520427,
0.5319925547,
0.3643353581,
0.1832444221,
0.0476305783,
0.3454276621,
0.0678310692,
0.3870452642,
-0.0282226354,
0.1116022542,
0.0441304035,
-0.1036070883,
-0.0944887027,
0.075... |
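A short sketch of the workaround mentioned in the row above, assuming a datasets version that still accepts `ignore_verifications`; it simply skips the checksum check until the upstream metadata is fixed.
```python
from datasets import load_dataset

# Skip checksum verification while the dataset's checksum metadata is broken upstream.
dataset = load_dataset("kilt_tasks", "wow", ignore_verifications=True)
print(dataset)
```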
https://github.com/huggingface/datasets/issues/3265 | Checksum error for kilt_task_wow | Hi @slyviacassell, thanks for reporting.
Yes, there is an issue with the checksum verification. I'm fixing it.
And as you pointed out, in the meantime, you can circumvent the problem by passing `ignore_verifications=True`. | ## Describe the bug
Checksum verification failed when downloading kilt_tasks_wow. See the error output for details.
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('kilt_tasks', 'wow')
```
## Expected results
Download successful
## Actual results
```
Downloading and preparing dataset kilt_ta... | 33 | Checksum error for kilt_task_wow
## Describe the bug
Checksum verification failed when downloading kilt_tasks_wow. See the error output for details.
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('kilt_tasks', 'wow')
```
## Expected results
Download successful
## Actual results
```
Downlo... | [
-0.0128258085,
-0.2781217992,
-0.0930913538,
0.2878222167,
0.3997595906,
-0.0828802958,
0.2399520427,
0.5319925547,
0.3643353581,
0.1832444221,
0.0476305783,
0.3454276621,
0.0678310692,
0.3870452642,
-0.0282226354,
0.1116022542,
0.0441304035,
-0.1036070883,
-0.0944887027,
0.075... |
https://github.com/huggingface/datasets/issues/3264 | Downloading URL change for WikiAuto Manual, jeopardy and definite_pronoun_resolution | #take
I am willing to fix this. Links can be replaced for WikiAuto Manual and jeopardy with new ones provided by authors.
As for the definite_pronoun_resolution URL, a certificate error seems to be preventing a download. I have the files on my local machine. I can include them in the dataset folder as the files are... | ## Describe the bug
- WikiAuto Manual
The original manual dataset with the following download URL in this [repository](https://github.com/chaojiang06/wiki-auto) was [deleted](https://github.com/chaojiang06/wiki-auto/commit/0af9b066f2b4e02726fb8a9be49283c0ad25367f) by the author.
```
https://github.com/chaoj... | 60 | Downloading URL change for WikiAuto Manual, jeopardy and definite_pronoun_resolution
## Describe the bug
- WikiAuto Manual
The original manual dataset with the following download URL in this [repository](https://github.com/chaojiang06/wiki-auto) was [deleted](https://github.com/chaojiang06/wiki-auto/commit/0a...
0.1274089813,
-0.1285836399,
0.0336005911,
0.0952002481,
0.1693069786,
0.0601077378,
0.2048466653,
0.2117162645,
-0.2019648105,
0.3956473768,
-0.0465765931,
0.0693233907,
0.163928777,
-0.1324874759,
0.0800741613,
-0.3333909512,
-0.0439454466,
-0.1293078065,
0.1895739138,
-0.091... |
https://github.com/huggingface/datasets/issues/3264 | Downloading URL change for WikiAuto Manual, jeopardy and definite_pronoun_resolution | > #take I am willing to fix this. Links can be replaced for WikiAuto Manual and jeopardy.
>
> As for the definite_pronoun_resolution URL, a certificate error seems to be preventing a download. I have the files on my local machine. Anyone has opinions on whether it is preferable for me to host them somewhere (e.g. pe... | ## Describe the bug
- WikiAuto Manual
The original manual dataset with the following download URL in this [repository](https://github.com/chaojiang06/wiki-auto) was [deleted](https://github.com/chaojiang06/wiki-auto/commit/0af9b066f2b4e02726fb8a9be49283c0ad25367f) by the author.
```
https://github.com/chaoj... | 113 | Downloading URL change for WikiAuto Manual, jeopardy and definite_pronoun_resolution
## Describe the bug
- WikiAuto Manual
The original manual dataset with the following download URL in this [repository](https://github.com/chaojiang06/wiki-auto) was [deleted](https://github.com/chaojiang06/wiki-auto/commit/0a...
0.1274089813,
-0.1285836399,
0.0336005911,
0.0952002481,
0.1693069786,
0.0601077378,
0.2048466653,
0.2117162645,
-0.2019648105,
0.3956473768,
-0.0465765931,
0.0693233907,
0.163928777,
-0.1324874759,
0.0800741613,
-0.3333909512,
-0.0439454466,
-0.1293078065,
0.1895739138,
-0.091... |
https://github.com/huggingface/datasets/issues/3261 | Scifi_TV_Shows: Having trouble getting viewer to find appropriate files | Hi ! I think this is because `iter_archive` doesn't support ZIP files yet. See https://github.com/huggingface/datasets/issues/3272
You can navigate into the archive this way instead:
```python
# in split_generators
data_dir = dl_manager.download_and_extract(url)
train_filepath = os.path.join(data_dir, "all-sci-f... | ## Dataset viewer issue for '*Science Fiction TV Show Plots Corpus (Scifi_TV_Shows)*'
**Link:** [link](https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows)
I tried adding both a script (https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/blob/main/Scifi_TV_Shows.py) and some dummy examples (https:/... | 56 | Scifi_TV_Shows: Having trouble getting viewer to find appropriate files
## Dataset viewer issue for '*Science Fiction TV Show Plots Corpus (Scifi_TV_Shows)*'
**Link:** [link](https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows)
I tried adding both a script (https://huggingface.co/datasets/lara-martin/Sc... | [
-0.2578959167,
0.1072367281,
0.0246642418,
0.3973902464,
0.0099126967,
0.1714859754,
-0.0949604958,
0.4377739429,
0.1972424686,
-0.0376981497,
-0.2437501848,
0.063755326,
-0.3711235821,
0.2452094555,
-0.040027488,
-0.0794992521,
0.0521552414,
0.2073266804,
0.0781619027,
0.20335... |
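A standalone sketch of the "extract, then join paths" approach suggested in the row above. The archive URL and the member file name are placeholders, not the dataset's real files; inside a dataset script the same two calls would be made on the `dl_manager` passed to `_split_generators`.
```python
import os
from datasets import DownloadManager

url = "https://example.com/scifi_tv_shows.zip"  # placeholder archive URL

# Download and extract the ZIP archive, then build paths into the extracted folder.
data_dir = DownloadManager().download_and_extract(url)
train_filepath = os.path.join(data_dir, "train.txt")  # placeholder member name
print(train_filepath)
```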
https://github.com/huggingface/datasets/issues/3257 | Use f-strings for string formatting | Hi, I would be glad to help with this. Is there anyone else working on it? | f-strings offer better readability/performance than `str.format` and `%`, so we should use them in all places in our codebase unless there is good reason to keep the older syntax.
> **NOTE FOR CONTRIBUTORS**: To avoid large PRs and possible merge conflicts, do 1-3 modules per PR. Also, feel free to ignore the files ... | 16 | Use f-strings for string formatting
f-strings offer better readability/performance than `str.format` and `%`, so we should use them in all places in our codebase unless there is good reason to keep the older syntax.
> **NOTE FOR CONTRIBUTORS**: To avoid large PRs and possible merge conflicts, do 1-3 modules per P... | [
-0.2523805499,
-0.0726950392,
-0.2729312778,
-0.1937378496,
0.318089515,
-0.2330752015,
-0.077431418,
0.4338004291,
-0.0401994474,
0.1741474271,
-0.0738548711,
0.2662830949,
-0.1548373699,
0.3264695108,
-0.1182607561,
0.0868057758,
0.1632619053,
0.3807905018,
-0.2161478251,
0.1... |
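A small illustration of the refactor requested in the row above: the two older formatting styles and the equivalent f-string produce the same output.
```python
name, n_configs = "wikiann", 3

msg_percent = "Loaded %s with %d configs" % (name, n_configs)
msg_format = "Loaded {} with {} configs".format(name, n_configs)
msg_fstring = f"Loaded {name} with {n_configs} configs"

# All three formatting styles yield identical strings; f-strings are the preferred form.
assert msg_percent == msg_format == msg_fstring
print(msg_fstring)
```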
https://github.com/huggingface/datasets/issues/3257 | Use f-strings for string formatting | Hi @Carlosbogo,
would you be interested in replacing the `.format` and `%` syntax with f-strings in the modules in the `datasets` directory since @Mehdi2402 has opened a PR that does that for all the other directories? | f-strings offer better readability/performance than `str.format` and `%`, so we should use them in all places in our codebase unless there is good reason to keep the older syntax.
> **NOTE FOR CONTRIBUTORS**: To avoid large PRs and possible merge conflicts, do 1-3 modules per PR. Also, feel free to ignore the files ... | 36 | Use f-strings for string formatting
f-strings offer better readability/performance than `str.format` and `%`, so we should use them in all places in our codebase unless there is good reason to keep the older syntax.
> **NOTE FOR CONTRIBUTORS**: To avoid large PRs and possible merge conflicts, do 1-3 modules per P... | [
-0.2773165107,
-0.0062523508,
-0.2638719678,
-0.2012526393,
0.352601558,
-0.2338513881,
-0.0026533506,
0.3380841613,
-0.1398147792,
0.1589685231,
-0.0092811594,
0.3141451776,
-0.1093443111,
0.2844198942,
-0.1453707367,
0.0645593554,
0.0962566212,
0.3454552889,
-0.2286487669,
0.... |
https://github.com/huggingface/datasets/issues/3253 | `GeneratorBasedBuilder` does not support `None` values | Hi,
thanks for reporting and providing a minimal reproducible example.
This line of the PR I've linked in our discussion on the Forum will add support for `None` values:
https://github.com/huggingface/datasets/blob/a53de01842aac65c66a49b2439e18fa93ff73ceb/src/datasets/features/features.py#L835
I expect that ... | ## Describe the bug
`GeneratorBasedBuilder` does not support `None` values.
## Steps to reproduce the bug
See [this repository](https://github.com/pavel-lexyr/huggingface-datasets-bug-reproduction) for minimal reproduction.
## Expected results
Dataset is initialized with a `None` value in the `value` column.
... | 38 | `GeneratorBasedBuilder` does not support `None` values
## Describe the bug
`GeneratorBasedBuilder` does not support `None` values.
## Steps to reproduce the bug
See [this repository](https://github.com/pavel-lexyr/huggingface-datasets-bug-reproduction) for minimal reproduction.
## Expected results
Dataset is... | [
-0.3755475879,
-0.0556597337,
0.017514525,
0.362069726,
0.2084514201,
-0.0467278734,
0.2878645062,
0.3196857572,
-0.0551303513,
0.4114516973,
-0.0919492543,
0.2093870789,
-0.0303598605,
0.1589505821,
-0.0881237835,
-0.1058660746,
-0.0093904072,
0.4736220539,
-0.1599651277,
-0.1... |
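A minimal sketch of the nullable behaviour the linked PR is meant to enable; whether `None` survives depends on the installed `datasets` version, so the expected output is an assumption.
```python
from datasets import Dataset, Features, Value

features = Features({"value": Value("float32")})
ds = Dataset.from_dict({"value": [1.0, None]}, features=features)

# With None support in place, the null entry is preserved instead of being coerced.
print(ds[1])  # expected: {'value': None}
```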
https://github.com/huggingface/datasets/issues/3247 | Loading big json dataset raises pyarrow.lib.ArrowNotImplementedError | Hi,
this issue is similar to https://github.com/huggingface/datasets/issues/3093, so you can either use the solution provided there or try to load the data in one chunk (you can control the chunk size by specifying the `chunksize` parameter (`int`) in `load_dataset`).
@lhoestq Is this worth opening an issue on Ji... | ## Describe the bug
When trying to create a dataset from a json file with around 25MB, the following error is raised `pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct`
Splitting the big file into smaller ones and then loading it with the `lo... | 94 | Loading big json dataset raises pyarrow.lib.ArrowNotImplementedError
## Describe the bug
When trying to create a dataset from a json file with around 25MB, the following error is raised `pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct`
Sp... | [
-0.2685220242,
0.2580777109,
0.0205112435,
0.4228529036,
0.4146388471,
-0.0655422807,
0.101144582,
0.4812646806,
-0.1803747267,
0.008806997,
0.0041477215,
0.5188928246,
-0.0924273357,
-0.111740388,
-0.1435553432,
-0.2028794736,
0.0869870931,
0.2631750405,
0.0699901059,
0.129294... |
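A sketch of the chunk-size workaround mentioned in the row above; the data file path is a placeholder and the exact chunk size is an assumption.
```python
from datasets import load_dataset

# chunksize (in bytes) is forwarded to the JSON builder, so a larger value lets
# a single block hold very long "text" fields.
dataset = load_dataset(
    "json",
    data_files="testdata/mydata.json",  # placeholder path
    chunksize=100 << 20,                # assumption: 100 MiB blocks
)
print(dataset)
```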
https://github.com/huggingface/datasets/issues/3247 | Loading big json dataset raises pyarrow.lib.ArrowNotImplementedError | I agree I would expect PyArrow to be able to handle this, do you want to open the issue @mariosasko ?
Although maybe it's possible to fix struct casting on our side without hurting performance too much, if it's simply a matter of reordering the arrays in the StructArray | ## Describe the bug
When trying to create a dataset from a json file with around 25MB, the following error is raised `pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct`
Splitting the big file into smaller ones and then loading it with the `lo... | 49 | Loading big json dataset raises pyarrow.lib.ArrowNotImplementedError
## Describe the bug
When trying to create a dataset from a json file with around 25MB, the following error is raised `pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct`
Sp... | [
-0.2685220242,
0.2580777109,
0.0205112435,
0.4228529036,
0.4146388471,
-0.0655422807,
0.101144582,
0.4812646806,
-0.1803747267,
0.008806997,
0.0041477215,
0.5188928246,
-0.0924273357,
-0.111740388,
-0.1435553432,
-0.2028794736,
0.0869870931,
0.2631750405,
0.0699901059,
0.129294... |
https://github.com/huggingface/datasets/issues/3242 | Adding ANERcorp-CAMeLLab dataset | Adding ANERcorp dataset
## Adding a Dataset
- **Name:** *ANERcorp-CAMeLLab*
- **Description:** *Since its creation in 2008, the ANERcorp dataset (Benajiba & Rosso, 2008) has been a standard reference used by Arabic named entity recognition researchers around the world. However, over time, this dataset was copied o... | null | 248 | Adding ANERcorp-CAMeLLab dataset
Adding ANERcorp dataset
## Adding a Dataset
- **Name:** *ANERcorp-CAMeLLab*
- **Description:** *Since its creation in 2008, the ANERcorp dataset (Benajiba & Rosso, 2008) has been a standard reference used by Arabic named entity recognition researchers around the world. However... | [
-0.0976279825,
0.1674585193,
-0.1779311895,
0.1465998292,
-0.0990947783,
-0.1637660116,
0.4309604764,
0.2683173716,
-0.0686768144,
0.3724307716,
-0.088210769,
0.0207344554,
0.0654739216,
0.1001140699,
0.2216454148,
-0.0687983781,
-0.1000754461,
0.0082845418,
0.0419872515,
0.000... |
https://github.com/huggingface/datasets/issues/3240 | Couldn't reach data file for disaster_response_messages | It looks like the dataset isn't available anymore on appen.com
The CSV files appear to still be available at https://www.kaggle.com/landlord/multilingual-disaster-response-messages though. It says that the data are under the CC0 license so I guess we can host the dataset elsewhere instead ? | ## Describe the bug
The following command gives a ConnectionError.
## Steps to reproduce the bug
```python
disaster = load_dataset('disaster_response_messages')
```
## Error
```
ConnectionError: Couldn't reach https://datasets.appen.com/appen_datasets/disaster_response_data/disaster_response_messages_training.... | 42 | Couldn't reach data file for disaster_response_messages
## Describe the bug
The following command gives a ConnectionError.
## Steps to reproduce the bug
```python
disaster = load_dataset('disaster_response_messages')
```
## Error
```
ConnectionError: Couldn't reach https://datasets.appen.com/appen_datasets/d... | [
-0.3969844282,
0.0855663195,
-0.1614029855,
0.2481056154,
0.2457594573,
-0.0362626836,
0.3150369227,
0.031977009,
-0.1832845807,
0.103593573,
-0.0032423015,
0.1029789522,
-0.0037673693,
0.0743470192,
0.0495314486,
0.1760584116,
-0.0236672498,
0.0551095046,
-0.1005651876,
0.2073... |
https://github.com/huggingface/datasets/issues/3236 | Loading of datasets changed in #3110 returns no examples | Hi @eladsegal, thanks for reporting.
I am sorry, but I can't reproduce the bug:
```
In [1]: from datasets import load_dataset
In [2]: ds = load_dataset("qasper")
Downloading: 5.11kB [00:00, ?B/s]
Downloading and preparing dataset qasper/qasper (download: 9.88 MiB, generated: 35.11 MiB, post-processed: Unknown... | ## Describe the bug
Loading of datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples:
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 0
})
validation: Dataset({
features: ['id',... | 173 | Loading of datasets changed in #3110 returns no examples
## Describe the bug
Loading of datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples:
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 0
... | [
0.0481985547,
-0.0869977772,
-0.0242289286,
0.2818650603,
0.3240279555,
-0.0273514315,
0.2742410302,
0.2374753803,
0.1709858179,
0.2015544921,
-0.071567364,
0.3162328601,
-0.0423279032,
0.1720466316,
0.1071109399,
-0.2534898818,
0.1101513356,
0.1358653903,
-0.3096776307,
-0.017... |
https://github.com/huggingface/datasets/issues/3236 | Loading of datasets changed in #3110 returns no examples | I can't reproduce either in my environment (macos, python 3.7).
In your case it generates zero examples. This can only happen if the extraction of the TAR archive doesn't output the right filenames. Indeed if the `qasper` script can't find the right file to load, it's currently ignored and it returns zero examples. ... | ## Describe the bug
Loading of datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples:
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 0
})
validation: Dataset({
features: ['id',... | 107 | Loading of datasets changed in #3110 returns no examples
## Describe the bug
Loading of datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples:
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 0
... | [
0.0481985547,
-0.0869977772,
-0.0242289286,
0.2818650603,
0.3240279555,
-0.0273514315,
0.2742410302,
0.2374753803,
0.1709858179,
0.2015544921,
-0.071567364,
0.3162328601,
-0.0423279032,
0.1720466316,
0.1071109399,
-0.2534898818,
0.1101513356,
0.1358653903,
-0.3096776307,
-0.017... |
https://github.com/huggingface/datasets/issues/3236 | Loading of datasets changed in #3110 returns no examples | I just tried again on python 3.8 and I was able to reproduce the issue. Let me work on a fix | ## Describe the bug
Loading of datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples:
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 0
})
validation: Dataset({
features: ['id',... | 21 | Loading of datasets changed in #3110 returns no examples
## Describe the bug
Loading of datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples:
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 0
... | [
0.0481985547,
-0.0869977772,
-0.0242289286,
0.2818650603,
0.3240279555,
-0.0273514315,
0.2742410302,
0.2374753803,
0.1709858179,
0.2015544921,
-0.071567364,
0.3162328601,
-0.0423279032,
0.1720466316,
0.1071109399,
-0.2534898818,
0.1101513356,
0.1358653903,
-0.3096776307,
-0.017... |
https://github.com/huggingface/datasets/issues/3236 | Loading of datasets changed in #3110 returns no examples | Ok I found the issue. It's not related to python 3.8 in itself though. This issue happens because your local installation of `datasets` is outdated compared to the changes to datasets in #3110
To fix this you just have to pull the latest changes from `master` :)
Let me know if that helps !
--------------
He... | ## Describe the bug
Loading of datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples:
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 0
})
validation: Dataset({
features: ['id',... | 183 | Loading of datasets changed in #3110 returns no examples
## Describe the bug
Loading of datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples:
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 0
... | [
0.0481985547,
-0.0869977772,
-0.0242289286,
0.2818650603,
0.3240279555,
-0.0273514315,
0.2742410302,
0.2374753803,
0.1709858179,
0.2015544921,
-0.071567364,
0.3162328601,
-0.0423279032,
0.1720466316,
0.1071109399,
-0.2534898818,
0.1101513356,
0.1358653903,
-0.3096776307,
-0.017... |
https://github.com/huggingface/datasets/issues/3236 | Loading of datasets changed in #3110 returns no examples | Thanks!
But what about code that is already using an older version of datasets?
The reason I encountered this issue was that suddenly one of my repos with version 1.12.1 started getting 0 examples.
I handled it by adding `revision` to `load_dataset`, but I guess it would still be an issue for other users who doesn't k... | ## Describe the bug
Loading of datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples:
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 0
})
validation: Dataset({
features: ['id',... | 57 | Loading of datasets changed in #3110 returns no examples
## Describe the bug
Loading of datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples:
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 0
... | [
0.0481985547,
-0.0869977772,
-0.0242289286,
0.2818650603,
0.3240279555,
-0.0273514315,
0.2742410302,
0.2374753803,
0.1709858179,
0.2015544921,
-0.071567364,
0.3162328601,
-0.0423279032,
0.1720466316,
0.1071109399,
-0.2534898818,
0.1101513356,
0.1358653903,
-0.3096776307,
-0.017... |
https://github.com/huggingface/datasets/issues/3236 | Loading of datasets changed in #3110 returns no examples | Hi, in 1.12.1 it uses the dataset scripts from that time, not the one on master.
It only uses the datasets from master if you installed `datasets` from source, or if the dataset isn't available in your local version (in this case it shows a warning and it loads from master).
| ## Describe the bug
Loading of datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples:
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 0
})
validation: Dataset({
features: ['id',... | 51 | Loading of datasets changed in #3110 returns no examples
## Describe the bug
Loading of datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples:
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 0
... | [
0.0481985547,
-0.0869977772,
-0.0242289286,
0.2818650603,
0.3240279555,
-0.0273514315,
0.2742410302,
0.2374753803,
0.1709858179,
0.2015544921,
-0.071567364,
0.3162328601,
-0.0423279032,
0.1720466316,
0.1071109399,
-0.2534898818,
0.1101513356,
0.1358653903,
-0.3096776307,
-0.017... |
https://github.com/huggingface/datasets/issues/3236 | Loading of datasets changed in #3110 returns no examples | OK, I understand the issue a bit better now.
I see I wasn't on 1.12.1, but on 1.12.1.dev0 and since it is a dev version it uses master.
So users that use an old dev version must specify revision or else they'll encounter this problem.
BTW, when I opened the issue I installed the latest master version with
```
pi... | ## Describe the bug
Loading of datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples:
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 0
})
validation: Dataset({
features: ['id',... | 113 | Loading of datasets changed in #3110 returns no examples
## Describe the bug
Loading of datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples:
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 0
... | [
0.0481985547,
-0.0869977772,
-0.0242289286,
0.2818650603,
0.3240279555,
-0.0273514315,
0.2742410302,
0.2374753803,
0.1709858179,
0.2015544921,
-0.071567364,
0.3162328601,
-0.0423279032,
0.1720466316,
0.1071109399,
-0.2534898818,
0.1101513356,
0.1358653903,
-0.3096776307,
-0.017... |
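A sketch of the `revision` workaround mentioned in the rows above for installs pinned to an older dev version; the revision string is an assumption, and any tag matching the installed library version should do.
```python
from datasets import load_dataset

# Pin the dataset script to a specific revision of the datasets repository so it
# matches the version of the library being used.
ds = load_dataset("qasper", revision="1.12.1")
print(ds)
```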
https://github.com/huggingface/datasets/issues/3232 | The Xsum datasets seems not able to download. | > Hi ! On my side the URL is working fine, could you try again ?
I tried it again and still cannot download the file (it might be because of my location). Could you please provide another download link (such as Google Drive)? :>
The download link of the Xsum dataset provided in the repository is [Link](http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz). It seems that it cannot be downloaded.
## Steps to reproduce the bug
```python
load_dataset('xsum')
```
## Actual results
```python
r... | 41 | The Xsum datasets seems not able to download.
## Describe the bug
The download link of the Xsum dataset provided in the repository is [Link](http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz). It seems that it cannot be downloaded.
## Steps to reproduce the bug
```python
load_dataset('x... | [
-0.2966645062,
-0.2934873998,
-0.0182695277,
0.3797835112,
0.3340881765,
0.0487929732,
-0.1388902217,
0.1607493609,
0.380304873,
0.308460474,
-0.2573417723,
0.0936074331,
0.2472653687,
0.1655268371,
0.2194852233,
-0.2016707361,
-0.0653339699,
-0.1392727047,
-0.1925852448,
-0.20... |
https://github.com/huggingface/datasets/issues/3232 | The Xsum datasets seems not able to download. | I don't know other download links - this is the one provided by the authors of the dataset. Maybe you can try downloading from another location ? There are several solutions: a VPN, a remote VM or Google Colab for example. | ## Describe the bug
The download link of the Xsum dataset provided in the repository is [Link](http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz). It seems that it cannot be downloaded.
## Steps to reproduce the bug
```python
load_dataset('xsum')
```
## Actual results
```python
r... | 41 | The Xsum datasets seems not able to download.
## Describe the bug
The download link of the Xsum dataset provided in the repository is [Link](http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz). It seems that it cannot be downloaded.
## Steps to reproduce the bug
```python
load_dataset('x... | [
-0.3314230442,
-0.3674333394,
-0.0869147927,
0.2454146445,
0.2638825178,
-0.0272142272,
-0.1818692684,
0.2482026517,
0.4178062975,
0.4426147938,
-0.2948056161,
0.2520473599,
0.2052698135,
0.229403913,
0.2333916277,
-0.0991052091,
-0.0418626443,
-0.0463894121,
-0.2623508573,
-0.... |
https://github.com/huggingface/datasets/issues/3232 | The Xsum datasets seems not able to download. | > I don't know other download links - this is the one provided by the authors of the dataset. Maybe you can try downloading from another location ? There are several solutions: a VPN, a remote VM or Google Colab for example.
:> ok. Thanks for your reply. | ## Describe the bug
The download link of the Xsum dataset provided in the repository is [Link](http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz). It seems that it cannot be downloaded.
## Steps to reproduce the bug
```python
load_dataset('xsum')
```
## Actual results
```python
r... | 48 | The Xsum datasets seems not able to download.
## Describe the bug
The download link of the Xsum dataset provided in the repository is [Link](http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz). It seems that it cannot be downloaded.
## Steps to reproduce the bug
```python
load_dataset('x... | [
-0.3331863284,
-0.3943813741,
-0.0960558727,
0.2603081167,
0.2701826692,
-0.0308501571,
-0.1864698827,
0.2266122401,
0.4172305763,
0.4343669415,
-0.2952316105,
0.2692236006,
0.1890334338,
0.2628757656,
0.2242891639,
-0.1081005707,
-0.0503329001,
-0.0338187404,
-0.2698960602,
-0... |
https://github.com/huggingface/datasets/issues/3227 | Error in `Json(datasets.ArrowBasedBuilder)` class | I have additionally identified the source of the error, being that [this condition](https://github.com/huggingface/datasets/blob/fc46bba66ba4f432cc10501c16a677112e13984c/src/datasets/packaged_modules/json/json.py#L124-L126) in the file
`python3.8/site-packages/datasets/packaged_modules/json/json.py` is not being enter... | ## Describe the bug
When a json file contains a `text` field that is larger than the block_size, the JSON dataset builder fails.
## Steps to reproduce the bug
Create a folder that contains the following:
```
.
├── testdata
│ └── mydata.json
└── test.py
```
Please download [this file](https://github.com/... | 113 | Error in `Json(datasets.ArrowBasedBuilder)` class
## Describe the bug
When a json file contains a `text` field that is larger than the block_size, the JSON dataset builder fails.
## Steps to reproduce the bug
Create a folder that contains the following:
```
.
├── testdata
│ └── mydata.json
└── test.py
``... | [
-0.2403119802,
0.1650995016,
-0.1146983206,
0.4822804034,
0.0954777747,
0.1549095958,
0.1611650586,
0.4468063116,
0.1629001051,
0.0544317663,
0.2402578294,
0.2136399299,
-0.0641991049,
0.0986969247,
-0.0887032151,
-0.063235119,
-0.0958853215,
0.2196529508,
0.0554810464,
0.23881... |
https://github.com/huggingface/datasets/issues/3227 | Error in `Json(datasets.ArrowBasedBuilder)` class | Hi ! I think the issue comes from the fact that your JSON file is not a valid JSON Lines file.
Each example should be on one single line.
Can you try fixing the format to have one line per example and try again ? | ## Describe the bug
When a json file contains a `text` field that is larger than the block_size, the JSON dataset builder fails.
## Steps to reproduce the bug
Create a folder that contains the following:
```
.
├── testdata
│ └── mydata.json
└── test.py
```
Please download [this file](https://github.com/... | 45 | Error in `Json(datasets.ArrowBasedBuilder)` class
## Describe the bug
When a json file contains a `text` field that is larger than the block_size, the JSON dataset builder fails.
## Steps to reproduce the bug
Create a folder that contains the following:
```
.
├── testdata
│ └── mydata.json
└── test.py
``... | [
-0.2403119802,
0.1650995016,
-0.1146983206,
0.4822804034,
0.0954777747,
0.1549095958,
0.1611650586,
0.4468063116,
0.1629001051,
0.0544317663,
0.2402578294,
0.2136399299,
-0.0641991049,
0.0986969247,
-0.0887032151,
-0.063235119,
-0.0958853215,
0.2196529508,
0.0554810464,
0.23881... |
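A minimal sketch of the fix suggested in the row above: write one JSON object per line (JSON Lines) instead of a single multi-line document. The file name and records are placeholders.
```python
import json

records = [
    {"id": 1, "text": "a very long passage ..."},
    {"id": 2, "text": "another example"},
]

# One JSON object per line, which is what the json loader expects.
with open("testdata/mydata.json", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```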
https://github.com/huggingface/datasets/issues/3227 | Error in `Json(datasets.ArrowBasedBuilder)` class | :open_mouth: you're right, that did it! I just put everything on a single line (my file only has a single example) and that fixed the error. Thank you so much! | ## Describe the bug
When a json file contains a `text` field that is larger than the block_size, the JSON dataset builder fails.
## Steps to reproduce the bug
Create a folder that contains the following:
```
.
├── testdata
│ └── mydata.json
└── test.py
```
Please download [this file](https://github.com/... | 30 | Error in `Json(datasets.ArrowBasedBuilder)` class
## Describe the bug
When a json file contains a `text` field that is larger than the block_size, the JSON dataset builder fails.
## Steps to reproduce the bug
Create a folder that contains the following:
```
.
├── testdata
│ └── mydata.json
└── test.py
``... | [
-0.2403119802,
0.1650995016,
-0.1146983206,
0.4822804034,
0.0954777747,
0.1549095958,
0.1611650586,
0.4468063116,
0.1629001051,
0.0544317663,
0.2402578294,
0.2136399299,
-0.0641991049,
0.0986969247,
-0.0887032151,
-0.063235119,
-0.0958853215,
0.2196529508,
0.0554810464,
0.23881... |
https://github.com/huggingface/datasets/issues/3210 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py | Hi ! Do you have some kind of proxy in your browser that gives you access to internet ?
Maybe you're having this error because you don't have access to this URL from python ? | when I use python examples/pytorch/translation/run_translation.py --model_name_or_path examples/pytorch/translation/opus-mt-en-ro --do_train --do_eval --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config_name ro-en --output_dir /tmp/tst-translation --per_device_tra... | 35 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py
when I use python examples/pytorch/translation/run_translation.py --model_name_or_path examples/pytorch/translation/opus-mt-en-ro --do_train --do_eval --source_lang en --target_lan... | [
-0.0930814818,
-0.3882348537,
-0.0091212941,
0.1529587954,
0.3836989403,
-0.0650623813,
-0.0014800368,
0.0271893367,
-0.1174205989,
0.4333746731,
-0.1092189252,
-0.1595529914,
0.3426676691,
0.134974882,
0.3313347101,
-0.4141204357,
-0.0376143903,
-0.2521337569,
-0.3091402352,
0... |
https://github.com/huggingface/datasets/issues/3210 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py | You don't need authentication to access those github hosted files
Please check that you can access this URL from your browser and also from your terminal.
when I use python examples/pytorch/translation/run_translation.py --model_name_or_path examples/pytorch/translation/opus-mt-en-ro --do_train --do_eval --source_lang en --target_lan... | [
-0.1509537101,
-0.2567588389,
-0.015501881,
0.2449311763,
0.2433852553,
-0.0078564612,
0.0931585729,
0.096603632,
-0.0795091018,
0.4065464437,
-0.2241298705,
-0.2348440439,
0.2964992225,
0.0799347758,
0.1829089969,
-0.1545805335,
-0.0657936707,
-0.2579963505,
-0.1552281231,
0.0... |
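A quick reachability check along the lines suggested in the rows above; if this fails from Python while the URL opens in a browser, a proxy or firewall setting is the likely culprit.
```python
import requests

url = "https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py"
response = requests.get(url, timeout=10)
print(response.status_code)  # 200 means the file is reachable from this machine
```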
https://github.com/huggingface/datasets/issues/3204 | FileNotFoundError for TupleIE dataste | @mariosasko @lhoestq Could you give me an update on how to load the dataset after the fix?
Thanks. | Hi,
`dataset = datasets.load_dataset('tuple_ie', 'all')`
returns a FileNotFound error. Is the data not available?
Many thanks.
| 18 | FileNotFoundError for TupleIE dataste
Hi,
`dataset = datasets.load_dataset('tuple_ie', 'all')`
returns a FileNotFound error. Is the data not available?
Many thanks.
@mariosasko @lhoestq Could you give me an update on how to load the dataset after the fix?
Thanks. | [
-0.1196368784,
0.0451599471,
-0.0531533621,
0.4988976419,
0.2058222592,
0.1560678333,
0.2879103124,
0.2625412941,
0.2662693262,
0.0063053216,
0.0560928844,
-0.0138807008,
-0.1284626871,
0.2602333426,
-0.0655836388,
-0.1212598756,
-0.1651826352,
0.2249732614,
0.0427774936,
-0.01... |
https://github.com/huggingface/datasets/issues/3204 | FileNotFoundError for TupleIE dataste | Hi @arda-vianai,
first, you can try:
```python
import datasets
dataset = datasets.load_dataset('tuple_ie', 'all', revision="master")
```
If this doesn't work, your version of `datasets` is missing some features that are required to run the dataset script, so install the master version with the following command... | Hi,
`dataset = datasets.load_dataset('tuple_ie', 'all')`
returns a FileNotFound error. Is the data not available?
Many thanks.
| 64 | FileNotFoundError for TupleIE dataste
Hi,
`dataset = datasets.load_dataset('tuple_ie', 'all')`
returns a FileNotFound error. Is the data not available?
Many thanks.
Hi @arda-vianai,
first, you can try:
```python
import datasets
dataset = datasets.load_dataset('tuple_ie', 'all', revision="master")
`... | [
-0.1884026527,
0.0600263104,
-0.1039322913,
0.2780112326,
0.2167992145,
0.2223354131,
0.2928451598,
0.4254568517,
0.3324117362,
0.1265628785,
0.0710725933,
0.0770497769,
-0.0805492923,
0.2199545801,
-0.1739465892,
0.0059489533,
-0.0722433999,
0.3442076743,
-0.0204529874,
-0.012... |
https://github.com/huggingface/datasets/issues/3204 | FileNotFoundError for TupleIE dataste | @mariosasko
Thanks, it is working now. I actually did that before but I didn't restart the kernel. I restarted it and it works now. My bad!!!
Many thanks and great job!
-arda | Hi,
`dataset = datasets.load_dataset('tuple_ie', 'all')`
returns a FileNotFound error. Is the data not available?
Many thanks.
| 32 | FileNotFoundError for TupleIE dataste
Hi,
`dataset = datasets.load_dataset('tuple_ie', 'all')`
returns a FileNotFound error. Is the data not available?
Many thanks.
@mariosasko
Thanks, it is working now. I actually did that before but I didn't restart the kernel. I restarted it and it works now. My bad!... | [
-0.1555857956,
-0.1022432595,
-0.0895033628,
0.444070518,
0.3425714374,
0.0214987397,
0.3312878907,
0.2618189752,
0.3505803049,
0.1980802119,
0.0239385068,
0.1315807849,
-0.1134963557,
0.2030687034,
-0.1134344563,
-0.036822103,
-0.1141064316,
0.2126444131,
-0.0038393643,
-0.041... |
https://github.com/huggingface/datasets/issues/3191 | Dataset viewer issue for '*compguesswhat*' | ```python
>>> import datasets
>>> dataset = datasets.load_dataset('compguesswhat', name='compguesswhat-original',split='train', streaming=True)
>>> next(iter(dataset))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/sit... | ## Dataset viewer issue for '*compguesswhat*'
**Link:** https://huggingface.co/datasets/compguesswhat
File not found
Am I the one who added this dataset ? No
| 137 | Dataset viewer issue for '*compguesswhat*'
## Dataset viewer issue for '*compguesswhat*'
**Link:** https://huggingface.co/datasets/compguesswhat
File not found
Am I the one who added this dataset ? No
```python
>>> import datasets
>>> dataset = datasets.load_dataset('compguesswhat', name='compguesswhat... | [
-0.1937608123,
-0.1040919349,
-0.0767582282,
0.3095780313,
0.132095933,
0.2199115604,
0.2739969492,
0.2762228549,
0.0248608999,
0.0397387445,
-0.0757460818,
0.2187298536,
-0.2102134526,
-0.1109090373,
0.4341827035,
0.1308069974,
-0.0155651476,
0.3229696453,
-0.2597893476,
0.013... |
https://github.com/huggingface/datasets/issues/3190 | combination of shuffle and filter results in a bug | Hi ! There was a regression in `datasets` 1.12 that introduced this bug. It has been fixed in #3019 in 1.13
Can you try to update `datasets` and try again ? | ## Describe the bug
Hi,
I would like to shuffle a dataset, then filter it based on each existing label. however, the combination of `filter`, `shuffle` seems to results in a bug. In the minimal example below, as you see in the filtered results, the filtered labels are not unique, meaning filter has not worked. Any su... | 31 | combination of shuffle and filter results in a bug
## Describe the bug
Hi,
I would like to shuffle a dataset, then filter it based on each existing label. however, the combination of `filter`, `shuffle` seems to results in a bug. In the minimal example below, as you see in the filtered results, the filtered labels ... | [
0.2327428907,
-0.1531245112,
0.0333000906,
0.152129814,
0.1432728022,
-0.111695841,
0.2651186585,
0.0427486822,
0.0969828069,
0.1914818138,
-0.3320682645,
0.5272731781,
-0.2565172315,
0.0196749438,
-0.0846300647,
0.1135222986,
0.2200107574,
-0.0801777095,
0.2401883751,
-0.16591... |
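A tiny repro-style sketch of the shuffle-then-filter pattern from the report above; on datasets 1.13 or later the regression is fixed and only the requested label remains.
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"], "label": [0, 1, 0, 1]})

# Shuffle first, then keep a single label; after the fix the result is consistent.
filtered = ds.shuffle(seed=42).filter(lambda example: example["label"] == 0)
print(filtered["label"])  # expected: only 0s
```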
https://github.com/huggingface/datasets/issues/3189 | conll2003 incorrect label explanation | Hi @BramVanroy,
since these fields are of type `ClassLabel` (you can check this with `dset.features`), you can inspect the possible values with:
```python
dset.features[field_name].feature.names # .feature because it's a sequence of labels
```
and to find the mapping between names and integers, use:
```pyth... | In the [conll2003](https://huggingface.co/datasets/conll2003#data-fields) README, the labels are described as follows
> - `id`: a `string` feature.
> - `tokens`: a `list` of `string` features.
> - `pos_tags`: a `list` of classification labels, with possible values including `"` (0), `''` (1), `#` (2), `$` (3), `(`... | 63 | conll2003 incorrect label explanation
In the [conll2003](https://huggingface.co/datasets/conll2003#data-fields) README, the labels are described as follows
> - `id`: a `string` feature.
> - `tokens`: a `list` of `string` features.
> - `pos_tags`: a `list` of classification labels, with possible values including ... | [
0.2851009965,
-0.306415379,
0.0021603352,
0.6274859309,
0.1388071328,
-0.0540367439,
0.3389792144,
-0.006268464,
-0.3376255333,
0.0564624928,
-0.2439725101,
0.1636956185,
0.1464698315,
0.5847405195,
-0.1180654541,
-0.0284846481,
0.1054673344,
-0.1323459893,
0.0885567591,
-0.265... |
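A short sketch of the inspection pattern described in the row above; the tag name passed to `str2int` is just an example value.
```python
from datasets import load_dataset

dset = load_dataset("conll2003", split="train")
pos_feature = dset.features["pos_tags"].feature  # .feature because it's a sequence of labels

print(pos_feature.names[:5])       # label names
print(pos_feature.str2int("NNP"))  # name -> integer
print(pos_feature.int2str(0))      # integer -> name
```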
https://github.com/huggingface/datasets/issues/3188 | conll2002 issues | Hi ! Thanks for reporting :)
This is related to https://github.com/huggingface/datasets/issues/2742, I'm working on it. It should fix the viewer for around 80 datasets.
| **Link:** https://huggingface.co/datasets/conll2002
The dataset viewer throws a server error when trying to preview the dataset.
```
Message: Extraction protocol 'train' for file at 'https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/esp.train' is not implemented yet
```
I... | 24 | conll2002 issues
**Link:** https://huggingface.co/datasets/conll2002
The dataset viewer throws a server error when trying to preview the dataset.
```
Message: Extraction protocol 'train' for file at 'https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/esp.train' is not implem... | [
-0.0604760423,
0.2116218954,
0.0638272017,
0.3273207247,
0.0287138093,
0.0626272857,
0.1642676592,
0.2000191808,
-0.4097282887,
0.0621840246,
-0.1819021702,
0.1741392165,
-0.1920645535,
0.3275819123,
0.1100306064,
-0.1469973475,
0.0231315661,
0.2660285532,
-0.2740883529,
-0.134... |
https://github.com/huggingface/datasets/issues/3188 | conll2002 issues | Ah, hadn't seen that sorry.
The scrambled "point of contact" is a separate issue though, I think. | **Link:** https://huggingface.co/datasets/conll2002
The dataset viewer throws a server error when trying to preview the dataset.
```
Message: Extraction protocol 'train' for file at 'https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/esp.train' is not implemented yet
```
I... | 17 | conll2002 issues
**Link:** https://huggingface.co/datasets/conll2002
The dataset viewer throws a server error when trying to preview the dataset.
```
Message: Extraction protocol 'train' for file at 'https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/esp.train' is not implem... | [
-0.0255833752,
0.2223012298,
0.0360253826,
0.3023372889,
-0.0530182272,
0.0704936087,
0.1777101308,
0.2525488734,
-0.4730507135,
0.0556401759,
-0.0888660923,
0.1852440387,
-0.1238657758,
0.2847290039,
0.0831432194,
-0.1410981417,
0.0679345801,
0.276576966,
-0.194239676,
-0.1755... |
https://github.com/huggingface/datasets/issues/3186 | Dataset viewer for nli_tr | It's an issue with the streaming mode:
```python
>>> import datasets
>>> dataset = datasets.load_dataset('nli_tr', name='snli_tr',split='test', streaming=True)
>>> next(iter(dataset))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.ven... | ## Dataset viewer issue for '*nli_tr*'
**Link:** https://huggingface.co/datasets/nli_tr
Hello,
Thank you for the new dataset preview feature that will help the users to view the datasets online.
We just noticed that the dataset viewer widget in the `nli_tr` dataset shows the error below. The error must be d... | 119 | Dataset viewer for nli_tr
## Dataset viewer issue for '*nli_tr*'
**Link:** https://huggingface.co/datasets/nli_tr
Hello,
Thank you for the new dataset preview feature that will help the users to view the datasets online.
We just noticed that the dataset viewer widget in the `nli_tr` dataset shows the erro... | [
-0.3372096419,
0.1444133818,
-0.0150283743,
0.1737514287,
-0.0588113405,
0.1114987433,
0.1539348364,
0.1620102674,
-0.1715070754,
0.0934872851,
-0.0297645237,
0.239446193,
-0.1792830229,
0.1100378186,
0.016051231,
-0.149910748,
-0.2212218046,
0.2402867526,
-0.248372063,
0.26567... |
https://github.com/huggingface/datasets/issues/3185 | 7z dataset preview not implemented? | It's a bug in the dataset viewer: the dataset cannot be downloaded in streaming mode, but since the dataset is relatively small, the dataset viewer should have fallback to normal mode. Working on a fix. | ## Dataset viewer issue for dataset 'samsum'
**Link:** https://huggingface.co/datasets/samsum
Server Error
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol '7z' for file at 'https://arxiv.org/src/1911.12237v2/anc/corpus.7z' is not implemented yet
| 35 | 7z dataset preview not implemented?
## Dataset viewer issue for dataset 'samsum'
**Link:** https://huggingface.co/datasets/samsum
Server Error
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol '7z' for file at 'https://arxiv.org/src/1911.12237v2/anc/corpus.7z' is not im... | [
-0.4839608371,
-0.5628320575,
-0.0067397994,
0.090974398,
0.0615778528,
-0.0313980207,
0.0058004321,
0.5215321183,
0.1930067688,
0.305977881,
-0.1036302969,
0.4704754651,
0.073757194,
0.0165081043,
-0.2501565218,
-0.1724486947,
0.0452871509,
0.2829019725,
-0.0775604397,
-0.0043... |
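A sketch of the fallback described in the row above, assuming the optional `py7zr` dependency is installed: the 7z archive cannot be streamed yet, but a regular (non-streaming) load still works.
```python
from datasets import load_dataset

# Non-streaming load downloads and extracts the .7z archive locally.
samsum = load_dataset("samsum", split="train")
print(samsum[0])
```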
https://github.com/huggingface/datasets/issues/3181 | `None` converted to `"None"` when loading a dataset | Hi @eladsegal, thanks for reporting.
@mariosasko I saw you are already working on this, but maybe my comment will be useful to you.
All values are cast to their corresponding feature type (including `None` values). For example, if the feature type is `Value("bool")`, `None` is cast to `False`.
It is true th... | ## Describe the bug
When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists")
print(qasper[60]["full_text... | 65 | `None` converted to `"None"` when loading a dataset
## Describe the bug
When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
qasper = load_dataset("qasper", split="train", download_mode... | [
-0.0743647888,
-0.0289289318,
0.0202893149,
0.2942250371,
0.4671344757,
-0.0404256471,
0.3494563997,
0.2275474072,
0.0246014483,
0.5165578723,
-0.0883267522,
0.6322942376,
-0.0893220007,
0.0885335431,
0.0443152413,
-0.031129634,
0.1287149638,
0.3545816839,
-0.153489396,
0.03585... |
https://github.com/huggingface/datasets/issues/3181 | `None` converted to `"None"` when loading a dataset | Thanks for reporting.
This is actually a breaking change that I think can cause issues when users preprocess their data. String columns used to be nullable. Maybe we can correct https://github.com/huggingface/datasets/pull/3158 to keep the None values and avoid this breaking change ?
EDIT: the other types (bool, ... | ## Describe the bug
When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists")
print(qasper[60]["full_text... | 54 | `None` converted to `"None"` when loading a dataset
## Describe the bug
When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
qasper = load_dataset("qasper", split="train", download_mode... | [
-0.0925697833,
0.0141061703,
0.0318011977,
0.3350735307,
0.506174624,
0.084462449,
0.4042354524,
0.1468732059,
0.0264111813,
0.4644888639,
-0.0690369681,
0.4651447833,
-0.0235638227,
0.0580343753,
0.0141550768,
-0.1078598872,
0.1389272809,
0.346016109,
-0.2450709194,
0.17662745... |
https://github.com/huggingface/datasets/issues/3181 | `None` converted to `"None"` when loading a dataset | So what would be the best way to handle a feature that can have a null value in some of the instances? So far I used `None`.
Using the empty string won't be a good option, as it can be an actual value in the data and is not the same as not having a value at all. | ## Describe the bug
When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists")
print(qasper[60]["full_text... | 58 | `None` converted to `"None"` when loading a dataset
## Describe the bug
When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
qasper = load_dataset("qasper", split="train", download_mode... | [
0.005276327,
-0.0747643858,
0.0570605546,
0.3098501265,
0.1798288673,
0.0002796789,
0.4044808149,
0.1222434342,
-0.0889380276,
0.4486834705,
0.2950796783,
0.3863013089,
-0.1925676167,
0.1190018207,
-0.0907520205,
0.0909653828,
0.176432997,
0.4477601051,
0.0782448798,
0.04103793... |
https://github.com/huggingface/datasets/issues/3181 | `None` converted to `"None"` when loading a dataset | Hi @eladsegal,
Use `None`. As @albertvillanova correctly pointed out, this change in conversion was introduced (by mistake) in #3158. To avoid it, install the earlier revision with:
```
pip install git+https://github.com/huggingface/datasets.git@8107844ec0e7add005db0585c772ee20adc01a5e
```
I'm making all the f... | ## Describe the bug
When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists")
print(qasper[60]["full_text... | 52 | `None` converted to `"None"` when loading a dataset
## Describe the bug
When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
qasper = load_dataset("qasper", split="train", download_mode... | [
-0.1710931063,
-0.0119528696,
0.0143993944,
0.2961726189,
0.4306810498,
0.0251634512,
0.3792949319,
0.2542135417,
-0.0140083693,
0.486613065,
-0.0783693343,
0.542630434,
-0.0509568267,
0.1476790011,
0.0132165775,
-0.0821947083,
0.1226771474,
0.3703927696,
-0.2146403491,
0.06179... |
https://github.com/huggingface/datasets/issues/3181 | `None` converted to `"None"` when loading a dataset | https://github.com/huggingface/datasets/pull/3195 fixed it, we'll do a new release soon :)
For now feel free to install `datasets` from the master branch | ## Describe the bug
When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists")
print(qasper[60]["full_text... | 21 | `None` converted to `"None"` when loading a dataset
## Describe the bug
When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
qasper = load_dataset("qasper", split="train", download_mode... | [
-0.1464285254,
-0.061244525,
0.0032049515,
0.3425727487,
0.4700652659,
0.0306644179,
0.3348008096,
0.221140638,
0.0670467094,
0.5334504247,
-0.0878186896,
0.5156676769,
-0.053589642,
0.0836551711,
0.0218497422,
-0.0698360801,
0.0672495738,
0.3368619978,
-0.214979738,
0.06905688... |
https://github.com/huggingface/datasets/issues/3181 | `None` converted to `"None"` when loading a dataset | Thanks, but unfortunately looks like it isn't fixed yet 😢
[notebook for 1.14.0](https://colab.research.google.com/drive/1SV3sFXPJMWSQgbm4pr9Y1Q8OJ4JYKcDo?usp=sharing)
[notebook for master](https://colab.research.google.com/drive/145wDpuO74MmsuI0SVLcI1IswG6aHpyhi?usp=sharing) | ## Describe the bug
When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists")
print(qasper[60]["full_text... | 16 | `None` converted to `"None"` when loading a dataset
## Describe the bug
When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
qasper = load_dataset("qasper", split="train", download_mode... | [
-0.128923595,
-0.0297710095,
0.020354433,
0.3510314226,
0.4544922113,
0.0599747598,
0.3477732241,
0.2126249522,
0.0653546676,
0.5317575932,
-0.0835784301,
0.5394250751,
-0.0758218542,
0.0443202928,
0.0360325351,
-0.0391761549,
0.1098452881,
0.3058930337,
-0.2327348739,
0.030155... |
https://github.com/huggingface/datasets/issues/3181 | `None` converted to `"None"` when loading a dataset | Oh, sorry. I deleted the fix by accident when I was resolving a merge conflict. Let me fix this real quick. | ## Describe the bug
When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists")
print(qasper[60]["full_text... | 21 | `None` converted to `"None"` when loading a dataset
## Describe the bug
When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
qasper = load_dataset("qasper", split="train", download_mode... | [
-0.1147132516,
0.0136741847,
0.0202212278,
0.3419386446,
0.4503499269,
0.052650705,
0.2959984541,
0.2195353955,
0.049980022,
0.5178982019,
-0.1052668095,
0.5846873522,
-0.067857191,
-0.0207226947,
0.0296910238,
-0.0621310398,
0.1152243912,
0.3349699974,
-0.203099221,
0.03985237... |
https://github.com/huggingface/datasets/issues/3178 | "Property couldn't be hashed properly" even though fully picklable | After some digging, I found that this is caused by `dill` and using `recurse=True` when trying to dump the object. The problem also occurs without multiprocessing. I can only find [the following information](https://dill.readthedocs.io/en/latest/dill.html#dill._dill.dumps) about this:
> If recurse=True, then object... | ## Describe the bug
I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable.
... | 108 | "Property couldn't be hashed properly" even though fully picklable
## Describe the bug
I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It... | [
-0.0547037572,
-0.0296596158,
0.1383616775,
0.1674575657,
0.2572079003,
-0.1886388958,
0.3345493674,
0.0546137951,
0.0525931679,
0.1248953864,
0.0389523171,
0.5095518827,
-0.2360349447,
0.3779696524,
-0.1779329479,
0.0818778947,
0.0879114941,
-0.0570349731,
0.0878298432,
-0.119... |
https://github.com/huggingface/datasets/issues/3178 | "Property couldn't be hashed properly" even though fully picklable | Hi ! Thanks for reporting
Yes `recurse=True` is necessary to be able to hash all the objects that are passed to the `map` function
EDIT: hopefully this object can be serializable soon, but otherwise we can consider adding more control to the user on how to hash objects that are not serializable (as mentioned in h... | ## Describe the bug
I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable.
... | 56 | "Property couldn't be hashed properly" even though fully picklable
## Describe the bug
I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It... | [
-0.0547037572, -0.0296596158, 0.1383616775, 0.1674575657, 0.2572079003, -0.1886388958, 0.3345493674, 0.0546137951, 0.0525931679, 0.1248953864, 0.0389523171, 0.5095518827, -0.2360349447, 0.3779696524, -0.1779329479, 0.0818778947, 0.0879114941, -0.0570349731, 0.0878298432, -0.119... |
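One workaround sometimes used while hashing of the callable is unreliable (hedged, assuming a `datasets` version whose `map` accepts `new_fingerprint`): supply the fingerprint manually so the cache key no longer depends on hashing the mapped function.
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello there", "general kenobi"]})

# the fingerprint string is arbitrary, but it must be changed whenever
# the processing logic changes, otherwise stale cache files are reused
ds = ds.map(lambda ex: {"n_chars": len(ex["text"])},
            new_fingerprint="count-chars-v1")
print(ds[0])
```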
https://github.com/huggingface/datasets/issues/3178 | "Property couldn't be hashed properly" even though fully picklable | I submitted a PR to spacy that should fix this issue (linked above). I'll leave this open until that PR is merged. | ## Describe the bug
I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable.
... | 22 | "Property couldn't be hashed properly" even though fully picklable
## Describe the bug
I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It... | [
-0.0547037572, -0.0296596158, 0.1383616775, 0.1674575657, 0.2572079003, -0.1886388958, 0.3345493674, 0.0546137951, 0.0525931679, 0.1248953864, 0.0389523171, 0.5095518827, -0.2360349447, 0.3779696524, -0.1779329479, 0.0818778947, 0.0879114941, -0.0570349731, 0.0878298432, -0.119... |
https://github.com/huggingface/datasets/issues/3178 | "Property couldn't be hashed properly" even though fully picklable | @lhoestq After some testing I find that even with the updated spaCy, no cache files are used. I do not get any warnings though, but I can see that map is run every time I run the code. Do you have thoughts about why? If you want to try the tests below, make sure to install spaCy from [here](https://github.com/BramVanro... | ## Describe the bug
I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable.
... | 207 | "Property couldn't be hashed properly" even though fully picklable
## Describe the bug
I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It... | [
-0.0547037572, -0.0296596158, 0.1383616775, 0.1674575657, 0.2572079003, -0.1886388958, 0.3345493674, 0.0546137951, 0.0525931679, 0.1248953864, 0.0389523171, 0.5095518827, -0.2360349447, 0.3779696524, -0.1779329479, 0.0818778947, 0.0879114941, -0.0570349731, 0.0878298432, -0.119... |
https://github.com/huggingface/datasets/issues/3178 | "Property couldn't be hashed properly" even though fully picklable | It looks like every time you load `en_core_web_sm` you get a different python object:
```python
import spacy
from datasets.fingerprint import Hasher
nlp1 = spacy.load("en_core_web_sm")
nlp2 = spacy.load("en_core_web_sm")
Hasher.hash(nlp1), Hasher.hash(nlp2)
# ('f6196a33882fea3b', 'a4c676a071f266ff')
```
Here... | ## Describe the bug
I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable.
... | 109 | "Property couldn't be hashed properly" even though fully picklable
## Describe the bug
I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It... | [
-0.0547037572, -0.0296596158, 0.1383616775, 0.1674575657, 0.2572079003, -0.1886388958, 0.3345493674, 0.0546137951, 0.0525931679, 0.1248953864, 0.0389523171, 0.5095518827, -0.2360349447, 0.3779696524, -0.1779329479, 0.0818778947, 0.0879114941, -0.0570349731, 0.0878298432, -0.119... |
https://github.com/huggingface/datasets/issues/3178 | "Property couldn't be hashed properly" even though fully picklable | Thanks for searching! I went looking, and found that this is an implementation detail of thinc
https://github.com/explosion/thinc/blob/68691e303ae68cae4bc803299016f1fc064328bf/thinc/model.py#L96-L98
Presumably (?) exactly to distinguish between different parts in memory when multiple models are loaded. Do not thi... | ## Describe the bug
I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable.
... | 119 | "Property couldn't be hashed properly" even though fully picklable
## Describe the bug
I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It... | [
-0.0547037572, -0.0296596158, 0.1383616775, 0.1674575657, 0.2572079003, -0.1886388958, 0.3345493674, 0.0546137951, 0.0525931679, 0.1248953864, 0.0389523171, 0.5095518827, -0.2360349447, 0.3779696524, -0.1779329479, 0.0818778947, 0.0879114941, -0.0570349731, 0.0878298432, -0.119... |
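A generic analogy (explicitly not thinc's actual code) of why a per-instance counter assigned at construction time makes two otherwise identical objects pickle and hash differently:
```python
import itertools

_ids = itertools.count()

class Model:
    # analogy only: every instance receives a fresh global id,
    # so two loads of "the same" model never compare or hash equal
    def __init__(self, name):
        self.name = name
        self.id = next(_ids)

m1, m2 = Model("tok2vec"), Model("tok2vec")
print(m1.id, m2.id, m1.id == m2.id)  # e.g. 0 1 False
```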
https://github.com/huggingface/datasets/issues/3178 | "Property couldn't be hashed properly" even though fully picklable | It can be even simpler to hash the bytes of the pipeline instead
```python
nlp1.to_bytes() == nlp2.to_bytes() # True
```
IMO we should integrate the custom hashing for spacy models into `datasets` (we use a custom Pickler for that).
What could be done on Spacy's side instead (if they think it's nice to have) is... | ## Describe the bug
I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable.
... | 114 | "Property couldn't be hashed properly" even though fully picklable
## Describe the bug
I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It... | [
-0.0547037572, -0.0296596158, 0.1383616775, 0.1674575657, 0.2572079003, -0.1886388958, 0.3345493674, 0.0546137951, 0.0525931679, 0.1248953864, 0.0389523171, 0.5095518827, -0.2360349447, 0.3779696524, -0.1779329479, 0.0818778947, 0.0879114941, -0.0570349731, 0.0878298432, -0.119... |
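A hedged sketch of that idea (assuming `en_core_web_sm` is installed): feeding the serialized pipeline to the hasher, rather than the live object, yields the same fingerprint for two separate loads within a run.
```python
import spacy
from datasets.fingerprint import Hasher

nlp1 = spacy.load("en_core_web_sm")
nlp2 = spacy.load("en_core_web_sm")

# hashing the bytes sidesteps the per-process ids baked into the live
# objects; both loads serialize to identical payloads
print(Hasher.hash(nlp1.to_bytes()) == Hasher.hash(nlp2.to_bytes()))  # expected: True
```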