html_url stringlengths 48 51 | title stringlengths 5 268 | comments stringlengths 70 51.8k | body stringlengths 0 29.8k | comment_length int64 16 1.52k | text stringlengths 164 54.1k | embeddings list |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/824 | Discussion using datasets in offline mode | here is my way to load a dataset offline, but it **requires** an online machine
1. (online machine)
```
import datasets
data = datasets.load_dataset(...)
data.save_to_disk("/YOUR/DATASET/DIR")
```
2. copy the dir from online to the offline machine
3. (offline machine)
```
import datasets
data = datasets.load_f... | `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind or other propositions.
Here are some point... | 47 | Discussion using datasets in offline mode
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind o... | [
-0.4902574121952057,
0.22889623045921326,
-0.033220916986465454,
0.13887310028076172,
0.23637273907661438,
-0.08911536633968353,
0.5481943488121033,
0.06737256050109863,
0.29559606313705444,
0.24505123496055603,
-0.010174298658967018,
-0.06949244439601898,
0.027625637128949165,
0.382728636... |
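The save-then-copy workflow in that comment can be sketched end to end. The sketch below is a stdlib-only stand-in: the function names mirror `datasets.save_to_disk`/`load_from_disk` but are local illustrations (using `json` files), not the library's implementation.

```python
import json
import os
import tempfile

# "Online machine": materialize the dataset to a plain directory.
def save_to_disk(examples, path):
    os.makedirs(path, exist_ok=True)
    with open(os.path.join(path, "data.json"), "w") as f:
        json.dump(examples, f)

# "Offline machine": reload from the copied directory -- no network needed.
def load_from_disk(path):
    with open(os.path.join(path, "data.json")) as f:
        return json.load(f)

with tempfile.TemporaryDirectory() as d:
    save_to_disk([{"text": "hello"}, {"text": "world"}], d)
    data = load_from_disk(d)
```

In the real workflow the directory produced on the online machine is copied (e.g. with `scp`) to the offline machine before the load step.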
https://github.com/huggingface/datasets/issues/824 | Discussion using datasets in offline mode | > here is my way to load a dataset offline, but it **requires** an online machine
>
> 1. (online machine)
>
> ```
>
> import datasets
>
> data = datasets.load_dataset(...)
>
> data.save_to_disk("/YOUR/DATASET/DIR")
>
> ```
>
> 2. copy the dir from online to the offline machine
>
> 3. (offline machine)
>
> ```
> ... | `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind or other propositions.
Here are some point... | 76 | Discussion using datasets in offline mode
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind o... | [
-0.4992601275444031,
0.22699788212776184,
-0.03246932849287987,
0.14187206327915192,
0.23695069551467896,
-0.10291729122400284,
0.5442940592765808,
0.07441117614507675,
0.2753629684448242,
0.24428817629814148,
-0.008833845146000385,
-0.06653954833745956,
0.028085805475711823,
0.37562265992... |
https://github.com/huggingface/datasets/issues/824 | Discussion using datasets in offline mode | I opened a PR that allows reloading modules that have already been loaded once, even if there's no internet.
Let me know if you know other ways that can make the offline mode experience better. I'd be happy to add them :)
I already noted the "freeze" modules option, to prevent local module updates. It would be a ... | `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind or other propositions.
Here are some point... | 179 | Discussion using datasets in offline mode
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind o... | [
-0.4716479778289795,
0.2902272641658783,
-0.047671955078840256,
0.133442223072052,
0.21068869531154633,
-0.2122252732515335,
0.58582603931427,
0.053416650742292404,
0.2833411395549774,
0.18411283195018768,
0.03105953335762024,
0.03263869509100914,
0.001181453699246049,
0.3333258032798767,
... |
https://github.com/huggingface/datasets/issues/824 | Discussion using datasets in offline mode | The local dataset builders (csv, text, json and pandas) are now part of the `datasets` package since #1726 :)
You can now use them offline
```python
datasets = load_dataset('text', data_files=data_files)
```
We'll do a new release soon | `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind or other propositions.
Here are some point... | 38 | Discussion using datasets in offline mode
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind o... | [
-0.4490852952003479,
0.20950652658939362,
-0.05981927365064621,
0.129350483417511,
0.26219233870506287,
-0.13128559291362762,
0.5469641089439392,
0.09492629766464233,
0.31543806195259094,
0.22943121194839478,
0.05104101821780205,
-0.0031591064762324095,
0.04794209077954292,
0.4036761820316... |
https://github.com/huggingface/datasets/issues/823 | how processing in batch works in datasets | Hi, I don't think this is a dataset request, as you labeled it.
I also think this would be better suited for the forum at https://discuss.huggingface.co. We try to keep the repo issues for bug reports and new features/dataset requests, and have usage questions discussed on the forum. Thanks. | Hi,
I need to process my datasets before they are passed to the dataloader in batches;
here is my code
```
class AbstractTask(ABC):
task_name: str = NotImplemented
preprocessor: Callable = NotImplemented
split_to_data_split: Mapping[str, str] = NotImplemented
tokenizer: Callable = NotImplemented
... | 53 | how processing in batch works in datasets
Hi,
I need to process my datasets before they are passed to the dataloader in batches;
here is my code
```
class AbstractTask(ABC):
task_name: str = NotImplemented
preprocessor: Callable = NotImplemented
split_to_data_split: Mapping[str, str] = NotImplemented
... | [
-0.5023793578147888,
-0.17439505457878113,
-0.22950227558612823,
0.13015730679035187,
0.17803533375263214,
0.03179485723376274,
0.2955615222454071,
0.1713043749332428,
-0.14847035706043243,
0.17807641625404358,
0.0091216079890728,
0.1247805655002594,
0.06412006169557571,
0.2456236928701400... |
https://github.com/huggingface/datasets/issues/823 | how processing in batch works in datasets | Hi Thomas,
what I do not get from the documentation is why, when you set batched=True,
the data is processed in batches even though it is not divided into batches
beforehand. Basically this is a question about the documentation, and I do
not get batched=True, but sure, if you think this is more appropriate for
the forum I will post it... | Hi,
I need to process my datasets before they are passed to the dataloader in batches;
here is my code
```
class AbstractTask(ABC):
task_name: str = NotImplemented
preprocessor: Callable = NotImplemented
split_to_data_split: Mapping[str, str] = NotImplemented
tokenizer: Callable = NotImplemented
... | 167 | how processing in batch works in datasets
Hi,
I need to process my datasets before they are passed to the dataloader in batches;
here is my code
```
class AbstractTask(ABC):
task_name: str = NotImplemented
preprocessor: Callable = NotImplemented
split_to_data_split: Mapping[str, str] = NotImplemented
... | [
-0.5023793578147888,
-0.17439505457878113,
-0.22950227558612823,
0.13015730679035187,
0.17803533375263214,
0.03179485723376274,
0.2955615222454071,
0.1713043749332428,
-0.14847035706043243,
0.17807641625404358,
0.0091216079890728,
0.1247805655002594,
0.06412006169557571,
0.2456236928701400... |
https://github.com/huggingface/datasets/issues/823 | how processing in batch works in datasets | Yes the forum is perfect for that. You can post in the `datasets` section.
Thanks a lot! | Hi,
I need to process my datasets before they are passed to the dataloader in batches;
here is my code
```
class AbstractTask(ABC):
task_name: str = NotImplemented
preprocessor: Callable = NotImplemented
split_to_data_split: Mapping[str, str] = NotImplemented
tokenizer: Callable = NotImplemented
... | 17 | how processing in batch works in datasets
Hi,
I need to process my datasets before they are passed to the dataloader in batches;
here is my code
```
class AbstractTask(ABC):
task_name: str = NotImplemented
preprocessor: Callable = NotImplemented
split_to_data_split: Mapping[str, str] = NotImplemented
... | [
-0.5023793578147888,
-0.17439505457878113,
-0.22950227558612823,
0.13015730679035187,
0.17803533375263214,
0.03179485723376274,
0.2955615222454071,
0.1713043749332428,
-0.14847035706043243,
0.17807641625404358,
0.0091216079890728,
0.1247805655002594,
0.06412006169557571,
0.2456236928701400... |
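What `batched=True` does mechanically can be sketched in plain Python: the library slices the dataset into successive chunks and hands each chunk to your function as a dict of lists (column name to values), so you never pre-batch anything yourself. This is a simplified sketch of the idea, not the actual `datasets.map` implementation; the single `"text"` column and the default batch size of 1000 are illustrative assumptions.

```python
def batched_map(examples, fn, batch_size=1000):
    """Apply fn to successive slices of the example list, as map(batched=True) does."""
    out = []
    for start in range(0, len(examples), batch_size):
        batch = examples[start:start + batch_size]
        # fn receives a dict of lists (column -> values), like a datasets batch
        columns = {"text": [ex["text"] for ex in batch]}
        result = fn(columns)
        out.extend({"text": t} for t in result["text"])
    return out

processed = batched_map(
    [{"text": "a"}, {"text": "b"}, {"text": "c"}],
    lambda batch: {"text": [t.upper() for t in batch["text"]]},
    batch_size=2,
)
```

With batch_size=2 the function is called twice here, once with two examples and once with the remaining one, which is exactly why the caller never has to divide the data beforehand.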
https://github.com/huggingface/datasets/issues/822 | datasets freezes | PyTorch is unfortunately unable to convert strings to tensors.
You can use `set_format(type="torch")` on columns that can be converted to tensors, such as token ids.
This makes me think that we should probably raise an error or at least a warning when one tries to create pytorch tensors out of text columns | Hi, I want to load these two datasets and convert them to Dataset format in torch and the code freezes for me, could you have a look please? thanks
dataset1 = load_dataset("squad", split="train[:10]")
dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question'])
dataset2 = load_datase... | 52 | datasets freezes
Hi, I want to load these two datasets and convert them to Dataset format in torch and the code freezes for me, could you have a look please? thanks
dataset1 = load_dataset("squad", split="train[:10]")
dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question'])
da... | [
-0.16190969944000244,
-0.33479225635528564,
-0.046579767018556595,
0.5367453694343567,
0.3581033945083618,
0.2125210016965866,
0.5072590708732605,
0.3903239965438843,
-0.01512861717492342,
0.1207222193479538,
-0.21651789546012878,
0.29290077090263367,
-0.11853577196598053,
-0.1630848795175... |
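The advice above boils down to selecting only tensor-convertible columns before formatting. A small stand-in check (not the `datasets` implementation, and without importing torch) shows why string columns like `context` or `question` are the problem while token-id columns are fine:

```python
def tensor_convertible_columns(batch):
    """Column names whose values could become numeric tensors.

    A stand-in for what set_format(type="torch") needs: PyTorch can stack
    ints/floats into tensors but not Python strings.
    """
    ok = []
    for name, values in batch.items():
        if all(isinstance(v, (int, float)) for v in values):
            ok.append(name)
    return ok

# Hypothetical batch with one numeric column and one text column.
batch = {
    "input_ids": [101, 2023, 102],                       # token ids: fine
    "context": ["some text", "more text", "even more"],  # strings: not convertible
}
cols = tensor_convertible_columns(batch)
```

Passing only such columns to `set_format(type="torch", columns=...)` avoids the freeze described in the report.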
https://github.com/huggingface/datasets/issues/816 | [Caching] Dill globalvars() output order is not deterministic and can cause cache issues. | To show the issue:
```
python -c "from datasets.fingerprint import Hasher; a=[]; func = lambda : len(a); print(Hasher.hash(func))"
```
doesn't always return the same output since `globs` is a dictionary with "a" and "len" as keys but sometimes not in the same order | Dill uses `dill.detect.globalvars` to get the globals used by a function in a recursive dump. `globalvars` returns a dictionary of all the globals that a dumped function needs. However, the order of the keys in this dict is not deterministic and can cause caching issues.
To fix that one could register an implementati... | 43 | [Caching] Dill globalvars() output order is not deterministic and can cause cache issues.
Dill uses `dill.detect.globalvars` to get the globals used by a function in a recursive dump. `globalvars` returns a dictionary of all the globals that a dumped function needs. However the order of the keys in this dict is not d... | [
-0.08192651718854904,
-0.04951903596520424,
-0.08793112635612488,
0.12161451578140259,
0.08929162472486496,
-0.15636228024959564,
0.16716919839382172,
0.24332651495933533,
-0.09968448430299759,
0.08036427944898605,
-0.08278708159923553,
0.2674162685871124,
-0.18599039316177368,
-0.26891604... |
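The nondeterminism is easy to reproduce without dill: serializing a dict depends on its insertion order, so two dicts with identical contents can produce different bytes (and therefore different hashes). Sorting the items first, in the spirit of the proposed fix, makes the bytes stable. Stdlib `pickle` stands in for dill here; this is an illustration of the failure mode, not the library's actual fix.

```python
import pickle

# Same contents, different insertion order -- like dill's globalvars() dict.
globs_a = {"a": [], "len": len}
globs_b = {"len": len, "a": []}

# Dicts pickle in insertion order, so the serialized bytes differ.
unstable = pickle.dumps(globs_a) != pickle.dumps(globs_b)

# Fix: serialize the items in sorted key order instead.
stable = (pickle.dumps(sorted(globs_a.items()))
          == pickle.dumps(sorted(globs_b.items())))
```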
https://github.com/huggingface/datasets/issues/815 | Is dataset iterative or not? | Hello !
Could you give more details ?
If you mean iterating through one dataset, then yes, the `Dataset` object does implement the `__iter__` method, so you can use
```python
for example in dataset:
# do something
```
If you want to iterate through several datasets, you can first concatenate them
```python
from data... | Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks | 67 | Is dataset iterative or not?
Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks
Hello !
Could you give more details ?
If you mean iter through one dat... | [
-0.26201316714286804,
-0.2916284501552582,
-0.18041527271270752,
0.11397110670804977,
0.06065124645829201,
-0.026537099853157997,
0.2819754183292389,
0.16877424716949463,
0.11090203374624252,
0.0845516249537468,
0.15545962750911713,
0.2481861561536789,
-0.35543638467788696,
0.3489525914192... |
https://github.com/huggingface/datasets/issues/815 | Is dataset iterative or not? | Hi Huggingface/Datasets team,
I want to use the datasets inside Seq2SeqDataset here
https://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py
and there I need to return back each line from the datasets and I am not
sure how to access each line and implement this?
It seems it also has get_item at... | Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks | 185 | Is dataset iterative or not?
Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks
Hi Huggingface/Datasets team,
I want to use the datasets inside Seq2SeqDat... | [
-0.02254350297152996,
-0.37920230627059937,
-0.07171960920095444,
0.3793145418167114,
0.07945949584245682,
-0.15498200058937073,
0.2763756215572357,
0.08281736075878143,
0.07116219401359558,
-0.14288772642612457,
-0.06436223536729813,
0.01111909095197916,
-0.24549677968025208,
0.3429043292... |
https://github.com/huggingface/datasets/issues/815 | Is dataset iterative or not? | could you please tell me if datasets also has __getitem__? Any idea on how
to integrate it with Seq2SeqDataset is appreciated, thanks
On Mon, Nov 9, 2020 at 10:22 AM Rabeeh Karimi Mahabadi <rabeeh@google.com>
wrote:
> Hi Huggingface/Datasets team,
> I want to use the datasets inside Seq2SeqDataset here
> https://github... | Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks | 236 | Is dataset iterative or not?
Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks
could you tell me please if datasets also has __getitem__ any idea on how
... | [
-0.14258834719657898,
-0.34717732667922974,
-0.11831463873386383,
0.35185518860816956,
0.08034638315439224,
-0.08777709305286407,
0.22364522516727448,
0.2374294549226761,
0.033236313611269,
-0.12688331305980682,
-0.03850686177611351,
0.050097186118364334,
-0.2673036754131317,
0.36309099197... |
https://github.com/huggingface/datasets/issues/815 | Is dataset iterative or not? | `datasets.Dataset` objects do indeed implement `__getitem__`. It returns a dictionary with one field per column.
We've not added the integration of the datasets library for the seq2seq utilities yet. The current seq2seq utilities are based on text files.
However as soon as you have a `datasets.Dataset` with columns ... | Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks | 76 | Is dataset iterative or not?
Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks
`datasets.Dataset` objects implement indeed `__getitem__`. It returns a di... | [
-0.10601545125246048,
-0.06535844504833221,
-0.07446186989545822,
0.2728957235813141,
0.021202994510531425,
-0.009543677791953087,
0.235049307346344,
0.2607479989528656,
0.01784987561404705,
-0.18112260103225708,
0.2007768154144287,
0.21176409721374512,
-0.387565553188324,
0.19634748995304... |
https://github.com/huggingface/datasets/issues/815 | Is dataset iterative or not? | Hi
I am sorry for asking multiple times, but I am not getting the dataloader
type: could you confirm whether the dataset library returns an iterable-type
dataloader or a mapping-type one where one has access to __getitem__?
In the former case one can iterate with __iter__. And how can I configure
it to return the da... | Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks | 217 | Is dataset iterative or not?
Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks
Hi
I am sorry for asking it multiple times but I am not getting the datalo... | [
-0.16995862126350403,
-0.17140647768974304,
-0.06893111020326614,
0.2923886775970459,
0.10094963014125824,
-0.08452483266592026,
0.3338168263435364,
0.24669037759304047,
0.17970123887062073,
-0.1265341341495514,
0.06800992041826248,
0.1442488431930542,
-0.378885954618454,
0.412889093160629... |
https://github.com/huggingface/datasets/issues/815 | Is dataset iterative or not? | `datasets.Dataset` objects are both iterative and mapping types: they have both `__iter__` and `__getitem__`
For example you can do
```python
for example in dataset:
# do something
```
or
```python
for i in range(len(dataset)):
example = dataset[i]
# do something
```
When you do that, one and only ... | Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks | 57 | Is dataset iterative or not?
Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks
`datasets.Dataset` objects are both iterative and mapping types: it has bo... | [
-0.19670262932777405,
-0.4120611846446991,
-0.15225835144519806,
0.1939142495393753,
0.05039403587579727,
0.00916050560772419,
0.25218772888183594,
0.06103906035423279,
0.1378219872713089,
-0.001332986168563366,
0.22775068879127502,
0.2708902657032013,
-0.4278081953525543,
0.17694737017154... |
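The "both iterative and mapping" behaviour in that answer comes down to implementing both protocols. A minimal stand-in class (a toy, not the real `datasets.Dataset`, which also handles Arrow storage and formatting) shows why both loop styles from the comment work:

```python
class MiniDataset:
    """Toy map-style dataset exposing both __getitem__ and __iter__."""

    def __init__(self, rows):
        self._rows = rows

    def __len__(self):
        return len(self._rows)

    def __getitem__(self, i):
        return self._rows[i]  # one row as a dict, one field per column

    def __iter__(self):
        return iter(self._rows)

ds = MiniDataset([{"text": "a"}, {"text": "b"}])

# Style 1: iterate directly.
iterated = [ex for ex in ds]
# Style 2: index like a mapping -- what a map-style torch DataLoader needs.
indexed = [ds[i] for i in range(len(ds))]
```

Both styles visit the same rows, which is why either loop in the comment above is valid.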
https://github.com/huggingface/datasets/issues/815 | Is dataset iterative or not? | Hi there,
Here is what I am trying; it is not working for me with map-style datasets. Could you please tell me how to use datasets while being able to access __getitem__? Could you please assist me in correcting this example? I need a map-style dataset formed from the concatenation of two datasets from your library... | Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks | 113 | Is dataset iterative or not?
Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks
Hi there,
Here is what I am trying, this is not working for me in map-st... | [
-0.30167075991630554,
-0.391623318195343,
-0.12449963390827179,
0.4045022428035736,
0.12186595797538757,
0.08152636885643005,
0.23458075523376465,
0.1949106603860855,
-0.04454409331083298,
-0.025844141840934753,
0.044025495648384094,
0.33468836545944214,
-0.3416885733604431,
0.019357567653... |
https://github.com/huggingface/datasets/issues/813 | How to implement DistributedSampler with datasets | Hi, apparently I need to shard the data and give each host a chunk; could you please provide me with examples of how to do it? I want to use it jointly with finetune_trainer.py in the huggingface repo seq2seq examples. Thanks. | Hi,
I am using your datasets to define my dataloaders, and I am training finetune_trainer.py in huggingface repo on them.
I need a distributedSampler to be able to train the models on TPUs being able to distribute the load across the TPU cores. Could you tell me how I can implement the distribued sampler when using d... | 40 | How to implement DistributedSampler with datasets
Hi,
I am using your datasets to define my dataloaders, and I am training finetune_trainer.py in huggingface repo on them.
I need a distributedSampler to be able to train the models on TPUs being able to distribute the load across the TPU cores. Could you tell me ho... | [
-0.17623133957386017,
-0.2129831612110138,
0.041828006505966187,
0.1458277851343155,
0.29600682854652405,
-0.21450498700141907,
0.20663025975227356,
-0.05965998023748398,
0.07366067171096802,
0.3631319999694824,
-0.14943528175354004,
0.27701306343078613,
-0.32634466886520386,
0.15066081285... |
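The sharding a distributed sampler performs is essentially a strided slice over the example indices: each of the `world_size` hosts keeps every `world_size`-th index starting at its `rank`. This is a sketch of the idea only, not `torch.utils.data.DistributedSampler` itself, which additionally shuffles per epoch and pads so every replica gets the same number of samples.

```python
def shard_indices(num_examples, rank, world_size):
    """Indices this host should process: every world_size-th one, offset by rank."""
    return list(range(rank, num_examples, world_size))

# 10 examples across 4 TPU cores: every index lands on exactly one core.
shards = [shard_indices(10, rank, 4) for rank in range(4)]
```

Feeding each host only its own shard is what distributes the load across the TPU cores.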
https://github.com/huggingface/datasets/issues/812 | Too much logging | Hi ! Thanks for reporting :)
I agree these should be hidden when the logging level is warning; we'll fix that. | I'm doing this in the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1... | 22 | Too much logging
I'm doing this in the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/dat... | [
-0.09395529329776764,
-0.13829277455806732,
-0.11098213493824005,
0.11790533363819122,
0.3646460473537445,
0.13861700892448425,
0.306993693113327,
0.5121554136276245,
0.16887563467025757,
-0.010042821057140827,
0.09551207721233368,
0.06016834080219269,
-0.41010910272598267,
0.0572799965739... |
https://github.com/huggingface/datasets/issues/812 | Too much logging | +1, the amount of logging is excessive.
Most of it indeed comes from `filelock.py`, though there are occasionally messages from other sources too. Below is an example (all of these messages were logged after I already called `datasets.logging.set_verbosity_error()`)
```
I1109 21:26:01.742688 139785006901056 file... | I'm doing this in the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1... | 145 | Too much logging
I'm doing this in the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/dat... | [
-0.07459443807601929,
-0.031536947935819626,
-0.05137281119823456,
0.13141773641109467,
0.2844705879688263,
0.217928946018219,
0.26180875301361084,
0.48388540744781494,
0.05174607038497925,
-0.17403395473957062,
0.10809929668903351,
0.013671361841261387,
-0.4307381808757782,
-0.07917931675... |
https://github.com/huggingface/datasets/issues/812 | Too much logging | In the latest version of the lib the logs about locks are at the DEBUG level so you won't see them by default.
Also `set_verbosity_warning` does take into account these logs now.
Can you try to update the lib ?
```
pip install --upgrade datasets
``` | I'm doing this in the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1... | 46 | Too much logging
I'm doing this in the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/dat... | [
-0.38420021533966064,
-0.12105709314346313,
-0.07823028415441513,
0.12130691111087799,
0.30270570516586304,
0.018521180376410484,
0.19532489776611328,
0.39920178055763245,
0.20924732089042664,
-0.0036372123286128044,
0.08699489384889603,
0.22930064797401428,
-0.3797934949398041,
-0.0543082... |
https://github.com/huggingface/datasets/issues/812 | Too much logging | Thanks. For some reason I have to use the older version. Is it possible to fix this with some surface-level trick?
I'm still using datasets version 1.13. | I'm doing this in the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1... | 28 | Too much logging
I'm doing this in the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/dat... | [
-0.18454740941524506,
-0.049645405262708664,
-0.0627770721912384,
0.11527130752801895,
0.3269050717353821,
0.22352521121501923,
0.24826088547706604,
0.5285160541534424,
0.06550857424736023,
-0.11483419686555862,
0.033185265958309174,
0.0734860897064209,
-0.3805561363697052,
0.0945335701107... |
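For a version pinned like that, one surface-level workaround is to silence the offending logger directly with the stdlib `logging` module: the noisy `[filelock][INFO]` lines in the pasted output come from the `filelock` logger, not from `datasets` itself. The logger name is inferred from those log lines; adjust it if your output names a different source.

```python
import logging

# The "[filelock][INFO] - Lock ... acquired" lines come from the "filelock"
# logger; raising its level drops INFO-level acquire/release messages.
logging.getLogger("filelock").setLevel(logging.WARNING)
```

Run this once at the top of the script, after any `logging.basicConfig`-style setup that might reconfigure handlers.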
https://github.com/huggingface/datasets/issues/809 | Add Google Taskmaster dataset | Hey @yjernite, I was going to start working on this, but found Taskmaster 1, 2 & 3 in the datasets library already, so I think this can be closed now? | ## Adding a Dataset
- **Name:** Taskmaster
- **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations)
- **Paper:** https://arxiv.org/abs/1909.05358
- **Data:** https://github.com/google-research-datasets/Taskmaster
- **Motivation... | 27 | Add Google Taskmaster dataset
## Adding a Dataset
- **Name:** Taskmaster
- **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations)
- **Paper:** https://arxiv.org/abs/1909.05358
- **Data:** https://github.com/google-research-dat... | [
-0.26148882508277893,
0.05524669960141182,
-0.15951380133628845,
0.21565990149974823,
0.16742852330207825,
-0.037950921803712845,
0.33362165093421936,
0.06700796633958817,
0.1323194056749344,
0.12404163181781769,
-0.21295399963855743,
0.2662530541419983,
-0.4864400029182434,
0.528317093849... |
https://github.com/huggingface/datasets/issues/807 | load_dataset for LOCAL CSV files report CONNECTION ERROR | Hi !
The URL works on my side.
Is the URL working in your browser?
Are you connected to the internet? Does your network block access to `raw.githubusercontent.com`? | ## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).reshape(300,4))
df.to_csv('test.csv', header=False, index=Fal... | 30 | load_dataset for LOCAL CSV files report CONNECTION ERROR
## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).res... | [
-0.32471200823783875,
0.016105303540825844,
-0.10880483686923981,
0.026004865765571594,
0.24204294383525848,
0.019868796691298485,
0.7173693776130676,
0.37563660740852356,
0.2509217858314514,
0.13370388746261597,
-0.034231994301080704,
0.14970430731773376,
0.07979933172464371,
0.0211834479... |
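One way to see that this failure is about fetching the loader script over the network rather than about the data itself: a local CSV like the `test.csv` in the report parses fine with the stdlib, entirely offline. A small sanity-check sketch (the sample values only loosely mirror the `np.arange(1200).reshape(300,4)` demo file):

```python
import csv
import io

# A local CSV needs no network to parse; the connection error came from
# downloading the csv builder script, not from reading the file.
buf = io.StringIO("0,1,2,3\n4,5,6,7\n")
rows = [list(map(int, row)) for row in csv.reader(buf)]
```

If this kind of read works but `load_dataset("csv", ...)` still fails, the problem is network access to the script host, which is what the bundled builders in newer releases remove.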
https://github.com/huggingface/datasets/issues/807 | load_dataset for LOCAL CSV files report CONNECTION ERROR | > Hi !
> The URL works on my side.
>
> Is the URL working in your browser?
> Are you connected to the internet? Does your network block access to `raw.githubusercontent.com`?
I tried another server, it's working now. Thanks a lot.
And I'm curious about why download things from "github" when I load dataset f... | ## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).reshape(300,4))
df.to_csv('test.csv', header=False, index=Fal... | 69 | load_dataset for LOCAL CSV files report CONNECTION ERROR
## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).res... | [
-0.32471200823783875,
0.016105303540825844,
-0.10880483686923981,
0.026004865765571594,
0.24204294383525848,
0.019868796691298485,
0.7173693776130676,
0.37563660740852356,
0.2509217858314514,
0.13370388746261597,
-0.034231994301080704,
0.14970430731773376,
0.07979933172464371,
0.0211834479... |
https://github.com/huggingface/datasets/issues/807 | load_dataset for LOCAL CSV files report CONNECTION ERROR |
> > Hi !
> > The URL works on my side.
> > Is the URL working in your browser?
> > Are you connected to the internet? Does your network block access to `raw.githubusercontent.com`?
>
> I tried another server, it's working now. Thanks a lot.
>
> And I'm curious about why download things from "github" whe... | ## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).reshape(300,4))
df.to_csv('test.csv', header=False, index=Fal... | 103 | load_dataset for LOCAL CSV files report CONNECTION ERROR
## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).res... | [
-0.32471200823783875,
0.016105303540825844,
-0.10880483686923981,
0.026004865765571594,
0.24204294383525848,
0.019868796691298485,
0.7173693776130676,
0.37563660740852356,
0.2509217858314514,
0.13370388746261597,
-0.034231994301080704,
0.14970430731773376,
0.07979933172464371,
0.0211834479... |
https://github.com/huggingface/datasets/issues/807 | load_dataset for LOCAL CSV files report CONNECTION ERROR | hello, how did you solve this problem?
> > > Hi !
> > > The url works on my side.
> > > Is the url working in your navigator ?
> > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?
> >
> >
> > I tried another server, it's working now. Thanks a lot.
> > And I'... | ## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).reshape(300,4))
df.to_csv('test.csv', header=False, index=Fal... | 136 | load_dataset for LOCAL CSV files report CONNECTION ERROR
## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).res... | [
-0.32471200823783875, ... |
https://github.com/huggingface/datasets/issues/807 | load_dataset for LOCAL CSV files report CONNECTION ERROR | > hello, how did you solve this problem?
>
> > > > Hi !
> > > > The url works on my side.
> > > > Is the url working in your navigator ?
> > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?
> > >
> > >
> > > I tried another server, it's working now. Thanks ... | ## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).reshape(300,4))
df.to_csv('test.csv', header=False, index=Fal... | 155 | load_dataset for LOCAL CSV files report CONNECTION ERROR
## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).res... | [
-0.32471200823783875, ... |
https://github.com/huggingface/datasets/issues/807 | load_dataset for LOCAL CSV files report CONNECTION ERROR | > > hello, how did you solve this problem?
> > > > > Hi !
> > > > > The url works on my side.
> > > > > Is the url working in your navigator ?
> > > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?
> > > >
> > > >
> > > > I tried another server, it's working ... | ## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).reshape(300,4))
df.to_csv('test.csv', header=False, index=Fal... | 174 | load_dataset for LOCAL CSV files report CONNECTION ERROR
## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).res... | [
-0.32471200823783875, ... |
https://github.com/huggingface/datasets/issues/807 | load_dataset for LOCAL CSV files report CONNECTION ERROR | >
>
> > hello, how did you solve this problem?
> > > > > Hi !
> > > > > The url works on my side.
> > > > > Is the url working in your navigator ?
> > > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?
> > > >
> > > >
> > > > I tried another server, it's ... | ## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).reshape(300,4))
df.to_csv('test.csv', header=False, index=Fal... | 316 | load_dataset for LOCAL CSV files report CONNECTION ERROR
## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).res... | [
-0.32471200823783875, ... |
https://github.com/huggingface/datasets/issues/807 | load_dataset for LOCAL CSV files report CONNECTION ERROR | I also experienced this issue this morning. Looks like something specific to windows.
I'm working on a fix | ## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).reshape(300,4))
df.to_csv('test.csv', header=False, index=Fal... | 18 | load_dataset for LOCAL CSV files report CONNECTION ERROR
## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).res... | [
-0.32471200823783875, ... |
https://github.com/huggingface/datasets/issues/806 | Quail dataset urls are out of date | Hi ! Thanks for reporting.
We should fix the urls and use quail 1.3.
If you want to contribute feel free to fix the urls and open a PR :) | <h3>Code</h3>
```
from datasets import load_dataset
quail = load_dataset('quail')
```
<h3>Error</h3>
```
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml
```
As per [quail v1.3 commit](https://github.co... | 30 | Quail dataset urls are out of date
<h3>Code</h3>
```
from datasets import load_dataset
quail = load_dataset('quail')
```
<h3>Error</h3>
```
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml
```
As per ... | [
0.11141905933618546, ... |
https://github.com/huggingface/datasets/issues/806 | Quail dataset urls are out of date | Done! PR [https://github.com/huggingface/datasets/pull/820](https://github.com/huggingface/datasets/pull/820)
Updated links and also regenerated the metadata and dummy data for v1.3 in order to pass verifications as described here: [https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to... | <h3>Code</h3>
```
from datasets import load_dataset
quail = load_dataset('quail')
```
<h3>Error</h3>
```
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml
```
As per [quail v1.3 commit](https://github.co... | 24 | Quail dataset urls are out of date
<h3>Code</h3>
```
from datasets import load_dataset
quail = load_dataset('quail')
```
<h3>Error</h3>
```
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml
```
As per ... | [
0.1814384162425995, ... |
https://github.com/huggingface/datasets/issues/805 | On loading a metric from datasets, I get the following error | Hi ! We support only pyarrow > 0.17.1 so that we have access to the `PyExtensionType` object.
Could you update pyarrow and try again ?
```
pip install --upgrade pyarrow
``` | `from datasets import load_metric`
`metric = load_metric('bleurt')`
Traceback:
210 class _ArrayXDExtensionType(pa.PyExtensionType):
211
212 ndims: int = None
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType'
Any help will be appreciated. Thank you. | 31 | On loading a metric from datasets, I get the following error
`from datasets import load_metric`
`metric = load_metric('bleurt')`
Traceback:
210 class _ArrayXDExtensionType(pa.PyExtensionType):
211
212 ndims: int = None
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType'
Any... | [
-0.44773417711257935, ... |
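The upgrade suggested in the comment above (pyarrow newer than 0.17.1, which introduced `pa.PyExtensionType`) can be sketched as a simple version guard. This is an illustration only; the helper names are hypothetical and not taken from the `datasets` codebase.

```python
# Illustrative version guard for the requirement "pyarrow > 0.17.1".
# Assumes a plain "major.minor.patch" version string.
def version_tuple(version):
    return tuple(int(part) for part in version.split(".")[:3])

def supports_extension_types(pyarrow_version):
    # pa.PyExtensionType is only available in pyarrow > 0.17.1
    return version_tuple(pyarrow_version) > (0, 17, 1)
```

Note that `0.17.1` itself is rejected while any later release passes.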
https://github.com/huggingface/datasets/issues/804 | Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa') | Yes: TriviaQA has a private test set for the leaderboard [here](https://competitions.codalab.org/competitions/17208)
For the KILT training and validation portions, you need to link the examples from the TriviaQA dataset as detailed here:
https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README... | # The issue
It's all in the title, it appears to be fine on the train and validation sets.
Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md) ?
# How to reproduce
```py
from datasets import load_dataset
kilt_tas... | 32 | Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa')
# The issue
It's all in the title, it appears to be fine on the train and validation sets.
Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/READM... | [
0.43468716740608215, ... |
https://github.com/huggingface/datasets/issues/804 | Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa') | Oh ok, I guess I read the paper too fast 😅, thank you for your answer! | # The issue
It's all in the title, it appears to be fine on the train and validation sets.
Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md) ?
# How to reproduce
```py
from datasets import load_dataset
kilt_tas... | 16 | Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa')
# The issue
It's all in the title, it appears to be fine on the train and validation sets.
Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/READM... | [
0.46811845898628235, ... |
https://github.com/huggingface/datasets/issues/801 | How to join two datasets? | Hi ! Currently the only way to add new fields to a dataset is by using `.map` and picking items from the other dataset
| Hi,
I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels?
I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way to create a paired sentence using `.map()` where the second sentence is... | 24 | How to join two datasets?
Hi,
I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels?
I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way to create a paired sentence using `.map()` ... | [
-0.080713652074337, ... |
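A minimal pure-Python sketch of the pattern described in the answer above: adding fields from a second table by row index. In `datasets` the same idea corresponds to `ds.map(fn, with_indices=True)`; the table names here are illustrative.

```python
# Toy version of joining two row-aligned tables.
# table_b plays the role of the second dataset.
table_a = [{"text": "a"}, {"text": "b"}]
table_b = [{"label": 0}, {"label": 1}]

def add_columns(example, idx):
    # pick the matching row from the other table and merge its fields
    return {**example, **table_b[idx]}

joined = [add_columns(ex, i) for i, ex in enumerate(table_a)]
```

Both tables must have the same number of rows and the same row order for this to be a valid join.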
https://github.com/huggingface/datasets/issues/801 | How to join two datasets? | Closing this one. Feel free to re-open if you have other questions about this issue.
Also linking another discussion about joining datasets: #853 | Hi,
I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels?
I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way to create a paired sentence using `.map()` where the second sentence is... | 23 | How to join two datasets?
Hi,
I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels?
I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way to create a paired sentence using `.map()` ... | [
-0.0025699224788695574, ... |
https://github.com/huggingface/datasets/issues/798 | Cannot load TREC dataset: ConnectionError | Hi ! Indeed there's an issue with those links.
We should probably use the target urls of the redirections instead | ## Problem
I cannot load the "trec" dataset; it results in a ConnectionError as shown below. I've tried on both Google Colab and locally.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True... | 20 | Cannot load TREC dataset: ConnectionError
## Problem
I cannot load the "trec" dataset; it results in a ConnectionError as shown below. I've tried on both Google Colab and locally.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>.
* `requests.head('http://cogcomp.org/Data/Q... | [
-0.1691725105047226, ... |
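The fix suggested above (use the targets of the redirections instead of the original URLs) amounts to following `Location` headers until a non-3xx response. Below is an offline sketch with a hard-coded response table; the redirect target shown is a made-up placeholder, not the real one.

```python
# Offline illustration of resolving a redirect chain.
# The response table (status, Location) is invented for the example.
RESPONSES = {
    "http://cogcomp.org/Data/QA/QC/train_5500.label":
        (302, "https://example.org/Data/QA/QC/train_5500.label"),
    "https://example.org/Data/QA/QC/train_5500.label": (200, None),
}

def resolve(url, responses, max_hops=5):
    for _ in range(max_hops):
        status, location = responses[url]
        if not 300 <= status < 400:
            return url  # reached a non-redirect response
        url = location
    raise RuntimeError("too many redirects")
```

In practice the same resolution happens automatically with `requests.head(url, allow_redirects=True)`.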
https://github.com/huggingface/datasets/issues/798 | Cannot load TREC dataset: ConnectionError | Hi, the same issue here, could you tell me how to download it through datasets? thanks | ## Problem
I cannot load the "trec" dataset; it results in a ConnectionError as shown below. I've tried on both Google Colab and locally.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True... | 16 | Cannot load TREC dataset: ConnectionError
## Problem
I cannot load the "trec" dataset; it results in a ConnectionError as shown below. I've tried on both Google Colab and locally.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>.
* `requests.head('http://cogcomp.org/Data/Q... | [
-0.1691725105047226, ... |
https://github.com/huggingface/datasets/issues/798 | Cannot load TREC dataset: ConnectionError | Actually it's already fixed on the master branch since #740
I'll do the 1.1.3 release soon | ## Problem
I cannot load the "trec" dataset; it results in a ConnectionError as shown below. I've tried on both Google Colab and locally.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True... | 16 | Cannot load TREC dataset: ConnectionError
## Problem
I cannot load the "trec" dataset; it results in a ConnectionError as shown below. I've tried on both Google Colab and locally.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>.
* `requests.head('http://cogcomp.org/Data/Q... | [
-0.1691725105047226, ... |
https://github.com/huggingface/datasets/issues/798 | Cannot load TREC dataset: ConnectionError | Hi
thanks, but I did try to install via `pip install git+...` and it does
not work for me. thanks for the help. I have the same issue with wmt16
"ro-en". thanks.
Best
Rabeeh
| ## Problem
I cannot load the "trec" dataset; it results in a ConnectionError as shown below. I've tried on both Google Colab and locally.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True... | 98 | Cannot load TREC dataset: ConnectionError
## Problem
I cannot load the "trec" dataset; it results in a ConnectionError as shown below. I've tried on both Google Colab and locally.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>.
* `requests.head('http://cogcomp.org/Data/Q... | [
-0.1691725105047226, ... |
https://github.com/huggingface/datasets/issues/798 | Cannot load TREC dataset: ConnectionError | I just tested on google colab using
```python
!pip install git+https://github.com/huggingface/datasets.git
from datasets import load_dataset
load_dataset("trec")
```
and it works.
Can you detail how you got the issue even when using the latest version on master ?
Also about wmt we'll look into it, thanks for ... | ## Problem
I cannot load the "trec" dataset; it results in a ConnectionError as shown below. I've tried on both Google Colab and locally.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True... | 48 | Cannot load TREC dataset: ConnectionError
## Problem
I cannot load the "trec" dataset; it results in a ConnectionError as shown below. I've tried on both Google Colab and locally.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>.
* `requests.head('http://cogcomp.org/Data/Q... | [
-0.1691725105047226, ... |
https://github.com/huggingface/datasets/issues/796 | Seq2Seq Metrics QOL: Bleu, Rouge | Hi ! Thanks for letting us know your experience :)
We should at least improve the error messages indeed | Putting all my QOL issues here, idt I will have time to propose fixes, but I didn't want these to be lost, in case they are useful. I tried using `rouge` and `bleu` for the first time and wrote down everything I didn't immediately understand:
+ Bleu expects tokenization, can I just kwarg it like sacrebleu?
+ differ... | 19 | Seq2Seq Metrics QOL: Bleu, Rouge
Putting all my QOL issues here, idt I will have time to propose fixes, but I didn't want these to be lost, in case they are useful. I tried using `rouge` and `bleu` for the first time and wrote down everything I didn't immediately understand:
+ Bleu expects tokenization, can I just... | [
0.17695385217666626, ... |
https://github.com/huggingface/datasets/issues/796 | Seq2Seq Metrics QOL: Bleu, Rouge | prediction = [['Hey', 'how', 'are', 'you', '?']]
reference=[['Hey', 'how', 'are', 'you', '?']]
bleu.compute(predictions=prediction,references=reference)
also tried this kind of things lol
I definitely need help too | Putting all my QOL issues here, idt I will have time to propose fixes, but I didn't want these to be lost, in case they are useful. I tried using `rouge` and `bleu` for the first time and wrote down everything I didn't immediately understand:
+ Bleu expects tokenization, can I just kwarg it like sacrebleu?
+ differ... | 25 | Seq2Seq Metrics QOL: Bleu, Rouge
Putting all my QOL issues here, idt I will have time to propose fixes, but I didn't want these to be lost, in case they are useful. I tried using `rouge` and `bleu` for the first time and wrote down everything I didn't immediately understand:
+ Bleu expects tokenization, can I just... | [
0.17695385217666626, ... |
https://github.com/huggingface/datasets/issues/796 | Seq2Seq Metrics QOL: Bleu, Rouge | Hi !
As described in the documentation for `bleu`:
```
Args:
predictions: list of translations to score.
Each translation should be tokenized into a list of tokens.
references: list of lists of references for each translation.
Each reference should be tokenized into a list of tokens.
`... | Putting all my QOL issues here, idt I will have time to propose fixes, but I didn't want these to be lost, in case they are useful. I tried using `rouge` and `bleu` for the first time and wrote down everything I didn't immediately understand:
+ Bleu expects tokenization, can I just kwarg it like sacrebleu?
+ differ... | 142 | Seq2Seq Metrics QOL: Bleu, Rouge
Putting all my QOL issues here, idt I will have time to propose fixes, but I didn't want these to be lost, in case they are useful. I tried using `rouge` and `bleu` for the first time and wrote down everything I didn't immediately understand:
+ Bleu expects tokenization, can I just... | [
0.17695385217666626, ... |
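Per the documented signature quoted in the answer above, predictions are token lists and references carry one extra level of nesting (a list of reference token-lists per prediction). A small sketch of shaping plain strings into that format:

```python
# Shape plain strings into the structure the `bleu` metric expects:
# predictions: list of token lists
# references:  list of lists of token lists (several references allowed per prediction)
prediction = "hey how are you ?"
reference = "hey how are you ?"

predictions = [prediction.split()]
references = [[reference.split()]]  # note the extra nesting level
```

The earlier attempt in the thread failed because the references were missing exactly this extra nesting level.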
https://github.com/huggingface/datasets/issues/792 | KILT dataset: empty string in triviaqa input field | Just found out about https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md
(Not very clear in https://huggingface.co/datasets/kilt_tasks links to http://github.com/huggingface/datasets/datasets/kilt_tasks/README.md which is dead, closing the issue though :)) | # What happened
Both train and test splits of the triviaqa dataset (part of the KILT benchmark) seem to have empty string in their input field (unlike the natural questions dataset, part of the same benchmark)
# Versions
KILT version is `1.0.0`
`datasets` version is `1.1.2`
[more here](https://gist.github.com/Pa... | 21 | KILT dataset: empty string in triviaqa input field
# What happened
Both train and test splits of the triviaqa dataset (part of the KILT benchmark) seem to have empty string in their input field (unlike the natural questions dataset, part of the same benchmark)
# Versions
KILT version is `1.0.0`
`datasets` versi... | [
0.4154247045516968, ... |
https://github.com/huggingface/datasets/issues/790 | Error running pip install -e ".[dev]" on MacOS 10.13.6: faiss/python does not exist | I saw that `faiss-cpu` 1.6.4.post2 was released recently to fix the installation on macos. It should work now | I was following along with https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset when I ran into this error.
```sh
git clone https://github.com/huggingface/datasets
cd datasets
virtualenv venv -p python3 --system-site-packages
source venv/bin/activate
pip install -e ".... | 18 | Error running pip install -e ".[dev]" on MacOS 10.13.6: faiss/python does not exist
I was following along with https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset when I ran into this error.
```sh
git clone https://github.com/huggingface/datasets
cd datasets
virtuale... | [
-0.07691366970539093, ... |
https://github.com/huggingface/datasets/issues/786 | feat(dataset): multiprocessing _generate_examples | I agree that would be cool :)
Right now the only distributed dataset builder is based on Apache Beam so you can use distributed processing frameworks like Dataflow, Spark, Flink etc. to build your dataset but it's not really well suited for single-worker parallel processing afaik | forking this out of #741, this issue is only regarding multiprocessing
I'd love if there was a dataset configuration parameter `workers`, where when it is `1` it behaves as it does right now, and when its `>1` maybe `_generate_examples` can also get the `pool` and return an iterable using the pool.
In my use case... | 46 | feat(dataset): multiprocessing _generate_examples
forking this out of #741, this issue is only regarding multiprocessing
I'd love if there was a dataset configuration parameter `workers`, where when it is `1` it behaves as it does right now, and when its `>1` maybe `_generate_examples` can also get the `pool` and ... | [
-0.564568817615509, ... |
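A rough sketch of the requested behavior: a worker pool feeding an example generator that yields `(key, example)` pairs. A thread pool is used here only for simplicity; all names are illustrative and this is not the `datasets` implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def process_file(path):
    # placeholder for the real per-file parsing work
    return {"file": path}

def generate_examples(paths, workers=2):
    # hypothetical parallel _generate_examples: yields (key, example) pairs
    # while the pool processes the input files concurrently
    with ThreadPoolExecutor(workers) as pool:
        for key, example in enumerate(pool.map(process_file, paths)):
            yield key, example
```

`pool.map` preserves input order, so the yielded keys stay deterministic even though the work runs concurrently.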
https://github.com/huggingface/datasets/issues/784 | Issue with downloading Wikipedia data for low resource language | Hello, maybe you could ty to use another date for the wikipedia dump (see the available [dates](https://dumps.wikimedia.org/jvwiki) here for `jv`) ? | Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet
```
jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner')
su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner')
```
And I get the following error for these tw... | 21 | Issue with downloading Wikipedia data for low resource language
Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet
```
jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner')
su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runne... | [
0.19471463561058044, ... |
https://github.com/huggingface/datasets/issues/784 | Issue with downloading Wikipedia data for low resource language | @lhoestq
I've tried `load_dataset('wikipedia', '20200501.zh', beam_runner='DirectRunner')` and got the same `FileNotFoundError` as @SamuelCahyawijaya.
Also, using another date (e.g. `load_dataset('wikipedia', '20201120.zh', beam_runner='DirectRunner')`) will give the following error message.
```
ValueError: B... | Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet
```
jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner')
su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner')
```
And I get the following error for these tw... | 342 | Issue with downloading Wikipedia data for low resource language
Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet
```
jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner')
su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runne... | [
0.05334748700261116, ... |
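The wikipedia configurations combine a dump date and a language code into a single name (e.g. `20200501.jv`), so trying another date, as suggested above, just means building a different config string:

```python
# Build a wikipedia config name from a dump date and a language code.
def wiki_config(dump_date, lang):
    return f"{dump_date}.{lang}"

# e.g. load_dataset("wikipedia", wiki_config("20201120", "jv"), beam_runner="DirectRunner")
config = wiki_config("20201120", "jv")
```

Whether a given date actually works depends on which dumps are still available on dumps.wikimedia.org at that time.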
https://github.com/huggingface/datasets/issues/778 | Unexpected behavior when loading cached csv file? | Hi ! Thanks for reporting.
The same issue was reported in #730 (but with the encodings instead of the delimiter). It was fixed by #770.
The fix will be available in the next release :) | I read a csv file from disk and forgot to specify the right delimiter. When I read the csv file again specifying the right delimiter it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since I can always specify `download_mode="force_redownload"`. But I think it would be n... | 36 | Unexpected behavior when loading cached csv file?
I read a csv file from disk and forgot to specify the right delimiter. When I read the csv file again specifying the right delimiter it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since I can always specify `download... | [
0.12032388895750046,
-0.1472448855638504,
-0.1446629762649536,
0.39730745553970337,
0.05969056487083435,
0.15631917119026184,
0.7675092220306396,
-0.10106991976499557,
0.3270578682422638,
0.0822039470076561,
-0.03244864195585251,
-0.001356215332634747,
0.10880303382873535,
-0.2761223316192... |
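Annotation — the caching bug discussed in this record (a changed delimiter silently hitting the old cache) comes down to what goes into the cache fingerprint. A minimal sketch, using a hypothetical `cache_key` helper rather than the library's actual implementation: once the loader kwargs are part of the hash, a different delimiter yields a different cache entry.

```python
import hashlib
import json

def cache_key(data_files, **loader_kwargs):
    # Hypothetical cache fingerprint: hash the file list together with the
    # loader kwargs (delimiter, encoding, ...). If the kwargs were left out
    # of this hash -- the bug above -- reloading with a new delimiter would
    # silently reuse the stale cached dataset.
    payload = json.dumps(
        {"files": sorted(data_files), "kwargs": loader_kwargs}, sort_keys=True
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

comma_key = cache_key(["data.csv"], delimiter=",")
tab_key = cache_key(["data.csv"], delimiter="\t")
assert comma_key != tab_key  # different delimiter -> different cache entry
```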
https://github.com/huggingface/datasets/issues/778 | Unexpected behavior when loading cached csv file? | Thanks for the prompt reply and terribly sorry for the spam!
Looking forward to the new release! | I read a csv file from disk and forgot so specify the right delimiter. When i read the csv file again specifying the right delimiter it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since i can always specify `download_mode="force_redownload"`. But i think it would be n... | 17 | Unexpected behavior when loading cached csv file?
I read a csv file from disk and forgot to specify the right delimiter. When I read the csv file again specifying the right delimiter it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since I can always specify `download... | [
0.0635632574558258,
-0.13767115771770477,
-0.16112256050109863,
0.3697887659072876,
0.02402925305068493,
0.16985829174518585,
0.7317380905151367,
-0.09527894109487534,
0.34567609429359436,
0.06218115612864494,
-0.019688203930854797,
0.02635812759399414,
0.09261880069971085,
-0.251698851585... |
https://github.com/huggingface/datasets/issues/771 | Using `Dataset.map` with `n_proc>1` print multiple progress bars | Yes it allows to monitor the speed of each process. Currently each process takes care of one shard of the dataset.
At one point we can consider using streaming batches to a pool of processes instead of sharding the dataset in `num_proc` parts. At that point it will be easy to use only one progress bar | When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `n_proc` progress bars are printed. | 56 | Using `Dataset.map` with `n_proc>1` print multiple progress bars
When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `n_proc` progress bars are printed.
Yes it allows to monitor the speed of each process. Currently each process ... | [
-0.47093093395233154,
-0.3086605370044708,
-0.20458826422691345,
0.02824402041733265,
-0.16148467361927032,
-0.059889499098062515,
0.4274543821811676,
0.19549806416034698,
-0.14402242004871368,
0.5090811252593994,
0.03531137481331825,
0.21855528652668,
0.10202663391828537,
0.53409987688064... |
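Annotation — "each process takes care of one shard" from the comment above can be sketched in plain Python. The `make_shards` helper below is hypothetical (not the library's internal code) and just illustrates contiguous sharding into `num_proc` parts, which is why one progress bar per worker appears.

```python
def make_shards(examples, num_proc):
    # Split the examples into num_proc contiguous shards; the first shards
    # take one extra item when the length is not evenly divisible.
    quotient, remainder = divmod(len(examples), num_proc)
    shards, start = [], 0
    for i in range(num_proc):
        size = quotient + (1 if i < remainder else 0)
        shards.append(examples[start:start + size])
        start += size
    return shards

shards = make_shards(list(range(10)), 4)
assert [len(s) for s in shards] == [3, 3, 2, 2]
assert [x for shard in shards for x in shard] == list(range(10))
```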
https://github.com/huggingface/datasets/issues/769 | How to choose proper download_mode in function load_dataset? | `download_mode=datasets.GenerateMode.FORCE_REDOWNLOAD` should work.
This makes me think we should rename this to DownloadMode.FORCE_REDOWNLOAD. Currently that's confusing | Hi, I am a beginner to datasets and I try to use datasets to load my csv file.
my csv file looks like this
```
text,label
"Effective but too-tepid biopic",3
"If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4
"Emerges as something rare , an issue movie that 's so hones... | 17 | How to choose proper download_mode in function load_dataset?
Hi, I am a beginner to datasets and I try to use datasets to load my csv file.
my csv file looks like this
```
text,label
"Effective but too-tepid biopic",3
"If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",... | [
-0.2590155601501465,
-0.10657884925603867,
-0.03249143064022064,
0.07350939512252808,
0.39223039150238037,
-0.05880067124962807,
0.4731936454772949,
0.093922920525074,
0.1750781387090683,
-0.08836005628108978,
-0.21461884677410126,
0.16051805019378662,
0.2840965688228607,
0.145775347948074... |
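Annotation — for reference, the `download_mode` values mentioned in this record can be sketched with a plain-Python stand-in for the enum. The member names mirror `datasets.GenerateMode`; treat the exact string values as an assumption.

```python
from enum import Enum

class GenerateMode(Enum):
    # Stand-in mirroring datasets.GenerateMode; only the member names matter here.
    REUSE_DATASET_IF_EXISTS = "reuse_dataset_if_exists"  # default: reuse everything
    REUSE_CACHE_IF_EXISTS = "reuse_cache_if_exists"      # reuse downloads, rebuild dataset
    FORCE_REDOWNLOAD = "force_redownload"                # ignore all caches

# usage sketch (requires the `datasets` package installed):
# load_dataset("csv", data_files=["sst_test.csv"],
#              download_mode=datasets.GenerateMode.FORCE_REDOWNLOAD)
```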
https://github.com/huggingface/datasets/issues/769 | How to choose proper download_mode in function load_dataset? | Indeed you should use `features` in this case.
```python
features = Features({'text': Value('string'), 'label': Value('float32')})
dataset = load_dataset('csv', data_files=['sst_test.csv'], features=features)
```
Note that because of an issue with the caching when you change the features (see #750 ) you still nee... | Hi, I am a beginner to datasets and I try to use datasets to load my csv file.
my csv file looks like this
```
text,label
"Effective but too-tepid biopic",3
"If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4
"Emerges as something rare , an issue movie that 's so hones... | 55 | How to choose proper download_mode in function load_dataset?
Hi, I am a beginner to datasets and I try to use datasets to load my csv file.
my csv file looks like this
```
text,label
"Effective but too-tepid biopic",3
"If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",... | [
-0.2571394145488739,
-0.2127031683921814,
-0.04023990407586098,
0.16301517188549042,
0.4040104150772095,
-0.06867995113134384,
0.4393959045410156,
0.09292466193437576,
0.2167058140039444,
-0.08138319849967957,
-0.20335009694099426,
0.1510447859764099,
0.28904345631599426,
0.218875139951705... |
https://github.com/huggingface/datasets/issues/768 | Add a `lazy_map` method to `Dataset` and `DatasetDict` | This is cool! I think some aspects to think about and decide in terms of API are:
- do we allow several methods (chained i guess)
- how do we inspect the currently set method(s)
- how do we control/reset them | The library is great, but it would be even more awesome with a `lazy_map` method implemented on `Dataset` and `DatasetDict`. This would apply a function to a given item only when the item is requested. Two use cases:
1. load image on the fly
2. apply a random function and get different outputs at each epoch (like dat... | 41 | Add a `lazy_map` method to `Dataset` and `DatasetDict`
The library is great, but it would be even more awesome with a `lazy_map` method implemented on `Dataset` and `DatasetDict`. This would apply a function to a given item only when the item is requested. Two use cases:
1. load image on the fly
2. apply a random f... | [
-0.03164539858698845,
0.013758288696408272,
-0.2658604681491852,
0.011650614440441132,
-0.1261354684829712,
-0.1206699088215828,
0.08221059292554855,
0.31596657633781433,
0.38564616441726685,
0.15779148042201996,
0.0908186063170433,
0.4605908989906311,
-0.3150094747543335,
-0.0637618675827... |
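Annotation — the lazy behavior requested in this record can be sketched with a tiny wrapper (a hypothetical `LazyMapDataset` class, not the library API): the transform runs at access time, so a random augmentation yields different outputs every epoch.

```python
import random

class LazyMapDataset:
    """Apply fn lazily: it runs each time an item is requested,
    not once up front like an eager map."""
    def __init__(self, data, fn):
        self.data = data
        self.fn = fn
    def __len__(self):
        return len(self.data)
    def __getitem__(self, index):
        return self.fn(self.data[index])

lazy = LazyMapDataset([0.0, 1.0, 2.0], lambda x: x + random.random())
assert len(lazy) == 3
assert 0.0 <= lazy[0] < 1.0  # fn re-runs (and re-randomizes) on every access
```

Later `datasets` releases expose a similar idea as `Dataset.set_transform`, which applies a transform on the fly at `__getitem__` time, if your version has it.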
https://github.com/huggingface/datasets/issues/767 | Add option for named splits when using ds.train_test_split | Yes definitely we should give more flexibility to control the name of the splits outputted by `train_test_split`.
Related is the very interesting feedback from @bramvanroy on how we should improve this method: https://discuss.huggingface.co/t/how-to-split-main-dataset-into-train-dev-test-as-datasetdict/1090/5
And... | ### Feature Request 🚀
Can we add a way to name your splits when using the `.train_test_split` function?
In almost every use case I've come across, I have a `train` and a `test` split in my `DatasetDict`, and I want to create a `validation` split. Therefore, it's kinda useless to get a `test` split back from `tra... | 58 | Add option for named splits when using ds.train_test_split
### Feature Request 🚀
Can we add a way to name your splits when using the `.train_test_split` function?
In almost every use case I've come across, I have a `train` and a `test` split in my `DatasetDict`, and I want to create a `validation` split. Ther... | [
0.07257379591464996,
-0.08529040962457657,
-0.11802507936954498,
-0.045865558087825775,
0.15466883778572083,
0.2035396695137024,
0.625244677066803,
0.1981317102909088,
0.11475063860416412,
0.39402487874031067,
0.04199592024087906,
0.3692275881767273,
-0.24358975887298584,
0.278240919113159... |
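Annotation — until a naming argument exists, the split returned by `train_test_split` can simply be re-keyed. A sketch with a plain dict standing in for the `DatasetDict` (which also behaves like a mapping):

```python
# plain dict standing in for a DatasetDict returned by train_test_split
splits = {"train": list(range(90)), "test": list(range(90, 100))}

# re-key the freshly created "test" split as "validation"
splits["validation"] = splits.pop("test")

assert sorted(splits) == ["train", "validation"]
assert len(splits["validation"]) == 10
```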
https://github.com/huggingface/datasets/issues/761 | Downloaded datasets are not usable offline | Yes currently you need an internet connection because the lib tries to check for the etag of the dataset script online to see if you don't have it locally already.
If we add a way to store the etag/hash locally after the first download, it would allow users to first download the dataset with an internet connection, ... | I've been trying to use the IMDB dataset offline, but after downloading it and turning off the internet it still raises an error from the ```requests``` library trying to reach for the online dataset.
Is this the intended behavior ?
(Sorry, I wrote the first version of this issue while still on nlp 0.3.0). | 75 | Downloaded datasets are not usable offline
I've been trying to use the IMDB dataset offline, but after downloading it and turning off the internet it still raises an error from the ```requests``` library trying to reach for the online dataset.
Is this the intended behavior ?
(Sorry, I wrote the first version o... | [
-0.2098323255777359,
0.2670383155345917,
-0.0017678567674010992,
0.1898873746395111,
-0.05953962355852127,
-0.07488038390874863,
0.39034706354141235,
-0.01728445664048195,
0.17079374194145203,
-0.022502154111862183,
0.12379985302686691,
0.012620574794709682,
-0.03868023306131363,
0.0756759... |
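Annotation — the proposal in this record (remember the resolved script locally so later runs skip the network) can be sketched as follows. The `get_script` helper is hypothetical; `fetch` stands in for the actual HTTP download.

```python
import json
import os
import tempfile

def get_script(url, cache_dir, fetch):
    """Return a locally cached copy of `url`; only call `fetch` on a cache miss,
    so a second run works with no internet connection."""
    os.makedirs(cache_dir, exist_ok=True)
    index_path = os.path.join(cache_dir, "index.json")
    index = {}
    if os.path.exists(index_path):
        with open(index_path) as f:
            index = json.load(f)
    cached = index.get(url)
    if cached and os.path.exists(cached):
        return cached                   # offline-friendly path: no network
    local_path = fetch(url, cache_dir)  # the network is only hit here
    index[url] = local_path
    with open(index_path, "w") as f:
        json.dump(index, f)
    return local_path

calls = []
def fake_fetch(url, cache_dir):
    # stand-in for the real download; counts how often the "network" is used
    calls.append(url)
    path = os.path.join(cache_dir, "script.py")
    with open(path, "w") as f:
        f.write("# downloaded script")
    return path

with tempfile.TemporaryDirectory() as tmp:
    first = get_script("https://example.com/imdb.py", tmp, fake_fetch)
    second = get_script("https://example.com/imdb.py", tmp, fake_fetch)
    assert first == second
    assert len(calls) == 1  # second call never touched the "network"
```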
https://github.com/huggingface/datasets/issues/759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | Does this HEAD request return 200 on your machine ?
```python
import requests
requests.head("https://raw.githubuserc... | Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_da... | 28 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_da... | [
-0.17262035608291626,
-0.08458110690116882,
-0.09733720868825912,
0.1288861185312271,
0.4222562909126282,
0.20194438099861145,
0.3426603674888611,
0.062194470316171646,
-0.05755838751792908,
0.004357459954917431,
-0.14231087267398834,
0.08249624818563461,
-0.06605984270572662,
0.2489191293... |
https://github.com/huggingface/datasets/issues/759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | Thank you very much for your response.
When I run
```
import requests
requests.head("https://raw.githubuserconten... | Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_da... | 272 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_da... | [
-0.17262035608291626,
-0.08458110690116882,
-0.09733720868825912,
0.1288861185312271,
0.4222562909126282,
0.20194438099861145,
0.3426603674888611,
0.062194470316171646,
-0.05755838751792908,
0.004357459954917431,
-0.14231087267398834,
0.08249624818563461,
-0.06605984270572662,
0.2489191293... |
https://github.com/huggingface/datasets/issues/759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | I can browse the google drive through google chrome. It's weird. I can download the dataset through google drive manually. | Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_da... | 20 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_da... | [
-0.17262035608291626,
-0.08458110690116882,
-0.09733720868825912,
0.1288861185312271,
0.4222562909126282,
0.20194438099861145,
0.3426603674888611,
0.062194470316171646,
-0.05755838751792908,
0.004357459954917431,
-0.14231087267398834,
0.08249624818563461,
-0.06605984270572662,
0.2489191293... |
https://github.com/huggingface/datasets/issues/759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | Is it possible I download the dataset manually from google drive and use it for further test ? How can I do this ? I want to reproduce the model in this link https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16. But I can't download the dataset through load_dataset method . I have tried many times and ... | Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_da... | 56 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_da... | [
-0.17262035608291626,
-0.08458110690116882,
-0.09733720868825912,
0.1288861185312271,
0.4222562909126282,
0.20194438099861145,
0.3426603674888611,
0.062194470316171646,
-0.05755838751792908,
0.004357459954917431,
-0.14231087267398834,
0.08249624818563461,
-0.06605984270572662,
0.2489191293... |
https://github.com/huggingface/datasets/issues/759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | The head request should definitely work, not sure what's going on on your side.
If you find a way to make it work, please post it here since other users might encounter the same issue.
If you don't manage to fix it you can use `load_dataset` on google colab and then save it using `dataset.save_to_disk("path/to/data... | Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_da... | 75 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_da... | [
-0.17262035608291626,
-0.08458110690116882,
-0.09733720868825912,
0.1288861185312271,
0.4222562909126282,
0.20194438099861145,
0.3426603674888611,
0.062194470316171646,
-0.05755838751792908,
0.004357459954917431,
-0.14231087267398834,
0.08249624818563461,
-0.06605984270572662,
0.2489191293... |
https://github.com/huggingface/datasets/issues/759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | Hi
I want to know if this problem has been solved because I encountered a similar issue. Thanks.
`train_data = datasets.load_dataset("xsum", split="train")`
`ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/xsum/xsum.py` | Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_da... | 26 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_da... | [
-0.17262035608291626,
-0.08458110690116882,
-0.09733720868825912,
0.1288861185312271,
0.4222562909126282,
0.20194438099861145,
0.3426603674888611,
0.062194470316171646,
-0.05755838751792908,
0.004357459954917431,
-0.14231087267398834,
0.08249624818563461,
-0.06605984270572662,
0.2489191293... |
https://github.com/huggingface/datasets/issues/759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | Hi @smile0925 ! Do you have an internet connection ? Are you using some kind of proxy that may block the access to this file ?
Otherwise you can try to update `datasets` since we introduced retries for http requests in the 1.2.0 version
```
pip install --upgrade datasets
```
Let me know if that helps. | Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_da... | 56 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_da... | [
-0.17262035608291626,
-0.08458110690116882,
-0.09733720868825912,
0.1288861185312271,
0.4222562909126282,
0.20194438099861145,
0.3426603674888611,
0.062194470316171646,
-0.05755838751792908,
0.004357459954917431,
-0.14231087267398834,
0.08249624818563461,
-0.06605984270572662,
0.2489191293... |
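Annotation — to check the proxy hypothesis raised in this record, the standard library can show which proxy settings `requests` (and therefore `datasets`) would pick up from the environment:

```python
import urllib.request

# getproxies() reads HTTP_PROXY / HTTPS_PROXY / NO_PROXY and platform settings;
# a corporate proxy listed here can explain blocked access to
# raw.githubusercontent.com even when a browser works fine.
proxies = urllib.request.getproxies()
print(proxies)  # e.g. {'https': 'http://proxy.corp:8080'} or {} when unset
```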
https://github.com/huggingface/datasets/issues/759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | Hi @lhoestq
Oh, maybe you are right. I find that my server uses some kind of proxy that blocks access to this file.

| Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_da... | 26 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_da... | [
-0.17262035608291626,
-0.08458110690116882,
-0.09733720868825912,
0.1288861185312271,
0.4222562909126282,
0.20194438099861145,
0.3426603674888611,
0.062194470316171646,
-0.05755838751792908,
0.004357459954917431,
-0.14231087267398834,
0.08249624818563461,
-0.06605984270572662,
0.2489191293... |
https://github.com/huggingface/datasets/issues/759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | > Hi @lhoestq
> Oh, maybe you are right. I find that my server uses some kind of proxy that blocks access to this file.
> 
I have the same problem, have you solved it? Many thanks | Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_da... | 40 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_da... | [
-0.17262035608291626,
-0.08458110690116882,
-0.09733720868825912,
0.1288861185312271,
0.4222562909126282,
0.20194438099861145,
0.3426603674888611,
0.062194470316171646,
-0.05755838751792908,
0.004357459954917431,
-0.14231087267398834,
0.08249624818563461,
-0.06605984270572662,
0.2489191293... |
https://github.com/huggingface/datasets/issues/759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | Hi @ZhengxiangShi
You can first check whether your network can access these files. I need to use a VPN to access these files, so I download the files that cannot be accessed locally in advance, and then use them in the code. Like this,
`train_data = datasets.load_dataset("xsum.py", split="train")` | Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_da... | 49 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_da... | [
-0.17262035608291626,
-0.08458110690116882,
-0.09733720868825912,
0.1288861185312271,
0.4222562909126282,
0.20194438099861145,
0.3426603674888611,
0.062194470316171646,
-0.05755838751792908,
0.004357459954917431,
-0.14231087267398834,
0.08249624818563461,
-0.06605984270572662,
0.2489191293... |
https://github.com/huggingface/datasets/issues/758 | Process 0 very slow when using num_procs with map to tokenizer | Hi ! Thanks for reporting.
Is the text length of your data evenly distributed across your dataset? I mean, could it be because the examples in the first part of your dataset are slower to process?
Also, how many CPUs can you use for multiprocessing?
```python
import multiprocessing
print(mu... | <img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_speci... | 62 | Process 0 very slow when using num_procs with map to tokenizer
<img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
... | [
-0.4953111410140991,
-0.5315656065940857,
-0.16952860355377197,
0.2302868813276291,
0.12561628222465515,
-0.32973286509513855,
0.3093046247959137,
0.1606469303369522,
-0.3321199119091034,
0.2106941044330597,
0.3402484357357025,
0.24001578986644745,
-0.10366961359977722,
-0.0406741015613079... |
https://github.com/huggingface/datasets/issues/758 | Process 0 very slow when using num_procs with map to tokenizer | Using pre trained HF tokenizer. The result is the same with tokenizer multiprocessing off and on.
I have (absolutely) no idea about the distribution, but since this issue occurs on all of my datasets (regardless of files), I don't think distribution is the problem.
I can use up to 16 cores. | <img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_speci... | 50 | Process 0 very slow when using num_procs with map to tokenizer
<img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
... | [
-0.45069506764411926,
-0.38954058289527893,
-0.04728243127465248,
0.16293174028396606,
0.1149340569972992,
-0.28154903650283813,
0.26568740606307983,
0.2587326765060425,
-0.3923635482788086,
0.1888352781534195,
0.06290783733129501,
0.37189579010009766,
-0.08907046169042587,
-0.210511058568... |
https://github.com/huggingface/datasets/issues/758 | Process 0 very slow when using num_procs with map to tokenizer | Ok weird, I don't manage to reproduce this issue on my side.
Does it happen even with `num_proc=2` for example ?
Also could you provide more details about your OS and the versions of tokenizers/datasets/multiprocess that you're using ? | <img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_speci... | 39 | Process 0 very slow when using num_procs with map to tokenizer
<img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
... | [
-0.44992369413375854,
-0.33846595883369446,
-0.08818309754133224,
0.1553918868303299,
0.08865959197282791,
-0.30338358879089355,
0.276115745306015,
0.24882656335830688,
-0.41450759768486023,
0.15615476667881012,
0.21321800351142883,
0.3615122437477112,
-0.22451500594615936,
-0.106233075261... |
https://github.com/huggingface/datasets/issues/758 | Process 0 very slow when using num_procs with map to tokenizer | Yes, I can confirm it also happens with ```num_proc=2```.
```
tokenizers 0.9.2
datasets 1.1.2
multiprocess 0.70.10
```
```
Linux nipa2020-0629 4.4.0-178-generic #208-Ubuntu SMP Sun Apr 5 23:45:10 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
``` | <img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_speci... | 34 | Process 0 very slow when using num_procs with map to tokenizer
<img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
... | [
-0.41471239924430847,
-0.2900134325027466,
-0.14023903012275696,
0.177593395113945,
0.08560404926538467,
-0.3098762631416321,
0.31659001111984253,
0.2695380747318268,
-0.4015323519706726,
0.1938363015651703,
0.17181840538978577,
0.44014477729797363,
-0.17750804126262665,
-0.173413753509521... |
https://github.com/huggingface/datasets/issues/758 | Process 0 very slow when using num_procs with map to tokenizer | I can't reproduce on my side unfortunately with the same versions.
Do you have issues when doing multiprocessing with python ?
```python
from tqdm.auto import tqdm
from multiprocess import Pool, RLock
def process_data(shard):
# implement
num_proc = 8
shards = [] # implement, this must be a list of siz... | <img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_speci... | 73 | Process 0 very slow when using num_procs with map to tokenizer
<img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
... | [
-0.3621354401111603,
-0.5087307095527649,
-0.1394251435995102,
0.07497772574424744,
0.040224339812994,
-0.35609763860702515,
0.38819894194602966,
0.14181579649448395,
-0.46268582344055176,
0.22303500771522522,
0.031175706535577774,
0.33822500705718994,
-0.23341882228851318,
-0.087560035288... |
https://github.com/huggingface/datasets/issues/757 | CUDA out of memory | ```python
tokenizer = FunnelTokenizer.from_pretrained('funnel-transformer/small')
def tokenize(batch):
return tokenizer(batch['text'], padding='max_length', truncation=True, max_length=512)
dataset = load_dataset("bookcorpus", split='train[:1000]').shuffle()
dataset = dataset.map(tokenize, batched=True, batc... | Using your dataset, CUDA runs out of memory as soon as the trainer begins:
however, without changing any other element/parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
| 64 | CUDA out of memory
Using your dataset, CUDA runs out of memory as soon as the trainer begins:
however, without changing any other element/parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
```python
tokenizer = FunnelTokenizer.from_pretrained('funnel-transformer/small')
def tokenize(... | [
-0.31007876992225647,
-0.129237100481987,
-0.13420474529266357,
0.11753957718610764,
0.46882668137550354,
-0.14419759809970856,
0.2783525586128235,
0.09987912327051163,
-0.24337750673294067,
0.35919103026390076,
0.08298465609550476,
-0.0730312243103981,
-0.11920901387929916,
0.321278929710... |
https://github.com/huggingface/datasets/issues/757 | CUDA out of memory | `RuntimeError: CUDA out of memory. Tried to allocate 954.00 MiB (GPU 0; 15.90 GiB total capacity; 14.35 GiB already allocated; 753.75 MiB free; 14.39 GiB reserved in total by PyTorch)
Exception raised from malloc at /pytorch/c10/cuda/CUDACachingAllocator.cpp:272 (most recent call first):`
part of the error output | Using your dataset, CUDA runs out of memory as soon as the trainer begins:
however, without changing any other element/parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
| 44 | CUDA out of memory
Using your dataset, CUDA runs out of memory as soon as the trainer begins:
however, without changing any other element/parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
`RuntimeError: CUDA out of memory. Tried to allocate 954.00 MiB (GPU 0; 15.90 GiB total capacity; 1... | [
-0.2118009775876999,
-0.14360404014587402,
-0.12512393295764923,
0.37566810846328735,
0.48718583583831787,
0.022162366658449173,
0.23726381361484528,
0.18793366849422455,
-0.059131838381290436,
0.36151549220085144,
0.2032219022512436,
0.08994990587234497,
-0.11036086827516556,
0.0615567564... |
https://github.com/huggingface/datasets/issues/757 | CUDA out of memory | from the funnel model to a bert model: the error still happened
from your dataset to `LineByLineTextDataset`: the error disappeared | With your dataset, CUDA runs out of memory as soon as the trainer begins:
however, without changing any other element or parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
| 18 | CUDA out of memory
With your dataset, CUDA runs out of memory as soon as the trainer begins:
however, without changing any other element or parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
from the funnel model to a bert model: the error still happened
from your dataset to LineByLineTextDatase... | [
… (embedding vector truncated) ] |
https://github.com/huggingface/datasets/issues/757 | CUDA out of memory | Since you're using a data collator you don't need to tokenize the dataset using `map`. Could you try not to use `map` and only the data collator instead? The data collator is supposed to pad to the longest sequence in each batch afaik, instead of padding to 512.
Also cc @sgugger | With your dataset, CUDA runs out of memory as soon as the trainer begins:
however, without changing any other element or parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
| 52 | CUDA out of memory
With your dataset, CUDA runs out of memory as soon as the trainer begins:
however, without changing any other element or parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
Since you're using a data collator you don't need to tokenize the dataset using `map`. Could you t... | [
… (embedding vector truncated) ] |
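The padding behaviour this comment relies on (pad each batch to its own longest sequence rather than to a fixed 512) can be sketched in plain Python. This is an illustrative toy, not the actual `transformers` collator:

```python
def pad_batch(batch, pad_id=0):
    # Pad every sequence to the length of the longest one in *this* batch.
    longest = max(len(seq) for seq in batch)
    return [seq + [pad_id] * (longest - len(seq)) for seq in batch]

batch = [[5, 9, 2], [7], [3, 1]]
padded = pad_batch(batch)
# Rows are padded to length 3 (the batch maximum), not to a fixed 512,
# so the resulting tensors, and hence GPU memory use, stay much smaller.
```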
https://github.com/huggingface/datasets/issues/751 | Error loading ms_marco v2.1 using load_dataset() | There was a similar issue in #294
Clearing the cache and downloading the dataset again did the job. Could you try to clear your cache and download the dataset again? | Code:
`dataset = load_dataset('ms_marco', 'v2.1')`
Error:
```
`---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
<ipython-input-16-34378c057212> in <module>()
9
10 # Downloading and loading a data... | 31 | Error loading ms_marco v2.1 using load_dataset()
Code:
`dataset = load_dataset('ms_marco', 'v2.1')`
Error:
```
`---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
<ipython-input-16-34378c057212> in <module>()
... | [
… (embedding vector truncated) ] |
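A minimal sketch of the cache-clearing step suggested above; the default cache location and the per-dataset directory name are assumptions about a typical setup, so adjust them to your environment:

```python
import os
import shutil

# Default datasets cache location (an assumption; HF_DATASETS_CACHE overrides it).
cache_dir = os.path.expanduser(
    os.environ.get("HF_DATASETS_CACHE", "~/.cache/huggingface/datasets"))
# Hypothetical per-dataset subdirectory for ms_marco.
target = os.path.join(cache_dir, "ms_marco")

if os.path.isdir(target):
    shutil.rmtree(target)  # the next load_dataset('ms_marco', ...) re-downloads
```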
https://github.com/huggingface/datasets/issues/751 | Error loading ms_marco v2.1 using load_dataset() | I was able to load the dataset successfully, I'm pretty sure it's just a cache issue that you have.
Let me know if clearing your cache fixes the problem | Code:
`dataset = load_dataset('ms_marco', 'v2.1')`
Error:
```
`---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
<ipython-input-16-34378c057212> in <module>()
9
10 # Downloading and loading a data... | 29 | Error loading ms_marco v2.1 using load_dataset()
Code:
`dataset = load_dataset('ms_marco', 'v2.1')`
Error:
```
`---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
<ipython-input-16-34378c057212> in <module>()
... | [
… (embedding vector truncated) ] |
https://github.com/huggingface/datasets/issues/749 | [XGLUE] Adding new dataset | Small poll @thomwolf @yjernite @lhoestq @JetRunner @qiweizhen .
As stated in the XGLUE paper: https://arxiv.org/pdf/2004.01401.pdf , for each of the 11 down-stream tasks training data is only available in English, whereas development and test data is available in multiple different languages, *cf.* the table in the paper.
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | 105 | [XGLUE] Adding new dataset
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance
Small poll @thomwolf @yjernite @lh... | [
… (embedding vector truncated) ] |
https://github.com/huggingface/datasets/issues/749 | [XGLUE] Adding new dataset | In this case we should have named splits, so config `ner` has splits `train`, `validation`, `test-en`, `test-ar`, `test-bg`, etc...
This is more in the spirit of the task afaiu, and will avoid making users do the filtering step themselves when testing different models or different configurations of the same model. | XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | 50 | [XGLUE] Adding new dataset
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance
In this case we should have named s... | [
… (embedding vector truncated) ] |
https://github.com/huggingface/datasets/issues/749 | [XGLUE] Adding new dataset | I see your point!
I think this would be quite feasible to do and makes sense to me as well! In the paper results are reported per language, so it seems more natural to do it this way.
Good for me @yjernite ! What do the others think? @lhoestq
| XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | 49 | [XGLUE] Adding new dataset
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance
I see your point!
I think this ... | [
… (embedding vector truncated) ] |
https://github.com/huggingface/datasets/issues/749 | [XGLUE] Adding new dataset | Okay, actually it's not that easy to add things like `test-de` to `datasets` => this would be the first dataset to have this.
See: https://github.com/huggingface/datasets/pull/802 | XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | 24 | [XGLUE] Adding new dataset
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance
Okey actually not that easy to add ... | [
… (embedding vector truncated) ] |
https://github.com/huggingface/datasets/issues/749 | [XGLUE] Adding new dataset | IMO we should have one config per language. That's what we're doing for xnli, xtreme etc.
Having split names that depend on the language seems wrong. We should try to avoid split names that are not train/val/test.
Sorry for the late response on this one | XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | 44 | [XGLUE] Adding new dataset
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance
IMO we should have one config per l... | [
… (embedding vector truncated) ] |
https://github.com/huggingface/datasets/issues/749 | [XGLUE] Adding new dataset | @lhoestq agreed on having one config per language, but we also need to be able to have different split names and people are going to want to use hyphens, so we should at the very least warn them why it's failing :) E.g. for ANLI with different stages of data (currently using underscores) or https://www.tau-nlp.org/comm... | XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | 61 | [XGLUE] Adding new dataset
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance
@lhoestq agreed on having one confi... | [
… (embedding vector truncated) ] |
https://github.com/huggingface/datasets/issues/749 | [XGLUE] Adding new dataset | Really cool dataset 👍 btw. does Transformers support all 11 tasks 🤔 would be awesome to have a xglue script (like the "normal" glue one) | XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | 25 | [XGLUE] Adding new dataset
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance
Really cool dataset 👍 btw. does Tr... | [
… (embedding vector truncated) ] |
https://github.com/huggingface/datasets/issues/749 | [XGLUE] Adding new dataset | Just to make sure this is what we want here. If we add one config per language,
this means that this dataset ends up with well over 100 different configs, most of which will have the same `train` split. The train split is always in English. Also, to be honest, I'm not sure whether it's better for the user.
I thin... | XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | 107 | [XGLUE] Adding new dataset
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance
Just to make sure this is what we w... | [
… (embedding vector truncated) ] |
https://github.com/huggingface/datasets/issues/749 | [XGLUE] Adding new dataset | Oh yes, right, I didn't notice the train set was always in English, sorry.
Moreover it seems that the way this dataset is used is to pick a pretrained multilingual model, fine-tune it on the English train set and then evaluate on each test set (one per language).
So to better fit the usual usage of this dataset, I agree... | XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | 122 | [XGLUE] Adding new dataset
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance
Oh yes right I didn't notice the tr... | [
… (embedding vector truncated) ] |
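The layout agreed on in the thread above (one config per language, each reusing the shared English train set and carrying a language-specific test set) can be sketched as plain data. The language codes and file names here are illustrative, not the real XGLUE manifest:

```python
LANGUAGES = ["en", "de", "es", "fr"]  # illustrative subset of the XGLUE languages

def splits_for(lang):
    return {
        "train": "en.train.jsonl",         # train is always the English data
        "validation": f"{lang}.valid.jsonl",
        "test": f"{lang}.test.jsonl",
    }

# One config per (task, language) pair, e.g. "ner.de", instead of
# language-dependent split names like "test-de".
configs = {f"ner.{lang}": splits_for(lang) for lang in LANGUAGES}
```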
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Thank you !
Could you provide a csv file that reproduces the error ?
It doesn't have to be one of your datasets, as long as it reproduces the error.
That would help a lot ! | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 36 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... | [
… (embedding vector truncated) ] |
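To hand over a file that reproduces such an error, a tiny tab-separated sample matching `column_names=["title", "text"]` can be written with the standard library alone; the file name is only an example:

```python
import csv
import os
import tempfile

rows = [("A first title", "Some body text"),
        ("Another title", "More body text")]

path = os.path.join(tempfile.gettempdir(), "sample_data.csv")
with open(path, "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")  # matches delimiter="\t" above
    writer.writerows(rows)                  # two columns: title, text
```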
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | I think another good example is the following:
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sts-dev.csv"], delimiter="\t", column_names=["one", "two", "three", "four", "score", "sentence1", "sentence2"], script_version="master")`
`
Displayed error `CSV parse error: Expe... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 72 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... | [
… (embedding vector truncated) ] |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Hi, it seems I also can't read a csv file. I was trying with a dummy csv with only three rows.
```
text,label
I hate google,negative
I love Microsoft,positive
I don't like you,negative
```
I was using the HuggingFace image in Paperspace Gradient (datasets==1.1.3). The following code doesn't work:
```
from datas... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 141 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... | [
… (embedding vector truncated) ] |
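As a sanity check that the three-row dummy file above is well-formed CSV, it parses cleanly with the standard library alone (the raw string below reproduces the file's contents):

```python
import csv
import io

raw = ("text,label\n"
       "I hate google,negative\n"
       "I love Microsoft,positive\n"
       "I don't like you,negative\n")

rows = list(csv.DictReader(io.StringIO(raw)))
# Three records with "text" and "label" fields, so the parse failure is
# unlikely to come from the file's shape itself.
```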
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | This is because load_dataset without `split=` returns a dictionary of split names (train/validation/test) to dataset.
You can do
```python
from datasets import load_dataset
dataset = load_dataset('csv', script_version="master", data_files=['test_data.csv'], delimiter=",")
print(dataset["train"][0])
```
Or if y... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 55 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... | [
… (embedding vector truncated) ] |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Good point
Design question for us, though: should `load_dataset`, when no split is specified and only one split is present in the dataset (common use case with CSV/text/JSON datasets), return a `Dataset` instead of a `DatasetDict`? I feel like it's often what the user is expecting. I break a bit the paradigm of a uniqu... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 89 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... | [
… (embedding vector truncated) ] |
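The design question above can be sketched with plain containers: return the single split directly when nothing was requested and only one split exists, otherwise keep the dict-of-splits shape. This is a toy illustration of the trade-off, not the library's actual logic:

```python
def load(splits, requested=None):
    if requested is not None:
        return splits[requested]       # explicit split -> that dataset only
    if len(splits) == 1:               # the common CSV/JSON/text case: just "train"
        return next(iter(splits.values()))
    return splits                      # several splits -> keep the mapping

single = load({"train": [1, 2, 3]})        # a Dataset-like object directly
multi = load({"train": [1], "test": [2]})  # still a dict of splits
```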
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | In this case the user expects to get only one dataset object instead of the dictionary of datasets since only one csv file was specified without any split specifications.
I'm ok with returning the dataset object if no split specifications are given for text/json/csv/pandas.
For the other datasets on the other hand... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 73 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... | [
… (embedding vector truncated) ] |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Thanks for your quick response! I'm fine with specifying the split as @lhoestq suggested. My only concern is when I'm loading from python dict or pandas, the library returns a dataset instead of a dictionary of datasets when no split is specified. I know that they use a different function `Dataset.from_dict` or `Datase... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 78 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... | [
… (embedding vector truncated) ] |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | ```
from datasets import load_dataset
dataset = load_dataset('csv', data_files='./amazon_data/Video_Games_5.csv', delimiter=",", split=['train', 'test'])
```
I was running the above line, but got this error.
```ValueError: Unknown split "test". Should be one of ['train'].```
The data is amazon product data. I... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 78 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... | [
… (embedding vector truncated) ] |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Hi ! the `split` argument in `load_dataset` is used to select the splits you want among the available splits.
However when loading a csv with a single file as you did, only a `train` split is available by default.
Indeed since `data_files='./amazon_data/Video_Games_5.csv'` is equivalent to `data_files={"train": './... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 123 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... | [
… (embedding vector truncated) ] |
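Since a single CSV only yields a `train` split, a held-out test set has to be carved out manually. A minimal stdlib sketch of that shuffle-and-slice step (the ratio and seed are arbitrary):

```python
import random

def train_test_split(rows, test_ratio=0.2, seed=0):
    # Shuffle a copy, then slice off the last test_ratio fraction as "test".
    shuffled = rows[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return {"train": shuffled[:cut], "test": shuffled[cut:]}

splits = train_test_split(list(range(10)))
```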
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | > In this case the user expects to get only one dataset object instead of the dictionary of datasets since only one csv file was specified without any split specifications.
> I'm ok with returning the dataset object if no split specifications are given for text/json/csv/pandas.
>
> For the other datasets on the ot... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 107 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... | [
… (embedding vector truncated) ] |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | > Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
>
> `from datasets import load_dataset`
> `dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")`
>
> Displayed error:
> `... ArrowI... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 319 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... | [
… (embedding vector truncated) ] |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Hi @kauvinlucas
You can use the latest versions of `datasets` to do this.
To do so, just `pip install datasets` instead of `nlp` (the library was renamed) and then
```python
from datasets import load_dataset
dataset = load_dataset('csv', data_files='sample_data.csv')
```
 | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 38 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... | [
… (embedding vector truncated) ] |