Dataset columns (length/value ranges as reported by the dataset viewer):

| column | dtype | min | max |
|---|---|---|---|
| html_url | string (length) | 48 | 51 |
| title | string (length) | 5 | 268 |
| comments | string (length) | 63 | 51.8k |
| body | string (length, nullable) | 0 | 36.2k |
| comment_length | int64 (value) | 16 | 1.52k |
| text | string (length) | 164 | 54.1k |
| embeddings | list of float | | |
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
I do not quite understand what you mean. As far as I can tell, using `to_bytes` does a pickle dump behind the scenes (with `srsly`), recursively using `to_bytes` on the required objects. Therefore, the result of `to_bytes` is a deterministic pickle dump AFAICT. Or do you mean that you wish that using your own pickler an...
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
271
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -0.0547037572, -0.0296596158, 0.1383616775, 0.1674575657, 0.2572079003, -0.1886388958, 0.3345493674, 0.0546137951, 0.0525931679, 0.1248953864, 0.0389523171, 0.5095518827, -0.2360349447, 0.3779696524, -0.1779329479, 0.0818778947, 0.0879114941, -0.0570349731, 0.0878298432, -0.119...
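A quick way to check the determinism claim in the comment above is to hash the serialized pipeline across two fresh loads. A minimal sketch, assuming the `en_core_web_sm` model is installed; this is an illustration, not code from the thread:

```python
import hashlib

import spacy

# load the same pipeline twice and compare digests of the serialized bytes;
# if to_bytes is deterministic as claimed, the two digests should match
digest_a = hashlib.sha256(spacy.load("en_core_web_sm").to_bytes()).hexdigest()
digest_b = hashlib.sha256(spacy.load("en_core_web_sm").to_bytes()).hexdigest()
print(digest_a == digest_b)
```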
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
Interestingly, my PR does not solve the issue discussed above. The `tokenize` function hash is different on every run, because for some reason `nlp.__call__` has a different hash every time. The issue therefore seems to run much deeper than I thought. If you have any ideas, I'm all ears. ```shell git clone https://...
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
151
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -0.0547037572, -0.0296596158, 0.1383616775, 0.1674575657, 0.2572079003, -0.1886388958, 0.3345493674, 0.0546137951, 0.0525931679, 0.1248953864, 0.0389523171, 0.5095518827, -0.2360349447, 0.3779696524, -0.1779329479, 0.0818778947, 0.0879114941, -0.0570349731, 0.0878298432, -0.119...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
Hi! I just answered in your PR :) For your custom hashing to be used for nested objects, you must integrate it into the recursive pickler that we use for hashing.
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
34
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -0.0547037572, -0.0296596158, 0.1383616775, 0.1674575657, 0.2572079003, -0.1886388958, 0.3345493674, 0.0546137951, 0.0525931679, 0.1248953864, 0.0389523171, 0.5095518827, -0.2360349447, 0.3779696524, -0.1779329479, 0.0818778947, 0.0879114941, -0.0570349731, 0.0878298432, -0.119...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
I don't quite understand the design constraints of `datasets` or the script that you're running, but my usual advice is to avoid using pickle unless you _absolutely_ have to. So, for instance, instead of doing your `partial` over the `nlp` object itself, can you just pass the string `en_core_web_sm` in? This will mean ca...
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
177
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -0.0547037572, -0.0296596158, 0.1383616775, 0.1674575657, 0.2572079003, -0.1886388958, 0.3345493674, 0.0546137951, 0.0525931679, 0.1248953864, 0.0389523171, 0.5095518827, -0.2360349447, 0.3779696524, -0.1779329479, 0.0818778947, 0.0879114941, -0.0570349731, 0.0878298432, -0.119...
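A sketch of the suggestion above: pass the model name rather than the `nlp` object, and load the pipeline inside the mapped function so only a plain string needs to be hashed. The cache dict and column names here are illustrative assumptions, not from the original thread:

```python
import datasets
import spacy

_NLP_CACHE = {}  # one pipeline per process, keyed by model name

def tokenize(batch, model_name):
    # loading inside the function means the unhashable nlp object never
    # becomes part of the mapped function's closure
    if model_name not in _NLP_CACHE:
        _NLP_CACHE[model_name] = spacy.load(model_name)
    nlp = _NLP_CACHE[model_name]
    return {"tokens": [[t.text for t in doc] for doc in nlp.pipe(batch["text"])]}

ds = datasets.Dataset.from_dict({"text": ["A first sentence.", "A second one."]})
ds = ds.map(tokenize, batched=True, fn_kwargs={"model_name": "en_core_web_sm"})
```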
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
Hi Matthew, thanks for chiming in! We are currently implementing exactly what you suggest: `to_bytes()` as a default before pickling - but we may prefer `to_dict` to avoid double dumping. `datasets` uses pickle dumps (actually dill) to get unique representations of processing steps (a "fingerprint" or hash). So it n...
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
275
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -0.0547037572, -0.0296596158, 0.1383616775, 0.1674575657, 0.2572079003, -0.1886388958, 0.3345493674, 0.0546137951, 0.0525931679, 0.1248953864, 0.0389523171, 0.5095518827, -0.2360349447, 0.3779696524, -0.1779329479, 0.0818778947, 0.0879114941, -0.0570349731, 0.0878298432, -0.119...
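The fingerprinting described in the comment above is exposed through `datasets.fingerprint.Hasher`, which dill-dumps an object and hashes the bytes. A small sketch of how the hash tracks the function's pickled content (function names are arbitrary):

```python
from datasets.fingerprint import Hasher

def tokenize_v1(example):
    return {"n_chars": len(example["text"])}

def tokenize_v2(example):
    return {"n_chars": len(example["text"]) + 1}

# functions with different bodies dill-dump to different bytes, hence
# different fingerprints, hence different cache files
print(Hasher.hash(tokenize_v1))
print(Hasher.hash(tokenize_v2))
```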
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
Is there a workaround for this? Maybe by explicitly requesting datasets to cache the result of `.map()`?
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
17
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -0.0547037572, -0.0296596158, 0.1383616775, 0.1674575657, 0.2572079003, -0.1886388958, 0.3345493674, 0.0546137951, 0.0525931679, 0.1248953864, 0.0389523171, 0.5095518827, -0.2360349447, 0.3779696524, -0.1779329479, 0.0818778947, 0.0879114941, -0.0570349731, 0.0878298432, -0.119...
https://github.com/huggingface/datasets/issues/3178
"Property couldn't be hashed properly" even though fully picklable
Hi! If your function is not picklable, then the fingerprint of the resulting dataset can't be computed. The fingerprint is a hash that is used by the cache to reload previously computed datasets: the dataset file is named `cache-<fingerprint>.arrow` in your dataset's cache directory. As a workaround you can set the...
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
102
"Property couldn't be hashed properly" even though fully picklable ## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It...
[ -0.0547037572, -0.0296596158, 0.1383616775, 0.1674575657, 0.2572079003, -0.1886388958, 0.3345493674, 0.0546137951, 0.0525931679, 0.1248953864, 0.0389523171, 0.5095518827, -0.2360349447, 0.3779696524, -0.1779329479, 0.0818778947, 0.0879114941, -0.0570349731, 0.0878298432, -0.119...
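The quoted workaround is truncated; one parameter that does exist on `Dataset.map` is `new_fingerprint`, which lets you name the fingerprint yourself so nothing needs to be hashed. A sketch (the fingerprint string is an arbitrary example):

```python
import datasets

ds = datasets.Dataset.from_dict({"text": ["a", "bb"]})

# supplying the fingerprint directly avoids hashing the (unhashable) function;
# for a dataset backed by cache files, the result would be written as
# cache-spacy-tokenize-v1.arrow
ds = ds.map(lambda ex: {"n_chars": len(ex["text"])}, new_fingerprint="spacy-tokenize-v1")
```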
https://github.com/huggingface/datasets/issues/3177
More control over TQDM when using map/filter with multiple processes
Hi, It's hard to provide an API that would cover all use cases with tqdm in this project. However, you can make it work by defining a custom decorator (a bit hacky, though) as follows: ```python import datasets def progress_only_on_rank_0(func): def wrapper(*args, **kwargs): rank = kwargs.get("rank...
It would help with the clutter in my terminal if tqdm is only shown for rank 0 when using `num_proc>0` in the map and filter methods of datasets. ```python dataset.map(lambda examples: tokenize(examples["text"]), batched=True, num_proc=6) ``` The above snippet leads to a lot of TQDM bars and depending on your...
129
More control over TQDM when using map/filter with multiple processes It would help with the clutter in my terminal if tqdm is only shown for rank 0 when using `num_proc>0` in the map and filter methods of datasets. ```python dataset.map(lambda examples: tokenize(examples["text"]), batched=True, num_proc=6) ```...
[ -0.4308310151, -0.1890668422, -0.1275618821, -0.2148895264, 0.2067866921, -0.4247946143, 0.2956124842, 0.2832436264, -0.0885269493, 0.1811400056, 0.0224597193, 0.5895184278, -0.3055741787, 0.3410098851, -0.1393373013, -0.0427788869, -0.2126195729, -0.0323361456, -0.055747021, 0...
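The decorator in the comment above is cut off; here is a hedged reconstruction of the idea, assuming `Dataset._map_single` accepts a `disable_tqdm` keyword as it did in `datasets` 1.x (a private method, so check your installed version before relying on this):

```python
import datasets

def progress_only_on_rank_0(func):
    def wrapper(*args, **kwargs):
        # rank is None in the main process and 0..num_proc-1 in the workers
        rank = kwargs.get("rank")
        if rank is not None and rank > 0:
            kwargs["disable_tqdm"] = True  # silence every worker except rank 0
        return func(*args, **kwargs)
    return wrapper

datasets.Dataset._map_single = progress_only_on_rank_0(datasets.Dataset._map_single)
```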
https://github.com/huggingface/datasets/issues/3177
More control over TQDM when using map/filter with multiple processes
Inspiration may be found in `transformers`. https://github.com/huggingface/transformers/blob/4a394cf53f05e73ab9bbb4b179a40236a5ffe45a/src/transformers/trainer.py#L1231-L1233 To get unique IDs for each worker, see https://stackoverflow.com/a/10192611/1150683
It would help with the clutter in my terminal if tqdm is only shown for rank 0 when using `num_proc>0` in the map and filter methods of datasets. ```python dataset.map(lambda examples: tokenize(examples["text"]), batched=True, num_proc=6) ``` The above snippet leads to a lot of TQDM bars and depending on your...
16
More control over TQDM when using map/filter with multiple processes It would help with the clutter in my terminal if tqdm is only shown for rank 0 when using `num_proc>0` in the map and filter methods of datasets. ```python dataset.map(lambda examples: tokenize(examples["text"]), batched=True, num_proc=6) ```...
[ -0.38911587, -0.2537093163, -0.1010459661, -0.1923469156, 0.2609188259, -0.449105531, 0.5612116456, 0.3046912253, -0.1326827705, 0.2505338788, 0.0430045575, 0.4588483572, -0.3234093189, 0.3719583452, -0.075382553, -0.0509360023, -0.1899889708, -0.0078426553, -0.0903167427, 0.25...
https://github.com/huggingface/datasets/issues/3172
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1`
NB: even if the error is raised, the dataset is successfully cached. So restarting the script after every `map()` ultimately allows the whole preprocessing to run. But this makes it unrealistic to run the code over multiple nodes.
## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below. The exception is raised only when the code runs within a specific context. Despite ~10h spent ...
37
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1` ## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below. Th...
[ -0.4948825538, 0.0149980877, 0.1314036548, 0.2600641847, 0.3205186129, 0.0034364853, 0.4633168578, 0.0224105176, 0.020504212, 0.1974067092, 0.1711901277, 0.5095179677, -0.2995520234, -0.264898479, -0.2079024166, 0.20999071, 0.0466203913, -0.1581077874, -0.2430632859, 0.44501310...
https://github.com/huggingface/datasets/issues/3172
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1`
Hi, It's not easy to debug the problem without the script. I may be wrong since I'm not very familiar with PyTorch Lightning, but shouldn't you preprocess the data in the `prepare_data` method of `LightningDataModule` rather than in the `setup` method? As you can't modify the module state in `prepare_data` (accordi...
## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below. The exception is raised only when the code runs within a specific context. Despite ~10h spent ...
99
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1` ## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below. Th...
[ -0.4948825538, 0.0149980877, 0.1314036548, 0.2600641847, 0.3205186129, 0.0034364853, 0.4633168578, 0.0224105176, 0.020504212, 0.1974067092, 0.1711901277, 0.5095179677, -0.2995520234, -0.264898479, -0.2079024166, 0.20999071, 0.0466203913, -0.1581077874, -0.2430632859, 0.44501310...
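A minimal sketch of the `prepare_data` vs. `setup` split suggested above, using a standard `LightningDataModule`; the dataset name and map function are placeholders, not from the issue:

```python
import pytorch_lightning as pl
from datasets import load_dataset

class TextDataModule(pl.LightningDataModule):
    def prepare_data(self):
        # called once, on a single process: do the heavy map here so the
        # result lands in the datasets cache exactly once
        ds = load_dataset("imdb")
        ds.map(lambda ex: {"n_chars": len(ex["text"])}, num_proc=4)

    def setup(self, stage=None):
        # called on every process: the identical map call now just reloads
        # the cached Arrow file instead of recomputing
        ds = load_dataset("imdb")
        self.ds = ds.map(lambda ex: {"n_chars": len(ex["text"])}, num_proc=4)
```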
https://github.com/huggingface/datasets/issues/3172
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1`
Hi @mariosasko, thank you for the hint; that helped me move forward with that issue. I did a major refactoring of my project to disentangle my `LightningDataModule` and `Dataset`. Just FYI, it looks like: ```python class Builder: def __call__(self) -> DatasetDict: # load and preprocess the data ...
## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below. The exception is raised only when the code runs within a specific context. Despite ~10h spent ...
170
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1` ## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below. Th...
[ -0.4948825538, 0.0149980877, 0.1314036548, 0.2600641847, 0.3205186129, 0.0034364853, 0.4633168578, 0.0224105176, 0.020504212, 0.1974067092, 0.1711901277, 0.5095179677, -0.2995520234, -0.264898479, -0.2079024166, 0.20999071, 0.0466203913, -0.1581077874, -0.2430632859, 0.44501310...
https://github.com/huggingface/datasets/issues/3172
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1`
Please allow me to revive this discussion, as I have an extremely similar issue. Instead of an error, my datasets functions simply aren't caching properly. My setup is almost the same as yours, with hydra to configure my experiment parameters. @vlievin Could you confirm if your code correctly loads the cache? If so,...
## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below. The exception is raised only when the code runs within a specific context. Despite ~10h spent ...
85
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1` ## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below. Th...
[ -0.4948825538, 0.0149980877, 0.1314036548, 0.2600641847, 0.3205186129, 0.0034364853, 0.4633168578, 0.0224105176, 0.020504212, 0.1974067092, 0.1711901277, 0.5095179677, -0.2995520234, -0.264898479, -0.2079024166, 0.20999071, 0.0466203913, -0.1581077874, -0.2430632859, 0.44501310...
https://github.com/huggingface/datasets/issues/3172
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1`
Hello @mariomeissner, very sorry for the late reply, I hope you have found a solution to your problem! I don't have public code at the moment. I have not experienced any other issue with hydra, even if I don't understand why changing the location of the definition of `run()` fixed the problem. Overall, I don't h...
## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below. The exception is raised only when the code runs within a specific context. Despite ~10h spent ...
74
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1` ## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below. Th...
[ -0.4948825538, 0.0149980877, 0.1314036548, 0.2600641847, 0.3205186129, 0.0034364853, 0.4633168578, 0.0224105176, 0.020504212, 0.1974067092, 0.1711901277, 0.5095179677, -0.2995520234, -0.264898479, -0.2079024166, 0.20999071, 0.0466203913, -0.1581077874, -0.2430632859, 0.44501310...
https://github.com/huggingface/datasets/issues/3172
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1`
I solved my issue by turning the map callable into a class static method, like they do in `lightning-transformers`. Very strange...
## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below. The exception is raised only when the code runs within a specific context. Despite ~10h spent ...
21
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1` ## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below. Th...
[ -0.4948825538, 0.0149980877, 0.1314036548, 0.2600641847, 0.3205186129, 0.0034364853, 0.4633168578, 0.0224105176, 0.020504212, 0.1974067092, 0.1711901277, 0.5095179677, -0.2995520234, -0.264898479, -0.2079024166, 0.20999071, 0.0466203913, -0.1581077874, -0.2430632859, 0.44501310...
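A sketch of the static-method workaround described in the comment above; the class and column names are illustrative:

```python
from datasets import Dataset

class Preprocessor:
    @staticmethod
    def tokenize(example):
        # a staticmethod carries no reference to an instance, so dill can
        # dump it deterministically and the fingerprint stays stable
        return {"tokens": example["text"].split()}

ds = Dataset.from_dict({"text": ["hello world", "foo bar"]})
ds = ds.map(Preprocessor.tokenize)
```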
https://github.com/huggingface/datasets/issues/3171
Raise exceptions instead of using assertions for control flow
Adding the remaining tasks for this issue to help new code contributors. $ cd src/datasets && ack assert -lc - [x] commands/convert.py:1 - [x] arrow_reader.py:3 - [x] load.py:7 - [x] utils/py_utils.py:2 - [x] features/features.py:9 - [x] arrow_writer.py:7 - [x] search.py:6 - [x] table.py:1 - [x] metric.py:...
Motivated by https://github.com/huggingface/transformers/issues/12789 in Transformers, one welcome change would be replacing assertions with proper exceptions. The only assertions we should keep are those used as sanity checks. Currently, there is a total of 87 files with `assert` statements (located u...
61
Raise exceptions instead of using assertions for control flow Motivated by https://github.com/huggingface/transformers/issues/12789 in Transformers, one welcome change would be replacing assertions with proper exceptions. The only assertions we should keep are those used as sanity checks. Currently, ther...
[ -0.0881650448, -0.3873391151, -0.1070538238, 0.0649696961, 0.2689186335, -0.3596983254, 0.2094397098, 0.2920357883, -0.0260700602, 0.2640655637, 0.2128269821, 0.0678462237, -0.0809202716, 0.0795362219, -0.1360158026, -0.2444677204, -0.0724885985, 0.0768777505, -0.054277271, -0....
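For contributors picking up the checklist above, the typical rewrite looks like this; a generic sketch, not a diff from the repository:

```python
def check_lengths(predictions, references):
    # before: assert len(predictions) == len(references), "Mismatched lengths"
    # assertions disappear under `python -O`, so control flow must not rely
    # on them; a real exception always fires and is catchable by callers
    if len(predictions) != len(references):
        raise ValueError(
            f"Mismatched lengths: {len(predictions)} predictions "
            f"vs {len(references)} references"
        )

check_lengths([1, 2], [3, 4])  # passes silently
```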
https://github.com/huggingface/datasets/issues/3171
Raise exceptions instead of using assertions for control flow
Hi all, I am interested in taking up `fingerprint.py`, `search.py`, `arrow_writer.py` and `metric.py`. Will raise a PR soon!
Motivated by https://github.com/huggingface/transformers/issues/12789 in Transformers, one welcome change would be replacing assertions with proper exceptions. The only assertions we should keep are those used as sanity checks. Currently, there is a total of 87 files with `assert` statements (located u...
18
Raise exceptions instead of using assertions for control flow Motivated by https://github.com/huggingface/transformers/issues/12789 in Transformers, one welcome change would be replacing assertions with proper exceptions. The only assertions we should keep are those used as sanity checks. Currently, ther...
[ -0.0443977453, -0.3855044544, -0.0965588316, 0.0352225304, 0.29737854, -0.4148050845, 0.1632264405, 0.3058510721, -0.0634227172, 0.2538894415, 0.2832562923, -0.1116684303, -0.0943637714, 0.136702463, -0.0752318949, -0.3936465979, -0.0165454652, 0.0376642868, 0.1048267484, -0.01...
https://github.com/huggingface/datasets/issues/3168
OpenSLR/83 is empty
Hi @tyrius02, thanks for reporting. I see you self-assigned this issue: are you working on this?
## Describe the bug As the summary says, openslr / SLR83 / train is empty. The dataset returned after loading indicates there are **zero** rows. The correct number should be **17877**. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('openslr', 'SLR83') ``` ## Expected resul...
16
OpenSLR/83 is empty ## Describe the bug As the summary says, openslr / SLR83 / train is empty. The dataset returned after loading indicates there are **zero** rows. The correct number should be **17877**. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('openslr', 'SLR83') ``...
[ -0.1339388043, 0.2001884282, 0.0174183473, 0.258898735, 0.1328081936, 0.017364895, 0.5546807051, 0.3825571239, 0.0937651321, 0.307217896, -0.0423920378, 0.4279454947, -0.2365683466, -0.0011826659, 0.0797990113, -0.0660131574, 0.0724034533, 0.1122319549, -0.0167127736, -0.257775...
https://github.com/huggingface/datasets/issues/3168
OpenSLR/83 is empty
@albertvillanova Yes. Since I introduced the broken config, I figured I should fix it too. I've got it working, but I'm struggling with one of the tests. I've started a PR so I/we can work through it.
## Describe the bug As the summary says, openslr / SLR83 / train is empty. The dataset returned after loading indicates there are **zero** rows. The correct number should be **17877**. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('openslr', 'SLR83') ``` ## Expected resul...
35
OpenSLR/83 is empty ## Describe the bug As the summary says, openslr / SLR83 / train is empty. The dataset returned after loading indicates there are **zero** rows. The correct number should be **17877**. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('openslr', 'SLR83') ``...
[ -0.1568451226, 0.2141989321, 0.0209850185, 0.1403115988, 0.0730733871, 0.0183356907, 0.5944601893, 0.3523910344, 0.0033563382, 0.3104201257, -0.045072481, 0.448433876, -0.2578974366, -0.0301667042, 0.1375690848, -0.0961025357, -0.0060506798, 0.0827716142, 0.0191130154, -0.12548...
https://github.com/huggingface/datasets/issues/3167
bookcorpusopen no longer works
I tried with the latest changes from #3280 on Google Colab and it worked fine :) We'll do a new release soon; in the meantime you can use the updated version with: ```python load_dataset("bookcorpusopen", revision="master") ```
## Describe the bug When using the latest version of datasets (1.14.0), I cannot use the `bookcorpusopen` dataset. The process always blocks around `9924 examples [00:06, 1439.61 examples/s]` when preparing the dataset. I also noticed that after half an hour the process is automatically killed because of the RAM usa...
36
bookcorpusopen no longer works ## Describe the bug When using the latest version of datasets (1.14.0), I cannot use the `bookcorpusopen` dataset. The process always blocks around `9924 examples [00:06, 1439.61 examples/s]` when preparing the dataset. I also noticed that after half an hour the process is automatica...
[ -0.3898615241, 0.012308158, 0.0758037195, 0.2028924674, -0.0502228588, -0.2223530412, 0.3556592166, 0.0869098082, -0.2748003304, 0.1474598199, -0.1287825257, 0.3929781318, 0.1643617898, 0.3723466694, -0.0846742988, -0.0717373639, 0.1279461384, -0.0915142745, 0.0168371294, 0.260...
https://github.com/huggingface/datasets/issues/3164
Add raw data files to the Hub with GitHub LFS for canonical dataset
Hi @zlucia, I would actually suggest hosting the dataset as a huggingface.co-hosted dataset. The only difference with a "canonical"/legacy dataset is that it's nested under an organization (here `stanford` or `stanfordnlp` for instance – completely up to you) but then you can upload your data using git-lfs (unlike "...
I'm interested in sharing the CaseHOLD dataset (https://arxiv.org/abs/2104.08671) as a canonical dataset on the HuggingFace Hub and would like to add the raw data files to the Hub with GitHub LFS, since it seems like a more sustainable long term storage solution, compared to other storage solutions available to my team...
74
Add raw data files to the Hub with GitHub LFS for canonical dataset I'm interested in sharing the CaseHOLD dataset (https://arxiv.org/abs/2104.08671) as a canonical dataset on the HuggingFace Hub and would like to add the raw data files to the Hub with GitHub LFS, since it seems like a more sustainable long term stor...
[ -0.2665250301, -0.1827455014, -0.006252788, 0.0318805352, -0.0109189907, 0.1445143521, -0.0868278518, 0.3903291821, 0.2624931037, -0.0303213447, -0.2295878083, -0.034314815, -0.2536515296, 0.3241304755, 0.1303447038, 0.2218011022, 0.0054547531, 0.079792276, 0.1123955995, -0.039...
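The upload flow described above is the standard git-lfs one; a sketch with a placeholder namespace and file paths (adjust both to your repo; this assumes the dataset repository was already created on huggingface.co):

```
git lfs install
git clone https://huggingface.co/datasets/<namespace>/casehold
cd casehold
cp /path/to/raw_data/*.csv .
git add .
git commit -m "Add raw data files"
git push
```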
https://github.com/huggingface/datasets/issues/3164
Add raw data files to the Hub with GitHub LFS for canonical dataset
Hi @zlucia, As @julien-c pointed out, the way to store/host raw data files in our Hub is by using what we call "community" datasets: - either at your personal namespace: `load_dataset("zlucia/casehold")` - or at an organization namespace: for example, if you create the organization `reglab`, then `load_dataset("re...
I'm interested in sharing the CaseHOLD dataset (https://arxiv.org/abs/2104.08671) as a canonical dataset on the HuggingFace Hub and would like to add the raw data files to the Hub with GitHub LFS, since it seems like a more sustainable long term storage solution, compared to other storage solutions available to my team...
222
Add raw data files to the Hub with GitHub LFS for canonical dataset I'm interested in sharing the CaseHOLD dataset (https://arxiv.org/abs/2104.08671) as a canonical dataset on the HuggingFace Hub and would like to add the raw data files to the Hub with GitHub LFS, since it seems like a more sustainable long term stor...
[ -0.2289415151, -0.0566736236, -0.0150377396, 0.0101023288, 0.0391702466, 0.0713321269, -0.012348704, 0.3533783257, 0.3484467566, 0.0711124986, -0.3602598906, -0.0362024754, -0.2267824113, 0.2558308542, 0.0820699781, 0.263731271, -0.0132170971, 0.0790139139, 0.2102044672, -0.045...
https://github.com/huggingface/datasets/issues/3164
Add raw data files to the Hub with GitHub LFS for canonical dataset
Ah I see, I think I was unclear about whether there were benefits to uploading a canonical dataset vs. a community-provided dataset. Thanks for clarifying. I'll see if we want to create an organization namespace; otherwise, I will upload the dataset under my personal namespace.
I'm interested in sharing the CaseHOLD dataset (https://arxiv.org/abs/2104.08671) as a canonical dataset on the HuggingFace Hub and would like to add the raw data files to the Hub with GitHub LFS, since it seems like a more sustainable long term storage solution, compared to other storage solutions available to my team...
45
Add raw data files to the Hub with GitHub LFS for canonical dataset I'm interested in sharing the CaseHOLD dataset (https://arxiv.org/abs/2104.08671) as a canonical dataset on the HuggingFace Hub and would like to add the raw data files to the Hub with GitHub LFS, since it seems like a more sustainable long term stor...
[ -0.3026182353, -0.1087173596, -0.0212863199, 0.0045907376, -0.0189415086, 0.0792324245, 0.0541193113, 0.3037517965, 0.2015293241, 0.1099400073, -0.1391483694, -0.0599634163, -0.2016545534, 0.2130594701, 0.083874315, 0.1610624939, 0.0275533237, 0.0485899672, 0.0830217078, -0.082...
https://github.com/huggingface/datasets/issues/3162
`datasets-cli test` should work with datasets without scripts
> It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). > > I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsT...
It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsTest/t...
75
`datasets-cli test` should work with datasets without scripts It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (ht...
[ -0.5560204983, -0.141455099, -0.1553225368, -0.0454636067, -0.1118019223, 0.0510702282, 0.491317302, 0.3185858727, 0.4026774168, 0.1730815619, -0.0837669447, 0.0782284811, -0.1506565064, 0.3814931214, 0.0290266071, 0.197803393, -0.1345370561, 0.195657894, -0.0688027143, 0.04412...
https://github.com/huggingface/datasets/issues/3162
`datasets-cli test` should work with datasets without scripts
Hi! You can run the command if you download the repository ``` git clone https://huggingface.co/datasets/huggingface/DataMeasurementsTest ``` and run the command ``` datasets-cli test DataMeasurementsTest/DataMeasurementsTest.py ``` (though on my side it doesn't manage to download the data since the dataset ...
It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsTest/t...
43
`datasets-cli test` should work with datasets without scripts It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (ht...
[ -0.5413661599, -0.1230086237, -0.0824000537, -0.0086422274, -0.1236352697, 0.2065104097, 0.3399679065, 0.4094349146, 0.4431855381, 0.0522724837, -0.1899446398, 0.0832232535, -0.2094290704, 0.3715489805, 0.1386322379, 0.1913192123, -0.1385655254, 0.1844177842, -0.042357225, 0.03...
https://github.com/huggingface/datasets/issues/3162
`datasets-cli test` should work with datasets without scripts
> Hi ! You can run the command if you download the repository > > ``` > git clone https://huggingface.co/datasets/huggingface/DataMeasurementsTest > ``` > > and run the command > > ``` > datasets-cli test DataMeasurementsTest/DataMeasurementsTest.py > ``` > > (though on my side it doesn't manage to down...
It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsTest/t...
80
`datasets-cli test` should work with datasets without scripts It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (ht...
[ -0.479364872, -0.0350034609, -0.0811262652, 0.0051559978, -0.1159430221, 0.2101321816, 0.3284415007, 0.4366992414, 0.3811271787, -0.0298913866, -0.1913800538, -0.0401114896, -0.1729565263, 0.2411369383, 0.1206410006, 0.1749818772, -0.135027349, 0.1931911409, 0.0534978993, -0.02...
https://github.com/huggingface/datasets/issues/3162
`datasets-cli test` should work with datasets without scripts
I think it's become private, but feel free to try with any other dataset like `lhoestq/test` for example at `https://huggingface.co/datasets/lhoestq/test`
It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsTest/t...
20
`datasets-cli test` should work with datasets without scripts It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (ht...
[ -0.6349616647, -0.015622884, -0.1090960577, -0.0938599855, -0.1955115199, 0.1658551991, 0.4447169006, 0.418176353, 0.5121309161, 0.0517273992, -0.1489698589, 0.1513028592, -0.2129505575, 0.3969887495, 0.1485297382, 0.2338594943, -0.1907198727, 0.1554764658, -0.0101322839, 0.042...
https://github.com/huggingface/datasets/issues/3162
`datasets-cli test` should work with datasets without scripts
> I think it's become private, but feel free to try with any other dataset like `lhoestq/test` for example at `https://huggingface.co/datasets/lhoestq/test` Your example repo and this page `https://huggingface.co/docs/datasets/add_dataset.html` helped me solve it. Thanks a lot!
It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsTest/t...
35
`datasets-cli test` should work with datasets without scripts It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (ht...
[ -0.5352520943, -0.0844733566, -0.0369245596, -0.0430360436, -0.0821811184, 0.1424030811, 0.3522939682, 0.350266397, 0.471251756, 0.0475401841, -0.2139712125, 0.1264723837, -0.1820226461, 0.4219508171, 0.1478932202, 0.1489329487, -0.1257575899, 0.1198425442, -0.0355072618, 0.007...
https://github.com/huggingface/datasets/issues/3156
Rouge and Meteor for multiple references
Hi @avinashsai, currently multiple references are not supported. However, we could add a `multiref` config to fix that. When working with multiple references, we can accumulate them by taking either the average or the best score. Would you like to work on that?
Hi, Currently ROUGE and METEOR support only single references. Can we use these metrics to compute scores for multiple references?
44
Rouge and Meteor for multiple references Hi, Currently ROUGE and METEOR support only single references. Can we use these metrics to compute scores for multiple references? Hi @avinashsai, currently multiple references are not supported. However, we could add a `multiref` config to fix that. When working with mu...
[ -0.2006472051, -0.3965416253, -0.0614076443, 0.3612068594, 0.1863978207, -0.3232403696, 0.11220631, -0.1550199836, 0.0509825796, 0.2434674501, -0.334225148, -0.0029786187, -0.109228164, -0.3390773833, -0.1908655167, -0.1946985871, -0.0195315182, -0.1624715924, 0.2199575901, 0.0...
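Until such a `multiref` config exists, the accumulate-by-best strategy mentioned above can be done outside the metric. A sketch assuming the `rouge` metric returns `rouge_score`-style `AggregateScore` objects (with `.mid.fmeasure`), as the `datasets` ROUGE metric did at the time; the sentences are made up:

```python
from datasets import load_metric

rouge = load_metric("rouge")

def best_rouge_l(prediction, references):
    # score the prediction against each reference separately, keep the best
    scores = [
        rouge.compute(predictions=[prediction], references=[ref])["rougeL"].mid.fmeasure
        for ref in references
    ]
    return max(scores)

print(best_rouge_l("the cat sat", ["the cat sat on the mat", "a dog barked"]))
```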
https://github.com/huggingface/datasets/issues/3155
Illegal instruction (core dumped) at datasets import
It seems to be an issue with how conda-forge is building the binaries. It works on some machines, but not a machine with AMD Opteron 8384 processors.
## Describe the bug I install datasets using conda and when I import datasets I get: "Illegal instruction (core dumped)" ## Steps to reproduce the bug ``` conda create --prefix path/to/env conda activate path/to/env conda install -c huggingface -c conda-forge datasets # exits with output "Illegal instruction...
27
Illegal instruction (core dumped) at datasets import ## Describe the bug I install datasets using conda and when I import datasets I get: "Illegal instruction (core dumped)" ## Steps to reproduce the bug ``` conda create --prefix path/to/env conda activate path/to/env conda install -c huggingface -c conda-f...
[ -0.12203262, -0.1705197543, -0.1097889096, 0.3383170366, 0.0886828899, 0.1632141322, 0.3851004243, 0.0958758891, -0.027489759, -0.1795628965, -0.2659413517, 0.227496475, -0.0341779515, -0.0706300363, 0.0804519951, 0.398709327, 0.4395806491, -0.1147561818, -0.4592989385, -0.2523...
https://github.com/huggingface/datasets/issues/3154
Sacrebleu unexpected behaviour/requirement for data format
Hi @BramVanroy! Good question. This project relies on PyArrow (tables) to store data too big to fit in RAM. In the case of metrics, this means that the number of predictions and references has to match to form a table. That's why your example throws an error even though it matches the schema: ```python refs = [...
## Describe the bug When comparing with the original `sacrebleu` implementation, the `datasets` implementation does some strange things that I do not quite understand. This issue was triggered when I was trying to implement TER and found the datasets implementation of BLEU [here](https://github.com/huggingface/dataset...
197
Sacrebleu unexpected behaviour/requirement for data format ## Describe the bug When comparing with the original `sacrebleu` implementation, the `datasets` implementation does some strange things that I do not quite understand. This issue was triggered when I was trying to implement TER and found the datasets impleme...
[ 0.0317308679, -0.2080372274, 0.0581464209, 0.1247445419, 0.374968797, -0.0902199149, 0.1705552936, 0.1604259908, -0.3364595175, 0.0518428683, -0.0757987946, 0.2882940471, -0.0812800378, 0.2799239457, 0.0603367947, -0.1692025214, 0.2169518769, 0.2796362638, 0.2296099812, -0.0013...
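Concretely, the table constraint explained above means references must be given per prediction rather than per reference stream; transposing the original sacrebleu layout is enough. A sketch with made-up sentences:

```python
from datasets import load_metric

# original sacrebleu layout: one list per reference *stream*
ref_streams = [
    ["The dog bit the man.", "It was not unexpected."],
    ["The dog had bit the man.", "No one was surprised."],
]

# datasets layout: one list of references per *prediction* (the transpose),
# so each table row holds a prediction alongside its own references
refs_per_pred = [list(refs) for refs in zip(*ref_streams)]

sacrebleu = load_metric("sacrebleu")
predictions = ["The dog bit the man.", "It wasn't surprising."]
print(sacrebleu.compute(predictions=predictions, references=refs_per_pred)["score"])
```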
https://github.com/huggingface/datasets/issues/3154
Sacrebleu unexpected behaviour/requirement for data format
Thanks, that makes sense. It is a bit unfortunate, because it may be confusing to users that the input format is suddenly different from what they may expect from the underlying library/metric. But it is understandable given how `datasets` works!
## Describe the bug When comparing with the original `sacrebleu` implementation, the `datasets` implementation does some strange things that I do not quite understand. This issue was triggered when I was trying to implement TER and found the datasets implementation of BLEU [here](https://github.com/huggingface/dataset...
41
Sacrebleu unexpected behaviour/requirement for data format ## Describe the bug When comparing with the original `sacrebleu` implementation, the `datasets` implementation does some strange things that I do not quite understand. This issue was triggered when I was trying to implement TER and found the datasets impleme...
[ 0.0317308679, -0.2080372274, 0.0581464209, 0.1247445419, 0.374968797, -0.0902199149, 0.1705552936, 0.1604259908, -0.3364595175, 0.0518428683, -0.0757987946, 0.2882940471, -0.0812800378, 0.2799239457, 0.0603367947, -0.1692025214, 0.2169518769, 0.2796362638, 0.2296099812, -0.0013...
https://github.com/huggingface/datasets/issues/3148
Streaming with num_workers != 0
I can confirm that I was able to reproduce the bug. This seems odd given that #3423 reports duplicate data retrieval when `num_workers` and `streaming` are used together, which is obviously different from what is reported here.
## Describe the bug When using dataset streaming with the PyTorch DataLoader, setting num_workers to anything other than 0 causes the code to freeze forever before yielding the first batch. The code owner is likely @lhoestq ## Steps to reproduce the bug For your convenience, we've prepped a colab notebook th...
37
Streaming with num_workers != 0 ## Describe the bug When using dataset streaming with the PyTorch DataLoader, setting num_workers to anything other than 0 causes the code to freeze forever before yielding the first batch. The code owner is likely @lhoestq ## Steps to reproduce the bug For your convenience,...
[ -0.3226699531, 0.0737845227, -0.0116667831, 0.2070756406, -0.1044991091, -0.3222557902, 0.6223980188, 0.1405929178, -0.0384612866, 0.38090837, 0.1035974026, 0.3658747971, -0.4546128213, -0.1009321436, -0.0124416975, 0.1442426592, -0.0811087936, 0.109146595, -0.1322443038, 0.238...
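A minimal reproduction matching the report above; the dataset name is an arbitrary streamable example, and recent `datasets` versions expose `IterableDataset` as a PyTorch-compatible iterable:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

# any streamable dataset works here; this config is just an example
ds = load_dataset("oscar", "unshuffled_deduplicated_nl", split="train", streaming=True)

# with num_workers=0 this yields immediately; per the report above, with
# num_workers != 0 it hangs forever before the first batch
loader = DataLoader(ds, num_workers=2, batch_size=1)
print(next(iter(loader)))
```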
https://github.com/huggingface/datasets/issues/3148
Streaming with num_workers != 0
Any update? A possible solution is to have multiple Arrow files as shards, and handle them the way webdatasets does. ![image](https://user-images.githubusercontent.com/11533479/148176637-72746b2c-c122-47aa-bbfe-224b13ee9a71.png) PyTorch's new dataset RFC supports sharding now, which may help avoid duplicate...
## Describe the bug When using dataset streaming with the PyTorch DataLoader, setting num_workers to anything other than 0 causes the code to freeze forever before yielding the first batch. The code owner is likely @lhoestq ## Steps to reproduce the bug For your convenience, we've prepped a colab notebook th...
39
Streaming with num_workers != 0 ## Describe the bug When using dataset streaming with the PyTorch DataLoader, setting num_workers to anything other than 0 causes the code to freeze forever before yielding the first batch. The code owner is likely @lhoestq ## Steps to reproduce the bug For your convenience,...
[ -0.4615967572, 0.0840075165, -0.0917685553, 0.1433269083, -0.12776272, -0.2838304639, 0.6224715114, 0.2356567979, 0.0436166562, 0.3701071143, 0.2003808171, 0.3652407527, -0.5244142413, -0.1296166778, -0.1155706421, 0.1345635206, -0.1440053284, 0.213054195, -0.1751621515, 0.1614...
https://github.com/huggingface/datasets/issues/3148
Streaming with num_workers != 0
Hi! Thanks for the insights :) Note that in streaming mode there are usually no Arrow files. The data is streamed from TAR, ZIP, text, etc. files directly from the web. Though for sharded datasets we can definitely adopt a similar strategy!
## Describe the bug When using dataset streaming with the PyTorch DataLoader, setting num_workers to anything other than 0 causes the code to freeze forever before yielding the first batch. The code owner is likely @lhoestq ## Steps to reproduce the bug For your convenience, we've prepped a colab notebook th...
43
Streaming with num_workers != 0 ## Describe the bug When using dataset streaming with the PyTorch DataLoader, setting num_workers to anything other than 0 causes the code to freeze forever before yielding the first batch. The code owner is likely @lhoestq ## Steps to reproduce the bug For your convenience,...
[ -0.4143063426, 0.0090087662, -0.0575312264, 0.090479508, -0.0889115334, -0.2557557821, 0.5883558989, 0.1792391986, -0.0135440882, 0.4365207553, 0.1794317365, 0.3351474404, -0.4951827824, -0.0813726112, -0.0375346355, 0.0914546698, -0.1148619503, 0.1715031117, -0.1394737661, 0.1...
https://github.com/huggingface/datasets/issues/3145
[when Image type will exist] provide a way to get the data as binary + filename
@severo I'll keep that in mind. You can track progress on the Image feature in #3163 (still in the early stage).
**Is your feature request related to a problem? Please describe.** When a dataset cell contains a value of type Image (be it from a remote URL, an Array2D/3D, or any other way to represent images), I want to be able to write the image to the disk, with the correct filename, and optionally to know its mimetype, in or...
21
[when Image type will exist] provide a way to get the data as binary + filename **Is your feature request related to a problem? Please describe.** When a dataset cell contains a value of type Image (be it from a remote URL, an Array2D/3D, or any other way to represent images), I want to be able to write the image ...
[ 0.0579433106, -0.104809314, -0.0994670019, 0.0846774206, 0.2207916379, -0.2173030823, 0.155513525, 0.4137360156, -0.2236264497, 0.3140285313, 0.1264708489, -0.0210624188, -0.344050318, 0.1960532814, 0.2407696694, -0.0566723123, 0.0115617588, 0.3099686205, 0.0601926036, -0.13375...
https://github.com/huggingface/datasets/issues/3145
[when Image type will exist] provide a way to get the data as binary + filename
Hi! As discussed with @severo offline, it looks like the dataset viewer already supports reading PIL images, so maybe the dataset viewer doesn't need to disable decoding after all.
**Is your feature request related to a problem? Please describe.** When a dataset cell contains a value of type Image (be it from a remote URL, an Array2D/3D, or any other way to represent images), I want to be able to write the image to the disk, with the correct filename, and optionally to know its mimetype, in or...
30
[when Image type will exist] provide a way to get the data as binary + filename **Is your feature request related to a problem? Please describe.** When a dataset cell contains a value of type Image (be it from a remote URL, an Array2D/3D, or any other way to represent images), I want to be able to write the image ...
[ -0.1724829525, 0.0429175198, -0.0778839737, 0.2818343341, 0.280492425, -0.0878628194, 0.0411132835, 0.3518592417, -0.2673685551, 0.3344699442, -0.0756199807, 0.1827455908, -0.2529495656, 0.2509998083, 0.0176695026, -0.1057252809, -0.0495731197, 0.366279453, 0.1182622761, -0.050...
https://github.com/huggingface/datasets/issues/3142
Provide a way to write a streamed dataset to the disk
Yes, I agree this feature is much needed. We could do something similar to what TF does (https://www.tensorflow.org/api_docs/python/tf/data/Dataset#cache). Ideally, if the entire streamed dataset is consumed/cached, the generated cache should be reusable for the Arrow dataset.
**Is your feature request related to a problem? Please describe.** The streaming mode makes it possible to get the first 100 rows of a dataset very quickly. But it does not cache the answer, so a later call to get the same 100 rows will send a request to the server again and again. **Describe the solution you'd like** ...
36
Provide a way to write a streamed dataset to the disk **Is your feature request related to a problem? Please describe.** The streaming mode makes it possible to get the first 100 rows of a dataset very quickly. But it does not cache the answer, so a later call to get the same 100 rows will send a request to the server ag...
[ -0.2683077157, -0.317753166, -0.132739231, -0.0794554353, -0.0398471616, -0.0026228749, 0.1889887452, 0.4413467348, 0.2273308188, 0.2164762765, 0.0959095657, 0.225363642, -0.0826942697, 0.0931221172, 0.3210359812, -0.1213711873, -0.1736006141, 0.3577262461, -0.0479811653, -0.18...
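Until such a cache exists, a manual workaround is to materialize the streamed head once and reload it locally. A sketch using JSON lines; the dataset choice, file name, and row count are arbitrary:

```python
import json
from itertools import islice

from datasets import load_dataset

streamed = load_dataset("oscar", "unshuffled_deduplicated_nl", split="train", streaming=True)

# write the first 100 rows once; later runs read the local file,
# so no new requests go to the server
with open("head.jsonl", "w") as f:
    for row in islice(streamed, 100):
        f.write(json.dumps(row) + "\n")

local = load_dataset("json", data_files="head.jsonl", split="train")
```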
https://github.com/huggingface/datasets/issues/3135
Make inspect.get_dataset_config_names always return a non-empty list of configs
Hi @severo, I guess this issue requests not only being able to access the configuration names (via `inspect.get_dataset_config_names`), but the configurations themselves as well (i.e., you use a name to get the corresponding configuration afterwards, maybe using `builder_cls.builder_configs`). Is this right?
**Is your feature request related to a problem? Please describe.** Currently, some datasets have a configuration, while others don't. It would be simpler for the user to always have configuration names to refer to **Describe the solution you'd like** In that sense inspect.get_dataset_config_names should always...
43
Make inspect.get_dataset_config_names always return a non-empty list of configs **Is your feature request related to a problem? Please describe.** Currently, some datasets have a configuration, while others don't. It would be simpler for the user to always have configuration names to refer to **Describe the sol...
[ -0.1959385723, -0.0393929146, -0.0867162049, 0.2019756436, 0.3481401801, 0.1019185558, 0.2270127833, 0.461692065, 0.0720916912, 0.6261889935, -0.0640386716, 0.3278577328, -0.0607229322, 0.2241173536, -0.0723211616, 0.2225236446, -0.2583372295, 0.3215778172, -0.010006574, 0.0468...
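For reference, the function under discussion is already importable from `datasets`; the requested change concerns what it returns when a dataset defines no configs. A usage sketch (the fallback pattern is what the issue wants to make unnecessary):

```python
from datasets import get_dataset_config_names

# datasets with named configs return them all
print(get_dataset_config_names("glue"))  # ['cola', 'sst2', 'mrpc', ...]

# today a dataset without explicit configs may require special-casing;
# the issue asks for a default config name so callers can always iterate
configs = get_dataset_config_names("squad") or ["default"]
print(configs)
```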
https://github.com/huggingface/datasets/issues/3135
Make inspect.get_dataset_config_names always return a non-empty list of configs
Yes, maybe the issue could be reformulated. As a user, I want to avoid having to manage special cases: - I want to be able to get the names of a dataset's configs, and use them in the rest of the API (get the data, get the split names, etc). - I don't want to have to manage datasets with named configs (`glue`) differ...
**Is your feature request related to a problem? Please describe.** Currently, some datasets have a configuration, while others don't. It would be simpler for the user to always have configuration names to refer to **Describe the solution you'd like** In that sense inspect.get_dataset_config_names should always...
71
Make inspect.get_dataset_config_names always return a non-empty list of configs **Is your feature request related to a problem? Please describe.** Currently, some datasets have a configuration, while others don't. It would be simpler for the user to always have configuration names to refer to **Describe the sol...
[ -0.1433714628, -0.0727245882, -0.0968915448, 0.116970174, 0.2310522199, 0.1749294549, 0.1810453385, 0.464779526, 0.2811773121, 0.4464249015, -0.1508253217, 0.3579241931, -0.0074269748, 0.2346330583, -0.1006172225, 0.1920578778, -0.2181450129, 0.3768833578, 0.0402729101, -0.0437...
https://github.com/huggingface/datasets/issues/3134
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py
Hi, Did you try to run the code multiple times (GitHub URLs can be down sometimes for various reasons)? I can access `https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py`, so this code is working without an error on my side. Additionally, can you please run the `datasets-cli env`...
datasets version: 1.12.1 `metric = datasets.load_metric('rouge')` The error: > ConnectionError Traceback (most recent call last) > <ipython-input-3-dd10a0c5212f> in <module> > ----> 1 metric = datasets.load_metric('rouge') > > /usr/local/lib/python3.6/dist-packages/datasets/load....
58
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py datasets version: 1.12.1 `metric = datasets.load_metric('rouge')` The error: > ConnectionError Traceback (most recent call last) > <ipython-input-3-dd10a0c5212f> in <module> > ---->...
[ -0.4028956294, -0.0917159915, -0.1140874997, 0.2032687962, 0.2335222661, -0.0791536048, 0.0977335945, 0.401229769, 0.1732943803, 0.2921183407, -0.3156027794, -0.0292547848, 0.0664939284, 0.0566055626, 0.1343690902, -0.0982899889, -0.196165517, -0.0955754295, -0.26176548, 0.1735...
https://github.com/huggingface/datasets/issues/3134
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py
Same issue when running `metric = datasets.load_metric("accuracy")`. Error info is: ``` metric = datasets.load_metric("accuracy") Traceback (most recent call last): File "<ipython-input-2-d25db38b26c5>", line 1, in <module> metric = datasets.load_metric("accuracy") File "D:\anaconda3\lib\site-package...
datasets version: 1.12.1 `metric = datasets.load_metric('rouge')` The error: > ConnectionError Traceback (most recent call last) > <ipython-input-3-dd10a0c5212f> in <module> > ----> 1 metric = datasets.load_metric('rouge') > > /usr/local/lib/python3.6/dist-packages/datasets/load....
103
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py datasets version: 1.12.1 `metric = datasets.load_metric('rouge')` The error: > ConnectionError Traceback (most recent call last) > <ipython-input-3-dd10a0c5212f> in <module> > ---->...
[ -0.4028956294, -0.0917159915, -0.1140874997, 0.2032687962, 0.2335222661, -0.0791536048, 0.0977335945, 0.401229769, 0.1732943803, 0.2921183407, -0.3156027794, -0.0292547848, 0.0664939284, 0.0566055626, 0.1343690902, -0.0982899889, -0.196165517, -0.0955754295, -0.26176548, 0.1735...
https://github.com/huggingface/datasets/issues/3134
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py
This issue can be worked around by adding the equivalent `accuracy.py` locally: change `metric = datasets.load_metric("accuracy")` to `metric = datasets.load_metric(path = "./accuracy.py")`. Copy `accuracy.py` from the browser at [accuracy.py](https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metric...
datasets version: 1.12.1 `metric = datasets.load_metric('rouge')` The error: > ConnectionError Traceback (most recent call last) > <ipython-input-3-dd10a0c5212f> in <module> > ----> 1 metric = datasets.load_metric('rouge') > > /usr/local/lib/python3.6/dist-packages/datasets/load....
31
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py datasets version: 1.12.1 `metric = datasets.load_metric('rouge')` The error: > ConnectionError Traceback (most recent call last) > <ipython-input-3-dd10a0c5212f> in <module> > ---->...
[ -0.4028956294, -0.0917159915, -0.1140874997, 0.2032687962, 0.2335222661, -0.0791536048, 0.0977335945, 0.401229769, 0.1732943803, 0.2921183407, -0.3156027794, -0.0292547848, 0.0664939284, 0.0566055626, 0.1343690902, -0.0982899889, -0.196165517, -0.0955754295, -0.26176548, 0.1735...
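The local-file workaround above can be scripted; a sketch where the URL follows the same `metrics/<name>/<name>.py` pattern as the rouge URL in the thread (retry and error handling omitted):

```python
import urllib.request

import datasets

# fetch the metric script once, then load it from the local path
url = "https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/accuracy/accuracy.py"
urllib.request.urlretrieve(url, "accuracy.py")

# load_metric also accepts a local path to a metric script
metric = datasets.load_metric("./accuracy.py")
```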
https://github.com/huggingface/datasets/issues/3127
datasets-cli: conversion of a tfds dataset to a huggingface one.
Hi, the MNIST dataset is already available on the Hub. You can use it as follows: ```python import datasets dataset_dict = datasets.load_dataset("mnist") ``` As for the conversion of TFDS datasets to HF datasets, we will be working on it in the coming months, so stay tuned.
### Discussed in https://github.com/huggingface/datasets/discussions/3079 <div type='discussions-op-text'> <sup>Originally posted by **vitalyshalumov** October 14, 2021</sup> I'm trying to convert a tfds dataset to a huggingface one. I've tried: 1. datasets-cli convert --tfds_path ~/tensorflow_datas...
46
datasets-cli: conversion of a tfds dataset to a huggingface one. ### Discussed in https://github.com/huggingface/datasets/discussions/3079 <div type='discussions-op-text'> <sup>Originally posted by **vitalyshalumov** October 14, 2021</sup> I'm trying to convert a tfds dataset to a huggingface one. I've trie...
[ -0.1648913324, -0.4112803638, -0.0158713572, 0.0792162716, 0.266931802, 0.225569725, -0.0300498419, 0.3266828954, 0.1026157737, 0.1285151541, -0.4481951296, -0.0443409681, -0.2346507162, 0.2429739684, 0.2353371978, -0.0526280217, 0.2293661535, 0.1147148013, -0.2987787127, -0.17...
https://github.com/huggingface/datasets/issues/3126
"arabic_billion_words" dataset does not create the full dataset
Thanks for reporting, @vitalyshalumov. Apparently the script to parse the data has a bug, and does not generate the entire dataset. I'm fixing it.
## Describe the bug When running: raw_dataset = load_dataset('arabic_billion_words','Alittihad') the correct dataset file is pulled from the url. But, the generated dataset includes just a small portion of the data included in the file. This is true for all other portions of the "arabic_billion_words" dataset ('A...
24
"arabic_billion_words" dataset does not create the full dataset ## Describe the bug When running: raw_dataset = load_dataset('arabic_billion_words','Alittihad') the correct dataset file is pulled from the url. But, the generated dataset includes just a small portion of the data included in the file. This is tru...
[ -0.028614644, 0.0881006867, -0.039816007, 0.4470804632, -0.0541644618, 0.1820729822, 0.0881036222, 0.4025402367, -0.0948741212, 0.016380338, 0.2197008282, -0.1283022165, 0.2180960923, 0.1541102678, 0.0881133527, 0.0227925815, 0.1452276111, -0.0231749788, -0.1374328285, -0.17561...
https://github.com/huggingface/datasets/issues/3123
Segmentation fault when loading datasets from file
Hi ! I created an issue on Arrow's JIRA after making a minimum reproducible example https://issues.apache.org/jira/browse/ARROW-14439 ```python import io import pyarrow.json as paj batch = b'{"a": [], "b": 1}\n{"b": 1}' block_size = 12 paj.read_json( io.BytesIO(batch), read_options=paj.ReadOptions...
## Describe the bug Custom dataset loading sometimes segfaults and kills the process if chunks contain a variety of features. ## Steps to reproduce the bug Download an example file: ``` wget https://gist.githubusercontent.com/TevenLeScao/11e2184394b3fa47d693de2550942c6b/raw/4232704d08fbfcaf93e5b51def9e50515076...
58
Segmentation fault when loading datasets from file ## Describe the bug Custom dataset loading sometimes segfaults and kills the process if chunks contain a variety of features. ## Steps to reproduce the bug Download an example file: ``` wget https://gist.githubusercontent.com/TevenLeScao/11e2184394b3fa47d693...
[ -0.221606046, 0.0462876186, -0.0461312346, 0.3980239332, 0.2773248553, 0.1302967519, 0.400708884, 0.4724467397, -0.2625967264, -0.0488691367, 0.0218499918, 0.5417176485, -0.0916597918, -0.1903084815, -0.0640659928, -0.1771654785, 0.0981779993, 0.0973746106, -0.0141672334, -0.01...
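A hedged sketch of one way to sidestep the crash on pyarrow 5.x, based on the reproducer above: keep the whole payload in a single parse block so the missing list field is inferred consistently (this reading of the trigger is an assumption drawn from the `block_size=12` reproducer):
```python
import io

import pyarrow.json as paj

batch = b'{"a": [], "b": 1}\n{"b": 1}'
# A block_size larger than the payload keeps both lines in one block, avoiding
# the cross-block type inference that appeared to trigger the segfault.
table = paj.read_json(
    io.BytesIO(batch),
    read_options=paj.ReadOptions(block_size=len(batch) + 1),
)
print(table)
```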
https://github.com/huggingface/datasets/issues/3123
Segmentation fault when loading datasets from file
The issue has been fixed in pyarrow 6.0.0, please update pyarrow :) The issue was due to missing fields in the JSON data of type list. Now it's working fine and missing list fields are replaced with empty lists
## Describe the bug Custom dataset loading sometimes segfaults and kills the process if chunks contain a variety of features. ## Steps to reproduce the bug Download an example file: ``` wget https://gist.githubusercontent.com/TevenLeScao/11e2184394b3fa47d693de2550942c6b/raw/4232704d08fbfcaf93e5b51def9e50515076...
39
Segmentation fault when loading datasets from file ## Describe the bug Custom dataset loading sometimes segfaults and kills the process if chunks contain a variety of features. ## Steps to reproduce the bug Download an example file: ``` wget https://gist.githubusercontent.com/TevenLeScao/11e2184394b3fa47d693...
[ -0.2918098569, 0.1214295179, -0.0827052891, 0.3285434246, 0.2168405354, 0.1759035587, 0.4262994826, 0.450145781, -0.116876334, -0.035681136, -0.0088388883, 0.4317319393, -0.07351996, -0.1450857669, 0.043353945, -0.2367306799, 0.0988489091, 0.1945877969, 0.0211999211, -0.0470167...
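Since the fix landed upstream, a small guard can fail fast on affected pyarrow versions; a sketch, assuming the `packaging` helper is available:
```python
from packaging import version

import pyarrow

# The missing-list-field segfault was fixed in pyarrow 6.0.0 (ARROW-14439).
if version.parse(pyarrow.__version__) < version.parse("6.0.0"):
    raise RuntimeError("please upgrade: pip install -U 'pyarrow>=6.0.0'")
```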
https://github.com/huggingface/datasets/issues/3122
OSError with a custom dataset loading script
Hi, there is a difference in how the `data_dir` is zipped between the `classla/janes_tag` and the `classla/reldi_hr` dataset. After unzipping, for the former, the data files (`*.conllup`) are in the root directory (root -> data files), and for the latter, they are inside the `data` directory (root -> `data` -> data ...
## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost identical and they have the same directory struc...
71
OSError with a custom dataset loading script ## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost i...
[ -0.112895906, 0.2815282643, -0.0133344699, 0.3955747485, 0.2419006526, 0.1173843443, 0.4183782637, 0.3338419795, 0.3993054032, 0.1943427771, -0.2293032855, 0.3192374408, -0.2365900278, -0.0586514659, -0.0276410375, 0.0171539448, -0.114003703, 0.1235067323, 0.0152645279, 0.05382...
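To make the layout difference concrete, here is a sketch of repackaging the archive so the data files sit under a top-level `data/` directory like `classla/reldi_hr`; the source directory name is an assumption:
```python
import zipfile
from pathlib import Path

# Put every .conllup file under data/ inside the zip (root -> data -> files),
# mirroring the layout of the classla/reldi_hr archive described above.
with zipfile.ZipFile("janes_tag.zip", "w") as zf:
    for path in Path("conllup_files").glob("*.conllup"):
        zf.write(path, arcname=f"data/{path.name}")
```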
https://github.com/huggingface/datasets/issues/3122
OSError with a custom dataset loading script
Hi Mario, I had already tried that before, but it didn't work. I have now recreated the `classla/janes_tag` zip file so that it also contains the `data` directory, but I am still getting the same error.
## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost identical and they have the same directory struc...
36
OSError with a custom dataset loading script ## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost i...
[ -0.112895906, 0.2815282643, -0.0133344699, 0.3955747485, 0.2419006526, 0.1173843443, 0.4183782637, 0.3338419795, 0.3993054032, 0.1943427771, -0.2293032855, 0.3192374408, -0.2365900278, -0.0586514659, -0.0276410375, 0.0171539448, -0.114003703, 0.1235067323, 0.0152645279, 0.05382...
https://github.com/huggingface/datasets/issues/3122
OSError with a custom dataset loading script
Hi, I just tried to download the `classla/janes_tag` dataset, and this time the zip file is extracted correctly. However, the script is now throwing the IndexError, probably due to a bug in the `_generate_examples`. Let me know if you are still getting the same error.
## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost identical and they have the same directory struc...
45
OSError with a custom dataset loading script ## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost i...
[ -0.112895906, 0.2815282643, -0.0133344699, 0.3955747485, 0.2419006526, 0.1173843443, 0.4183782637, 0.3338419795, 0.3993054032, 0.1943427771, -0.2293032855, 0.3192374408, -0.2365900278, -0.0586514659, -0.0276410375, 0.0171539448, -0.114003703, 0.1235067323, 0.0152645279, 0.05382...
https://github.com/huggingface/datasets/issues/3122
OSError with a custom dataset loading script
Hi, could you try to download the dataset with a different `cache_dir` like so: ```python import datasets dataset = datasets.load_dataset('classla/janes_tag', split='validation', cache_dir="path/to/different/cache/dir") ``` If this works, then most likely the cached extracted data is causing issues. This data ...
## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost identical and they have the same directory struc...
84
OSError with a custom dataset loading script ## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost i...
[ -0.112895906, 0.2815282643, -0.0133344699, 0.3955747485, 0.2419006526, 0.1173843443, 0.4183782637, 0.3338419795, 0.3993054032, 0.1943427771, -0.2293032855, 0.3192374408, -0.2365900278, -0.0586514659, -0.0276410375, 0.0171539448, -0.114003703, 0.1235067323, 0.0152645279, 0.05382...
https://github.com/huggingface/datasets/issues/3122
OSError with a custom dataset loading script
Thank you, deleting the `~/.cache/huggingface/datasets/downloads/extracted` directory helped. However, I am still having problems. There was indeed a bug in the script that was throwing an `IndexError`, which I have now corrected (added the condition to skip the lines starting with '# text') and it is working locall...
## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost identical and they have the same directory struc...
117
OSError with a custom dataset loading script ## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost i...
[ -0.112895906, 0.2815282643, -0.0133344699, 0.3955747485, 0.2419006526, 0.1173843443, 0.4183782637, 0.3338419795, 0.3993054032, 0.1943427771, -0.2293032855, 0.3192374408, -0.2365900278, -0.0586514659, -0.0276410375, 0.0171539448, -0.114003703, 0.1235067323, 0.0152645279, 0.05382...
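A sketch of the cache-clearing step mentioned above, assuming the default cache location (adjust if `HF_DATASETS_CACHE` is set):
```python
import shutil
from pathlib import Path

# Remove cached extracted archives so `datasets` re-extracts the new zip.
extracted = Path.home() / ".cache" / "huggingface" / "datasets" / "downloads" / "extracted"
shutil.rmtree(extracted, ignore_errors=True)
```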
https://github.com/huggingface/datasets/issues/3122
OSError with a custom dataset loading script
Hi, Did some investigation. To fix the dataset script on the Hub, append the following labels to the `names` list of the `upos_tags` field: ```'INTJ NOUN', 'AUX PRON', 'PART ADV', 'PRON ADP', 'INTJ INTJ', 'VERB NOUN', 'NOUN AUX'```. This step is required to avoid an error due to missing labels in the followin...
## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost identical and they have the same directory struc...
84
OSError with a custom dataset loading script ## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost i...
[ -0.112895906, 0.2815282643, -0.0133344699, 0.3955747485, 0.2419006526, 0.1173843443, 0.4183782637, 0.3338419795, 0.3993054032, 0.1943427771, -0.2293032855, 0.3192374408, -0.2365900278, -0.0586514659, -0.0276410375, 0.0171539448, -0.114003703, 0.1235067323, 0.0152645279, 0.05382...
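A hedged sketch of how the suggested labels could be appended inside the dataset script; the placeholder base labels and the feature definition are assumptions, only the seven appended labels come from the comment above:
```python
import datasets

# Placeholder subset standing in for the script's existing upos_tags labels.
upos_names = ["ADJ", "ADP", "ADV", "AUX", "NOUN", "PRON", "VERB"]
# Labels the comment says must be appended to avoid missing-label errors.
upos_names += ["INTJ NOUN", "AUX PRON", "PART ADV", "PRON ADP",
               "INTJ INTJ", "VERB NOUN", "NOUN AUX"]
upos_tags = datasets.Sequence(datasets.ClassLabel(names=upos_names))
```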
https://github.com/huggingface/datasets/issues/3119
Add OpenSLR 83 - Crowdsourced high-quality UK and Ireland English Dialect speech
Ugh. The index files for SLR83 are CSV, not TSV. I need to add logic to process these index files.
## Adding a Dataset - **Name:** *openslr* - **Description:** *Data set which contains male and female recordings of English from various dialects of the UK and Ireland.* - **Paper:** *https://www.openslr.org/resources/83/about.html* - **Data:** *Eleven separate data files can be found via https://www.openslr.org/r...
20
Add OpenSLR 83 - Crowdsourced high-quality UK and Ireland English Dialect speech ## Adding a Dataset - **Name:** *openslr** - **Description:** *Data set which contains male and female recordings of English from various dialects of the UK and Ireland.* - **Paper:** *https://www.openslr.org/resources/83/about.html* ...
[ -0.0509535596, 0.233801648, -0.142274186, 0.104232505, -0.3136017919, 0.2464337945, -0.0554703474, 0.29288131, 0.5650348663, 0.2243560851, -0.1979470998, 0.151480481, -0.4145610034, 0.2901998758, -0.0463003851, 0.2556516826, 0.0738376826, 0.1669849008, 0.0678537115, -0.13436885...
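A sketch of the needed parsing change; the index file name is an assumption:
```python
import csv

# SLR83 index files are comma-separated, unlike the tab-separated index files
# of other OpenSLR corpora, so parse them with the csv module.
with open("line_index.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.reader(f, delimiter=","))
print(rows[:2])
```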
https://github.com/huggingface/datasets/issues/3114
load_from_disk in DatasetsDict/Dataset not working with PyArrowHDFS wrapper implementing fsspec.spec.AbstractFileSystem
Hi ! Can you try again with pyarrow 6.0.0 ? I think it includes some changes regarding filesystems compatibility with fsspec.
## Describe the bug Passing a PyArrowHDFS implementation of fsspec.spec.AbstractFileSystem (in the `fs` param required by `load_from_disk` methods in `DatasetDict` (in datasets_dict.py) and `Dataset` (in arrow_dataset.py) results in an error when calling the download method in the `fs` parameter. ## Steps to repr...
21
load_from_disk in DatasetsDict/Dataset not working with PyArrowHDFS wrapper implementing fsspec.spec.AbstractFileSystem ## Describe the bug Passing a PyArrowHDFS implementation of fsspec.spec.AbstractFileSystem (in the `fs` param required by `load_from_disk` methods in `DatasetDict` (in datasets_dict.py) and `Datase...
[ -0.3349553645, 0.2241356224, 0.102795437, 0.1352015883, 0.1960353553, -0.2252621353, 0.361558944, 0.0040491149, -0.0512572154, -0.102641426, -0.1220099181, 0.469068706, 0.1646988243, 0.0326150768, 0.1159635484, 0.0376404598, 0.2662323713, -0.0224406663, -0.0546423085, -0.003812...
https://github.com/huggingface/datasets/issues/3114
load_from_disk in DatasetsDict/Dataset not working with PyArrowHDFS wrapper implementing fsspec.spec.AbstractFileSystem
Hi @lhoestq! I ended up using `fsspec.implementations.arrow.HadoopFileSystem` which doesn't have the problem I described with pyarrow 5.0.0. I'll try again with `PyArrowHDFS` once I update arrow to 6.0.0. Thanks!
## Describe the bug Passing a PyArrowHDFS implementation of fsspec.spec.AbstractFileSystem (in the `fs` param required by `load_from_disk` methods in `DatasetDict` (in datasets_dict.py) and `Dataset` (in arrow_dataset.py) results in an error when calling the download method in the `fs` parameter. ## Steps to repr...
29
load_from_disk in DatasetsDict/Dataset not working with PyArrowHDFS wrapper implementing fsspec.spec.AbstractFileSystem ## Describe the bug Passing a PyArrowHDFS implementation of fsspec.spec.AbstractFileSystem (in the `fs` param required by `load_from_disk` methods in `DatasetDict` (in datasets_dict.py) and `Datase...
[ -0.3349553645, 0.2241356224, 0.102795437, 0.1352015883, 0.1960353553, -0.2252621353, 0.361558944, 0.0040491149, -0.0512572154, -0.102641426, -0.1220099181, 0.469068706, 0.1646988243, 0.0326150768, 0.1159635484, 0.0376404598, 0.2662323713, -0.0224406663, -0.0546423085, -0.003812...
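A sketch of that workaround with assumed cluster settings; `HadoopFileSystem` here is fsspec's Arrow-backed wrapper, which already satisfies the `fs` interface that `load_from_disk` expects:
```python
from fsspec.implementations.arrow import HadoopFileSystem

from datasets import load_from_disk

fs = HadoopFileSystem(host="namenode", port=8020)  # assumed host/port
ds = load_from_disk("/datasets/my_dataset", fs=fs)  # assumed HDFS path
```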
https://github.com/huggingface/datasets/issues/3112
OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB
I am very unsure why you tagged me here. I am not a maintainer of the Datasets library and have no idea how to help you.
## Describe the bug Despite having batches way under 2Gb when running `datasets.map()`, after processing correctly the data of the first batch without fuss and irrespective of writer_batch_size (say 2,4,8,16,32,64 and 128 in my case), it returns the following error : > OverflowError: There was an overflow in the <c...
27
OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB ## Describe the bug Despite having batches way under 2Gb when running `datasets.map()`, after processing correctly the data of the first batch without fuss and irrespective of...
[ -0.479187727, -0.1015703529, -0.1007575765, 0.3905477524, 0.2512301207, -0.0684596151, 0.110699825, 0.2645792961, -0.1296795756, 0.3843227625, 0.2181476653, 0.3294644058, -0.1310050637, -0.171629101, 0.0497370884, -0.0603311583, 0.0917914584, -0.2516676784, -0.0394297801, 0.032...
https://github.com/huggingface/datasets/issues/3112
OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB
OK, got it: the tensor was full of NaNs, cf. ~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in write_examples_on_file(self), line 315: # This check fails with FloatArrays with nans, which is not what we want, so account for that:
## Describe the bug Despite having batches way under 2Gb when running `datasets.map()`, after processing correctly the data of the first batch without fuss and irrespective of writer_batch_size (say 2,4,8,16,32,64 and 128 in my case), it returns the following error : > OverflowError: There was an overflow in the <c...
30
OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB ## Describe the bug Despite having batches way under 2Gb when running `datasets.map()`, after processing correctly the data of the first batch without fuss and irrespective of...
[ -0.479187727, -0.1015703529, -0.1007575765, 0.3905477524, 0.2512301207, -0.0684596151, 0.110699825, 0.2645792961, -0.1296795756, 0.3843227625, 0.2181476653, 0.3294644058, -0.1310050637, -0.171629101, 0.0497370884, -0.0603311583, 0.0917914584, -0.2516676784, -0.0394297801, 0.032...
https://github.com/huggingface/datasets/issues/3099
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'
Hi @JTWang2000, thanks for reporting. However, I cannot reproduce your reported bug: ```python >>> from datasets import load_dataset >>> dataset = load_dataset("sst", "default") >>> dataset DatasetDict({ train: Dataset({ features: ['sentence', 'label', 'tokens', 'tree'], num_rows: 8544 ...
## Describe the bug When using `pip install datasets` or use `conda install -c huggingface -c conda-forge datasets` cannot install datasets ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("sst", "default") ``` ## Actual results ---------------------------...
90
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo' ## Describe the bug When using `pip install datasets` or use `conda install -c huggingface -c conda-forge datasets` cannot install datasets ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load...
[ -0.4111837745, -0.2402752638, -0.0685085058, 0.432022661, 0.2099774629, 0.0242406782, 0.2803740501, 0.4057570994, 0.1470829546, 0.0813488215, -0.2076935172, 0.3919777572, -0.0483211763, 0.0626079962, -0.0157080498, -0.1405219734, 0.0408853665, 0.2579431832, -0.3039068282, -0.13...
https://github.com/huggingface/datasets/issues/3099
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'
I'm facing the same issue. I did run the upgrade command, but that doesn't seem to resolve the issue.
## Describe the bug When using `pip install datasets` or use `conda install -c huggingface -c conda-forge datasets` cannot install datasets ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("sst", "default") ``` ## Actual results ---------------------------...
19
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo' ## Describe the bug When using `pip install datasets` or use `conda install -c huggingface -c conda-forge datasets` cannot install datasets ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load...
[ -0.4111837745, -0.2402752638, -0.0685085058, 0.432022661, 0.2099774629, 0.0242406782, 0.2803740501, 0.4057570994, 0.1470829546, 0.0813488215, -0.2076935172, 0.3919777572, -0.0483211763, 0.0626079962, -0.0157080498, -0.1405219734, 0.0408853665, 0.2579431832, -0.3039068282, -0.13...
https://github.com/huggingface/datasets/issues/3099
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'
Hi @aneeshjain, could you please specify which `huggingface_hub` version you are using? Besides that, please run `datasets-cli env` and copy-and-paste its output below.
## Describe the bug When using `pip install datasets` or use `conda install -c huggingface -c conda-forge datasets` cannot install datasets ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("sst", "default") ``` ## Actual results ---------------------------...
23
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo' ## Describe the bug When using `pip install datasets` or use `conda install -c huggingface -c conda-forge datasets` cannot install datasets ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load...
[ -0.4111837745, -0.2402752638, -0.0685085058, 0.432022661, 0.2099774629, 0.0242406782, 0.2803740501, 0.4057570994, 0.1470829546, 0.0813488215, -0.2076935172, 0.3919777572, -0.0483211763, 0.0626079962, -0.0157080498, -0.1405219734, 0.0408853665, 0.2579431832, -0.3039068282, -0.13...
https://github.com/huggingface/datasets/issues/3099
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'
The problem seems to be with the latest version of `datasets`. After running `pip install -U datasets huggingface_hub`, I get the following: ```bash python -c "import huggingface_hub; print(f'hbvers={huggingface_hub.__version__}'); import datasets; print(f'dvers={datasets.__version__}')" hbvers=0.0.8 Traceback (...
## Describe the bug When using `pip install datasets` or use `conda install -c huggingface -c conda-forge datasets` cannot install datasets ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("sst", "default") ``` ## Actual results ---------------------------...
128
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo' ## Describe the bug When using `pip install datasets` or use `conda install -c huggingface -c conda-forge datasets` cannot install datasets ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load...
[ -0.4111837745, -0.2402752638, -0.0685085058, 0.432022661, 0.2099774629, 0.0242406782, 0.2803740501, 0.4057570994, 0.1470829546, 0.0813488215, -0.2076935172, 0.3919777572, -0.0483211763, 0.0626079962, -0.0157080498, -0.1405219734, 0.0408853665, 0.2579431832, -0.3039068282, -0.13...
https://github.com/huggingface/datasets/issues/3099
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'
> Hi @JTWang2000, thanks for reporting. > > However, I cannot reproduce your reported bug: > > ```python > >>> from datasets import load_dataset > > >>> dataset = load_dataset("sst", "default") > >>> dataset > DatasetDict({ > train: Dataset({ > features: ['sentence', 'label', 'tokens', 'tree']...
## Describe the bug When using `pip install datasets` or use `conda install -c huggingface -c conda-forge datasets` cannot install datasets ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("sst", "default") ``` ## Actual results ---------------------------...
137
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo' ## Describe the bug When using `pip install datasets` or use `conda install -c huggingface -c conda-forge datasets` cannot install datasets ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load...
[ -0.4111837745, -0.2402752638, -0.0685085058, 0.432022661, 0.2099774629, 0.0242406782, 0.2803740501, 0.4057570994, 0.1470829546, 0.0813488215, -0.2076935172, 0.3919777572, -0.0483211763, 0.0626079962, -0.0157080498, -0.1405219734, 0.0408853665, 0.2579431832, -0.3039068282, -0.13...
https://github.com/huggingface/datasets/issues/3099
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'
@tjruwase, please note that versions of `datasets` and `huggingface_hub` must be compatible with each other: - In `datasets` version `1.11.0`, the requirement on `huggingface_hub` was: `huggingface_hub<0.1.0` https://github.com/huggingface/datasets/blob/2cc00f372a96133e701275eec4d6b26d15257289/setup.py#L90 - ...
## Describe the bug When using `pip install datasets` or use `conda install -c huggingface -c conda-forge datasets` cannot install datasets ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("sst", "default") ``` ## Actual results ---------------------------...
83
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo' ## Describe the bug When using `pip install datasets` or use `conda install -c huggingface -c conda-forge datasets` cannot install datasets ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load...
[ -0.4111837745, -0.2402752638, -0.0685085058, 0.432022661, 0.2099774629, 0.0242406782, 0.2803740501, 0.4057570994, 0.1470829546, 0.0813488215, -0.2076935172, 0.3919777572, -0.0483211763, 0.0626079962, -0.0157080498, -0.1405219734, 0.0408853665, 0.2579431832, -0.3039068282, -0.13...
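A quick way to inspect the installed pair and the range pinned by `datasets` itself, as a sketch (Python 3.8+ for `importlib.metadata`):
```python
from importlib.metadata import requires, version

print("datasets:", version("datasets"))
print("huggingface_hub:", version("huggingface_hub"))
# The compatible huggingface_hub range is recorded in datasets' own metadata:
print([r for r in requires("datasets") if "huggingface" in r.lower()])
```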
https://github.com/huggingface/datasets/issues/3095
`cast_column` makes audio decoding fail
Thanks for reporting, @patrickvonplaten. I think the issue is related to mp3 resampling, not to `cast_column`. You can check that `cast_column` works OK with non-mp3 audio files: ```python from datasets import load_dataset import datasets ds = load_dataset("arabic_speech_corpus", split="train") ds = ds.cast_...
## Describe the bug After changing the sampling rate automatic decoding fails. ## Steps to reproduce the bug ```python from datasets import load_dataset import datasets ds = load_dataset("common_voice", "ab", split="train") ds = ds.cast_column("audio", datasets.features.Audio(sampling_rate=16_000)) pr...
47
`cast_column` makes audio decoding fail ## Describe the bug After changing the sampling rate automatic decoding fails. ## Steps to reproduce the bug ```python from datasets import load_dataset import datasets ds = load_dataset("common_voice", "ab", split="train") ds = ds.cast_column("audio", datasets.f...
[ -0.3449611366, 0.0809736252, 0.0290255547, -0.1023004204, 0.513559401, -0.0765639544, 0.3012081981, 0.2489967644, -0.1418651044, 0.1867393106, -0.2701362669, 0.4867173135, 0.0446885265, -0.1314160675, -0.32100299, -0.292571187, 0.1798269302, 0.0890951827, -0.0834234208, -0.0481...
https://github.com/huggingface/datasets/issues/3093
Error loading json dataset with multiple splits if keys in nested dicts have a different order
Hi, even Pandas, which is less strict than PyArrow when it comes to reading JSON, doesn't support different orderings: ```python import io import pandas as pd s = """ {"a": {"c": 8, "b": 5}} {"a": {"b": 7, "c": 6}} """ buffer = io.StringIO(s) df = pd.read_json(buffer, lines=True) print(df.sha...
## Describe the bug Loading a json dataset with multiple splits that have nested dicts with keys in different order results in the error below. If the keys in the nested dicts always have the same order or even if you just load a single split in which the nested dicts don't have the same order, everything works fin...
102
Error loading json dataset with multiple splits if keys in nested dicts have a different order ## Describe the bug Loading a json dataset with multiple splits that have nested dicts with keys in different order results in the error below. If the keys in the nested dicts always have the same order or even if you j...
[ 0.0832343027, -0.2473295927, -0.0849277079, 0.4981232584, -0.034176562, 0.0925235674, 0.4738396704, 0.2425033897, 0.4540650845, -0.0762436315, 0.0957988128, 0.3503094018, -0.0273534022, 0.1923106313, -0.4392800927, -0.3062129915, 0.1353116632, 0.0029305192, 0.2856116295, 0.2145...
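Since neither PyArrow nor pandas accepts mixed key orders, one pragmatic workaround is to normalize the files before loading; a sketch assuming newline-delimited JSON and hypothetical file names:
```python
import json

# Rewrite each line with sorted keys so every nested dict has the same order.
with open("train.json", encoding="utf-8") as f_in, \
        open("train_sorted.json", "w", encoding="utf-8") as f_out:
    for line in f_in:
        if line.strip():
            f_out.write(json.dumps(json.loads(line), sort_keys=True) + "\n")
```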
https://github.com/huggingface/datasets/issues/3091
`blog_authorship_corpus` is broken
Hi @fdtomasi, thanks for reporting. You are right: the original host data URL no longer exists. I've contacted the authors of the dataset to ask whether they host this dataset at another URL.
## Describe the bug The dataset `blog_authorship_corpus` is broken. By bypassing the checksum checks, the loading does not return any error but the resulting dataset is empty. I suspect it is because the data download url is broken (http://www.cs.biu.ac.il/~koppel/blogs/blogs.zip). ## Steps to reproduce the bug ...
35
`blog_authorship_corpus` is broken ## Describe the bug The dataset `blog_authorship_corpus` is broken. By bypassing the checksum checks, the loading does not return any error but the resulting dataset is empty. I suspect it is because the data download url is broken (http://www.cs.biu.ac.il/~koppel/blogs/blogs.zip...
[ -0.0562496819, 0.4549151957, -0.0059019011, 0.2907787561, 0.0022765598, 0.2032642514, 0.3583987057, 0.3165001571, 0.0460988358, -0.0797857717, -0.1252786815, 0.1028041691, 0.1488698423, -0.2176861316, 0.0624524802, 0.1197558641, 0.0223905351, 0.0131186349, -0.0396164022, -0.113...
https://github.com/huggingface/datasets/issues/3091
`blog_authorship_corpus` is broken
Hi, @fdtomasi, the URL is fixed. The fix is already in our master branch and it will be accessible in our next release. In the meantime, you can include the fix if you install the `datasets` library from the master branch: ``` pip install -U git+ssh://git@github.com/huggingface/datasets.git@master#egg=datasets ...
## Describe the bug The dataset `blog_authorship_corpus` is broken. By bypassing the checksum checks, the loading does not return any error but the resulting dataset is empty. I suspect it is because the data download url is broken (http://www.cs.biu.ac.il/~koppel/blogs/blogs.zip). ## Steps to reproduce the bug ...
54
`blog_authorship_corpus` is broken ## Describe the bug The dataset `blog_authorship_corpus` is broken. By bypassing the checksum checks, the loading does not return any error but the resulting dataset is empty. I suspect it is because the data download url is broken (http://www.cs.biu.ac.il/~koppel/blogs/blogs.zip...
[ -0.0562496819, 0.4549151957, -0.0059019011, 0.2907787561, 0.0022765598, 0.2032642514, 0.3583987057, 0.3165001571, 0.0460988358, -0.0797857717, -0.1252786815, 0.1028041691, 0.1488698423, -0.2176861316, 0.0624524802, 0.1197558641, 0.0223905351, 0.0131186349, -0.0396164022, -0.113...
https://github.com/huggingface/datasets/issues/3089
JNLPBA Dataset
# Steps to reproduce To reproduce: ```python from datasets import load_dataset dataset = load_dataset('jnlpba') dataset['train'].features['ner_tags'] ``` Output: ```python Sequence(feature=ClassLabel(num_classes=3, names=['O', 'B', 'I'], names_file=None, id=None), length=-1, id=None) ```
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug ``` ## Expected results The dataset loading script for this dataset is incorrect. This is a biomedical dataset used for named entity recognition. The entities in ...
27
JNLPBA Dataset ## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug ``` ## Expected results The dataset loading script for this dataset is incorrect. This is a biomedical dataset used for named entity recognition....
[ 0.1463600993, -0.0191736072, 0.0117603214, 0.1949786991, 0.2566965818, 0.0052962326, 0.2601889968, 0.3991121054, 0.1160128042, 0.2989786565, -0.0805608705, 0.4474713504, 0.1182021126, 0.1445478052, 0.2085645497, -0.0914914757, 0.0858130604, 0.201355502, 0.0577949509, -0.0255822...
https://github.com/huggingface/datasets/issues/3089
JNLPBA Dataset
Since I cannot create a branch, here is the updated code: ```python # coding=utf-8 # Copyright 2020 HuggingFace Datasets Authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # ...
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug ``` ## Expected results The dataset loading script for this dataset is incorrect. This is a biomedical dataset used for named entity recognition. The entities in ...
455
JNLPBA Dataset ## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug ``` ## Expected results The dataset loading script for this dataset is incorrect. This is a biomedical dataset used for named entity recognition....
[ 0.1486541629, -0.1739543825, 0.0470738113, 0.2513509691, 0.2042787522, -0.0211737603, 0.1656205058, 0.4094052017, 0.1123951748, 0.2550165951, -0.185700506, 0.3755782545, 0.1516994387, 0.121623002, 0.2561086714, -0.143202588, 0.0996645615, 0.144936353, -0.1114028543, -0.03914111...
https://github.com/huggingface/datasets/issues/3084
VisibleDeprecationWarning when using `set_format("numpy")`
I just opened a PR and I verified that the code you provided doesn't show any deprecation warning :)
Code to reproduce: ``` from datasets import load_dataset dataset = load_dataset("glue", "mnli") from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('distilbert-base-cased') def tokenize_function(dataset): return tokenizer(dataset['premise']) tokenized_datasets = dataset....
19
VisibleDeprecationWarning when using `set_format("numpy")` Code to reproduce: ``` from datasets import load_dataset dataset = load_dataset("glue", "mnli") from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('distilbert-base-cased') def tokenize_function(dataset): return t...
[ -0.2761725783, -0.1842715144, -0.0695534199, -0.1787900478, 0.4122674465, -0.0492860191, 0.558613658, 0.387201488, -0.2987534106, -0.0875777453, 0.0455400348, 0.4911158085, -0.1978975683, -0.103918694, -0.0582349338, -0.1153115556, 0.2907199562, 0.2943540215, 0.212075755, 0.115...
https://github.com/huggingface/datasets/issues/3073
Import error installing with ppc64le
This seems to be an issue with importing PyArrow so I posted the problem [here](https://issues.apache.org/jira/browse/ARROW-14323), and I'm closing this issue.
## Describe the bug Installing the datasets library with a computer running with ppc64le seems to cause an issue when importing the datasets library. ``` python Python 3.6.13 | packaged by conda-forge | (default, Sep 23 2021, 07:37:44) [GCC 9.4.0] on linux Type "help", "copyright", "credits" or "license" for...
20
Import error installing with ppc64le ## Describe the bug Installing the datasets library with a computer running with ppc64le seems to cause an issue when importing the datasets library. ``` python Python 3.6.13 | packaged by conda-forge | (default, Sep 23 2021, 07:37:44) [GCC 9.4.0] on linux Type "help", ...
[ -0.1681165248, 0.1741706878, 0.0191158, 0.2278842032, 0.2522402406, 0.2248131782, 0.3580008447, 0.2304055691, -0.3304048181, -0.1519774348, -0.155306071, 0.3825338483, -0.0719909519, -0.1810771525, 0.1235699058, -0.1017437503, 0.2075526714, 0.1592648178, -0.3558947444, 0.010444...
https://github.com/huggingface/datasets/issues/3071
Custom plain text dataset, plain json dataset and plain csv dataset are removed from datasets template folder
Hi @zixiliuUSC, As explained in the documentation (https://huggingface.co/docs/datasets/loading.html#json), we support loading any dataset in JSON (as well as CSV, text, Parquet) format: ```python ds = load_dataset('json', data_files='my_file.json') ```
## Adding a Dataset - **Name:** text, json, csv - **Description:** I am developing a customized dataset loading script. The problem is mainly that my custom dataset is separated into many files and I only find a dataset loading template in [https://github.com/huggingface/datasets/blob/1.2.1/datasets/json/json.py](ht...
28
Custom plain text dataset, plain json dataset and plain csv dataset are removed from datasets template folder ## Adding a Dataset - **Name:** text, json, csv - **Description:** I am developing a customized dataset loading script. The problem is mainly that my custom dataset is separated into many files and I only f...
[ 0.2397148758, -0.3320982158, -0.0408485979, 0.3953283131, 0.1127283648, 0.2876621783, 0.1925071627, 0.1587703377, 0.2423329651, -0.1266016215, -0.3314575255, 0.1281609088, -0.1774953157, 0.2487667352, -0.103711836, -0.130127281, 0.0871858671, 0.2242774665, 0.0038969533, -0.0486...
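The generic loaders also accept lists of files and per-split mappings, which covers datasets separated into many plain files; a short sketch with hypothetical file names:
```python
from datasets import load_dataset

json_ds = load_dataset("json", data_files={"train": ["part1.json", "part2.json"],
                                           "test": "test.json"})
csv_ds = load_dataset("csv", data_files="my_file.csv")
text_ds = load_dataset("text", data_files="my_file.txt")
```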
https://github.com/huggingface/datasets/issues/3064
Make `interleave_datasets` more robust
Hi ! Sorry for the late response. I agree `interleave_datasets` would benefit a lot from having more flexibility. If I understand correctly, it would be nice to be able to define stopping strategies like `stop="first_exhausted"` (default) or `stop="all_exhausted"`. If you'd like to contribute this feature, I'd be happy...
**Is your feature request related to a problem? Please describe.** Right now there are a few hiccups when using `interleave_datasets`. An interleaved dataset iterates until the smallest dataset completes its iterator, so larger datasets may not complete a full epoch of iteration. This creates new problems in calculation...
112
Make `interleave_datasets` more robust **Is your feature request related to a problem? Please describe.** Right now there are a few hiccups when using `interleave_datasets`. An interleaved dataset iterates until the smallest dataset completes its iterator, so larger datasets may not complete a full epoch of iteration....
[ -0.2761626542, -0.0267761201, -0.2281200737, 0.0825817436, -0.1436164826, 0.0176417418, 0.2350521237, 0.1321875006, 0.0668103099, 0.1262001544, -0.0727010593, 0.2160599679, -0.293736428, 0.2195408344, -0.3901974857, -0.0787840784, -0.1078506485, 0.0256603323, 0.0055010775, 0.10...
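A sketch of the suggested API from the caller's side; the `stopping_strategy` name follows the proposal above and was not part of the API at the time (newer releases of `datasets` did eventually ship a parameter with these strategy names):
```python
from datasets import Dataset, interleave_datasets

d1 = Dataset.from_dict({"x": [1, 2, 3]})
d2 = Dataset.from_dict({"x": [10, 20, 30, 40, 50]})

# "first_exhausted" keeps the historical behavior; "all_exhausted" keeps
# sampling until every dataset has completed a full epoch.
mixed = interleave_datasets([d1, d2], stopping_strategy="all_exhausted")
print(mixed["x"])
```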
https://github.com/huggingface/datasets/issues/3063
Windows CI is unable to test streaming properly because of SSL issues
I think this problem is already fixed: ```python In [4]: import fsspec ...: ...: url = "https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattributes" ...: ...: fsspec.open(url).open() Out[4]: <File-like object HTTPFileSystem, https://m...
In https://github.com/huggingface/datasets/pull/3041 the windows tests were skipped because of SSL issues with moon-staging.huggingface.co:443 The issue appears only on windows with asyncio. On Linux it works. With requests it works as well. And with the production environment huggingface.co it also works. to rep...
26
Windows CI is unable to test streaming properly because of SSL issues In https://github.com/huggingface/datasets/pull/3041 the windows tests were skipped because of SSL issues with moon-staging.huggingface.co:443 The issue appears only on windows with asyncio. On Linux it works. With requests it works as well. And...
[ -0.354352355, 0.0034217243, 0.0198071152, 0.0342933424, 0.0507179536, -0.1108404174, 0.2119017243, 0.131444484, 0.028769182, -0.0833966061, 0.0598858893, -0.2223212123, 0.1826367676, 0.2124496251, -0.0842105299, -0.102616176, 0.0015288341, -0.2221381366, 0.0797981322, 0.1013273...
https://github.com/huggingface/datasets/issues/3061
Feature request : add leave=True to dataset.map to enable tqdm nested bars (and whilst we're at it couldn't we get a way to access directly tqdm underneath?)
@lhoestq, @albertvillanova can we have `**tqdm_kwargs` in `map`? If there are any fields that are important to our tqdm (like iterable or unit), we can pop them before initialising the tqdm object so as to avoid duplication.
**A clear and concise description of what you want to happen.** It would be so nice to be able to nest HuggingFace `Datasets.map() ` progress bars in the grander scheme of things and whilst we're at it why not other functions. **Describe alternatives you've considered** By the way is there not a way to directl...
37
Feature request : add leave=True to dataset.map to enable tqdm nested bars (and whilst we're at it couldn't we get a way to access directly tqdm underneath?) **A clear and concise description of what you want to happen.** It would be so nice to be able to nest HuggingFace `Datasets.map() ` progress bars in the gr...
[ -0.3079598546, -0.4473890662, -0.0674706623, -0.1451707333, 0.1558555812, -0.0401673317, 0.3228403628, 0.1836805046, -0.2261488438, 0.1609632969, -0.2775020897, 0.5143533945, -0.0580768585, 0.4776198864, -0.0116167683, -0.0091586513, -0.0279599, -0.0164654795, -0.5783259869, 0....
https://github.com/huggingface/datasets/issues/3061
Feature request : add leave=True to dataset.map to enable tqdm nested bars (and whilst we're at it couldn't we get a way to access directly tqdm underneath?)
Hi ! Sounds like a good idea :) Also, I think it would be better to have these as actual parameters instead of kwargs to make it clearer
**A clear and concise description of what you want to happen.** It would be so nice to be able to nest HuggingFace `Datasets.map() ` progress bars in the grander scheme of things and whilst we're at it why not other functions. **Describe alternatives you've considered** By the way is there not a way to directl...
29
Feature request : add leave=True to dataset.map to enable tqdm nested bars (and whilst we're at it couldn't we get a way to access directly tqdm underneath?) **A clear and concise description of what you want to happen.** It would be so nice to be able to nest HuggingFace `Datasets.map() ` progress bars in the gr...
[ -0.3040522933, -0.4401244819, -0.0733553544, -0.1609271765, 0.1207133383, -0.083848469, 0.3616036177, 0.1839642376, -0.2422847599, 0.1404825151, -0.2260065824, 0.503369689, -0.020245567, 0.5093122721, -0.0045803525, -0.0898854882, -0.0212960914, 0.0006459563, -0.5398647785, 0.2...
https://github.com/huggingface/datasets/issues/3060
load_dataset('openwebtext') yields "Compressed file ended before the end-of-stream marker was reached"
Hi @RylanSchaeffer, thanks for reporting. I'm sorry, but I was not able to reproduce your problem. Normally, the reason for this type of error is that the download of the data files was not fully completed. Could you please try to load the dataset again, forcing its redownload? Please use: ``...
## Describe the bug When I try `load_dataset('openwebtext')`, I receive a "EOFError: Compressed file ended before the end-of-stream marker was reached" error. ## Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset('openwebtext') ``` ## Expected results I expect the `datas...
66
load_dataset('openwebtext') yields "Compressed file ended before the end-of-stream marker was reached" ## Describe the bug When I try `load_dataset('openwebtext')`, I receive a "EOFError: Compressed file ended before the end-of-stream marker was reached" error. ## Steps to reproduce the bug ``` from datasets im...
[ -0.3312988877, -0.0927271023, 0.003960914, 0.4996192455, 0.2171835154, 0.021186851, 0.1519744992, 0.3008338213, -0.0964679196, 0.2880549431, 0.0456488989, 0.4010351002, 0.1004999951, 0.157242164, -0.1340151429, 0.1229330227, -0.0532350391, 0.227064684, -0.3442712724, -0.1649356...
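The truncated suggestion above most likely refers to forcing a fresh download; a hedged sketch:
```python
from datasets import load_dataset

# Re-download the archives instead of reusing a possibly truncated cached copy.
dataset = load_dataset("openwebtext", download_mode="force_redownload")
```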
https://github.com/huggingface/datasets/issues/3060
load_dataset('openwebtext') yields "Compressed file ended before the end-of-stream marker was reached"
I close this issue for the moment. Feel free to re-open it again if the problem persists.
## Describe the bug When I try `load_dataset('openwebtext')`, I receive a "EOFError: Compressed file ended before the end-of-stream marker was reached" error. ## Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset('openwebtext') ``` ## Expected results I expect the `datas...
17
load_dataset('openwebtext') yields "Compressed file ended before the end-of-stream marker was reached" ## Describe the bug When I try `load_dataset('openwebtext')`, I receive a "EOFError: Compressed file ended before the end-of-stream marker was reached" error. ## Steps to reproduce the bug ``` from datasets im...
[ -0.3312988877, -0.0927271023, 0.003960914, 0.4996192455, 0.2171835154, 0.021186851, 0.1519744992, 0.3008338213, -0.0964679196, 0.2880549431, 0.0456488989, 0.4010351002, 0.1004999951, 0.157242164, -0.1340151429, 0.1229330227, -0.0532350391, 0.227064684, -0.3442712724, -0.1649356...
https://github.com/huggingface/datasets/issues/3058
Dataset wikipedia and Bookcorpusopen cannot be fetched from dataloader.
Hi ! I think this issue is more related to the `transformers` project. Could you open an issue on https://github.com/huggingface/transformers ? Anyway I think the issue could be that both wikipedia and bookcorpusopen have an additional "title" column, contrary to wikitext which only has a "text" column. After callin...
## Describe the bug I have used the previous version of `transformers` and `datasets`. The dataset `wikipedia` can be successfully used. Recently, I upgraded them to the newest version and found that they raise errors. I also tried other datasets. The `wikitext` works and the `bookcorpusopen` raises the same errors as `wikipe...
58
Dataset wikipedia and Bookcorpusopen cannot be fetched from dataloader. ## Describe the bug I have used the previous version of `transformers` and `datasets`. The dataset `wikipedia` can be successfully used. Recently, I upgraded them to the newest version and found that they raise errors. I also tried other datasets. The `...
[ -0.1171703935, -0.0888658836, 0.0287200026, 0.5902850032, 0.1981574446, 0.0083723227, 0.4193763435, 0.2448296249, 0.0080309622, 0.0493949912, -0.2614282072, 0.0099563012, 0.0961200818, -0.1403683573, -0.186207518, -0.5285599828, 0.087584205, 0.1113325432, -0.438457042, -0.05910...
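A sketch of the suggested fix, with an assumed wikipedia config name:
```python
from datasets import load_dataset

wiki = load_dataset("wikipedia", "20200501.en", split="train")
# Drop the extra column so only "text" remains, matching what the language
# modeling script expects.
wiki = wiki.remove_columns("title")
```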
https://github.com/huggingface/datasets/issues/3058
Dataset wikipedia and Bookcorpusopen cannot be fetched from dataloader.
Removing the "title" column works! Thanks for your advice. Maybe I should still create an issue to `transformers' to mark this solution?
## Describe the bug I have used the previous version of `transformers` and `datasets`. The dataset `wikipedia` can be successfully used. Recently, I upgraded them to the newest version and found that they raise errors. I also tried other datasets. The `wikitext` works and the `bookcorpusopen` raises the same errors as `wikipe...
22
Dataset wikipedia and Bookcorpusopen cannot be fetched from dataloader. ## Describe the bug I have used the previous version of `transformers` and `datasets`. The dataset `wikipedia` can be successfully used. Recently, I upgraded them to the newest version and found that they raise errors. I also tried other datasets. The `...
[ -0.1171703935, -0.0888658836, 0.0287200026, 0.5902850032, 0.1981574446, 0.0083723227, 0.4193763435, 0.2448296249, 0.0080309622, 0.0493949912, -0.2614282072, 0.0099563012, 0.0961200818, -0.1403683573, -0.186207518, -0.5285599828, 0.087584205, 0.1113325432, -0.438457042, -0.05910...
https://github.com/huggingface/datasets/issues/3057
Error in per class precision computation
Hi @tidhamecha2, thanks for reporting. Indeed, we fixed this issue just one week ago: #3008 The fix will be included in our next version release. In the meantime, you can incorporate the fix by installing `datasets` from the master branch: ``` pip install -U git+ssh://git@github.com/huggingface/datasets.git@...
## Describe the bug When trying to get the per class precision values by providing `average=None`, following error is thrown `ValueError: can only convert an array of size 1 to a Python scalar` ## Steps to reproduce the bug ```python from datasets import load_dataset, load_metric precision_metric = load_metric("...
53
Error in per class precision computation ## Describe the bug When trying to get the per class precision values by providing `average=None`, following error is thrown `ValueError: can only convert an array of size 1 to a Python scalar` ## Steps to reproduce the bug ```python from datasets import load_dataset, l...
[ -0.1642785072, -0.414196521, -0.0621304736, 0.3582138419, 0.5411623716, 0.1221359745, 0.0809524357, 0.1405230761, -0.0493667312, 0.6348305345, -0.0748264343, 0.1664376706, -0.0171341896, 0.1542899907, -0.1672088653, -0.1889114827, -0.0766216815, 0.2445577085, -0.2314056754, -0....
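With the #3008 fix applied, `average=None` returns the per-class array; a small sketch (the printed values are what sklearn's `precision_score` yields for this toy input):
```python
from datasets import load_metric

precision_metric = load_metric("precision")
results = precision_metric.compute(references=[0, 1, 2],
                                   predictions=[0, 1, 1],
                                   average=None)
print(results)  # expected: {'precision': array([1. , 0.5, 0. ])}
```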
https://github.com/huggingface/datasets/issues/3052
load_dataset cannot download the data and hangs on forever if cache dir specified
The issue was an environment inconsistency; updating packages did the trick: `conda install -c huggingface -c conda-forge datasets` > Collecting package metadata (current_repodata.json): done > Solving environment: | > The environment is inconsistent, please check the package plan carefully > The following packages ar...
## Describe the bug After updating datasets, code that ran just fine for ages began to fail. Specifying _datasets.load_dataset_'s optional _cache_dir_ argument on a Windows 10 machine causes the data download to hang forever. The same call without cache_dir works just fine. Surprisingly, the exact same code just runs perfec...
118
load_dataset cannot download the data and hangs on forever if cache dir specified ## Describe the bug After updating datasets, code that ran just fine for ages began to fail. Specifying _datasets.load_dataset_'s optional _cache_dir_ argument on a Windows 10 machine causes the data download to hang forever. The same c...
[ -0.3561401665, 0.6039686203, -0.112191759, 0.0641633123, 0.3905137181, 0.0553010851, 0.4375729859, 0.094742924, -0.0405043215, 0.1036619917, -0.0045552352, 0.4003577232, 0.1727004349, -0.2531112432, -0.1021793559, 0.0295671653, 0.3168690503, 0.0264830776, -0.2681316733, 0.02974...
https://github.com/huggingface/datasets/issues/3051
Non-Matching Checksum Error with crd3 dataset
I got the same error for another dataset (`multi_woz_v22`): ``` datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/M...
## Describe the bug When I try loading the crd3 dataset (https://huggingface.co/datasets/crd3), an error is thrown. ## Steps to reproduce the bug ```python dataset = load_dataset('crd3', split='train') ``` ## Expected results I expect no error to be thrown. ## Actual results A non-matching checksum err...
21
Non-Matching Checksum Error with crd3 dataset ## Describe the bug When I try loading the crd3 dataset (https://huggingface.co/datasets/crd3), an error is thrown. ## Steps to reproduce the bug ```python dataset = load_dataset('crd3', split='train') ``` ## Expected results I expect no error to be thrown. ...
[ 0.0334027298, -0.0302853491, 0.0008105278, 0.2788746953, 0.017549051, -0.0104441335, 0.265627861, 0.3295178413, -0.037674509, -0.0168874599, 0.0127356676, 0.1477050483, 0.1147076041, 0.0427603424, -0.1140761748, -0.0273571424, 0.3024690747, 0.0541087575, 0.1257388145, -0.032854...
https://github.com/huggingface/datasets/issues/3051
Non-Matching Checksum Error with crd3 dataset
I'm seeing the same issue as @RylanSchaeffer: Python 3.7.11, macOS 11.4 datasets==1.14.0 fails on: ```python dataset = datasets.load_dataset("multi_woz_v22") ```
## Describe the bug When I try loading the crd3 dataset (https://huggingface.co/datasets/crd3), an error is thrown. ## Steps to reproduce the bug ```python dataset = load_dataset('crd3', split='train') ``` ## Expected results I expect no error to be thrown. ## Actual results A non-matching checksum err...
19
Non-Matching Checksum Error with crd3 dataset ## Describe the bug When I try loading the crd3 dataset (https://huggingface.co/datasets/crd3), an error is thrown. ## Steps to reproduce the bug ```python dataset = load_dataset('crd3', split='train') ``` ## Expected results I expect no error to be thrown. ...
[ -0.0050890408, -0.0740979537, 0.0115769226, 0.2142196149, 0.0446547531, -0.0815154389, 0.2199154198, 0.2908580601, -0.020732725, -0.0038823609, 0.0393177643, 0.2184229642, -0.0073493998, 0.1165500805, -0.1907240599, -0.0479840301, 0.3433543444, 0.0568099096, 0.0270323008, -0.01...
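Until the recorded checksums are updated, a stopgap (not a fix) is to skip verification; note that this gives up the integrity check entirely:
```python
from datasets import load_dataset

# Stopgap only: the downloaded files may genuinely differ from what the
# dataset script's recorded checksums expect.
dataset = load_dataset("crd3", split="train", ignore_verifications=True)
```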
https://github.com/huggingface/datasets/issues/3048
Identify which shard data belongs to
Independently of this I think it raises the need to allow multiprocessing during streaming so that we get samples from multiple shards in one batch.
**Is your feature request related to a problem? Please describe.** I'm training on a large dataset made of multiple sub-datasets. During training I can observe some jumps in loss which may correspond to different shards. ![image](https://user-images.githubusercontent.com/715491/136668758-521263aa-a9b2-4ad2-8d22-...
25
Identify which shard data belongs to **Is your feature request related to a problem? Please describe.** I'm training on a large dataset made of multiple sub-datasets. During training I can observe some jumps in loss which may correspond to different shards. ![image](https://user-images.githubusercontent.com/71...
[ -0.5709902644, -0.269620657, -0.0461121276, 0.3155331016, -0.2868869007, -0.2091677189, 0.3178083003, 0.1625363678, -0.1746657342, 0.1725877821, -0.0845634788, -0.0563050136, -0.1931997538, 0.3816587925, 0.2079123557, -0.1998602301, 0.0493255593, -0.0795502886, 0.1737654209, -0...
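One way a loading script could surface the shard today, sketched as a plain generator (the `shard` field and the file handling are hypothetical):
```python
def generate_examples(filepaths):
    # Emit the source shard with every example so jumps in the training loss
    # can be traced back to the sub-dataset they came from.
    for filepath in filepaths:
        with open(filepath, encoding="utf-8") as f:
            for i, line in enumerate(f):
                yield f"{filepath}-{i}", {"text": line.rstrip("\n"),
                                          "shard": filepath}
```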
https://github.com/huggingface/datasets/issues/3044
Inconsistent caching behaviour when using `Dataset.map()` with a `new_fingerprint` and `num_proc>1`
Following the discussion in #3045, it would be nice to have a way to let users have a good experience with caching even if the function is not hashable. Currently a workaround is to make the function picklable. This can be done by implementing a callable class instead, which can be pickled by implementing a cust...
## Describe the bug Caching does not work when using `Dataset.map()` with: 1. a function that cannot be deterministically fingerprinted 2. `num_proc>1` 3. using a custom fingerprint set with the argument `new_fingerprint`. This means that the dataset will be mapped with the function for each and every call, w...
129
Inconsistent caching behaviour when using `Dataset.map()` with a `new_fingerprint` and `num_proc>1` ## Describe the bug Caching does not work when using `Dataset.map()` with: 1. a function that cannot be deterministically fingerprinted 2. `num_proc>1` 3. using a custom fingerprint set with the argument `new_fin...
[ -0.1125867143, 0.1165608466, -0.0157670174, 0.0611556247, -0.118484512, -0.124060981, 0.3861939013, 0.351592958, 0.3012607396, -0.0997984707, 0.2086912692, 0.3295020461, 0.0391608924, -0.143107295, -0.0197527502, 0.3910205364, 0.3168309033, -0.0682448819, -0.0389394686, -0.0995...
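A sketch of the picklable-callable workaround mentioned above; the class keeps only picklable constructor arguments as state, so pickling (and therefore fingerprinting) is deterministic:
```python
from datasets import Dataset


class CountWords:
    """Callable whose pickled state is just its constructor args (sketch)."""

    def __init__(self, column):
        self.column = column  # only picklable attributes here

    def __call__(self, example):
        example["n_words"] = len(example[self.column].split())
        return example


ds = Dataset.from_dict({"text": ["a b c", "d e", "f", "g h i j"]})
ds = ds.map(CountWords("text"), num_proc=2)
```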
https://github.com/huggingface/datasets/issues/3044
Inconsistent caching behaviour when using `Dataset.map()` with a `new_fingerprint` and `num_proc>1`
My solution was to generate a custom hash, and use the hash as a `new_fingerprint` argument to the `map()` method to enable caching. This works, but is quite hacky. @lhoestq, this approach is very neat; it would make the whole caching mechanism more explicit. I don't have much time to look into this right now, b...
## Describe the bug Caching does not work when using `Dataset.map()` with: 1. a function that cannot be deterministically fingerprinted 2. `num_proc>1` 3. using a custom fingerprint set with the argument `new_fingerprint`. This means that the dataset will be mapped with the function for each and every call, w...
66
Inconsistent caching behaviour when using `Dataset.map()` with a `new_fingerprint` and `num_proc>1` ## Describe the bug Caching does not work when using `Dataset.map()` with: 1. a function that cannot be deterministically fingerprinted 2. `num_proc>1` 3. using a custom fingerprint set with the argument `new_fin...
[ -0.1125867143, 0.1165608466, -0.0157670174, 0.0611556247, -0.118484512, -0.124060981, 0.3861939013, 0.351592958, 0.3012607396, -0.0997984707, 0.2086912692, 0.3295020461, 0.0391608924, -0.143107295, -0.0197527502, 0.3910205364, 0.3168309033, -0.0682448819, -0.0389394686, -0.0995...
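A sketch of that hacky-but-working approach; the identifiers fed to the hasher are assumptions standing in for whatever actually determines the function's behavior:
```python
from datasets import Dataset
from datasets.fingerprint import Hasher

ds = Dataset.from_dict({"text": ["a", "bb", "ccc", "dddd"]})


def unhashable_fn(example):  # stand-in for a function datasets cannot hash
    example["length"] = len(example["text"])
    return example


# Hash only stable, behavior-determining identifiers and pass the result
# explicitly so the cache can still be reused across runs.
fingerprint = Hasher.hash(("unhashable_fn", "v1"))
ds = ds.map(unhashable_fn, new_fingerprint=fingerprint)
```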
https://github.com/huggingface/datasets/issues/3040
[save_to_disk] Using `select()` followed by `save_to_disk` saves complete dataset making it hard to create dummy dataset
Hi, the `save_to_disk` docstring explains that `flatten_indices` has to be called on a dataset before saving it to save only the shard/slice of the dataset.
## Describe the bug When keeping only a dummy-sized slice of a dataset (say the first 100 samples) and then saving it to disk in order to upload it afterwards to the hub for easy demo/use, it is not just the small dataset that gets saved but the whole dataset together with an indices file. The problem with this is that the dataset is still very...
25
[save_to_disk] Using `select()` followed by `save_to_disk` saves complete dataset making it hard to create dummy dataset ## Describe the bug When keeping only a dummy-sized slice of a dataset (say the first 100 samples) and then saving it to disk in order to upload it afterwards to the hub for easy demo/use, it is not just the...
[ -0.1809477359, -0.1576081365, 0.0806470141, 0.2456984222, 0.1063633189, 0.1908272207, 0.2535611093, 0.4150107503, 0.0799368545, 0.4557602108, 0.1509388089, 0.3872535527, -0.1541717201, 0.0044502676, 0.099456884, 0.0693321601, 0.1867137104, 0.1767671108, 0.0718551874, -0.2415307...
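For reference, a minimal sketch of the fix from the comment above (the dataset name is illustrative): `select()` only records an indices mapping over the original table, and `flatten_indices()` materializes just the selected rows before saving.

```python
from datasets import load_dataset

dataset = load_dataset("imdb", split="train")
dummy = dataset.select(range(100))  # cheap: stores an indices mapping, not a copy
dummy = dummy.flatten_indices()     # rewrite the table to contain only those 100 rows
dummy.save_to_disk("imdb-dummy")    # now only the small table is written to disk
```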
https://github.com/huggingface/datasets/issues/3040
[save_to_disk] Using `select()` followed by `save_to_disk` saves complete dataset making it hard to create dummy dataset
That works! Thanks! It might actually be worth doing that automatically in case `save_to_disk` is called on a dataset that has an indices mapping :-)
## Describe the bug When keeping only a dummy-sized slice of a dataset (say the first 100 samples) and then saving it to disk in order to upload it afterwards to the hub for easy demo/use, it is not just the small dataset that gets saved but the whole dataset together with an indices file. The problem with this is that the dataset is still very...
25
[save_to_disk] Using `select()` followed by `save_to_disk` saves complete dataset making it hard to create dummy dataset ## Describe the bug When keeping only a dummy-sized slice of a dataset (say the first 100 samples) and then saving it to disk in order to upload it afterwards to the hub for easy demo/use, it is not just the...
[ -0.197787419, -0.1849525869, 0.0728686303, 0.2110226601, 0.1640472859, 0.1720052212, 0.2364407331, 0.4040087163, 0.1711795777, 0.4311820567, 0.1333653331, 0.4044635594, -0.1757007092, -0.0216431208, 0.1236590296, 0.0688508749, 0.1950651854, 0.1609197855, 0.0407716408, -0.209208...
https://github.com/huggingface/datasets/issues/3040
[save_to_disk] Using `select()` followed by `save_to_disk` saves complete dataset making it hard to create dummy dataset
I agree with @patrickvonplaten: this issue is reported recurrently, so wouldn't it be better if we implemented `.flatten_indices()` automatically?
## Describe the bug When keeping only a dummy-sized slice of a dataset (say the first 100 samples) and then saving it to disk in order to upload it afterwards to the hub for easy demo/use, it is not just the small dataset that gets saved but the whole dataset together with an indices file. The problem with this is that the dataset is still very...
17
[save_to_disk] Using `select()` followed by `save_to_disk` saves complete dataset making it hard to create dummy dataset ## Describe the bug When keeping only a dummy-sized slice of a dataset (say the first 100 samples) and then saving it to disk in order to upload it afterwards to the hub for easy demo/use, it is not just the...
[ -0.1824212819, -0.1138091013, 0.0977014303, 0.3485226929, 0.1396681368, 0.2466376871, 0.2202671617, 0.4321454465, 0.0931755453, 0.4231398702, 0.2067134529, 0.4109373391, -0.1631097049, 0.0241229497, 0.025754679, 0.0898382515, 0.1856224388, 0.1863040477, 0.1133499369, -0.1867262...
https://github.com/huggingface/datasets/issues/3040
[save_to_disk] Using `select()` followed by `save_to_disk` saves complete dataset making it hard to create dummy dataset
That would be great indeed - I don't really see a use case where one would not want to call `.flatten_indices()` before calling `save_to_disk`
## Describe the bug When keeping only a dummy-sized slice of a dataset (say the first 100 samples) and then saving it to disk in order to upload it afterwards to the hub for easy demo/use, it is not just the small dataset that gets saved but the whole dataset together with an indices file. The problem with this is that the dataset is still very...
24
[save_to_disk] Using `select()` followed by `save_to_disk` saves complete dataset making it hard to create dummy dataset ## Describe the bug When keeping only a dummy-sized slice of a dataset (say the first 100 samples) and then saving it to disk in order to upload it afterwards to the hub for easy demo/use, it is not just the...
[ -0.1977925748, -0.1417638808, 0.0666226596, 0.2484792322, 0.1402246207, 0.2132204026, 0.230078578, 0.409286499, 0.1048449352, 0.4188419282, 0.1798494756, 0.4132268429, -0.1830236167, -0.0113674682, 0.0974143818, 0.0815899968, 0.2100721449, 0.1471536309, 0.0857833847, -0.1938104...
https://github.com/huggingface/datasets/issues/3036
Protect master branch to force contributions via Pull Requests
It would be nice to protect the master branch from direct commits, but still have a way to merge our own PRs when no review is required (for example when updating a dataset_infos.json file, or for minor bug fixes - things that happen quite often actually). Do you know if there's a way?
In order to have a clearer Git history in the master branch, I propose to protect it so that all contributions must be done through a Pull Request and no direct commits to master are allowed. - The Pull Request allows to give context, discuss any potential issues and improve the quality of the contribution - The Pull...
52
Protect master branch to force contributions via Pull Requests In order to have a clearer Git history in the master branch, I propose to protect it so that all contributions must be done through a Pull Request and no direct commits to master are allowed. - The Pull Request allows to give context, discuss any potenti...
[ -0.0392103828, 0.0052621169, 0.0302887913, -0.0834547877, -0.273763597, -0.1612029821, -0.0911575779, 0.375121057, -0.2183747739, 0.2085062861, 0.4694797993, -0.1339307725, 0.0275087617, -0.0358083546, 0.026731519, 0.2096523941, 0.1608106494, 0.1743799597, 0.2682784498, 0.20843...
https://github.com/huggingface/datasets/issues/3036
Protect master branch to force contributions via Pull Requests
This is done. Now the master branch is protected: - [x] Require a pull request before merging: all commits must be made to a non-protected branch and submitted via a pull request - Required number of approvals before merging: 1 - [x] Require linear history: prevent merge commits from being pushed - [x] These req...
In order to have a clearer Git history in the master branch, I propose to protect it so that all contributions must be done through a Pull Request and no direct commits to master are allowed. - The Pull Request allows to give context, discuss any potential issues and improve the quality of the contribution - The Pull...
78
Protect master branch to force contributions via Pull Requests In order to have a clearer Git history in the master branch, I propose to protect it so that all contributions must be done through a Pull Request and no direct commits to master are allowed. - The Pull Request allows to give context, discuss any potenti...
[ -0.0546304137, 0.0769248307, -0.0444107726, -0.0714356154, -0.2691425979, -0.1964896917, 0.0576879419, 0.364843905, -0.2206805944, 0.1525520384, 0.3992947638, -0.0609557368, 0.0369373262, -0.0986791924, -0.0556436889, 0.1957637221, 0.0636075437, 0.1572287232, 0.1213736609, 0.15...
https://github.com/huggingface/datasets/issues/3035
`load_dataset` does not work with uploaded arrow file
Hi! This is not a bug, this is simply not implemented. `save_to_disk` is for on-disk serialization and was not made compatible with the Hub. That being said, I agree we should actually make it work with the Hub x)
## Describe the bug I've preprocessed and uploaded a dataset here: https://huggingface.co/datasets/ami-wav2vec2/ami_headset_single_preprocessed . The dataset is in `.arrow` format. The dataset can correctly be loaded when doing: ```bash git lfs install git clone https://huggingface.co/datasets/ami-wav2vec2/a...
40
`load_dataset` does not work with uploaded arrow file ## Describe the bug I've preprocessed and uploaded a dataset here: https://huggingface.co/datasets/ami-wav2vec2/ami_headset_single_preprocessed . The dataset is in `.arrow` format. The dataset can correctly be loaded when doing: ```bash git lfs install ...
[ -0.392804116, -0.0385757759, 0.0167483445, 0.2226260602, 0.1754208654, -0.0094852252, 0.5628120899, 0.085138008, 0.2055555284, -0.0173035618, 0.0880222917, 0.4524753094, -0.0171562377, 0.0238931589, -0.0179101508, 0.0333889201, 0.0853120387, 0.102919139, -0.3803859055, 0.019657...
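Assuming the repository was produced with `save_to_disk` (which the issue suggests), a sketch of the local workaround: clone the repo as shown in the issue body and point `load_from_disk` at the clone, since `load_from_disk` reads that serialization format directly.

```python
# after: git lfs install
#        git clone https://huggingface.co/datasets/ami-wav2vec2/ami_headset_single_preprocessed
from datasets import load_from_disk

# load_from_disk understands the save_to_disk layout that load_dataset rejects
dataset = load_from_disk("ami_headset_single_preprocessed")
```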
https://github.com/huggingface/datasets/issues/3032
Error when loading private dataset with "data_files" arg
We'll do a release tomorrow or on Wednesday to make the fix available :) Thanks for reporting!
## Describe the bug Private datasets with no loading script can't be loaded using the `data_files` parameter. ## Steps to reproduce the bug ```python from datasets import load_dataset data_files = {"train": "**/train/*/*.jsonl", "valid": "**/valid/*/*.jsonl"} d...
18
Error when loading private dataset with "data_files" arg ## Describe the bug Private datasets with no loading script can't be loaded using the `data_files` parameter. ## Steps to reproduce the bug ```python from datasets import load_dataset data_files = {"trai...
[ -0.2903700173, 0.1561492085, 0.030724667, 0.430328846, 0.1911777556, 0.0257792454, 0.4299963117, 0.4009223878, 0.1551964581, 0.0551268384, -0.1564759314, 0.2306967527, -0.1625583321, -0.0824603289, 0.0650249943, 0.066634275, -0.0849666372, 0.108527936, 0.0225837827, 0.057989470...
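For context, a hedged sketch of the call pattern from the bug report — the repo name is a placeholder, and `use_auth_token=True` (reading the token stored by `huggingface-cli login`) is assumed to be how the private repo is authenticated:

```python
from datasets import load_dataset

data_files = {"train": "**/train/*/*.jsonl", "valid": "**/valid/*/*.jsonl"}
dataset = load_dataset(
    "username/private-dataset",  # placeholder for a private repo
    data_files=data_files,
    use_auth_token=True,         # token from `huggingface-cli login`
)
```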
https://github.com/huggingface/datasets/issues/3027
Resolve data_files by split name
Awesome @lhoestq, I like the proposal and it works great on my JSON community dataset. Here is the [log](https://gist.github.com/vblagoje/714babc325bcbdd5de579fd8e1648892).
This issue is about discussing the default behavior when someone loads a dataset that consists of data files. For example: ```python load_dataset("lhoestq/demo1") ``` should return two splits "train" and "test" since the dataset repository is like ``` data/ β”œβ”€β”€ train.csv └── test.csv ``` Currently it returns ...
19
Resolve data_files by split name This issue is about discussing the default behavior when someone loads a dataset that consists of data files. For example: ```python load_dataset("lhoestq/demo1") ``` should return two splits "train" and "test" since the dataset repository is like ``` data/ β”œβ”€β”€ train.csv └── t...
[ 0.0484914854, 0.0916158333, -0.0840783715, 0.1035428643, 0.1421851218, -0.0512200519, 0.3762041032, 0.6377544999, 0.1425939798, 0.0023931437, 0.2522251606, 0.3661669493, -0.1567634493, 0.247655049, -0.3365218639, -0.2140483707, -0.0455784127, 0.0632640347, 0.1257698238, 0.10078...
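A minimal sketch of the behavior the proposal aims for (the repo name is taken from the issue body): a bare `load_dataset` call on a repo whose data files are named after splits should resolve them automatically.

```python
from datasets import load_dataset

# data/train.csv and data/test.csv should map to "train" and "test" splits
dataset = load_dataset("lhoestq/demo1")
print(dataset)  # expected: a DatasetDict with "train" and "test" keys
```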
https://github.com/huggingface/datasets/issues/3027
Resolve data_files by split name
From my discussion with @borisdayma it would be more general if the files matched when their paths contain the split name - not only when the filename contains the split name. For example for a dataset like this: ``` train/ └── data.csv test/ └── data.csv ``` But IMO the default should be ``` data/ β”œβ”€β”€ train.csv ...
This issue is about discussing the default behavior when someone loads a dataset that consists of data files. For example: ```python load_dataset("lhoestq/demo1") ``` should return two splits "train" and "test" since the dataset repository is like ``` data/ β”œβ”€β”€ train.csv └── test.csv ``` Currently it returns ...
78
Resolve data_files by split name This issue is about discussing the default behavior when someone loads a dataset that consists of data files. For example: ```python load_dataset("lhoestq/demo1") ``` should return two splits "train" and "test" since the dataset repository is like ``` data/ β”œβ”€β”€ train.csv └── t...
[ 0.0270637404, -0.089249678, -0.1097629964, 0.135138616, 0.0538497344, -0.1521060318, 0.3358279169, 0.5063108802, 0.1621054113, -0.0166821554, 0.2228462696, 0.1351218373, -0.113688089, 0.2894289196, -0.1998567432, -0.2272009104, -0.0563692376, 0.0884641558, 0.0838888437, 0.04577...
https://github.com/huggingface/datasets/issues/3027
Resolve data_files by split name
I just created a PR for this at https://github.com/huggingface/datasets/pull/3221, let me know what you think :)
This issue is about discussing the default behavior when someone loads a dataset that consists of data files. For example: ```python load_dataset("lhoestq/demo1") ``` should return two splits "train" and "test" since the dataset repository is like ``` data/ β”œβ”€β”€ train.csv └── test.csv ``` Currently it returns ...
16
Resolve data_files by split name This issue is about discussing the default behavior when someone loads a dataset that consists of data files. For example: ```python load_dataset("lhoestq/demo1") ``` should return two splits "train" and "test" since the dataset repository is like ``` data/ β”œβ”€β”€ train.csv └── t...
[ 0.0649660304, -0.1720188856, -0.0812089145, 0.1602075845, 0.2143258005, -0.0422248095, 0.3630499542, 0.4919017851, 0.1566627175, 0.0642593503, 0.1475191414, 0.1404820979, -0.1865478456, 0.3594828844, -0.091802679, -0.3215191662, -0.0239059534, 0.062793985, 0.1788839996, 0.14500...
https://github.com/huggingface/datasets/issues/3018
Support multiple zipped CSV data files
@lhoestq I would like to draw your attention to the API proposed by @lewtun, which uses `data_dir` to pass the ZIP URL. I'm not totally convinced by this... What do you think? Maybe we could discuss other approaches... One brainstorming idea: what about using URL chaining with the hop operator in `data_files`?
As requested by @lewtun, support loading multiple zipped CSV data files. ```python from datasets import load_dataset url = "https://domain.org/filename.zip" data_files = {"train": "train_filename.csv", "test": "test_filename.csv"} dataset = load_dataset("csv", data_dir=url, data_files=data_files) ```
51
Support multiple zipped CSV data files As requested by @lewtun, support loading multiple zipped CSV data files. ```python from datasets import load_dataset url = "https://domain.org/filename.zip" data_files = {"train": "train_filename.csv", "test": "test_filename.csv"} dataset = load_dataset("csv", data_dir=...
[ -0.04363662, 0.0263967384, -0.2615993917, -0.0185473356, 0.0255619343, -0.0422075503, 0.3967736363, 0.2515758872, 0.300516367, 0.1386565119, 0.0246002208, 0.3511759639, -0.0738245323, 0.4689002633, -0.0553035401, -0.031526234, 0.1049562395, 0.0371839143, -0.254355669, 0.1798335...
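A hedged illustration of the URL-chaining brainstorm from the comment above — the `zip://...::<url>` hop syntax comes from fsspec, and whether `datasets` accepts it verbatim in `data_files` is an assumption here, not a confirmed API:

```python
from datasets import load_dataset

url = "https://domain.org/filename.zip"
data_files = {
    "train": f"zip://train_filename.csv::{url}",  # fsspec chained URL (assumed syntax)
    "test": f"zip://test_filename.csv::{url}",
}
dataset = load_dataset("csv", data_files=data_files)
```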