| html_url | title | comments | body | comment_length | text | embeddings |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Hi
I'm having a different problem with loading local csv.
```Python
from datasets import load_dataset
dataset = load_dataset('csv', data_files='sample.csv')
```
gives `ValueError: Specified named and prefix; you can only specify one.` error
versions:
- datasets: 1.1.3
- python: 3.9.6
- py... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
Displayed error:
`
...
ArrowInva... | 42 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
... | [
-0.05010176822543144,
0.07889048010110855,
-0.02207389660179615,
0.3402084410190582,
0.22076581418514252,
0.18077467381954193,
0.458517849445343,
0.2670557498931885,
0.29908278584480286,
0.1005546972155571,
-0.15667478740215302,
0.2878020703792572,
-0.06055660918354988,
-0.0458840839564800... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Oh.. I figured it out. According to issue #[42387](https://github.com/pandas-dev/pandas/issues/42387) from pandas, this new version does not accept None for both parameters (which was being done by the repo I'm testing). Downgrading to Pandas==1.0.4 and Python==3.8 worked | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
Displayed error:
`
...
ArrowInva... | 35 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
... | [
0.024177396669983864,
0.12562017142772675,
0.03067488968372345,
0.3196994960308075,
0.2818748950958252,
0.13472771644592285,
0.47052156925201416,
0.3493143916130066,
0.27203112840652466,
-0.010259208269417286,
-0.10921032726764679,
0.293904572725296,
-0.04485112428665161,
-0.02656795829534... |
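The pandas issue referenced above (#42387) means that affected pandas versions reject receiving both `names` and `prefix` explicitly, even when both are `None`. Besides downgrading, the general workaround idea is to avoid forwarding `None`-valued keyword arguments at all. A minimal, library-agnostic sketch of that idea (the helper and the stand-in function are hypothetical, not part of datasets or pandas):

```python
_UNSET = object()

def drop_none_kwargs(**kwargs):
    """Keep only the keyword arguments whose value is not None."""
    return {key: value for key, value in kwargs.items() if value is not None}

def fake_read_csv(path, names=_UNSET, prefix=_UNSET):
    """Stand-in for pandas.read_csv in the affected versions: it raises
    when *both* `names` and `prefix` arrive explicitly, even as None."""
    if names is not _UNSET and prefix is not _UNSET:
        raise ValueError("Specified named and prefix; you can only specify one.")
    return path

# Forwarding the filtered kwargs avoids triggering the check:
filtered = drop_none_kwargs(names=None, prefix=None)
print(filtered)                                 # {}
print(fake_read_csv("sample.csv", **filtered))  # sample.csv
```

This only illustrates why the downgrade helped: the repo under test was passing `None` for both parameters explicitly, which older pandas tolerated.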
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Hi,
I got an `OSError: Cannot find data file. ` when I tried to use load_dataset with tsv files. I have checked the paths, and they are correct.
versions
- python: 3.7.9
- datasets: 1.1.3
- pyarrow: 2.0.0
- transformers: 4.2.2
~~~
data_files = {"train": "train.tsv", "test": "test.tsv"}
datasets = load_... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
Displayed error:
`
...
ArrowInva... | 229 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
... | [
-0.12203819304704666,
0.13535358011722565,
-0.00495773833245039,
0.4093732237815857,
0.3437597155570984,
0.15759040415287018,
0.49117061495780945,
0.30186185240745544,
0.24092984199523926,
0.07781320065259933,
-0.12081102281808853,
0.3113327920436859,
-0.22293399274349213,
0.01617286726832... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Hi ! It looks like the error stacktrace doesn't match with your code snippet.
What error do you get when running this ?
```
data_files = {"train": "train.tsv", "test": "test.tsv"}
datasets = load_dataset("csv", data_files=data_files, delimiter="\t")
```
can you check that both tsv files are in the same folder ... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
Displayed error:
`
...
ArrowInva... | 57 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
... | [
-0.1632082462310791,
0.12012788653373718,
-0.020389843732118607,
0.3711951971054077,
0.29089295864105225,
0.19067364931106567,
0.33762603998184204,
0.3088066577911377,
0.2558356523513794,
0.10500713437795639,
-0.18691948056221008,
0.20546793937683105,
-0.1593431532382965,
0.061178732663393... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Hi @lhoestq, Below is the entire error message after I moved both tsv files to the same directory. It's the same as what I got before.
```
/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that ... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
Displayed error:
`
...
ArrowInva... | 311 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
... | [
-0.07138241827487946,
0.03632244095206261,
-0.024686481803655624,
0.5199027061462402,
0.32693707942962646,
0.1928398460149765,
0.4462187588214874,
0.3241175711154938,
0.22433064877986908,
0.0428139865398407,
-0.019542407244443893,
0.2665969729423523,
-0.03721685707569122,
0.039764892309904... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Hi !
Can you try running this into a python shell directly ?
```python
import os
from datasets import load_dataset
data_files = {"train": "train.tsv", "test": "test.tsv"}
assert all(os.path.isfile(data_file) for data_file in data_files.values()), "Couldn't find files"
datasets = load_dataset("csv", data_fil... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
Displayed error:
`
...
ArrowInva... | 56 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
... | [
-0.006286135409027338,
0.06183338165283203,
0.00043583798105828464,
0.34390532970428467,
0.25248798727989197,
0.1751338690519333,
0.505518913269043,
0.3278372287750244,
0.38393062353134155,
0.0075204698368906975,
-0.09473129361867905,
0.26486772298812866,
-0.09982766956090927,
-0.009366484... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Hi @lhoestq,
Below is what I got from the terminal after I copied and ran your code. I think the files themselves are good since there is no assertion error.
```
Using custom data configuration default-df627c23ac0e98ec
Downloading and preparing dataset csv/default (download: Unknown size, generated: Unknown size,... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
Displayed error:
`
...
ArrowInva... | 160 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
... | [
-0.07662054896354675,
0.09206584095954895,
-0.0017136819660663605,
0.42625975608825684,
0.26033511757850647,
0.17576567828655243,
0.38547423481941223,
0.28411251306533813,
0.2640710473060608,
0.03102598711848259,
-0.11899057775735855,
0.2788831293582916,
-0.05575266852974892,
0.04360662400... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Hi, could this be a permission error ? I think it fails to close the arrow file that contains the data from your CSVs in the cache.
By default datasets are cached in `~/.cache/huggingface/datasets`, could you check that you have the right permissions ?
You can also try to change the cache directory by passing `cach... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
Displayed error:
`
...
ArrowInva... | 58 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
... | [
-0.08710847795009613,
0.10207011550664902,
0.005998773965984583,
0.4314919710159302,
0.2185792475938797,
0.21847839653491974,
0.4292903244495392,
0.23896896839141846,
0.3231711685657501,
0.024806462228298187,
-0.1687801480293274,
0.2384757697582245,
-0.11012855172157288,
-0.093341663479804... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Thank you!! @lhoestq
For some reason, I don't have the default path for datasets to cache, maybe because I work from a remote system. The issue was solved after I passed the `cache_dir` argument to the function. Thank you very much!! | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
Displayed error:
`
...
ArrowInva... | 40 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
... | [
-0.08596627414226532,
0.09212024509906769,
0.005469316150993109,
0.39305952191352844,
0.27935880422592163,
0.1994733363389969,
0.4239503741264343,
0.2563420236110687,
0.3121509552001953,
0.009807831607758999,
-0.10004027187824249,
0.2833048403263092,
-0.09020068496465683,
-0.06926276534795... |
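To summarize the resolution of this thread: on systems where the default `~/.cache/huggingface/datasets` location is missing or unwritable, passing an explicit `cache_dir` to `load_dataset` fixes the `OSError`. A small stdlib-only sketch of picking a writable cache directory first (the `pick_cache_dir` helper is illustrative, not part of the library; the `cache_dir` parameter itself is the one discussed above):

```python
import os
import tempfile

def pick_cache_dir(preferred=None):
    """Return a writable directory for the datasets cache, falling back
    to a temporary directory when the default location cannot be used
    (common on remote/cluster systems with restricted home directories)."""
    candidate = preferred or os.path.join(
        os.path.expanduser("~"), ".cache", "huggingface", "datasets"
    )
    try:
        os.makedirs(candidate, exist_ok=True)
        with tempfile.TemporaryFile(dir=candidate):
            pass  # probe write access
        return candidate
    except OSError:
        return tempfile.mkdtemp(prefix="hf_datasets_cache_")

cache_dir = pick_cache_dir()
# Then pass it explicitly, as in the thread:
# datasets = load_dataset("csv", data_files=data_files, delimiter="\t",
#                         cache_dir=cache_dir)
```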
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | Thanks for reporting.
In theory since the dataset script is just made to yield examples to write them into an arrow file, it's not supposed to create memory issues.
Could you please try to run this exact same loop in a separate script to see if it's not an issue with `PIL` ?
You can just copy paste what's inside `... | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 96 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... | [
-0.24722431600093842,
-0.048453379422426224,
-0.016176236793398857,
0.26154762506484985,
0.1814279705286026,
0.20232397317886353,
0.14100590348243713,
0.2836431562900543,
-0.09369143843650818,
0.20162492990493774,
0.4972399175167084,
0.1490253508090973,
-0.234624445438385,
0.00673378957435... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | Here's an equivalent loading code:
```python
images_path = "PHOENIX-2014-T-release-v3/PHOENIX-2014-T/features/fullFrame-210x260px/train"
for dir_path in tqdm(os.listdir(images_path)):
frames_path = os.path.join(images_path, dir_path)
np_frames = []
for frame_name in os.listdir(frames_path):
... | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 75 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... | [
-0.24722431600093842,
-0.048453379422426224,
-0.016176236793398857,
0.26154762506484985,
0.1814279705286026,
0.20232397317886353,
0.14100590348243713,
0.2836431562900543,
-0.09369143843650818,
0.20162492990493774,
0.4972399175167084,
0.1490253508090973,
-0.234624445438385,
0.00673378957435... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | I've had similar issues with Arrow once. I'll investigate...
For now maybe we can simply use the images paths in the dataset you want to add. I don't expect to fix this memory issue until 1-2 weeks unfortunately. Then we can just update the dataset with the images. What do you think ? | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 53 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... | [
-0.24722431600093842,
-0.048453379422426224,
-0.016176236793398857,
0.26154762506484985,
0.1814279705286026,
0.20232397317886353,
0.14100590348243713,
0.2836431562900543,
-0.09369143843650818,
0.20162492990493774,
0.4972399175167084,
0.1490253508090973,
-0.234624445438385,
0.00673378957435... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | If it's just 1-2 weeks, I think it's best if we wait. I don't think it is very urgent to add it, and it will be much more useful with the images loaded rather than not (the images are low resolution, and thus papers using this dataset actually fit the entire video into memory anyway)
I'll keep working on other datas... | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 65 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... | [
-0.24722431600093842,
-0.048453379422426224,
-0.016176236793398857,
0.26154762506484985,
0.1814279705286026,
0.20232397317886353,
0.14100590348243713,
0.2836431562900543,
-0.09369143843650818,
0.20162492990493774,
0.4972399175167084,
0.1490253508090973,
-0.234624445438385,
0.00673378957435... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | Ok found the issue. This is because the batch size used by the writer is set to 10 000 elements by default so it would load your full dataset in memory (the writer has a buffer that flushes only after each batch). Moreover to write in Apache Arrow we have to use python objects so what's stored inside the ArrowWriter's ... | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 97 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... | [
-0.24722431600093842,
-0.048453379422426224,
-0.016176236793398857,
0.26154762506484985,
0.1814279705286026,
0.20232397317886353,
0.14100590348243713,
0.2836431562900543,
-0.09369143843650818,
0.20162492990493774,
0.4972399175167084,
0.1490253508090973,
-0.234624445438385,
0.00673378957435... |
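The buffering behaviour described above can be sketched with a toy writer: examples accumulate in an in-memory buffer and are only flushed every `writer_batch_size` examples, so peak memory scales with the batch size times the size of one example. This is an illustration of the mechanism, not the actual `ArrowWriter` code:

```python
class BufferedWriter:
    """Toy model of a buffered writer: with the default batch size of
    10 000, a dataset of large examples (e.g. 400-frame videos) sits
    almost entirely in memory before the first flush."""

    def __init__(self, writer_batch_size=10_000):
        self.writer_batch_size = writer_batch_size
        self.buffer = []
        self.flushes = 0
        self.written = 0

    def write(self, example):
        self.buffer.append(example)
        if len(self.buffer) >= self.writer_batch_size:
            self.flush()

    def flush(self):
        # A real writer would serialize self.buffer to an Arrow file here.
        self.written += len(self.buffer)
        self.buffer.clear()
        self.flushes += 1

writer = BufferedWriter(writer_batch_size=10)
for i in range(95):
    writer.write({"id": i})
writer.flush()          # final partial batch
print(writer.flushes)   # 10 (9 full batches of 10, then 5 leftover)
print(writer.written)   # 95
```

With a smaller `writer_batch_size`, the buffer holds fewer examples at a time, which is exactly why lowering it reduces peak memory.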
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | Thanks, that's awesome you managed to find the problem.
About the 32 bits - really? Isn't there a way to serialize the numpy array somehow? 32 bits would take 4 times the memory / disk space needed to store these videos.
Please let me know when the batch size is customizable and I'll try again! | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 55 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... | [
-0.24722431600093842,
-0.048453379422426224,
-0.016176236793398857,
0.26154762506484985,
0.1814279705286026,
0.20232397317886353,
0.14100590348243713,
0.2836431562900543,
-0.09369143843650818,
0.20162492990493774,
0.4972399175167084,
0.1490253508090973,
-0.234624445438385,
0.00673378957435... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | The 32-bit integers are only used in the writer's buffer because Arrow doesn't take numpy arrays correctly as input. On disk it's stored as uint8 in Arrow format ;) | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 30 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... | [
-0.24722431600093842,
-0.048453379422426224,
-0.016176236793398857,
0.26154762506484985,
0.1814279705286026,
0.20232397317886353,
0.14100590348243713,
0.2836431562900543,
-0.09369143843650818,
0.20162492990493774,
0.4972399175167084,
0.1490253508090973,
-0.234624445438385,
0.00673378957435... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | > I don't expect to fix this memory issue until 1-2 weeks unfortunately.
Hi @lhoestq
not to rush of course, but I was wondering if you have a new timeline so I know how to plan my work around this :) | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 41 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... | [
-0.24722431600093842,
-0.048453379422426224,
-0.016176236793398857,
0.26154762506484985,
0.1814279705286026,
0.20232397317886353,
0.14100590348243713,
0.2836431562900543,
-0.09369143843650818,
0.20162492990493774,
0.4972399175167084,
0.1490253508090973,
-0.234624445438385,
0.00673378957435... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | Alright it should be good now.
You just have to specify `_writer_batch_size = 10` for example as a class attribute of the dataset builder class. | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 25 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... | [
-0.24722431600093842,
-0.048453379422426224,
-0.016176236793398857,
0.26154762506484985,
0.1814279705286026,
0.20232397317886353,
0.14100590348243713,
0.2836431562900543,
-0.09369143843650818,
0.20162492990493774,
0.4972399175167084,
0.1490253508090973,
-0.234624445438385,
0.00673378957435... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | I added it, but it still consumes as much memory
https://github.com/huggingface/datasets/pull/722/files#diff-2e0d865dd4a60dedd1861d6f8c5ed281ded71508467908e1e0b1dbe7d2d420b1R66
Did I not do it correctly? | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 17 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... | [
-0.24722431600093842,
-0.048453379422426224,
-0.016176236793398857,
0.26154762506484985,
0.1814279705286026,
0.20232397317886353,
0.14100590348243713,
0.2836431562900543,
-0.09369143843650818,
0.20162492990493774,
0.4972399175167084,
0.1490253508090973,
-0.234624445438385,
0.00673378957435... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | Yes you did it right.
Did you rebase to include the changes of #828 ?
EDIT: looks like you merged from master in the PR. Not sure why you still have an issue then, I will investigate | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 37 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... | [
-0.24722431600093842,
-0.048453379422426224,
-0.016176236793398857,
0.26154762506484985,
0.1814279705286026,
0.20232397317886353,
0.14100590348243713,
0.2836431562900543,
-0.09369143843650818,
0.20162492990493774,
0.4972399175167084,
0.1490253508090973,
-0.234624445438385,
0.00673378957435... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | Sorry for the delay, I was busy with the dataset sprint and the incredible amount of contributions to the library ^^'
What you can try to do to find what's wrong is check at which frequency the arrow writer writes all the examples from its in-memory buffer on disk. This happens [here](https://github.com/huggingface/... | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 128 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... | [
-0.24722431600093842,
-0.048453379422426224,
-0.016176236793398857,
0.26154762506484985,
0.1814279705286026,
0.20232397317886353,
0.14100590348243713,
0.2836431562900543,
-0.09369143843650818,
0.20162492990493774,
0.4972399175167084,
0.1490253508090973,
-0.234624445438385,
0.00673378957435... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | I had the same issue. It works for me by setting `DEFAULT_WRITER_BATCH_SIZE = 10` of my dataset builder class. (And not `_writer_batch_size` as previously mentioned). I guess this is because `_writer_batch_size` is overwritten in `__init__` (see [here](https://github.com/huggingface/datasets/blob/0e2563e5d5c2fc193ea27... | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 37 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... | [
-0.24722431600093842,
-0.048453379422426224,
-0.016176236793398857,
0.26154762506484985,
0.1814279705286026,
0.20232397317886353,
0.14100590348243713,
0.2836431562900543,
-0.09369143843650818,
0.20162492990493774,
0.4972399175167084,
0.1490253508090973,
-0.234624445438385,
0.00673378957435... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | Yes the class attribute you can change is `DEFAULT_WRITER_BATCH_SIZE`.
Otherwise in `load_dataset` you can specify `writer_batch_size=` | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 16 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... | [
-0.24722431600093842,
-0.048453379422426224,
-0.016176236793398857,
0.26154762506484985,
0.1814279705286026,
0.20232397317886353,
0.14100590348243713,
0.2836431562900543,
-0.09369143843650818,
0.20162492990493774,
0.4972399175167084,
0.1490253508090973,
-0.234624445438385,
0.00673378957435... |
https://github.com/huggingface/datasets/issues/737 | Trec Dataset Connection Error | Thanks for reporting.
That's because the download url has changed. The old url now redirects to the new one but we don't support redirection for downloads.
I'm opening a PR to update the url | **Datasets Version:**
1.1.2
**Python Version:**
3.6/3.7
**Code:**
```python
from datasets import load_dataset
load_dataset("trec")
```
**Expected behavior:**
Download Trec dataset and load Dataset object
**Current Behavior:**
Get a connection error saying it couldn't reach http://cogcomp.org/Data/... | 34 | Trec Dataset Connection Error
**Datasets Version:**
1.1.2
**Python Version:**
3.6/3.7
**Code:**
```python
from datasets import load_dataset
load_dataset("trec")
```
**Expected behavior:**
Download Trec dataset and load Dataset object
**Current Behavior:**
Get a connection error saying it couldn'... | [
-0.24363940954208374,
0.1074596643447876,
-0.014968793839216232,
0.15051642060279846,
0.3800680637359619,
-0.12880131602287292,
0.27454182505607605,
0.33223429322242737,
-0.20466113090515137,
0.11629489809274673,
-0.19129395484924316,
0.2581113278865814,
0.07228156179189682,
-0.16582368314... |
https://github.com/huggingface/datasets/issues/730 | Possible caching bug | Thanks for reporting. That's a bug indeed.
Apparently only the `data_files` parameter is taken into account right now in `DatasetBuilder._create_builder_config` but it should also be the case for `config_kwargs` (or at least the instantiated `builder_config`) | The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
```
produc... | 35 | Possible caching bug
The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(da... | [
… (truncated embedding vector omitted) ]
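The bug reported in this record is easy to see with the standard library alone: the same on-disk bytes decode to different strings under different encodings, so a cache key that ignores the `encoding` kwarg can return a stale `latin_1` result for a `utf-8` request.

```python
# Same bytes, two decodings: why `encoding` must be part of the cache key.
raw = "🤗🤗🤗".encode("utf-8")  # the bytes stored in test1.txt

utf8_text = raw.decode("utf-8")
latin1_text = raw.decode("latin_1")  # latin_1 maps every byte to one char

assert utf8_text == "🤗🤗🤗"
assert utf8_text != latin1_text
assert len(latin1_text) == len(raw)  # 12 mojibake chars, not 3 emoji
```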
https://github.com/huggingface/datasets/issues/726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | Hi, try to provide more information please.
Example code in a Colab to reproduce the error, details on what you are trying to do and what you expected, and details on your environment (OS, PyPI package versions). | Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/op... | 38 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset
Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, p... | [
… (truncated embedding vector omitted) ]
https://github.com/huggingface/datasets/issues/726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | > Hi try, to provide more information please.
>
> Example code in a colab to reproduce the error, details on what you are trying to do and what you were expected and details on your environment (OS, PyPi packages version).
I have updated the description; sorry for the incomplete issue, that was my mistake. | Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/op... | 53 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset
Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, p... | [
… (truncated embedding vector omitted) ]
https://github.com/huggingface/datasets/issues/726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | Hi, I have manually downloaded the compressed dataset `openwebtext.tar.xz` and used the following command to preprocess the examples:
```
>>> dataset = load_dataset('/home/admin/workspace/datasets/datasets-master/datasets-master/datasets/openwebtext', data_dir='/home/admin/workspace/datasets')
Using custom data confi... | Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/op... | 87 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset
Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, p... | [
… (truncated embedding vector omitted) ]
https://github.com/huggingface/datasets/issues/726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | NonMatchingChecksumError: Checksums didn't match for dataset source files:
I got this issue when I tried to work with my own datasets. Kindly tell me where I can get the checksums of the train and dev files in my GitHub repo. | Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/op... | 39 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset
Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, p... | [
… (truncated embedding vector omitted) ]
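For the question in this record about where the checksums come from: the recorded value for each downloaded file is a digest over its exact bytes, so it can be recomputed locally. The sketch below uses a streaming SHA-256; that the `datasets` library records a sha256 digest (plus file size) per download is my understanding of its bookkeeping and should be checked against the installed version.

```python
import hashlib
import os
import tempfile

# Streaming SHA-256 of a file: the kind of digest compared in a
# "Checksums didn't match" error.
def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Quick self-check on a throwaway file.
fd, tmp_path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"abc")
assert sha256_of(tmp_path) == hashlib.sha256(b"abc").hexdigest()
os.unlink(tmp_path)
```

If the freshly computed digest differs from the recorded one, the hosted file has changed (or the download was corrupted), which is exactly what this error reports.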
https://github.com/huggingface/datasets/issues/726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | Hi, I got a similar issue for the xnli dataset while working on Colab with Python 3.7.
`nlp.load_dataset(path = 'xnli')`
The above command resulted in following issue :
```
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']
```... | Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/op... | 44 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset
Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, p... | [
… (truncated embedding vector omitted) ]
https://github.com/huggingface/datasets/issues/724 | need to redirect /nlp to /datasets and remove outdated info | Should be fixed now:

Not sure I understand what you mean by the second part?
| It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
also for some reason the new information is slightly borked. If you look at the old one it was nicely formatted and had t... | 16 | need to redirect /nlp to /datasets and remove outdated info
It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
also for some reason the new information is slightly borked... | [
… (truncated embedding vector omitted) ]
https://github.com/huggingface/datasets/issues/724 | need to redirect /nlp to /datasets and remove outdated info | Thank you!
> Not sure I understand what you mean by the second part?
Compare the 2:
* https://huggingface.co/datasets/wikihow
* https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
Can you see the difference? 2nd has formatting, 1st doesn't.
| It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
also for some reason the new information is slightly borked. If you look at the old one it was nicely formatted and had t... | 31 | need to redirect /nlp to /datasets and remove outdated info
It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
also for some reason the new information is slightly borked... | [
… (truncated embedding vector omitted) ]
https://github.com/huggingface/datasets/issues/724 | need to redirect /nlp to /datasets and remove outdated info | For context, those are two different pages (not an old vs new one), one is from the dataset viewer (you can browse data inside the datasets) while the other is just a basic reference page displayed some metadata about the dataset.
For the second one, we'll move to markdown parsing soon, so it'll be formatted better. | It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
also for some reason the new information is slightly borked. If you look at the old one it was nicely formatted and had t... | 56 | need to redirect /nlp to /datasets and remove outdated info
It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
also for some reason the new information is slightly borked... | [
… (truncated embedding vector omitted) ]
https://github.com/huggingface/datasets/issues/723 | Adding pseudo-labels to datasets | Nice ! :)
It's indeed the first time we have such contributions so we'll have to figure out the appropriate way to integrate them.
Could you add details on what they could be used for ?
| I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is ... | 36 | Adding pseudo-labels to datasets
I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generatio... | [
… (truncated embedding vector omitted) ]
https://github.com/huggingface/datasets/issues/723 | Adding pseudo-labels to datasets | A new configuration for those datasets should do the job then.
Note that until now datasets like xsum only had one configuration. It means that users didn't have to specify the configuration name when loading the dataset. If we add new configs, users that update the lib will have to update their code to specify the de... | I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is ... | 65 | Adding pseudo-labels to datasets
I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generatio... | [
… (truncated embedding vector omitted) ]
https://github.com/huggingface/datasets/issues/723 | Adding pseudo-labels to datasets | Oh yes why not. I'm more in favor of this actually since pseudo labels are things that users (not dataset authors in general) can compute by themselves and share with the community | I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is ... | 32 | Adding pseudo-labels to datasets
I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generatio... | [
… (truncated embedding vector omitted) ]
https://github.com/huggingface/datasets/issues/723 | Adding pseudo-labels to datasets | 
I assume I should (for example) rename the xsum dir, change the URL, and put the modified dir somewhere in S3? | I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is ... | 22 | Adding pseudo-labels to datasets
I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generatio... | [
… (truncated embedding vector omitted) ]
https://github.com/huggingface/datasets/issues/723 | Adding pseudo-labels to datasets | You can use the `datasets-cli` to upload the folder with your version of xsum with the pseudo labels.
```
datasets-cli upload_dataset path/to/xsum
``` | I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is ... | 23 | Adding pseudo-labels to datasets
I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generatio... | [
… (truncated embedding vector omitted) ]
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | We only support http by default for downloading.
If you really need to use ftp, then feel free to use a library that allows to download through ftp in your dataset script (I see that you've started working on #722 , that's awesome !). The users will get a message to install the extra library when they load the dataset... | I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | 120 | feat(dl_manager): add support for ftp downloads
I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoen... | [
… (truncated embedding vector omitted) ]
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | Also maybe it coud be interesting to have a direct support of ftp inside the `datasets` library. Do you know any good libraries that we might consider adding as a (optional ?) dependency ? | I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | 34 | feat(dl_manager): add support for ftp downloads
I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoen... | [
… (truncated embedding vector omitted) ]
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | Downloading an `ftp` file is as simple as:
```python
import urllib
urllib.urlretrieve('ftp://server/path/to/file', 'file')
```
I believe this should be supported by the library, as its not using any dependency and is trivial amount of code. | I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | 35 | feat(dl_manager): add support for ftp downloads
I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoen... | [
… (truncated embedding vector omitted) ]
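The snippet quoted in this record uses the Python 2 spelling; in Python 3 the same helper lives in `urllib.request`, and it does handle `ftp://` URLs with the standard library alone, which is the point being made. A small sketch (the actual transfer needs network access, so `fetch` is defined but not run here):

```python
import urllib.parse
import urllib.request

# The dataset's official ftp URL from the issue body.
FTP_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"

assert urllib.parse.urlparse(FTP_URL).scheme == "ftp"

def fetch(url: str, dest: str) -> str:
    # urlretrieve supports ftp:// as well as http:// schemes.
    filename, _headers = urllib.request.urlretrieve(url, dest)  # needs network
    return filename
```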
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | I know it's unorthodox, but I added `ftp` download support to `file_utils` in the same PR https://github.com/huggingface/datasets/pull/722
So it's possible to understand the interaction of the download component with the ftp download ability. | I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | 33 | feat(dl_manager): add support for ftp downloads
I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoen... | [
… (truncated embedding vector omitted) ]
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | @hoanganhpham1006 yes.
See pull request https://github.com/huggingface/datasets/pull/722 , it has a loader for this dataset, mostly ready.
There's one issue that delays it being merged - https://github.com/huggingface/datasets/issues/741 - regarding memory consumption. | I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | 30 | feat(dl_manager): add support for ftp downloads
I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoen... | [
… (truncated embedding vector omitted) ]
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | The problem I have now is that this dataset does not seem to allow downloading. Can you share it with me, please? | I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | 23 | feat(dl_manager): add support for ftp downloads
I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoen... | [
… (truncated embedding vector omitted) ]
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | The dataset loader is not yet ready, because of that issue.
If you just want to download the dataset the old-fashioned way, go to: https://www-i6.informatik.rwth-aachen.de/ftp/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz (the ftp link is now broken, and it's available over https) | I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | 37 | feat(dl_manager): add support for ftp downloads
I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoen... | [
… (truncated embedding vector omitted) ]
https://github.com/huggingface/datasets/issues/720 | OSError: Cannot find data file when not using the dummy dataset in RAG | Same issue here. I will be digging further, but it looks like the [script](https://github.com/huggingface/datasets/blob/master/datasets/wiki_dpr/wiki_dpr.py#L132) is attempting to open a file that is not downloaded yet.
```
99dcbca09109e58502e6b9271d4d3f3791b43f61f3161a76b25d2775ab1a4498.lock
```
```
--------... | ## Environment info
transformers version: 3.3.1
Platform: Linux-4.19
Python version: 3.7.7
PyTorch version (GPU?): 1.6.0
Tensorflow version (GPU?): No
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No
## To reproduce
Steps to reproduce the behaviour... | 387 | OSError: Cannot find data file when not using the dummy dataset in RAG
## Environment info
transformers version: 3.3.1
Platform: Linux-4.19
Python version: 3.7.7
PyTorch version (GPU?): 1.6.0
Tensorflow version (GPU?): No
Using GPU in script?: Yes
Using distributed or parallel set... | [
… (truncated embedding vector omitted) ]
https://github.com/huggingface/datasets/issues/720 | OSError: Cannot find data file when not using the dummy dataset in RAG | An update on my end. This seems like a transient issue. Reran the script from scratch overnight with no errors. | ## Environment info
transformers version: 3.3.1
Platform: Linux-4.19
Python version: 3.7.7
PyTorch version (GPU?): 1.6.0
Tensorflow version (GPU?): No
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No
## To reproduce
Steps to reproduce the behaviour... | 20 | OSError: Cannot find data file when not using the dummy dataset in RAG
## Environment info
transformers version: 3.3.1
Platform: Linux-4.19
Python version: 3.7.7
PyTorch version (GPU?): 1.6.0
Tensorflow version (GPU?): No
Using GPU in script?: Yes
Using distributed or parallel set... | [
… (truncated embedding vector omitted) ]
https://github.com/huggingface/datasets/issues/709 | How to use similarity settings other than "BM25" in Elasticsearch index? | Datasets does not use the elasticsearch API to define custom similarity. If you want to use a custom similarity, the best would be to run a curl request directly to your elasticsearch instance (see sample hereafter, directly from ES documentation), then you should be able to use `my_similarity` in your configuration passed... | **QUESTION : How should we use other similarity algorithms supported by Elasticsearch other than "BM25" ?**
**ES Reference**
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html
**HF doc reference:**
https://huggingface.co/docs/datasets/faiss_and_ea.html
**context :**
=... | 88 | How to use similarity settings other than "BM25" in Elasticsearch index?
**QUESTION : How should we use other similarity algorithms supported by Elasticsearch other than "BM25" ?**
**ES Reference**
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html
**HF doc reference:*... | [
… (truncated embedding vector omitted) ]
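The curl-based approach described in this record amounts to creating the index with a settings body that defines the custom similarity and points a field at it. Below is a sketch of such a config, following the DFR example from the Elasticsearch similarity-module docs linked in the issue; `my_similarity` is an arbitrary user-chosen name, and whether your installed `datasets` version accepts this dict (e.g. via an `es_index_config`-style parameter of `add_elasticsearch_index`) is an assumption to verify.

```python
# Index settings defining a custom DFR similarity, plus a mapping that
# applies it to the indexed "text" field. Values follow the DFR example
# in the Elasticsearch similarity-module documentation.
es_index_config = {
    "settings": {
        "index": {
            "similarity": {
                "my_similarity": {
                    "type": "DFR",
                    "basic_model": "g",
                    "after_effect": "l",
                    "normalization": "h2",
                    "normalization.h2.c": "3.0",
                }
            }
        }
    },
    "mappings": {
        "properties": {
            # Per-field similarity: use the custom one instead of BM25.
            "text": {"type": "text", "similarity": "my_similarity"},
        }
    },
}

assert es_index_config["settings"]["index"]["similarity"]["my_similarity"]["type"] == "DFR"
```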
https://github.com/huggingface/datasets/issues/708 | Datasets performance slow? - 6.4x slower than in memory dataset | Facing a similar issue here. My model using SQuAD dataset takes about 1h to process with in memory data and more than 2h with datasets directly. | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower.... | 26 | Datasets performance slow? - 6.4x slower than in memory dataset
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you do... | [
… (truncated embedding vector omitted) ]
https://github.com/huggingface/datasets/issues/708 | Datasets performance slow? - 6.4x slower than in memory dataset | Thanks for the tip @thomwolf ! I did not see that flag in the docs. I'll try with that. | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower.... | 19 | Datasets performance slow? - 6.4x slower than in memory dataset
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you do...
https://github.com/huggingface/datasets/issues/708 | Datasets performance slow? - 6.4x slower than in memory dataset | We should add it indeed and also maybe a specific section with all the tips for maximal speed. What do you think @lhoestq @SBrandeis @yjernite ? | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower.... | 26 | Datasets performance slow? - 6.4x slower than in memory dataset
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you do...
https://github.com/huggingface/datasets/issues/708 | Datasets performance slow? - 6.4x slower than in memory dataset | By default the datasets loaded with `load_dataset` live on disk.
It's possible to load them in memory by using some transforms like `.map(..., keep_in_memory=True)`.
Small correction to @thomwolf 's comment above: currently we don't have the `keep_in_memory` parameter for `load_dataset` AFAIK but it would be nice t... | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower.... | 51 | Datasets performance slow? - 6.4x slower than in memory dataset
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you do...
https://github.com/huggingface/datasets/issues/708 | Datasets performance slow? - 6.4x slower than in memory dataset | Great! Thanks a lot.
I did a test using `map(..., keep_in_memory=True)` and also a test using in-memory only data.
```python
features = dataset.map(tokenize, batched=True, remove_columns=dataset['train'].column_names)
features.set_format(type='torch', columns=['input_ids', 'token_type_ids', 'attention_mask'])
... | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower.... | 170 | Datasets performance slow? - 6.4x slower than in memory dataset
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you do...
https://github.com/huggingface/datasets/issues/708 | Datasets performance slow? - 6.4x slower than in memory dataset | I am having the same issue here. When loading from memory I can get the GPU up to 70% util but when loading after mapping I can only get 40%.
In disk:
```
book_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:20%]')
book_corpus = book_corpus.map(encode, batc... | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower.... | 247 | Datasets performance slow? - 6.4x slower than in memory dataset
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you do...
https://github.com/huggingface/datasets/issues/708 | Datasets performance slow? - 6.4x slower than in memory dataset | Is there a way to increase the number of batches read from memory, or to multiprocess the reads? I think either it is reading with just one core, or it is reading very small chunks from disk, leaving my GPU at 0% between batches. | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower.... | 45 | Datasets performance slow? - 6.4x slower than in memory dataset
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you do...
https://github.com/huggingface/datasets/issues/708 | Datasets performance slow? - 6.4x slower than in memory dataset | My fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks. | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower.... | 21 | Datasets performance slow? - 6.4x slower than in memory dataset
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you do...
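The `dataloader_num_workers` fix mentioned above works because batch preparation runs in parallel with GPU compute. A framework-free sketch of the same idea using a thread pool (the `fake_tokenize` function is invented for illustration, not part of any library):

```python
from concurrent.futures import ThreadPoolExecutor

def fake_tokenize(text):
    # Stand-in for a per-batch preprocessing step.
    return text.lower().split()

batches = [f"Sentence number {i}" for i in range(8)]

# Sequential: one worker prepares every batch, so the GPU waits.
sequential = [fake_tokenize(b) for b in batches]

# Parallel: several workers prepare batches concurrently,
# keeping the GPU fed instead of idling at 0% between batches.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(fake_tokenize, batches))

assert parallel == sequential  # same results, overlapping work
```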
https://github.com/huggingface/datasets/issues/707 | Requirements should specify pyarrow<1 | @punitaojha, certainly. Feel free to work on this. Let me know if you need any help or clarity. | I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having installed PyArrow 1.0.1 but there's not pinni... | 18 | Requirements should specify pyarrow<1
I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having insta... | [
-0.2744114398956299,
-0.20904046297073364,
0.015253044664859772,
0.18003489077091217,
0.04734575003385544,
-0.06197463721036911,
0.03909021243453026,
0.18513715267181396,
-0.044680118560791016,
0.056902628391981125,
-0.05083269998431206,
0.2923361361026764,
-0.005038939882069826,
0.0731777... |
https://github.com/huggingface/datasets/issues/707 | Requirements should specify pyarrow<1 | Hello @mathcass
1. I did fork the repository and clone the same on my local system.
2. Then learnt about how we can publish our package on pypi.org. Also, found some instructions on same in setup.py documentation.
3. Then I visited the Perplexity document link that you shared above. I created a colab link from there keep ...
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having installed PyArrow 1.0.1 but there's not pinni... | 103 | Requirements should specify pyarrow<1
I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having insta... | [
-0.20867237448692322,
-0.24923856556415558,
0.010470684617757797,
0.1431630551815033,
-0.13511890172958374,
-0.13676480948925018,
0.060109324753284454,
0.17778412997722626,
-0.07222621142864227,
0.18768416345119476,
-0.13494989275932312,
0.3921404182910919,
-0.010591456666588783,
0.2025153... |
https://github.com/huggingface/datasets/issues/707 | Requirements should specify pyarrow<1 | Thanks for looking at this @punitaojha and thanks for sharing the notebook.
I just tried to reproduce this on my own (based on the environment where I had this issue) and I can't reproduce it somehow. If I run into this again, I'll include some steps to reproduce it. I'll close this as invalid.
Thanks again. | I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having installed PyArrow 1.0.1 but there's not pinni... | 56 | Requirements should specify pyarrow<1
I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having insta... | [
-0.2600722312927246,
-0.22944632172584534,
0.04454610124230385,
0.21667850017547607,
0.07497327774763107,
-0.06393115222454071,
-0.002250131219625473,
0.1691177785396576,
-0.012705996632575989,
0.06574314087629318,
0.02089698053896427,
0.29015499353408813,
-0.011995860375463963,
0.01658499... |
https://github.com/huggingface/datasets/issues/707 | Requirements should specify pyarrow<1 | I am sorry for hijacking this closed issue, but I believe I was able to reproduce this very issue. Strangely enough, it also turned out that running `pip install "pyarrow<1" --upgrade` did indeed fix the issue (PyArrow was installed in version `0.14.1` in my case).
Please see the Colab below:
https://colab.resear... | I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having installed PyArrow 1.0.1 but there's not pinni... | 52 | Requirements should specify pyarrow<1
I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having insta... | [
-0.27619466185569763,
-0.14012391865253448,
0.034747492522001266,
0.16038553416728973,
0.04856106638908386,
-0.04270876199007034,
0.026622308418154716,
0.19843699038028717,
-0.05828109756112099,
0.06005063280463219,
0.0015631151618435979,
0.28385061025619507,
-0.004442551173269749,
0.03769... |
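The `pyarrow<1` pin discussed in this thread is ultimately just a version comparison. A small self-contained sketch of how such an upper-bound constraint can be checked (the version strings are examples, not the versions any particular user has installed):

```python
def parse_version(v):
    # Minimal parser: "0.14.1" -> (0, 14, 1); ignores pre-release tags.
    return tuple(int(part) for part in v.split(".")[:3])

def satisfies_upper_bound(installed, bound):
    # True when the installed version is strictly below the bound.
    return parse_version(installed) < parse_version(bound)

assert satisfies_upper_bound("0.14.1", "1.0.0")      # meets pyarrow<1
assert not satisfies_upper_bound("1.0.1", "1.0.0")   # violates pyarrow<1
```

Real packaging tools use a richer scheme (pre-releases, local versions), but the tuple comparison captures why `1.0.1` broke an environment that expected `<1`.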
https://github.com/huggingface/datasets/issues/705 | TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit' | Hi !
Thanks for reporting :)
Indeed this is an issue on the `datasets` side.
I'm creating a PR | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1 (installed from master)
- `datasets` version: 1.0.2 (installed as a dependency from transformers)
... | 19 | TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1 (installed from ma... | [
-0.29192742705345154,
-0.6131457686424255,
-0.053877681493759155,
0.07242880761623383,
0.535854160785675,
0.05128543823957443,
0.579014778137207,
0.3014279007911682,
0.2885266840457916,
0.17930838465690613,
-0.03490874171257019,
0.1695186197757721,
-0.04231121018528938,
-0.1191877201199531... |
https://github.com/huggingface/datasets/issues/699 | XNLI dataset is not loading | Also, I tried the code below to solve the checksum error:
`datasets-cli test ./datasets/xnli --save_infos --all_configs`
and it shows
```
2020-10-02 07:06:16.588760: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
Traceback (most recent call last):
... | `dataset = datasets.load_dataset(path='xnli')`
showing below error
```
/opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
36 if len(bad_urls) > 0:
37 error_msg = "Checksums didn't match" + for_verifi... | 170 | XNLI dataset is not loading
`dataset = datasets.load_dataset(path='xnli')`
showing below error
```
/opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
36 if len(bad_urls) > 0:
37 error_msg = "Check... | [
-0.2810518443584442,
0.14010030031204224,
-0.09117189794778824,
0.12604504823684692,
0.22666770219802856,
-0.24099750816822052,
0.3072919249534607,
0.42818430066108704,
0.08965208381414413,
-0.06145206093788147,
-0.019094549119472504,
0.47371333837509155,
0.1864164024591446,
0.117680601775... |
https://github.com/huggingface/datasets/issues/699 | XNLI dataset is not loading | Hi !
Yes the download url changed.
It's updated on the master branch. I'm doing a release today to fix that :) | `dataset = datasets.load_dataset(path='xnli')`
showing below error
```
/opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
36 if len(bad_urls) > 0:
37 error_msg = "Checksums didn't match" + for_verifi... | 22 | XNLI dataset is not loading
`dataset = datasets.load_dataset(path='xnli')`
showing below error
```
/opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
36 if len(bad_urls) > 0:
37 error_msg = "Check... | [
-0.15453669428825378,
0.3282201290130615,
-0.06746216863393784,
0.10164979100227356,
0.118445985019207,
-0.08490344136953354,
0.1139143705368042,
0.35178565979003906,
-0.08381658792495728,
-0.13450472056865692,
-0.07374469190835953,
0.3510932922363281,
0.2048695832490921,
0.050639804452657... |
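The `NonMatchingChecksumError` in these XNLI threads is raised when the hash recorded at dataset-creation time no longer matches the downloaded file — exactly what happens when the host silently updates a file. A self-contained sketch of that verification (the digest here is computed locally for illustration, not taken from the real xnli infos):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_checksum(data: bytes, expected: str, url: str) -> None:
    # Mirrors the spirit of verify_checksums in the traceback above.
    got = sha256_of(data)
    if got != expected:
        raise ValueError(f"Checksums didn't match for {url}: {got} != {expected}")

original = b"xnli archive contents"
expected = sha256_of(original)  # recorded when the dataset script was built
verify_checksum(original, expected, "https://example.com/xnli.zip")  # passes

# If the host replaces the file, the recorded hash no longer matches.
try:
    verify_checksum(b"updated archive contents", expected, "https://example.com/xnli.zip")
except ValueError:
    print("checksum mismatch detected")
```

This is why the maintainers' fix is to regenerate the recorded checksum after the upstream file changes.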
https://github.com/huggingface/datasets/issues/690 | XNLI dataset: NonMatchingChecksumError | Thanks for reporting.
The data file must have been updated by the host.
I'll update the checksum with the new one. | Hi,
I tried to download "xnli" dataset in colab using
`xnli = load_dataset(path='xnli')`
but got 'NonMatchingChecksumError' error
`NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-27-a87bedc82eeb> in <module>()
----> 1 xnli = load_dataset(path='xnli')
3 frames
/usr... | 21 | XNLI dataset: NonMatchingChecksumError
Hi,
I tried to download "xnli" dataset in colab using
`xnli = load_dataset(path='xnli')`
but got 'NonMatchingChecksumError' error
`NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-27-a87bedc82eeb> in <module>()
----> 1 xnli = lo... | [
-0.2586138844490051,
0.2562105655670166,
0.03577614948153496,
0.17244935035705566,
-0.006708642467856407,
-0.036675021052360535,
0.08086803555488586,
0.4067073166370392,
0.1755131483078003,
0.23367825150489807,
-0.21354560554027557,
0.30520403385162354,
-0.01876629889011383,
-0.07204684615... |
https://github.com/huggingface/datasets/issues/690 | XNLI dataset: NonMatchingChecksumError | I'll do a release in the next few days to make the fix available for everyone.
In the meantime you can load `xnli` with
```
xnli = load_dataset('xnli', script_version="master")
```
This will use the latest version of the xnli script (available on master branch), instead of the old one. | Hi,
I tried to download "xnli" dataset in colab using
`xnli = load_dataset(path='xnli')`
but got 'NonMatchingChecksumError' error
`NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-27-a87bedc82eeb> in <module>()
----> 1 xnli = load_dataset(path='xnli')
3 frames
/usr... | 49 | XNLI dataset: NonMatchingChecksumError
Hi,
I tried to download "xnli" dataset in colab using
`xnli = load_dataset(path='xnli')`
but got 'NonMatchingChecksumError' error
`NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-27-a87bedc82eeb> in <module>()
----> 1 xnli = lo... | [
-0.3440379500389099,
0.2931837737560272,
-0.00019781394803430885,
0.14771141111850739,
0.042899224907159805,
-0.016493184491991997,
0.06692828983068466,
0.43357783555984497,
0.19698016345500946,
0.2615416944026947,
-0.19293831288814545,
0.3978193700313568,
-0.03805821016430855,
-0.00336134... |
https://github.com/huggingface/datasets/issues/687 | `ArrowInvalid` occurs while running `Dataset.map()` function | Hi !
This is because `encode` expects one single text as input (str), or one tokenized text (List[str]).
I believe that you actually wanted to use `encode_batch` which expects a batch of texts.
However this method is only available for our "fast" tokenizers (ex: BertTokenizerFast).
BertJapanese is not one of them... | It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error.
Code:
```python
# train_ds = Dataset(features: {
# 'title': Value(dtype='string', id=None),
# 'score': Value(dtype='float64', id=Non... | 128 | `ArrowInvalid` occurs while running `Dataset.map()` function
It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error.
Code:
```python
# train_ds = Dataset(features: {
# 'title': Value(dtype='st... | [
-0.3730499744415283,
0.020999398082494736,
-0.10263191163539886,
-0.016630040481686592,
0.11070270836353302,
0.16786187887191772,
0.12684781849384308,
0.38617536425590515,
-0.19624216854572296,
0.11606311053037643,
0.2094029188156128,
0.6037948727607727,
-0.1167866513133049,
0.040495380759... |
https://github.com/huggingface/datasets/issues/687 | `ArrowInvalid` occurs while running `Dataset.map()` function | Thank you very much for the kind and precise suggestion!
I'm looking forward to seeing BertJapaneseTokenizer built into the "fast" tokenizers.
I tried `map` with multiprocessing as follows, and it worked!
```python
# There was a Pickle problem if I use `lambda` for multiprocessing
def encode(examples):
re... | It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error.
Code:
```python
# train_ds = Dataset(features: {
# 'title': Value(dtype='string', id=None),
# 'score': Value(dtype='float64', id=Non... | 61 | `ArrowInvalid` occurs while running `Dataset.map()` function
It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error.
Code:
```python
# train_ds = Dataset(features: {
# 'title': Value(dtype='st...
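The failure in this thread happened on the final batch, which is usually smaller than `batch_size`. Any batched `map` function has to cope with that short last batch; a pure-Python sketch of the batching logic (the batch size and dummy data are arbitrary):

```python
def iter_batches(items, batch_size):
    # Yields full batches, then one final (possibly shorter) batch.
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

titles = [f"title {i}" for i in range(10)]  # 10 items, batch_size 4
batch_sizes = [len(batch) for batch in iter_batches(titles, 4)]

# The last batch has only 2 items; per-batch code must not assume a
# fixed length, or it fails exactly where the traceback above did.
assert batch_sizes == [4, 4, 2]
assert sum(batch_sizes) == len(titles)
```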
https://github.com/huggingface/datasets/issues/686 | Dataset browser url is still https://huggingface.co/nlp/viewer/ | Yes! might do it with @srush one of these days. Hopefully it won't break too many links (we can always redirect from old url to new) | Might be worth updating to https://huggingface.co/datasets/viewer/ | 26 | Dataset browser url is still https://huggingface.co/nlp/viewer/
Might be worth updating to https://huggingface.co/datasets/viewer/
Yes! might do it with @srush one of these days. Hopefully it won't break too many links (we can always redirect from old url to new) | [
-0.15292483568191528,
0.20321430265903473,
-0.09984347969293594,
-0.17287243902683258,
0.10616985708475113,
0.04673735052347183,
0.1794256716966629,
0.23907047510147095,
-0.05006711184978485,
-0.018579622730612755,
-0.1565578579902649,
0.3058709502220154,
0.12803860008716583,
0.17247346043... |
https://github.com/huggingface/datasets/issues/678 | The download instructions for c4 datasets are not contained in the error message | Also note that C4 is a dataset that needs an Apache Beam runtime to be generated.
For example Dataflow, Spark, Flink etc.
Usually we generate the dataset on our side once and for all, but we haven't done it for C4 yet.
More info about beam datasets [here](https://huggingface.co/docs/datasets/beam_dataset.html)
L... | The manual download instructions are not clear
```The dataset c4 with config en requires manual data.
Please follow the manual download instructions: <bound method C4.manual_download_instructions of <datasets_modules.datasets.c4.830b0c218bd41fed439812c8dd19dbd4767d2a3faa385eb695cf8666c982b1b3.c4.C4 object at 0x7ff... | 56 | The download instructions for c4 datasets are not contained in the error message
The manual download instructions are not clear
```The dataset c4 with config en requires manual data.
Please follow the manual download instructions: <bound method C4.manual_download_instructions of <datasets_modules.datasets.c4.830... | [
-0.1576072722673416,
-0.25204700231552124,
-0.04434425011277199,
0.24197372794151306,
0.3052387833595276,
-0.017306288704276085,
0.04620053619146347,
0.174014151096344,
-0.09261619299650192,
0.15251001715660095,
0.01120805460959673,
-0.03110540844500065,
-0.128098726272583,
0.5797958970069... |
https://github.com/huggingface/datasets/issues/676 | train_test_split returns empty dataset item | Can you reproduce this example in a Colab so we can investigate? (or give more information on your software/hardware config) | I try to split my dataset by `train_test_split`, but after that the item in `train` and `test` `Dataset` is empty.
The codes:
```
yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp')
print(yelp_data[0])
yelp_data = yelp_data.train_test_split(test_size=0.1)
print(yelp_data)
pri... | 20 | train_test_split returns empty dataset item
I try to split my dataset by `train_test_split`, but after that the item in `train` and `test` `Dataset` is empty.
The codes:
```
yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp')
print(yelp_data[0])
yelp_data = yelp_data.train_test_split... | [
-0.11503519117832184,
-0.05038169398903847,
-0.038193605840206146,
0.3756461441516876,
-0.008087930269539356,
0.23104852437973022,
0.6476925611495972,
0.28867846727371216,
-0.022267788648605347,
0.1684436947107315,
-0.08318233489990234,
0.49569815397262573,
-0.15546242892742157,
0.22362796... |
https://github.com/huggingface/datasets/issues/676 | train_test_split returns empty dataset item | We'll do a release pretty soon to include the fix :)
In the meantime you can install the lib from source if you want to | I try to split my dataset by `train_test_split`, but after that the item in `train` and `test` `Dataset` is empty.
The codes:
```
yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp')
print(yelp_data[0])
yelp_data = yelp_data.train_test_split(test_size=0.1)
print(yelp_data)
pri... | 25 | train_test_split returns empty dataset item
I try to split my dataset by `train_test_split`, but after that the item in `train` and `test` `Dataset` is empty.
The codes:
```
yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp')
print(yelp_data[0])
yelp_data = yelp_data.train_test_split... | [
-0.11764165759086609,
-0.0009292102186009288,
-0.1334935426712036,
0.28376710414886475,
0.031641338020563126,
0.24874557554721832,
0.515235960483551,
0.5261709094047546,
0.060589175671339035,
0.08247903734445572,
0.02234896458685398,
0.38036486506462097,
-0.1783173680305481,
0.118417724967... |
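For reference, the split itself is conceptually simple — shuffle indices and slice. A minimal stand-alone sketch of what `train_test_split(test_size=0.1)` does (the records here are dummies, not the Yelp data from the issue, and this is not the library's actual implementation):

```python
import random

records = [{"text": f"review {i}", "label": i % 2} for i in range(100)]

def train_test_split(rows, test_size=0.1, seed=42):
    indices = list(range(len(rows)))
    random.Random(seed).shuffle(indices)  # seeded for reproducibility
    cut = int(len(rows) * test_size)
    test = [rows[i] for i in indices[:cut]]
    train = [rows[i] for i in indices[cut:]]
    return {"train": train, "test": test}

split = train_test_split(records, test_size=0.1)
assert len(split["train"]) == 90 and len(split["test"]) == 10
assert split["train"][0]  # items are real dicts, not empty
```

A correct split returns non-empty items from both partitions, which is exactly what the bug report above showed failing before the fix.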
https://github.com/huggingface/datasets/issues/674 | load_dataset() won't download in Windows | I have the same issue. Tried to download a few of them and not a single one is downloaded successfully.
This is the output:
```
>>> dataset = load_dataset('blended_skill_talk', split='train')
Using custom data configuration default <-- This step never ends
``` | I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've wa... | 41 | load_dataset() won't download in Windows
I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefin... | [
-0.4348374903202057,
0.2638888657093048,
-0.09682511538267136,
0.3132835626602173,
0.26725634932518005,
0.3075500726699829,
0.35405823588371277,
0.11063136160373688,
0.44255900382995605,
-0.01652861200273037,
0.1196887195110321,
-0.10646151751279831,
-0.06712805479764938,
0.213916003704071... |
https://github.com/huggingface/datasets/issues/674 | load_dataset() won't download in Windows | This was fixed in #644
I'll do a new release soon :)
In the meantime you can run it by installing from source | I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've wa... | 23 | load_dataset() won't download in Windows
I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefin... | [
-0.5447331070899963,
0.29377612471580505,
-0.07057071477174759,
0.2974265217781067,
0.14205150306224823,
0.32118022441864014,
0.27409034967422485,
0.21087978780269623,
0.3762368857860565,
-0.03033912181854248,
0.1460883617401123,
-0.05348740518093109,
-0.053637728095054626,
0.0993808582425... |
https://github.com/huggingface/datasets/issues/674 | load_dataset() won't download in Windows | Closing since version 1.1.0 got released with Windows support :)
Let me know if it works for you now | I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've wa... | 19 | load_dataset() won't download in Windows
I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefin... | [
-0.5312042832374573,
0.22850652039051056,
-0.05759866163134575,
0.2666495144367218,
0.1350667029619217,
0.32391831278800964,
0.25619587302207947,
0.21989303827285767,
0.35728132724761963,
0.0436171293258667,
0.18822138011455536,
-0.06253885477781296,
-0.05458955094218254,
0.176407888531684... |
https://github.com/huggingface/datasets/issues/672 | Questions about XSUM | We should try to regenerate the data using the official script.
But iirc that's what we used originally, so not sure why it didn't match in the first place.
I'll let you know when the dataset is updated | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | 41 | Questions about XSUM
Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype... | [
-0.0942947193980217,
-0.36811739206314087,
-0.14567510783672333,
0.4842414855957031,
0.33991801738739014,
-0.005833042785525322,
0.1115037053823471,
-0.018654780462384224,
0.2138753980398178,
0.26003703474998474,
-0.23007610440254211,
0.22797609865665436,
0.11959730088710785,
0.42903906106... |
https://github.com/huggingface/datasets/issues/672 | Questions about XSUM | Thanks, looking forward to hearing your update on this thread.
This is a blocking issue for us; we would appreciate any progress on this front. We can also help with the fix, if you deem it appropriate. | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | 36 | [ … embedding vector truncated … ]
https://github.com/huggingface/datasets/issues/672 | Questions about XSUM | I just started the generation on my side, I'll let you know how it goes :) | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | 16 | [ … embedding vector truncated … ]
https://github.com/huggingface/datasets/issues/672 | Questions about XSUM | Hmm after a first run I'm still missing 136668/226711 urls.
I'll relaunch it tomorrow to try to get the remaining ones. | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | 21 | [ … embedding vector truncated … ]
https://github.com/huggingface/datasets/issues/672 | Questions about XSUM | So I managed to download them all but when parsing only 226,181/226,711 worked.
Not sure if it's worth digging and debugging parsing at this point :/ | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | 26 | [ … embedding vector truncated … ]
https://github.com/huggingface/datasets/issues/672 | Questions about XSUM | Thanks @lhoestq
It would be great to improve coverage, but IDs are the really crucial part for us. We'd really appreciate an update to the dataset with IDs either way! | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | 30 | [ … embedding vector truncated … ]
https://github.com/huggingface/datasets/issues/672 | Questions about XSUM | I gave up at an even earlier point. The dataset I use has 204,017 train examples. | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | 16 | [ … embedding vector truncated … ]
https://github.com/huggingface/datasets/issues/672 | Questions about XSUM | @lhoestq @sshleifer like @jbragg said earlier, the main issue for us is that the current XSUM dataset (in your package) does not have IDs suggested by the original dataset ([here is the file](https://raw.githubusercontent.com/EdinburghNLP/XSum/master/XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json).) Would apprec... | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | 63 | [ … embedding vector truncated … ]
https://github.com/huggingface/datasets/issues/672 | Questions about XSUM | >So I managed to download them all but when parsing only 226,181/226,711 worked.
@lhoestq any chance we could update the HF-hosted dataset with the IDs in your new version? Happy to help if there's something I can do. | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | 38 | [ … embedding vector truncated … ]
https://github.com/huggingface/datasets/issues/672 | Questions about XSUM | Well I couldn't parse what I downloaded.
Unfortunately I think I won't be able to take a look at it this week.
I can try to send you what I got if you want to give it a shot @jbragg
Otherwise feel free to re-run the xsum download script, maybe you'll be luckier than me | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | 55 | [ … embedding vector truncated … ]
https://github.com/huggingface/datasets/issues/669 | How to skip a example when running dataset.map | Hi @xixiaoyao,
Depending on what you want to do you can:
- use a first step of `filter` to filter out the invalid examples: https://huggingface.co/docs/datasets/processing.html#filtering-rows-select-and-filter
- or directly detect the invalid examples inside the callable used with `map` and return them unchanged or ... | in processing func, I process examples and detect some invalid examples, which I did not want it to be added into train dataset. However I did not find how to skip this recognized invalid example when doing dataset.map. | 95 | [ … embedding vector truncated … ]
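The two options suggested in the comment above — filter first, or detect inside the `map` callable — can be sketched with plain Python lists standing in for a `Dataset` (with the real library this would be `dataset.filter(is_valid).map(process)`; the validity check below is a hypothetical placeholder):

```python
# Plain-Python sketch of the two options for dropping invalid examples.
examples = [
    {"text": "a valid example"},
    {"text": ""},            # invalid: empty
    {"text": "another one"},
]

def is_valid(example):
    # Hypothetical validity check; replace with your own detection logic.
    return len(example["text"]) > 0

def process(example):
    return {"text": example["text"].upper()}

# Option 1: filter first, then map -- invalid rows never reach `process`.
clean = [process(ex) for ex in examples if is_valid(ex)]

# Option 2: detect inside the map callable and return the example unchanged.
mapped = [process(ex) if is_valid(ex) else ex for ex in examples]

print(len(clean), len(mapped))
```

Option 1 shrinks the dataset (the invalid rows are gone); option 2 keeps the row count fixed, which matters if downstream code indexes by position.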
https://github.com/huggingface/datasets/issues/667 | Loss not decrease with Datasets and Transformers | Hi did you manage to fix your issue ?
If so feel free to share your fix and close this thread | HI,
The following script is used to fine-tune a BertForSequenceClassification model on SST2.
The script is adapted from [this colab](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) that presents an example of fine-tuning BertForQuestionAnswering using squad data... | 21 | [ … embedding vector truncated … ]
https://github.com/huggingface/datasets/issues/666 | Does both 'bookcorpus' and 'wikipedia' belong to the same datasets which Google used for pretraining BERT? | No they are other similar copies but they are not provided by the official Bert models authors. | | 17 | [ … embedding vector truncated … ]
https://github.com/huggingface/datasets/issues/665 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects | Hi !
It works on my side with both the LongFormerTokenizer and the LongFormerTokenizerFast.
Which version of transformers/datasets are you using ? | I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [example['question'], example['context']]
    encodings = tokenizer.encode... | 22 | [ … embedding vector truncated … ]
https://github.com/huggingface/datasets/issues/665 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects | Then I guess you need to give us more information on your setup (OS, python, GPU, etc) or a Google Colab reproducing the error for us to be able to debug this error. | I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [example['question'], example['context']]
    encodings = tokenizer.encode... | 33 | [ … embedding vector truncated … ]
https://github.com/huggingface/datasets/issues/665 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects | I have the same issue with `transformers/BertJapaneseTokenizer`.
```python
# train_ds = Dataset(features: {
# 'title': Value(dtype='string', id=None),
# 'score': Value(dtype='float64', id=None)
# }, num_rows: 99999)
t = BertJapaneseTokenizer.from_pretrained('bert-base-japanese-whole-word-masking'... | I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [example['question'], example['context']]
    encodings = tokenizer.encode... | 861 | [ … embedding vector truncated … ]
https://github.com/huggingface/datasets/issues/665 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects | > I have the same issue with `transformers/BertJapaneseTokenizer`.
It looks like this tokenizer is not supported unfortunately.
This is because `t.word_tokenizer.mecab` is a `fugashi.fugashi.GenericTagger` which is not compatible with pickle nor dill.
We need objects passed to `map` to be picklable for our ca... | I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [example['question'], example['context']]
    encodings = tokenizer.encode... | 153 | [ … embedding vector truncated … ]
https://github.com/huggingface/datasets/issues/665 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects | We can also update the `BertJapaneseTokenizer` in `transformers` as you just showed @lhoestq to make it compatible with pickle. It will be faster than asking on fugashi's repo and good for the other users of `transformers` as well.
I'm currently working on `transformers`; I'll include it in the https://github.com/hug... | I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [example['question'], example['context']]
    encodings = tokenizer.encode... | 57 | [ … embedding vector truncated … ]
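The fix discussed above — making a tokenizer with an unpicklable C-extension member (the MeCab tagger) survive `pickle` — follows a standard Python pattern: drop the member in `__getstate__` and rebuild it in `__setstate__`. A generic, self-contained sketch (the classes below are hypothetical stand-ins, not the actual `transformers` code):

```python
import pickle

class FakeTagger:
    """Stand-in for an object that can't be pickled (e.g. a C-extension tagger)."""
    def __reduce__(self):
        raise TypeError("can't pickle FakeTagger objects")

class Tokenizer:
    def __init__(self, dic_path="ipadic"):
        self.dic_path = dic_path          # plain, picklable configuration
        self.tagger = FakeTagger()        # unpicklable member

    def __getstate__(self):
        # Drop the unpicklable member; keep only what is needed to rebuild it.
        state = self.__dict__.copy()
        del state["tagger"]
        return state

    def __setstate__(self, state):
        # Recreate the member on unpickle, as the transformers fix does for MeCab.
        self.__dict__.update(state)
        self.tagger = FakeTagger()

t2 = pickle.loads(pickle.dumps(Tokenizer()))
print(t2.dic_path)
```

This is also why `dataset.map` cares at all: it pickles the callable (and anything it closes over) to fingerprint and cache the transform.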
https://github.com/huggingface/datasets/issues/665 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects | Thank you for the rapid and polite response!
@lhoestq Thanks for the suggestion! I've passed the pickle phase, but another `ArrowInvalid` problem occurred. I created another issue #687.
@thomwolf Wow, really fast work. I'm looking forward to the next release 🤗 | I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [example['question'], example['context']]
    encodings = tokenizer.encode... | 42 | [ … embedding vector truncated … ]
https://github.com/huggingface/datasets/issues/664 | load_dataset from local squad.py, raise error: TypeError: 'NoneType' object is not callable | Hi !
Thanks for reporting.
It looks like no object inherits from `datasets.GeneratorBasedBuilder` (or more generally from `datasets.DatasetBuilder`) in your script.
Could you check that there exists at least one dataset builder class? |
version: 1.0.2
```
train_dataset = datasets.load_dataset('squad')
```
The above code works. However, when I download squad.py from your server and save it locally as `my_squad.py`, running the following raises errors.
```
train_dataset = datasets.load_dataset('./my_squad.py') ... | 34 | [ … embedding vector truncated … ]
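The `TypeError: 'NoneType' object is not callable` in this issue arises because `load_dataset` inspects the script's module for a `DatasetBuilder` subclass and, finding none, ends up trying to call `None`. A simplified stdlib sketch of that lookup (an assumption about the mechanism, not the actual `datasets` source):

```python
import types

class DatasetBuilder:            # stand-in for datasets.DatasetBuilder
    pass

def find_builder(module):
    """Return the first DatasetBuilder subclass defined in `module`, else None."""
    for obj in vars(module).values():
        if isinstance(obj, type) and issubclass(obj, DatasetBuilder) and obj is not DatasetBuilder:
            return obj
    return None

# A script defining no builder class -> find_builder returns None, and any
# attempt to call the result reproduces "'NoneType' object is not callable".
empty_script = types.ModuleType("my_squad")

good_script = types.ModuleType("squad")
class Squad(DatasetBuilder):
    pass
good_script.Squad = Squad

print(find_builder(empty_script), find_builder(good_script))
```

So the practical check matching the comment above is simply: does your local copy of the script still define the builder class (e.g. `class Squad(datasets.GeneratorBasedBuilder)`), or did the class get lost when saving it?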
https://github.com/huggingface/datasets/issues/657 | Squad Metric Description & Feature Mismatch | Thanks for reporting !
There is indeed a mismatch between the features and the kwargs description.
I believe `answer_start` was added to match the squad dataset format for consistency, even though it is not used in the metric computation. I think I'd rather keep it this way, so that you can just give `references=squad[... | The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also not used in the evaluation. | 63 | [ … embedding vector truncated … ]
https://github.com/huggingface/datasets/issues/657 | Squad Metric Description & Feature Mismatch | But then providing the `answer_start` becomes mandatory since the format of the features is checked against the one provided in the squad [file](https://github.com/huggingface/datasets/pull/658/files). | The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also not used in the evaluation. | 23 | [ … embedding vector truncated … ]
https://github.com/huggingface/datasets/issues/651 | Problem with JSON dataset format | Currently the `json` dataset doesn't support this format unfortunately.
However you could load it with
```python
from datasets import Dataset
import pandas as pd
df = pd.read_json("path_to_local.json", orient="index")
dataset = Dataset.from_pandas(df)
``` | I have a local json dataset with the following form.
{
'id01234': {'key1': value1, 'key2': value2, 'key3': value3},
'id01235': {'key1': value1, 'key2': value2, 'key3': value3},
.
.
.
'id09999': {'key1': value1, 'key2': value2, 'key3': value3}
}
Note that instead of a list of records i... | 32 | [ … embedding vector truncated … ]
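The `orient="index"` suggestion above can be sketched end-to-end (assuming pandas is installed; the data below is a hypothetical stand-in for the local file described in the issue). One extra step worth noting: `reset_index` keeps the top-level ids as a regular column before handing the frame to `Dataset.from_pandas(df)`:

```python
import io
import json
import pandas as pd

# Hypothetical stand-in for the dict-of-records JSON file from the issue.
data = {
    "id01234": {"key1": 1, "key2": 2, "key3": 3},
    "id01235": {"key1": 4, "key2": 5, "key3": 6},
}

# orient="index" treats each top-level key as one row, indexed by the id.
df = pd.read_json(io.StringIO(json.dumps(data)), orient="index")

# Keep the ids as an ordinary column instead of losing them in the index.
df = df.reset_index().rename(columns={"index": "id"})
print(df.shape)
```

This works because the file is a mapping of id -> record rather than the list-of-records layout the `json` builder expected at the time.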
https://github.com/huggingface/datasets/issues/650 | dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators` | Hi :)
In your dummy data zip file you can just have `subset000.xz` as directories instead of compressed files.
Let me know if it helps | Hi, I recently want to add a dataset whose source data is like this
```
openwebtext.tar.xz
|__ openwebtext
|__subset000.xz
| |__ ....txt
| |__ ....txt
| ...
|__ subset001.xz
|
....
```
So I wrote `openwebtext.py` like this
```
d... | 25 | [ … embedding vector truncated … ]
https://github.com/huggingface/datasets/issues/650 | dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators` | Thanks for your comment @lhoestq ,
Just for confirmation: changing the dummy data like this won't make the dummy test exercise the functionality that extracts `subsetxxx.xz`, but rather circumvents it. But since we will test on the real data, is that ok? | Hi, I recently want to add a dataset whose source data is like this
```
openwebtext.tar.xz
|__ openwebtext
|__subset000.xz
| |__ ....txt
| |__ ....txt
| ...
|__ subset001.xz
|
....
```
So I wrote `openwebtext.py` like this
```
d... | 43 | [ … embedding vector truncated … ]
https://github.com/huggingface/datasets/issues/650 | dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators` | Yes it's fine for now. We plan to add a job for slow tests.
And at one point we'll also do another pass on the dummy data handling and consider extracting files. | Hi, I recently want to add a dataset whose source data is like this
```
openwebtext.tar.xz
|__ openwebtext
|__subset000.xz
| |__ ....txt
| |__ ....txt
| ...
|__ subset001.xz
|
....
```
So I wrote `openwebtext.py` like this
```
d... | 32 | [ … embedding vector truncated … ]
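The suggestion above — inside the dummy-data zip, make `subset000.xz` a *directory* of plain `.txt` files rather than a real xz archive — can be sketched with the stdlib `zipfile` module (all paths and contents below are hypothetical):

```python
import os
import tempfile
import zipfile

# Build a dummy-data zip where `subset000.xz` is a directory of plain files,
# so the test never has to actually extract an xz archive.
tmpdir = tempfile.mkdtemp()
zip_path = os.path.join(tmpdir, "dummy_data.zip")

with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("openwebtext/subset000.xz/example0.txt", "dummy text 0")
    zf.writestr("openwebtext/subset000.xz/example1.txt", "dummy text 1")

with zipfile.ZipFile(zip_path) as zf:
    names = sorted(zf.namelist())
print(names)
```

As the maintainer notes, this only circumvents the extraction step in the dummy test; the extraction path itself is covered when the slow test runs on the real data.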