Dataset schema: 13 columns. For each column the viewer shows the dtype followed by its min and max values (or the number of distinct classes). Each record below lists these fields in this order.

id: int64, 599M to 3.26B
number: int64, 1 to 7.7k
title: string, lengths 1 to 290
body: string, lengths 0 to 228k
state: string, 2 classes
html_url: string, lengths 46 to 51
created_at: timestamp[s], 2020-04-14 10:18:02 to 2025-07-23 08:04:53
updated_at: timestamp[s], 2020-04-27 16:04:17 to 2025-07-23 18:53:44
closed_at: timestamp[s], 2020-04-14 12:01:40 to 2025-07-23 16:44:42
user: dict
labels: list, lengths 0 to 4
is_pull_request: bool, 2 classes
comments: list, lengths 0 to 0
701,517,550
629
straddling object straddles two block boundaries
I am trying to read json data (it's an array with lots of dictionaries) and getting block boundaries issue as below : I tried calling read_json with readOptions but no luck . ``` table = json.read_json(fn) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "pyarrow/_json.pyx", li...
closed
https://github.com/huggingface/datasets/issues/629
2020-09-15T00:30:46
2020-09-15T00:36:17
2020-09-15T00:32:17
{ "login": "bharaniabhishek123", "id": 17970177, "type": "User" }
[]
false
[]
701,496,053
628
Update docs links in the contribution guideline
Fixed the `add a dataset` and `share a dataset` links in the contribution guideline to refer to the new docs website.
closed
https://github.com/huggingface/datasets/pull/628
2020-09-14T23:27:19
2020-11-02T21:03:23
2020-09-15T06:19:35
{ "login": "M-Salti", "id": 9285264, "type": "User" }
[]
true
[]
701,411,661
627
fix (#619) MLQA features names
Fixed the features names as suggested in (#619) in the `_generate_examples` and `_info` methods in the MLQA loading script and also changed the names in the `dataset_infos.json` file.
closed
https://github.com/huggingface/datasets/pull/627
2020-09-14T20:41:59
2020-11-02T21:04:32
2020-09-16T06:54:11
{ "login": "M-Salti", "id": 9285264, "type": "User" }
[]
true
[]
701,352,605
626
Update GLUE URLs (now hosted on FB)
NYU is switching dataset hosting from Google to FB. This PR closes https://github.com/huggingface/datasets/issues/608 and is necessary for https://github.com/jiant-dev/jiant/issues/161. This PR updates the data URLs based on changes made in https://github.com/nyu-mll/jiant/pull/1112. Note: rebased on huggingface/dat...
closed
https://github.com/huggingface/datasets/pull/626
2020-09-14T19:05:39
2020-09-16T06:53:18
2020-09-16T06:53:18
{ "login": "jeswan", "id": 57466294, "type": "User" }
[]
true
[]
701,057,799
625
dtype of tensors should be preserved
After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-...
closed
https://github.com/huggingface/datasets/issues/625
2020-09-14T12:38:05
2021-08-17T08:30:04
2021-08-17T08:30:04
{ "login": "BramVanroy", "id": 2779410, "type": "User" }
[]
false
[]
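The breakage in #625 is the classic double/float mismatch: NumPy (and Arrow) infer 64-bit floats for plain Python floats, while most PyTorch models expect single precision. A minimal sketch of the behavior and the explicit cast:

```python
import numpy as np

values = [0.1, 0.2, 0.3]      # plain Python floats
arr = np.array(values)        # NumPy infers float64 ("double")
print(arr.dtype)              # float64

# A model expecting single precision needs an explicit cast:
arr32 = arr.astype(np.float32)
print(arr32.dtype)            # float32
```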
700,541,628
624
Add learningq dataset
Hi, Thank you again for this amazing repo. Would it be possible for y'all to add the LearningQ dataset - https://github.com/AngusGLChen/LearningQ ?
open
https://github.com/huggingface/datasets/issues/624
2020-09-13T10:20:27
2020-09-14T09:50:02
null
{ "login": "krrishdholakia", "id": 17561003, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
700,235,308
623
Custom feature types in `load_dataset` from CSV
I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`. I am working with the local files from the emotion dataset. To get the data you can use the followi...
closed
https://github.com/huggingface/datasets/issues/623
2020-09-12T13:21:34
2020-09-30T19:51:43
2020-09-30T08:39:54
{ "login": "lvwerra", "id": 8264887, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
700,225,826
622
load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ...
closed
https://github.com/huggingface/datasets/issues/622
2020-09-12T12:49:28
2020-10-28T11:07:31
2020-10-28T11:07:30
{ "login": "BramVanroy", "id": 2779410, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
700,171,097
621
[docs] Index: The native emoji looks kinda ugly in large size
closed
https://github.com/huggingface/datasets/pull/621
2020-09-12T09:48:40
2020-09-15T06:20:03
2020-09-15T06:20:02
{ "login": "julien-c", "id": 326577, "type": "User" }
[]
true
[]
699,815,135
620
map/filter multiprocessing raises errors and corrupts datasets
After upgrading to 1.0 I started seeing errors in my data loading script after enabling multiprocessing. ```python ... ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed) ner_ds_dict["validation"] = ner_ds_dict["test"] rel_ds_dict = rel_ds.train_test_split(test_si...
closed
https://github.com/huggingface/datasets/issues/620
2020-09-11T22:30:06
2020-10-08T16:31:47
2020-10-08T16:31:46
{ "login": "timothyjlaurent", "id": 2000204, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
699,733,612
619
Mistakes in MLQA features names
I think the following features in MLQA shouldn't be named the way they are: 1. `questions` (should be `question`) 2. `ids` (should be `id`) 3. `start` (should be `answer_start`) The reasons I'm suggesting these features be renamed are: * To make them consistent with other QA datasets like SQuAD, XQuAD, TyDiQA et...
closed
https://github.com/huggingface/datasets/issues/619
2020-09-11T20:46:23
2020-09-16T06:59:19
2020-09-16T06:59:19
{ "login": "M-Salti", "id": 9285264, "type": "User" }
[]
false
[]
699,684,831
618
sync logging utils with transformers
sync the docs/code with the recent changes in transformers' `logging` utils: 1. change the default level to `WARNING` 2. add `DATASETS_VERBOSITY` env var 3. expand docs
closed
https://github.com/huggingface/datasets/pull/618
2020-09-11T19:46:13
2020-09-17T15:40:59
2020-09-17T09:53:47
{ "login": "stas00", "id": 10676103, "type": "User" }
[]
true
[]
699,472,596
617
Compare different Rouge implementations
I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://arxiv.org/pdf/1909.03186.pdf) for example. Ca...
closed
https://github.com/huggingface/datasets/issues/617
2020-09-11T15:49:32
2023-03-22T12:08:44
2020-10-02T09:52:18
{ "login": "ibeltagy", "id": 2287797, "type": "User" }
[]
false
[]
699,462,293
616
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra...
open
https://github.com/huggingface/datasets/issues/616
2020-09-11T15:39:16
2021-07-22T21:12:21
null
{ "login": "BramVanroy", "id": 2779410, "type": "User" }
[]
false
[]
699,410,773
615
Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0
How to reproduce: ```python from datasets import load_dataset wiki = load_dataset("wikipedia", "20200501.en", split="train") wiki[[0]] --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) <ipython-input-13-38...
closed
https://github.com/huggingface/datasets/issues/615
2020-09-11T14:50:38
2024-05-02T06:53:15
2020-09-19T16:46:31
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
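The overflow in #615 is tied to Arrow's 32-bit offsets: gathering many large rows into a single chunk can exceed the 2^31-byte limit. One workaround, shown here as an illustrative sketch rather than the library's actual fix, is to gather indices in smaller batches so each resulting chunk stays small:

```python
def batched(indices, batch_size):
    # Yield the index list in fixed-size batches so each chunk produced by a
    # subsequent `take` stays well under Arrow's 32-bit offset limit.
    for i in range(0, len(indices), batch_size):
        yield indices[i:i + batch_size]

print(list(batched(list(range(5)), 2)))  # [[0, 1], [2, 3], [4]]
```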
699,177,110
614
[doc] Update deploy.sh
closed
https://github.com/huggingface/datasets/pull/614
2020-09-11T11:06:13
2020-09-14T08:49:19
2020-09-14T08:49:17
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
699,117,070
613
Add CoNLL-2003 shared task dataset
Please consider adding CoNLL-2003 shared task dataset as it's beneficial for token classification tasks. The motivation behind this PR is the [PR](https://github.com/huggingface/transformers/pull/7041) in the transformers project. This dataset would be not only useful for the usual run-of-the-mill NER tasks but also fo...
closed
https://github.com/huggingface/datasets/pull/613
2020-09-11T10:02:30
2020-10-05T10:43:05
2020-09-17T10:36:38
{ "login": "vblagoje", "id": 458335, "type": "User" }
[]
true
[]
699,008,644
612
add multi-proc to dataset dict
Add multi-proc to `DatasetDict`
closed
https://github.com/huggingface/datasets/pull/612
2020-09-11T08:18:13
2020-09-11T10:20:13
2020-09-11T10:20:11
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
698,863,988
611
ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648
Hi, I'm trying to load a dataset from Dataframe, but I get the error: ```bash --------------------------------------------------------------------------- ArrowCapacityError Traceback (most recent call last) <ipython-input-7-146b6b495963> in <module> ----> 1 dataset = Dataset.from_pandas(emb)...
closed
https://github.com/huggingface/datasets/issues/611
2020-09-11T05:29:12
2022-06-01T15:11:43
2022-06-01T15:11:43
{ "login": "sangyx", "id": 32364921, "type": "User" }
[]
false
[]
698,349,388
610
Load text file for RoBERTa pre-training.
I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
closed
https://github.com/huggingface/datasets/issues/610
2020-09-10T18:41:38
2022-11-22T13:51:24
2022-11-22T13:51:23
{ "login": "chiyuzhang94", "id": 33407613, "type": "User" }
[]
false
[]
698,323,989
609
Update GLUE URLs (now hosted on FB)
NYU is switching dataset hosting from Google to FB. This PR closes https://github.com/huggingface/datasets/issues/608 and is necessary for https://github.com/jiant-dev/jiant/issues/161. This PR updates the data URLs based on changes made in https://github.com/nyu-mll/jiant/pull/1112.
closed
https://github.com/huggingface/datasets/pull/609
2020-09-10T18:16:32
2020-09-14T19:06:02
2020-09-14T19:06:01
{ "login": "jeswan", "id": 57466294, "type": "User" }
[]
true
[]
698,291,156
608
Don't use the old NYU GLUE dataset URLs
NYU is switching dataset hosting from Google to FB. Initial changes to `datasets` are in https://github.com/jeswan/nlp/commit/b7d4a071d432592ded971e30ef73330529de25ce. What tests do you suggest I run before opening a PR? See: https://github.com/jiant-dev/jiant/issues/161 and https://github.com/nyu-mll/jiant/pull/111...
closed
https://github.com/huggingface/datasets/issues/608
2020-09-10T17:47:02
2020-09-16T06:53:18
2020-09-16T06:53:18
{ "login": "jeswan", "id": 57466294, "type": "User" }
[]
false
[]
698,094,442
607
Add transmit_format wrapper and tests
Same as #605 but using a decorator on-top of dataset transforms that are not in place
closed
https://github.com/huggingface/datasets/pull/607
2020-09-10T15:03:50
2020-09-10T15:21:48
2020-09-10T15:21:47
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
698,050,442
606
Quick fix :)
`nlp` => `datasets`
closed
https://github.com/huggingface/datasets/pull/606
2020-09-10T14:32:06
2020-09-10T16:18:32
2020-09-10T16:18:30
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
697,887,401
605
[Datasets] Transmit format to children
Transmit format to children obtained when processing a dataset. Added a test. When concatenating datasets, if the formats are disparate, the concatenated dataset has a format reset to defaults.
closed
https://github.com/huggingface/datasets/pull/605
2020-09-10T12:30:18
2023-09-24T09:49:47
2020-09-10T16:15:21
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
697,774,581
604
Update bucket prefix
cc @julien-c
closed
https://github.com/huggingface/datasets/pull/604
2020-09-10T11:01:13
2020-09-10T12:45:33
2020-09-10T12:45:32
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
697,758,750
603
Set scripts version to master
By default the scripts version is master, so that if the library is installed with ``` pip install git+http://github.com/huggingface/nlp.git ``` or ``` git clone http://github.com/huggingface/nlp.git pip install -e ./nlp ``` will use the latest scripts, and not the ones from the previous version.
closed
https://github.com/huggingface/datasets/pull/603
2020-09-10T10:47:44
2020-09-10T11:02:05
2020-09-10T11:02:04
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
697,636,605
602
apply offset to indices in multiprocessed map
Fix #597 I fixed the indices by applying an offset. I added the case to our tests to make sure it doesn't happen again. I also added the message proposed by @thomwolf in #597 ```python >>> d.select(range(10)).map(fn, with_indices=True, batched=True, num_proc=2, load_from_cache_file=False) Done writing 10 ...
closed
https://github.com/huggingface/datasets/pull/602
2020-09-10T08:54:30
2020-09-10T11:03:39
2020-09-10T11:03:37
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
697,574,848
601
check if transformers has PreTrainedTokenizerBase
Fix #598
closed
https://github.com/huggingface/datasets/pull/601
2020-09-10T07:54:56
2020-09-10T11:01:37
2020-09-10T11:01:36
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
697,496,913
600
Pickling error when loading dataset
Hi, I modified line 136 in the original [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) as: ``` # line 136: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size) dataset = load_da...
closed
https://github.com/huggingface/datasets/issues/600
2020-09-10T06:28:08
2020-09-25T14:31:54
2020-09-25T14:31:54
{ "login": "kandorm", "id": 17310286, "type": "User" }
[]
false
[]
697,377,786
599
Add MATINF dataset
@lhoestq The command to create metadata failed. I guess it's because the zip is not downloaded from a remote address? How to solve that? Also the CI fails and I don't know how to fix that :(
closed
https://github.com/huggingface/datasets/pull/599
2020-09-10T03:31:09
2023-09-24T09:50:08
2020-09-17T12:17:25
{ "login": "JetRunner", "id": 22514219, "type": "User" }
[]
true
[]
697,156,501
598
The current version of the package on github has an error when loading dataset
Instead of downloading the package from pip, downloading the version from source will result in an error when loading dataset (the pip version is completely fine): To recreate the error: First, installing nlp directly from source: ``` git clone https://github.com/huggingface/nlp.git cd nlp pip install -e . ``...
closed
https://github.com/huggingface/datasets/issues/598
2020-09-09T21:03:23
2020-09-10T06:25:21
2020-09-09T22:57:28
{ "login": "zeyuyun1", "id": 43428393, "type": "User" }
[]
false
[]
697,112,029
597
Indices incorrect with multiprocessing
When `num_proc` > 1, the indices argument passed to the map function is incorrect: ```python d = load_dataset('imdb', split='test[:1%]') def fn(x, inds): print(inds) return x d.select(range(10)).map(fn, with_indices=True, batched=True) # [0, 1] # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] d.select(range(10...
closed
https://github.com/huggingface/datasets/issues/597
2020-09-09T19:50:56
2020-09-10T11:03:37
2020-09-10T11:03:37
{ "login": "joeddav", "id": 9353833, "type": "User" }
[]
false
[]
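The bug in #597, and the fix in #602, come down to each worker numbering its shard from zero instead of applying a dataset-global offset. A hypothetical sketch of offset-corrected sharding (the function name and splitting scheme are illustrative, not the library's internals):

```python
def shard_ranges(total, num_proc):
    # Split `total` rows into contiguous shards while keeping dataset-global
    # indices: each shard starts where the previous one ended.
    base, extra = divmod(total, num_proc)
    shards, start = [], 0
    for p in range(num_proc):
        size = base + (1 if p < extra else 0)
        shards.append(range(start, start + size))
        start += size
    return shards

print(shard_ranges(10, 2))  # [range(0, 5), range(5, 10)]
```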
696,928,139
596
[style/quality] Moving to isort 5.0.0 + style/quality on datasets and metrics
Move the repo to isort 5.0.0. Also start testing style/quality on datasets and metrics. Specific rule: we allow F401 (unused imports) in metrics to be able to add imports to detect early on missing dependencies. Maybe we could add this in datasets but while cleaning this I've seen many example of really unused i...
closed
https://github.com/huggingface/datasets/pull/596
2020-09-09T15:47:21
2020-09-10T10:05:04
2020-09-10T10:05:03
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
696,892,304
595
`Dataset`/`DatasetDict` has no attribute 'save_to_disk'
Hi, As the title indicates, both `Dataset` and `DatasetDict` classes don't seem to have the `save_to_disk` method. While the file [`arrow_dataset.py`](https://github.com/huggingface/nlp/blob/34bf0b03bfe03e7f77b8fec1cd48f5452c4fc7c1/src/nlp/arrow_dataset.py) in the repo here has the method, the file `arrow_dataset.p...
closed
https://github.com/huggingface/datasets/issues/595
2020-09-09T15:01:52
2020-09-09T16:20:19
2020-09-09T16:20:18
{ "login": "sudarshan85", "id": 488428, "type": "User" }
[]
false
[]
696,816,893
594
Fix germeval url
Continuation of #593 but without the dummy data hack
closed
https://github.com/huggingface/datasets/pull/594
2020-09-09T13:29:35
2020-09-09T13:34:35
2020-09-09T13:34:34
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
696,679,182
593
GermEval 2014: new download urls
Hi, unfortunately, the download links for the GermEval 2014 dataset have changed: they're now located on a Google Drive. I changed the URLs and bump version from 1.0.0 to 2.0.0.
closed
https://github.com/huggingface/datasets/pull/593
2020-09-09T10:07:29
2020-09-09T14:16:54
2020-09-09T13:35:15
{ "login": "stefan-it", "id": 20651387, "type": "User" }
[]
true
[]
696,619,986
592
Test in memory and on disk
I added test parameters to do every test both in memory and on disk. I also found a bug in concatenate_dataset thanks to the new tests and fixed it.
closed
https://github.com/huggingface/datasets/pull/592
2020-09-09T08:59:30
2020-09-09T13:50:04
2020-09-09T13:50:03
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
696,530,413
591
fix #589 (backward compat)
Fix #589
closed
https://github.com/huggingface/datasets/pull/591
2020-09-09T07:33:13
2020-09-09T08:57:56
2020-09-09T08:57:55
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
696,501,827
590
The process cannot access the file because it is being used by another process (windows)
Hi, I consistently get the following error when developing in my PC (windows 10): ``` train_dataset = train_dataset.map(convert_to_features, batched=True) File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\envs\py38\lib\site-packages\nlp\arrow_dataset.py", line 970, in map shutil.move(tmp_file....
closed
https://github.com/huggingface/datasets/issues/590
2020-09-09T07:01:36
2020-09-25T14:02:28
2020-09-25T14:02:28
{ "login": "saareliad", "id": 22762845, "type": "User" }
[]
false
[]
696,488,447
589
Cannot use nlp.load_dataset text, AttributeError: module 'nlp.utils' has no attribute 'logging'
``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp/load.py", line 533, in load_dataset builder_cls = import_main_class(module_path, dataset=True) File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp...
closed
https://github.com/huggingface/datasets/issues/589
2020-09-09T06:46:53
2020-09-09T08:57:54
2020-09-09T08:57:54
{ "login": "ksjae", "id": 17930170, "type": "User" }
[]
false
[]
695,249,809
588
Support pathlike obj in load dataset
Fix #582 (I recreated the PR, I got an issue with git)
closed
https://github.com/huggingface/datasets/pull/588
2020-09-07T16:13:21
2020-09-08T07:45:19
2020-09-08T07:45:18
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
695,246,018
587
Support pathlike obj in load dataset
Fix #582
closed
https://github.com/huggingface/datasets/pull/587
2020-09-07T16:09:16
2020-09-07T16:10:35
2020-09-07T16:10:35
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
695,237,999
586
Better message when data files is empty
Fix #581
closed
https://github.com/huggingface/datasets/pull/586
2020-09-07T15:59:57
2020-09-09T09:00:09
2020-09-09T09:00:08
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
695,191,209
585
Fix select for pyarrow < 1.0.0
Fix #583
closed
https://github.com/huggingface/datasets/pull/585
2020-09-07T15:02:52
2020-09-08T07:43:17
2020-09-08T07:43:15
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
695,186,652
584
Use github versioning
Right now dataset scripts and metrics are downloaded from S3 which is in sync with master. It means that it's not currently possible to pin the dataset/metric script version. To fix that I changed the download url from S3 to github, and adding a `version` parameter in `load_dataset` and `load_metric` to pin a certai...
closed
https://github.com/huggingface/datasets/pull/584
2020-09-07T14:58:15
2020-09-09T13:37:35
2020-09-09T13:37:34
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
695,166,265
583
ArrowIndexError on Dataset.select
If the indices table consists in several chunks, then `dataset.select` results in an `ArrowIndexError` error for pyarrow < 1.0.0 Example: ```python from nlp import load_dataset mnli = load_dataset("glue", "mnli", split="train") shuffled = mnli.shuffle(seed=42) mnli.select(list(range(len(mnli)))) ``` rai...
closed
https://github.com/huggingface/datasets/issues/583
2020-09-07T14:36:29
2020-09-08T07:43:15
2020-09-08T07:43:15
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
695,126,456
582
Allow for PathLike objects
Using PathLike objects as input for `load_dataset` does not seem to work. The following will throw an error. ```python files = list(Path(r"D:\corpora\yourcorpus").glob("*.txt")) dataset = load_dataset("text", data_files=files) ``` Traceback: ``` Traceback (most recent call last): File "C:/dev/python/dut...
closed
https://github.com/huggingface/datasets/issues/582
2020-09-07T13:54:51
2020-09-08T07:45:17
2020-09-08T07:45:17
{ "login": "BramVanroy", "id": 2779410, "type": "User" }
[]
false
[]
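The crash in #582 (fixed by #588) happens because `pathlib.Path` objects reach code that expects plain strings. A minimal sketch of the coercion idea, using the standard `os.fspath` protocol (the helper name is illustrative):

```python
import os
from pathlib import Path

def normalize_data_files(data_files):
    # Coerce PathLike objects (e.g. pathlib.Path) to plain strings before
    # they reach string-only code paths.
    if isinstance(data_files, (str, os.PathLike)):
        return os.fspath(data_files)
    return [os.fspath(f) for f in data_files]

print(normalize_data_files([Path("a.txt"), "b.txt"]))  # ['a.txt', 'b.txt']
```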
695,120,517
581
Better error message when input file does not exist
In the following scenario, when `data_files` is an empty list, the stack trace and error message could be improved. This can probably be solved by checking for each file whether it actually exists and/or whether the argument is not false-y. ```python dataset = load_dataset("text", data_files=[]) ``` Example err...
closed
https://github.com/huggingface/datasets/issues/581
2020-09-07T13:47:59
2020-09-09T09:00:07
2020-09-09T09:00:07
{ "login": "BramVanroy", "id": 2779410, "type": "User" }
[]
false
[]
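The improvement requested in #581 (addressed by #586) is an up-front check so an empty `data_files` list fails with a clear message instead of a deep stack trace. A simplified sketch of such a guard (not the library's exact wording):

```python
def check_data_files(data_files):
    # Validate early: an empty list would otherwise fail much later with an
    # opaque error deep inside the loading machinery.
    if data_files is not None and len(data_files) == 0:
        raise ValueError(
            "At least one data file must be specified, but got data_files=[]"
        )

try:
    check_data_files([])
except ValueError as e:
    print(e)  # At least one data file must be specified, but got data_files=[]
```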
694,954,551
580
nlp re-creates already-there caches when using a script, but not within a shell
`nlp` keeps creating new caches for the same file when launching `filter` from a script, and behaves correctly from within the shell. Example: try running ``` import nlp hans_easy_data = nlp.load_dataset('hans', split="validation").filter(lambda x: x['label'] == 0) hans_hard_data = nlp.load_dataset('hans', s...
closed
https://github.com/huggingface/datasets/issues/580
2020-09-07T10:23:50
2020-09-07T15:19:09
2020-09-07T14:26:41
{ "login": "TevenLeScao", "id": 26709476, "type": "User" }
[]
false
[]
694,947,599
579
Doc metrics
Adding documentation on metrics loading/using/sharing
closed
https://github.com/huggingface/datasets/pull/579
2020-09-07T10:15:24
2020-09-10T13:06:11
2020-09-10T13:06:10
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
694,849,940
578
Add CommonGen Dataset
CC Authors: @yuchenlin @MichaelZhouwang
closed
https://github.com/huggingface/datasets/pull/578
2020-09-07T08:17:17
2020-09-07T11:50:29
2020-09-07T11:49:07
{ "login": "JetRunner", "id": 22514219, "type": "User" }
[]
true
[]
694,607,148
577
Some languages in wikipedia dataset are not loading
Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them: ``` import nlp langs = ['ar'. 'af', '...
closed
https://github.com/huggingface/datasets/issues/577
2020-09-07T01:16:29
2023-04-11T22:50:48
2022-10-11T11:16:04
{ "login": "gaguilar", "id": 5833357, "type": "User" }
[]
false
[]
694,348,645
576
Fix the code block in doc
closed
https://github.com/huggingface/datasets/pull/576
2020-09-06T11:40:55
2020-09-07T07:37:32
2020-09-07T07:37:18
{ "login": "JetRunner", "id": 22514219, "type": "User" }
[]
true
[]
693,691,611
575
Couldn't reach certain URLs and for the ones that can be reached, code just blocks after downloading.
Hi, I'm following the [quick tour](https://huggingface.co/nlp/quicktour.html) and tried to load the glue dataset: ``` >>> from nlp import load_dataset >>> dataset = load_dataset('glue', 'mrpc', split='train') ``` However, this ran into a `ConnectionError` saying it could not reach the URL (just pasting the la...
closed
https://github.com/huggingface/datasets/issues/575
2020-09-04T21:46:25
2020-09-22T10:41:36
2020-09-22T10:41:36
{ "login": "sudarshan85", "id": 488428, "type": "User" }
[]
false
[]
693,364,853
574
Add modules cache
As discusses in #554 , we should use a module cache directory outside of the python packages directory since we may not have write permissions. I added a new HF_MODULES_PATH directory that is added to the python path when doing `import nlp`. In this directory, a module `nlp_modules` is created so that datasets can ...
closed
https://github.com/huggingface/datasets/pull/574
2020-09-04T16:30:03
2020-09-22T10:27:08
2020-09-07T09:01:35
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
693,091,790
573
Faster caching for text dataset
As mentioned in #546 and #548 , hashing `data_files` contents to get the cache directory name for a text dataset can take a long time. To make it faster I changed the hashing so that it takes into account the `path` and the `last modified timestamp` of each data file, instead of iterating through the content of each...
closed
https://github.com/huggingface/datasets/pull/573
2020-09-04T11:58:34
2020-09-04T12:53:24
2020-09-04T12:53:23
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
692,598,231
572
Add CLUE Benchmark (11 datasets)
Add 11 tasks of [CLUE](https://github.com/CLUEbenchmark/CLUE).
closed
https://github.com/huggingface/datasets/pull/572
2020-09-04T01:57:40
2020-09-07T09:59:11
2020-09-07T09:59:10
{ "login": "JetRunner", "id": 22514219, "type": "User" }
[]
true
[]
692,109,287
571
Serialization
I added `save` and `load` method to serialize/deserialize a dataset object in a folder. It moves the arrow files there (or write them if the tables were in memory), and saves the pickle state in a json file `state.json`, except the info that are in a separate file `dataset_info.json`. Example: ```python import ...
closed
https://github.com/huggingface/datasets/pull/571
2020-09-03T16:21:38
2020-09-07T07:46:08
2020-09-07T07:46:07
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
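The on-disk layout described in #571, arrow files alongside `state.json` and a separate `dataset_info.json`, can be pictured with a minimal stand-in (the field contents here are hypothetical, not the library's actual state format):

```python
import json
import os
import tempfile

folder = tempfile.mkdtemp()

# Hypothetical split of metadata into the two files named above: pickle/format
# state in state.json, dataset-level info in dataset_info.json.
state = {"format": "numpy", "fingerprint": "abc123"}
info = {"description": "demo", "features": {"text": "string"}}

with open(os.path.join(folder, "state.json"), "w") as f:
    json.dump(state, f)
with open(os.path.join(folder, "dataset_info.json"), "w") as f:
    json.dump(info, f)

print(sorted(os.listdir(folder)))  # ['dataset_info.json', 'state.json']
```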
691,846,397
570
add reuters21578 dataset
Reopening this PR after the merge.
closed
https://github.com/huggingface/datasets/pull/570
2020-09-03T10:25:47
2020-09-03T10:46:52
2020-09-03T10:46:51
{ "login": "jplu", "id": 959590, "type": "User" }
[]
true
[]
691,832,720
569
Revert "add reuters21578 dataset"
Reverts huggingface/nlp#471
closed
https://github.com/huggingface/datasets/pull/569
2020-09-03T10:06:16
2020-09-03T10:07:13
2020-09-03T10:07:12
{ "login": "jplu", "id": 959590, "type": "User" }
[]
true
[]
691,638,656
568
`metric.compute` throws `ArrowInvalid` error
I get the following error with `rouge.compute`. It happens only with distributed training, and it occurs randomly I can't easily reproduce it. This is using `nlp==0.4.0` ``` File "/home/beltagy/trainer.py", line 92, in validation_step rouge_scores = rouge.compute(predictions=generated_str, references=gold_st...
closed
https://github.com/huggingface/datasets/issues/568
2020-09-03T04:56:57
2020-10-05T16:33:53
2020-10-05T16:33:53
{ "login": "ibeltagy", "id": 2287797, "type": "User" }
[]
false
[]
691,430,245
567
Fix BLEURT metrics for backward compatibility
Fix #565
closed
https://github.com/huggingface/datasets/pull/567
2020-09-02T21:22:35
2020-09-03T07:29:52
2020-09-03T07:29:50
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
691,160,208
566
Remove logger pickling to fix gg colab issues
`logger` objects are not picklable in google colab, contrary to `logger` objects in jupyter notebooks or in python shells. It creates some issues in google colab right now. Indeed by calling any `Dataset` method, the fingerprint update pickles the transform function, and as the logger comes with it, it results in...
closed
https://github.com/huggingface/datasets/pull/566
2020-09-02T16:16:21
2020-09-03T16:31:53
2020-09-03T16:31:52
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
691,039,121
565
No module named 'nlp.logging'
Hi, I am using nlp version 0.4.0. Trying to use bleurt as an eval metric, however, the bleurt script imports nlp.logging which creates the following error. What am I missing? ``` >>> import nlp 2020-09-02 13:47:09.210310: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic l...
closed
https://github.com/huggingface/datasets/issues/565
2020-09-02T13:49:50
2020-09-03T07:29:50
2020-09-03T07:29:50
{ "login": "melody-ju", "id": 66633754, "type": "User" }
[]
false
[]
691,000,020
564
Wait for writing in distributed metrics
There were CI bugs where a distributed metric would try to read all the files in process 0 while the other processes haven't started writing. To fix that I added a custom locking mechanism that waits for the file to exist before trying to read it
closed
https://github.com/huggingface/datasets/pull/564
2020-09-02T12:58:50
2020-09-09T09:13:23
2020-09-09T09:13:22
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
690,908,674
563
[Large datasets] Speed up download and processing
Various improvements to speed-up creation and processing of large scale datasets. Currently: - distributed downloads - remove etag from datafiles hashes to spare a request when restarting a failed download
closed
https://github.com/huggingface/datasets/pull/563
2020-09-02T10:31:54
2020-09-09T09:03:33
2020-09-09T09:03:32
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
690,907,604
562
[Reproducibility] Allow pinning versions of datasets/metrics
Repurpose the `version` attribute in datasets and metrics to let the user pin a specific version of datasets and metric scripts: ``` dataset = nlp.load_dataset('squad', version='1.0.0') metric = nlp.load_metric('squad', version='1.0.0') ``` Notes: - version number are the release version of the library - curre...
closed
https://github.com/huggingface/datasets/pull/562
2020-09-02T10:30:13
2023-09-24T09:49:42
2020-09-09T13:04:54
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
690,871,415
561
Made `share_dataset` more readable
closed
https://github.com/huggingface/datasets/pull/561
2020-09-02T09:34:48
2020-09-03T09:00:30
2020-09-03T09:00:29
{ "login": "TevenLeScao", "id": 26709476, "type": "User" }
[]
true
[]
690,488,764
560
Using custom DownloadConfig results in an error
## Version / Environment Ubuntu 18.04 Python 3.6.8 nlp 0.4.0 ## Description Loading `imdb` dataset works fine when when I don't specify any `download_config` argument. When I create a custom `DownloadConfig` object and pass it to the `nlp.load_dataset` function, this results in an error. ## How to reprodu...
closed
https://github.com/huggingface/datasets/issues/560
2020-09-01T22:23:02
2022-10-04T17:23:45
2022-10-04T17:23:45
{ "login": "ynouri", "id": 1789921, "type": "User" }
[]
false
[]
690,411,263
559
Adding the KILT knowledge source and tasks
This adds Wikipedia pre-processed for KILT, as well as the task data. Only the question IDs are provided for TriviaQA, but they can easily be mapped back with: ``` import nlp kilt_wikipedia = nlp.load_dataset('kilt_wikipedia') kilt_tasks = nlp.load_dataset('kilt_tasks') triviaqa = nlp.load_dataset('trivia_qa',...
closed
https://github.com/huggingface/datasets/pull/559
2020-09-01T20:05:13
2020-09-04T18:05:47
2020-09-04T18:05:47
{ "login": "yjernite", "id": 10469459, "type": "User" }
[]
true
[]
690,318,105
558
Rerun pip install -e
Hopefully it fixes the github actions
closed
https://github.com/huggingface/datasets/pull/558
2020-09-01T17:24:39
2020-09-01T17:24:51
2020-09-01T17:24:50
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
690,220,135
557
Fix a few typos
closed
https://github.com/huggingface/datasets/pull/557
2020-09-01T15:03:24
2020-09-02T07:39:08
2020-09-02T07:39:07
{ "login": "julien-c", "id": 326577, "type": "User" }
[]
true
[]
690,218,423
556
Add DailyDialog
http://yanran.li/dailydialog.html https://arxiv.org/pdf/1710.03957.pdf
closed
https://github.com/huggingface/datasets/pull/556
2020-09-01T15:01:15
2020-09-03T15:42:03
2020-09-03T15:38:39
{ "login": "julien-c", "id": 326577, "type": "User" }
[]
true
[]
690,197,725
555
Upgrade pip in benchmark github action
It looks like it fixes the `import nlp` issue we have
closed
https://github.com/huggingface/datasets/pull/555
2020-09-01T14:37:26
2020-09-01T15:26:16
2020-09-01T15:26:15
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
690,173,214
554
nlp downloads to its module path
I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems: ```>>> import nlp >>> squad_dataset = nlp.load_dataset('squad') ...
closed
https://github.com/huggingface/datasets/issues/554
2020-09-01T14:06:14
2020-09-11T06:19:24
2020-09-11T06:19:24
{ "login": "danieldk", "id": 49398, "type": "User" }
[]
false
[]
690,143,182
553
[Fix GitHub Actions] test adding tmate
closed
https://github.com/huggingface/datasets/pull/553
2020-09-01T13:28:03
2021-05-05T18:24:38
2020-09-03T09:01:13
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
690,079,429
552
Add multiprocessing
Adding multiprocessing to `.map` It works in 3 steps: - shard the dataset in `num_proc` shards - spawn one process per shard and call `map` on them - concatenate the resulting datasets Example of usage: ```python from nlp import load_dataset dataset = load_dataset("squad", split="train") def function...
closed
https://github.com/huggingface/datasets/pull/552
2020-09-01T11:56:17
2020-09-22T15:11:56
2020-09-02T10:01:25
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
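The three steps PR #552 describes for multiprocessed `.map` (shard the dataset, run one process per shard, concatenate the results) can be sketched with the standard library alone; `parallel_map` and the per-example function below are illustrative toys, not the library's implementation:

```python
from multiprocessing import Pool


def add_length(example):
    # the per-example function a user would pass to `map`
    return {"text": example["text"], "length": len(example["text"])}


def _map_shard(shard):
    # each worker process maps one contiguous shard
    return [add_length(ex) for ex in shard]


def parallel_map(examples, num_proc=2):
    """Shard the examples, map each shard in its own process,
    then concatenate the mapped shards back in order."""
    size = max(1, -(-len(examples) // num_proc))  # ceil division
    shards = [examples[i:i + size] for i in range(0, len(examples), size)]
    with Pool(len(shards)) as pool:
        mapped = pool.map(_map_shard, shards)
    return [ex for shard in mapped for ex in shard]
```

Contiguous sharding keeps the output in the original example order after concatenation, which is what users of `.map` expect.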
690,034,762
551
added HANS dataset
Adds the [HANS](https://github.com/tommccoy1/hans) dataset to evaluate NLI systems.
closed
https://github.com/huggingface/datasets/pull/551
2020-09-01T10:42:02
2020-09-01T12:17:10
2020-09-01T12:17:10
{ "login": "TevenLeScao", "id": 26709476, "type": "User" }
[]
true
[]
689,775,914
550
[BUGFIX] Solving mismatched checksum issue for the LinCE dataset (#539)
Hi, I have added the updated `dataset_infos.json` file for the LinCE benchmark. This update fixes the mismatched checksum bug #539 for one of the datasets in the LinCE benchmark. To update the file, I ran this command from the nlp root directory: ``` python nlp-cli test ./datasets/lince --save_infos --all_co...
closed
https://github.com/huggingface/datasets/pull/550
2020-09-01T03:27:03
2020-09-03T09:06:01
2020-09-03T09:06:01
{ "login": "gaguilar", "id": 5833357, "type": "User" }
[]
true
[]
689,766,465
549
Fix bleurt logging import
Bleurt started throwing an error in some code we have. This looks like the fix but... It's also unnerving that even a prebuilt docker image with pinned versions can be working 1 day and then fail the next (especially for production systems). Any way for us to pin your metrics code so that they are guaranteed not...
closed
https://github.com/huggingface/datasets/pull/549
2020-09-01T03:01:25
2020-09-03T18:04:46
2020-09-03T09:04:20
{ "login": "jbragg", "id": 2238344, "type": "User" }
[]
true
[]
689,285,996
548
[Breaking] Switch text loading to multi-threaded PyArrow loading
Test if we can get better performances for large-scale text datasets by using multi-threaded text file loading based on Apache Arrow multi-threaded CSV loader. If it works ok, it would fix #546. **Breaking change**: The text lines now do not include final line-breaks anymore.
closed
https://github.com/huggingface/datasets/pull/548
2020-08-31T15:15:41
2020-09-08T10:19:58
2020-09-08T10:19:57
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
689,268,589
547
[Distributed] Making loading distributed datasets a bit safer
Add some file-locks during dataset loading
closed
https://github.com/huggingface/datasets/pull/547
2020-08-31T14:51:34
2020-08-31T15:16:30
2020-08-31T15:16:29
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
689,186,526
546
Very slow data loading on large dataset
I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and it is still on the loading step. It does work when the text dataset size is small, about 1 GB, but it doesn't scale. It also uses a single thread during the data loading step. ``` train_fil...
closed
https://github.com/huggingface/datasets/issues/546
2020-08-31T12:57:23
2024-01-02T20:26:24
2020-09-08T10:19:57
{ "login": "agemagician", "id": 6087313, "type": "User" }
[]
false
[]
689,138,878
545
New release coming up for this library
Hi all, A few words on the roadmap for this library. The next release will be a big one and is planned for the end of this week. In addition to the support for indexed datasets (useful for non-parametric models like REALM, RAG, DPR, knn-LM and many other fast dataset retrieval techniques), it will: - have support f...
closed
https://github.com/huggingface/datasets/issues/545
2020-08-31T11:37:38
2021-01-13T10:59:04
2021-01-13T10:59:04
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
false
[]
689,062,519
544
[Distributed] Fix load_dataset error when multiprocessing + add test
Fix #543 + add test
closed
https://github.com/huggingface/datasets/pull/544
2020-08-31T09:30:10
2020-08-31T11:15:11
2020-08-31T11:15:10
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
688,644,407
543
nlp.load_dataset is not safe for multi processes when loading from local files
Loading from local files, e.g., `dataset = nlp.load_dataset('csv', data_files=['file_1.csv', 'file_2.csv'])` concurrently from multiple processes, will raise `FileExistsError` from builder's line 430, https://github.com/huggingface/nlp/blob/6655008c738cb613c522deb3bd18e35a67b2a7e5/src/nlp/builder.py#L423-L438 Likel...
closed
https://github.com/huggingface/datasets/issues/543
2020-08-30T03:20:34
2020-08-31T11:15:10
2020-08-31T11:15:10
{ "login": "luyug", "id": 55288513, "type": "User" }
[]
false
[]
688,555,036
542
Add TensorFlow example
Update the Quick Tour documentation in order to add the TensorFlow equivalent source code for the classification example. Now it is possible to select either the code in PyTorch or in TensorFlow in the Quick tour.
closed
https://github.com/huggingface/datasets/pull/542
2020-08-29T15:39:27
2020-08-31T09:49:20
2020-08-31T09:49:19
{ "login": "jplu", "id": 959590, "type": "User" }
[]
true
[]
688,521,224
541
Best practices for training tokenizers with nlp
Hi, thank you for developing this library. What do you think are the best practices for training tokenizers using `nlp`? In the document and examples, I could only find pre-trained tokenizers used.
closed
https://github.com/huggingface/datasets/issues/541
2020-08-29T12:06:49
2022-10-04T17:28:04
2022-10-04T17:28:04
{ "login": "moskomule", "id": 11806234, "type": "User" }
[]
false
[]
688,475,884
540
[BUGFIX] Fix Race Dataset Checksum bug
In #537 I noticed that there was a bug in checksum checking when I tried to download the race dataset. The reason for this is that the current preprocessing was only considering the `high school` data and ignoring the `middle` one. This PR just fixes it :) Moreover, I have added some descriptions.
closed
https://github.com/huggingface/datasets/pull/540
2020-08-29T07:00:10
2020-09-18T11:42:20
2020-09-18T11:42:20
{ "login": "abarbosa94", "id": 6608232, "type": "User" }
[]
true
[]
688,323,602
539
[Dataset] `NonMatchingChecksumError` due to an update in the LinCE benchmark data
Hi, There is a `NonMatchingChecksumError` error for the `lid_msaea` (language identification for Modern Standard Arabic - Egyptian Arabic) dataset from the LinCE benchmark due to a minor update on that dataset. How can I update the checksum of the library to solve this issue? The error is below and it also appea...
closed
https://github.com/huggingface/datasets/issues/539
2020-08-28T19:55:51
2020-09-03T16:34:02
2020-09-03T16:34:01
{ "login": "gaguilar", "id": 5833357, "type": "User" }
[]
false
[]
688,015,912
538
[logging] Add centralized logging - Bump-up cache loads to warnings
Add a `nlp.logging` module to set the global logging level easily. The verbosity level also controls the tqdm bars (disabled when set higher than INFO). You can use: ``` nlp.logging.set_verbosity(verbosity: int) nlp.logging.set_verbosity_info() nlp.logging.set_verbosity_warning() nlp.logging.set_verbosity_debug...
closed
https://github.com/huggingface/datasets/pull/538
2020-08-28T11:42:29
2020-08-31T11:42:51
2020-08-31T11:42:51
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
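The centralized verbosity API shown in PR #538's body (`set_verbosity`, `set_verbosity_info`, etc.) is a thin wrapper over Python's `logging` levels applied to one shared library logger. A minimal sketch under that assumption, with illustrative names rather than the PR's exact code:

```python
import logging

# one shared logger at the root of the library's logger hierarchy;
# the "nlp" name matches the package this PR targets
_logger = logging.getLogger("nlp")


def set_verbosity(verbosity):
    """Set the global verbosity level for the library's loggers."""
    _logger.setLevel(verbosity)


def set_verbosity_info():
    set_verbosity(logging.INFO)


def set_verbosity_warning():
    set_verbosity(logging.WARNING)


def set_verbosity_debug():
    set_verbosity(logging.DEBUG)


def get_verbosity():
    return _logger.getEffectiveLevel()
```

Because child loggers (e.g. `nlp.builder`) inherit the level from the `nlp` root, one `setLevel` call controls the whole library; gating tqdm bars on `get_verbosity() <= logging.INFO` gives the progress-bar behavior the PR describes.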
687,614,699
537
[Dataset] RACE dataset Checksums error
Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps: ``` dataset = nlp.load_dataset("race") len(dataset["train"]), len(dataset["validation"]) ``` But then I got the following error: ``` ----------------------------------...
closed
https://github.com/huggingface/datasets/issues/537
2020-08-27T23:58:16
2020-09-18T12:07:04
2020-09-18T12:07:04
{ "login": "abarbosa94", "id": 6608232, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
687,378,332
536
Fingerprint
This PR is a continuation of #513 , in which many in-place functions were introduced or updated (cast_, flatten_) etc. However the caching didn't handle these changes. Indeed the caching took into account only the previous cache file name of the table, and not the possible in-place transforms of the table. To fix t...
closed
https://github.com/huggingface/datasets/pull/536
2020-08-27T16:27:09
2020-08-31T14:20:40
2020-08-31T14:20:39
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
686,238,315
535
Benchmarks
Adding some benchmarks with DVC/CML To add a new tracked benchmark: - create a new python benchmarking script in `./benchmarks/`. The script can use the utilities in `./benchmarks/utils.py` and should output a JSON file with results in `./benchmarks/results/`. - add a new pipeline stage in [dvc.yaml](./dvc.yaml) w...
closed
https://github.com/huggingface/datasets/pull/535
2020-08-26T11:21:26
2020-08-27T08:40:00
2020-08-27T08:39:59
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
686,115,912
534
`list_datasets()` is broken.
version = '0.4.0' `list_datasets()` is broken. It results in the following error: ``` In [3]: nlp.list_datasets() Out[3]: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) ~/.virtualenvs/san-lgUCsFg_/lib/py...
closed
https://github.com/huggingface/datasets/issues/534
2020-08-26T08:19:01
2020-08-27T06:31:11
2020-08-27T06:31:11
{ "login": "ashutosh-dwivedi-e3502", "id": 314169, "type": "User" }
[]
false
[]
685,585,914
533
Fix ArrayXD for pyarrow 0.17.1 by using non fixed length list arrays
It should fix the CI problems in #513
closed
https://github.com/huggingface/datasets/pull/533
2020-08-25T15:32:44
2020-08-26T08:02:24
2020-08-26T08:02:23
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
685,540,614
532
File exists error when used with TPU
Hi, I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8). I modified [line 131 in the original `run_language_modeling.py`](https://github.com/...
open
https://github.com/huggingface/datasets/issues/532
2020-08-25T14:36:38
2020-09-01T12:14:56
null
{ "login": "go-inoue", "id": 20531705, "type": "User" }
[]
false
[]
685,291,036
531
add concatenate_datasets to the docs
closed
https://github.com/huggingface/datasets/pull/531
2020-08-25T08:40:05
2020-08-25T09:02:20
2020-08-25T09:02:19
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
684,825,612
530
use ragged tensor by default
I think it's better if it's clear whether the returned tensor is ragged or not when the type is set to tensorflow. Previously it was a tensor (not ragged) if numpy could stack the output (which can change depending on the batch of examples you take), which makes things difficult to handle, as it may sometimes return a r...
closed
https://github.com/huggingface/datasets/pull/530
2020-08-24T17:06:15
2021-10-22T19:38:40
2020-08-24T19:22:25
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]