| id | number | title | body | state | html_url | created_at | updated_at | closed_at | user | labels | is_pull_request | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
684,797,157 | 529 | Add MLSUM | Hello (again :) !),
So, I started a new branch because of a [rebase issue](https://github.com/huggingface/nlp/pull/463), sorry for the mess.
However, the command `pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_mlsum` still fails because there is no default language dataset : the s... | closed | https://github.com/huggingface/datasets/pull/529 | 2020-08-24T16:18:35 | 2020-08-26T08:04:11 | 2020-08-26T08:04:11 | {
"login": "RachelKer",
"id": 36986299,
"type": "User"
} | [] | true | [] |
684,673,673 | 528 | fix missing variable names in docs | fix #524 | closed | https://github.com/huggingface/datasets/pull/528 | 2020-08-24T13:31:48 | 2020-08-25T09:04:04 | 2020-08-25T09:04:03 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
684,632,930 | 527 | Fix config used for slow test on real dataset | As noticed in #470, #474, #476, #504 , the slow test `test_load_real_dataset` couldn't run on datasets that require config parameters.
To fix that I replaced it with one test with the first config of BUILDER_CONFIGS `test_load_real_dataset`, and another test that runs all of the configs in BUILDER_CONFIGS `test_load... | closed | https://github.com/huggingface/datasets/pull/527 | 2020-08-24T12:39:34 | 2020-08-25T09:20:45 | 2020-08-25T09:20:44 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
684,615,455 | 526 | Returning None instead of "python" if dataset is unformatted | Following the discussion on Slack, this small fix ensures that calling `dataset.set_format(type=dataset.format["type"])` works properly. Slightly breaking as calling `dataset.format` when the dataset is unformatted will return `None` instead of `python`. | closed | https://github.com/huggingface/datasets/pull/526 | 2020-08-24T12:10:35 | 2020-08-24T12:50:43 | 2020-08-24T12:50:42 | {
"login": "TevenLeScao",
"id": 26709476,
"type": "User"
} | [] | true | [] |
683,875,483 | 525 | wmt download speed example | Continuing from the slack 1.0 roadmap thread w @lhoestq , I realized the slow downloads is only a thing sometimes. Here are a few examples, I suspect there are multiple issues. All commands were run from the same gcp us-central-1f machine.
```
import nlp
nlp.load_dataset('wmt16', 'de-en')
```
Downloads at 49.1 K... | closed | https://github.com/huggingface/datasets/issues/525 | 2020-08-21T23:29:06 | 2022-10-04T17:45:39 | 2022-10-04T17:45:39 | {
"login": "sshleifer",
"id": 6045025,
"type": "User"
} | [] | false | [] |
683,686,359 | 524 | Some docs are missing parameter names | See https://huggingface.co/nlp/master/package_reference/main_classes.html#nlp.Dataset.map. I believe this is because the parameter names are enclosed in backticks in the docstrings, maybe it's an old docstring format that doesn't work with the current Sphinx version. | closed | https://github.com/huggingface/datasets/issues/524 | 2020-08-21T16:47:34 | 2020-08-25T09:04:03 | 2020-08-25T09:04:03 | {
"login": "jarednielsen",
"id": 4564897,
"type": "User"
} | [] | false | [] |
682,573,232 | 523 | Speed up Tokenization by optimizing cast_to_python_objects | I changed how `cast_to_python_objects` works to make it faster.
It is used to cast numpy/pytorch/tensorflow/pandas objects to python lists, and it works recursively.
To avoid iterating over possibly long lists, it first checks if the first element that is not None has to be casted.
If the first element needs to be... | closed | https://github.com/huggingface/datasets/pull/523 | 2020-08-20T09:42:02 | 2020-08-24T08:54:15 | 2020-08-24T08:54:14 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
682,478,833 | 522 | dictionnary typo in docs | Many places dictionary is spelled dictionnary, not sure if its on purpose or not.
Fixed in this pr:
https://github.com/huggingface/nlp/pull/521 | closed | https://github.com/huggingface/datasets/issues/522 | 2020-08-20T07:11:05 | 2020-08-20T07:52:14 | 2020-08-20T07:52:13 | {
"login": "yonigottesman",
"id": 4004127,
"type": "User"
} | [] | false | [] |
682,477,648 | 521 | Fix dictionnary (dictionary) typo | This error happens many times I'm thinking maybe its spelled like this on purpose? | closed | https://github.com/huggingface/datasets/pull/521 | 2020-08-20T07:09:02 | 2020-08-20T07:52:04 | 2020-08-20T07:52:04 | {
"login": "yonigottesman",
"id": 4004127,
"type": "User"
} | [] | true | [] |
682,264,839 | 520 | Transform references for sacrebleu | Currently it is impossible to use sacrebleu when len(predictions) != the number of references per prediction (very uncommon), due to a strange format expected by sacrebleu. If one passes in the data to `nlp.metric.compute()` in sacrebleu format, `nlp` throws an error due to mismatching lengths between predictions and r... | closed | https://github.com/huggingface/datasets/pull/520 | 2020-08-20T00:26:55 | 2020-08-20T09:30:54 | 2020-08-20T09:30:53 | {
"login": "jbragg",
"id": 2238344,
"type": "User"
} | [] | true | [] |
682,193,882 | 519 | [BUG] Metrics throwing new error on master since 0.4.0 | The following error occurs when passing in references of type `List[List[str]]` to metrics like bleu.
Wasn't happening on 0.4.0 but happening now on master.
```
File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 226, in compute
self.add_batch(predictions=predictions, references=references)
... | closed | https://github.com/huggingface/datasets/issues/519 | 2020-08-19T21:29:15 | 2022-06-02T16:41:01 | 2020-08-19T22:04:40 | {
"login": "jbragg",
"id": 2238344,
"type": "User"
} | [] | false | [] |
682,131,165 | 518 | [METRICS, breaking] Refactor caching behavior, pickle/cloudpickle metrics and dataset, add tests on metrics | Move the acquisition of the filelock at a later stage during metrics processing so it can be pickled/cloudpickled after instantiation.
Also add some tests on pickling, concurrent but separate metric instances and concurrent and distributed metric instances.
Changes significantly the caching behavior for the metri... | closed | https://github.com/huggingface/datasets/pull/518 | 2020-08-19T19:43:08 | 2020-08-24T16:01:40 | 2020-08-24T16:01:39 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [] | true | [] |
681,896,944 | 517 | add MLDoc dataset | Hi,
I am recommending that someone add MLDoc, a multilingual news topic classification dataset.
- Here's a link to the Github: https://github.com/facebookresearch/MLDoc
- and the paper: http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf
Looks like the dataset contains news stories in multiple languages... | open | https://github.com/huggingface/datasets/issues/517 | 2020-08-19T14:41:59 | 2021-08-03T05:59:33 | null | {
"login": "jxmorris12",
"id": 13238952,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
681,846,032 | 516 | [Breaking] Rename formated to formatted | `formated` is not correct but `formatted` is | closed | https://github.com/huggingface/datasets/pull/516 | 2020-08-19T13:35:23 | 2020-08-20T08:41:17 | 2020-08-20T08:41:16 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
681,845,619 | 515 | Fix batched map for formatted dataset | If you had a dataset formatted as numpy for example, and tried to do a batched map, then it would crash because one of the elements from the inputs was missing for unchanged columns (ex: batch of length 999 instead of 1000).
This happened during the creation of the `pa.Table`, since columns had different lengths. | closed | https://github.com/huggingface/datasets/pull/515 | 2020-08-19T13:34:50 | 2020-08-20T20:30:43 | 2020-08-20T20:30:42 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
681,256,348 | 514 | dataset.shuffle(keep_in_memory=True) is never allowed | As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)`
The commit added the lines
```python
# lines 994-996 in src/nlp/arrow_dataset.py
assert (
not keep_in_memory or cache_file_name is None
), "Please use either... | closed | https://github.com/huggingface/datasets/issues/514 | 2020-08-18T18:47:40 | 2022-10-10T12:21:58 | 2022-10-10T12:21:58 | {
"login": "vegarab",
"id": 24683907,
"type": "User"
} | [
{
"name": "good first issue",
"color": "7057ff"
},
{
"name": "hacktoberfest",
"color": "DF8D62"
}
] | false | [] |
681,215,612 | 513 | [speedup] Use indices mappings instead of deepcopy for all the samples reordering methods | Use an indices mapping instead of rewriting the dataset for all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`).
Added a `flatten_indices` method which copies the dataset to a new table to remove the indices mapping, with tests.
All the samples re-ordering/selecti... | closed | https://github.com/huggingface/datasets/pull/513 | 2020-08-18T17:36:02 | 2020-08-28T08:41:51 | 2020-08-28T08:41:50 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [] | true | [] |
681,137,164 | 512 | Delete CONTRIBUTING.md | closed | https://github.com/huggingface/datasets/pull/512 | 2020-08-18T15:33:25 | 2020-08-18T15:48:21 | 2020-08-18T15:39:07 | {
"login": "ChenZehong13",
"id": 56394989,
"type": "User"
} | [] | true | [] | |
681,055,553 | 511 | dataset.shuffle() and select() resets format. Intended? | Calling `dataset.shuffle()` or `dataset.select()` on a dataset resets its format set by `dataset.set_format()`. Is this intended or an oversight?
When working on quite large datasets that require a lot of preprocessing I find it convenient to save the processed dataset to file using `torch.save("dataset.pt")`. Later... | closed | https://github.com/huggingface/datasets/issues/511 | 2020-08-18T13:46:01 | 2020-09-14T08:45:38 | 2020-09-14T08:45:38 | {
"login": "vegarab",
"id": 24683907,
"type": "User"
} | [] | false | [] |
680,823,644 | 510 | Version of numpy to use the library | Thank you so much for your excellent work! I would like to use nlp library in my project. While importing nlp, I am receiving the following error `AttributeError: module 'numpy.random' has no attribute 'Generator'` Numpy version in my project is 1.16.0. May I learn which numpy version is used for the nlp library.
Th... | closed | https://github.com/huggingface/datasets/issues/510 | 2020-08-18T08:59:13 | 2020-08-19T18:35:56 | 2020-08-19T18:35:56 | {
"login": "isspek",
"id": 6966175,
"type": "User"
} | [] | false | [] |
679,711,585 | 509 | Converting TensorFlow dataset example | Hi,
I want to use TensorFlow datasets with this repo, I noticed you made some conversion script,
can you give a simple example of using it?
Thanks
| closed | https://github.com/huggingface/datasets/issues/509 | 2020-08-16T08:05:20 | 2021-08-03T06:01:18 | 2021-08-03T06:01:17 | {
"login": "saareliad",
"id": 22762845,
"type": "User"
} | [] | false | [] |
679,705,734 | 508 | TypeError: Receiver() takes no arguments | I am trying to load a wikipedia data set
```
import nlp
from nlp import load_dataset
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner')
#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner')
```
Th... | closed | https://github.com/huggingface/datasets/issues/508 | 2020-08-16T07:18:16 | 2020-09-01T14:53:33 | 2020-09-01T14:49:03 | {
"login": "sebastiantomac",
"id": 1225851,
"type": "User"
} | [] | false | [] |
679,400,683 | 507 | Errors when I use | I tried the following example code from https://huggingface.co/deepset/roberta-base-squad2 and got errors
I am using **transformers 3.0.2** code .
from transformers.pipelines import pipeline
from transformers.modeling_auto import AutoModelForQuestionAnswering
from transformers.tokenization_auto import AutoToke... | closed | https://github.com/huggingface/datasets/issues/507 | 2020-08-14T21:03:57 | 2020-08-14T21:39:10 | 2020-08-14T21:39:10 | {
"login": "mchari",
"id": 30506151,
"type": "User"
} | [] | false | [] |
679,164,788 | 506 | fix dataset.map for function without outputs | As noticed in #505 , giving a function that doesn't return anything in `.map` raises an error because of an unreferenced variable.
I fixed that and added tests.
Thanks @avloss for reporting | closed | https://github.com/huggingface/datasets/pull/506 | 2020-08-14T13:40:22 | 2020-08-17T11:24:39 | 2020-08-17T11:24:38 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
678,791,400 | 505 | tmp_file referenced before assignment | Just learning about this library - so might've not set up all the flags correctly, but was getting this error about "tmp_file". | closed | https://github.com/huggingface/datasets/pull/505 | 2020-08-13T23:27:33 | 2020-08-14T13:42:46 | 2020-08-14T13:42:46 | {
"login": "avloss",
"id": 17853685,
"type": "User"
} | [] | true | [] |
678,756,211 | 504 | Added downloading to Hyperpartisan news detection | Following the discussion on Slack and #349, I've updated the hyperpartisan dataset to pull directly from Zenodo rather than manual install, which should make this dataset much more accessible. Many thanks to @johanneskiesel !
Currently doesn't pass `test_load_real_dataset` - I'm using `self.config.name` which is `de... | closed | https://github.com/huggingface/datasets/pull/504 | 2020-08-13T21:53:46 | 2020-08-27T08:18:41 | 2020-08-27T08:18:41 | {
"login": "ghomasHudson",
"id": 13795113,
"type": "User"
} | [] | true | [] |
678,726,538 | 503 | CompGuessWhat?! 0.2.0 | We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset. | closed | https://github.com/huggingface/datasets/pull/503 | 2020-08-13T20:51:26 | 2020-10-21T06:54:29 | 2020-10-21T06:54:29 | {
"login": "aleSuglia",
"id": 1479733,
"type": "User"
} | [] | true | [] |
678,546,070 | 502 | Fix tokenizers caching | I've found some cases where the caching didn't work properly for tokenizers:
1. if a tokenizer has a regex pattern, then the caching would be inconsistent across sessions
2. if a tokenizer has a cache attribute that changes after some calls, then the caching would not work after cache updates
3. if a tokenizer is u... | closed | https://github.com/huggingface/datasets/pull/502 | 2020-08-13T15:53:37 | 2020-08-19T13:37:19 | 2020-08-19T13:37:18 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
677,952,893 | 501 | Caching doesn't work for map (non-deterministic) | The caching functionality doesn't work reliably when tokenizing a dataset. Here's a small example to reproduce it.
```python
import nlp
import transformers
def main():
ds = nlp.load_dataset("reddit", split="train[:500]")
tokenizer = transformers.AutoTokenizer.from_pretrained("gpt2")
def conv... | closed | https://github.com/huggingface/datasets/issues/501 | 2020-08-12T20:20:07 | 2022-08-08T11:02:23 | 2020-08-24T16:34:35 | {
"login": "wulu473",
"id": 8149933,
"type": "User"
} | [] | false | [] |
677,841,708 | 500 | Use hnsw in wiki_dpr | The HNSW faiss index is much faster that regular Flat index. | closed | https://github.com/huggingface/datasets/pull/500 | 2020-08-12T16:58:07 | 2020-08-20T07:59:19 | 2020-08-20T07:59:18 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
677,709,938 | 499 | Narrativeqa (with full text) | Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset.
Few notes:
- Had some encoding issues using the default `open` so am using `open(encoding="latin-1"...` which seems to fix it. Looks fine.
- Can't get the dummy data to work. Currently putting stuff at:
... | closed | https://github.com/huggingface/datasets/pull/499 | 2020-08-12T13:49:43 | 2020-12-09T11:21:02 | 2020-12-09T11:21:02 | {
"login": "ghomasHudson",
"id": 13795113,
"type": "User"
} | [] | true | [] |
677,597,479 | 498 | dont use beam fs to save info for local cache dir | If the cache dir is local, then we shouldn't use beam's filesystem to save the dataset info
Fix #490
| closed | https://github.com/huggingface/datasets/pull/498 | 2020-08-12T11:00:00 | 2020-08-14T13:17:21 | 2020-08-14T13:17:20 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
677,057,116 | 497 | skip header in PAWS-X | This should fix #485
I also updated the `dataset_infos.json` file that is used to verify the integrity of the generated splits (the number of examples was reduced by one).
Note that there are new fields in `dataset_infos.json` introduced in the latest release 0.4.0 corresponding to post processing info. I remove... | closed | https://github.com/huggingface/datasets/pull/497 | 2020-08-11T17:26:25 | 2020-08-19T09:50:02 | 2020-08-19T09:50:01 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
677,016,998 | 496 | fix bad type in overflow check | When writing an arrow file and inferring the features, the overflow check could fail if the first example had a `null` field.
This is because we were not using the inferred features to do this check, and we could end up with arrays that don't match because of a type mismatch (`null` vs `string` for example).
This s... | closed | https://github.com/huggingface/datasets/pull/496 | 2020-08-11T16:24:58 | 2020-08-14T13:29:35 | 2020-08-14T13:29:34 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
676,959,289 | 495 | stack vectors in pytorch and tensorflow | When the format of a dataset is set to pytorch or tensorflow, and if the dataset has vectors in it, they were not stacked together as tensors when calling `dataset[i:i + batch_size][column]` or `dataset[column]`.
I added support for stacked tensors for both pytorch and tensorflow.
For ragged tensors, they are stack... | closed | https://github.com/huggingface/datasets/pull/495 | 2020-08-11T15:12:53 | 2020-08-12T09:30:49 | 2020-08-12T09:30:48 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
676,886,955 | 494 | Fix numpy stacking | When getting items using a column name as a key, numpy arrays were not stacked.
I fixed that and added some tests.
There is another issue that still needs to be fixed though: when getting items using a column name as a key, pytorch tensors are not stacked (it outputs a list of tensors). This PR should help with the... | closed | https://github.com/huggingface/datasets/pull/494 | 2020-08-11T13:40:30 | 2020-08-11T14:56:50 | 2020-08-11T13:49:52 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
676,527,351 | 493 | Fix wmt zh-en url | I verified that
```
wget https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-zh.tar.gz.00
```
runs in 2 minutes. | closed | https://github.com/huggingface/datasets/pull/493 | 2020-08-11T02:14:52 | 2020-08-11T02:22:28 | 2020-08-11T02:22:12 | {
"login": "sshleifer",
"id": 6045025,
"type": "User"
} | [] | true | [] |
676,495,064 | 492 | nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema | Here's the code I'm trying to run:
```python
dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir)
dset_wikipedia.drop(columns=["title"])
dset_wikipedia.features.pop("title")
dset_books = nlp.load_dataset("bookcorpus", split="train", cache_dir=args.cache_dir)
dse... | closed | https://github.com/huggingface/datasets/issues/492 | 2020-08-11T00:27:46 | 2020-08-26T16:17:19 | 2020-08-26T16:17:19 | {
"login": "jarednielsen",
"id": 4564897,
"type": "User"
} | [] | false | [] |
676,486,275 | 491 | No 0.4.0 release on GitHub | 0.4.0 was released on PyPi, but not on GitHub. This means [the documentation](https://huggingface.co/nlp/) is still displaying from 0.3.0, and that there's no tag to easily clone the 0.4.0 version of the repo. | closed | https://github.com/huggingface/datasets/issues/491 | 2020-08-10T23:59:57 | 2020-08-11T16:50:07 | 2020-08-11T16:50:07 | {
"login": "jarednielsen",
"id": 4564897,
"type": "User"
} | [] | false | [] |
676,482,242 | 490 | Loading preprocessed Wikipedia dataset requires apache_beam | Running
`nlp.load_dataset("wikipedia", "20200501.en", split="train", dir="/tmp/wikipedia")`
gives an error if apache_beam is not installed, stemming from
https://github.com/huggingface/nlp/blob/38eb2413de54ee804b0be81781bd65ac4a748ced/src/nlp/builder.py#L981-L988
This succeeded without the dependency in ve... | closed | https://github.com/huggingface/datasets/issues/490 | 2020-08-10T23:46:50 | 2020-08-14T13:17:20 | 2020-08-14T13:17:20 | {
"login": "jarednielsen",
"id": 4564897,
"type": "User"
} | [] | false | [] |
676,456,257 | 489 | ug | closed | https://github.com/huggingface/datasets/issues/489 | 2020-08-10T22:33:03 | 2020-08-10T22:55:14 | 2020-08-10T22:33:40 | {
"login": "timothyjlaurent",
"id": 2000204,
"type": "User"
} | [] | false | [] | |
676,299,993 | 488 | issues with downloading datasets for wmt16 and wmt19 | I have encountered multiple issues while trying to:
```
import nlp
dataset = nlp.load_dataset('wmt16', 'ru-en')
metric = nlp.load_metric('wmt16')
```
1. I had to do `pip install -e ".[dev]" ` on master, currently released nlp didn't work (sorry, didn't save the error) - I went back to the released version and no... | closed | https://github.com/huggingface/datasets/issues/488 | 2020-08-10T17:32:51 | 2022-10-04T17:46:59 | 2022-10-04T17:46:58 | {
"login": "stas00",
"id": 10676103,
"type": "User"
} | [] | false | [] |
676,143,029 | 487 | Fix elasticsearch result ids returning as strings | I am using the latest elasticsearch binary and master of nlp. For me elasticsearch searches failed because the resultant "id_" returned for searches are strings, but our library assumes them to be integers. | closed | https://github.com/huggingface/datasets/pull/487 | 2020-08-10T13:37:11 | 2020-08-31T10:42:46 | 2020-08-31T10:42:46 | {
"login": "sai-prasanna",
"id": 3595526,
"type": "User"
} | [] | true | [] |
675,649,034 | 486 | Bookcorpus data contains pretokenized text | It seem that the bookcoprus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in incompatible ways to how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end q... | closed | https://github.com/huggingface/datasets/issues/486 | 2020-08-09T06:53:24 | 2022-10-04T17:44:33 | 2022-10-04T17:44:33 | {
"login": "orsharir",
"id": 99543,
"type": "User"
} | [] | false | [] |
675,595,393 | 485 | PAWS dataset first item is header | ```
import nlp
dataset = nlp.load_dataset('xtreme', 'PAWS-X.en')
dataset['test'][0]
```
prints the following
```
{'label': 'label', 'sentence1': 'sentence1', 'sentence2': 'sentence2'}
```
dataset['test'][0] should probably be the first item in the dataset, not just a dictionary mapping the column names t... | closed | https://github.com/huggingface/datasets/issues/485 | 2020-08-08T22:05:25 | 2020-08-19T09:50:01 | 2020-08-19T09:50:01 | {
"login": "jxmorris12",
"id": 13238952,
"type": "User"
} | [] | false | [] |
675,088,983 | 484 | update mirror for RT dataset | closed | https://github.com/huggingface/datasets/pull/484 | 2020-08-07T15:25:45 | 2020-08-24T13:33:37 | 2020-08-24T13:33:37 | {
"login": "jxmorris12",
"id": 13238952,
"type": "User"
} | [] | true | [] | |
675,080,694 | 483 | rotten tomatoes movie review dataset taken down | In an interesting twist of events, the individual who created the movie review seems to have left Cornell, and their webpage has been removed, along with the movie review dataset (http://www.cs.cornell.edu/people/pabo/movie-review-data/rt-polaritydata.tar.gz). It's not downloadable anymore. | closed | https://github.com/huggingface/datasets/issues/483 | 2020-08-07T15:12:01 | 2020-09-08T09:36:34 | 2020-09-08T09:36:33 | {
"login": "jxmorris12",
"id": 13238952,
"type": "User"
} | [] | false | [] |
674,851,147 | 482 | Bugs : dataset.map() is frozen on ELI5 | Hi Huggingface Team!
Thank you guys once again for this amazing repo.
I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb)
However, when I run `dataset.map()` on ELI5 to prepare `input_text, ta... | closed | https://github.com/huggingface/datasets/issues/482 | 2020-08-07T08:23:35 | 2023-04-06T09:39:59 | 2020-08-11T23:55:15 | {
"login": "ratthachat",
"id": 56621342,
"type": "User"
} | [] | false | [] |
674,567,389 | 481 | Apply utf-8 encoding to all datasets | ## Description
This PR applies utf-8 encoding for all instances of `with open(...) as f` to all Python files in `datasets/`. As suggested by @thomwolf in #468 , we use regular expressions and the following function
```python
def apply_encoding_on_file_open(filepath: str):
"""Apply UTF-8 encoding for all insta... | closed | https://github.com/huggingface/datasets/pull/481 | 2020-08-06T20:02:09 | 2020-08-20T08:16:08 | 2020-08-20T08:16:08 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | true | [] |
674,245,959 | 480 | Column indexing hotfix | As observed for example in #469 , currently `__getitem__` does not convert the data to the dataset format when indexing by column. This is a hotfix that imitates functional 0.3.0. code. In the future it'd probably be nice to have a test there. | closed | https://github.com/huggingface/datasets/pull/480 | 2020-08-06T11:37:05 | 2023-09-24T09:49:33 | 2020-08-12T08:36:10 | {
"login": "TevenLeScao",
"id": 26709476,
"type": "User"
} | [] | true | [] |
673,905,407 | 479 | add METEOR metric | Added the METEOR metric. Can be used like this:
```python
import nlp
meteor = nlp.load_metric('metrics/meteor')
meteor.compute(["some string", "some string"], ["some string", "some similar string"])
# {'meteor': 0.6411637931034483}
meteor.add("some string", "some string")
meteor.add('some string", "some simila... | closed | https://github.com/huggingface/datasets/pull/479 | 2020-08-05T23:13:00 | 2020-08-19T13:39:09 | 2020-08-19T13:39:09 | {
"login": "vegarab",
"id": 24683907,
"type": "User"
} | [] | true | [] |
673,178,317 | 478 | Export TFRecord to GCP bucket | Previously, I was writing TFRecords manually to GCP bucket with : `with tf.io.TFRecordWriter('gs://my_bucket/x.tfrecord')`
Since `0.4.0` is out with the `export()` function, I tried it. But it seems TFRecords cannot be directly written to GCP bucket.
`dataset.export('local.tfrecord')` works fine,
but `dataset.... | closed | https://github.com/huggingface/datasets/issues/478 | 2020-08-05T01:08:32 | 2020-08-05T01:21:37 | 2020-08-05T01:21:36 | {
"login": "astariul",
"id": 43774355,
"type": "User"
} | [] | false | [] |
673,142,143 | 477 | Overview.ipynb throws exceptions with nlp 0.4.0 | with nlp 0.4.0, the TensorFlow example in Overview.ipynb throws the following exceptions:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-48907f2ad433> in <module>
----> 1 features = {x: trai... | closed | https://github.com/huggingface/datasets/issues/477 | 2020-08-04T23:18:15 | 2021-08-03T06:02:15 | 2021-08-03T06:02:15 | {
"login": "mandy-li",
"id": 23109219,
"type": "User"
} | [] | false | [] |
672,991,854 | 476 | CheckList | Sorry for the large pull request.
- Added checklists as datasets. I can't run `test_load_real_dataset` (see #474), but I can load the datasets successfully as shown in the example notebook
- Added a checklist wrapper | closed | https://github.com/huggingface/datasets/pull/476 | 2020-08-04T18:32:05 | 2022-10-03T09:43:37 | 2022-10-03T09:43:37 | {
"login": "marcotcr",
"id": 698010,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
672,884,595 | 475 | misc. bugs and quality of life | A few misc. bugs and QOL improvements that I've come across in using the library. Let me know if you don't like any of them and I can adjust/remove them.
1. Printing datasets without a description field throws an error when formatting the `single_line_description`. This fixes that, and also adds some formatting to t... | closed | https://github.com/huggingface/datasets/pull/475 | 2020-08-04T15:32:29 | 2020-08-17T21:14:08 | 2020-08-17T21:14:07 | {
"login": "joeddav",
"id": 9353833,
"type": "User"
} | [] | true | [] |
672,407,330 | 474 | test_load_real_dataset when config has BUILDER_CONFIGS that matter | It a dataset has custom `BUILDER_CONFIGS` with non-keyword arguments (or keyword arguments with non default values), the config is not loaded during the test and causes an error.
I think the problem is that `test_load_real_dataset` calls `load_dataset` with `data_dir=temp_data_dir` ([here](https://github.com/huggingfa... | closed | https://github.com/huggingface/datasets/issues/474 | 2020-08-03T23:46:36 | 2020-09-07T14:53:13 | 2020-09-07T14:53:13 | {
"login": "marcotcr",
"id": 698010,
"type": "User"
} | [] | false | [] |
672,007,247 | 473 | add DoQA dataset (ACL 2020) | add DoQA dataset (ACL 2020) http://ixa.eus/node/12931 | closed | https://github.com/huggingface/datasets/pull/473 | 2020-08-03T11:26:52 | 2020-09-10T17:19:11 | 2020-09-03T11:44:15 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
672,000,745 | 472 | add crd3 dataset | opening new PR for CRD3 dataset (ACL2020) to fix the circle CI problems | closed | https://github.com/huggingface/datasets/pull/472 | 2020-08-03T11:15:02 | 2020-08-03T11:22:10 | 2020-08-03T11:22:09 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
671,996,423 | 471 | add reuters21578 dataset | new PR to add the reuters21578 dataset and fix the circle CI problems.
Fix partially:
- #353
Subsequent PR after:
- #449 | closed | https://github.com/huggingface/datasets/pull/471 | 2020-08-03T11:07:14 | 2022-08-04T08:39:11 | 2020-09-03T09:58:50 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
671,952,276 | 470 | Adding IWSLT 2017 dataset. | Created a [IWSLT 2017](https://sites.google.com/site/iwsltevaluation2017/TED-tasks) dataset script for the *multilingual data*.
```
Bilingual data: {Arabic, German, French, Japanese, Korean, Chinese} <-> English
Multilingual data: German, English, Italian, Dutch, Romanian. (Any pair)
```
I'm unsure how to h... | closed | https://github.com/huggingface/datasets/pull/470 | 2020-08-03T09:52:39 | 2020-09-07T12:33:30 | 2020-09-07T12:33:30 | {
"login": "Narsil",
"id": 204321,
"type": "User"
} | [] | true | [] |
671,876,963 | 469 | invalid data type 'str' at _convert_outputs in arrow_dataset.py | I'm trying to build a multi-label text classifier model using the Transformers lib.
I'm using Transformers NLP to load the data set, while calling trainer.train() method. It throws the following error
File "C:\***\arrow_dataset.py", line 343, in _convert_outputs
v = command(v)
TypeError: new(): invalid data type ... | closed | https://github.com/huggingface/datasets/issues/469 | 2020-08-03T07:48:29 | 2023-07-20T15:54:17 | 2023-07-20T15:54:17 | {
"login": "Murgates",
"id": 30617486,
"type": "User"
} | [] | false | [] |
671,622,441 | 468 | UnicodeDecodeError while loading PAN-X task of XTREME dataset | Hi 🤗 team!
## Description of the problem
I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset the XTREME dataset:
```
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-inp... | closed | https://github.com/huggingface/datasets/issues/468 | 2020-08-02T14:05:10 | 2020-08-20T08:16:08 | 2020-08-20T08:16:08 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | false | [] |
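Errors like the `UnicodeDecodeError` above usually come from a stray non-UTF-8 byte in a downloaded data file. A minimal sketch of a lenient decoding fallback (hypothetical helper, not part of nlp itself — a real loading script may instead need the file's true encoding):

```python
from pathlib import Path

def read_text_lenient(path):
    """Decode a data file as UTF-8, falling back to a lossy decode.

    Invalid byte sequences are replaced with U+FFFD instead of raising
    UnicodeDecodeError, so loading can proceed past a corrupt byte.
    """
    raw = Path(path).read_bytes()
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        return raw.decode("utf-8", errors="replace")
```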
671,580,010 | 467 | DOCS: Fix typo | Fix typo from dictionnary -> dictionary | closed | https://github.com/huggingface/datasets/pull/467 | 2020-08-02T08:59:37 | 2020-08-02T13:52:27 | 2020-08-02T09:18:54 | {
"login": "bharatr21",
"id": 13381361,
"type": "User"
} | [] | true | [] |
670,766,891 | 466 | [METRICS] Various improvements on metrics | - Disallow the use of positional arguments to avoid `predictions` vs `references` mistakes
- Allow to directly feed numpy/pytorch/tensorflow/pandas objects in metrics | closed | https://github.com/huggingface/datasets/pull/466 | 2020-08-01T11:03:45 | 2020-08-17T15:15:00 | 2020-08-17T15:14:59 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [] | true | [] |
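The positional-argument guard described in this PR can be illustrated with Python's keyword-only parameter syntax (a sketch only — the real metrics API and function names differ):

```python
def compute_accuracy(*, predictions, references):
    # The bare `*` makes both arguments keyword-only, so callers can no
    # longer silently swap `predictions` and `references` by position --
    # exactly the class of mistake the PR guards against.
    if len(predictions) != len(references):
        raise ValueError("predictions and references must have the same length")
    return sum(p == r for p, r in zip(predictions, references)) / len(references)
```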
669,889,779 | 465 | Keep features after transform | When applying a transform like `map`, some features were lost (and inferred features were used).
This was the case for ClassLabel, Translation, etc.
To fix that, I did some modifications in the `ArrowWriter`:
- added the `update_features` parameter. When it's `True`, then the features specified by the user (if any... | closed | https://github.com/huggingface/datasets/pull/465 | 2020-07-31T14:43:21 | 2020-07-31T18:27:33 | 2020-07-31T18:27:32 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
669,767,381 | 464 | Add rename, remove and cast in-place operations | Add a bunch of in-place operation leveraging the Arrow back-end to rename and remove columns and cast to new features without using the more expensive `map` method.
These methods are added to `Dataset` as well as `DatasetDict`.
Added tests for these new methods and add the methods to the doc.
Naming follows th... | closed | https://github.com/huggingface/datasets/pull/464 | 2020-07-31T12:30:21 | 2020-07-31T15:50:02 | 2020-07-31T15:50:00 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [] | true | [] |
669,735,455 | 463 | Add dataset/mlsum | New pull request that should correct the previous errors.
The load_real_data test still fails because it is looking for a default dataset URL that does not exist; this does not happen when loading the dataset with load_dataset | closed | https://github.com/huggingface/datasets/pull/463 | 2020-07-31T11:50:52 | 2020-08-24T14:54:42 | 2020-08-24T14:54:42 | {
"login": "RachelKer",
"id": 36986299,
"type": "User"
} | [] | true | [] |
669,715,547 | 462 | add DoQA (ACL 2020) dataset | adds DoQA (ACL 2020) dataset | closed | https://github.com/huggingface/datasets/pull/462 | 2020-07-31T11:25:56 | 2023-09-24T09:48:42 | 2020-08-03T11:28:27 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
669,703,508 | 461 | Doqa | add DoQA (ACL 2020) dataset | closed | https://github.com/huggingface/datasets/pull/461 | 2020-07-31T11:11:12 | 2023-09-24T09:48:40 | 2020-07-31T11:13:15 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
669,585,256 | 460 | Fix KeyboardInterrupt in map and bad indices in select | If you interrupted a map function while it was writing, the cached file was not discarded.
Therefore the next time you called map, it was loading an incomplete arrow file.
We had the same issue with select if there was a bad index at one point.
To fix that I used temporary files that are renamed once everything... | closed | https://github.com/huggingface/datasets/pull/460 | 2020-07-31T08:57:15 | 2020-07-31T11:32:19 | 2020-07-31T11:32:18 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
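The temp-file-then-rename strategy this PR describes can be sketched as follows (illustrative names; not the actual nlp writer code):

```python
import os
import tempfile

def atomic_write(path, data: bytes):
    """Write `data` to `path` via a temporary file plus rename.

    If the process is interrupted mid-write (e.g. KeyboardInterrupt during
    `map`), only the temp file is lost and no half-written cache file ever
    appears at `path`, so the next run cannot load an incomplete file.
    """
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp, path)  # rename only once the write fully completed
    except BaseException:
        os.remove(tmp)  # discard the incomplete temp file
        raise
```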
669,545,437 | 459 | [Breaking] Update Dataset and DatasetDict API | This PR contains a few breaking changes so it's probably good to keep it for the next (major) release:
- rename the `flatten`, `drop` and `dictionary_encode_column` methods to `flatten_`, `drop_` and `dictionary_encode_column_` to indicate that these methods have in-place effects as discussed in #166. From now on we s... | closed | https://github.com/huggingface/datasets/pull/459 | 2020-07-31T08:11:33 | 2020-08-26T08:28:36 | 2020-08-26T08:28:35 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [] | true | [] |
668,972,666 | 458 | Install CoVal metric from github | Changed the import statements in `coval.py` to direct the user to install the original package from github if it's not already installed (the warning will only display properly after merging [PR455](https://github.com/huggingface/nlp/pull/455))
Also changed the function call to use named rather than positional argum... | closed | https://github.com/huggingface/datasets/pull/458 | 2020-07-30T16:59:25 | 2020-07-31T13:56:33 | 2020-07-31T13:56:33 | {
"login": "yjernite",
"id": 10469459,
"type": "User"
} | [] | true | [] |
668,898,386 | 457 | add set_format to DatasetDict + tests | Add the `set_format` and `formated_as` and `reset_format` to `DatasetDict`.
Add tests to these for `Dataset` and `DatasetDict`.
Fix some bugs uncovered by the tests for `pandas` formatting. | closed | https://github.com/huggingface/datasets/pull/457 | 2020-07-30T15:53:20 | 2020-07-30T17:34:36 | 2020-07-30T17:34:34 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [] | true | [] |
668,723,785 | 456 | add crd3(ACL 2020) dataset | This PR adds the **Critical Role Dungeons and Dragons Dataset** published at ACL 2020 | closed | https://github.com/huggingface/datasets/pull/456 | 2020-07-30T13:28:35 | 2023-09-24T09:48:47 | 2020-08-03T11:28:52 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
668,037,965 | 455 | Add bleurt | This PR adds the BLEURT metric to the library.
The BLEURT `Metric` downloads a TF checkpoint corresponding to its `config_name` at creation (in the `_info` function). Default is set to `bleurt-base-128`.
Note that the default in the original package is `bleurt-tiny-128`, but they throw a warning and recommend usi... | closed | https://github.com/huggingface/datasets/pull/455 | 2020-07-29T18:08:32 | 2020-07-31T13:56:14 | 2020-07-31T13:56:14 | {
"login": "yjernite",
"id": 10469459,
"type": "User"
} | [] | true | [] |
668,011,577 | 454 | Create SECURITY.md | closed | https://github.com/huggingface/datasets/pull/454 | 2020-07-29T17:23:34 | 2020-07-29T21:45:52 | 2020-07-29T21:45:52 | {
"login": "ChenZehong13",
"id": 56394989,
"type": "User"
} | [] | true | [] | |
667,728,247 | 453 | add builder tests | I added `as_dataset` and `download_and_prepare` to the tests | closed | https://github.com/huggingface/datasets/pull/453 | 2020-07-29T10:22:07 | 2020-07-29T11:14:06 | 2020-07-29T11:14:05 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
667,498,295 | 452 | Guardian authorship dataset | A new dataset: Guardian news articles for authorship attribution
**tests passed:**
python nlp-cli dummy_data datasets/guardian_authorship --save_infos --all_configs
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_guardian_authorship
**Tests failed:**
Real data:... | closed | https://github.com/huggingface/datasets/pull/452 | 2020-07-29T02:23:57 | 2020-08-20T15:09:57 | 2020-08-20T15:07:56 | {
"login": "malikaltakrori",
"id": 25109412,
"type": "User"
} | [] | true | [] |
667,210,468 | 451 | Fix csv/json/txt cache dir | The cache dir for csv/json/txt datasets was always the same. This is an issue because it should be different depending on the data files provided by the user.
To fix that, I added a line that uses the hash of the data files provided by the user to define the cache dir.
This should fix #444 | closed | https://github.com/huggingface/datasets/pull/451 | 2020-07-28T16:30:51 | 2020-07-29T13:57:23 | 2020-07-29T13:57:22 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
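The hashing idea this PR describes — mixing the user's data files into the cache directory name so that loading './a.csv' and then './b.csv' no longer resolves to the same cached Arrow file — can be sketched like this (illustrative helper; the real path layout in nlp differs):

```python
import hashlib

def cache_directory_name(data_files):
    """Derive a cache sub-directory name from the user's data files.

    Two different file lists hash to two different directories, so their
    cached Arrow files can no longer collide.
    """
    h = hashlib.sha256()
    for path in sorted(data_files):  # sort so ordering doesn't change the hash
        h.update(path.encode("utf-8"))
    return f"csv/default-{h.hexdigest()[:16]}"
```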
667,074,120 | 450 | add sogou_news | This PR adds the sogou news dataset
#353 | closed | https://github.com/huggingface/datasets/pull/450 | 2020-07-28T13:29:10 | 2020-07-29T13:30:18 | 2020-07-29T13:30:17 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
666,898,923 | 449 | add reuters21578 dataset | This PR adds the `Reuters_21578` dataset https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.html
#353
The dataset is a list of `.sgm` files, which are a bit different from XML files; indeed, `xml.etree` couldn't be used to read them. I treat them as text files (to avoid using an external library) and read ... | closed | https://github.com/huggingface/datasets/pull/449 | 2020-07-28T08:58:12 | 2023-09-24T09:49:28 | 2020-08-03T11:10:31 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
666,893,443 | 448 | add aws load metric test | Following issue #445
Added a test to recognize import errors of all metrics | closed | https://github.com/huggingface/datasets/pull/448 | 2020-07-28T08:50:22 | 2020-07-28T15:02:27 | 2020-07-28T15:02:27 | {
"login": "idoh",
"id": 5303103,
"type": "User"
} | [] | true | [] |
666,842,115 | 447 | [BugFix] fix wrong import of DEFAULT_TOKENIZER | Fixed the path to `DEFAULT_TOKENIZER`
#445 | closed | https://github.com/huggingface/datasets/pull/447 | 2020-07-28T07:41:10 | 2020-07-28T12:58:01 | 2020-07-28T12:52:05 | {
"login": "idoh",
"id": 5303103,
"type": "User"
} | [] | true | [] |
666,837,351 | 446 | [BugFix] fix wrong import of DEFAULT_TOKENIZER | Fixed the path to `DEFAULT_TOKENIZER`
#445 | closed | https://github.com/huggingface/datasets/pull/446 | 2020-07-28T07:32:47 | 2020-07-28T07:34:46 | 2020-07-28T07:33:59 | {
"login": "idoh",
"id": 5303103,
"type": "User"
} | [] | true | [] |
666,836,658 | 445 | DEFAULT_TOKENIZER import error in sacrebleu | Latest Version 0.3.0
When loading the metric "sacrebleu" there is an import error due to the wrong path

| closed | https://github.com/huggingface/datasets/issues/445 | 2020-07-28T07:31:30 | 2020-07-28T12:58:56 | 2020-07-28T12:58:56 | {
"login": "idoh",
"id": 5303103,
"type": "User"
} | [] | false | [] |
666,280,842 | 444 | Keep loading old file even I specify a new file in load_dataset | I loaded a file called 'a.csv' by
```
dataset = load_dataset('csv', data_file='./a.csv')
```
And after a while, I tried to load another csv called 'b.csv'
```
dataset = load_dataset('csv', data_file='./b.csv')
```
However, the new dataset still contains the old 'a.csv' data instead of loading the new csv file.
Even... | closed | https://github.com/huggingface/datasets/issues/444 | 2020-07-27T13:08:06 | 2020-07-29T13:57:22 | 2020-07-29T13:57:22 | {
"login": "joshhu",
"id": 10594453,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
666,246,716 | 443 | Cannot unpickle saved .pt dataset with torch.save()/load() | Saving a formatted torch dataset to file using `torch.save()`. Loading the same file fails during unpickling:
```python
>>> import torch
>>> import nlp
>>> squad = nlp.load_dataset("squad.py", split="train")
>>> squad
Dataset(features: {'source_text': Value(dtype='string', id=None), 'target_text': Value(dtype... | closed | https://github.com/huggingface/datasets/issues/443 | 2020-07-27T12:13:37 | 2020-07-27T13:05:11 | 2020-07-27T13:05:11 | {
"login": "vegarab",
"id": 24683907,
"type": "User"
} | [] | false | [] |
666,201,810 | 442 | [Suggestion] Glue Diagnostic Data with Labels | Hello! First of all, thanks for setting up this useful project!
I've just realised you provide the [Glue Diagnostics Data](https://huggingface.co/nlp/viewer/?dataset=glue&config=ax) without labels, indicating in the `GlueConfig` that you have only a test set.
Yet, the data with labels is available, too (see als... | open | https://github.com/huggingface/datasets/issues/442 | 2020-07-27T10:59:58 | 2020-08-24T15:13:20 | null | {
"login": "ggbetz",
"id": 3662782,
"type": "User"
} | [
{
"name": "Dataset discussion",
"color": "72f99f"
}
] | false | [] |
666,148,413 | 441 | Add features parameter in load dataset | Added `features` argument in `nlp.load_dataset`.
If they don't match the data type, it raises a `ValueError`.
It's a draft PR because #440 needs to be merged first. | closed | https://github.com/huggingface/datasets/pull/441 | 2020-07-27T09:50:01 | 2020-07-30T12:51:17 | 2020-07-30T12:51:16 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
666,116,823 | 440 | Fix user specified features in map | `.map` didn't keep the user specified features because of an issue in the writer.
The writer used to overwrite the user specified features with inferred features.
I also added tests to make sure it doesn't happen again. | closed | https://github.com/huggingface/datasets/pull/440 | 2020-07-27T09:04:26 | 2020-07-28T09:25:23 | 2020-07-28T09:25:22 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
665,964,673 | 439 | Issues: Adding a FAISS or Elastic Search index to a Dataset | It seems the DPRContextEncoder and DPRContextEncoderTokenizer cited [in this documentation](https://huggingface.co/nlp/faiss_and_ea.html) are not implemented? They did not work with the standard nlp installation. Also, I couldn't find or use them with the latest nlp install from GitHub in Colab. Is there any dependency on t... | closed | https://github.com/huggingface/datasets/issues/439 | 2020-07-27T04:25:17 | 2020-10-28T01:46:24 | 2020-10-28T01:46:24 | {
"login": "nsankar",
"id": 431890,
"type": "User"
} | [] | false | [] |
665,865,490 | 438 | New Datasets: IWSLT15+, ITTB | **Links:**
[iwslt](https://pytorchnlp.readthedocs.io/en/latest/_modules/torchnlp/datasets/iwslt.html)
Don't know if that link is up to date.
[ittb](http://www.cfilt.iitb.ac.in/iitb_parallel/)
**Motivation**: replicate mbart finetuning results (table below)
 dataset? | I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https://brat.nlplab.org/standoff.html), [dataset code](https://github.com/ArneBinder/nlp/blob/brat/datasets/brat/brat.py)). Now I wonder how to use that to create specific dataset instances. What's the recommended way to... | closed | https://github.com/huggingface/datasets/issues/433 | 2020-07-24T17:27:37 | 2022-10-04T17:59:34 | 2022-10-04T17:59:33 | {
"login": "ArneBinder",
"id": 3375489,
"type": "User"
} | [] | false | [] |
665,234,340 | 432 | Fix handling of config files while loading datasets from multiple processes | When loading shards on several processes, each process upon loading the dataset will overwrite dataset_infos.json in <package path>/datasets/<dataset name>/<hash>/dataset_infos.json. It does so every time, even when the target file already exists and is identical. Because multiple processes rewrite the same file in par... | closed | https://github.com/huggingface/datasets/pull/432 | 2020-07-24T15:10:57 | 2020-08-01T17:11:42 | 2020-07-30T08:25:28 | {
"login": "orsharir",
"id": 99543,
"type": "User"
} | [] | true | [] |
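The race-avoidance this PR describes — skipping the rewrite when dataset_infos.json is already identical, and otherwise replacing it atomically — can be sketched as follows (helper name is hypothetical; the real nlp code differs):

```python
import json
import os
import tempfile

def write_infos_if_changed(path, infos):
    """Rewrite an infos JSON file only when its content actually changed.

    Skipping identical rewrites means concurrent shard-loading processes
    stop clobbering each other, and the temp-file-plus-replace step means
    no reader ever observes a partially written file.
    """
    serialized = json.dumps(infos, sort_keys=True)
    if os.path.exists(path):
        with open(path, encoding="utf-8") as f:
            if f.read() == serialized:
                return False  # identical content: no rewrite needed
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
    with os.fdopen(fd, "w", encoding="utf-8") as f:
        f.write(serialized)
    os.replace(tmp, path)  # atomic swap into place
    return True
```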
665,044,416 | 431 | Specify split post processing + Add post processing resources downloading | Previously if you tried to do
```python
from nlp import load_dataset
wiki = load_dataset("wiki_dpr", "psgs_w100_with_nq_embeddings", split="train[:100]", with_index=True)
```
Then you'd get an error `Index size should match Dataset size...`
This was because it was trying to use the full index (21M elements).
... | closed | https://github.com/huggingface/datasets/pull/431 | 2020-07-24T09:29:19 | 2020-07-31T09:05:04 | 2020-07-31T09:05:03 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
664,583,837 | 430 | add DatasetDict | ## Add DatasetDict
### Overview
When you call `load_dataset` it can return a dictionary of datasets if there are several splits (train/test for example).
If you wanted to apply dataset transforms you had to iterate over each split and apply the transform.
Instead of returning a dict, it now returns a `nlp.Dat... | closed | https://github.com/huggingface/datasets/pull/430 | 2020-07-23T15:43:49 | 2020-08-04T01:01:53 | 2020-07-29T09:06:22 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
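The transform-forwarding idea behind this PR can be sketched with a toy dict subclass (a minimal illustration only — the real `nlp.DatasetDict` wraps Arrow-backed `Dataset` objects, not plain lists):

```python
class DatasetDict(dict):
    """Toy dict of split-name -> examples that forwards `map` to every split,
    so users no longer have to iterate over train/test splits by hand."""

    def map(self, function):
        # Apply the transform to each split and return a new DatasetDict.
        return DatasetDict(
            {split: [function(x) for x in data] for split, data in self.items()}
        )
```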